November 24, 2009

# MIT Linear Algebra, Lecture 1: The Geometry of Linear Equations

This is going to be my summary of the freely available^{*} Linear Algebra course from MIT. I watched the lectures of this course in the summer of 2008. This was not the first time I learned linear algebra. I had already had two terms of linear algebra when I studied physics back in 2004. But it was not enough for a curious mind like mine. I wanted to see how it was taught at the world's best university.

The reason I am posting these mathematics lectures on my programming blog is that I believe that if you want to be a great programmer and work on the most exciting and world-changing problems, then you have to know linear algebra. Larry and Sergey wouldn't have created Google if they hadn't known linear algebra. Take a look at the publication "Linear Algebra Behind Google" if you don't believe me. Linear algebra also has hundreds of other computational applications; to name a few: data coding and compression, pattern recognition, machine learning, image processing, and computer simulations.

The course contains 35 lectures. Each lecture is 40 minutes long, but you can speed them up and watch one in 20 minutes. The course is taught by none other than Gilbert Strang. He's the world's leading expert in linear algebra and its applications and has helped develop the Matlab mathematics software. The course is based on working through a lot of examples; there are almost no theorems or proofs.

The textbook used in this course is Introduction to Linear Algebra by Gilbert Strang:

I'll write the summary in the same style as I did my summary of MIT Introduction to Algorithms. I'll split the whole course into 30 or so posts; each post will contain one or more lectures together with my comments, my scanned notes, embedded video lectures, and a timeline of the topics in the lecture. But, so as not to flood my blog with just mathematics, I will write one or two programming posts in between. You should subscribe to my posts through RSS here.

^{*} The whole course is available at MIT's Open Course Ware: Course 18.06, Linear Algebra.

I'll review the first lecture today.

## Lecture 1: The Geometry of Linear Equations

The first lecture starts with Gilbert Strang stating the **fundamental problem of linear algebra**, which is to solve systems of linear equations. He proceeds with an example.

The example is a system of two equations in two unknowns:

2x - y = 0
-x + 2y = 3

There are three ways to look at this system. The first is to look at it a row at a time, the second is to look at it a column at a time, and the third is to use the matrix form.

If we look at this equation a row at a time, we have two independent equations 2x - y = 0 and -x + 2y = 3. These are both line equations. If we plot them we get the row picture:

The row picture shows the two lines meeting at a single point (x=1, y=2). That's the solution of the system of equations. If the lines didn't intersect, there would have been no solution.

Now let's look at the columns. The column at the x's is (2, -1), the column at the y's is (-1, 2), and the right-hand side column is (0, 3). We can write the system down as follows:

x·(2, -1) + y·(-1, 2) = (0, 3)

This is a linear combination of columns. What this tells us is that we have to combine the right amounts of the vector (2, -1) and the vector (-1, 2) to produce the vector (0, 3). We can plot the vectors in the column picture:

If we take one green x vector and two blue y vectors (in gray), we get the red vector. Therefore the solution is again (x=1, y=2).
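This combination is easy to check numerically. Here is a tiny Python sketch of mine (my own illustration, not from the lecture):

```python
# Check the column picture: one of the x column plus two of the
# y column should give the right-hand side vector.
col_x = (2, -1)   # coefficients of x
col_y = (-1, 2)   # coefficients of y
b = (0, 3)        # right-hand side

combination = tuple(1*cx + 2*cy for cx, cy in zip(col_x, col_y))
print(combination == b)  # True
```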

The third way is to look at this system entirely through matrices and use the matrix form of the equations. The matrix form in general is A**x** = **b**, where A is the coefficient matrix, **x** is the unknown vector and **b** is the right-hand side vector.

How to solve equations written in matrix form will be discussed in the next lectures, but I can tell you beforehand that the method is called Gaussian elimination with back substitution.
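To get a small taste of what's coming, here is a minimal sketch of elimination with back substitution for the 2x2 example, written by me in Python (the lectures themselves use no code):

```python
# A minimal sketch of Gaussian elimination with back substitution
# for a 2x2 system A·(x, y) = b.
def solve_2x2(A, b):
    # Forward elimination: subtract a multiple of row 0 from row 1
    # to zero out the first coefficient of row 1.
    m = A[1][0] / A[0][0]
    A[1][1] -= m * A[0][1]
    b[1] -= m * b[0]
    # Back substitution: solve for y first, then plug it in to get x.
    y = b[1] / A[1][1]
    x = (b[0] - A[0][1] * y) / A[0][0]
    return x, y

print(solve_2x2([[2.0, -1.0], [-1.0, 2.0]], [0.0, 3.0]))  # (1.0, 2.0)
```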

For this system of two equations in two unknowns, the matrix equation Ax=b looks like this:

The next example in the lecture is a system of three equations in three unknowns:

2x - y = 0
-x + 2y - z = -1
-3y + 4z = 4

We can no longer plot it in two dimensions because there are three unknowns; this is going to be a 3D plot. Since the equations are linear in the unknowns x, y, z, we are going to get three planes intersecting at a single point (if there is a solution).

Here is the row picture of 3 equations in 3 unknowns:

The red is the 2x - y = 0 plane. The green is the -x + 2y - z = -1 plane, and the blue is the -3y + 4z = 4 plane.

Notice how difficult it is to spot the point of intersection? Almost impossible! And all this from going just one dimension higher. Imagine what happens if we go to 4 or more dimensions. (The intersection is at (x=0, y=0, z=1) and I marked it with a small white dot.)

The column picture is almost as difficult to understand as the row picture. Here it is for this system of 3 equations in 3 unknowns:

The first column (2, -1, 0) is red, the second column (-1, 2, -3) is green, the third column (0, -1, 4) is blue, and the right-hand side (0, -1, 4) is gray.

Again, it's pretty hard to visualize how to manipulate these vectors to produce the solution vector (0, -1, 4). But we are lucky in this particular example. Notice that if we take none of the red vector, none of the green vector, and one of the blue vector, we get the gray vector! That is, we didn't even need the red and green vectors!
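We can verify that (x=0, y=0, z=1) really solves the system by multiplying each row of the coefficient matrix by it. A small Python sketch of mine:

```python
# Verify that (x=0, y=0, z=1) solves the 3x3 system: each row of the
# coefficient matrix dotted with the solution must equal b.
A = [[2, -1, 0],
     [-1, 2, -1],
     [0, -3, 4]]
b = [0, -1, 4]
solution = [0, 0, 1]

Ax = [sum(row[j] * solution[j] for j in range(3)) for row in A]
print(Ax == b)  # True
```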

This is all still tricky, and it gets much more complicated as we go to more equations with more unknowns. Therefore we need better methods for solving systems of equations than drawing row or column pictures.

The lecture ends with several questions:

- Can A**x** = **b** be solved for any **b**?
- When do the linear combinations of columns fill the whole space?
- What's the method to solve 9 equations with 9 unknowns?

The examples I analyzed here are also carefully explained in the lecture; you're welcome to watch it:

Direct link: http://www.youtube.com/watch?v=ZK3O402wf1c

Topics covered in lecture one:

- [00:20] Information on textbook and course website.
- [01:05] Fundamental problem of linear algebra: solve systems of linear equations.
- [01:15] Nice case: n equations, n unknowns.
- [02:20] Solving 2 equations with 2 unknowns.
- [02:54] Coefficient matrix.
- [03:35] Matrix form of the equations. Ax=b.
- [04:20] Row picture of the equations - lines.
- [08:05] Solution (x=1, y=2) from the row picture.
- [08:40] Column picture of the equations - 2 dimensional vectors.
- [09:50] Linear combination of columns.
- [12:00] Solution from the column picture x=1, y=2.
- [12:05] Plotting the columns to produce the solution vector.
- [15:40] Solving 3 equations with 3 unknowns.
- [16:46] Matrix form for this 3x3 equation.
- [17:30] Row picture - planes.
- [22:00] Column picture - 3 dim vectors.
- [24:00] Solution (x=0, y=0, z=1) from the column picture by noticing that z vector is equal to b vector.
- [28:10] Can Ax=b be solved for every b?
- [28:50] Do the linear combinations of columns fill the 3d space?
- [32:30] What if there are 9 equations and 9 unknowns?
- [36:00] How to multiply a matrix by a vector? Two ways.
- [36:40] Ax is a linear combination of columns of A.
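The two ways to multiply a matrix by a vector from the end of the lecture are easy to show side by side. Here is a short Python sketch of mine using the 2x2 example:

```python
A = [[2, -1],
     [-1, 2]]
x = [1, 2]

# Way 1: each entry of Ax is the dot product of a row of A with x.
by_rows = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

# Way 2: Ax is a linear combination of the columns of A, weighted by x.
by_cols = [x[0] * A[i][0] + x[1] * A[i][1] for i in range(2)]

print(by_rows, by_cols)  # [0, 3] [0, 3] -- both give the vector b
```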

Here are my notes of lecture one. Sorry about the handwriting. It seems that I hadn't written much by hand at that time and my handwriting had gotten really bad. But it gets better with each new lecture. By lectures 5 and 6 it will be as good as it gets.

Have fun with this lecture! The next post is going to be about a systematic way to find a solution to a system of equations called elimination.

PS. This course is taught from the Introduction to Linear Algebra textbook. Get it here:

November 17, 2009

# Feedburner Graphs Suck, or How to Generate Nice Graphs for Feedburner

Feedburner used to have a really nice RSS subscriber growth graph. I loved it. But then one day they were acquired by Google, and they changed their nice chart to an interactive Flash thing that was slow and looked just awful.

Here is how awesome the graph used to look:

This graph was taken from my "one year of blogging" post.

And how it looks now:

This graph was taken from Feedburner stats dashboard today.

Choose "Show stats for" -> "all time" to generate this graph.

This critter takes 35MB of RAM, responds in 4 seconds, and worst of all, looks very, very ugly. I don't know why anyone would replace a nice 6.5KB image with a 35MB monster.

I don't want to see this ugliness anymore, so I'll create a Perl program that generates the awesome graph they used to have. I'll describe my thought process of creating this program in this post. Here it goes.

First I need to get the data somehow. I remember they had some kind of an API to get the data. A quick Google search for "feedburner api" returns this link: Feedburner Awareness API. Ya, that's it. This is the documentation of their API.

Accessing the following URL gets me subscriber count data from July 1, 2007 to November 17, 2009:

http://feedburner.google.com/api/awareness/1.0/GetFeedData?uri=catonmat&dates=2007-07-01,2009-11-17

Excellent, now I can write the Perl program. It will need to parse the XML data, draw the chart and save the image to a file.

Hmm, how should I invoke my program? Ok, here is how:

```
$ generate_feedburner_graph.pl <feed name> [<start date> [<end date>]]

# if <end date> is not specified, it's set to today.
# if <start date> is not specified, it's set to the first
# day when the feed had at least one subscriber.
```

This program will use LibGD to generate the image. It will save it to a file called **feed_name-start_date-end_date.png**.

Now I need to find the colors used in the awesome feedburner graph. For this purpose I'll use ColorZilla Firefox plugin. The green one is #95CF9C, the background is #F2F8FC, the light grid is #CCCECE, the one that separates the green area from background is #687279, and the x-y axis are #808080.

Alright, now I have everything I need to create the program.

... Several hours later ...

Done!

One thing I forgot to mention is that you will need the DejaVuSans TrueType font to run this program (it uses it to draw text). Download it and put DejaVuSans.ttf in the same directory as the program.

```perl
#!/usr/bin/perl
#
# Feedburner graph generator
# Version 1.0
#

use warnings;
use strict;

use WWW::Mechanize;
use List::Util 'max';
use XML::Simple;
use POSIX;
use GD;

# This is the API URL that returns XML data with feed statistics by day.
my $feedburner_url = "http://feedburner.google.com/api/awareness/1.0/GetFeedData?uri=%s&dates=%s,%s";

# This function prints the usage and terminates the program.
sub usage {
    printf "Usage: %s <feed name> [<start date> [<end date>]]\n", $0;
    print "Parameters:\n";
    print "<feed name>  - your feed name, for example 'catonmat'\n";
    print "<start date> - start date (YYYY-MM-DD)\n";
    print "<end date>   - end date (YYYY-MM-DD), today if not specified\n";
    exit(1);
}

# This function checks if DejaVuSans font is present, if not
# it prints the instructions on how to download and terminates the program.
sub check_dejavu_sans {
    unless (-e 'DejaVuSans.ttf') {
        print "Please download DejaVu fonts and put DejaVuSans.ttf file in\n";
        print "the same directory as this program.\n";
        print "http://dejavu-fonts.org/wiki/index.php?title=Download\n";
        exit(1);
    }
}

# Given year, month, day from `struct tm` (man 3 mktime),
# it constructs a YYYY-MM-DD string.
sub format_date {
    my ($y, $m, $d) = @_;
    return sprintf("%04d-%02d-%02d", $y+1900, $m+1, $d);
}

# Given the `struct tm` (man 3 mktime) as a 9-list (perldoc -f localtime),
# it constructs a YYYY-MM-DD string.
sub yyyymmdd_from_9list {
    my ($y, $m, $d) = @_[5,4,3];
    return format_date $y, $m, $d;
}

# This function returns a YYYY-MM-DD string for today.
sub today {
    return yyyymmdd_from_9list localtime
}

# This function constructs the 9-list (perldoc -f localtime) for a
# date that was $months_ago months ago.
sub months_ago {
    my $months_ago = shift;
    my @date = @_;
    $date[4] -= $months_ago;
    return localtime mktime @date;
}

# Given feed data from feedburner's api (array of hashrefs), it finds
# the first date that had non-zero circulation.
# If no such date exists, it returns undef.
sub find_first_nonzero {
    my @feed_data = @_;
    return if $feed_data[0]->{circulation} != 0;
    my $prev_item;
    for my $item (@feed_data) {
        return $prev_item if $item->{circulation};
        $prev_item = $item;
    }
    return
}

# Given feed's name, this function finds the first date the
# feed had some subscribers, i.e., feed's start date.
sub find_start_date {
    my $feed = shift;
    print "Finding feed's start date...\n";
    my @ago = months_ago 6, localtime;
    my $end_date = today();
    while (1) {
        my $start_date = format_date @ago[5,4,3];
        print "Trying $start_date as start date...\n";
        my @feed_data = get_feed_data($feed, $start_date, $end_date);
        my $non_zero = find_first_nonzero(@feed_data);
        if ($non_zero) {
            print "Found $non_zero->{date} as start date!\n";
            return $non_zero->{date};
        }
        $end_date = yyyymmdd_from_9list @ago;
        @ago = months_ago 6, @ago;
    }
}

# This function returns an array of hashrefs of feeds data.
# Each hash contains 'reach', 'hits', 'date', and 'circulation' keys.
sub get_feed_data {
    my $raw_feed_data = get_raw_feed_data(@_);
    my $feed_data = XML::Simple->new->XMLin($raw_feed_data);
    if ($feed_data->{stat} ne "ok") {
        die $feed_data->{err}{msg}
    }
    return @{$feed_data->{'feed'}{'entry'}};
}

# This function formats the $feedburner_url and uses WWW::Mechanize
# to get the feed data via feedburner's API.
sub get_raw_feed_data {
    my ($feed, $start_date, $end_date) = @_;
    my $url = sprintf($feedburner_url, $feed, $start_date, $end_date);
    return WWW::Mechanize->new(agent => 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5')->get($url)->content;
}

# This function drops feed items when they can't fit in graph's width.
sub drop_data {
    my ($width, @data) = @_;
    my $len = $#data;
    my $delta = @data - $width;
    my @drop = map { int($len / $delta * $_) } 1..$delta;
    splice @data, $_, 1 for reverse @drop;
    return @data;
}

# This function duplicates feed items when there are not enough items
# to fill the whole graph.
sub dupe_data {
    my ($width, @data) = @_;
    my $len = $#data;
    my $delta = $width - @data;
    my @dupe = map { int($len / $delta * $_) } 1..$delta;
    splice @data, $_, 0, $data[$_] for reverse @dupe;
    return @data;
}

# This function draws the outline of the graph box where the green
# lines are drawn.
sub draw_outline {
    my ($gd, $grid, $xy, $bg) = @_;
    $gd->rectangle(40, 4, 482, 100, $grid);
    $gd->filledRectangle(41, 5, 481, 99, $bg);
    $gd->line(40, 4, 40, 100, $xy);
    $gd->line(38, 100, 482, 100, $xy);
}

# This function draws the grid lines.
sub draw_grid {
    my ($gd, $xy, $grid) = @_;
    # horizontal
    $gd->line(41, 26, 482, 26, $grid);
    $gd->line(38, 26, 40, 26, $xy);
    $gd->line(41, 63, 482, 63, $grid);
    $gd->line(38, 63, 40, 63, $xy);
    # vertical
    for (my $x = 77; $x <= 442; $x += 73) {
        $gd->line($x, 4, $x, 99, $grid);
        $gd->line($x, 100, $x, 102, $xy);
    }
}

# This function saves the $gd image to a file named
# "feed_name-start_date-end_date.png"
sub save_image {
    my ($gd, $feed, $start_date, $end_date, @data) = @_;
    my $filename = "$feed-$start_date-$end_date.png";
    $filename =~ s|/|_|g;
    open my $fh, '>', $filename or die $!;
    print $fh $gd->png;
    close $fh;
    print "Done. Image written to $filename\n";
}

# This function draws the date thingies on the x axis.
sub draw_date {
    my ($gd, $item, $text_color, $x) = @_;
    my @mons = qw/Nul Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec/;
    my ($y, $m, $d) = $item->{date} =~ /(\d{4})-(\d{2})-(\d{2})/;
    $m = $mons[$m];
    my $text1 = sprintf("%s-", $m);
    my $text2 = sprintf("%s-%d", $m, $y);
    my @bounds = GD::Image->stringTTF($text_color, './DejaVuSans.ttf', 7, 0, 0, 0, $text1);
    my $offset = $bounds[4];
    $gd->stringTTF($text_color, './DejaVuSans.ttf', 7, 0, $x-$offset+2, 114, $text2);
}

# This function draws the feed usage numbers on the y axis.
sub draw_count {
    my ($gd, $count, $text_color, $y) = @_;
    my $text = int $count;
    my @bounds = GD::Image->stringTTF($text_color, './DejaVuSans.ttf', 7, 0, 0, 0, $text);
    my $width = $bounds[4] - $bounds[6];
    $gd->stringTTF($text_color, './DejaVuSans.ttf', 7, 0, 34-$width, $y+4, $text);
}

# This function creates the GD image and draws everything.
sub draw_feedburner_image {
    my ($feed, $start_date, $end_date, @data) = @_;
    print "Creating the awesome feedburner image.\n";

    my $gd = GD::Image->new(490, 120, 1);
    my $white     = $gd->colorAllocate(0xff, 0xff, 0xff);
    my $green     = $gd->colorAllocate(0x95, 0xcf, 0x9c);
    my $bg        = $gd->colorAllocate(0xf2, 0xf8, 0xfc);
    my $grid      = $gd->colorAllocate(0xcc, 0xce, 0xce);
    my $xy        = $gd->colorAllocate(0x80, 0x80, 0x80);
    my $alphagrid = $gd->colorAllocateAlpha(0xcc, 0xce, 0xce, 0x30);
    my $border    = $gd->colorAllocate(0x68, 0x72, 0x79);
    my $text      = $gd->colorAllocate(0, 0, 0);

    $gd->alphaBlending(1);
    $gd->filledRectangle(0, 0, 489, 119, $white);
    $gd->setAntiAliased($border);

    draw_outline($gd, $grid, $xy, $bg);

    my $t_height = 90;
    my $t_width = 441;
    my $max_circulation = max map { $_->{circulation} } @data;
    my $compress_factor = @data/$t_width;

    if ($compress_factor > 1) {
        @data = drop_data($t_width, @data);
    }
    elsif ($compress_factor < 1) {
        @data = dupe_data($t_width, @data);
    }

    my ($prev_x1, $prev_y1);
    my $x = 41;
    my %x_markers = (77 => 1, 150 => 1, 223 => 1, 296 => 1, 369 => 1, 442 => 1);
    for my $item (@data) {
        my $height = int($item->{circulation}/$max_circulation * $t_height);
        my ($x1, $y1, $x2, $y2) = ($x, 99, $x, 99-$height);
        $gd->line($x1, $y1, $x2, $y2, $green);
        if ($prev_x1) {
            $gd->line($prev_x1, $prev_y1, $x2, $y2, gdAntiAliased);
        }
        ($prev_x1, $prev_y1) = ($x1, $y2);
        if (exists $x_markers{$x}) {
            draw_date($gd, $item, $text, $x)
        }
        $x++;
    }

    draw_grid($gd, $xy, $alphagrid);
    draw_count($gd, 0, $text, 100);
    draw_count($gd, $max_circulation * 74/90, $text, 26);
    draw_count($gd, $max_circulation * 37/90, $text, 63);

    save_image($gd, $feed, $start_date, $end_date);
}

# The main function, does everything.
sub main {
    check_dejavu_sans;
    my $feed = shift || usage();
    my $start_date = shift || find_start_date($feed);
    my $end_date = shift || today();
    unless ($start_date =~ /^\d{4}-\d{2}-\d{2}$/) {
        die "Invalid start date. Format: YYYY-MM-DD."
    }
    unless ($end_date =~ /^\d{4}-\d{2}-\d{2}$/) {
        die "Invalid end date. Format: YYYY-MM-DD."
    }
    print "Getting feed data for $feed from $start_date to $end_date\n";
    my @feed_data = get_feed_data($feed, $start_date, $end_date);
    draw_feedburner_image($feed, $start_date, $end_date, @feed_data);
}

main @ARGV;
```

Download: generate_feedburner_graph.perl

Let's test run it.

```
$ generate_feedburner_graph.pl catonmat
Finding feed's start date...
Trying 2009-05-17 as start date...
Trying 2008-11-17 as start date...
Trying 2008-05-17 as start date...
Trying 2007-11-17 as start date...
Trying 2007-05-17 as start date...
Found 2007-07-15 as start date!
Getting feed data for catonmat from 2007-07-15 to 2009-11-17
Creating the awesome feedburner image.
Done. Image written to catonmat-2007-07-15-2009-11-17.png
```

Here is the result:

catonmat.net feed statistics from 2007-07-15 to 2009-11-17.

This looks divine. I love it!!!

As I was writing this I had the coolest idea to make a set of tools for probloggers. I added this idea to my idea list and will try to make it happen. This tool could be the first in a problogger tool suite!

Download "**generate_feedburner_graph.pl**":

Download: generate_feedburner_graph.perl


Download url: http://www.catonmat.net/download/generate_feedburner_graph.perl

And finally, help me reach 10,000 subscribers! If you haven't yet subscribed, subscribe to my blog!

November 11, 2009

# Summary of all the MIT Introduction to Algorithms lectures

As you all may know, I watched and posted my lecture notes of the whole MIT Introduction to Algorithms course. In this post I want to summarize all the topics that were covered in the lectures and point out some of the most interesting things in them.

Actually, before I wrote this article, I had started writing an article called "*The coolest things that I learned from MIT's Introduction to Algorithms*", but I quickly realized that what I was doing was listing the topics in each article and not really pointing out *the coolest things*. Therefore I decided to write a summary article first (I had promised to do so), and only then write an article on the most exciting topics.

Speaking of the summary: I watched a total of 23 lectures, which resulted in 14 blog posts. It took nearly a year to publish them here. The first blog post in this series was written in August 2008, and the last in July 2009. Here is a list of all the posts:

- Lectures 1 and 2: Analysis of Algorithms
- Lecture 3: Divide and Conquer
- Lectures 4 and 5: Sorting
- Lecture 6: Order Statistics
- Lectures 7 and 8: Hashing
- Lectures 9 and 10: Search Trees
- Lecture 11: Augmenting Data Structures
- Lecture 12: Skip Lists
- Lectures 13 and 14: Amortized Analysis and Self-Organizing Lists
- Lecture 15: Dynamic Programming
- Lecture 16: Greedy Algorithms
- Lectures 17, 18 and 19: Shortest Path Algorithms
- Lectures 20 and 21: Parallel Algorithms
- Lectures 22 and 23: Cache Oblivious Algorithms

I'll now go through each of the lectures. They require quite a bit of math knowledge to understand. If you are uncertain about your math skills, I'd suggest reading Knuth's Concrete Mathematics book. It contains absolutely all the necessary math to understand this course.

## Lecture 1: Analysis of Algorithms

If you're a student, or even if you're not, you must never miss the first lecture of any course, ever! The first lecture tells you what to expect from the course, how it will be taught, what it will cover, who the professor is, what the prerequisites are, and a bunch of other important and interesting things.

In this lecture you also get to know professor Charles E. Leiserson (author of CLRS) and he explains the following topics:

- Why study algorithms and their performance?
- What is the analysis of algorithms?
- What can be more important than the performance of algorithms?
- The sorting problem.
- Insertion sort algorithm.
- Running time analysis of insertion sort.
- Asymptotic analysis.
- Worst-case, average-case, best-case running time analysis.
- Analysis of insertion sort's worst-case running time.
- Asymptotic notation - theta notation - Θ.
- Merge sort algorithm.
- The recursive nature of merge sort algorithm.
- Running time recurrence for merge sort.
- Recursion trees.
- Running time analysis of merge sort by looking at the recursion tree.
- General recurrence for divide and conquer algorithms.

I personally found the list of things that can be more important than the performance of the program interesting. These things are modularity, correctness, maintainability, security, functionality, robustness, user-friendliness, programmer's time, simplicity, extensibility, reliability, scalability.

Follow this link to the full review of lecture one.

## Lecture 2: Analysis of Algorithms (continued)

The second lecture is presented by Eric Demaine. He's the youngest professor in the history of MIT.

Here are the topics that he explains in the second lecture:

- Asymptotic notation.
- Big-o notation - O.
- Set definition of O-notation.
- Capital-omega notation - Ω.
- Theta notation - Θ.
- Small-o notation - o.
- Small-omega notation - ω.
- Solving recurrences by substitution method.
- Solving recurrences by recursion-tree method.
- Solving recurrences by the Master's method.
- Intuitive sketch proof of the Master's method.

An interesting thing in this lecture is the analogy of (O, Ω, Θ, o, ω) to (≤, ≥, =, <, >).

For example, if we say f(n) = O(n^{2}) then by using the analogy we can think of it as f(n) ≤ c·n^{2}, that is, function f(n) is always smaller than or equal to c·n^{2}, or in other words, it's bounded above by function c·n^{2}, which is exactly what f(n) = O(n^{2}) means.

Follow this link to the full review of lecture two.

## Lecture 3: Divide and Conquer

The third lecture is all about the divide-and-conquer algorithm design method and its applications. The divide and conquer method solves a problem by 1) breaking it into a number of subproblems (divide step), 2) solving each problem recursively (conquer step), 3) combining the solutions (combine step).
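The divide, conquer and combine steps are easy to see in a short merge sort. Here is a sketch of mine in Python (the lectures themselves use pseudocode):

```python
def merge_sort(a):
    if len(a) <= 1:              # base case: 0 or 1 elements are sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])   # divide + conquer each half
    right = merge_sort(a[mid:])
    return merge(left, right)    # combine the two sorted halves

def merge(left, right):
    # Merge two sorted lists into one sorted list in linear time.
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    return result + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```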

Here are the topics explained in the third lecture:

- The nature of divide and conquer algorithms.
- An example of divide and conquer - merge sort.
- Solving for running time of merge sort by Master's method.
- Binary search.
- Powering a number.
- Fibonacci numbers.
- Algorithms for computing Fibonacci numbers.
- Fibonacci by naive recursive algorithm.
- Fibonacci by bottom-up algorithm.
- Fibonacci by naive recursive squaring.
- Fibonacci by matrix recursive squaring.
- Matrix multiplication
- Strassen's algorithm.
- VLSI (very large scale integration) layout problem.

I was most impressed by the four algorithms for computing Fibonacci numbers. I actually wrote about one of them in my publication "On the Linear Time Algorithm For Finding Fibonacci Numbers," which explains how this algorithm is actually quadratic in practice (but linear in theory).
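To give a taste of the matrix recursive squaring approach, here is a sketch of mine in Python (my own illustration, not the lecture's pseudocode). It uses the identity that [[1,1],[1,0]]^n holds F(n) as its off-diagonal entry:

```python
def mat_mult(a, b):
    # Multiply two 2x2 matrices.
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_pow(m, n):
    # Recursive squaring: compute m^n with O(lg n) multiplications.
    if n == 1:
        return m
    half = mat_pow(m, n // 2)
    square = mat_mult(half, half)
    return square if n % 2 == 0 else mat_mult(square, m)

def fib(n):
    # [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    if n == 0:
        return 0
    return mat_pow([[1, 1], [1, 0]], n)[0][1]

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```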

Follow this link to the full review of lecture three.

## Lecture 4: Sorting

Lecture four is devoted entirely to the quicksort algorithm. It's the industry-standard algorithm that is used for sorting in most computer systems. You just have to know it.

Topics explained in lecture four:

- Divide and conquer approach to sorting.
- Quicksort algorithm.
- The partition routine in the quicksort algorithm.
- Running time analysis of quicksort.
- Worst-case analysis of quicksort.
- Intuitive, best-case analysis of quicksort.
- Randomized quicksort.
- Indicator random variables.
- Running time analysis of randomized quicksort in expectation.

I loved how the idea of randomizing the partition subroutine in quicksort algorithm led to a running time that is independent of element order. The deterministic quicksort could always be fed an input that triggers the worst-case running time O(n^{2}), but the worst-case running time of randomized quicksort is determined only by the output of the random number generator.
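Here is a sketch of randomized quicksort in Python (my own simplified, out-of-place version; the lecture's algorithm partitions in place):

```python
import random

def randomized_quicksort(a):
    # The pivot is chosen uniformly at random, so no fixed input
    # can force the worst-case O(n^2) behavior.
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```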

I once wrote another post about quicksort called "Three Beautiful Quicksorts" where I summarized what Jon Bentley had to say about the experimental analysis of quicksort's running time and how the current quicksort algorithm looks in industry libraries (such as the C standard library, which provides the qsort function).

Follow this link to the full review of lecture four.

## Lecture 5: Sorting (continued)

Lecture five continues on sorting and looks at what limits the running time of comparison-based sorting to n·lg(n). It then breaks out of this limitation and shows several linear-time sorting algorithms.

Topics explained in lecture five:

- How fast can we sort?
- Comparison sort model.
- Decision trees.
- Comparison sort algorithms based on decision trees.
- Lower bound for decision-tree sorting.
- Sorting in linear time.
- Counting sort.
- The concept of stable sorting.
- Radix sort.
- Correctness of radix sort.
- Running time analysis of radix sort.

The most interesting topic here was how any comparison sort algorithm can be translated into a decision tree (and vice versa), which limits how fast we can sort.
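Counting sort, one of the linear-time algorithms from the list above, fits in a few lines. The sketch below is my own simplified version for bare integers (the lecture's version is stable, which matters when counting sort is used as the subroutine of radix sort):

```python
def counting_sort(a, k):
    # Sort a list of integers in the range 0..k in O(n + k) time.
    count = [0] * (k + 1)
    for x in a:                 # count the occurrences of each key
        count[x] += 1
    result = []
    for value in range(k + 1):  # emit each key as many times as it occurred
        result.extend([value] * count[value])
    return result

print(counting_sort([4, 1, 3, 4, 3], 5))  # [1, 3, 3, 4, 4]
```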

Follow this link to the full review of lecture five.

## Lecture 6: Order Statistics

Lecture six deals with the order statistics problem - how to find the k-th smallest element among n elements. The naive algorithm is to sort the list of n elements and return the k-th element in the sorted list, but this approach makes it run in O(n·lg(n)) time. This lecture shows how a randomized, linear-time algorithm (in expectation) for this problem can be constructed.

Topics explained in lecture six:

- Order statistics.
- Naive order statistics algorithm via sorting.
- Randomized divide and conquer order statistics algorithm.
- Expected running time analysis of randomized order statistics algorithm.
- Worst-case linear-time order-statistics.

An interesting point in this lecture is that the worst-case, deterministic, linear-time algorithm for order statistics isn't being used in practice because it performs poorly compared to the randomized linear-time algorithm.
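The randomized algorithm, often called quickselect, can be sketched in a few lines of Python (my own out-of-place version; the lecture's version partitions in place):

```python
import random

def quickselect(a, k):
    # Find the k-th smallest element (k starts at 1) in expected
    # linear time by recursing into only one side of the partition.
    pivot = random.choice(a)
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    if k <= len(less):
        return quickselect(less, k)
    if k <= len(less) + len(equal):
        return pivot
    return quickselect(greater, k - len(less) - len(equal))

print(quickselect([7, 10, 4, 3, 20, 15], 3))  # 7
```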

Follow this link to the full review of lecture six.

## Lecture 7: Hashing

This is the first lecture of two on hashing. It introduces hashing and various collision resolution strategies.

All the topics explained in lecture seven:

- Symbol table problem.
- Direct-access table.
- The concept of hashing.
- Collisions in hashing.
- Resolving collisions by chaining.
- Analysis of worst-case and average-case search time of chaining.
- Hash functions.
- Division hash method.
- Multiplication hash method.
- Resolving collisions by open addressing.
- Probing strategies.
- Linear probing.
- Double hashing.
- Analysis of open addressing.

Follow this link to the full review of lecture seven.

## Lecture 8: Hashing (continued)

The second lecture on hashing. It addresses the weakness of hashing - for any fixed choice of hash function, there exists a bad set of keys that all hash to the same value. An adversary can take advantage of this and attack our program. Universal hashing solves this problem. The other topic explained in this lecture is perfect hashing - given n keys, how to construct a hash table of size O(n) where search takes O(1) time, guaranteed.
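For illustration, here is one classic universal family sketched in Python: a random linear function modulo a prime. This is my own example and not necessarily the exact construction worked out in the lecture:

```python
import random

def make_universal_hash(m, p=2**31 - 1):
    # Pick h(x) = ((a*x + b) mod p) mod m at random from a universal
    # family; p is a prime larger than any key, m is the table size.
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m

# Because a and b are chosen at runtime, an adversary can't prepare
# a set of keys that collide for every run of the program.
h = make_universal_hash(10)
buckets = [h(key) for key in [12, 99, 1024, 31337]]
print(buckets)  # four slots in the range 0..9, different on each run
```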

All the topics in lecture eight:

- Weakness of hashing.
- Universal hashing.
- Construction of universal hash functions.
- Perfect hashing.
- Markov inequality.

Follow this link to the full review of lecture eight.

## Lecture 9: Search Trees

This lecture primarily discusses randomly built binary search trees. (It assumes you know what binary trees are.) Similar to universal hashing (see previous lecture), they solve a problem when you need to build a tree from untrusted data. It turns out that the expected height of a randomly built binary search tree is still O(lg(n)), more precisely, it's expected to be 3·lg(n) at most.

Topics explained in lecture nine:

- What are good and bad binary search trees?
- Binary search tree sort.
- Analysis of binary search tree sort.
- BST sort relation to quicksort.
- Randomized BST sort.
- Randomly built binary search trees.
- Convex functions, Jensen's inequality.
- Expected height of a randomly built BST.

The most surprising idea in this lecture is that binary search tree sort (introduced in this lecture) does the same element comparisons as quicksort, that is, they produce the same decision tree.

Follow this link to the full review of lecture nine.

## Lecture 10: Search Trees (continued)

This is the second lecture on search trees. It discusses self-balancing trees, more specifically, red-black trees. They balance themselves in such a manner that no matter what the input is, their height is always O(lg(n)).

Topics explained in lecture ten:

- Balanced search trees.
- Red-black trees.
- Height of red-black trees.
- Rotations in binary trees.
- How to insert an element in a red-black tree?
- Insert-element algorithm for red-black trees.

Follow this link to the full review of lecture ten.

## Lecture 11: Augmenting Data Structures

The eleventh lecture explains how to build new data structures out of existing ones. For example, how to build a data structure that you can update and query quickly for the i-th smallest element. This is the problem of dynamic order statistics and an easy solution is to augment a binary tree, such as a red-black tree. Another example is interval trees - how to quickly find an interval (such as 5-9) that overlaps some other intervals (such as 4-11 and 8-20).

Topics explained in lecture eleven:

- Dynamic order statistics.
- Data structure augmentation.
- Interval trees.
- Augmenting red-black trees to have them perform as interval trees.
- Correctness of augmented red-black tree data structure.

Augmenting data structures requires a lot of creativity. First you need to find an underlying data structure (the easiest step) and then think of a way to augment it with data to make it do what you want (the hardest step).
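As an illustration of the augmentation idea, here is a hedged Python sketch of dynamic order statistics on a plain (unbalanced) BST, with names of my own choosing; the lecture augments a red-black tree the same way, which is what guarantees O(lg n) operations:

```python
# Each node stores the size of its subtree, so the i-th smallest
# element can be found in O(height) time. Rebalancing is omitted.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.size = key, None, None, 1

def insert(root, key):
    if root is None:
        return Node(key)
    root.size += 1  # maintain the augmentation on the way down
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def select(root, i):
    """Return the i-th smallest key (1-indexed)."""
    rank = (root.left.size if root.left else 0) + 1
    if i == rank:
        return root.key
    if i < rank:
        return select(root.left, i)
    return select(root.right, i - rank)
```

The point of the augmentation step is that the extra `size` field can be kept up to date during inserts (and, in the red-black version, during rotations) without changing the asymptotic cost.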

Follow this link to the full review of lecture eleven.

## Lecture 12: Skip Lists

This lecture explains skip lists - a simple, efficient, easily implemented, randomized search structure. It performs as well as a balanced binary search tree but is much easier to implement. Eric Demaine says he implemented it in 40 minutes before the class (10 minutes to implement and 30 to debug).

In this lecture Eric builds this data structure from scratch. He starts with a linked list and builds up to a pair of linked lists, then three linked lists, until he finds the optimal number of linked lists needed to achieve logarithmic search time.

Next he continues to explain how to algorithmically build such a structure and proves that the search in this data structure is indeed quick.

Follow this link to the full review of lecture twelve.

## Lecture 13: Amortized Analysis

Amortized analysis is a technique to show that even if several operations in a sequence of operations are costly, the overall performance is still good. A good example is adding elements to a dynamic list (such as a list in Python). Every time the list is full, Python has to allocate more space and this is costly. Amortized analysis can be used to show that the average cost per insert is still O(1), even though Python occasionally has to allocate more space for the list.
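The doubling strategy can be demonstrated with a small Python sketch (an illustration of the analysis, not Python's actual list implementation) that counts element writes: the total stays below 3n for n appends, so each append costs amortized O(1).

```python
# Sketch of a doubling dynamic table: when full, allocate twice the
# capacity and copy everything over. Counting writes shows the total
# cost of n appends is under 3n despite the occasional O(n) copy.
class DynamicTable:
    def __init__(self):
        self.capacity = 1
        self.items = [None]
        self.n = 0
        self.cost = 0  # total element writes, including copies

    def append(self, x):
        if self.n == self.capacity:      # table full: double it
            self.capacity *= 2
            new = [None] * self.capacity
            for i in range(self.n):      # copy the old elements
                new[i] = self.items[i]
                self.cost += 1
            self.items = new
        self.items[self.n] = x
        self.n += 1
        self.cost += 1

t = DynamicTable()
for i in range(1000):
    t.append(i)
```

For 1000 appends the copies cost 1 + 2 + 4 + ... + 512 = 1023 writes, so the total is about 2n, comfortably within the 3n bound the accounting method proves.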

Topics explained in lecture thirteen:

- How large should a hash table be?
- Dynamic tables.
- Amortized analysis.
- Accounting method of amortized analysis.
- Dynamic table analysis with accounting method.
- Potential method of amortized analysis.
- Dynamic table analysis with potential method.

This is one of the most mathematically complicated lectures.

Follow this link to the full review of lecture thirteen.

## Lecture 14: Self-Organizing Lists and Competitive Analysis

This lecture concentrates on self-organizing lists. A self-organizing list is a list that reorders itself to improve the average access time. The goal is to find a reordering that minimizes the total access time. For example, each time an element is accessed, it's moved to the front of the list, hoping that it might be accessed again soon. This is called the move-to-front heuristic.

Competitive analysis can be used to reason theoretically about how well a strategy such as move-to-front performs.
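A minimal Python sketch of the move-to-front heuristic (function and variable names are mine): the cost of an access is the element's 1-based position, and repeated accesses to the same element become cheap.

```python
# Move-to-front: after each access, move the accessed element to the
# head of the list. Cost model: accessing the element at (0-based)
# index i costs i + 1.
def access_mtf(lst, x):
    i = lst.index(x)            # search cost: i + 1
    lst.insert(0, lst.pop(i))   # move the accessed element to the front
    return i + 1

lst = ['a', 'b', 'c', 'd']
# Access 'd' three times: expensive once, then cheap.
costs = [access_mtf(lst, 'd'), access_mtf(lst, 'd'), access_mtf(lst, 'd')]
```

Competitive analysis shows this online strategy's total cost is within a constant factor of the best offline reordering for any access sequence.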

Topics explained in lecture fourteen:

- Self-organizing lists.
- Online and offline algorithms.
- Worst-case analysis of self-organizing lists.
- Competitive analysis.
- Move-to-front heuristic for self-organizing lists.
- Amortized cost of move-to-front heuristic.

Follow this link to the full review of lecture fourteen.

## Lecture 15: Dynamic Programming

This lecture is about the dynamic programming algorithm design technique. It's a tabular method (involving constructing a table or some part of a table) that leads to a much faster running time of the algorithm.

The lecture focuses on the longest common subsequence problem, first showing the brute force algorithm, then a recursive one, and finally a dynamic programming algorithm. The brute force algorithm is exponential in the length of strings, the recursive one is also exponential, but the dynamic programming solution is O(n·m) where n is the length of one string, and m is the length of the other.

Topics explained in lecture fifteen:

- The idea of dynamic programming.
- Longest common subsequence problem (LCS).
- Brute force algorithm for LCS.
- Analysis of brute-force algorithm.
- Simplified algorithm for LCS.
- Dynamic programming hallmark #1: optimal substructure.
- Dynamic programming hallmark #2: overlapping subproblems.
- Recursive algorithm for LCS.
- Memoization.
- Dynamic programming algorithm for LCS.

The most interesting thing in this lecture is the two hallmarks that indicate that the problem may be solved with dynamic programming. They are "optimal substructure" and "overlapping subproblems".

The first one means that an optimal solution to a problem contains optimal solutions to subproblems. For example, if z = LCS(x, y), then any prefix of z is an LCS of some prefix of x and some prefix of y (a prefix of the solution solves a subproblem).

The second one means exactly what it says, that the problem contains many overlapping subproblems.
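Here's a short Python sketch of the O(n·m) dynamic programming table for LCS length (variable names are my own, following the usual c[i][j] recurrence):

```python
# c[i][j] holds the length of an LCS of x[:i] and y[:j].
# If the last characters match, extend the diagonal; otherwise take
# the better of dropping a character from x or from y.
def lcs_length(x, y):
    n, m = len(x), len(y)
    c = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[n][m]
```

The table makes the overlapping subproblems explicit: every cell is computed once, instead of exponentially many times as in the naive recursion.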

Follow this link to the full review of lecture fifteen.

## Lecture 16: Greedy Algorithms

This lecture introduces greedy algorithms via the minimum spanning tree problem. The minimum spanning tree problem asks to find a tree that connects all the vertices of a graph with minimum total edge weight. It seems at first that a dynamic programming solution could solve it effectively, but a more careful analysis shows that the problem exhibits another powerful property -- the greedy choice property: a locally best choice leads to a globally optimal solution. That's why the algorithm is called greedy -- it always makes the best immediate choice without thinking about the problem as a whole.

Topics explained in lecture sixteen:

- Review of graphs.
- Graph representations.
- Adjacency matrices.
- Adjacency lists.
- Sparse and dense graphs.
- Handshaking lemma.
- Minimum spanning trees (MSTs).
- Hallmark for greedy algorithms: greedy choice property.
- Prim's algorithm for finding MST.
- Running time analysis of Prim's algorithm.
- Idea of Kruskal's algorithm for MSTs.
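A hedged Python sketch of Prim's algorithm with a binary heap (the example graph and all names are mine, not the lecture's): grow the tree from an arbitrary start vertex, always adding the lightest edge that leaves the tree.

```python
import heapq

# graph maps u -> list of (v, weight) pairs (undirected: both ways).
def prim_mst_weight(graph, start):
    visited = set()
    heap = [(0, start)]  # (edge weight to reach vertex, vertex)
    total = 0
    while heap:
        w, u = heapq.heappop(heap)
        if u in visited:
            continue            # stale entry, vertex already in tree
        visited.add(u)
        total += w
        for v, wv in graph[u]:
            if v not in visited:
                heapq.heappush(heap, (wv, v))
    return total

graph = {
    'a': [('b', 1), ('c', 4)],
    'b': [('a', 1), ('c', 2), ('d', 6)],
    'c': [('a', 4), ('b', 2), ('d', 3)],
    'd': [('b', 6), ('c', 3)],
}
```

With a binary heap this runs in O(E lg V); the lecture also discusses how a Fibonacci heap improves the bound.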

Follow this link to the full review of lecture sixteen.

## Lecture 17: Shortest Path Algorithms

This lecture starts a trilogy on shortest path algorithms. This first episode discusses single-source shortest path algorithms. The problem can be described as follows -- how to get from one point in a graph to another by traveling the shortest distance (think of a road network). Dijkstra's algorithm solves this problem efficiently.

Topics explained in lecture seventeen:

- Paths in graphs.
- Shortest paths.
- Path weights.
- Negative path weights.
- Single-source shortest path.
- Dijkstra's algorithm.
- Example of Dijkstra's algorithm.
- Correctness of Dijkstra's algorithm.
- Unweighted graphs.
- Breadth First Search.

The most interesting thing here is that on unweighted graphs Dijkstra's algorithm reduces to the breadth-first search algorithm, which uses a plain FIFO queue instead of a priority queue -- since all edges have the same weight, there is no longer a need to keep track of the shortest distance.
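A compact Python sketch of Dijkstra's algorithm with a priority queue (the example graph and names are mine):

```python
import heapq

# graph maps u -> list of (v, weight); weights must be non-negative.
def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                    # stale heap entry, skip it
        for v, w in graph[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w         # relax edge (u, v)
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {
    's': [('a', 1), ('b', 4)],
    'a': [('b', 2), ('t', 6)],
    'b': [('t', 3)],
    't': [],
}
```

Setting every weight to 1 and swapping the heap for a FIFO queue turns this into breadth-first search, which is the reduction mentioned above.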

Follow this link to the full review of lecture seventeen.

## Lecture 18: Shortest Path Algorithms (continued)

The second lecture in the trilogy on shortest paths deals with single-source shortest paths in graphs that may have negative edge weights. The Bellman-Ford algorithm solves the shortest path problem for graphs with negative edges.

Topics explained in lecture eighteen:

- Bellman-Ford algorithm for shortest paths with negative edges.
- Negative weight cycles.
- Correctness of Bellman-Ford algorithm.
- Linear programming.
- Linear feasibility problem.
- Difference constraints.
- Constraint graph.
- Using Bellman-Ford algorithm to solve a system of difference constraints.
- Solving VLSI (very large scale integration) layout problem via Bellman-Ford.
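A small Python sketch of the Bellman-Ford algorithm (my own example with one negative edge and no negative cycles): relax every edge |V| - 1 times; if a further relaxation still succeeds, a negative-weight cycle is reachable from the source.

```python
# edges is a list of (u, v, weight) triples for directed edges.
def bellman_ford(vertices, edges, source):
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):   # |V| - 1 relaxation passes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                # one more pass: detect cycles
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle detected")
    return dist

edges = [('s', 'a', 4), ('s', 'b', 5), ('a', 'b', -3), ('b', 't', 2)]
dist = bellman_ford(['s', 'a', 'b', 't'], edges, 's')
```

The same relaxation machinery is what lets Bellman-Ford solve systems of difference constraints, as the lecture shows.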

Follow this link to the full review of lecture eighteen.

## Lecture 19: Shortest Path Algorithms (continued)

The last lecture in the trilogy deals with the all-pairs shortest paths problem -- determining the shortest distances between every pair of vertices in a given graph.

Topics explained in lecture nineteen:

- Review of single source shortest path problem.
- All-pairs shortest paths.
- Dynamic programming.
- Idea from matrix multiplication.
- Floyd-Warshall algorithm for all-pairs shortest paths.
- Transitive closure of directed graph.
- Johnson's algorithm for all-pairs shortest paths.

An interesting point here is how the Floyd-Warshall algorithm, which runs in O((number of vertices)^{3}), can be transformed into something similar to Strassen's algorithm to compute the transitive closure of a graph (which then runs in O((number of vertices)^{lg 7})).
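The triple loop of Floyd-Warshall is short enough to sketch in Python (the example adjacency matrix is my own):

```python
# d[i][j] starts as the weight of edge i -> j (INF if absent, 0 on the
# diagonal) and is iteratively improved by allowing vertex k as an
# intermediate stop. Three nested loops give the O(V^3) running time.
INF = float('inf')

def floyd_warshall(d):
    n = len(d)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

d = [
    [0,   3,   INF],
    [INF, 0,   1],
    [2,   INF, 0],
]
d = floyd_warshall(d)
```

Replacing (+, min) with (AND, OR) on a boolean matrix gives transitive closure, which is the bridge to the matrix-multiplication view mentioned above.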

Follow this link to the full review of lecture nineteen.

## Lecture 20: Parallel Algorithms

This is an introductory lecture to multithreaded algorithm analysis. It explains the terminology used in multithreaded algorithms, such as, work, critical path length, speedup, parallelism, scheduling, and others.

Topics explained in lecture twenty:

- Dynamic multithreading.
- Subroutines: spawn and sync.
- Logical parallelism and actual parallelism.
- Multithreaded computation.
- An example of a multithreaded execution on a recursive Fibonacci algorithm.
- Measuring performance of a multithreaded computation.
- The concept of speedup.
- Maximum possible speedup.
- Linear speedup.
- Super-linear speedup.
- Parallelism.
- Scheduling.
- Greedy scheduler.
- Graham and Brent theorem of competitiveness of greedy schedules.
- *Socrates and Cilkchess chess programs.

Follow this link to the full review of lecture twenty.

## Lecture 21: Parallel Algorithms (continued)

The second lecture on parallel algorithms shows how to design and analyze multithreaded matrix multiplication algorithm and multithreaded sorting.

Topics explained in lecture twenty-one:

- Multithreaded algorithms.
- Multithreaded matrix multiplication.
- Performance analysis of the multithreaded matrix multiplication algorithm.
- Multithreaded sorting.
- Multithreaded merge-sort algorithm.
- Parallel-merge subroutine.
- Analysis of merge-sort with parallel-merge subroutine.

Follow this link to the full review of lecture twenty-one.

## Lecture 22: Cache Oblivious Algorithms

Cache-oblivious algorithms take into account something that has been ignored in all the algorithms so far, namely, the cache. An algorithm that uses the cache effectively will perform much better than one that doesn't. This lecture is all about how to lay out data structures in memory in such a way that memory transfers are minimized.

Topics explained in lecture twenty-two:

- Modern memory hierarchy.
- The concept of spatial locality and temporal locality.
- Two-level memory model.
- Cache-oblivious algorithms.
- Blocking of memory.
- Memory transfers in a simple scanning algorithm.
- Memory transfers in string-reverse algorithm.
- Memory analysis of binary search.
- Cache oblivious order statistics.
- Cache oblivious matrix multiplication algorithm.

Follow this link to the full review of lecture twenty-two.

## Lecture 23: Cache Oblivious Algorithms (continued)

This is the final lecture of the course. It continues on cache oblivious algorithms and shows how to store binary search trees in memory so that memory transfers are minimized when searching in them. It wraps up with cache oblivious sorting.

Topics explained in lecture twenty-three:

- Static search trees.
- Memory efficient layout of static binary search trees in memory.
- Analysis of static search trees.
- Cache aware sorting.
- Cache-oblivious sorting.
- Funnel sort.
- K-funnel data structure.

This is the most complicated lecture in the whole course. It takes a day to understand the k-funnel data structure.

Follow this link to the full review of lecture twenty-three.

That's it. This was the final lecture. I hope you find this summary useful.

**Upcoming on my blog -- review of MIT's Linear Algebra course.**

At first I thought I'd post Linear Algebra to a separate blog section that does not appear in the RSS feed, but then I gave it another thought and came to the conclusion that every competent programmer must know linear algebra, and therefore it's worth putting these posts in the feed.

You can surely be a good programmer without knowing linear algebra, but if you want to work on great problems and make a difference, then you absolutely have to know it.

Stay tuned!

This is the fifth post in the article series "**Vim Plugins You Should Know About**". This time I am going to introduce you to a nifty plugin called "**a.vim**".

A.vim allows you to quickly switch between related source code files. For example, if you're programming in C, you can alternate between source.c and the corresponding header source.h by just typing **:A**.

It saves you only a few seconds every time you use it, but don't forget that these seconds can add up to hours over several weeks.

This plugin was written by Mike Sharpe.

For the introduction of this article series see part one - surround.vim.

## Other bindings in a.vim

Besides the " **:A** " command that alternates between source files in the same buffer, a.vim also defines several other commands:

- **:AS** -- alternate in a horizontal split,
- **:AV** -- alternate in a vertical split, and
- **:AT** -- alternate in a new tab.

The author of the plugin also defines the command " **:IH** ", which opens the file under the cursor, but it's really unnecessary because "**gf**" already does that.

## Extending a.vim

By default a.vim defines alternation for the following languages:

- C -- .c <-> .h,
- C++ -- .c / .cpp / .cxx / .cc <-> .h / .hpp,
- lex and yacc -- .l / .lex / .lpp <-> .y / .ypp / .yacc,
- ASP.NET -- .aspx <-> .aspx.cs / .aspx.vb

The alternation can be extended to other extensions by defining the following variable in your .vimrc:

let g:alternateExtensions_foo = "bar,baz"

This will set up alternation between .foo, .bar and .baz files.

## How to install a.vim?

To get the latest version:

1. Download a.vim.
2. Put a.vim in ~/.vim/plugin (on Unix/Linux) or ~\vimfiles\plugin (on Windows).
3. Download alternate.txt from the same page.
4. Put alternate.txt in ~/.vim/doc (on Unix/Linux) or ~\vimfiles\doc (on Windows).
5. Run :helptags ~/.vim/doc (on Unix/Linux) or :helptags ~\vimfiles\doc (on Windows) to rebuild the tags file (so that you can read :help alternate).
6. Restart Vim.

I have mapped the **:A** command to " **,a** ". You can also map it to the same combination by putting "map ,a :A<CR>" in your .vimrc file.

## Have Fun!

Have fun with this plugin and until next time!

**72**Comments November 03, 2009

# Famous Perl One-Liners Explained, Part III: Calculations

This is the third part of a seven-part article on **famous Perl one-liners**. In this part I will create various one-liners for **calculations**. See part one for introduction of the series.

Famous Perl one-liners is my attempt to create "**perl1line.txt**" that is similar to "awk1line.txt" and "sed1line.txt" that have been so popular among Awk and Sed programmers.

The article on famous Perl one-liners will consist of at least seven parts:

- Part I: File spacing.
- Part II: Line numbering.
- **Part III: Calculations (this part).**
- Part IV: String creation. Array creation.
- Part V: Text conversion and substitution.
- Part VI: Selective printing and deleting of certain lines.
- Part VII: Handy regular expressions.
- Part VIII: Release of perl1line.txt.
- Part IX: Release of Perl One-Liners e-book.

After I'm done explaining all these one-liners, I'll publish an ebook. Subscribe to my blog to know when that happens!

The one-liners will make heavy use of Perl special variables. A few years ago I compiled all the Perl special variables in a single file and called it the Perl special variable cheat sheet. Even though it's mostly copied out of perldoc perlvar, it's still handy to have in front of you, so print it.

**Awesome news:** I have written an e-book based on this article series. Check it out:

And here are today's one-liners:

## Calculations

**21. Check if a number is a prime.**

perl -lne '(1x$_) !~ /^1?$|^(11+?)\1+$/ && print "$_ is prime"'

This one-liner uses an ingenious regular expression to detect if a given number is a prime or not. Don't take it too seriously, though. I included it for its artistic value.

First, the number is converted into its unary representation by " (1x$_) ". For example, 5 gets converted into " 1x5 ", which is " 11111 ".

Next, the unary number gets tested against the ingenious regular expression. If it *doesn't match*, the number is a prime, otherwise it's a composite.

The regular expression works this way. It consists of two parts " ^1?$ " and " ^(11+?)\1+$ ".

The first part matches " 1 " and the empty string. Clearly, the empty string and 1 are not prime numbers, therefore this part matches, indicating that they are not prime.

The second part determines whether two or more 1s, repeated, make up the whole number. If they do, the regex matches, which means that the number is composite. Otherwise it's a prime.

Let's look at the second regex part on numbers 5 and 4.

The number 5 in unary representation is " 11111 ". The " (11+?) " matches first two ones " 11 ". The back-reference " \1 " becomes " 11 " and the whole regex now becomes " ^11(11)+$ ". It can't match five ones, therefore it fails. But since it used " +? ", it backtracks and matches the first three ones " 111 ". The back-reference becomes " 111 " and the whole regex becomes " ^111(111)+$ ". It doesn't match again. This repeats for " 1111 " and " 11111 ", which also don't match, therefore the whole regex doesn't match and the number is a prime.

The number 4 in unary representation is " 1111 ". The " (11+?) " matches the first two ones " 11 ". The back-reference " \1 " becomes " 11 " and the regex becomes " ^11(11)+$ ". It matches the original string, therefore the number is not a prime.

The " -lne " command line options have been explained in parts one and two.

**22. Print the sum of all the fields on a line.**

perl -MList::Util=sum -alne 'print sum @F'

This one-liner turns on field auto-splitting with the " -a " command line option and imports the "sum" function from the "List::Util" module with the " -MList::Util=sum " option. "List::Util" is in the Perl core so you don't need to worry about installing it.

As a result of auto-splitting the split fields end up in the " @F " array and the " sum " function just sums them up.

The -Mmodule=arg option imports arg from module and is the same as writing:

use module qw(arg)

This one-liner is equivalent to the following:

use List::Util qw(sum); while (<>) { @F = split(' '); print sum @F, "\n"; }

**23. Print the sum of all the fields on all lines.**

perl -MList::Util=sum -alne 'push @S,@F; END { print sum @S }'

This one-liner keeps pushing the split fields in " @F " to the " @S " array. Once the input is over and perl is about to quit, the END { } block gets called that outputs the sum of all items in @S. This sum is the sum of all fields over all lines.

This solution isn't too good - it creates a massive array @S. A better solution is to keep just the running sum:

perl -MList::Util=sum -alne '$s += sum @F; END { print $s }'

**24. Shuffle all fields on a line.**

perl -MList::Util=shuffle -alne 'print "@{[shuffle @F]}"'

This is almost the same as one-liner #22 above. Instead of summing all fields, it shuffles and prints them.

The " @{[shuffle @F]} " construct creates an array reference to the contents of " shuffle @F " and " @ { ... } " dereferences it. This is a tricky way to execute code inside quotes. It was needed to get the values of shuffled @F separated by a space when printing them out.

Another way to do the same is to join the elements of @F by a space, but it's longer:

perl -MList::Util=shuffle -alne 'print join " ", shuffle @F'

**25. Find the minimum element on a line.**

perl -MList::Util=min -alne 'print min @F'

This one-liner uses the "min" function from "List::Util". It's similar to all the previous ones. After the line has been automatically split by " -a ", the "min" function finds the minimum element and prints it.

**26. Find the minimum element over all the lines.**

perl -MList::Util=min -alne '@M = (@M, @F); END { print min @M }'

This one-liner is a combination of the previous one and the #23.

The "@M = (@M, @F)" construct is the same as "push @M, @F". It appends the contents of @F to the array @M.

This one-liner stores all the data in memory. If you run it on a 10 terabyte file, it will die. Therefore it's better to keep the running minimum element in memory and print it out at the end:

perl -MList::Util=min -alne '$min = min @F; $rmin = $min unless defined $rmin && $min > $rmin; END { print $rmin }'

It finds the minimum of each line and stores it in $min, then it checks if $min is smaller than the running minimum $rmin. Once the input ends, it prints the running minimum, which is the smallest value over all input.

**27. Find the maximum element on a line.**

perl -MList::Util=max -alne 'print max @F'

This is the same as #25, except "min" has been replaced with "max".

**28. Find the maximum element over all the lines.**

perl -MList::Util=max -alne '@M = (@M, @F); END { print max @M }'

This is the same as #26.

Or:

perl -MList::Util=max -alne '$max = max @F; $rmax = $max unless defined $rmax && $max < $rmax; END { print $rmax }'

**29. Replace each field with its absolute value.**

perl -alne 'print "@{[map { abs } @F]}"'

This one-liner auto-splits the line by the " -a " command line option. The split fields, as I already explained, end up in the @F variable. Next it calls the absolute value function "abs" on each field with the help of the "map" function. Finally it prints all the fields joined by a space, via array interpolation in double quotes.

The " @{ ... } " construct was explained in one-liner #24.

**30. Find the total number of fields (words) on each line.**

perl -alne 'print scalar @F'

This one-liner evaluates @F in scalar context, which in Perl means "the number of elements in @F." Therefore this one-liner prints out the number of elements on each line.

**31. Print the total number of fields (words) on each line followed by the line.**

perl -alne 'print scalar @F, " $_"'

This is exactly the same as #30, except " $_ " is added at the end that prints out the whole line. (Remember that " -n " option caused each line to be put in the $_ variable.)

**32. Find the total number of fields (words) on all lines.**

perl -alne '$t += @F; END { print $t}'

Here we just keep adding the number of fields on each line to variable " $t ", and at the end we print it out. The result is number of words on all lines.

**33. Print the total number of fields that match a pattern.**

perl -alne 'map { /regex/ && $t++ } @F; END { print $t }'

This one-liner uses the " map " function that applies some operation on each of the elements in @F array. In this case the operation checks if each element matches /regex/ and if it does, it increments variable $t. At the end it prints this variable $t that contains the number of fields that matched /regex/ pattern.

A better way to do it is by looping:

perl -alne '$t += /regex/ for @F; END { print $t }'

Each element in `@F` is tested against `regex`. If it matches, `/regex/` returns 1 (true), which gets added to variable `$t`. This way the number of matches gets counted in `$t`.

The best way is to use grep in scalar context:

perl -alne '$t += grep /regex/, @F; END { print $t }'

Grep in scalar context returns the number of matches. This number gets accumulated in `$t`.

**34. Print the total number of lines that match a pattern.**

perl -lne '/regex/ && $t++; END { print $t }'

The `/regex/` evaluates to true if the current line of input matches this regular expression. Writing `/regex/ && $t++` is the same as `if ($_ =~ /regex/) { $t++ }`, which increments variable `$t` if the line matched the pattern. Finally, in the `END` block the variable `$t` contains the total number of pattern matches and it gets printed out.

**35. Print the number PI to n decimal places.**

perl -Mbignum=bpi -le 'print bpi(21)'

The bignum package exports **bpi** function that calculates constant PI to wanted accuracy. This one-liner prints PI to 20 decimal places.

The bignum library also exports constant PI alone to 39 decimal places:

perl -Mbignum=PI -le 'print PI'

**36. Print the number E to n decimal places.**

perl -Mbignum=bexp -le 'print bexp(1,21)'

The bignum library also exports bexp function that takes two arguments - the power to raise e to and accuracy. This one-liner prints the constant e to 20 decimal places.

You can print the value of e^2 to 30 decimal places this way:

perl -Mbignum=bexp -le 'print bexp(2,31)'

Just the same as with PI, bignum exports the constant e alone to 39 decimal places:

perl -Mbignum=e -le 'print e'

**37. Print UNIX time (seconds since Jan 1, 1970, 00:00:00 UTC).**

perl -le 'print time'

The built-in function "time" returns seconds since the epoch.

**38. Print GMT (Greenwich Mean Time) and local computer time.**

perl -le 'print scalar gmtime'

The "gmtime" function is a Perl built-in function. If used in scalar context, it prints the time localized to Greenwich time zone.

perl -le 'print scalar localtime'

The "localtime" built-in function acts the same way as "gmtime", except it prints the computer's local time.

In array context both "gmtime" and "localtime" return a 9 element list (struct tm) with the following elements.

($second,              # [0]
 $minute,              # [1]
 $hour,                # [2]
 $month_day,           # [3]
 $month,               # [4]
 $year,                # [5]
 $week_day,            # [6]
 $year_day,            # [7]
 $is_daylight_saving)  # [8]

You may slice this list, or print individual elements if you need just some part of this information.

For example, to print H:M:S, slice elements 2, 1 and 0 from localtime:

perl -le 'print join ":", (localtime)[2,1,0]'

**39. Print yesterday's date.**

perl -MPOSIX -le '@now = localtime; $now[3] -= 1; print scalar localtime mktime @now'

Remember that localtime returns a 9-list (see above) of various date elements. The 4th element in the list is current month's day. If we subtract one from it we get yesterday. The "mktime" function constructs a Unix epoch time from this modified 9-list. And "scalar localtime" construct prints out the new date, which is yesterday.

The POSIX package was needed because it exports the mktime function, which normalizes the out-of-range (here, negative) values.

**40. Print date 14 months, 9 days and 7 seconds ago.**

perl -MPOSIX -le '@now = localtime; $now[0] -= 7; $now[4] -= 14; $now[7] -= 9; print scalar localtime mktime @now'

This one-liner modifies 0th, 4th, and 7th elements of @now list. The 0th is seconds, the 4th is months and 7th is days (see the table of 9 element time list above).

Next, mktime creates Unix time from this new structure, and localtime, evaluated in scalar context, prints out the date that was 14 months, 9 days and 7 seconds ago.

**41. Calculate factorial.**

perl -MMath::BigInt -le 'print Math::BigInt->new(5)->bfac()'

This one-liner uses bfac() function from Math::BigInt module that is in the Perl core (no need to install).

Math::BigInt->new(5) construction creates a new Math::BigInt object with value 5, then a method bfac() is called on the newly created object to calculate the factorial of 5. Change 5 to any number you wish to find factorial for the value you are interested in.

Another way to calculate factorial is by just multiplying numbers from 1 to n together:

perl -le '$f = 1; $f *= $_ for 1..5; print $f'

Here we initially set $f to 1. Then do a loop from 1 to 5 and multiply $f by each of the values. The result is 1*2*3*4*5, which is the factorial of 5.

**42. Calculate greatest common divisor.**

perl -MMath::BigInt=bgcd -le 'print bgcd(@list_of_numbers)'

Math::BigInt has several other useful math functions. One of them is bgcd that calculates the greatest common divisor of a list of numbers.

For example, to find gcd of (20, 60, 30), you'd execute the one-liner this way:

perl -MMath::BigInt=bgcd -le 'print bgcd(20,60,30)'

Surely, you can also use Euclid's algorithm. Given two numbers $n and $m, this one-liner finds the gcd of $n and $m. The result is stored in $m.

perl -le '$n = 20; $m = 35; ($m,$n) = ($n,$m%$n) while $n; print $m'

**43. Calculate least common multiple.**

Another function from Math::BigInt is blcm - the least common multiple. This one-liner finds the lcm of (35, 20, 8):

perl -MMath::BigInt=blcm -le 'print blcm(35,20,8)'

If you know some number theory, then you'll recall that there is a connection between gcd and lcm. Given two numbers $n and $m, their lcm is $n*$m/gcd($n,$m), therefore this one-liner follows:

perl -le '$a = $n = 20; $b = $m = 35; ($m,$n) = ($n,$m%$n) while $n; print $a*$b/$m'

**44. Generate 10 random numbers between 5 and 15 (excluding 15).**

perl -le '$n=10; $min=5; $max=15; $, = " "; print map { int(rand($max-$min))+$min } 1..$n'

You can modify this one-liner by changing variables $n, $min, $max. The variable $n stands for how many random numbers to generate, and [$min,$max) is the generation range.

The variable $, gets set to a space because it's the output field separator for print and it's undef by default. If we didn't set it to a space, the numbers would get printed concatenated together.

**45. Find and print all permutations of a list.**

perl -MAlgorithm::Permute -le '$l = [1,2,3,4,5]; $p = Algorithm::Permute->new($l); print @r while @r = $p->next'

This one-liner uses the object-oriented interface of Algorithm::Permute module to find the permutations (all ways to rearrange items).

The constructor of `Algorithm::Permute` takes an array reference to an array of elements to permute. In this particular one-liner the elements are numbers 1, 2, 3, 4, 5.

The `next` object method returns the next permutation. Calling it repeatedly iterates over all permutations. Each permutation is put in the `@r` array and is then printed.

Please note that the output list gets large really quickly. There are n! permutations for a list of n elements.

Another way to print out all permutations is to use the exported `permute` subroutine:

perl -MAlgorithm::Permute -le '@l = (1,2,3,4,5); Algorithm::Permute::permute { print "@l" } @l'

**46. Generate the power set.**

perl -MList::PowerSet=powerset -le '@l = (1,2,3,4,5); for (@{powerset(@l)}) { print "@$_" }'

Here I use the List::PowerSet module from CPAN.

It exports the `powerset` function, which takes a list of elements and returns a reference to a list containing references to subset lists.

In the `for()` loop, I call the `powerset` function and pass it the list of elements in `@l`. Next I dereference the return value of `powerset`, which is a reference to a list of subsets. Then I dereference each individual subset `@$_` and print it.

For a set of n elements, there are exactly 2^{n} subsets in the powerset.

**47. Convert an IP address to unsigned integer.**

perl -le '$i=3; $u += ($_<<8*$i--) for "127.0.0.1" =~ /(\d+)/g; print $u'

This one-liner converts the IP address `127.0.0.1` into an unsigned integer (which happens to be `2130706433`).

It does it by first doing a global match of `(\d+)` on the IP address. Doing a `for` loop over a global match iterates over all the matches. These matches are the four parts of the IP address.

Next the matches are added together in the `$u` variable, with the first being bit-shifted `8*3 = 24` places, the second `8*2 = 16` places, the third `8` places and the last just getting added to `$u`.

But this one-liner doesn't do any error checking on the format of an IP address. You may use a more sophisticated regular expression to add checking, such as `/^(\d+)\.(\d+)\.(\d+)\.(\d+)$/g`.

I had a discussion about this with a friend and we came up with several more one-liners:

perl -le '$ip="127.0.0.1"; $ip =~ s/(\d+)\.?/sprintf("%02x", $1)/ge; print hex($ip)'

This one-liner utilizes the fact that `127.0.0.1` can be easily converted to hex as `7f000001` and then converted to decimal from hex by the `hex` Perl function.

Another way is to use unpack:

perl -le 'print unpack("N", 127.0.0.1)'

This one-liner is probably as short as it can get. It uses the vstring literals (version strings) to express the IP address. A vstring forms a string literal composed of characters with the specified ordinal values. Next, the newly formed string literal is unpacked into a number from a string in **N**etwork byte order (Big-Endian order) and it gets printed.

If you have a string with an IP address (and not a vstring), then you first have to convert the string to byte form with the function `inet_aton`:

perl -MSocket -le 'print unpack("N", inet_aton("127.0.0.1"))'

Here `inet_aton` converts the string `"127.0.0.1"` to byte form (which is the same as the pure vstring `127.0.0.1`) and then it's unpacked the same way as in the previous one-liner.

If you want a reference of pack and unpack templates (such as "N" for Network order), get my Perl pack/unpack cheat sheet!

**48. Convert an unsigned integer to an IP address.**

perl -MSocket -le 'print inet_ntoa(pack("N", 2130706433))'

Here the integer `2130706433` first gets packed into a number in Big-Endian byte order and then gets passed to the `inet_ntoa` function, which converts the number back to an IP address.

Another way to do it is by bit shifting and printing one byte at a time:

perl -le '$ip = 2130706433; print join ".", map { (($ip>>8*($_))&0xFF) } reverse 0..3'

And by the way, `join "."` can be replaced by the special variable `$,`, which acts as the value separator for the `print` statement:

perl -le '$ip = 2130706433; $, = "."; print map { (($ip>>8*($_))&0xFF) } reverse 0..3'

See my Perl special variable cheat sheet for the list of all variables.

## Perl one-liners explained e-book

I've now written the "Perl One-Liners Explained" e-book based on this article series. I went through all the one-liners, improved explanations, fixed mistakes and typos, added a bunch of new one-liners, added an introduction to Perl one-liners and a new chapter on Perl's special variables. Please take a look:

## Have Fun!

Have fun with these one-liners for now. The next part is going to be about string and array creation.

**Can you think of other calculations that I did not include here?**

**update:** 2009.11.07 added printing pi and e (one-liners 35 and 36).

**update:** 2009.11.15 added date calculations (one-liners 37, 38, 39, 40).

**update:** 2009.12.14 added factorial, gcd, lcm, random numbers (one-liners 41, 42, 43, 44).

**update:** 2009.12.26 added permutations, power sets (one-liners 45, 46).

**update:** 2009.12.27 added IP address calculations (one-liners 47, 48).