Feedburner used to have a really nice RSS subscriber growth graph. I loved it. But then one day they were acquired by Google and they changed their nice chart to an interactive flash thing that was slow and looked just awful.

Here is how awesome the graph used to look:


This graph was taken from my "one year of blogging" post.

And how it looks now:

Crappy Feedburner Stats
This graph was taken from Feedburner stats dashboard today.
Choose "Show stats for" -> "all time" to generate this graph.

This critter takes 35MB of RAM, responds in 4 seconds and, worst of all, looks very, very ugly. I don't know why anyone would replace a nice 6.5KB image with a 35MB monster.

I don't want to see this ugliness anymore, therefore I'll create a Perl program that generates the awesome graph they used to have. I'll describe my thought process in creating this program in this post. Here it goes.

First I need to get the data somehow. I remember they had some kind of API for getting the data. A quick Google search for "feedburner api" returns this link: Feedburner Awareness API. Ya, that's it. This is the documentation of their API.

Accessing the following URL gets me subscriber count data from July 1, 2007 to November 17, 2009:

http://feedburner.google.com/api/awareness/1.0/GetFeedData?uri=catonmat&dates=2007-07-01,2009-11-17

Excellent, now I can write the Perl program. It will need to parse the XML data, draw the chart and save the image to a file.
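
For reference, here is roughly what the returned XML looks like. This is a sketch I reconstructed from the fields the program parses below (stat, err, feed, entry with date/circulation/hits/reach); the numbers are made up:

<rsp stat="ok">
  <feed uri="catonmat">
    <entry date="2009-11-16" circulation="9500" hits="12000" reach="10000"/>
    <entry date="2009-11-17" circulation="9600" hits="12500" reach="10100"/>
  </feed>
</rsp>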

Hmm, how should I invoke my program? Ok, here is how:

$ generate_feedburner_graph.pl <feed name> [<start date> [<end date>]]

# if <end date> is not specified, it's set to today.
# if <start date> is not specified, it's set to the first
# day when the feed had at least one subscriber.

This program will use LibGD to generate the image. It will save it to a file called feed_name-start_date-end_date.png.

Now I need to find the colors used in the awesome feedburner graph. For this purpose I'll use ColorZilla Firefox plugin. The green one is #95CF9C, the background is #F2F8FC, the light grid is #CCCECE, the one that separates the green area from background is #687279, and the x-y axis are #808080.

Alright, now I have everything I need to create the program.

... Several hours later ...

Done!

One thing I forgot to mention is that you will need the DejaVuSans TrueType font to run this program (it uses it to draw text). Download it and put the DejaVuSans.ttf file in the same directory as the program.

#!/usr/bin/perl
#
# Feedburner graph generator
# Version 1.0
#

use warnings;
use strict;

use WWW::Mechanize;
use List::Util 'max';
use XML::Simple;
use POSIX;
use GD;

# This is the API URL that returns XML data with feed statistics by day.
my $feedburner_url = "http://feedburner.google.com/api/awareness/1.0/GetFeedData?uri=%s&dates=%s,%s";

# This function prints the usage and terminates the program.
sub usage {
    printf "Usage: %s <feed name> [<start date> [<end date>]]\n", $0;
    print "Parameters:\n";
    print "<feed name>  - your feed name, for example 'catonmat'\n";
    print "<start date> - start date (YYYY-MM-DD)\n";
    print "<end date>   - end date (YYYY-MM-DD), today if not specified\n";
    exit(1);
}

# This function checks if DejaVuSans font is present, if not
# it prints the instructions on how to download and terminates the program.
sub check_dejavu_sans {
    unless (-e 'DejaVuSans.ttf') {
        print "Please download DejaVu fonts and put DejaVuSans.ttf file in\n";
        print "the same directory as this program.\n";
        print "http://dejavu-fonts.org/wiki/index.php?title=Download\n";
        exit(1);
    }
}

# Given year, month, day from `struct tm` (man 3 mktime),
# it constructs a YYYY-MM-DD string.
sub format_date {
    my ($y, $m, $d) = @_;
    return sprintf("%04d-%02d-%02d", $y+1900, $m+1, $d);
}

# Given the `struct tm` (man 3 mktime) as a 9-list (perldoc -f localtime),
# it constructs a YYYY-MM-DD string.
sub yyyymmdd_from_9list {
    my ($y, $m, $d) = @_[5,4,3];
    return format_date $y, $m, $d;
}

# This function returns a YYYY-MM-DD string for today.
sub today {
    return yyyymmdd_from_9list localtime
}

# This function constructs the 9-list (perldoc -f localtime) for a 
# date that was $months_ago months ago.
sub months_ago {
    my $months_ago = shift;
    my @date = @_;
    $date[4] -= $months_ago;
    return localtime mktime @date;
}

# Given feed data from feedburner's api (array of hashrefs), it finds
# the first date that had non-zero circulation.
# If no such date exists, it returns undef.
sub find_first_nonzero {
    my @feed_data = @_;
    return if $feed_data[0]->{circulation} != 0;
    my $prev_item;
    for my $item (@feed_data) {
        return $prev_item if $item->{circulation};
        $prev_item = $item;
    }
    return
}

# Given feed's name, this function finds the first date the
# feed had some subscribers, i.e., feed's start date.
sub find_start_date {
    my $feed = shift;
    print "Finding feed's start date...\n";
    my @ago = months_ago 6, localtime;
    my $end_date = today();
    while (1) {
        my $start_date = format_date @ago[5,4,3];

        print "Trying $start_date as start date...\n";
        my @feed_data = get_feed_data($feed, $start_date, $end_date);
        my $non_zero = find_first_nonzero(@feed_data);
        if ($non_zero) {
            print "Found $non_zero->{date} as start date!\n";
            return $non_zero->{date};
        }

        $end_date = yyyymmdd_from_9list @ago;
        @ago = months_ago 6, @ago;
    }
}

# This function returns an array of hashrefs of feeds data.
# Each hash contains 'reach', 'hits', 'date', and 'circulation' keys.
sub get_feed_data {
    my $raw_feed_data = get_raw_feed_data(@_);
    my $feed_data = XML::Simple->new->XMLin($raw_feed_data);
    if ($feed_data->{stat} ne "ok") {
        die $feed_data->{err}{msg}
    }
    return @{$feed_data->{'feed'}{'entry'}};
}

# This function formats the $feedburner_url and uses WWW::Mechanize
# to get the feed data via feedburner's API.
sub get_raw_feed_data {
    my ($feed, $start_date, $end_date) = @_;
    my $url = sprintf($feedburner_url, $feed, $start_date, $end_date);
    return WWW::Mechanize->new(agent => 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5')->get($url)->content;
}

# This function drops feed items when they can't fit in graph's width.
sub drop_data {
    my ($width, @data) = @_;
    my $len = $#data;
    my $delta = @data - $width;
    my @drop = map { int($len / $delta * $_) } 1..$delta;
    splice @data, $_, 1 for reverse @drop;
    return @data;
}
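
# A worked example of what drop_data does: drop_data(3, qw/a b c d e/)
# computes $delta = 2 and @drop = (2, 4), then splices out the items at
# indices 4 and 2 (in reverse order so the earlier index stays valid),
# leaving (a, b, d).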

# This function duplicates feed items when there are not enough items
# to fill the whole graph.
sub dupe_data {
    my ($width, @data) = @_;
    my $len = $#data;
    my $delta = $width - @data;
    my @dupe = map { int($len / $delta * $_) } 1..$delta;
    splice @data, $_, 0, $data[$_] for reverse @dupe;
    return @data;
}

# This function draws the outline of the graph box where the green
# lines are drawn.
sub draw_outline {
    my ($gd, $grid, $xy, $bg) = @_;
    $gd->rectangle(40, 4, 482, 100, $grid);
    $gd->filledRectangle(41, 5, 481, 99, $bg);
    $gd->line(40, 4, 40, 100, $xy);
    $gd->line(38, 100, 482, 100, $xy);
}

# This function draws the grid lines.
sub draw_grid {
    my ($gd, $xy, $grid) = @_;

    # horizontal
    $gd->line(41, 26, 482, 26, $grid);
    $gd->line(38, 26, 40, 26, $xy);
    $gd->line(41, 63, 482, 63, $grid);
    $gd->line(38, 63, 40, 63, $xy);

    # vertical
    for (my $x = 77; $x <= 442; $x += 73) {
        $gd->line($x, 4, $x, 99, $grid);
        $gd->line($x, 100, $x, 102, $xy);
    }
}

# This function saves the $gd image to a file named
# "feed_name-start_date-end_date.png"
sub save_image {
    my ($gd, $feed, $start_date, $end_date, @data) = @_;

    my $filename = "$feed-$start_date-$end_date.png";
    $filename =~ s|/|_|g;

    open my $fh, '>', $filename or die $!;
    print $fh $gd->png;
    close $fh;

    print "Done. Image written to $filename\n";
}

# This function draws the date thingies on the x axis.
sub draw_date {
    my ($gd, $item, $text_color, $x) = @_;
    my @mons = qw/Nul Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec/;

    my ($y, $m, $d) = $item->{date} =~ /(\d{4})-(\d{2})-(\d{2})/;
    $m = $mons[$m];
    my $text1 = sprintf("%s-", $m);
    my $text2 = sprintf("%s-%d", $m, $y);

    my @bounds = GD::Image->stringTTF($text_color, './DejaVuSans.ttf', 7, 0, 0, 0, $text1);
    my $offset = $bounds[4];

    $gd->stringTTF($text_color, './DejaVuSans.ttf', 7, 0, $x-$offset+2, 114, $text2);
}

# This function draws the feed usage numbers on the y axis.
sub draw_count {
    my ($gd, $count, $text_color, $y) = @_;

    my $text = int $count;

    my @bounds = GD::Image->stringTTF($text_color, './DejaVuSans.ttf', 7, 0, 0, 0, $text);
    my $width = $bounds[4] - $bounds[6];

    $gd->stringTTF($text_color, './DejaVuSans.ttf', 7, 0, 34-$width, $y+4, $text);
}

# This function creates the GD image and draws everything.
sub draw_feedburner_image {
    my ($feed, $start_date, $end_date, @data) = @_;

    print "Creating the awesome feedburner image.\n";

    my $gd = GD::Image->new(490, 120, 1);
    my $white  = $gd->colorAllocate(0xff, 0xff, 0xff);
    my $green  = $gd->colorAllocate(0x95, 0xcf, 0x9c);
    my $bg     = $gd->colorAllocate(0xf2, 0xf8, 0xfc);
    my $grid   = $gd->colorAllocate(0xcc, 0xce, 0xce);
    my $xy     = $gd->colorAllocate(0x80, 0x80, 0x80);
    my $alphagrid = $gd->colorAllocateAlpha(0xcc, 0xce, 0xce, 0x30);
    my $border = $gd->colorAllocate(0x68, 0x72, 0x79);
    my $text   = $gd->colorAllocate(0, 0 , 0);

    $gd->alphaBlending(1);
    $gd->filledRectangle(0, 0, 489, 119, $white);
    $gd->setAntiAliased($border);

    draw_outline($gd, $grid, $xy, $bg);

    my $t_height = 90;
    my $t_width = 441;
    my $max_circulation = max map { $_->{circulation} } @data;

    my $compress_factor = @data/$t_width;

    if ($compress_factor > 1) {
        @data = drop_data($t_width, @data);
    }
    elsif ($compress_factor < 1) {
        @data = dupe_data($t_width, @data);
    }

    my ($prev_x1, $prev_y1);
    my $x = 41;
    my %x_markers = (77 => 1, 150 => 1, 223 => 1, 296 => 1, 369 => 1, 442 => 1);
    for my $item (@data) {
        my $height = int($item->{circulation}/$max_circulation * $t_height);
        my ($x1, $y1, $x2, $y2) = ($x, 99, $x, 99-$height);
        $gd->line($x1, $y1, $x2, $y2, $green);
        if ($prev_x1) {
            $gd->line($prev_x1, $prev_y1, $x2, $y2, gdAntiAliased);
        }
        ($prev_x1, $prev_y1) = ($x1, $y2);

        if (exists $x_markers{$x}) {
            draw_date($gd, $item, $text, $x)
        }

        $x++;
    }

    draw_grid($gd, $xy, $alphagrid);
    draw_count($gd, 0,  $text, 100);
    draw_count($gd, $max_circulation * 74/90, $text, 26);
    draw_count($gd, $max_circulation * 37/90, $text, 63);
    save_image($gd, $feed, $start_date, $end_date);
}

# The main function, does everything.
sub main {
    check_dejavu_sans;

    my $feed = shift || usage();
    my $start_date = shift || find_start_date($feed);
    my $end_date = shift || today();

    unless ($start_date =~ /^\d{4}-\d{2}-\d{2}$/) {
        die "Invalid start date. Format: YYYY-MM-DD."
    }
    unless ($end_date =~ /^\d{4}-\d{2}-\d{2}$/) {
        die "Invalid end date. Format: YYYY-MM-DD."
    }

    print "Getting feed data for $feed from $start_date to $end_date\n";
    my @feed_data = get_feed_data($feed, $start_date, $end_date);

    draw_feedburner_image($feed, $start_date, $end_date, @feed_data);
}

main @ARGV;


Let's test run it.

$ generate_feedburner_graph.pl catonmat
Finding feed's start date...
Trying 2009-05-17 as start date...
Trying 2008-11-17 as start date...
Trying 2008-05-17 as start date...
Trying 2007-11-17 as start date...
Trying 2007-05-17 as start date...
Found 2007-07-15 as start date!
Getting feed data for catonmat from 2007-07-15 to 2009-11-17
Creating the awesome feedburner image.
Done. Image written to catonmat-2007-07-15-2009-11-17.png

Here is the result:

catonmat.net feed statistics from 2007-07-15 to 2009-11-17.

This looks divine. I love it!!!

As I was writing this I had the coolest idea to make a set of tools for probloggers. I added this idea to my idea-list and will try to make it happen. This tool could be the first in the problogger tool suite!

Download "generate_feedburner_graph.pl":

Download: generate_feedburner_graph.perl
Download url: http://www.catonmat.net/download/generate_feedburner_graph.perl

And finally, help me reach 10,000 subscribers! If you haven't yet subscribed, subscribe to my blog!

This article is part of the article series "MIT Introduction to Algorithms."

As you all may know, I watched and posted my lecture notes of the whole MIT Introduction to Algorithms course. In this post I want to summarize all the topics that were covered in the lectures and point out some of the most interesting things in them.

Actually, before I wrote this article, I had started writing an article called "The coolest things that I learned from MIT's Introduction to Algorithms" but I quickly realized that what I was doing was listing the topics in each article and not really pointing out the coolest things. Therefore I decided to write a summary article first (I had promised to do so), and only then write an article on the most exciting topics.

Talking about the summary, I watched a total of 23 lectures and it resulted in 14 blog posts. It took nearly a year to publish them here. The first blog post in this series was written in August 2008, and the last in July 2009. Here is a list of all the posts:

I'll now go through each of the lectures. They require quite a bit of math knowledge to understand. If you are uncertain about your math skills, I'd suggest reading Knuth's Concrete Mathematics book. It contains absolutely all the necessary math to understand this course.

Lecture 1: Analysis of Algorithms

If you're a student, or even if you're not, you must never miss the first lecture of any course, ever! The first lecture tells you what to expect from the course, how it will be taught, what it will cover, who the professor is, what the prerequisites are, and a bunch of other important and interesting things.

In this lecture you also get to know professor Charles E. Leiserson (co-author of CLRS) and he explains the following topics:

  • Why study algorithms and their performance?
  • What is the analysis of algorithms?
  • What can be more important than the performance of algorithms?
  • The sorting problem.
  • Insertion sort algorithm.
  • Running time analysis of insertion sort.
  • Asymptotic analysis.
  • Worst-case, average-case, best-case running time analysis.
  • Analysis of insertion sort's worst-case running time.
  • Asymptotic notation - theta notation - Θ.
  • Merge sort algorithm.
  • The recursive nature of merge sort algorithm.
  • Running time recurrence for merge sort.
  • Recursion trees.
  • Running time analysis of merge sort by looking at the recursion tree.
  • General recurrence for divide and conquer algorithms.

I personally found the list of things that can be more important than the performance of the program interesting. These things are modularity, correctness, maintainability, security, functionality, robustness, user-friendliness, programmer's time, simplicity, extensibility, reliability, scalability.

Follow this link to the full review of lecture one.

Lecture 2: Analysis of Algorithms (continued)

The second lecture is presented by Eric Demaine. He's the youngest professor in the history of MIT.

Here are the topics that he explains in the second lecture:

  • Asymptotic notation.
  • Big-o notation - O.
  • Set definition of O-notation.
  • Capital-omega notation - Ω.
  • Theta notation - Θ.
  • Small-o notation - o.
  • Small-omega notation - ω.
  • Solving recurrences by substitution method.
  • Solving recurrences by recursion-tree method.
  • Solving recurrences by the master method.
  • Intuitive sketch proof of the master method.

An interesting thing in this lecture is the analogy of (O, Ω, Θ, o, ω) to (≤, ≥, =, <, >).

For example, if we say f(n) = O(n²) then by using the analogy we can think of it as f(n) ≤ c·n², that is, function f(n) is always smaller than or equal to c·n², or in other words, it's bounded above by function c·n², which is exactly what f(n) = O(n²) means.
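
The set definition makes the analogy precise: O(g(n)) is the set of functions { f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }, so writing f(n) = O(g(n)) really means f(n) ∈ O(g(n)).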

Follow this link to the full review of lecture two.

Lecture 3: Divide and Conquer

The third lecture is all about the divide-and-conquer algorithm design method and its applications. The divide and conquer method solves a problem by 1) breaking it into a number of subproblems (divide step), 2) solving each subproblem recursively (conquer step), 3) combining the solutions (combine step).

Here are the topics explained in the third lecture:

  • The nature of divide and conquer algorithms.
  • An example of divide and conquer - merge sort.
  • Solving the running time recurrence of merge sort by the master method.
  • Binary search.
  • Powering a number.
  • Fibonacci numbers.
  • Algorithms for computing Fibonacci numbers.
  • Fibonacci by naive recursive algorithm.
  • Fibonacci by bottom-up algorithm.
  • Fibonacci by naive recursive squaring.
  • Fibonacci by matrix recursive squaring.
  • Matrix multiplication.
  • Strassen's algorithm.
  • VLSI (very large scale integration) layout problem.

I was most impressed by the four algorithms for computing Fibonacci numbers. I actually wrote about one of them in my publication "On the Linear Time Algorithm For Finding Fibonacci Numbers," which explains how this algorithm is actually quadratic in practice (but linear in theory).
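
To give a taste of the matrix method, here is a minimal Perl sketch of my own (not code from the lecture) that computes the n-th Fibonacci number by recursively squaring the matrix [[1,1],[1,0]]:

use strict;
use warnings;

# Multiply two 2x2 matrices given as array refs.
sub mmul {
    my ($a, $b) = @_;
    return [
        [ $a->[0][0]*$b->[0][0] + $a->[0][1]*$b->[1][0],
          $a->[0][0]*$b->[0][1] + $a->[0][1]*$b->[1][1] ],
        [ $a->[1][0]*$b->[0][0] + $a->[1][1]*$b->[1][0],
          $a->[1][0]*$b->[0][1] + $a->[1][1]*$b->[1][1] ],
    ];
}

# Raise a 2x2 matrix to the n-th power by recursive squaring,
# doing only O(lg n) matrix multiplications.
sub mpow {
    my ($m, $n) = @_;
    return [[1,0],[0,1]] if $n == 0;    # identity matrix
    my $half = mpow($m, int($n/2));
    my $sq   = mmul($half, $half);
    return $n % 2 ? mmul($sq, $m) : $sq;
}

# [[1,1],[1,0]]^n is [[F(n+1),F(n)],[F(n),F(n-1)]], so F(n) is at [0][1].
print mpow([[1,1],[1,0]], 10)->[0][1], "\n";    # prints 55

The O(lg n) matrix multiplications are where the theoretical speedup over the bottom-up algorithm comes from.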

Follow this link to the full review of lecture three.

Lecture 4: Sorting

Lecture four is devoted entirely to the quicksort algorithm. It's the industry-standard sorting algorithm used in most computer systems. You just have to know it.

Topics explained in lecture four:

  • Divide and conquer approach to sorting.
  • Quicksort algorithm.
  • The partition routine in the quicksort algorithm.
  • Running time analysis of quicksort.
  • Worst-case analysis of quicksort.
  • Intuitive, best-case analysis of quicksort.
  • Randomized quicksort.
  • Indicator random variables.
  • Running time analysis of randomized quicksort in expectation.

I loved how the idea of randomizing the partition subroutine in the quicksort algorithm leads to a running time that is independent of element order. The deterministic quicksort could always be fed an input that triggers the worst-case running time O(n²), but the worst-case running time of randomized quicksort is determined only by the output of the random number generator.
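
Here is a minimal Perl sketch of the randomized idea (my own grep-based version, not the in-place partition from the lecture). The pivot is picked with rand, so no fixed input can reliably trigger the worst case:

use strict;
use warnings;

# Randomized quicksort: remove a uniformly random pivot, partition
# the rest around it, and recurse on both sides.
sub rquicksort {
    my @a = @_;
    return @a if @a <= 1;
    my $pivot   = splice @a, int(rand @a), 1;
    my @less    = grep { $_ <  $pivot } @a;
    my @greater = grep { $_ >= $pivot } @a;
    return (rquicksort(@less), $pivot, rquicksort(@greater));
}

print join(" ", rquicksort(3, 1, 4, 1, 5, 9, 2, 6)), "\n";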

I once wrote another post about quicksort called "Three Beautiful Quicksorts" where I summarized what Jon Bentley had to say about the experimental analysis of quicksort's running time and how the current quicksort algorithm looks in industry libraries (such as the C standard library, which provides the qsort function).

Follow this link to the full review of lecture four.

Lecture 5: Sorting (continued)

Lecture five continues on sorting and looks at the Ω(n·lg(n)) lower bound that limits how fast comparison-based sorting can run. It then breaks out of this limitation and shows several linear-time sorting algorithms.

Topics explained in lecture five:

  • How fast can we sort?
  • Comparison sort model.
  • Decision trees.
  • Comparison sort algorithms based on decision trees.
  • Lower bound for decision-tree sorting.
  • Sorting in linear time.
  • Counting sort.
  • The concept of stable sorting.
  • Radix sort.
  • Correctness of radix sort.
  • Running time analysis of radix sort.

The most interesting topic here was how any comparison sort algorithm can be translated into a decision tree (and vice versa), which limits how fast we can sort.
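
The argument itself fits in two sentences: a decision tree that sorts n elements must have at least n! leaves (one for each possible permutation), and a binary tree of height h has at most 2^h leaves. Therefore 2^h ≥ n!, which gives h ≥ lg(n!) = Ω(n·lg(n)) comparisons.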

Follow this link to the full review of lecture five.

Lecture 6: Order Statistics

Lecture six deals with the order statistics problem - how to find the k-th smallest element among n elements. The naive algorithm is to sort the list of n elements and return the k-th element in the sorted list, but this approach makes it run in O(n·lg(n)) time. This lecture shows how a randomized, linear-time algorithm (in expectation) for this problem can be constructed.

Topics explained in lecture six:

  • Order statistics.
  • Naive order statistics algorithm via sorting.
  • Randomized divide and conquer order statistics algorithm.
  • Expected running time analysis of randomized order statistics algorithm.
  • Worst-case linear-time order-statistics.

An interesting point in this lecture is that the worst-case, deterministic, linear-time algorithm for order statistics isn't being used in practice because it performs poorly compared to the randomized linear-time algorithm.

Follow this link to the full review of lecture six.

Lecture 7: Hashing

This is the first lecture of two on hashing. It introduces hashing and various collision resolution strategies.

All the topics explained in lecture seven:

  • Symbol table problem.
  • Direct-access table.
  • The concept of hashing.
  • Collisions in hashing.
  • Resolving collisions by chaining.
  • Analysis of worst-case and average-case search time of chaining.
  • Hash functions.
  • Division hash method.
  • Multiplication hash method.
  • Resolving collisions by open addressing.
  • Probing strategies.
  • Linear probing.
  • Double hashing.
  • Analysis of open addressing.

Follow this link to the full review of lecture seven.

Lecture 8: Hashing (continued)

The second lecture on hashing. It addresses the weakness of hashing - for any choice of hash function, there exists a bad set of keys that all hash to the same value. An adversary can take advantage of this and attack our program. Universal hashing solves this problem. The other topic explained in this lecture is perfect hashing - given n keys, how to construct a hash table of size O(n) where search takes O(1) time guaranteed.

All the topics in lecture eight:

  • Weakness of hashing.
  • Universal hashing.
  • Construction of universal hash functions.
  • Perfect hashing.
  • Markov inequality.

Follow this link to the full review of lecture eight.

Lecture 9: Search Trees

This lecture primarily discusses randomly built binary search trees. (It assumes you know what binary trees are.) Similar to universal hashing (see the previous lecture), randomly built trees solve the problem of building a tree from untrusted data. It turns out that the expected height of a randomly built binary search tree is still O(lg(n)); more precisely, it's expected to be at most 3·lg(n).

Topics explained in lecture nine:

  • What are good and bad binary search trees?
  • Binary search tree sort.
  • Analysis of binary search tree sort.
  • BST sort relation to quicksort.
  • Randomized BST sort.
  • Randomly built binary search trees.
  • Convex functions, Jensen's inequality.
  • Expected height of a randomly built BST.

The most surprising idea in this lecture is that the binary search tree sort (introduced in this lecture) does the same element comparisons as quicksort, that is, they produce the same decision tree.

Follow this link to the full review of lecture nine.

Lecture 10: Search Trees (continued)

This is the second lecture on search trees. It discusses self-balancing trees, more specifically, red-black trees. They balance themselves in such a manner that no matter what the input is, their height is always O(lg(n)).

Topics explained in lecture ten:

  • Balanced search trees.
  • Red-black trees.
  • Height of red-black trees.
  • Rotations in binary trees.
  • How to insert an element in a red-black tree?
  • Insert-element algorithm for red-black trees.

Follow this link to the full review of lecture ten.

Lecture 11: Augmenting Data Structures

The eleventh lecture explains how to build new data structures out of existing ones. For example, how to build a data structure that you can update and query quickly for the i-th smallest element. This is the problem of dynamic order statistics and an easy solution is to augment a binary tree, such as a red-black tree. Another example is interval trees - how to quickly find an interval (such as 5-9) that overlaps some other intervals (such as 4-11 and 8-20).

Topics explained in lecture eleven:

  • Dynamic order statistics.
  • Data structure augmentation.
  • Interval trees.
  • Augmenting red-black trees to have them perform as interval trees.
  • Correctness of augmented red-black tree data structure.

Augmenting data structures requires a lot of creativity. First you need to find an underlying data structure (the easiest step) and then think of a way to augment it with data to make it do what you want (the hardest step).

Follow this link to the full review of lecture eleven.

Lecture 12: Skip Lists

This lecture explains the skip list, which is a simple, efficient, easily implementable, randomized search structure. It performs as well as a balanced binary search tree but is much easier to implement. Eric Demaine says he implemented it in 40 minutes before the class (10 minutes to implement and 30 to debug).

In this lecture Eric builds this data structure from scratch. He starts with a linked list and builds up to a pair of linked lists, then three linked lists, until he finds the optimal number of linked lists needed to achieve logarithmic search time.

Next he continues to explain how to algorithmically build such a structure and proves that the search in this data structure is indeed quick.

Follow this link to the full review of lecture twelve.

Lecture 13: Amortized Analysis

Amortized analysis is a technique to show that even if several operations in a sequence of operations are costly, the overall performance is still good. A good example is adding elements to a dynamic list (such as a list in Python). Every time the list is full, Python has to allocate more space and this is costly. Amortized analysis can be used to show that the average cost per insert is still O(1), even though Python occasionally has to allocate more space for the list.
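
A quick back-of-the-envelope version of that argument: with table doubling, inserting n elements costs n for the inserts themselves, plus roughly 1 + 2 + 4 + ... + n < 2n for copying elements whenever the table doubles, so the total cost is less than 3n and the amortized cost per insert is O(1).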

Topics explained in lecture thirteen:

  • How large should a hash table be?
  • Dynamic tables.
  • Amortized analysis.
  • Accounting method of amortized analysis.
  • Dynamic table analysis with accounting method.
  • Potential method of amortized analysis.
  • Dynamic table analysis with potential method.

This is one of the most mathematically complicated lectures.

Follow this link to the full review of lecture thirteen.

Lecture 14: Self-Organizing Lists and Competitive Analysis

This lecture concentrates on self-organizing lists. A self-organizing list is a list that reorders itself to improve the average access time. The goal is to find a reordering that minimizes the total access time. For example, each time an element is accessed, it's moved to the front of the list, hoping that it might soon be accessed again. This is called the move-to-front heuristic.

Competitive analysis can be used to reason theoretically about how well a strategy such as move-to-front performs.

Topics explained in lecture fourteen:

  • Self-organizing lists.
  • Online and offline algorithms.
  • Worst-case analysis of self-organizing lists.
  • Competitive analysis.
  • Move-to-front heuristic for self-organizing lists.
  • Amortized cost of move-to-front heuristic.

Follow this link to the full review of lecture fourteen.

Lecture 15: Dynamic Programming

This lecture is about the dynamic programming algorithm design technique. It's a tabular method (involving constructing a table or some part of a table) that often leads to a much faster running time.

The lecture focuses on the longest common subsequence problem, first showing the brute force algorithm, then a recursive one, and finally a dynamic programming algorithm. The brute force algorithm is exponential in the length of strings, the recursive one is also exponential, but the dynamic programming solution is O(n·m) where n is the length of one string, and m is the length of the other.

Topics explained in lecture fifteen:

  • The idea of dynamic programming.
  • Longest common subsequence problem (LCS).
  • Brute force algorithm for LCS.
  • Analysis of brute-force algorithm.
  • Simplified algorithm for LCS.
  • Dynamic programming hallmark #1: optimal substructure.
  • Dynamic programming hallmark #2: overlapping subproblems.
  • Recursive algorithm for LCS.
  • Memoization.
  • Dynamic programming algorithm for LCS.

The most interesting thing in this lecture is the two hallmarks that indicate that the problem may be solved with dynamic programming. They are "optimal substructure" and "overlapping subproblems".

The first one means that an optimal solution to a problem contains the optimal solutions to subproblems. For example, if z = LCS(x,y), that is, z is the solution to the problem LCS(x,y), then any prefix of z is a solution to the LCS of a prefix of x and a prefix of y (a prefix of z is a solution to subproblems).

The second one means exactly what it says, that the problem contains many overlapping subproblems.
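
As an illustration of both hallmarks, here is a short Perl sketch of my own (not code from the lecture) that fills the LCS length table bottom-up:

use strict;
use warnings;

# Compute the length of the longest common subsequence of two strings
# by filling an (m+1) x (n+1) table of subproblem solutions.
sub lcs_length {
    my @x = split //, shift;
    my @y = split //, shift;
    my @c;
    $c[$_][0] = 0 for 0..@x;    # empty prefix of y
    $c[0][$_] = 0 for 0..@y;    # empty prefix of x
    for my $i (1..@x) {
        for my $j (1..@y) {
            $c[$i][$j] = $x[$i-1] eq $y[$j-1]
                ? $c[$i-1][$j-1] + 1
                : ($c[$i-1][$j] > $c[$i][$j-1] ? $c[$i-1][$j] : $c[$i][$j-1]);
        }
    }
    return $c[scalar @x][scalar @y];
}

print lcs_length("ABCBDAB", "BDCABA"), "\n";    # prints 4 (e.g. "BCBA")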

Follow this link to the full review of lecture fifteen.

Lecture 16: Greedy Algorithms

This lecture introduces greedy algorithms via the minimum spanning tree problem. The minimum spanning tree problem asks to find a tree that connects all the vertices of a graph with minimum edge weight. It seems at first that a dynamic programming solution could solve it effectively, but a more careful analysis shows that the problem exhibits another powerful property -- the locally best choice at each step leads to a globally optimal solution. That's why the algorithm is called greedy: it always chooses the best solution for the subproblem at hand without ever thinking about the whole problem in general.

Topics explained in lecture sixteen:

  • Review of graphs.
  • Graph representations.
  • Adjacency matrices.
  • Adjacency lists.
  • Sparse and dense graphs.
  • Handshaking lemma.
  • Minimum spanning trees (MSTs).
  • Hallmark for greedy algorithms: greedy choice property.
  • Prim's algorithm for finding MST.
  • Running time analysis of Prim's algorithm.
  • Idea of Kruskal's algorithm for MSTs.

Follow this link to the full review of lecture sixteen.

Lecture 17: Shortest Path Algorithms

This lecture starts a trilogy on shortest path algorithms. In this first episode single-source shortest path algorithms are discussed. The problem can be described as follows -- how to get from one point on a graph to another by traveling the shortest distance (think of a road network). Dijkstra's algorithm solves this problem efficiently.

Topics explained in lecture seventeen:

  • Paths in graphs.
  • Shortest paths.
  • Path weights.
  • Negative path weights.
  • Single-source shortest path.
  • Dijkstra's algorithm.
  • Example of Dijkstra's algorithm.
  • Correctness of Dijkstra's algorithm.
  • Unweighted graphs.
  • Breadth First Search.

The most interesting thing here is that Dijkstra's algorithm for unweighted graphs reduces to the breadth-first search algorithm, which uses a FIFO queue instead of a priority queue because there is no longer a need to keep track of the shortest distance (all the paths have the same weight).

Follow this link to the full review of lecture seventeen.

Lecture 18: Shortest Path Algorithms (continued)

The second lecture in the trilogy on shortest paths deals with single-source shortest paths in graphs that may have negative edge weights. The Bellman-Ford algorithm solves the shortest path problem for graphs with negative edges.

Topics explained in lecture eighteen:

  • Bellman-Ford algorithm for shortest paths with negative edges.
  • Negative weight cycles.
  • Correctness of Bellman-Ford algorithm.
  • Linear programming.
  • Linear feasibility problem.
  • Difference constraints.
  • Constraint graph.
  • Using Bellman-Ford algorithm to solve a system of difference constraints.
  • Solving VLSI (very large scale integration) layout problem via Bellman-Ford.

Follow this link to the full review of lecture eighteen.

Lecture 19: Shortest Path Algorithms (continued)

The last lecture in the trilogy deals with the all-pairs shortest paths problem -- determining the shortest distances between every pair of vertices in a given graph.

Topics explained in lecture nineteen:

  • Review of single source shortest path problem.
  • All-pairs shortest paths.
  • Dynamic programming.
  • Idea from matrix multiplication.
  • Floyd-Warshall algorithm for all-pairs shortest paths.
  • Transitive closure of directed graph.
  • Johnson's algorithm for all-pairs shortest paths.

An interesting point here is how the Floyd-Warshall algorithm that runs in O((number of vertices)³) can be transformed into something similar to Strassen's algorithm to compute the transitive closure of a graph (which then runs in O((number of vertices)^lg 7)).

Follow this link to the full review of lecture nineteen.

Lecture 20: Parallel Algorithms

This is an introductory lecture to multithreaded algorithm analysis. It explains the terminology used in multithreaded algorithms, such as work, critical path length, speedup, parallelism, scheduling, and others.

Topics explained in lecture twenty:

  • Dynamic multithreading.
  • Subroutines: spawn and sync.
  • Logical parallelism and actual parallelism.
  • Multithreaded computation.
  • An example of a multithreaded execution on a recursive Fibonacci algorithm.
  • Measuring performance of a multithreaded computation.
  • The concept of speedup.
  • Maximum possible speedup.
  • Linear speedup.
  • Super-linear speedup.
  • Parallelism.
  • Scheduling.
  • Greedy scheduler.
  • Graham and Brent theorem on the competitiveness of greedy schedulers.
  • *Socrates and Cilkchess chess programs.

Follow this link to the full review of lecture twenty.

Lecture 21: Parallel Algorithms (continued)

The second lecture on parallel algorithms shows how to design and analyze a multithreaded matrix multiplication algorithm and multithreaded sorting.

Topics explained in lecture twenty-one:

  • Multithreaded algorithms.
  • Multithreaded matrix multiplication.
  • Performance analysis of the multithreaded matrix multiplication algorithm.
  • Multithreaded sorting.
  • Multithreaded merge-sort algorithm.
  • Parallel-merge subroutine.
  • Analysis of merge-sort with parallel-merge subroutine.

Follow this link to the full review of lecture twenty-one.

Lecture 22: Cache Oblivious Algorithms

Cache-oblivious algorithms take into account something that has been ignored in all the algorithms so far, namely, the cache. An algorithm that uses the cache effectively will perform much better than one that doesn't. This lecture is all about how to lay out data structures in memory in such a way that memory transfers are minimized.

Topics explained in lecture twenty-two:

  • Modern memory hierarchy.
  • The concept of spatial locality and temporal locality.
  • Two-level memory model.
  • Cache-oblivious algorithms.
  • Blocking of memory.
  • Memory transfers in a simple scanning algorithm.
  • Memory transfers in string-reverse algorithm.
  • Memory analysis of binary search.
  • Cache oblivious order statistics.
  • Cache oblivious matrix multiplication algorithm.

Follow this link to the full review of lecture twenty-two.

Lecture 23: Cache Oblivious Algorithms (continued)

This is the final lecture of the course. It continues on cache oblivious algorithms and shows how to store binary search trees in memory so that memory transfers are minimized when searching in them. It wraps up with cache oblivious sorting.

Topics explained in lecture twenty-three:

  • Static search trees.
  • Memory efficient layout of static binary search trees in memory.
  • Analysis of static search trees.
  • Cache aware sorting.
  • Cache-oblivious sorting.
  • Funnel sort.
  • K-funnel data structure.

This is the most complicated lecture in the whole course. It takes a day to understand the k-funnel data structure.

Follow this link to the full review of lecture twenty-three.

That's it. This was the final lecture. I hope you find this summary useful.

Upcoming on my blog -- review of MIT's Linear Algebra course.

At first I thought I'd post Linear Algebra to a separate blog section that does not appear in the RSS feed, but then I gave it another thought and came to the conclusion that every competent programmer must know linear algebra, and therefore it's worth putting the posts in the feed.

You can surely be a good programmer without knowing linear algebra, but if you want to work on great problems and make a difference, then you absolutely have to know it.

Stay tuned!

This article is part of the article series "Vim Plugins You Should Know About."

This is the fifth post in the article series "Vim Plugins You Should Know About". This time I am going to introduce you to a nifty plugin called "a.vim".

A.vim allows you to quickly switch between related source code files. For example, if you're programming in C, you can alternate between source.c and the corresponding header source.h by just typing :A.

It saves you only a few seconds every time you use it, but don't forget that these seconds can add up to hours over several weeks.

This plugin was written by Mike Sharpe.

For the introduction to this article series, see part one - surround.vim.

Other bindings in a.vim

Besides the " :A " command that alternates between source files in the same buffer, a.vim also defines several other commands:

  • :AS -- alternate in a horizontal split,
  • :AV -- alternate in a vertical split, and
  • :AT -- alternate in a new tab.

The author of the plugin also defines the command " :IH ", which opens the file under the cursor, but it's really unnecessary because "gf" already does that.

Extending a.vim

By default a.vim defines alternation for the following languages:

  • C -- .c <-> .h,
  • C++ -- .c / .cpp / .cxx / .cc <-> .h / .hpp,
  • lex and yacc -- .l / .lex / .lpp <-> .y / .ypp / .yacc,
  • ASP.NET -- .aspx <-> .aspx.cs / .aspx.vb

The alternation can be extended to other extensions by defining the following variable in your .vimrc:

let g:alternateExtensions_foo = "bar,baz"

This will set up alternation between .foo, .bar and .baz files.
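
For example, to make .h files alternate with .c and .cpp files (which a.vim already does out of the box, so this is just an illustration of the pattern), you would write:

let g:alternateExtensions_h = "c,cpp"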

How to install a.vim?

To get the latest version:

  • 1. Download a.vim.
  • 2. Put a.vim in ~/.vim/plugin (on Unix/Linux) or ~\vimfiles\plugin (on Windows).
  • 3. Download alternate.txt from the same page.
  • 4. Put alternate.txt in ~/.vim/doc (on Unix/Linux) or ~\vimfiles\doc (on Windows).
  • 5. Run :helptags ~/.vim/doc (on Unix/Linux) or :helptags ~/vimfiles/doc (on Windows) to rebuild the tags file (so that you can read :help alternate.)
  • 6. Restart Vim.

I have mapped the :A command to " ,a ". You can also map it to the same combination by putting "map ,a :A<CR>" in your .vimrc file.

Have Fun!

Have fun with this plugin and until next time!

This article is part of the article series "Perl One-Liners Explained."

This is the third part of a seven-part article on famous Perl one-liners. In this part I will create various one-liners for calculations. See part one for the introduction to the series.

Famous Perl one-liners is my attempt to create a "perl1line.txt" that is similar to "awk1line.txt" and "sed1line.txt", which have been so popular among Awk and Sed programmers.

The article on famous Perl one-liners will consist of at least seven parts:

After I'm done explaining all these one-liners, I'll publish an ebook. Subscribe to my blog to know when that happens!

The one-liners will make heavy use of Perl special variables. A few years ago I compiled all the Perl special variables in a single file and called it the Perl special variable cheat sheet. Even though it's mostly copied out of perldoc perlvar, it's still handy to have in front of you, so print it.

Awesome news: I have written an e-book based on this article series. Check it out:

And here are today's one-liners:

Calculations

21. Check if a number is a prime.

perl -lne '(1x$_) !~ /^1?$|^(11+?)\1+$/ && print "$_ is prime"'

This one-liner uses an ingenious regular expression to detect if a given number is a prime or not. Don't take it too seriously, though. I included it for its artistic value.

First, the number is converted into its unary representation by " (1x$_) ". For example, 5 gets converted into " 1x5 ", which is " 11111 ".

Next, the unary number gets tested against the ingenious regular expression. If it doesn't match, the number is a prime, otherwise it's a composite.

The regular expression works this way. It consists of two parts " ^1?$ " and " ^(11+?)\1+$ ".

The first part matches " 1 " and the empty string. Clearly, neither the empty string nor 1 is a prime number, therefore when this part matches, the number is not prime.

The second part determines if two or more 1s repeatedly make up the whole number. If they do, the regex matches, which means that the number is composite. Otherwise it's a prime.

Let's look at the second regex part on numbers 5 and 6.

The number 5 in unary representation is " 11111 ". The " (11+?) " matches first two ones " 11 ". The back-reference " \1 " becomes " 11 " and the whole regex now becomes " ^11(11)+$ ". It can't match five ones, therefore it fails. But since it used " +? ", it backtracks and matches the first three ones " 111 ". The back-reference becomes " 111 " and the whole regex becomes " ^111(111)+$ ". It doesn't match again. This repeats for " 1111 " and " 11111 ", which also don't match, therefore the whole regex doesn't match and the number is a prime.

The number 4 in unary representation is " 1111 ". The " (11+?) " matches the first two ones " 11 ". The back-reference " \1 " becomes " 11 " and the regex becomes " ^11(11)+$ ". It matches the original string, therefore the number is not a prime.
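
You can try the one-liner out by piping numbers to it. For example, 5 prints a result and 6 prints nothing:

$ echo 5 | perl -lne '(1x$_) !~ /^1?$|^(11+?)\1+$/ && print "$_ is prime"'
5 is prime

$ echo 6 | perl -lne '(1x$_) !~ /^1?$|^(11+?)\1+$/ && print "$_ is prime"'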

The " -lne " command line options have been explained in parts one and two.

22. Print the sum of all the fields on a line.

perl -MList::Util=sum -alne 'print sum @F'

This one-liner turns on field auto-splitting with the " -a " command line option and imports the "sum" function from the "List::Util" module with the " -MList::Util=sum " option. List::Util is in the Perl core so you don't need to worry about installing it.

As a result of auto-splitting the split fields end up in the " @F " array and the " sum " function just sums them up.

The -Mmodule=arg option imports arg from module and is the same as writing:

use module qw(arg)

This one-liner is equivalent to the following:

use List::Util qw(sum);
while (<>) {
    @F = split(' ');
    print sum(@F), "\n";
}

23. Print the sum of all the fields on all lines.

perl -MList::Util=sum -alne 'push @S,@F; END { print sum @S }'

This one-liner keeps pushing the split fields in " @F " to the " @S " array. Once the input is over and perl is about to quit, the END { } block gets called and it outputs the sum of all items in @S. This sum is the sum of all fields over all lines.

This solution isn't too good - it creates a massive array @S. A better solution is to keep just the running sum:

perl -MList::Util=sum -alne '$s += sum @F; END { print $s }'

24. Shuffle all fields on a line.

perl -MList::Util=shuffle -alne 'print "@{[shuffle @F]}"'

This is almost the same as one-liner #22 above. Instead of summing all fields, it shuffles and prints them.

The " @{[shuffle @F]} " construct creates an array reference to the contents of " shuffle @F " and " @ { ... } " dereferences it. This is a tricky way to execute code inside quotes. It was needed to get the values of shuffled @F separated by a space when printing them out.

Another way to do the same is to join the elements of @F by a space, but it's longer:

perl -MList::Util=shuffle -alne 'print join " ", shuffle @F'

25. Find the minimum element on a line.

perl -MList::Util=min -alne 'print min @F'

This one-liner uses the "min" function from "List::Util". It's similar to all the previous ones. After the line has been automatically split by " -a ", the "min" function finds minimum element and prints it.

26. Find the minimum element over all the lines.

perl -MList::Util=min -alne '@M = (@M, @F); END { print min @M }'

This one-liner is a combination of the previous one and #23.

The "@M = (@M, @F)" construct is the same as "push @M, @F". It appends the contents of @F to the array @M.

This one-liner stores all the data in memory. If you run it on a 10 terabyte file, it will die. Therefore it's better to keep the running minimum element in memory and print it out at the end:

perl -MList::Util=min -alne '$min = min @F; $rmin = $min unless defined $rmin && $min > $rmin; END { print $rmin }'

It finds the minimum of each line and stores it in $min, then it checks if $min is smaller than the running minimum. Once the input ends, it prints the running minimum, which is the smallest value over all input.

27. Find the maximum element on a line.

perl -MList::Util=max -alne 'print max @F'

This is the same as #25, except "min" has been replaced with "max".

28. Find the maximum element over all the lines.

perl -MList::Util=max -alne '@M = (@M, @F); END { print max @M }'

This is the same as #26.

Or:

perl -MList::Util=max -alne '$max = max @F; $rmax = $max unless defined $rmax && $max < $rmax; END { print $rmax }'

29. Replace each field with its absolute value.

perl -alne 'print "@{[map { abs } @F]}"'

This one-liner auto-splits the line with the " -a " command line option. The split fields, as I already explained, end up in the @F variable. Next it calls the absolute value function "abs" on each field with the help of the "map" function. Finally it prints all the fields joined by a space, with the help of array interpolation in double quotes.

The " @{ ... } " construct was explained in one-liner #24.

30. Find the total number of fields (words) on each line.

perl -alne 'print scalar @F'

This one-liner forces the @F array to be evaluated in scalar context, which in Perl means "the number of elements in @F." Therefore this one-liner prints out the number of elements on each line.

31. Print the total number of fields (words) on each line followed by the line.

perl -alne 'print scalar @F, " $_"'

This is exactly the same as #30, except " $_ " is added at the end that prints out the whole line. (Remember that the " -n " option causes each line to be put in the $_ variable.)

32. Find the total number of fields (words) on all lines.

perl -alne '$t += @F; END { print $t}'

Here we just keep adding the number of fields on each line to the variable " $t ", and at the end we print it out. The result is the number of words on all lines.

33. Print the total number of fields that match a pattern.

perl -alne 'map { /regex/ && $t++ } @F; END { print $t }'

This one-liner uses the " map " function that applies some operation on each of the elements in @F array. In this case the operation checks if each element matches /regex/ and if it does, it increments variable $t. At the end it prints this variable $t that contains the number of fields that matched /regex/ pattern.

A better way to do it is by looping:

perl -alne '$t += /regex/ for @F; END { print $t }'

Each element in @F is tested against the regex. If it matches, /regex/ returns 1 (true), which gets added to the variable $t. This way the number of matches gets counted in $t.

The best way is to use grep in scalar context:

perl -alne '$t += grep /regex/, @F; END { print $t }'

Grep in scalar context returns the number of matches. This number gets accumulated in $t.

34. Print the total number of lines that match a pattern.

perl -lne '/regex/ && $t++; END { print $t }'

The /regex/ evaluates to true if the current line of input matches this regular expression. Writing /regex/ && $t++ is the same as if ($_ =~ /regex/) { $t++ }, which increments variable $t if the line matched the pattern. Finally in the END block the variable $t contains the total number of pattern matches and it gets printed out.

35. Print the number PI to n decimal places.

perl -Mbignum=bpi -le 'print bpi(21)'

The bignum package exports the bpi function that calculates the constant PI to the wanted accuracy. This one-liner prints PI to 20 decimal places.

The bignum library also exports the constant PI alone to 39 decimal places:

perl -Mbignum=PI -le 'print PI'

36. Print the number E to n decimal places.

perl -Mbignum=bexp -le 'print bexp(1,21)'

The bignum library also exports the bexp function that takes two arguments - the power to raise e to and the accuracy. This one-liner prints the constant e to 20 decimal places.

You can print the value of e^2 to 30 decimal places this way:

perl -Mbignum=bexp -le 'print bexp(2,31)'

Just the same as with PI, bignum exports the constant e alone to 39 decimal places:

perl -Mbignum=e -le 'print e'

37. Print UNIX time (seconds since Jan 1, 1970, 00:00:00 UTC).

perl -le 'print time'

The built-in function "time" returns seconds since the epoch.

38. Print GMT (Greenwich Mean Time) and local computer time.

perl -le 'print scalar gmtime'

The "gmtime" function is a Perl built-in function. If used in scalar context, it prints the time localized to Greenwich time zone.

perl -le 'print scalar localtime'

The "localtime" built-in function acts the same way as "gmtime", except it prints the computer's local time.

In list context both "gmtime" and "localtime" return a 9-element list (struct tm) with the following elements:

($second,             [0]
$minute,              [1]
$hour,                [2]
$month_day,           [3]
$month,               [4]
$year,                [5]
$week_day,            [6]
$year_day,            [7]
$is_daylight_saving   [8]
)

You may slice this list, or print individual elements if you need just some part of this information.

For example, to print H:M:S, slice elements 2, 1 and 0 from localtime:

perl -le 'print join ":", (localtime)[2,1,0]'

39. Print yesterday's date.

perl -MPOSIX -le '@now = localtime; $now[3] -= 1; print scalar localtime mktime @now'

Remember that localtime returns a 9-list (see above) of various date elements. The 4th element in the list is the day of the month. If we subtract one from it we get yesterday. The "mktime" function constructs a Unix epoch time from this modified 9-list. And the "scalar localtime" construct prints out the new date, which is yesterday.

The POSIX package was needed because it exports the mktime function, which also normalizes out-of-range values (for example, day 0 becomes the last day of the previous month).

40. Print date 14 months, 9 days and 7 seconds ago.

perl -MPOSIX -le '@now = localtime; $now[0] -= 7; $now[4] -= 14; $now[3] -= 9; print scalar localtime mktime @now'

This one-liner modifies the 0th, 4th and 3rd elements of the @now list. The 0th is seconds, the 4th is months and the 3rd is month days (see the table of the 9-element time list above). The week day and year day elements (6th and 7th) can't be used for this because mktime ignores them when constructing the time.

Next, mktime creates Unix time from this new structure, and localtime, evaluated in scalar context, prints out the date that was 14 months, 9 days and 7 seconds ago.

41. Calculate factorial.

perl -MMath::BigInt -le 'print Math::BigInt->new(5)->bfac()'

This one-liner uses the bfac() function from the Math::BigInt module, which is in the Perl core (no need to install it).

The Math::BigInt->new(5) construction creates a new Math::BigInt object with value 5, then the method bfac() is called on the newly created object to calculate the factorial of 5. Change 5 to any number to find the factorial of the value you are interested in.

Another way to calculate factorial is by just multiplying numbers from 1 to n together:

perl -le '$f = 1; $f *= $_ for 1..5; print $f'

Here we initially set $f to 1. Then we loop from 1 to 5 and multiply $f by each of the values. The result is 1*2*3*4*5, which is the factorial of 5.

42. Calculate greatest common divisor.

perl -MMath::BigInt=bgcd -le 'print bgcd(@list_of_numbers)'

Math::BigInt has several other useful math functions. One of them is bgcd, which calculates the greatest common divisor of a list of numbers.

For example, to find gcd of (20, 60, 30), you'd execute the one-liner this way:

perl -MMath::BigInt=bgcd -le 'print bgcd(20,60,30)'

Of course, you can also use Euclid's algorithm. Given two numbers $n and $m, this one-liner finds the gcd of $n and $m. The result is stored in $m.

perl -le '$n = 20; $m = 35; ($m,$n) = ($n,$m%$n) while $n; print $m'

43. Calculate least common multiple.

Another function from Math::BigInt is blcm - the least common multiple. This one-liner finds the lcm of (35, 20, 8):

perl -MMath::BigInt=blcm -le 'print blcm(35,20,8)'

If you know some number theory, then you'll recall that there is a connection between gcd and lcm. Given two numbers $n and $m, their lcm is $n*$m/gcd($n,$m), therefore this one-liner follows:

perl -le '$a = $n = 20; $b = $m = 35; ($m,$n) = ($n,$m%$n) while $n; print $a*$b/$m'

44. Generate 10 random numbers between 5 and 15 (excluding 15).

perl -le '$n=10; $min=5; $max=15; $, = " "; print map { int(rand($max-$min))+$min } 1..$n'

You can modify this one-liner by changing variables $n, $min, $max. The variable $n stands for how many random numbers to generate, and [$min,$max) is the generation range.

The variable $, gets set to a space because it's the output field separator for print and it's undef by default. If we didn't set it to a space, the numbers would get printed concatenated together.

45. Find and print all permutations of a list.

perl -MAlgorithm::Permute -le '$l = [1,2,3,4,5]; $p = Algorithm::Permute->new($l); print @r while @r = $p->next'

This one-liner uses the object-oriented interface of Algorithm::Permute module to find the permutations (all ways to rearrange items).

The constructor of Algorithm::Permute takes an array reference to an array of elements to permute. In this particular one-liner the elements are numbers 1, 2, 3, 4, 5.

The next object method returns the next permutation. Calling it repeatedly iterates over all permutations. Each permutation is put in the @r array and is then printed.

Please note that the output list gets large really quickly. There are n! permutations for a list of n elements.

Another way to print out all permutations is to use the exported permute subroutine:

perl -MAlgorithm::Permute -le '@l = (1,2,3,4,5); Algorithm::Permute::permute { print "@l" } @l'

46. Generate the power set.

perl -MList::PowerSet=powerset -le '@l = (1,2,3,4,5); for (@{powerset(@l)}) { print "@$_" }'

Here I use the List::PowerSet module from CPAN.

It exports the powerset function, which takes a list of elements and returns a reference to a list containing references to subset lists.

In the for() loop, I call the powerset function and pass it the list of elements in @l. Next I dereference the return value of powerset, which is a reference to a list of subsets. Then I dereference each individual subset @$_ and print it.

For a set of n elements, there are exactly 2^n subsets in the powerset.
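
For example, for @l = (1,2) the powerset has 2^2 = 4 subsets: (), (1), (2) and (1 2). The empty set gets printed as an empty line (the order in which the subsets come out may differ).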

47. Convert an IP address to unsigned integer.

perl -le '$i=3; $u += ($_<<8*$i--) for "127.0.0.1" =~ /(\d+)/g; print $u'

This one-liner converts the IP address 127.0.0.1 into an unsigned integer (which happens to be 2130706433).

It does it by first doing a global match of (\d+) on the IP address. Doing a for loop over a global match iterates over all the matches. These matches are the four parts of the IP address.

Next the matches are added together in the $u variable, with the first being bit-shifted 8*3 = 24 places, the second being shifted 8*2 = 16 places, the third 8 places and the last just getting added to $u.
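
In other words, for 127.0.0.1 the computation is 127*2^24 + 0*2^16 + 0*2^8 + 1 = 2130706433.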

But this one-liner doesn't do any error checking on the format of an IP address. You may use a more sophisticated regular expression to add checking, such as /^(\d+)\.(\d+)\.(\d+)\.(\d+)$/g.

I had a discussion about this with a friend and we came up with several more one-liners:

perl -le '$ip="127.0.0.1"; $ip =~ s/(\d+)\.?/sprintf("%02x", $1)/ge; print hex($ip)'

This one-liner utilizes the fact that 127.0.0.1 can be easily converted to hex as 7f000001 and then converted to decimal from hex by the hex Perl function.

Another way is to use unpack:

perl -le 'print unpack("N", 127.0.0.1)'

This one-liner is probably as short as it can get. It uses a vstring literal (version string) to express the IP address. A vstring forms a string literal composed of characters with the specified ordinal values. Next, the newly formed string literal is unpacked as a number in network byte order (big-endian order) and it gets printed.

If you have a string with an IP (and not a vstring), then you first have to convert it to byte form with the inet_aton function:

perl -MSocket -le 'print unpack("N", inet_aton("127.0.0.1"))'

Here inet_aton converts the string "127.0.0.1" to its byte form (which is the same as the pure vstring 127.0.0.1), and then it gets unpacked the same way as in the previous one-liner.

If you want a reference of pack and unpack templates (such as "N" for Network order), get my Perl pack/unpack cheat sheet!

48. Convert an unsigned integer to an IP address.

perl -MSocket -le 'print inet_ntoa(pack("N", 2130706433))'

Here the integer 2130706433 first gets packed into a four-byte string in big-endian order, and then that string gets passed to the inet_ntoa function, which converts it back to an IP address.

Another way to do it is by bit shifting and printing one byte at a time:

perl -le '$ip = 2130706433; print join ".", map { (($ip>>8*($_))&0xFF) } reverse 0..3'

And by the way, join "." can be replaced by the special variable $, that acts as the value separator for the print statement:

perl -le '$ip = 2130706433; $, = "."; print map { (($ip>>8*($_))&0xFF) } reverse 0..3'

See my Perl special variable cheat sheet for the list of all variables.
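And to mirror the vstring trick from one-liner 47, sprintf's %vd format turns a packed four-byte string back into dotted-decimal form, so yet another variant is:

perl -le 'print sprintf "%vd", pack "N", 2130706433'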

Perl one-liners explained e-book

I've now written the "Perl One-Liners Explained" e-book based on this article series. I went through all the one-liners, improved explanations, fixed mistakes and typos, added a bunch of new one-liners, and wrote an introduction to Perl one-liners and a new chapter on Perl's special variables. Please take a look.

Have Fun!

Have fun with these one-liners for now. The next part is going to be about string and array creation.

Can you think of other calculations that I did not include here?

update: 2009.11.07 added printing PI and E (one liners 35 and 36).
update: 2009.11.15 added date calculations (one liners 37, 38, 39, 40).
update: 2009.12.14 added factorial, gcd, lcm, random numbers (one liners 41, 42, 43, 44).
update: 2009.12.26 added permutations, power sets (one liners 45, 46).
update: 2009.12.27 added ip address calculations (one liners 47, 48).

Busy Beaver puts another one on the Turing Machine's tape.
(Image from the book "The New Turing Omnibus.")

The busy beaver problem is a fun theoretical computer science problem to know. Intuitively, the problem is to find the smallest program that outputs as much data as possible and eventually halts. More formally it goes like this -- given an n-state Turing Machine with a two-symbol alphabet {0, 1}, what is the maximum number of 1s that the machine may print on an initially blank (0-filled) tape before halting?

It turns out that this problem can't be solved in general. For a small number of states it can be reasoned about case by case, but no algorithm can solve it for all n. Theorists call such problems non-computable.

To date, people have managed to solve it for n=1,2,3,4 (for Turing Machines with 1, 2, 3 and 4 states) by reasoning about and running all the possible Turing Machines, but for n ≥ 5 the task has so far proven intractable. While n=5 will most likely be solved, theorists doubt that the answer for n=6 will ever be computed.

Let's denote the number of 1s that the busy beaver puts on a tape after halting as S(n) and call it the busy beaver function (this is the solution to the busy beaver problem). The busy beaver function is also interesting -- it grows faster than any computable function. It grows like this:

  • S(1) = 1
  • S(2) = 4
  • S(3) = 6
  • S(4) = 13
  • S(5) ≥ 4098 (the exact result has not yet been found)
  • S(6) ≥ 4.6 · 10^1439 (the exact result shall never be known)

If we were to use one atom for each 1 that the busy beaver puts on the tape, at n=6 we would need far more atoms than there are in the observable universe! That's how fast the busy beaver function grows.

I decided to play with the busy beaver myself to verify the known results for n ≤ 5. I implemented a Turing Machine in Python, which turned out to be too slow, so I reimplemented it in C++ (source code of both implementations below).

I also wrote a visualization tool in Perl that shows how the Turing Machine's tape changes from the start to the finish (source code also below).

I used the following best known Turing Machines. Their tapes are initially filled with 0's, their starting state is "a" and the halting state is "h". The notation "a0 -> b1l" means: if we are in state "a" and the current symbol on the tape is "0", then write a "1" in that cell, switch to state "b" and move left ("l"). This process repeats until the machine ends up in the halting state.

Turing Machine for 1-state Busy Beaver:

a0 -> h1r

Turing Machine for 2-state Busy Beaver:

a0 -> b1r    a1 -> b1l
b0 -> a1l    b1 -> h1r

Here is what the trace of tape changes looks like for the 2-state busy beaver:

Two State Busy Beaver
Tape changes for the 2-state busy beaver.

Turing Machine for 3-state Busy Beaver:

a0 -> b1r    a1 -> h1r
b0 -> c0r    b1 -> b1r
c0 -> c1l    c1 -> a1l

Three State Busy Beaver
Tape changes for the 3-state busy beaver.

Turing Machine for 4-state Busy Beaver:

a0 -> b1r    a1 -> b1l
b0 -> a1l    b1 -> c0l
c0 -> h1r    c1 -> d1l
d0 -> d1r    d1 -> a0r

Four State Busy Beaver
Tape changes for the 4-state busy beaver.

Turing Machine for 5-state Busy Beaver:

a0 -> b1l    a1 -> a1l
b0 -> c1r    b1 -> b1r
c0 -> a1l    c1 -> d1r
d0 -> a1l    d1 -> e1r
e0 -> h1r    e1 -> c0r

This image is huge (6146 x 14293 pixels, but only 110KB in size). Click for full size.


Five State Busy Beaver

Tape changes for the 5-state busy beaver.

Turing Machine for 6-state Busy Beaver:

a0 -> b1r    a1 -> e0l
b0 -> c1l    b1 -> a0r
c0 -> d1l    c1 -> c0r
d0 -> e1l    d1 -> f0l
e0 -> a1l    e1 -> c1l
f0 -> e1l    f1 -> h1r

Here is my Python program that simulates all these Turing Machines. But as I said, it turned out to be too slow: it took 5 minutes to reproduce the currently best known result for the 5-state Busy Beaver.

Download: busy_beaver.py

#!/usr/bin/python
#
# Turing Machine simulator for Busy Beaver problem.
# Version 1.0
#

import sys

class Error(Exception):
    pass

class TuringMachine(object):
    # program maps "state + symbol under head" to "new state, new symbol, move",
    # e.g. {'a0': 'b1r'}; start and halt are state names, init is the blank symbol.
    def __init__(self, program, start, halt, init):
        self.program = program
        self.start = start
        self.halt = halt
        self.init = init
        self.tape = [self.init]
        self.pos = 0
        self.state = self.start
        self.set_tape_callback(None)
        self.tape_changed = 1
        self.movez = 0

    def run(self):
        tape_callback = self.get_tape_callback()
        while self.state != self.halt:
            if tape_callback:
                tape_callback(self.tape, self.tape_changed)

            lhs = self.get_lhs()
            rhs = self.get_rhs(lhs)

            new_state, new_symbol, move = rhs

            old_symbol = lhs[1]
            self.update_tape(old_symbol, new_symbol)
            self.update_state(new_state)
            self.move_head(move)

        if tape_callback:
            tape_callback(self.tape, self.tape_changed)

    def set_tape_callback(self, fn):
        self.tape_callback = fn

    def get_tape_callback(self):
        return self.tape_callback

    @property
    def moves(self):
        return self.movez

    def update_tape(self, old_symbol, new_symbol):
        if old_symbol != new_symbol:
            self.tape[self.pos] = new_symbol
            self.tape_changed += 1
        else:
            self.tape_changed = 0

    def update_state(self, state):
        self.state = state

    def get_lhs(self):
        under_cursor = self.tape[self.pos]
        lhs = self.state + under_cursor
        return lhs

    def get_rhs(self, lhs):
        if lhs not in self.program:
            raise Error('Could not find transition for state "%s".' % lhs)
        return self.program[lhs]

    def move_head(self, move):
        # Move the head left or right, growing the tape with a blank
        # symbol when the head walks off either end.
        if move == 'l':
            self.pos -= 1
        elif move == 'r':
            self.pos += 1
        else:
            raise Error('Unknown move "%s". It can only be left or right.' % move)

        if self.pos < 0:
            self.tape.insert(0, self.init)
            self.pos = 0
        if self.pos >= len(self.tape):
            self.tape.append(self.init)

        self.movez += 1

beaver_programs = [
    { },

    {'a0': 'h1r' },

    {'a0': 'b1r', 'a1': 'b1l',
     'b0': 'a1l', 'b1': 'h1r'},

    {'a0': 'b1r', 'a1': 'h1r',
     'b0': 'c0r', 'b1': 'b1r',
     'c0': 'c1l', 'c1': 'a1l'},

    {'a0': 'b1r', 'a1': 'b1l',
     'b0': 'a1l', 'b1': 'c0l',
     'c0': 'h1r', 'c1': 'd1l',
     'd0': 'd1r', 'd1': 'a0r'},

    {'a0': 'b1l', 'a1': 'a1l',
     'b0': 'c1r', 'b1': 'b1r',
     'c0': 'a1l', 'c1': 'd1r',
     'd0': 'a1l', 'd1': 'e1r',
     'e0': 'h1r', 'e1': 'c0r'},

    {'a0': 'b1r', 'a1': 'e0l',
     'b0': 'c1l', 'b1': 'a0r',
     'c0': 'd1l', 'c1': 'c0r',
     'd0': 'e1l', 'd1': 'f0l',
     'e0': 'a1l', 'e1': 'c1l',
     'f0': 'e1l', 'f1': 'h1r'}
]

def busy_beaver(n):
    def tape_callback(tape, tape_changed):
        if tape_changed:
            print ''.join(tape)

    program = beaver_programs[n]

    print "Running Busy Beaver with %d states." % n
    tm = TuringMachine(program, 'a', 'h', '0')
    tm.set_tape_callback(tape_callback)
    tm.run()
    print "Busy beaver finished in %d steps." % tm.moves

def usage():
    print "Usage: %s [1|2|3|4|5|6]" % sys.argv[0]
    print "Runs Busy Beaver problem for 1 or 2 or 3 or 4 or 5 or 6 states."
    sys.exit(1)

if __name__ == "__main__":
    if len(sys.argv[1:]) < 1:
        usage()

    n = int(sys.argv[1])

    if n < 1 or n > 6:
        print "n must be between 1 and 6 inclusive"
        print
        usage()

    busy_beaver(n)

I rewrote the Turing Machine simulator in C++ and the speedup was huge: the same 5-state Busy Beaver now took only 14 seconds.

Download: busy_beaver.cpp

/*
** Turing Machine simulator for Busy Beaver problem.
** Version 1.0
*/

#include <cstdlib>
#include <iostream>
#include <utility>
#include <vector>
#include <string>
#include <map>

using namespace std;

typedef vector<char> Tape;
typedef map<string, string> Program;

class TuringMachine {
private:
    Tape tape;
    Program program;
    char start, halt, init, state;
    bool tape_changed;
    int moves;
    int pos;
public:
    TuringMachine(Program program, char start, char halt, char init):
        tape(1, init), program(program), start(start), halt(halt),
        init(init), state(start), tape_changed(true), moves(0), pos(0)
    { }

    void run() {
        while (state != halt) {
            print_tape();
            string lhs = get_lhs();
            string rhs = get_rhs(lhs);

            char new_state = rhs[0];
            char new_symbol = rhs[1];
            char move = rhs[2];

            char old_symbol = lhs[1];
            update_tape(old_symbol, new_symbol);
            update_state(new_state);
            move_head(move);
        }
        print_tape();
    }

    int get_moves() {
        return moves;
    }

private:
    inline void print_tape() {
        if (tape_changed) {
            for (size_t i = 0; i < tape.size(); i++)
                cout << tape[i];
            cout << endl;
        }
    }

    inline string get_lhs() {
        char sp[3] = {0};
        sp[0] = state;
        sp[1] = tape[pos];
        return string(sp);
    }

    inline string get_rhs(string &lhs) {
        // Fail loudly on a missing transition instead of silently
        // inserting an empty rule into the program map.
        if (program.find(lhs) == program.end())
            throw string("no transition for ") + lhs;
        return program[lhs];
    }

    inline void update_tape(char old_symbol, char new_symbol) {
        if (old_symbol != new_symbol) {
            tape[pos] = new_symbol;
            tape_changed = true;
        }
        else {
            tape_changed = false;
        }
    }

    inline void update_state(char new_state) {
        state = new_state;
    }

    inline void move_head(char move) {
        if (move == 'l')
            pos -= 1;
        else if (move == 'r')
            pos += 1;
        else
            throw string("unknown move");

        if (pos < 0) {
            tape.insert(tape.begin(), init);
            pos = 0;
        }
        if (pos >= (int)tape.size()) {
            tape.push_back(init);
        }
        moves++;
    }

};

vector<Program> busy_beavers;

void init_bb6()
{
    Program bb6;
    bb6.insert(make_pair("a0", "b1r"));
    bb6.insert(make_pair("b0", "c1l"));
    bb6.insert(make_pair("c0", "d1l"));
    bb6.insert(make_pair("d0", "e1l"));
    bb6.insert(make_pair("e0", "a1l"));
    bb6.insert(make_pair("f0", "e1l"));

    bb6.insert(make_pair("a1", "e0l"));
    bb6.insert(make_pair("b1", "a0r"));
    bb6.insert(make_pair("c1", "c0r"));
    bb6.insert(make_pair("d1", "f0l"));
    bb6.insert(make_pair("e1", "c1l"));
    bb6.insert(make_pair("f1", "h1r"));

    busy_beavers.push_back(bb6);
}

void init_bb5() 
{
    Program bb5;
    bb5.insert(make_pair("a0", "b1l"));
    bb5.insert(make_pair("b0", "c1r"));
    bb5.insert(make_pair("c0", "a1l"));
    bb5.insert(make_pair("d0", "a1l"));
    bb5.insert(make_pair("e0", "h1r"));

    bb5.insert(make_pair("a1", "a1l"));
    bb5.insert(make_pair("b1", "b1r"));
    bb5.insert(make_pair("c1", "d1r"));
    bb5.insert(make_pair("d1", "e1r"));
    bb5.insert(make_pair("e1", "c0r"));

    busy_beavers.push_back(bb5);
}

void init_bb4()
{
    Program bb4;
    bb4.insert(make_pair("a0", "b1r"));
    bb4.insert(make_pair("b0", "a1l"));
    bb4.insert(make_pair("c0", "h1r"));
    bb4.insert(make_pair("d0", "d1r"));

    bb4.insert(make_pair("a1", "b1l"));
    bb4.insert(make_pair("b1", "c0l"));
    bb4.insert(make_pair("c1", "d1l"));
    bb4.insert(make_pair("d1", "a0r"));

    busy_beavers.push_back(bb4);

}

void init_bb3()
{
    Program bb3;
    bb3.insert(make_pair("a0", "b1r"));
    bb3.insert(make_pair("b0", "c0r"));
    bb3.insert(make_pair("c0", "c1l"));

    bb3.insert(make_pair("a1", "h1r"));
    bb3.insert(make_pair("b1", "b1r"));
    bb3.insert(make_pair("c1", "a1l"));

    busy_beavers.push_back(bb3);
}

void init_bb2()
{
    Program bb2;
    bb2.insert(make_pair("a0", "b1r"));
    bb2.insert(make_pair("b0", "a1l"));

    bb2.insert(make_pair("a1", "b1l"));
    bb2.insert(make_pair("b1", "h1r"));

    busy_beavers.push_back(bb2);
}

void init_bb1()
{
    Program bb1;
    bb1.insert(make_pair("a0", "h1r"));

    busy_beavers.push_back(bb1);
}

void init_busy_beavers()
{
    busy_beavers.push_back(Program());
    init_bb1();
    init_bb2();
    init_bb3();
    init_bb4();
    init_bb5();
    init_bb6();
}

void busy_beaver(int n)
{
    // Status messages go to stderr so that redirected stdout
    // contains only the tape lines for the visualizer.
    cerr << "Running Busy Beaver with " << n << " states." << endl;
    TuringMachine tm(busy_beavers[n], 'a', 'h', '0');
    tm.run();
    cerr << "Busy Beaver finished in " << tm.get_moves() << " steps." << endl;
}

void usage(const char *prog)
{
    cout << "Usage: " << prog << " [1|2|3|4|5|6]\n";
    cout << "Runs Busy Beaver problem for 1 or 2 or 3 or 4 or 5 or 6 states." << endl;
    exit(1);
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        usage(argv[0]);
    }

    int n = atoi(argv[1]);
    if (n < 1 || n > 6) {
        cout << "n must be between 1 and 6 inclusive!\n";
        cout << "\n";
        usage(argv[0]);
    }

    init_busy_beavers();
    busy_beaver(n);
}

And I also wrote a Perl program that uses the GD library to draw the tape changes on Turing Machines.

Download: draw_turing_machine.pl

#!/usr/bin/perl
#
# Given output from busy_beaver.py or busy_beaver.cpp,
# draws the turing machine tape changes.
#

use warnings;
use strict;
use GD;

$|++;

my $input_file = shift or die "Usage: $0 <file with TM state transitions>";
my $cell_size = shift || 4;
my $im_file = "$input_file.png";

sub line_count { 
    my $count = 0;
    open my $fh, '<', shift or die $!;
    $count += tr/\n/\n/ while sysread($fh, $_, 2**20);
    return $count;
}

sub get_last_line {
    my $file = shift;
    my $last_line = `tail -1 $file`;
    chomp $last_line;
    return $last_line;
}

my $nr_lines = line_count $input_file;
my $last_line = get_last_line $input_file;
my $last_width = length($last_line);

my ($width, $height) = ($cell_size*$last_width, $cell_size*$nr_lines);

my $im = GD::Image->new($width, $height);
my $white = $im->colorAllocate(255,255,255);
my $dark = $im->colorAllocate(40, 40, 40);

my ($x, $y) = (0, $height-$cell_size);

print "Starting to draw the image. Total states: $nr_lines.\n";
print "It will be $width x $height pizels in size.\n";

my $prev_line;
my ($pad_left, $pad_right) = (0, 0);

sub pad {
    my ($line, $left, $right) = @_;
    return '0'x$left . $line . '0'x$right;
}

# Read the file bottom-up with tac: the last (and widest) tape line is drawn
# first at the bottom of the image, with earlier tape states above it.
open my $fh, "-|", "/usr/bin/tac $input_file" or die $!;
while (<$fh>) {
    chomp;
    print "." if $. % 10 == 0;
    print "($.)" if $. % 500 == 0;

    $prev_line = $_ unless defined $prev_line;

    my $new_line;
    if (length $_ != length $prev_line) {
        if ($prev_line =~ /0$/) {
            $pad_right++;
        }
        elsif ($prev_line =~ /^0/) {
            $pad_left++;
        }
        else {
            die "unexpected data at $. in file $input_file";
        }
    }
    $new_line = pad($_, $pad_left, $pad_right);
    $prev_line = $_;

    my @cells = split //, $new_line;
    for my $cell (@cells) {
        $im->filledRectangle($x, $y, $x + $cell_size, $y + $cell_size,
                $cell ? $dark : $white);
        $x += $cell_size;
    }
    $y -= $cell_size;
    $x = 0;

}

print "\n";

{
    open my $fh, ">", $im_file or die $!;
    print $fh $im->png;
    close $fh;
}

print "Done. Image saved to $im_file.\n";

You can play with these programs yourself. Here is how: run "busy_beaver.py <n>" with n=1,2,3,4,5,6 to simulate the n-state busy beaver Turing Machine; its output is the sequence of tape changes. Then use "draw_turing_machine.pl" to visualize them.

For example:

$ busy_beaver.py 4 > bb4
$ draw_turing_machine.pl bb4
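If you also want to verify the S(n) values listed above, count the 1s on the final tape line. For example, here is a quick sketch (assuming bb4 contains only the tape lines):

$ perl -lne '$ones = tr/1//; END { print $ones }' bb4

For the 4-state beaver it prints 13.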

There are variations of this problem, for example, the busiest beaver over alphabets with 3 or more symbols. See "The Busy Beaver Competition" for these. The historical development is also interesting -- see "The Busy Beaver Historical Survey" for more info.

I first learned about the Busy Beaver problem from a book called "The New Turing Omnibus." It contains 66 essays on various computer science subjects such as algorithms, Turing machines, grammars, computability, randomness, and other fun topics. The essays are written in a style so accessible that even a high school student can understand them, and each takes no more than 10 minutes to read. I recommend this book. Get it here: