This article is part of the article series "Vim Plugins You Should Know About."

Vim Plugins, surround.vim

This is the fourth post in the article series "Vim Plugins You Should Know About". This time I am going to introduce you to a plugin called "snipmate.vim".

If you are intrigued by this topic, I suggest that you subscribe to my posts! For the introduction and first post in this article series, follow this link - Vim Plugins You Should Know About, Part I: surround.vim.

Snipmate.vim is probably the best snippets plugin for vim. A snippet is a piece of often-typed text or programming construct that you can insert into your document by using a trigger followed by a <tab>. It was written by Michael Sanders. He says he modeled this plugin after TextMate's snippets.

Here is an example usage of snipmate.vim. If you are a C programmer, then one of the most often used constructs is the loop "for (i=0; i<n; i++) { ... }". Without snippets you'd have to type it out every time. Even though it takes just another second, these seconds add up to minutes throughout the day, and minutes add up to hours over longer periods of time. Why waste your time this way? With snippets you can type just "for<tab>" and snipmate will insert the whole construct into your source code automatically! If "i" or "n" are not the variables you wanted, you can use <tab> and <shift-tab> to jump to the next/previous item in the loop and rename them!

Michael also created an introduction video for his plugin where he demonstrates how to use it. Check it out:

How to install snipmate.vim?

To get the latest version:

  • 1. Download snipmate.zip.
  • 2. Extract snipmate.zip to ~/.vim (on Unix/Linux) or ~\vimfiles (on Windows).
  • 3. Run :helptags ~/.vim/doc (on Unix/Linux) or :helptags ~\vimfiles\doc (on Windows) to rebuild the tags file (so that you can read :help snipmate).
  • 4. Restart Vim.

The plugin comes with predefined snippets for more than a dozen languages (C, C++, HTML, Java, JavaScript, Objective C, Perl, PHP, Python, Ruby, Tcl, Shell, Mako templates, LaTeX, VimScript). Be sure to check out the snippet files in the "snippets" directory under your ~/.vim or ~\vimfiles directory.

If you need to define your own snippets (which you most likely will), create a new file named "language-foo.snippets" in the "snippets" directory. For example, to define your own snippets for the C language, you'd create a file called "c-foo.snippets" and place the snippets in it.

To learn about snipmate snippet syntax, type ":help snipmate" and locate the syntax section in the help file.
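
To give you a feel for the format, here is a sketch of what a user-defined C snippet file might contain (the "for" snippet below is my own illustration, not necessarily identical to the bundled one):

snippet for
	for (${1:i} = 0; $1 < ${2:count}; $1++) {
		${3:/* code */}
	}

The first line names the trigger word ("for"), the indented body is what gets inserted, the ${1:...}, ${2:...}, ${3:...} markers are the tab stops that <tab> and <shift-tab> jump between, and $1 repeats whatever you typed at the first tab stop. Note that snipmate requires hard tabs for the indentation of snippet bodies.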

Have Fun!

Have fun with this time-saving plugin!

This article is part of the article series "Perl One-Liners Explained."

Perl One Liners

This is the second part of a seven-part article on famous Perl one-liners. In this part I will create various one-liners for line numbering. See part one for the introduction to the series.

Famous Perl one-liners is my attempt to create a "perl1line.txt" file that is similar to "awk1line.txt" and "sed1line.txt", which have been so popular among Awk and Sed programmers.

The article on famous Perl one-liners will consist of at least seven parts:

The one-liners will make heavy use of Perl special variables. A few years ago I compiled all the Perl special variables in a single file and called it the Perl special variable cheat sheet. Even though it's mostly copied out of perldoc perlvar, it's still handy to have in front of you. Print it!

Awesome news: I have written an e-book based on this article series. Check it out:

And here are today's one-liners:

Line Numbering

9. Number all lines in a file.

perl -pe '$_ = "$. $_"'

As I explained in the first one-liner, "-p" causes Perl to assume a loop around the program (specified by "-e") that reads each line of input into the " $_ " variable, executes the program and then prints the " $_ " variable.

In this one-liner I simply modify " $_ " and prepend the " $. " variable to it. The special variable " $. " contains the current line number of input.

The result is that each line gets its line number prepended.
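
To see it in action, suppose a hypothetical file "file.txt" contains the three lines "foo", "bar" and "baz". Then:

$ perl -pe '$_ = "$. $_"' file.txt
1 foo
2 bar
3 baz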

10. Number only non-empty lines in a file.

perl -pe '$_ = ++$a." $_" if /./'

Here we employ the "action if condition" statement that executes "action" only if "condition" is true. In this case the condition is the regular expression "/./", which matches any character except newline (that is, it matches a non-empty line); and the action is " $_ = ++$a." $_" ", which prepends the variable " $a ", incremented by one, to the current line. As we didn't use the strict pragma, $a was created automatically (starting from zero).

The result is that at each non-empty line " $a " gets incremented by one and prepended to that line. And at each empty line nothing gets modified and the empty line gets printed as is.
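
For example, on a hypothetical "file.txt" that contains "foo", an empty line, and "bar":

$ perl -pe '$_ = ++$a." $_" if /./' file.txt
1 foo

2 bar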

11. Number and print only non-empty lines in a file (drop empty lines).

perl -ne 'print ++$a." $_" if /./'

This one-liner uses the "-n" program argument that places each line in the " $_ " variable and then executes the program specified by "-e". Unlike "-p", it does not print the line after executing the code in "-e", so we have to call "print" explicitly to get it printed.

The one-liner calls "print" only on lines that have at least one character in them. And exactly like in the previous one-liner, it increments the line number in variable " $a " by one for each non-empty line.

The empty lines simply get ignored and never get printed.

12. Number all lines but print line numbers only for non-empty lines.

perl -pe '$_ = "$. $_" if /./'

This one-liner is similar to one-liner #10. Here I modify the " $_ " variable that holds the entire line only if the line has at least one character. All other lines (empty ones) get printed without line numbers.

13. Number only lines that match a pattern, print others unmodified.

perl -pe '$_ = ++$a." $_" if /regex/'

Here we again use the "action if condition" statement, but this time the condition is a pattern (regular expression) "/regex/". The action is the same as in one-liner #10. I don't want to repeat myself; see #10 for the explanation.

14. Number and print only lines that match a pattern.

perl -ne 'print ++$a." $_" if /regex/'

This one-liner is almost exactly like #11. The only difference is that it numbers and prints only the lines that match "/regex/".

15. Number all lines, but print line numbers only for lines that match a pattern.

perl -pe '$_ = "$. $_" if /regex/'

This one-liner is similar to the previous one-liner and to one-liner #12. Here the line gets its line number prepended if it matches a /regex/, otherwise it just gets printed without a line number.

16. Number all lines in a file using a custom format (emulate cat -n).

perl -ne 'printf "%-5d %s", $., $_'

This one-liner uses the formatted print function "printf" to print the line number together with the line. In this particular example the line numbers are left-aligned in a 5-character-wide field.
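
Running it on the same hypothetical three-line "file.txt" from one-liner #9 produces cat -n style output:

$ perl -ne 'printf "%-5d %s", $., $_' file.txt
1     foo
2     bar
3     baz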

Some other nice format strings are "%5d", which right-aligns line numbers in a 5-character field, and "%05d", which additionally zero-pads them.

My Perl printf cheat sheet, which lists all the possible format specifiers, might come in handy here.

17. Print the total number of lines in a file (emulate wc -l).

perl -lne 'END { print $. }'

This one-liner uses the "END" block that Perl probably took as a feature from the Awk language. The END block gets executed after the program has finished. In this case the program is the hidden loop over the input that was created by the "-n" argument. After it has looped over the input, the special variable " $. " contains the number of lines in the input. The END block prints this variable. The " -l " parameter sets the output record separator for "print" to a newline (so that we don't have to print "$.\n").
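
In other words, the program that Perl ends up running is roughly equivalent to this sketch (ignoring the chomping done by " -l "):

while (<>) {
    # -n: loop over the input, do nothing with each line
}
END { print $. }    # runs once, after the input is exhausted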

Another way to do the same is:

perl -le 'print $n=()=<>'

This is a tricky one, but easy to understand if you know about Perl contexts. In this one-liner the " ()=<> " part causes the <> operator (the diamond operator) to be evaluated in list context, which makes the diamond operator read the whole file in as a list of lines. Next, the assignment to " $n " puts the list assignment in scalar context, and a list assignment evaluated in scalar context returns the number of elements on its right-hand side. Thus the " $n=()=<> " construction equals the number of lines in the input, that is, the number of lines in the file. The print statement prints this number out. The " -l " argument makes sure a newline gets added after printing out this number.
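
If the double assignment looks confusing, this tiny example (my own illustration) shows the same trick on a plain list:

$ perl -le '$n = () = (5, 6, 7); print $n'
3

The list assignment " () = (5, 6, 7) " gets evaluated in scalar context because it's assigned to the scalar " $n ", so it returns the number of elements on its right-hand side, which is 3.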

The one-liner above is the same as this longer version:

perl -le 'print scalar(()=<>)'

And a completely obvious version:

perl -le 'print scalar(@foo=<>)'

Yet another way to do it:

perl -ne '}{print $.'

This one-liner uses the eskimo operator "}{" in conjunction with the "-n" command line argument. As I explained in one-liner #11, the "-n" argument forces Perl to assume a " while(<>) { } " loop around the program. The eskimo operator closes the loop's block early and opens a new block that runs after the loop, so the program turns out to be:

while (<>) {
}{                    # eskimo operator here
    print $.;
}

It's easy to see that this program just loops over all the input and after it's done doing so, it prints the " $. ", which is the number of lines in the input.

18. Print the number of non-empty lines in a file.

perl -le 'print scalar(grep{/./}<>)'

This one-liner uses the "grep" function that is similar to the grep Unix command. Given a list of values, " grep {condition} " returns only those values for which the condition is true. In this case the condition is a regular expression that matches at least one character, so the input gets filtered and "grep{/./}" returns all the non-empty lines. To get the number of lines we evaluate the list in scalar context and print the result. (As I mentioned in the previous one-liner, a list in scalar context evaluates to the number of elements in the list.)

A golfer's version of this one-liner would replace "scalar()" with " ~~ " (double bitwise negation), shortening it to:

perl -le 'print ~~grep{/./}<>'

This can be made even shorter:

perl -le 'print~~grep/./,<>'

19. Print the number of empty lines in a file.

perl -lne '$a++ if /^$/; END {print $a+0}'

Here I use the variable $a to count how many empty lines I have encountered. Once I have finished looping over all the lines, I print the value of $a in the END block. I use the " $a+0 " construction to make sure " 0 " gets output if no lines were empty.

I could have also modified the previous one-liner:

perl -le 'print scalar(grep{/^$/}<>)'

Or written it with " ~~ ":

perl -le 'print ~~grep{/^$/}<>'

These last two versions are not as efficient, as they read the whole file into memory, whereas the first one processes it line by line.

20. Print the number of lines in a file that match a pattern (emulate grep -c).

perl -lne '$a++ if /regex/; END {print $a+0}'

This one-liner is basically the same as the previous one, except it increments the line counter $a by one when a line matches the regular expression /regex/.

Perl one-liners explained e-book

I've now written the "Perl One-Liners Explained" e-book based on this article series. I went through all the one-liners, improved explanations, fixed mistakes and typos, added a bunch of new one-liners, added an introduction to Perl one-liners and a new chapter on Perl's special variables. Please take a look:

Have Fun!

Have fun with these one-liners. These were really easy this time. The next part is going to be about various calculations.

Can you think of other numbering operations that I did not include here?

A Year of Blogging

Holy smokes! It has now been two years since I started this blog. It seems almost like yesterday when I posted the "A Year of Blogging" article. And now it's two! With this post I'd like to celebrate the 2nd birthday and share various interesting statistics that I managed to gather.

During this year (July 20, 2008 - July 26, 2009) I wrote 55 posts, which received around 1000 comments. According to StatCounter and Google Analytics my blog was visited by 1,050,000 unique people who viewed 1,700,000 pages. Wow, 1 million visitors! That's very impressive!

Here is a Google Analytics graph of monthly page views for the last year (click for a larger version):

Catonmat.Net Page Views Per Month (Second Year of Blogging)

In the last three months I did not manage to write much and you can see how that is reflected in the page views. A good lesson to be learned is to be persistent and keep writing articles consistently.

Here is the same graph with two years of data, showing a complete picture of my blog's growth:

Catonmat.Net Page Views Per Month (Two Years of Blogging)

I like this seemingly linear growth. I hope it continues the same way the next year!

Here are the top 5 referring sites that my visitors came from:

And here are the top 5 referring blogs:

I found that just a handful of blogs had linked to me during this year. The main reason, I suspect, is that I do not link out much myself... It's something to improve upon.

If you remember, I ended the last year's post with the following words (I had only 1000 subscribers at that time):

I am setting myself a goal of reaching 5000 subscribers by the end of the next year of blogging (July 2009)! I know that this is a very ambitious goal but I am ready to take the challenge!

I can proudly say that I reached my ambitious goal! My blog now has almost 7000 subscribers! If you have not yet subscribed, click here to do it!

Here is the RSS subscriber graph for the whole two years:

RSS Subscriber Count, Two Years of Blogging

Several months ago I approximated the subscriber data with an exponential function and it produced a good fit. Probably if I had continued writing articles at the same pace I did three months ago, I'd have over 10,000 subscribers now.

Anyway, let's now turn to the top 10 most viewed posts:

The article that I liked the most myself but which didn't make it into the top ten was "Set Operations in Unix Shell". I just love the Unix stuff I did there.

I am also very proud of the following three article series that I wrote:

  • 1. Review of MIT's Introduction to Algorithms course (14 parts).
  • 2. Famous Awk One-Liners Explained (4 parts: 1, 2, 3, 4).
  • 3. Famous Sed One-Liners Explained (3 parts: 1, 2, 3).

Finally, here is a list of ideas that I have thought of for the third year of blogging:

  • Publish three e-books on Awk One-Liners, Sed One-Liners and Perl One-Liners.
  • Launch a mathematics, physics and general science blog.
  • Write about mathematical foundations of cryptography and try to implement various cryptosystems and cryptography protocols.
  • Publish my review of MIT's Linear Algebra course (in math blog, so the main topic of catonmat stays computing).
  • Publish my review of MIT's Physics courses on Mechanics, Electromagnetism, and Waves (in physics blog).
  • Publish my notes on how I learned the C++ language.
  • Write more about computer security and ethical hacking.
  • Write several book reviews.
  • Create a bunch of various fun utilities and programs.
  • Create at least one useful web project.
  • Add a knowledge database to catonmat, create software to allow easy publishing to it.
  • If time allows, publish reviews of important computer science publications.

I'll document everything here as I go, so if you are interested in these topics, stay with me by subscribing to my RSS feed!

And to make things more challenging again, I am setting a new goal for the next year of blogging. The goal is to reach 20,000 subscribers by July 2010!

Hope to see you all on my blog again! Now it's time for this delicious cake:

Second Birthday Portal Game Cake

This article is part of the article series "MIT Introduction to Algorithms."

MIT Algorithms

This is a happy and sad moment at the same time - I have finally reached the last two lectures of MIT's undergraduate algorithms course. These last two lectures are on a fairly new area of algorithm research called "cache oblivious algorithms."

Cache-oblivious algorithms take into account something that has been ignored in all the lectures so far, namely the multilevel memory hierarchy of modern computers. Retrieving items from various levels of memory and cache makes up a dominant factor of running time, so for speed it is crucial to minimize these costs. The main idea of cache-oblivious algorithms is to achieve optimal use of caches on all levels of a memory hierarchy without knowledge of their size.

Cache-oblivious algorithms should not be confused with cache-aware algorithms. Cache-aware algorithms and data structures explicitly depend on various hardware configuration parameters, such as the cache size. Cache-oblivious algorithms do not depend on any hardware parameters. An example of cache-aware (not cache-oblivious) data structure is a B-Tree that has the explicit parameter B, the size of a node. The main disadvantage of cache-aware algorithms is that they are based on the knowledge of the memory structure and size, which makes it difficult to move implementations from one architecture to another. Another problem is that it is very difficult, if not impossible, to adapt some of these algorithms to work with multiple levels in the memory hierarchy. Cache-oblivious algorithms solve both problems.

Lecture twenty-two introduces the terminology and notation used in cache-oblivious algorithms, explains the difference between cache-oblivious and cache-aware algorithms, does a memory-transfer analysis of several simple algorithms and culminates with a cache-oblivious algorithm for matrix multiplication.

The final lecture twenty-three is the most difficult in the whole course and shows cache-oblivious binary search trees and a cache-oblivious sorting algorithm called funnel sort.

Use this supplementary reading material by Professor Demaine to understand the material better: Cache-oblivious algorithms and data structures (.pdf).

Lecture 22: Cache Oblivious Algorithms I

Lecture twenty-two starts with an introduction to the modern memory hierarchy (CPU cache L1, L2, L3, main memory, disk cache, etc.) and with the notation and core concepts used in cache-oblivious algorithms.

A powerful result in cache-oblivious algorithm design is that if an algorithm is efficient on two levels of cache, then it's efficient on any number of levels. Thus the study of cache-obliviousness can be simplified to a two-level memory hierarchy, say the CPU cache and main memory, where accesses to the cache are instant but accesses to main memory are orders of magnitude slower. Therefore the main question cache-oblivious algorithm analysis tries to address is how many memory transfers (MTs) a problem of size N takes. The notation used for this is MT(N). For an algorithm to be efficient, the number of memory transfers should be as small as possible.

Next the lecture analyzes the number of memory transfers for the basic array scanning and array reversal algorithms. Since array scanning is sequential, N elements can be processed with O(N/B) accesses, where B is the block size - the number of elements that are automatically fetched when one element is accessed. That is, MT(N) = O(N/B) for array scanning. The same bound holds for reversing an array, since it can be viewed as two scans - one from the beginning and one from the end.

Next it's shown that the classical binary search (covered in lecture 3) is not cache efficient, but the order statistics algorithm (covered in lecture 6) is cache efficient.

Finally the lecture describes a cache-efficient way to multiply matrices by storing them block-wise in memory.
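
To make the divide-and-conquer idea concrete, here is a minimal Python sketch (my own illustration, not code from the lecture) of the recursive multiplication scheme: each matrix is split into four quadrants and the eight half-size products are computed recursively, so at some depth of the recursion the submatrices fit in cache, no matter what the cache size is. Plain Python lists won't show the actual cache gains (a real implementation would also lay the matrices out recursively in memory); the sketch only shows the structure of the algorithm.

def quadrants(M):
  # split a square matrix (n a power of two) into four n/2 x n/2 blocks
  h = len(M) // 2
  return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
          [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

def madd(X, Y):
  # add two square matrices element-wise
  n = len(X)
  return [[X[i][j] + Y[i][j] for j in xrange(n)] for i in xrange(n)]

def mmult(A, B):
  # cache-oblivious matrix multiplication: recurse on quadrants;
  # no cache parameters (block size B, cache size M) appear anywhere
  n = len(A)
  if n == 1:
    return [[A[0][0] * B[0][0]]]
  a, b, c, d = quadrants(A)
  e, f, g, h = quadrants(B)
  top    = zip(madd(mmult(a, e), mmult(b, g)), madd(mmult(a, f), mmult(b, h)))
  bottom = zip(madd(mmult(c, e), mmult(d, g)), madd(mmult(c, f), mmult(d, h)))
  return [l + r for l, r in top] + [l + r for l, r in bottom]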

You're welcome to watch lecture twenty-two:

Topics covered in lecture twenty-two:

  • [00:10] Introduction and history of cache-oblivious algorithms.
  • [02:00] Modern memory hierarchy in computers: Caches L1, L2, L3, main memory, disk cache.
  • [06:15] Formula for calculating the cost to access a block of memory.
  • [08:18] Amortized cost to access one element in memory.
  • [11:00] Spatial and temporal locality of algorithms.
  • [13:45] Two-level memory model.
  • [16:30] Notation: total cache size M, block size B, number of blocks M/B.
  • [20:40] Notation: MT(N) - number of memory transfers of a problem of size N.
  • [21:45] Cache-aware algorithms.
  • [22:50] Cache-oblivious algorithms.
  • [28:35] Blocking of memory.
  • [32:45] Cache-oblivious scanning algorithm (visitor pattern).
  • [36:20] Cache-oblivious Array-Reverse algorithm.
  • [39:05] Memory transfers in classical binary search algorithm.
  • [43:45] Divide and conquer algorithms.
  • [45:50] Analysis of memory transfers in order statistics algorithm.
  • [01:00:50] Analysis of classical matrix multiplication (with row major, column major memory layout).
  • [01:07:30] Cache oblivious matrix multiplication.

Lecture twenty-two notes:

MIT Algorithms Lecture 22 Notes Thumbnail. Page 1 of 2.
Lecture 22, page 1 of 2: modern memory hierarchy, spatial and temporal locality, two-level memory model, blocking of memory, basic algorithms: parallel scan.

MIT Algorithms Lecture 22 Notes Thumbnail. Page 2 of 2.
Lecture 22, page 2 of 2: basic algorithms: binary search, divide and conquer algorithms, order statistics, matrix multiplication, block algorithms.

Lecture 23: Cache Oblivious Algorithms II

This was probably the most complicated lecture in the whole course. The whole lecture is devoted to two subjects - cache-oblivious search trees and cache-oblivious sorting.

While it's relatively easy to understand the cache-oblivious way of storing search trees in memory, it's amazingly difficult to understand the cache-efficient sorting. It's called funnel sort and it is basically an n-way merge sort (covered in lecture 1) with a special cache-oblivious merging function called a k-funnel.

You're welcome to watch lecture twenty-three:

Topics covered in lecture twenty-three:

  • [01:00] Cache-oblivious static search trees (binary search trees).
  • [09:35] Analysis of static search trees.
  • [18:15] Cache-aware sorting.
  • [19:00] Sorting by repeated insertion in binary tree.
  • [21:40] Sorting by binary merge sort.
  • [31:20] Sorting by N-way mergesort.
  • [36:20] Sorting bound for cache-oblivious sorting algorithms.
  • [38:30] Cache-oblivious sorting.
  • [41:40] Definition of K-Funnel (cache-oblivious merging).
  • [43:35] Funnel sort.
  • [54:05] Construction of K-Funnel.
  • [01:03:10] How to fill buffer in k-funnel.
  • [01:07:30] Analysis of fill buffer.

Lecture twenty-three notes:

MIT Algorithms Lecture 23 Notes Thumbnail. Page 1 of 2.
Lecture 23, page 1 of 2: static search trees, cache aware sorting.

MIT Algorithms Lecture 23 Notes Thumbnail. Page 2 of 2.
Lecture 23, page 2 of 2: cache oblivious sorting, k-funnels, funnel sort, fill-buffer algorithm and analysis.


Have fun with the cache oblivious algorithms! I'll do a few more posts that will summarize all these lectures and highlight key ideas.

If you loved this, please subscribe to my blog!

Fibonacci of Pisa

In this article I'd like to show how theory does not always match practice. I am sure you all know the linear time algorithm for finding Fibonacci numbers. The analysis says that the running time of this algorithm is O(n). But is it still O(n) if we actually run it? If not, what is wrong?

Let's start with the simplest linear time implementation of the Fibonacci number generating algorithm in Python:

def LinearFibonacci(n):
  # returns the n-th Fibonacci number using O(n) additions
  fn = f1 = f2 = 1
  for x in xrange(2, n):
    fn = f1 + f2       # the next number is the sum of the previous two
    f2, f1 = f1, fn
  return fn
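
For example, calling it for the tenth Fibonacci number:

print LinearFibonacci(10)   # prints 55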

The theory says that this algorithm should run in O(n) - given n, the index of the Fibonacci number to find, the algorithm does a single loop up to n.

Now let's verify if this algorithm is really linear in practice. If it's linear, then the plot of n vs. the running time of LinearFibonacci(n) should be a straight line. I plotted these values for n up to 200,000 and here is the plot that I got:

Quadratic performance of the linear algorithm
Note: Each data point was averaged over 10 calculations.

Oh no! This does not look linear at all! It looks quadratic! I fitted the data with a quadratic function and it fit nearly perfectly. Do you know why the seemingly linear algorithm went quadratic?

The answer is that the theoretical analysis assumed that all the operations in the algorithm execute in constant time. But this is not the case when we run the algorithm on a real machine! As the Fibonacci numbers get larger, each addition operation "fn = f1 + f2" for calculating the next Fibonacci number runs in time proportional to the length of the previous Fibonacci number. That's because these huge numbers no longer fit in the CPU's native machine words, so a big integer library is required. And adding two numbers of length O(n) in a big integer library takes O(n) time.

I'll show you that the running time of the real-life "linear" Fibonacci algorithm really is O(n^2) by taking this hidden cost of the bigint library into account.

So at each iteration i we have a hidden cost of O(number of digits of fi) = O(digits(fi)). Let's sum these hidden costs for the whole loop up to n:

$\sum_{i=1}^{n} O(\mathrm{digits}(f_i))$

Now let's find the number of digits in the n-th Fibonacci number. To do that let's use the well-known Binet's formula, which tells us that the n-th Fibonacci number fn can be expressed as:

$f_n = \frac{1}{\sqrt{5}} \left( \frac{1 + \sqrt{5}}{2} \right)^n - \frac{1}{\sqrt{5}} \left( \frac{1 - \sqrt{5}}{2} \right)^n$

It is also well known that the number of digits in a number is the integer part of log10(number) + 1. Thus the number of digits in the n-th Fibonacci number is:

$\mathrm{digits}(f_n) = \lfloor \log_{10} f_n \rfloor + 1 \approx n \log_{10} \left( \frac{1 + \sqrt{5}}{2} \right) = O(n)$

Thus if we now sum all the hidden costs for finding the n-th Fibonacci number we get:

$\sum_{i=1}^{n} O(\mathrm{digits}(f_i)) = \sum_{i=1}^{n} O(i) = O\left( \frac{n(n+1)}{2} \right) = O(n^2)$

There we have it. The running time of this "linear" algorithm is actually quadratic if we take into consideration that each addition operation runs proportionally to the length of addends.
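
If you want to verify this on your own machine, a minimal timing harness along these lines will do (a sketch, assuming the LinearFibonacci function above; not the exact script behind the plot):

import time

def TimeFibonacci(n, repeat=10):
  # average the running time of LinearFibonacci(n) over several runs
  total = 0.0
  for i in xrange(repeat):
    start = time.time()
    LinearFibonacci(n)
    total += time.time() - start
  return total / repeat

for n in xrange(10000, 200001, 10000):
  print n, TimeFibonacci(n)

Plotting n against the reported times should reproduce the quadratic curve above.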

Next time I'll show you that if the addition operation runs in constant time, then the algorithm is truly linear; and later I will do a similar analysis of the logarithmic time algorithm for finding Fibonacci numbers that uses this awesome matrix identity:

$\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^n = \begin{pmatrix} f_{n+1} & f_n \\ f_n & f_{n-1} \end{pmatrix}$
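
To give you a taste of what's coming, here is a minimal Python sketch (an illustration, not the final article's code) of the logarithmic algorithm built on this identity: raise the matrix to the n-th power by repeated squaring and read off the Fibonacci number:

def mat2_mult(X, Y):
  # multiply two 2x2 matrices
  return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
          [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def MatrixFibonacci(n):
  # computes the n-th Fibonacci number with O(log n) matrix multiplications
  result = [[1, 0], [0, 1]]    # identity matrix
  power  = [[1, 1], [1, 0]]
  while n:
    if n & 1:                  # walk the binary expansion of n
      result = mat2_mult(result, power)
    power = mat2_mult(power, power)
    n >>= 1
  return result[0][1]          # result is [[f(n+1), f(n)], [f(n), f(n-1)]]

Keep in mind that the same hidden bigint cost shows up here as well: the matrix entries grow to O(n) digits, so the multiplications themselves are not constant-time operations.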

Don't forget to subscribe if you are interested! It's well worth every byte!