Hi all. I'm starting yet another article series here. This one is going to be about Unix utilities that you should know about. The articles will discuss one Unix program at a time. I'll try to write a good introduction to the tool and give as many examples as I can think of.
Before I start, I want to clarify one thing: why am I starting so many article series? The answer is that I want to write about many topics simultaneously and switch between them as I feel inspired.
The first post in this series is going to be about a not-so-well-known Unix program called Pipe Viewer, or pv for short. Pipe Viewer is a terminal-based tool for monitoring the progress of data through a pipeline. It can be inserted into any normal pipeline between two processes to give a visual indication of how quickly data is passing through, how long it has taken, how near to completion it is, and an estimate of how long it will be until completion.
Pipe Viewer was written by Andrew Wood, an experienced Unix sysadmin. The homepage of the pv utility is here: pv utility.
If you're interested in this stuff, I suggest that you subscribe to my RSS feed to receive my future posts automatically.
How to use pv?
Ok, let's start with some really easy examples and progress to more complicated ones.
Suppose that you have a file "access.log" that is a few gigabytes in size and contains web logs. You want to compress it into a smaller file, let's say a gzip archive (.gz). The obvious way would be to do:
$ gzip -c access.log > access.log.gz
As the file is so huge (several gigabytes), you have no idea how long to wait. Will it finish soon? Or will it take another 30 mins?
By using pv you can precisely time how long it will take. Take a look at doing the same through pv:
$ pv access.log | gzip > access.log.gz
611MB 0:00:11 [58.3MB/s] [=>                 ] 15% ETA 0:00:59
Pipe viewer acts as "cat" here, except it also adds a progress bar. We can see that gzip processed 611MB of data in 11 seconds. It has processed 15% of all data and it will take 59 more seconds to finish.
You may stick several pv processes in between. For example, you can time how fast the data is being read from the disk and how fast gzip is outputting data:
$ pv -cN source access.log | gzip | pv -cN gzip > access.log.gz
source:  760MB 0:00:15 [37.4MB/s] [=>         ] 19% ETA 0:01:02
  gzip: 34.5MB 0:00:15 [1.74MB/s] [   <=>     ]
Here we specified the "-N" parameter to pv to create a named stream. The "-c" parameter makes sure the output is not garbled by one pv process writing over the other.
This example shows that the "access.log" file is being read at a speed of 37.4MB/s but gzip is writing data at only 1.74MB/s. We can immediately calculate the compression ratio: it's 37.4/1.74 = 21x!
Notice how the gzip line does not show how much data is left or how soon it will finish. That's because the pv process after gzip has no idea how much data gzip will produce (it's just outputting compressed data from the input stream). The first pv process, however, knows how much data is left, because it's reading it.
Another similar example would be to pack the whole directory of files into a compressed tarball:
$ tar -czf - . | pv > out.tgz
117MB 0:00:55 [2.7MB/s] [>                  ]
In this example pv shows just the output rate of the "tar -czf" command. That's not very interesting and it doesn't tell us how much data is left. We need to provide pv with the total size of the data we are tarring; it's done this way:
$ tar -cf - . | pv -s $(du -sb . | awk '{print $1}') | gzip > out.tgz
253MB 0:00:05 [46.7MB/s] [>                  ]  1% ETA 0:04:49
What happens here is we tell tar to create "-c" an archive of all files in the current dir "." (recursively) and output the data to stdout "-f -". Next we specify the size "-s" to pv of all the files in the current dir. The "du -sb . | awk '{print $1}'" returns the number of bytes in the current dir, and it gets fed as the "-s" parameter to pv. Next we gzip the whole content and output the result to the out.tgz file. This way pv knows how much data is still left to be processed and shows us that it will take yet another 4 minutes 49 seconds to finish.
Another fine example is copying large amounts of data over the network with the help of the "nc" utility, which I will write about some other time.
Suppose you have two computers A and B. You want to transfer a directory from A to B very quickly. The fastest way is to use tar and nc, and time the operation with pv.
# on computer A, with IP address 192.168.1.100
$ tar -cf - /path/to/dir | pv | nc -l -p 6666 -q 5
# on computer B
$ nc 192.168.1.100 6666 | pv | tar -xf -
That's it. All the files in /path/to/dir on computer A will get transferred to computer B, and you'll be able to see how fast the operation is going.
If you want a progress bar, you have to use the "pv -s $(...)" trick from the previous example (only on computer A).
Another funny example comes from my blog reader alexandru. He shows how to time how fast the computer reads from /dev/zero:
$ pv /dev/zero > /dev/null
157GB 0:00:38 [4.17GB/s]
That's about it. I hope you enjoyed my examples and learned something new. I love explaining things and teaching! :)
How to install pv?
If you're on Debian or a Debian-based system such as Ubuntu, do the following:
$ sudo aptitude install pv
If you're on Fedora or a Fedora-based system such as CentOS, do:
$ sudo yum install pv
If you're on Slackware, go to the pv homepage, download the pv-version.tar.gz archive and do:
$ tar -zxf pv-version.tar.gz
$ cd pv-version
$ ./configure && sudo make install
If you're a Mac user:
$ sudo port install pv
If you're an OpenSolaris user:
$ pfexec pkg install pv
If you're a Windows user on Cygwin:
$ ./configure
$ export DESTDIR=/cygdrive/c/cygwin
$ make
$ make install
The manual of the utility can be found here: man pv.
Have fun measuring your pipes with pv, and until next time!
MIT's Introduction to Algorithms, Lectures 17, 18 and 19: Shortest Path Algorithms
This is the twelfth post in an article series about MIT's lecture course "Introduction to Algorithms." In this post I will review a trilogy of lectures on graph and shortest path algorithms. They are lectures seventeen, eighteen and nineteen. They cover Dijkstra's algorithm, the breadth-first search algorithm and the Bellman-Ford algorithm for finding single-source shortest paths, as well as the Floyd-Warshall algorithm and Johnson's algorithm for finding all-pairs shortest paths.
These algorithms require a thorough understanding of graphs. See the previous lecture for a good review of graphs.
Lecture seventeen focuses on the single-source shortest-paths problem: given a graph G = (V, E), we want to find a shortest path from a given source vertex s ∈ V to each vertex v ∈ V. In this lecture the edge weights are restricted to be non-negative, which leads to Dijkstra's algorithm, and, in the special case when all edges have unit weight, to the breadth-first search algorithm.
Lecture eighteen also focuses on the same single-source shortest-paths problem, but allows edges to be negative. In this case negative-weight cycles may exist, and Dijkstra's algorithm no longer works: it produces incorrect results. The Bellman-Ford algorithm is therefore introduced; it runs slower than Dijkstra's but detects negative cycles. As a corollary it is shown that Bellman-Ford solves linear programming problems with constraints in the form x_{j} - x_{i} <= w_{ij}.
Lecture nineteen focuses on the all-pairs shortest-paths problem: find a shortest path from u to v for every pair of vertices u and v. Although this problem can be solved by running a single-source algorithm once from each vertex, it can be solved faster with the Floyd-Warshall algorithm or Johnson's algorithm.
Lecture 17: Shortest Paths I: Single-Source Shortest Paths and Dijkstra's Algorithm
Lecture seventeen starts with a small review of paths and shortest paths.
It reminds us that given a graph G = (V, E, w), where V is a set of vertices, E is a set of edges and w is a weight function that maps edges to real-valued weights, a path p from a vertex u to a vertex v in this graph is a sequence of vertices (v_{0}, v_{1}, ..., v_{k}) such that u = v_{0}, v = v_{k} and (v_{i-1}, v_{i}) ∈ E. The weight w(p) of this path is the sum of weights over all edges = w(v_{0}, v_{1}) + w(v_{1}, v_{2}) + ... + w(v_{k-1}, v_{k}). It also reminds us that a shortest path from u to v is a path with minimum weight of all paths from u to v, and that a shortest path in a graph might not exist if the graph contains a negative-weight cycle.
The lecture then notes that shortest paths exhibit the optimal substructure property: a subpath of a shortest path is also a shortest path. The proof of this property is given by a cut-and-paste argument. If you remember from the previous two lectures on dynamic programming and greedy algorithms, an optimal substructure property suggests that these two techniques could be applied to solve the problem efficiently. Indeed, applying the greedy idea, Dijkstra's algorithm emerges.
Here is a somewhat precise definition of the single-source shortest-paths problem with non-negative edge weights: given a graph G = (V, E) and a starting vertex s ∈ V, find the shortest-path weights for all vertices v ∈ V.
Here is the greedy idea of Dijkstra's algorithm:
 1. Maintain a set S of vertices whose shortest-path weights from s are known (s ∈ S initially).
 2. At each step add a vertex v from the set V-S to the set S. Choose the v that has the minimal distance estimate from s (be greedy).
 3. Update the distance estimates of vertices adjacent to v.
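Just for illustration (this is my own sketch, not code from the lecture), here is how these greedy steps might look in Python, with the graph as a plain adjacency dict and Python's heapq module serving as the priority queue:

```python
import heapq

def dijkstra(graph, s):
    # graph: dict mapping vertex -> list of (neighbor, weight) pairs;
    # all edge weights must be non-negative.
    dist = {s: 0}       # best known distance estimates
    done = set()        # the set S of vertices with known shortest paths
    pq = [(0, s)]       # priority queue of (estimate, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue    # skip stale queue entries
        done.add(u)     # greedy choice: vertex with minimal estimate joins S
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w                  # relax edge (u, v)
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {'s': [('a', 1), ('b', 4)], 'a': [('b', 2), ('c', 6)],
         'b': [('c', 3)], 'c': []}
print(dijkstra(graph, 's'))  # {'s': 0, 'a': 1, 'b': 3, 'c': 6}
```

Instead of deleting stale entries from the heap, this sketch simply skips them when they are popped, which keeps the code short.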
I have also posted a video interview with Edsger Dijkstra, Edsger Dijkstra: Discipline in Thought; please take a look if you want to see what Dijkstra looked like. :)
The lecture continues with an example of running Dijkstra's algorithm on a non-trivial graph. It also introduces the concept of a shortest-path tree: a tree formed by the edges that were last relaxed in each iteration (hard to explain in English, see the lecture at 43:40).
The other half of the lecture is devoted to three correctness arguments for Dijkstra's algorithm. The first proves that relaxation never makes a mistake. The second proves that relaxation always makes the right greedy choice. And the third proves that when the algorithm terminates, the results are correct.
In the final minutes of the lecture, the running time of Dijkstra's algorithm is analyzed. It turns out that the running time depends on what data structure is used for maintaining the priority queue of the set V-S (step 2). If we use an array, the running time is O(V^{2}); if we use a binary heap, it's O(E·lg(V)); and if we use a Fibonacci heap, it's O(E + V·lg(V)).
Finally, a special case of weighted graphs is considered, when all weights are unit weights. In this case the single-source shortest-paths problem can be solved by the breadth-first search (BFS) algorithm, which is actually a simpler version of Dijkstra's algorithm with the priority queue replaced by a FIFO queue! The running time of BFS is O(V+E).
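Here is a tiny Python sketch of that idea (my own example graph, not the lecture's): the same relaxation loop as Dijkstra's, but a plain FIFO queue is enough because every edge costs 1:

```python
from collections import deque

def bfs_shortest_paths(graph, s):
    # graph: dict mapping vertex -> list of neighbors; every edge has
    # unit weight, so a FIFO queue replaces the priority queue.
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:            # first visit is via a shortest path
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

graph = {'s': ['a', 'b'], 'a': ['c'], 'b': ['c'], 'c': []}
print(bfs_shortest_paths(graph, 's'))  # {'s': 0, 'a': 1, 'b': 1, 'c': 2}
```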
You're welcome to watch lecture seventeen:
Topics covered in lecture seventeen:
 [01:40] Review of paths.
 [03:15] Edge weight functions.
 [03:30] Example of a path, its edge weights, and weight of the path.
 [04:22] Review of shortest paths.
 [05:15] Shortest-path weight.
 [06:30] Negative edge weights.
 [10:55] Optimal substructure of a shortest path.
 [11:50] Proof of optimal substructure property: cut and paste.
 [14:23] Triangle inequality.
 [15:15] Geometric proof of triangle inequality.
 [16:30] Single-source shortest paths problem.
 [18:32] Restricted single-source shortest paths problem: all edge weights positive or zero.
 [19:35] Greedy idea for single-source shortest paths.
 [26:40] Dijkstra's algorithm.
 [35:30] Example of Dijkstra's algorithm.
 [43:40] Shortest path trees.
 [45:12] Correctness of Dijkstra's algorithm: why relaxation never makes a mistake.
 [53:55] Correctness of Dijkstra's algorithm: why relaxation makes progress.
 [01:01:00] Correctness of Dijkstra's algorithm: why it gives correct answer when it terminates.
 [01:15:40] Running time of Dijkstra's algorithm.
 [01:18:40] Running time depending on using array O(V^2), binary heap O(E·lg(V)) and Fibonacci heap O(E + V·lg(V)) for priority queue.
 [01:20:00] Unweighted graphs.
 [01:20:40] Breadth-First Search (BFS) algorithm.
 [01:23:23] Running time of BFS: O(V+E).
Lecture seventeen notes:
Lecture 18: Shortest Paths II: Bellman-Ford Algorithm
Lecture eighteen begins by recalling that if a graph contains a negative-weight cycle, then a shortest path may not exist, and gives a geometric illustration of this fact.
Right after this fact, it jumps to the Bellman-Ford algorithm. The Bellman-Ford algorithm solves the single-source shortest-paths problem in the general case, in which edge weights may be negative. Given a weighted, directed graph G = (V, E) with source s and weight function w: E → R, the Bellman-Ford algorithm produces the shortest paths from s and their weights, if there is no negative-weight cycle, and it produces no answer if there is one.
The algorithm uses relaxation, progressively decreasing an estimate of the weight of a shortest path from the source s to each vertex v ∈ V until it achieves the actual shortest-path weight.
The running time of the Bellman-Ford algorithm is O(VE). The lecture also gives a correctness proof of the algorithm.
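To make the contrast with Dijkstra's algorithm concrete, here is a minimal Python sketch of Bellman-Ford (my own code, not the lecture's): |V|-1 passes of relaxing every edge, plus one extra pass to detect negative cycles:

```python
def bellman_ford(vertices, edges, s):
    # edges: list of (u, v, w) triples; weights may be negative.
    dist = {v: float('inf') for v in vertices}
    dist[s] = 0
    for _ in range(len(vertices) - 1):   # |V|-1 rounds of relaxing every edge
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                # one more pass detects negative cycles
        if dist[u] + w < dist[v]:
            return None                  # a negative-weight cycle is reachable
    return dist

vertices = ['s', 'a', 'b']
edges = [('s', 'a', 4), ('s', 'b', 5), ('b', 'a', -3)]
print(bellman_ford(vertices, edges, 's'))  # {'s': 0, 'a': 2, 'b': 5}
```

The O(VE) running time is visible directly: the outer loop runs |V|-1 times and the inner loop touches every edge.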
The other half of the lecture is devoted to a problem that can be efficiently solved by Bellman-Ford. It's called the linear feasibility problem, and it is a special case of a linear programming (LP) problem where there is no objective but the constraints are in the form x_{j} - x_{i} <= w_{ij}. It is noted that what Bellman-Ford solves is actually a special case of the LP problem.
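Here is a sketch of how that works in Python (my own illustration; the function name is hypothetical): each constraint x_j - x_i <= w becomes an edge from i to j with weight w, a virtual source gets a zero-weight edge to every variable, and Bellman-Ford distances from that source give a feasible assignment, if one exists:

```python
def solve_difference_constraints(n, constraints):
    # constraints: list of (j, i, w) triples, meaning x_j - x_i <= w,
    # for variables x_0 .. x_{n-1}.
    # Constraint graph: edge i -> j with weight w, plus a virtual
    # source (node n) with zero-weight edges to every variable.
    edges = [(i, j, w) for (j, i, w) in constraints]
    edges += [(n, v, 0) for v in range(n)]
    dist = [float('inf')] * (n + 1)
    dist[n] = 0
    for _ in range(n):                   # Bellman-Ford from the virtual source
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                # negative cycle => infeasible
        if dist[u] + w < dist[v]:
            return None
    return dist[:n]                      # x_v = dist[v] satisfies all constraints

# x_0 - x_1 <= 3 and x_1 - x_2 <= -2; one feasible answer is x = [0, -2, 0]
print(solve_difference_constraints(3, [(0, 1, 3), (1, 2, -2)]))
```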
The lecture ends with an application of Bellman-Ford to solving a special case of the VLSI layout problem in one dimension.
You're welcome to watch lecture eighteen:
Topics covered in lecture eighteen:
 [00:20] A long, long time ago... in a galaxy far, far away...
 [00:40] Quick review of previous lecture: Dijkstra's algorithm, non-negative edge weights.
 [01:40] Description of Bellman-Ford algorithm.
 [04:30] Bellman-Ford algorithm.
 [08:50] Running time of Bellman-Ford O(VE).
 [10:05] Example of Bellman-Ford algorithm.
 [18:40] Correctness of Bellman-Ford algorithm.
 [36:30] Linear programming (LP).
 [42:48] Efficient algorithms for solving LPs: simplex algorithm (exponential in worst case, but practical), ellipsoid algorithm (polynomial time, impractical), interior point methods (polynomial), random sampling (brand new, discovered at MIT).
 [45:58] Linear feasibility problem: LP with no objective.
 [47:30] Difference constraints: constraints in the form x_{j} - x_{i} <= w_{ij}.
 [49:50] Example of difference constraints.
 [51:04] Constraint graph.
 [54:05] Theorem: A negative-weight cycle in the constraint graph means the difference constraints are infeasible/unsatisfiable.
 [54:50] Proof.
 [59:15] Theorem: If there is no negative-weight cycle, then the constraints are satisfiable.
 [01:00:23] Proof.
 [01:08:20] Corollary: Bellman-Ford solves a system of m difference constraints on n variables in O(mn) time.
 [01:12:30] VLSI layout problem solved by Bellman-Ford.
Lecture eighteen notes:
Lecture 19: Shortest Paths III: All-Pairs Shortest Paths and Floyd-Warshall Algorithm
Lecture nineteen starts with a quick review of lectures seventeen and eighteen. It recalls the running times of various single-source shortest path algorithms, and mentions that in the case of a directed acyclic graph (which was not covered in the previous lectures), you can run a topological sort followed by one round of Bellman-Ford, which finds single-source shortest paths in linear time, O(V+E).
The lecture continues with the all-pairs shortest-paths problem, where we want to know the shortest path between every pair of vertices.
A naive approach to this problem is to run a single-source shortest path algorithm from each vertex. For example, on an unweighted graph we'd run the BFS algorithm V times, which would give O(VE) running time. On a graph with non-negative edge weights it would be V times Dijkstra's algorithm, giving O(VE + V^{2}lg(V)) time. And in the general case we'd run Bellman-Ford V times, which would make the algorithm run in O(V^{2}E) time.
The lecture continues with a precise definition of the all-pairs shortest-paths problem: given a directed graph, find an NxN matrix (N = |V|), where each entry a_{ij} is the shortest-path weight from vertex i to vertex j.
In the general case, if the graph has negative edges and is dense, the best we can do so far is run Bellman-Ford V times. Recalling that E = O(V^{2}) in a dense graph, the running time is O(V^{2}E) = O(V^{4}): hypercubed in the number of vertices = slow.
The lecture then proceeds with a dynamic programming algorithm, without knowing if it will be faster or not. It's too complicated to explain here, and I recommend watching the lecture at 11:54 to understand it. It turns out this dynamic programming algorithm does not give a performance boost and is still O(V^{4}), but it gives some wicked ideas.
The most wicked idea is to connect matrix multiplication with the dynamic programming recurrence and use repeated squaring to beat O(V^{4}). This craziness gives O(V^{3}lg(V)) time, which is an improvement. Please see 23:40 in the lecture for the full explanation.
After all this the lecture arrives at the Floyd-Warshall algorithm, which finds all-pairs shortest paths in O(V^{3}). The algorithm is derived from a recurrence: the shortest path from vertex i to vertex j is the minimum of { the shortest path from i to j directly, or the shortest path from i to k plus the shortest path from k to j }.
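The recurrence translates into Python almost directly (a sketch of my own, assuming an adjacency-matrix input with float('inf') for missing edges and zeros on the diagonal):

```python
def floyd_warshall(w):
    # w: n x n matrix of edge weights; w[i][i] == 0,
    # float('inf') where there is no edge.
    n = len(w)
    d = [row[:] for row in w]        # don't clobber the input matrix
    for k in range(n):               # allow intermediate vertices 0..k
        for i in range(n):
            for j in range(n):
                # either avoid vertex k, or go i -> k -> j
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float('inf')
w = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(w))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```

The three nested loops over n vertices make the O(V^{3}) running time obvious.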
Finally the lecture explains Johnson's algorithm, which runs in O(VE + V^{2}lg(V)) time for sparse graphs. The key idea in this algorithm is to reweight all edges so that they are all non-negative, then run Dijkstra from each vertex, and finally undo the reweighting.
It turns out, however, that to find the function for reweighting all the edges, a set of difference constraints needs to be satisfied. That makes us first run Bellman-Ford to solve these constraints.
Reweighting takes O(VE) time, running Dijkstra from each vertex takes O(VE + V^{2}lg(V)) and undoing the reweighting takes O(V^{2}) time. Of these terms O(VE + V^{2}lg(V)) dominates and defines the algorithm's running time (for dense graphs it's still O(V^{3})).
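Putting the three steps together, here is a compact Python sketch of Johnson's algorithm (my own code, assuming the input has no negative-weight cycles; a full implementation would detect and report them):

```python
import heapq

def johnson(vertices, edges):
    # edges: list of (u, v, w) triples; weights may be negative,
    # but the graph must contain no negative-weight cycles.
    INF = float('inf')
    # Step 1: Bellman-Ford from a virtual source gives the function h.
    # Starting h at 0 for every vertex is equivalent to having already
    # relaxed the source's zero-weight edges.
    h = {v: 0 for v in vertices}
    for _ in range(len(vertices)):
        for u, v, w in edges:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    # Step 2: reweight; w'(u,v) = w + h[u] - h[v] is non-negative.
    adj = {v: [] for v in vertices}
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))
    # Step 3: Dijkstra from every vertex, then undo the reweighting.
    dist = {}
    for s in vertices:
        d = {s: 0}
        pq = [(0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > d.get(u, INF):
                continue                 # stale queue entry
            for v, w in adj[u]:
                if du + w < d.get(v, INF):
                    d[v] = du + w
                    heapq.heappush(pq, (d[v], v))
        for v, dv in d.items():
            dist[(s, v)] = dv - h[s] + h[v]   # undo the reweighting
    return dist

d = johnson(['a', 'b', 'c'], [('a', 'b', -1), ('b', 'c', 2), ('a', 'c', 4)])
print(d[('a', 'b')], d[('a', 'c')])  # -1 1
```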
You're welcome to watch lecture nineteen:
Topics covered in lecture nineteen:
 [01:00] Review of single-source shortest path algorithms.
 [04:45] All-pairs shortest paths by running single-source shortest path algorithms from each vertex.
 [05:35] Unweighted edges: VxBFS = O(VE).
 [06:35] Non-negative edge weights: VxDijkstra = O(VE + V^{2}lg(V)).
 [07:40] General case: VxBellman-Ford = O(V^{2}E).
 [09:10] Formal definition of all-pairs shortest paths problem.
 [11:08] VxBellman-Ford for dense graphs (E = O(V^{2})) is O(V^{4}).
 [11:54] Trying to beat O(V^{4}) with dynamic programming.
 [19:30] Dynamic programming algorithm, still O(V^{4}).
 [23:40] A better algorithm via wicked analogy with matrix multiplication. Running time: O(V^{3}lg(V)).
 [37:45] Floyd-Warshall algorithm. Runs in O(V^{3}).
 [47:35] Transitive closure problem of directed graphs.
 [53:30] Johnson's algorithm. Runs in O(VE + V^{2}log(V)).
Lecture nineteen notes:
Have fun with shortest path algorithms! The next post is going to be an introduction to parallel algorithms: things like dynamic multithreading, scheduling and multithreaded algorithms.
PS. This course is taught from the CLRS book (also called "Introduction to Algorithms"). Chapters 24 and 25, called "Single-Source Shortest Paths" and "All-Pairs Shortest Paths", explain everything I wrote about here in much more detail. If these topics excite you, you may want to buy this awesome book:
Famous Sed One-Liners Explained, Part III: Selective Deletion of Certain Lines and Special Applications
This is the third and final part of an article on the famous sed one-liners. This part will explain sed one-liners for selective deletion of certain lines and special applications of sed. See part one for an introduction to the series.
Similarly to the famous Awk one-liners, sed one-liners are short and concise sed programs that span less than one terminal line. They were written by Eric Pement and are floating around on the Internet as the 'sed1line.txt' file.
If you are intrigued by this article series, I suggest that you subscribe to my posts!
Update: Spanish translation of part three is available!
Eric's sed one-liners are divided into several sections:
 1. File spacing (explained in part one).
 2. Numbering (explained in part one).
 3. Text conversion and substitution (explained in part one).
 4. Selective printing of certain lines (explained in part two).
 5. Selective deletion of certain lines (explained in this part).
 6. Special applications (explained in this part).
 7. Release of Sed One-Liners Explained ebook.
Grab my "sed cheat sheet" and a local copy of the sed1line.txt file, and let the explanation begin!
Awesome news: I have written an ebook based on this article series. Check it out:
5. Selective Deletion of Certain Lines
68. Print all lines in the file except a section between two regular expressions.
sed '/Iowa/,/Montana/d'
This one-liner continues where the previous one left off. One-liner #67 used the range match "/start/,/finish/" to print lines between two regular expressions (inclusive). This one-liner, on the other hand, deletes lines between two regular expressions and prints all the lines outside this range. Just to remind you, a range "/start/,/finish/" matches all lines starting from the first line that matches the regular expression "/start/" to the first line that matches the regular expression "/finish/". In this particular one-liner the "d" (delete) command is applied to these lines. The delete command prevents the matching lines from ever seeing the light.
For example, suppose your input to this oneliner was:
Florida
Iowa
New York
San Jose
Montana
Texas
Fairbanks
Then after the sed program has finished running, the output is:
Florida
Texas
Fairbanks
We see this output because the lines from Iowa to Montana matched the "/Iowa/,/Montana/" range match and were deleted.
69. Delete duplicate, consecutive lines from a file (emulates "uniq").
sed '$!N; /^\(.*\)\n\1$/!P; D'
This one-liner acts as the "uniq" Unix utility. So how does it work? First of all, for every line that is not the very last line of input, sed appends the next line to the pattern space with the "N" command. The "N" command is restricted to all but the last line by the "$!" restriction pattern. The newly appended line is separated from the previous line by the "\n" character. Next, the pattern space is matched against the "/^\(.*\)\n\1$/" regular expression. This regular expression captures the previous line up to the "\n" character and saves it in match group "\1". Then it tests if the newly appended line is the same as the previous one. If it is not, the "P" command gets executed; if it is, "P" does not get executed. The "P" command prints everything in the pattern space up to the first "\n" character. Next the "D" command executes and deletes everything up to the first "\n" char, leaving only the newly read line in the pattern space. It also forces the sed script to begin from the first command.
This way it loops over all lines, comparing two consecutive lines at a time. If they are equal, the first line gets deleted without being printed, and a new line gets appended to what's left. If they are not equal, the first one gets printed and then deleted.
I think it's hard to understand what is going on from this description. I'll illustrate it with an example. Suppose this is the input:
foo
foo
foo
bar
bar
baz
The first thing sed does is read the first line of input into the pattern space. The pattern space now contains "foo". Now the "N" command executes. The pattern space now contains "foo\nfoo". Next the pattern space is tested against the "/^\(.*\)\n\1$/" regular expression. This regular expression matches because "\(.*\)" is "foo" and "foo\nfoo" is exactly what we have in the pattern space. As it matched, the "P" command does not get executed. Now the "D" command executes, deleting everything up to the first "\n" from the pattern space. The pattern space now contains just "foo". The "D" command forces sed to start from the first command.

Now "N" executes again, the pattern space contains "foo\nfoo" again and the same thing happens: "P" does not get executed and "D" deletes the first "foo", leaving the pattern space with just "foo" in it. Now "N" gets executed once again; this time "bar" gets appended to the pattern space. It contains "foo\nbar" now. The regular expression "/^\(.*\)\n\1$/" does not match and "P" gets executed, printing "foo". After that "D" executes, wiping "foo" from the pattern space. The pattern space now contains "bar". The commands restart and "N" gets executed, appending the next "bar" to the current pattern space. Now it contains "bar\nbar". Just like with "foo\nfoo", nothing gets printed, and "D" deletes the first "bar", leaving the pattern space with "bar".

The one-liner restarts its execution. Now "N" reads in the final line "baz". The pattern space contains "bar\nbaz", which does not match the regular expression. "P" prints out "bar" and "D" deletes it. Now "N" does not get executed because we are at the last line of input; the "$!N" restricts "N" to all lines but the last. At this moment the pattern space contains only the last "baz", the regular expression does not match, so "baz" gets printed. The "D" command executes, emptying the pattern space. There is no more input and sed quits.
The output for this example is:
foo
bar
baz
I think this is one of the most detailed explanations I have written about a single one-liner. :)
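By the way, the overall effect of this one-liner (ignoring sed's pattern-space mechanics) is easy to sketch in Python: keep a line only if it differs from the previous one:

```python
def uniq(lines):
    # Emulates "uniq": drop a line if it equals the line before it.
    out = []
    for line in lines:
        if not out or out[-1] != line:
            out.append(line)
    return out

print(uniq(["foo", "foo", "foo", "bar", "bar", "baz"]))  # ['foo', 'bar', 'baz']
```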
70. Delete duplicate, non-consecutive lines from a file.
sed -n 'G; s/\n/&&/; /^\([ -~]*\n\).*\n\1/d; s/\n//; h; P'
This is a very tricky one-liner. It stores the unique lines in the hold buffer and, at each newly read line, tests if the new line is already in the hold buffer. If it is, then the new line is purged. If it's not, then it's saved in the hold buffer for future tests and printed.
A more detailed description: at each line this one-liner appends the contents of the hold buffer to the pattern space with the "G" command. The appended string gets separated from the existing contents of the pattern space by a "\n" character. Next, a substitution is made that replaces that "\n" character with two: "\n\n". The substitute command "s/\n/&&/" does that. The "&" means the matched string; as the matched string was "\n", "&&" is two copies of it. Next, a test "/^\([ -~]*\n\).*\n\1/" is done to see if the contents of capture group 1 are repeated. Capture group 1 is all the characters from space " " to "~" (which includes all printable chars); the "[ -~]*" matches that. Replacing one "\n" with two was the key idea here. As "\([ -~]*\n\)" is greedy (matches as much as possible), the doubled newline makes sure that it matches as little text as possible. If the test is successful, the current input line was already seen and "d" purges the whole pattern space and starts script execution from the beginning. If the test was not successful, the doubled "\n\n" gets replaced with a single "\n" by the "s/\n//" command. Then "h" copies the whole string to the hold buffer, and "P" prints the new line.
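The overall effect (though not sed's actual hold-buffer mechanics) can be sketched in Python: remember the lines already seen and print each one only the first time:

```python
def unique_lines(lines):
    # Mimics the one-liner's effect: output each line only the first
    # time it appears, preserving the original order.
    seen = set()   # plays the role of sed's hold buffer
    out = []
    for line in lines:
        if line not in seen:
            seen.add(line)
            out.append(line)
    return out

print(unique_lines(["foo", "bar", "foo", "baz", "bar"]))  # ['foo', 'bar', 'baz']
```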
71. Delete all lines except duplicate consecutive lines (emulates "uniq -d").
sed '$!N; s/^\(.*\)\n\1$/\1/; t; D'
This sed one-liner prints only the duplicate lines. It starts by reading in the next line from input with the "N" command. As I already mentioned, the current line and the next get separated by a "\n" character after "N" executes. This one-liner also restricts "N" to all lines but the last with the "$!" restriction. Now the substitution "s/^\(.*\)\n\1$/\1/" is tried. Similarly to one-liner #69, this substitution replaces two repeating strings with one. For example, a string "foo\nfoo" gets replaced with just "foo". Now, if this substitution was successful (there was a repeated string), the "t" command takes the script to the end, where the current pattern space gets printed automatically. If the substitution was not successful, "D" executes, deleting the non-repeated string. The cycle continues and this way only the duplicate lines get printed once.
Let's take a look at an example. Suppose the input is:
foo
foo
bar
baz
This one-liner reads the first line and immediately executes the "N" command. The pattern space now is "foo\nfoo". The substitution "s/^\(.*\)\n\1$/\1/" is tried and it's successful, because "foo" is repeated. The pattern space now contains just a single "foo". As the substitution was successful, the "t" command branches to the end of the script. At this moment "foo" gets printed. Now the cycle repeats. Sed reads in "bar", and the "N" command appends "baz" to "bar". The pattern space now is "bar\nbaz". The substitution is tried, but it's not successful, as "bar" is not repeated. As the substitution failed, "t" does nothing and "D" executes, deleting "bar" from the pattern space. The pattern space is left with a single "baz". "N" no longer executes as we have reached the last line of input, the substitution fails, "t" fails, and "D" deletes "baz".
The end result is:
foo
Just as we expected, only the duplicate line got printed.
72. Delete the first 10 lines of a file.
sed '1,10d'
This one-liner restricts the "d" command to a range of lines by number. The "1,10" means a range matching lines 1 to 10 inclusive. On each of these lines the "d" command gets executed. It deletes the current pattern space and restarts the commands from the beginning. The default action for lines > 10 is to print the line.
73. Delete the last line of a file.
sed '$d'
This one-liner restricts the "d" command to the last line of the file. It's done by specifying the special char "$" as the line to match. It matches only the last line. The last line gets deleted, but the others get printed implicitly.
74. Delete the last 2 lines of a file.
sed 'N;$!P;$!D;$d'
This one-liner always keeps two lines in the pattern space. At the very last line, it just does not output these last two. All the others before the last two get output implicitly. Let's see how it does that. As soon as sed reads the first line of input into the pattern space, it executes the first command "N". It places the 2nd line of input in the pattern space. The next two commands "$!P" and "$!D" print the first part of the pattern space up to the newline character, and delete this part from the pattern space. They keep doing this until the very last line gets appended to the pattern space by the "N" command. At this moment the last two lines are in the pattern space and "$d" executes, deleting them both. That's it. The last two lines got deleted.
If there is just one line of data, then it outputs it.
75. Delete the last 10 lines of a file.
sed -e :a -e '$d;N;2,10ba' -e 'P;D'
This is a really straightforward one-liner. It always keeps 10 lines in the pattern space by appending each new input line with "N" and deleting the 11th, excessive line with "D". Once the end of file is reached, "d" deletes the whole pattern space, throwing away the last 10 lines.
sed -n -e :a -e '1,10!{P;N;D;};N;ba'
This is also a straightforward one-liner. For the lines that are not 1-10, it appends them to the pattern space with "N". For lines > 10, it prints the first line in the pattern space with "P", appends another line with "N" and deletes the printed line with "D". The "D" command causes sed to branch to the beginning of the script! The "N;ba" at the end never gets executed again for lines > 10. It keeps looping this way: "P", "N", "D", always keeping 10 lines in the pattern space and printing the line 10 positions back on each cycle. The "N" command causes the script to quit if it tries to read past the end of file.
76. Delete every 8th line.
gsed '0~8d'
This one-liner works with GNU sed only. It uses a special address range match "first~step" that matches every step'th line starting with first. In this one-liner first is 0 and step is 8. Zero is not a valid physical line number, so the very first line of input does not match. The first line to match is the 8th, then the 16th, then the 24th, etc. Each line that matches is deleted by the "d" command.
sed 'n;n;n;n;n;n;n;d;'
This is a portable version. The "n" command prints the current pattern space, empties it, and reads in the next line. It does so for seven lines, and the 8th line gets deleted with "d". This process continues until all the input has been processed.
77. Delete lines that match regular expression pattern.
sed '/pattern/d'
This one-liner executes the "d" command on all lines that match "/pattern/". The "d" command deletes the line and skips to the next line.
78. Delete all blank lines in a file (emulates "grep '.'").
sed '/^$/d'
The regular expression "/^$/" in this one-liner tests if the beginning of a line matches the end of the line. Only empty lines have this property and sed deletes them.
Another way to do the same is:
sed '/./!d'
This one-liner tests if the line matches at least one character. The dot "." in the regular expression matches any character. An empty line does not have any characters and does not match this regular expression. Sed deletes all the lines that do not match it.
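Here is a quick demo of both variants on a few lines fabricated with "printf":

```shell
# Three data lines with blank lines mixed in; both commands drop the blanks.
printf 'a\n\nb\n\n\nc\n' | sed '/^$/d'   # → a, b, c
printf 'a\n\nb\n\n\nc\n' | sed '/./!d'   # → same output
```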
79. Delete all consecutive blank lines from a file (emulates "cat -s").
sed '/./,/^$/!d'
This one-liner leaves one blank line at the end of the file, if there are multiple blanks at the end. Other than that, all consecutive blanks are stripped.
It uses an inverse range match "/start/,/finish/!" to "d" delete lines from the first blank line to the first non-blank line, non-inclusive.
sed '/^$/N;/\n$/D'
This one-liner leaves one blank line at the beginning and end of the file, if there are multiple blanks at both ends. Other than that, all consecutive blanks are stripped.
The consecutive empty lines get appended to the pattern space by the "/^$/N" command. The "/\n$/D" command matches and deletes blanks until only 1 is left. At that moment it no longer matches, and the line gets output.
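Here is how both squeezing variants behave on a run of four blank lines (a quick sketch, assuming GNU sed):

```shell
# Four consecutive blank lines between "a" and "b" get squeezed to one.
printf 'a\n\n\n\n\nb\n' | sed '/./,/^$/!d'    # → a, one blank line, b
printf 'a\n\n\n\n\nb\n' | sed '/^$/N;/\n$/D'  # → same output
```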
80. Delete all consecutive blank lines from a file except the first two.
sed '/^$/N;/\n$/N;//D'
In case of more than 2 blank lines, this one-liner trims them down to two. There is a catch to this one-liner. Let me explain it first. See the last command "//D"? It's a shortcut for "/previous-match/D". In this case it's a shortcut for "/\n$/D". Alright, now the one-liner itself. On every empty line, it appends the next one to the current pattern space with the "/^$/N" command. Next it tests if the line just read in was actually a blank line with "/\n$/"; if it was, it reads another line in with "N". At this moment it repeats the same test "/\n$/". If the line was a blank one again, it deletes the first blank line and restarts the sed script from the beginning. Notice that at all times only 2 consecutive blank lines are in the pattern space. This way any number of blank lines gets trimmed down and only two are left.
81. Delete all leading blank lines at the top of a file.
sed '/./,$!d'
This one-liner inverts the match "from the first non-blank line to the end of file". The inverse is "from the beginning of the file to the last leading blank line", and those lines get deleted.
82. Delete all trailing blank lines at the end of a file.
sed -e :a -e '/^\n*$/{$d;N;ba' -e '}'
This one-liner accumulates blank lines in the pattern space until it either hits the end of file or hits a non-blank line. If it hits the end, "$d" deletes the whole pattern space (which contained just the trailing blank lines) and quits. If, however, it hits a non-blank line, the whole pattern space gets printed implicitly and the script continues as if nothing had happened.
The one above is the portable version. Here is a shorter one:
gsed -e :a -e '/^\n*$/N;/\n$/ba'
This is the same script, except a shorter version, made to work with GNU sed.
83. Delete the last line of each paragraph.
sed -n '/^$/{p;h;};/./{x;/./p;}'
This one-liner always keeps the previous line in the hold buffer. That's accomplished by the 2nd block of commands "/./{x;/./p;}". In this block, the pattern space (1 line) gets exchanged with the hold buffer (1 line) by the "x" command, and if the hold buffer was not empty, it gets printed by "p". The next moment to note is what happens on the first empty line. That is the line after a paragraph. At this moment "/^$/{p;h;}" gets executed, which prints the blank line (but does not print the last line of the paragraph!), and puts the blank line in the hold buffer. Once a new paragraph is reached, the script executes just as if it were the very first paragraph of the input.
6. Special Sed Applications
84. Remove nroff overstrikes.
Nroff overstrikes are characters that are formatted to stand out in bold. They are achieved as on old typewriters, where you would do a backspace and hit the same key again. In nroff the sequence is CHAR, CTRL+H, CHAR. This one-liner deletes the CHAR, CTRL+H pair, leaving just the plain CHAR.
sed 's/.^H//g'
Press Ctrl+V and then Ctrl+H to insert ^H literally in the sed one-liner. It uses the substitute command to delete any character "." followed by CTRL+H "^H".
Another way to do the same is to use a hex escape expression that works in the most recent seds:
sed 's/.\x08//g'
Yet another way is to use "echo" with the "-e" option that enables interpretation of backslash escapes:
sed 's/.'`echo -e "\b"`'//g'
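To test any of these without digging up a real nroff document, you can fabricate overstruck text with "printf" (its "\b" escape produces CTRL+H; the "\x08" form assumes GNU sed):

```shell
# "Text" typed in overstrike style: T, backspace, T, e, backspace, e, ...
printf 'T\bTe\bex\bxt\bt\n' | sed 's/.\x08//g'
# → Text
```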
85. Print Usenet/HTTP/Email message header.
gsed -r '/^\r?$/q'
Usenet, HTTP and Email headers are similar. They are a bunch of text lines, separated from the body of the message by two newlines "\r\n\r\n". Some implementations might even go with just "\n\n". This one-liner quits on the first line that is either empty or contains only "\r". In other words, it prints the message header and quits.
86. Print Usenet/HTTP/Email message body.
sed '1,/^$/d'
This one-liner uses a range match "1,/^$/" to delete lines starting from the 1st, and ending with the first blank line (inclusive). As I explained in one-liner #78 above, "/^$/" matches empty lines. All the lines before the first blank line in a Usenet/Email message or an HTTP response are message headers. They get deleted.
87. Extract subject from an email message.
sed '/^Subject: */!d; s///; q'
This one-liner deletes all lines that do not match "^Subject: ". Then it reuses the match in "s///" to delete the "Subject: " part from the line, leaving just the real subject, and quits. Please notice how "s///" is equivalent to "s/previous-match//", where "previous-match" is "^Subject: *" in this one-liner.
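Here is the one-liner applied to a minimal fabricated message (the addresses and subject are made up for the example):

```shell
# Headers, a blank separator line, then the body.
printf 'From: John <john@example.com>\nSubject: Meeting at noon\n\nSee you there.\n' |
  sed '/^Subject: */!d; s///; q'
# → Meeting at noon
```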
88. Extract sender information from an email message.
sed '/^From: */!d; s///; q'
This one-liner is equivalent to the previous one, except it prints the sender information from the email.
89. Extract email address from a "Name Surname <email@domain.com>" string.
sed 's/.*< *//;s/ *>.*//'
This one-liner strips all symbols before the < symbol (and any whitespace after it), and strips all symbols after the > symbol (including whitespace before it). That's it. What's left is email@domain.com.
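A quick check (the address is, of course, just an example):

```shell
echo 'Name Surname <email@domain.com>' | sed 's/.*< *//;s/ *>.*//'
# → email@domain.com
```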
90. Add a leading angle bracket and space to each line (quote an email message).
sed 's/^/> /'
This one-liner substitutes the zero-width anchor "^" that matches the beginning of line with "> ". As it's a zero-width anchor, the result is that "> " gets added to the beginning of each line.
91. Delete leading angle bracket from each line (unquote an email message).
sed 's/^> //'
It does what it says: deletes two characters, ">" and a space " ", from the beginning of each line.
92. Strip HTML tags.
sed -e :a -e 's/<[^>]*>//g;/</N;//ba'
Sed is not made for parsing HTML. This is a very crude version of an HTML tag eraser. It starts by creating a branch label named "a". Then on each line it substitutes "<[^>]*>" with nothing as many times as possible (the "g" flag of the s/// command). The "<[^>]*>" expression means: match the symbol "<", followed by any symbols that are not ">", ending with ">". This is a common regular expression idiom for non-greediness. Next, the one-liner tests if there are any open tags left on the line; if there are, "N" reads the next line of input to make it work across multiple lines. "//ba" finally branches to the beginning of the script (it's short for "/previous-expression/ba", which in this case is "/</ba").
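Two small demos: tags on a single line, and a tag broken across two lines (assuming GNU sed):

```shell
printf '<p>Hello <b>world</b></p>\n' | sed -e :a -e 's/<[^>]*>//g;/</N;//ba'
# → Hello world

# A tag split across a line break also gets removed, thanks to "N".
printf 'a <a\nhref="x">link</a>\n' | sed -e :a -e 's/<[^>]*>//g;/</N;//ba'
# → a link
```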
Sed One-Liners Explained E-Book
I have written an e-book called "Sed One-Liners Explained". I improved the explanations of the one-liners in this article series, added new one-liners and added three new chapters: an introduction to sed, a summary of sed addresses and ranges, and debugging sed scripts with sedsed. Please take a look:
Have Fun!
This post completes the three-part article on the superman of Unix stream editing. I hope that you learned a thing or two and it was my pleasure explaining them.
Just as with the three-part article on famous Awk one-liners explained, my plans are to create a 'sed1line-explained.txt' file that will be supplementary to 'sed1line.txt', and publish an e-book in pdf format.
If you have any comments on the one-liners, please let me know in the comments. Thanks!
This is going to be a small, technical tutorial on how to save a lot of time by watching videos at higher playback rates.
I first read about this idea on my most favorite personal development blog, StevePavlina.com. In his post "Overclock Your Audio Learning" he says that he occasionally listens to audios at 4.1x. At this speed a 4 hour video/audio can be listened to in less than 1 hour!
I personally found it impossible to understand anything at 4x speed. My optimal listening speed is 1.65x - 2.1x.
To speed up the videos you will first need to download and install AviSynth. AviSynth is a kind of video programming language with which you can do all kinds of manipulations to videos programmatically. If you are on Windows, then during the installation make sure to associate the .avs file extension with Windows Media Player and not Notepad.
Next, create this AviSynth script, and place it in the same directory as your video. Name the script as "speedup.avs" or something similar. Make sure the extension is ".avs" if you are on Windows!
file = "file_name_of_video.avi"
speedup = 1.65
pitch = 100
DirectShowSource(file)
audio_rate = last.audiorate
video_rate = last.framerate
AssumeSampleRate(int(audio_rate*speedup))
AssumeFPS(video_rate*speedup)
TimeStretch(pitch = pitch)
There are three variables that you can change in this simple script. The first is "file". It should be the filename of the video you are about to watch. The next is "speedup". It's the new playback rate; you may set it to any value you wish. For example, if you set it to 2.0, then the video will play twice as fast as it normally would. And the last parameter to change is "pitch". You may change it to something lower than 100 when the video plays at higher speeds, to make the speaker sound lower. I usually keep "speedup" at 1.65 and "pitch" at 75.
Once you have made your own configuration, just double-click the .avs file on Windows to play the video at the new playback speed, or play it through mplayer on Linux!
Update: My blog readers bobb and crb suggested two new techniques on how to watch videos faster. Bobb suggested just to use mplayer. Turns out it already has an option to play videos faster!
mplayer -speed 1.65 file.avi
# use keys [ ], and { } to control the playback speed
# use backspace to reset video speed to normal
Crb suggested to use MySpeed™ PlugIn for YouTube to speed up video on YouTube in real time.
Here are a few examples that I crafted at speeds 1x, 1.4x, 1.65x, 2x, 2.25x, 2.5x, 2.75x and 3x. They are from Alan Kay's keynote speech "The computer revolution hasn't happened yet" at the OOPSLA Conference in 1997.
After a few minutes of listening at higher speeds, try going back to listen at 1x. It will seem incredibly slow because your mind will have adapted to the faster input rate.
Alan Kay's talk at a normal speed. Total time: 5 mins.
Direct URL: http://www.youtube.com/watch?v=MaRODiPRZU
Alan Kay's talk at 1.4x speed. Total time: 3:34 mins.
Direct URL: http://www.youtube.com/watch?v=Oc_3yk22gn8
Alan Kay's talk at 1.65x speed. Total time: 3 mins.
Direct URL: http://www.youtube.com/watch?v=lrAq86Qk_rU
Alan Kay's talk at 2x speed. Total time: 2:30 mins.
Direct URL: http://www.youtube.com/watch?v=WlwEq4HXB3Y
Alan Kay's talk at 2.25x speed. Total time: 2:13 mins.
Direct URL: http://www.youtube.com/watch?v=b0K5JMjtP3w
Alan Kay's talk at 2.5x speed. Total time: 2 mins.
Direct URL: http://www.youtube.com/watch?v=YBt01TFNIA0
Alan Kay's talk at 2.75x speed. Total time: 1:49 mins.
Direct URL: http://www.youtube.com/watch?v=BNCqZp0XOVw
Alan Kay's talk at 3x speed. Total time: 1:40 mins.
Direct URL: http://www.youtube.com/watch?v=8DsuyPNOqGw
Do you have any other techniques to speed up videos? I am also curious: at what speeds do you feel the most comfortable watching videos?
It would also be cool to create a hack that modifies youtube and google video players to make them play videos faster natively.
PS. Did you know that I am a big fan of video lectures? I have been collecting math, physics and computer science video lectures for almost 3 years now and posting them to my other blog, Free Science Online.
Famous Awk One-Liners Explained, Part III: Selective Printing and Deleting of Certain Lines
This is the third and final part of a three-part article on the famous Awk one-liners. This part will explain Awk one-liners for selective printing and deletion of certain lines. See part one for the introduction to the series.
If you just came to my website, then you might wonder, "What are these Awk one-liners and why are they famous?" The answer is very simple: they are small and beautiful Awk programs that each do one, and only one, text manipulation task well. They have been circulating around the Internet as the awk1line.txt text file and they were written by Eric Pement.
If you are intrigued by this article series, I suggest that you subscribe to my posts, as I will have a lot more interesting and educational articles this year.
Eric Pement's Awk one-liner collection consists of five sections:
 1. File spacing (explained in part one).
 2. Numbering and calculations (explained in part one).
 3. Text conversion and substitution (explained in part two).
 4. Selective printing of certain lines (explained in this part).
 5. Selective deletion of certain lines (explained in this part).
 6. String creation, array creation and update on selective printing of certain lines (explained in bonus article).
 7. Release of the Awk One-Liners Explained e-book.
Awesome news: I have written an ebook based on this article series. Check it out:
Grab my Awk cheat sheet and the local copy of the Awk one-liners file awk1line.txt and let's roll.
4. Selective Printing of Certain Lines
45. Print the first 10 lines of a file (emulates "head -10").
awk 'NR < 11'
Awk has a special variable called "NR" that stands for "Number of Records"; since the default record separator is a newline, it counts the lines seen so far in the current input. After reading each line, Awk increments this variable by one. So for the first line it's 1, for the second line 2, etc. As I explained in the very first one-liner, every Awk program consists of a sequence of pattern-action statements "pattern { action statements }". The "action statements" part gets executed only on those lines that match "pattern" (the pattern evaluates to true). In this one-liner the pattern is "NR < 11" and there are no "action statements". The default action in case of missing "action statements" is to print the line as-is (it's equivalent to "{ print $0 }"). The pattern in this one-liner is an expression that tests if the current line number is less than 11. If the line number is less than 11, Awk prints the line. As soon as the line number is 11 or more, the pattern evaluates to false and Awk skips the line.
A much better way to do the same is to quit after seeing the first 10 lines (otherwise we are looping over all the remaining lines and doing nothing):
awk '1; NR == 10 { exit }'
The "NR == 10 { exit }" part guarantees that as soon as line number 10 is reached, Awk quits. For each of the first ten lines, Awk evaluates "1", which is always a true statement. And as we just learned, true patterns without the "action statements" part are equal to "{ print $0 }", so the first ten lines get printed!
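A quick comparison of the two approaches (assuming awk and "seq" are available; both print lines 1-10, but the second one stops reading input at line 10):

```shell
seq 10000 | awk 'NR < 11'
seq 10000 | awk '1; NR == 10 { exit }'
```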
46. Print the first line of a file (emulates "head -1").
awk 'NR > 1 { exit }; 1'
This one-liner is very similar to the previous one. The "NR > 1" is true only for lines greater than one, so it does not get executed on the first line. On the first line only the "1", the true statement, gets executed. It makes Awk print the line and read the next line. Now the "NR" variable is 2, and "NR > 1" is true. At this moment "{ exit }" gets executed and Awk quits. That's it. Awk printed just the first line of the file.
47. Print the last 2 lines of a file (emulates "tail -2").
awk '{ y=x "\n" $0; x=$0 }; END { print y }'
Okay, so what does this one do? First of all, notice that the "{y=x "\n" $0; x=$0}" action statement group is missing the pattern. When the pattern is missing, Awk executes the statement group for all lines. For the first line, it sets variable "y" to "\nline1" (because x is not yet defined). For the second line it sets variable "y" to "line1\nline2". For the third line it sets variable "y" to "line2\nline3". As you can see, for line N it sets the variable "y" to "lineN-1\nlineN". Finally, when it reaches EOF, variable "y" contains the last two lines and they get printed via the "print y" statement.
Thinking about this one-liner for a second, one concludes that it is very inefficient: it reads the whole file line by line just to print out the last two lines! Unfortunately there is no seek() statement in Awk, so you can't seek to end-of-file minus two lines (that's what tail does). It's recommended to use "tail -2" to print the last 2 lines of a file.
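Still, the one-liner is easy to verify on a small input:

```shell
seq 5 | awk '{ y=x "\n" $0; x=$0 }; END { print y }'
# → 4 and 5, one per line
```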
48. Print the last line of a file (emulates "tail -1").
awk 'END { print }'
This one-liner may or may not work. It relies on the assumption that the "$0" variable, which contains the entire line, does not get reset after the input has been exhausted. The special "END" pattern gets executed after the input has been exhausted (or "exit" is called). In this one-liner the "print" statement is supposed to print "$0" at EOF, which may or may not have been reset.
Whether it works depends on your awk version and implementation. It works with GNU Awk, for example, but doesn't seem to work with nawk or xpg4/bin/awk.
The most compatible way to print the last line is:
awk '{ rec=$0 } END{ print rec }'
Just like the previous one-liner, it's computationally expensive to print the last line of a file this way, and "tail -1" should be the preferred way.
49. Print only the lines that match a regular expression "/regex/" (emulates "grep").
awk '/regex/'
This one-liner uses a regular expression "/regex/" as the pattern. If the current line matches the regex, it evaluates to true, and Awk prints the line (remember that a missing action statement is equal to "{ print }", which prints the whole line).
50. Print only the lines that do not match a regular expression "/regex/" (emulates "grep -v").
awk '!/regex/'
Pattern-matching expressions can be negated by prepending "!" to them. If they would evaluate to true, prepending "!" makes them evaluate to false, and the other way around. This one-liner inverts the regex match of the previous one-liner (#49) and prints all the lines that do not match the regular expression "/regex/".
51. Print the line immediately before a line that matches "/regex/" (but not the line that matches itself).
awk '/regex/ { print x }; { x=$0 }'
This one-liner always saves the current line in the variable "x". When it reads in the next line, the previous line is still available in the "x" variable. If that line matches "/regex/", it prints out the variable x, and as a result, the previous line gets printed.
It does not work if the first line of the file matches "/regex/". In that case, we might want to print "match on line 1", for example:
awk '/regex/ { print (x=="" ? "match on line 1" : x) }; { x=$0 }'
This one-liner tests if the variable "x" contains something. The only time that x is empty is on the very first line. In that case "match on line 1" gets printed. Otherwise the variable "x" gets printed (which, as we found out, contains the previous line). Notice that this one-liner uses the ternary operator "foo?bar:baz" that is short for "if foo, then bar, else baz".
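Both variants in action (the input lines are made up for the demo):

```shell
printf 'one\ntwo\nthree\n' | awk '/three/ { print x }; { x=$0 }'
# → two

# When the match is on the very first line:
printf 'three\nfour\n' | awk '/three/ { print (x=="" ? "match on line 1" : x) }; { x=$0 }'
# → match on line 1
```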
52. Print the line immediately after a line that matches "/regex/" (but not the line that matches itself).
awk '/regex/ { getline; print }'
This one-liner calls the "getline" function on all the lines that match "/regex/". This function sets $0 to the next line (and also updates the NF, NR and FNR variables). The "print" statement then prints this next line. As a result, only the line after a line matching "/regex/" gets printed.
If the last line matches "/regex/", then "getline" actually returns an error and does not set $0. In this case the last line itself gets printed.
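A quick demo of the normal case:

```shell
printf 'one\ntwo\nthree\n' | awk '/one/ { getline; print }'
# → two
```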
53. Print lines that match any of "AAA", "BBB", or "CCC".
awk '/AAA|BBB|CCC/'
This one-liner uses a feature of extended regular expressions: the "|" alternation metacharacter. This metacharacter separates "AAA" from "BBB", and from "CCC", and tries to match them separately on each line. Only the lines that contain one (or more) of them get matched and printed.
54. Print lines that contain "AAA" and "BBB", and "CCC" in this order.
awk '/AAA.*BBB.*CCC/'
This one-liner uses the regular expression "AAA.*BBB.*CCC" to print lines. This regular expression says: match lines containing AAA, followed by any text, followed by BBB, followed by any text, followed by CCC, in this order! If a line matches, it gets printed.
55. Print only the lines that are 65 characters in length or longer.
awk 'length > 64'
This one-liner uses the "length" function. This function is defined as "length([str])": it returns the length of the string "str". If none is given, it returns the length of the string in the variable $0. For historical reasons, the parentheses "()" at the end of "length" can be omitted. This one-liner tests if the current line is longer than 64 characters; if it is, "length > 64" evaluates to true and the line gets printed.
56. Print only the lines that are less than 64 characters in length.
awk 'length < 64'
This one-liner is almost byte-by-byte equivalent to the previous one. Here it tests if the length of the line is less than 64 characters. If it is, Awk prints it out. Otherwise nothing gets printed.
57. Print a section of file from regular expression to end of file.
awk '/regex/,0'
This one-liner uses a pattern match in the form 'pattern1, pattern2' that is called a "range pattern". The 3rd Awk Tip from the article "10 Awk Tips, Tricks and Pitfalls" explains this match very carefully. It matches all the lines starting with a line that matches "pattern1" and continuing until a line matches "pattern2" (inclusive). In this one-liner "pattern1" is the regular expression "/regex/" and "pattern2" is just 0 (false). So this one-liner prints all lines starting from a line that matches "/regex/", continuing to the end of file (because 0 is always false, so "pattern2" never matches).
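For example, on numbered input (assuming "seq" is available):

```shell
# Print everything from the first line matching /5/ to the end of input.
seq 10 | awk '/5/,0'
# → 5 6 7 8 9 10
```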
58. Print lines 8 to 12 (inclusive).
awk 'NR==8,NR==12'
This one-liner also uses a range pattern in the format "pattern1, pattern2". The "pattern1" here is "NR==8" and "pattern2" is "NR==12". The first pattern means "the current line is the 8th" and the second means "the current line is the 12th". This one-liner prints the lines between (and including) these two patterns.
59. Print line number 52.
awk 'NR==52'
This one-liner tests to see if the current line is number 52. If it is, "NR==52" evaluates to true and the line gets implicitly printed out (patterns without statements print the line unmodified).
The correct way, though, is to quit after line 52:
awk 'NR==52 { print; exit }'
This one-liner forces Awk to quit after line number 52 is printed. It is the correct way to print line 52, because there is nothing else to be done, so why loop over the whole file doing nothing.
60. Print section of a file between two regular expressions (inclusive).
awk '/Iowa/,/Montana/'
I explained what a range pattern such as "pattern1,pattern2" does in one-liner #57. In this one-liner "pattern1" is "/Iowa/" and "pattern2" is "/Montana/". Both of these patterns are regular expressions. This one-liner prints all the lines starting with a line that matches "Iowa" and ending with a line that matches "Montana" (inclusive).
5. Selective Deletion of Certain Lines
There is just one one-liner in this section.
61. Delete all blank lines from a file.
awk NF
This one-liner uses the special NF variable that contains the number of fields on the line. For empty lines, NF is 0, which evaluates to false, and false patterns do not get the line printed.
Another way to do the same is:
awk '/./'
This one-liner uses a regular expression "/./" that matches any character. Empty lines do not have any characters, so they do not match.
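Both blank-line deleters side by side:

```shell
printf 'a\n\nb\n\nc\n' | awk NF     # → a, b, c
printf 'a\n\nb\n\nc\n' | awk '/./'  # → same output
```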
Awk One-Liners Explained E-Book
I have written my first e-book, called "Awk One-Liners Explained". I improved the explanations of the one-liners in this article series, added new one-liners and added three new chapters: an introduction to awk one-liners, a summary of awk special variables, and idiomatic awk. Please take a look:
Have Fun!
This concludes the article series about Awk one-liners. I hope that you enjoyed this three-part article and it made you a better Awk programmer!
My future plans are to create an awk1line-explained.txt that will be a supplementary file to the famous awk1line.txt. I am also thinking about publishing a nicely formatted pdf e-book with all the one-liners.
If you liked this article, you may also like a very similar article on Famous Sed One-Liners Explained.
And finally, if you notice anything that you can't understand, please let me know in the comments. Thank you!