This is the world's best introduction to sed - the superman of UNIX stream editing. I originally wrote this introduction for my second e-book; later I decided to make it part of the free e-book preview and republish it here as this article.

Introduction to sed

Mastering sed can be reduced to understanding and manipulating the four spaces of sed. These four spaces are:

  • Input Stream
  • Pattern Space
  • Hold Buffer
  • Output Stream

Think about the spaces this way - sed reads the input stream and produces the output stream. Internally it has the pattern space and the hold buffer. Sed reads data from the input stream until it finds the newline character \n. Then it places the data read so far, without the newline, into the pattern space. Most of the sed commands operate on the data in the pattern space. The hold buffer is there for your convenience. Think about it as a temporary buffer. You can copy or exchange data between the pattern space and the hold buffer. Once sed has executed all the commands, it outputs the pattern space and adds a \n at the end.

It's possible to modify the behavior of sed with the -n command line switch. When -n is specified, sed doesn't output the pattern space and you have to print it explicitly with the p or P command.

Let's look at several examples to understand the four spaces and sed. These are just examples to illustrate what sed looks like and what it's all about.

Here is the simplest possible sed program:

sed 's/foo/bar/'

This program replaces text "foo" with "bar" on every line. Here is how it works. Suppose you have a file with these lines:

abc
foo
123-foo-456
Sed opens the file as the input stream and starts reading the data. After reading "abc" it finds a newline \n. It places the text "abc" in the pattern space and now it applies the s/foo/bar/ command. Since we have "abc" in the pattern space and there is no "foo" anywhere, sed does nothing to the pattern space. At this moment sed has executed all the commands (in this case just one). The default action when all the commands have been executed is to print the pattern space, followed by newline. So the output from the first line is "abc\n".

Now sed reads in the second line "foo" and executes s/foo/bar/. This replaces "foo" with "bar". The pattern space now contains just "bar". The end of the script has been reached and sed prints out the pattern space, followed by newline. The output from the second line is "bar\n".

Now the 3rd line is read in. The pattern space is now "123-foo-456". Since there is "foo" in the text, the s/foo/bar/ is successful and the pattern space is now "123-bar-456". The end is reached and sed prints the pattern space. The output is "123-bar-456\n".

All the lines of the input have been read at this moment and sed exits. The output from running the script on our example file is:

abc
bar
123-bar-456
In this example we never used the hold buffer because there was no need for temporary storage.

Before we look at an example with temporary storage, let's take a look at three command line switches: -n, -e and -i. First -n.

If you specify -n to sed, like this:

sed -n 's/foo/bar/'

Then sed will no longer print the pattern space when it reaches the end of the script. So if you run this program on our sample file above, there will be no output. You must use sed's p command to force sed to print the line:

sed -n 's/foo/bar/; p'

As you can see, sed commands are separated by the ; character. You can also use -e switch to separate the commands:

sed -n -e 's/foo/bar/' -e 'p'

It's the same as if you used ;. Next, let's take a look at the -i command line argument. This one forces sed to do in-place editing of the file, meaning it reads the contents of the file, executes the commands, and places the new contents back in the file.

Here is an example. Suppose you have a file called "users", with the following content:


And you wish to replace the ":" symbol with ";" in the whole file. Then you can do it as easily as:

sed -i 's/:/;/' users

It will silently execute the s/:/;/ command on all lines in the file and do all substitutions. Be very careful when using -i as it's destructive and it's not reversible! It's always safer to run sed without -i, and then replace the file yourself.
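For example, here is the safer, two-step way to do the same substitution without -i - write the result to a new file, inspect it, and replace the original yourself (the "users" contents below are made up just for this demonstration):

```shell
# Create a sample users file (made-up contents, just for the demo).
printf 'john:admin\nmary:staff\n' > users

# Run sed without -i, writing the result to a temporary file.
sed 's/:/;/' users > users.new

# Inspect users.new, and only then replace the original.
mv users.new users

cat users
# john;admin
# mary;staff
```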

Alternatively you can specify a file extension to the -i option. This way sed will make a backup copy of the file before it makes the in-place modifications.

For example, if you specify -i.bak, like this:

sed -i.bak 's/:/;/' users

Then sed will create users.bak before modifying the contents of users file.

Actually, before we look at the hold buffer, let's take a look at addresses and ranges. Addresses allow you to restrict sed commands to certain lines, or ranges of lines.

The simplest address is a single number that limits sed commands to the given line number:

sed '5s/foo/bar/'

This limits the s/foo/bar/ only to the 5th line of file or input stream. So if there is a "foo" on the 5th line, it will be replaced with "bar". No other lines will be touched.

Addresses can also be inverted with the ! character after the address. To match all lines that are not the 5th line (lines 1-4, plus lines 6 and beyond), do this:

sed '5!s/foo/bar/'

The inversion can be applied to any address.

Next, you can also limit sed commands to a range of lines by specifying two numbers, separated by a comma:

sed '5,10s/foo/bar/'

In this one-liner the s/foo/bar/ is executed only on lines 5 - 10, inclusive. Here is a quick, useful one-liner. Suppose you want to print lines 5 - 10 in the file. You can first disable implicit line printing with the -n command line switch, and then use the p command on lines 5 - 10:

sed -n '5,10p'

This will execute the p command only on lines 5 - 10. No other lines will be output. Pretty neat, isn't it?
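Here is a quick run over the numbers 1-20 (using seq to generate the input):

```shell
# Print only lines 5 through 10 of the input.
seq 20 | sed -n '5,10p'
# 5
# 6
# 7
# 8
# 9
# 10
```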

There is a special address $ that matches the last line of the file. Here is an example that prints the last line of the file:

sed -n '$p'

As you can see, the p command has been limited to $, which is the last line of input.

Next, there is also a single regular expression address match like this /regex/. If you specify a regex before a command, then the command will only get executed on lines that match the regex. Check this out:

sed -n '/a\+b\+/p'

Here the p command will get called only on lines that match a\+b\+ regular expression, which means one or more letters "a" followed by one or more letters "b". For example, it prints lines like "ab", "aab", "aaabbbbb", "foo-123-ab", etc. Note how the + has to be escaped. That's because sed uses basic regular expressions by default. You can enable extended regular expressions by using the -r command line switch:

sed -rn '/a+b+/p'

This way you don't need to escape meta-characters like +, ( and ).

There is also an expression to match a range between two regexes. Here is an example,

sed '/foo/,/bar/d'

This one-liner uses the d command, which stands for delete. It deletes a range of lines, starting with the first line that matches the "/foo/" regex and ending with the first following line that matches the "/bar/" regex, inclusive.
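Here is a tiny demonstration of the range delete on a made-up five-line input:

```shell
# Delete everything from the first "foo" line through the next "bar" line.
printf 'one\nfoo\ntwo\nbar\nthree\n' | sed '/foo/,/bar/d'
# one
# three
```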

Now let's take a look at the hold buffer. Suppose you have a problem where you want to print the line before the line that matches a regular expression. How do you do this? If sed didn't have a hold buffer, things would be tough, but with hold buffer we can always save the current line to the hold buffer, and then let sed read in the next line. Now if the next line matches the regex, we would just print the hold buffer, which holds the previous line. Easy, right?

The command for copying the current pattern space to the hold buffer is h. The command for copying the hold buffer back to the pattern space is g. The command for exchanging the hold buffer and the pattern space is x. We just have to choose the right commands to solve this problem. Here is the solution:

sed -n '/regex/{x;p;x}; h'

It works this way - every line gets copied to the hold buffer with the h command at the end of the script. However, for every line that matches the /regex/, we exchange the hold buffer with the pattern space by using the x command, print it with the p command, and then exchange the buffers back, so that if the next line matches the /regex/ again, we could print the current line.

Also notice the command grouping. Several commands can be grouped and executed only for a specific address or range. In this one-liner the command group is {x;p;x} and it gets executed only if the current line matches /regex/.

Note that this one-liner doesn't work if the first line of the input matches /regex/. To fix this, we can limit the p command to all lines that are not the first line with the 1! inverted address match:

sed -n '/regex/{x;1!p;x}; h'

Notice the 1!p. This says - call the p command on all the lines that are not the 1st line. This prevents anything from being printed in case the first line matches /regex/.
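Here is the fixed one-liner run on a made-up input, where the line before the matching line is "beta":

```shell
# Print the line before the line that matches "match".
printf 'alpha\nbeta\nmatch\ngamma\n' | sed -n '/match/{x;1!p;x}; h'
# beta
```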

Well, that's it! I think this introduction explains the most important concepts in sed, including various command line switches, the four spaces and various sed commands.

If you wish to learn more, I suggest you get a copy of my "Sed One-Liners Explained" e-book. The e-book contains exactly 100 well-explained one-liners. Once you work through them, you'll have rewired your brain to "think in sed". In other words, you'll have learned how to manipulate the pattern space, the hold buffer and you'll know when to print the data to get the results that you need.

Have fun!

If you enjoy my writing, you can subscribe to my blog, follow me on Twitter or Google+.

This article is part of the article series "Sed One-Liners Explained."

I love writing about programming and I am happy to announce my second e-book called "Sed One-Liners Explained". This book is based on my popular "Famous Sed One-Liners Explained" article series that has been read over 500,000 times.

I reviewed all the one-liners in the series, fixed various mistakes, greatly improved the explanations, added a bunch of new one-liners, bringing the total count to 100, and added three new chapters - an introduction to sed, a summary of sed addresses and ranges, and a chapter on debugging sed scripts with sed-sed.

Table of Contents

The e-book is 98 pages long and it explains exactly 100 one-liners. It's divided into the following chapters:

  • Preface.
  • 1. Introduction to sed.
  • 2. Line Spacing.
  • 3. Line Numbering.
  • 4. Text Conversion and Substitution.
  • 5. Selective Printing of Certain Lines.
  • 6. Selective Deletion of Certain Lines.
  • 7. Special sed Applications.
  • Appendix A. Summary of All sed Commands.
  • Appendix B. Addresses and Ranges.
  • Appendix C. Debugging sed Scripts with sed-sed.
  • Index.

What's sed?

Sed is the superman of UNIX stream editing. It's a small utility that's present on every UNIX system and it transforms one stream of text into another. Let's take a look at several practical examples that sed can carry out easily. All these examples and many more are explained in the e-book.

I have also made the first chapter of the book, Introduction to sed, freely available. Please download the e-book preview to read it. The introductory chapter explains the general principles of sed, introduces the four spaces of sed, addresses and ranges, and various command line flags.

Example 1: Replace "lamb" with "goat" on every line

sed 's/lamb/goat/'

This one-liner uses the famous s/.../.../ command. The s command substitutes the text in the first part of the command with the text in the second part. In this one-liner it replaces "lamb" with "goat".

A very detailed explanation of how sed reads the lines, how it executes the commands and how the printing happens is presented in the freely available introduction chapter. Please take a look.

Example 2: Replace only the second occurrence of "lamb" with "goat" on every line

sed 's/lamb/goat/2'

Sed is the only tool that I know of that takes a numeric argument to the s command. The numeric argument, in this case 2, specifies which occurrence of the text to replace. In this example only the 2nd occurrence of "lamb" gets replaced with "goat".
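A quick demonstration on a made-up line containing three lambs:

```shell
# Only the 2nd "lamb" on the line gets replaced.
echo 'lamb lamb lamb' | sed 's/lamb/goat/2'
# lamb goat lamb
```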

Example 3: Number the lines in a file

sed = file | sed 'N; s/\n/: /'

This one-liner is actually two one-liners. The first one uses the = command that inserts a line containing the line number before every original line in the file. Then this output gets piped to the second sed command that joins two adjacent lines with the N command. When joining lines with the N command, a newline character \n is placed between them. Therefore it uses the s command to replace this newline \n with a colon followed by a space ": ".

So for example, if the file contains lines:

hello world
good job
sunny day

Then after running the one-liner, the result is going to be:

1: hello world
2: good job
3: sunny day

Example 4: Delete every 2nd line

sed 'n;d'

This one-liner uses the n command, which prints the current line (actually the current pattern space, see the introduction chapter for an in-depth explanation) and then reads the next line of input into the pattern space. Then sed executes the d command, which deletes the pattern space and starts the next cycle without printing. This way the 1st line gets printed, the 2nd line gets deleted, then the 3rd line gets printed again, then the 4th gets deleted, etc.
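Running it on the numbers 1-6 shows the odd-numbered lines surviving:

```shell
# Keep line 1, delete line 2, keep line 3, and so on.
seq 6 | sed 'n;d'
# 1
# 3
# 5
```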

Example 5: ROT 13 encode every line

sed 'y/abcdefghijklmnopqrstuvwxyz/nopqrstuvwxyzabcdefghijklm/; y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/NOPQRSTUVWXYZABCDEFGHIJKLM/'

Here the y/set1/set2/ command is used. The y command substitutes elements in the set1 with the corresponding elements in the set2. The first y command replaces all lowercase letters with their 13-char-shifted counterparts, and the second y command does the same for the uppercase letters. So for example, character a gets replaced by n, b gets replaced by o, character Z gets replaced by M, etc.
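For example, piping a made-up greeting through the full two-y-command one-liner:

```shell
# ROT13: every letter is replaced by the letter 13 positions away.
echo 'Hello, World!' | sed 'y/abcdefghijklmnopqrstuvwxyz/nopqrstuvwxyzabcdefghijklm/; y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/NOPQRSTUVWXYZABCDEFGHIJKLM/'
# Uryyb, Jbeyq!
```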

Sed is actually very powerful. It's as powerful as a Turing machine, meaning you can write any computer program in it. Check out these programs written in sed. Run them as sed -f file.sed:

After you read the e-book you'll be able to understand all these complex programs!

Book Preview

See the quality of my work before you buy the e-book. I have made the first chapter, Introduction to sed, freely available. The preview also includes the full table of contents, preface and the first page of chapter two.

Buy it now!

The price of the e-book is $9.95 and it can be purchased via PayPal:


After you have made the payment, my automated e-book processing system will send you the PDF e-book in a few minutes!

Tweet about my book!

Help me spread the word about my new book! I prepared a special link that you can use to tweet about it:

What's next?

I am not stopping here. I love writing about programming and my next book is going to be "Perl One-Liners Explained", based on my "Famous Perl One-Liners Explained" article series. Expect this book in a few months!


Enjoy the book and don't forget to leave comments about it!

Also if you're interested, take a look at my first e-book called "Awk One-Liners Explained". It's written in the same style as this e-book and it teaches practical Awk through many examples.


At Browserling we are huge open-source fans and we have open-sourced 90 node.js modules! That's right! 90 node.js modules. All written from scratch. We crazy!

Here is the complete list of all the modules together with their brief descriptions. We have published all the modules on GitHub, which is the best tool for getting things done and collaborating. All of them are well documented, so just click the ones you're interested in to read more and see examples.

We'd love it if you followed us on GitHub. I am pkrumins on GitHub and James Halliday, co-founder of Browserling, is substack.

Also check out Browserling:

And read how we raised $55,000 seed funding for Browserling.

Here are all the modules we have written.

1. dnode

DNode is an asynchronous object-oriented RPC system for node.js that lets you call remote functions. Here is an example. You have your server.js:

var dnode = require('dnode');

var server = dnode({
    zing : function (n, cb) { cb(n * 100) }
});
server.listen(5050);

This starts dnode server on port 5050 and exports the zing function that asynchronously multiplies number n by 100.

And you have your client.js:

var dnode = require('dnode');

dnode.connect(5050, function (remote) {
    remote.zing(66, function (n) {
        console.log('n = ' + n);
    });
});

This connects to the dnode server at port 5050, calls the zing method and passes it a callback function that then gets called from the server, and the client outputs "n = 6600".

We have built everything at Browserling using dnode, all our processes communicate via dnode. Also thousands of people in node.js community use dnode. It's the most awesome library ever written.

dnode on github

2. node-browserify

Node-browserify is browser-side require() for your node modules and npm packages. It automatically converts node.js modules that are supposed to run through node into code that runs in the browser! It walks the AST to read all the require()s recursively and prepares a bundle that has everything you need, including pulling in libraries you might have installed using npm!

Here are the node-browserify features:

  • Relative require()s work browser-side just as they do in node.
  • Coffee script gets automatically compiled and you can register custom compilers of your own!
  • Browser-versions of certain core node modules such as path, events, and vm are included as necessary automatically.
  • Command-line bundling tool.

Crazy if you ask me.

node-browserify on github

3. node-lazy

Node-lazy is lazy lists for node.js. It comes really handy when you need to treat a stream of events like a list. The best use case is returning a lazy list from an asynchronous function, and having data pumped into it via events. In asynchronous programming you can't just return a regular list because you don't yet have data for it. The usual solution so far has been to provide a callback that gets called when the data is available. But doing it this way you lose the power of chaining functions and creating pipes, which leads to ugly interfaces.

Check out this toy example, first you create a Lazy object:

var Lazy = require('lazy');

var lazy = new Lazy;

lazy
  .filter(function (item) {
    return item % 2 == 0;
  })
  .take(5)
  .map(function (item) {
    return item*2;
  })
  .join(function (xs) {
    console.log(xs);
  });

This code says that lazy is going to be a lazy list that filters even numbers, takes first five of them, then multiplies all of them by 2, and then calls the join function (think of join as in threads) on the final list.

And now you can emit 'data' events with data in them at some point later,

[0,1,2,3,4,5,6,7,8,9,10].forEach(function (x) {
  lazy.emit('data', x);
});

The output will be produced by the join function, which will output the expected [0, 4, 8, 12, 16].

Here is a concrete example, used in node-iptables,

var lazy = require('lazy');
var spawn = require('child_process').spawn;
var iptables = spawn('iptables', ['-L', '-n', '-v']);

new lazy(iptables.stdout)
    .lines
    .map(String)
    .skip(2) // skips the two lines that are iptables header
    .map(function (line) {
        // packets, bytes, target, pro, opt, in, out, src, dst, opts
        var fields = line.trim().split(/\s+/, 9);
        return {
            parsed : {
                packets : fields[0],
                bytes : fields[1],
                target : fields[2],
                protocol : fields[3],
                opt : fields[4],
                in : fields[5],
                out : fields[6],
                src : fields[7],
                dst : fields[8]
            },
            raw : line.trim()
        };
    });

It takes the iptables.stdout stream, converts it to a list of lines via .lines, then converts this list to String objects (cause the lines are Buffers), then .skips the first two lines (iptables header lines), and finally maps a function on each line that converts them into a data structure.

node-lazy at github

4. node-burrito

Burrito makes it easy to do crazy stuff with the JavaScript AST. This is super useful if you want to roll your own stack traces or build a code coverage tool.

Here is an example,

var burrito = require('burrito');

var src = burrito('f() && g(h())\nfoo()', function (node) {
    if ( === 'call') node.wrap('qqq(%s)');
});

console.log(src);

This wraps all function calls in qqq function:

qqq(f()) && qqq(g(qqq(h())));
qqq(foo());


This way you can do some crazy stuff in qqq to find more about function calls in your code.

We use node-burrito for Testling.

node-burrito at github

5. js-traverse

Js-traverse traverses and transforms objects by visiting every node on a recursive walk.

Here is an example:

var traverse = require('traverse');
var obj = [ 5, 6, -3, [ 7, 8, -2, 1 ], { f : 10, g : -13 } ];

traverse(obj).forEach(function (x) {
    if (x < 0) this.update(x + 128);
});


This example traverses the obj and returns a new object with negative values (+ 128)'d:

[ 5, 6, 125, [ 7, 8, 126, 1 ], { f: 10, g: 115 } ]

Here is another example,

var traverse = require('traverse');

var obj = {
    a : [1,2,3],
    b : 4,
    c : [5,6],
    d : { e : [7,8], f : 9 },

var leaves = traverse(obj).reduce(function (acc, x) {
    if (this.isLeaf) acc.push(x);
    return acc;
}, []);


This example uses .isLeaf to determine if the node being traversed is a leaf node. If it is, it accumulates it. The output is all the leaf nodes:

[ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]

js-traverse at github

6. jsonify

Jsonify provides Douglas Crockford's JSON implementation without modifying any globals.

jsonify at github

7. node-garbage

Node-garbage generates random garbage json data. Useful for fuzz testing.

Here is an example run in the node interpreter:

> var garbage = require('garbage')
> garbage()
{ '0\t4$$c(C&s%': {},
  '': 2.221633433726717,
  '!&pQw5': '<~.;@,',
  'I$t]hky=': {},
  '{4/li(MDYX"': [] }

node-garbage at github

8. node-ben

Node-ben benchmarks synchronous and asynchronous node.js code snippets.

Here is an example of synchronous benchmarking:

var ben = require('ben');

var ms = ben(function () {
    // the code being benchmarked goes here
});
console.log(ms + ' milliseconds per iteration');


0.0024 milliseconds per iteration

And here is an example of asynchronous benchmarking:

var ben = require('ben');

var test = function (done) {
    setTimeout(done, 10);
};

ben.async(test, function (ms) {
    console.log(ms + ' milliseconds per iteration');
});


10.39 milliseconds per iteration

node-ben at github

9. node-bigint

Node-bigint implements arbitrary precision integral arithmetic for node.js. This library wraps around libgmp's integer functions to perform infinite-precision arithmetic.

Here are several examples:

var bigint = require('bigint');

var b = bigint('782910138827292261791972728324982')


<BigInt 75067108192986261319312244199576>

node-bigint at github

10. node-mkdirp

Node-mkdirp does what mkdir -p does in the shell (creates a directory structure if it doesn't exist).

var mkdirp = require('mkdirp');

mkdirp('/tmp/foo/bar/baz', 0755, function (err) {
    if (err) console.error(err)
    else console.log('pow!')
});

This creates the path /tmp/foo/bar/baz if it doesn't exist, and gives each dir perms 0755.

node-mkdirp at github

11. npmtop

Npmtop is a silly program that ranks npm contributors by number of packages.

npmtop at github

12. node-sorted

Node-sorted is an implementation of JavaScript's Array that is always sorted.

Here is an example, run in node's interpreter:

> var sorted = require('sorted')
> var xs = sorted([ 3, 1, 2, 0 ])
> xs
<Sorted [0,1,2,3]>
> xs.push(2.5)
> xs
<Sorted [0,1,2,2.5,3]>

node-sorted at github

13. node-fileify

Node-fileify is middleware for browserify to load non-js files like templates.

node-fileify at github

14. node-bunker

Bunker is a module to calculate code coverage using native JavaScript node-burrito AST trickery.

Here is an example:

var bunker = require('bunker');
var b = bunker('var x = 0; for (var i = 0; i < 30; i++) { x++ }');

var counts = {};

b.on('node', function (node) {
    if (!counts[]) {
        counts[] = { times : 0, node : node };
    }
    counts[].times++;
});

Object.keys(counts).forEach(function (key) {
    var count = counts[key];
    console.log(count.times + ' : ' + count.node.source());
});

This creates a bunker from the code snippet:

var x = 0; for (var i = 0; i < 30; i++) { x++ }

And then counts how many times each statement was executed. The output is:

1 : var x=0;
31 : i<30
30 : i++
30 : x++;
30 : x++

node-bunker at github

15. node-hat

Node-hat generates random IDs and avoids collisions cause entropy is scary. Example:

> var hat = require('hat');
> var id = hat();
> console.log(id);

node-hat at github

16. node-detective

Node-detective finds all calls to require() no matter how crazily nested using a proper walk of the AST.

Here is an example. Suppose you have a source file called program.js:

var a = require('a');
var b = require('b');
var c = require('c');

Then this program that uses node-detective will find all the requires:

var detective = require('detective');
var fs = require('fs');

var src = fs.readFileSync(__dirname + '/program.js');
var requires = detective(src);

console.log(requires);

[ 'a', 'b', 'c' ]

node-detective at github

17. node-isaacs

Node-isaacs is your personal Isaac Schlueter. Just a fun module written for Isaac, the author of npm, on his birthday.

node-isaacs at github

18. testling

Testling is the new testing framework that we're releasing as I write this. This is going to be the open-source part of it, and we'll run some magic behind the scenes to run it on Browserling browsers.

testling at github

19. node-intestine

Node-intestine provides the guts of a unit testing framework. With intestine it's not hard at all to roll your own testing framework in whatever style you want with a pluggable runner so you can easily swap in code coverage via node-bunker or stack-traces via node-stackedy.

Check out the complicated code examples on GitHub. They are too complex to explain in this brief article.

node-intestine at github

20. node-hashish

Node-hashish is a library for manipulating hash data structures. It is distilled from the finest that Ruby, Perl, and Haskell have to offer by way of hash/map interfaces.

Hashish provides a chaining interface, where you can do things like:

var Hash = require('hashish');

Hash({ a : 1, b : 2, c : 3, d : 4 })
    .map(function (x) { return x * 10 })
    .filter(function (x) { return x < 30 })
    .forEach(function (x, key) {
        console.log(key + ' => ' + x);
    });

Which produces the output:

a => 10
b => 20

node-hashish at github

21. node-binary

Node-binary can be used to unpack multibyte binary values from buffers and streams. You can specify the endianness and signedness of the fields to be unpacked, too.

This module is a cleaner and more complete version of bufferlist's binary module that runs on pre-allocated buffers instead of a linked list.

We use this module for node-rfb, which is an implementation of VNC's RFB protocol.
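To make "endianness and signedness" concrete, here is what the same bytes look like unpacked both ways using plain Buffer methods (node-binary layers its chaining interface over this kind of work; this snippet is only an illustration, not the module's API):

```javascript
// Four bytes, read as 16-bit unsigned integers in both byte orders.
var buf = new Buffer([ 0x03, 0xe8, 0xe8, 0x03 ]);

console.log(buf.readUInt16BE(0)); // 1000 - big-endian: 0x03, 0xe8
console.log(buf.readUInt16LE(2)); // 1000 - little-endian: 0xe8, 0x03
```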

node-binary at github

22. jsup

Jsup updates JSON strings in-place, preserving the structure. Really useful for preserving the structure of configuration files when they are changed programmatically.

Suppose you have this JSON file called stub.json:

    "a" : [   1,  2,  333333,  4   ] ,
    "b" : [ 3, 4, { "c" : [ 5, 6 ] } ],
    "c" :
    "d" : null

And you have this jsup program:

var jsup = require('jsup');
var fs = require('fs');
var src = fs.readFileSync(__dirname + '/stub.json', 'utf8');

var s = jsup(src)
    .set([ 'a', 2 ], 3)
    .set([ 'c' ], 'lul')
    .stringify();

console.log(s);

After running this program, the output is:

    "a" : [   1,  2,  3,  4   ] ,
    "b" : [ 3, 4, { "c" : [ 5, 6 ] } ],
    "c" :
    "d" : null

Notice how 333333 was changed to just 3 and the two spaces before it were preserved; and notice how the 444444 was changed to the string "lul" and formatting again was preserved. Really awesome!

jsup at github

23. node-ent

Node-ent encodes and decodes HTML entities. Check it:

> var ent = require('ent');
> console.log(ent.encode('<span>©moo</span>'))
> console.log(ent.decode('&pi; &amp; &rho;'));
π & ρ

node-ent at github

24. node-chainsaw


Node-chainsaw can be used to build chainable fluent interfaces the easy way in node.js.

With this meta-module you can write modules with chainable interfaces. Chainsaw takes care of all of the boring details and makes nested flow control super simple, too.

Just call Chainsaw with a constructor function like in the example below. Inside your methods, call to move along to the next event and saw.nest() to create a nested chain.

var Chainsaw = require('chainsaw');

function AddDo (sum) {
    return Chainsaw(function (saw) {
        this.add = function (n) {
            sum += n;
  ;
        }; = function (cb) {
            saw.nest(cb, sum);
        };
    });
}

AddDo(0)
    .add(5)
    .add(10)
    .do(function (sum) {
        if (sum > 12) this.add(-10);
    })
    .do(function (sum) {
        console.log('Sum: ' + sum);
    });

node-chainsaw at github

25. mrcolor

Mr. Color generates colors for you. That's it. Here is an example:

var mr = require('mrcolor');
mr.take(5).forEach(function (color) {
    console.log('rgb(' + color.rgb().join(',') + ')');
});

It can also be used from the command line.


mrcolor at github

26. node-optimist

Node-optimist parses command-line arguments for you. And that's the way it should be done. No more ancient option parsing strings!

Check this out:

var argv = require('optimist').argv;

Done! Your arguments are parsed in argv.

Wondering what I mean? If you run the program as node prog.js --x=5 --y=foo, then argv.x is "5" and argv.y is "foo".
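The core of what happens to --key=value arguments can be sketched in a few lines of plain JavaScript (an illustration of the idea only, not node-optimist's actual implementation):

```javascript
// Sketch: collect "--key=value" tokens into an object, the way argv ends up
// populated; everything else goes into the argv._ array.
function parseArgv(args) {
    var argv = { _ : [] };
    args.forEach(function (arg) {
        var m = /^--([^=]+)=(.*)$/.exec(arg);
        if (m) argv[m[1]] = m[2];
        else argv._.push(arg);
    });
    return argv;
}

console.log(parseArgv([ '--x=5', '--y=foo' ]));
// { _: [], x: '5', y: 'foo' }
```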

Node-optimist also generates usage automatically, and also has a demand function to demand arguments:

var argv = require('optimist')
    .usage('Usage: $0 -x [num] -y [num]')
    .demand(['x','y'])
    .argv;
console.log(argv.x / argv.y);

If you run it with x and y:

$ ./divide.js -x 55 -y 11
5

With y missing:

$ node ./divide.js -x 4.91 -z 2.51
Usage: node ./divide.js -x [num] -y [num]

  -x  [required]
  -y  [required]

Missing required arguments: y

That's how option parsing should be done!

node-optimist at github

27. node-quack-array

Node-quack-array checks if an object quacks like an array, and if it does, it returns it as a real array. Here are two examples, run in the node.js interpreter.

Shallow quacking:

> var quack = require('quack-array');
> quack({ 0 : 'a', 1 : 'b' })
[ 'a', 'b' ]
> quack({ 0 : 'a', 1 : 'b', x : 'c' })
{ '0': 'a', '1': 'b', x: 'c' }

Deep quacking:

> var quack = require('quack-array');
> quack.deep({ 0 : { 0 : 'a', 1 : 'b' }, 1 : 'c' })
[ [ 'a', 'b' ], 'c' ]


node-quack-array at github

28. node-stackedy

Use node-stackedy to roll your own stack traces and control program execution through AST manipulation!

Here is an example program called stax.js,

var stackedy = require('stackedy');
var fs = require('fs');

var src = fs.readFileSync(__dirname + '/src.js');
var stack = stackedy(src, { filename : 'stax.js' }).run();

stack.on('error', function (err) {
    console.log('Error: ' + err.message);

    var c = err.current;
    console.log('  in ' + c.filename + ' at line ' + c.start.line);

    err.stack.forEach(function (s) {
        console.log('  in ' + s.filename + ', '
            + s.functionName + '() at line ' + s.start.line
        );
    });
});

It reads and executes source code file src.js:

function f () { g() }
function g () { h() }
function h () { throw 'moo' }


As you can see, f calls g, which calls h, which throws the exception "moo".

Now when stax.js is run, it produces a custom stack trace via stackedy, and the output is:

Error: moo
  in stax.js at line 2
  in stax.js, h() at line 1
  in stax.js, g() at line 0
  in stax.js, f() at line 4

Very, very, very awesome.

node-stackedy at github

29. node-shimify

Node-shimify is a browserify middleware to prepend es5-shim so your browserified bundles work in old browsers.

node-shimify at github

30. node-keyboardify

Node-keyboardify is a browserify middleware that displays an on-screen keyboard.

node-keyboardify at github

31. node-keysym

Node-keysym converts among X11 keysyms, unicode code points, and string names. We need it for sending the correct keyboard codes to Browserling.

Here is an example, run in node interpreter:

> var ks = require('keysym');
> console.dir(ks.fromUnicode(8))
[ { keysym: 65288, unicode: 8, status: 'f', names: [ 'BackSpace' ] } ]

node-keysym at github

32. node-jadeify

Node-jadeify is a browserify middleware for browser-side jade templates. That's right. Jade in your browser!

node-jadeify at github

33. node-findit

Node-findit is a recursive directory walker for node.js. It's as simple to use as this:

require('findit').find('/usr', function (file) {
    console.log(file);
});

This finds all files in the directory tree starting at /usr. It can also be used event style:

var finder = require('findit').find('/usr');

finder.on('directory', function (dir) {
    console.log(dir + '/');
});

finder.on('file', function (file) {
    console.log(file);
});

Node-findit emits a directory event for directories and a file event for files.

node-findit at github

34. node-progressify

Node-progressify brings hand-drawn progress bars to your webapps with browserify! It looks like this:

node-progressify at github

35. node-buffers

Node-buffers treats a collection of Buffers as a single contiguous partially mutable Buffer. Where possible, operations execute without creating a new Buffer and copying everything over.

Here is an example:

var Buffers = require('buffers');
var bufs = Buffers();
bufs.push(new Buffer([1,2,3]));
bufs.push(new Buffer([4,5,6,7]));
bufs.push(new Buffer([8,9,10]));

console.log(bufs.slice(2, 8));

Output:

<Buffer 03 04 05 06 07 08>

node-buffers at github

36. node-commondir

Node-commondir computes the closest common parent directory among an array of directories.

Check out these two examples, run in node interpreter:

> var commondir = require('commondir');
> commondir([ '/x/y/z', '/x/y', '/x/y/w/q' ])
'/x/y'

Works with relative paths, too:

> var commondir = require('commondir')
> commondir('/foo/bar', [ '../baz', '../../foo/quux', './bizzy' ])
'/foo'

node-commondir at github

37. node-nub

Node-nub returns all the unique elements of an array. You can also supply your own uniqueness comparison function.

Example, run in node:

$ node
> var nub = require('nub')
> nub([1,2,2,3,1,3])
[ 1, 2, 3 ]
> nub.by([ 2, 3, 5, 7, 8 ], function (x,y) { return x + y === 10 })
[ 2, 3, 5 ]
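
The uniqueness check behind nub.by amounts to something like this (a simplified sketch, not node-nub's actual source):

```javascript
// Simplified sketch of nub.by (not node-nub's actual source): keep an element
// only if the comparator returns false against every element kept so far.
function nubBy(xs, cmp) {
    var out = [];
    xs.forEach(function (x) {
        var dup = out.some(function (y) { return cmp(x, y); });
        if (!dup) out.push(x);
    });
    return out;
}

console.log(nubBy([ 2, 3, 5, 7, 8 ], function (x, y) { return x + y === 10 }));
// [ 2, 3, 5 ]
```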

node-nub at github

38. node-resolve

Node-resolve implements the node require.resolve() algorithm, except you can pass in the file to compute paths relative to, along with your own require.paths, without updating the global copy (which doesn't even work in node >= 0.5).

node-resolve at github

39. node-seq

Node-seq is an asynchronous flow control library with a chainable interface for sequential and parallel actions. Even the error handling is chainable.

Each action in the chain operates on a stack of values. There is also a variables hash for storing values by name.

This is the most powerful asynchronous flow control library there is. Check out this example:

var fs = require('fs');
var Hash = require('hashish');
var Seq = require('seq');

Seq()
    .seq(function () {
        fs.readdir(__dirname, this);
    })
    .flatten()
    .parEach(function (file) {
        fs.stat(__dirname + '/' + file, this.into(file));
    })
    .seq(function () {
        var sizes = Hash.map(this.vars, function (s) { return s.size });
        console.dir(sizes);
    })
;

This reads the directory listing of the current directory, flattens the resulting list of files, stats each file in parallel, and then sequentially collects the file sizes.

Output (given there are only two files in the current dir):

{ 'stat_all.js': 404, 'parseq.js': 464 }

node-seq at github

40. node-rfb

Node-rfb implements VNC's RFB protocol in node.js. It's the most important module at Browserling.

node-rfb at github

41. node-wordwrap

Node-wordwrap wraps words. Example:

var wrap = require('wordwrap')(15);
console.log(wrap('You and your whole family are made out of meat.'));

Output:

You and your
whole family
are made out
of meat.

node-wordwrap at github

42. dnode-perl

Dnode-perl implements the dnode protocol in Perl.

dnode-perl at github

43. node-ssh

Node-ssh was an attempt to create an ssh server for node.js by using libssh. It doesn't work, but someone might find the code useful.

node-ssh at github

44. node-song

Node-song sings songs in node.js with a synthesized voice. Here is an example of how you'd make it sing:

var sing = require('song')();

// note: the original listing is truncated; the sing([ ... ]) call structure
// is an assumed reconstruction around the surviving note data
sing([
    {
        note : 'E3',
        durations : [ { beats : 0.3, text : 'hello' } ]
    },
    {
        note : 'F#4',
        durations : [ { beats : 0.3, text : 'cruel' } ]
    },
    {
        note : 'C3',
        durations : [ { beats : 0.3, text : 'world' } ]
    }
]);

node-song at github

45. singsong

Singsong is a web interface for node-song.

singsong at github

46. node-freestyle

Node-freestyle is a really terrible freestyle rhyming markov rap generator.

Example of freestyle rap:

$ node examples/rap.js
house up as a a rare rare pleasure kernes
require extraordinary Extraordinary claims claims require require extraordinary bull
out there is a part of it turns
REMEMBER it turns out of the future full

node-freestyle at github

47. node-markov

Node-markov generates Markov chains for chatbots and freestyle rap contests.
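
The core idea is tiny; here is a toy sketch (hypothetical helper names, not node-markov's API):

```javascript
// Toy sketch of a markov chain (not node-markov's actual API): record which
// word follows which, then a generator walks this table picking successors.
function buildChain(text) {
    var words = text.split(/\s+/);
    var chain = {};
    for (var i = 0; i < words.length - 1; i++) {
        (chain[words[i]] = chain[words[i]] || []).push(words[i + 1]);
    }
    return chain;
}

var chain = buildChain('the cat sat on the mat');
console.log(chain['the']); // [ 'cat', 'mat' ]
```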

node-markov at github

48. rap-battle

Rap-battle is a rap battle server and competitor clients that use dnode. It takes input files of people's chats and outputs rap.

Here is a rap battle between ryah, the creator of node.js, and isaacs, the creator of npm:

substack : rap-battle $ node watch.js 
<isaacs> no scripting its arbitrary any lovecraftian c
<ryah> converting then it'd and be README's worth keeping FLEE
<ryah> standalone way to grab a long as long yeah, but than 2 locks
<isaacs> sth is missing what php i ala get sugar a syntactic FLOCKS
<ryah> it's was >64kb, that were shared utility functions are a cheaper
<isaacs> scripting I arbitrary call allowing DEEPER

rap-battle at github

49. node-rhyme

Node-rhyme finds rhymes. Example:

var rhyme = require('rhyme');
rhyme(function (r) {
    console.log(r.rhyme('bed').join(' '));
});


$ node examples/bed.js

node-rhyme at github

50. node-deck

Node-deck does uniform and weighted shuffling and sampling.

Uniform shuffle:

> var deck = require('deck');
> var xs = deck.shuffle([ 1, 2, 3, 4 ]);
> console.log(xs);
[ 1, 4, 2, 3 ]

Uniform sample:

> var deck = require('deck');
> var x = deck.pick([ 1, 2, 3, 4 ]);
> console.log(x);

Weighted shuffle:

> var deck = require('deck');
> var xs = deck.shuffle({
    a : 10,
    b : 8,
    c : 2,
    d : 1,
    e : 1
});
> console.log(xs);
[ 'b', 'a', 'c', 'd', 'e' ]

Weighted sample:

> var deck = require('deck');
> var x = deck.pick({
    a : 10,
    b : 8,
    c : 2,
    d : 1,
    e : 1
});
> console.log(x);
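
Weighted picking itself reduces to a few lines (a simplified sketch, not node-deck's actual source):

```javascript
// Simplified sketch of weighted picking (not node-deck's source): a key is
// chosen with probability proportional to its weight.
function weightedPick(weights) {
    var total = 0, key;
    for (key in weights) total += weights[key];
    var r = Math.random() * total;
    for (key in weights) {
        r -= weights[key];
        if (r < 0) return key;
    }
}

var x = weightedPick({ a : 10, b : 8, c : 2, d : 1, e : 1 });
console.log(x); // 'a' roughly 45% of the time
```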

node-deck at github

51. node-bufferlist

BufferList provides an interface to treat a linked list of buffers as a single stream. This is useful for events that produce many small Buffers, such as network streams.

This module is deprecated. Node-binary and node-buffers provide the functionality of bufferlist in similar but better ways.

Here is what it used to look like:

var sys = require('sys');
var Buffer = require('buffer').Buffer;
var BufferList = require('bufferlist').BufferList;

var b = new BufferList;
['abcde','xyz','11358'].forEach(function (s) {
    var buf = new Buffer(s.length);
    buf.write(s);
    b.push(buf);
});

sys.puts(b.take(10)); // abcdexyz11

node-bufferlist at github

52. node-permafrost

Permafrost uses JavaScript/EcmaScript harmony proxies to recursively trap updates to data structures and store the changing structures to disk automatically and transparently to the programming model.

Think ORM, but with a crazy low impedance mismatch.

This thing is still quite buggy. I wouldn't use it for anything important yet.

Here is an example:

var pf = require('permafrost');

pf('times.db', { times : 0 }, function (err, obj) {
    obj.times++;
    console.log(obj.times + ' times');
});

And then run it:

$ node times.js
1 times
$ node times.js
2 times
$ node times.js
3 times

node-permafrost at github

53. node-put

Node-put packs multibyte binary values into buffers with specific endiannesses.


var Put = require('put');
var buf = Put()
    .word16be(1337)
    .word8(1)
    .pad(5)
    .put(new Buffer('pow', 'ascii'))
    .word32le(9000)
    .buffer();

buf now contains 1337 encoded as a 16-bit big-endian integer, followed by 1, padded by 5 nuls, followed by "pow" in ascii, followed by 9000 encoded as a 32-bit little-endian integer.
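
To make the byte offsets concrete, here is the same 15-byte layout built with node's plain Buffer API (a sketch for illustration; node-put produces this buffer for you):

```javascript
// The same layout built by hand with node's Buffer API, to make the byte
// offsets explicit: 2 + 1 + 5 + 3 + 4 = 15 bytes total.
var buf = Buffer.alloc(15);
buf.writeUInt16BE(1337, 0);      // bytes 0-1: 1337, big endian
buf.writeUInt8(1, 2);            // byte 2: the value 1
                                 // bytes 3-7: left as nul padding
buf.write('pow', 8, 'ascii');    // bytes 8-10: 'pow'
buf.writeUInt32LE(9000, 11);     // bytes 11-14: 9000, little endian

console.log(buf);
```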

node-put at github

54. node-keyx

Node-keyx provides algorithms, parsers, and file formats for public key cryptography key exchanges.

Here is an example that generates a key pair and outputs the public key:

var keyx = require('keyx');
var keypair = keyx.generate('dss');



node-keyx at github

55. node-ssh-server

Node-ssh-server was an attempt to write an ssh server in node.js. It didn't work out, but someone might find the code in there useful.

node-ssh-server at github

56. node-ap

Node-ap does currying in JavaScript. Function.prototype.bind sets this, which is super annoying if you just want to curry over arguments while passing this through.

Instead you can do:

var ap = require('ap');
var z = ap([3], function (x, y) {
    return this.z * (x * 2 + y);
}).call({ z : 10 }, 4);

z is now 100: ap partially applies x = 3, the call supplies this.z = 10 and y = 4, and 10 * (3 * 2 + 4) = 100.

node-ap at github

57. node-source

Node-source finds all of the source files for a package.

Here is an example for jade:

var source = require('source')


[ 'jade',
  'jade/._index' ]

node-source at github

58. node-recon

Node-recon keeps your network connections alive in node.js no matter what. Recon looks like a regular tcp connection but it listens for disconnect events and tries to re-establish the connection behind the scenes. While the connection is down, write() returns false and the data gets buffered. When the connection comes back up, recon emits a drain event.

Here is an example. Run this program in node.js:

var recon = require('recon');
var conn = recon(4321);

conn.on('data', function (buf) {
    var msg = buf.toString().trim();
    console.log(msg); // (assumed; the original listing is truncated here)
});

It tries to keep the connection open to localhost:4321 no matter what.

To try it out, you can listen on port 4321 with netcat, type some stuff, kill netcat, and fire it up again to type some more stuff.

node-recon at github

59. npmdep

Npmdep builds dependency graphs for npm packages.

npmdep at github

60. node-waitlist

Node-waitlist manages consumers standing in queue for resources.
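
In spirit it is a queue of callbacks waiting for a resource; here is a minimal sketch (hypothetical API, not node-waitlist's actual interface):

```javascript
// Minimal sketch of the waitlist idea (hypothetical API, not node-waitlist's):
// consumers queue up, and each released resource goes to the next in line.
function Waitlist() {
    this.queue = [];
}
Waitlist.prototype.wait = function (cb) {
    this.queue.push(cb);
};
Waitlist.prototype.release = function (resource) {
    var cb = this.queue.shift();
    if (cb) cb(resource);
};

var wl = new Waitlist();
wl.wait(function (r) { console.log('first got ' + r); });
wl.wait(function (r) { console.log('second got ' + r); });
wl.release('browser-1'); // prints: first got browser-1
```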

node-waitlist at github

61. dnode-stack

Dnode-stack processes your webserver middleware for dnode connections.

dnode-stack at github

62. node-prox

Node-prox is a hookable socks5 proxy client and server in node.js.

Here is an example. This socks5 server creates a socks5 proxy on port 7890:

var socks5 = require('prox').socks5;
socks5.createServer(function (req, res) {
    // note: the and the listen call are assumed; the original
    // listing is truncated
    res.write('Requested ' + + ':' + req.port);
}).listen(7890);

And here is the client that connects to the socks5 server and then connects to a remote host through it:

var socks5 = require('prox').socks5;

var stream = socks5.createConnection('localhost', 7890)
    .connect('', 1337);

stream.on('data', function (buf) {
    console.log(buf.toString()); // (assumed; the original listing is truncated)
});

node-prox at github

63. dnode-ruby

Dnode-ruby implements the dnode protocol in Ruby.

dnode-ruby at github

64. node-dmesh

Manage pools of DNode services that fulfill specific roles. (Unfinished).

node-dmesh at github

65. rowbit

Rowbit is an IRC bot that we use at Browserling for notifications. It uses dnode for its plugin system, so plugins are just processes that connect to the rowbit server.

We get alerts like this in our IRC channel:

< rowbit> /!\ ATTENTION: (default-local) Somebody in the developer group is waiting in the queue! /!\
< rowbit> /!\ ATTENTION: (default-local) Somebody clicked the payment link! /!\

rowbit at github

66. node-rdesktop

Node-rdesktop implements the client side of the RDP protocol used for remote desktop stuff by Windows. (It's not finished.)

node-rdesktop at github

67. telescreen

Telescreen is used to manage processes running across many servers. We are not using it anymore.

telescreen at github

68. node-iptables

Node-iptables allows basic Linux firewall control via iptables.

For example, to allow TCP connections to port 34567 from a given source address, do:

var iptables = require('iptables');

iptables.allow({
    protocol : 'tcp',
    src : '',
    dport : 34567,
    sudo : true
});

We're using this for Browserling tunnels to control access to the tunneled ports.

node-iptables at github

69. node-passwd

Node-passwd lets you control UNIX users. It forks the passwd utility to do all the work.

For example, to create a new UNIX user, do:

var passwd = require('passwd');

// note: the first arguments to passwd.add are an assumed reconstruction;
// the original listing is truncated
passwd.add('pkrumins', 'password',
    { createHome : true },
    function (status) {
        if (status == 0) {
            console.log('great success! pkrumins added!');
        }
        else {
            console.log('not so great success! pkrumins not added! useradd command returned: ' + status);
        }
    }
);

We're using this for Browserling tunnels to control ssh user access.

node-passwd at github

70. node-jpeg

Node-jpeg is a C++ module that converts RGB or RGBA buffers to JPEG images in memory. It has synchronous and asynchronous interfaces.

node-jpeg at github

71. nodejs-proxy

Nodejs-proxy is a dumb HTTP proxy with IP and URL access control. I wrote about it a year ago here on my blog - nodejs http proxy.

nodejs-proxy at github

72. node-base64

Node-base64 is a module for doing base64 encoding in node.js. It was written before node.js had base64 encoding built in.

Here is how it works:

var base64_encode = require('base64').encode;
var buf = new Buffer('hello world');

console.log(base64_encode(buf)); /* Output: aGVsbG8gd29ybGQ= */

node-base64 at github

73. node-gif

Node-gif is a C++ nodejs module for creating GIF images and animated GIFs from RGB or RGBA buffers.

node-gif at github

74. node-supermarket

Node-supermarket is a key/value store based on sqlite for node.js.

Here is an example:

var Store = require('supermarket');

Store('users.db', function (err, db) {
    db.set('pkrumins', 'cool dude', function (error) {
        // value 'pkrumins' is now set to 'cool dude'
        db.get('pkrumins', function (error, value, key) {
            console.log(value); // cool dude
        });
    });
});

node-supermarket at github

75. node-des

Node-des is a C++ node.js module that does DES encryption and actually works (node's crypto module didn't work.)

We're using it together with node-rfb to do VNC authentication.

node-des at github

76. node-png

Node-png is a C++ module that converts RGB or RGBA buffers to PNG images in memory.

node-png at github

77. node-browser

Node-browser provides a browser for easy web browsing from node.js (not yet finished).

When it's finished it will work like this:

var Browser = require('browser');

var browser = new Browser;

// note: the original listing is truncated; the .post() wrappers below are an
// assumed reconstruction around the surviving request parameters
    .post('' + data.username, {
        op : 'login-main',
        user : data.username,
        passwd : data.password,
        id : '#login_login-main',
        renderstyle : 'html'
    })
    .get('' + data.subreddit)
    .get('' + data.subreddit + '/submit')
    .post('' + data.subreddit + '/submit', {
        uh : 'todo',
        kind : 'link',
        sr : data.subreddit,
        url : data.url,
        title : data.title,
        id : '#newlink',
        r : data.subreddit,
        renderstyle : 'html'
    });

This logs into reddit and posts a story.

node-browser at github

78. supermarket-cart

Supermarket-cart is a connect session store using node-supermarket.

supermarket-cart at github

79. node-video

Node-video is a C++ module that records Theora/ogg videos from RGB or RGBA buffers.

node-video at github

80. node-image

Node-image unifies node-png, node-jpeg and node-gif.

Here is an example usage:

var Image = require('image');

var png = new Image('png').encodeSync(buffer, width, height);
var gif = new Image('gif').encodeSync(buffer, width, height);
var jpeg = new Image('jpeg').encodeSync(buffer, width, height);

node-image at github

81. node-multimeter

Multimeter controls multiple ANSI progress bars on the terminal.

Here is a screenshot from console:

We're using this module for Testling!

node-multimeter at github

82. node-charm

Node-charm uses VT100 ANSI terminal escape codes to write colors and cursor positions. It's used by node-multimeter to output progress bars.
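
Under the hood this kind of terminal control is just ANSI escape sequences; a simplified sketch (not node-charm's actual API):

```javascript
// Sketch of the VT100/ANSI escapes that charm-style libraries emit
// (simplified, not node-charm's actual implementation).
function color(code, text) {
    return '\x1b[' + code + 'm' + text + '\x1b[0m';  // set color, then reset
}
function moveTo(row, col) {
    return '\x1b[' + row + ';' + col + 'H';          // absolute cursor position
}

console.log(color(31, 'red text'));     // 31 is the red foreground code
process.stdout.write(moveTo(1, 1));     // jump to the top-left corner
```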

node-charm at github

83. node-cursory

Node-cursory computes the relative cursor position from a stream that emits ansi events.
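
For example, a VT100 cursor position report has the form ESC[row;colR, and parsing it is nearly a one-liner (a sketch, not cursory's actual code):

```javascript
// Sketch: parse a VT100 cursor position report of the form ESC[row;colR,
// the kind of ansi event cursory computes positions from.
function parseCursorReport(s) {
    var m = /\x1b\[(\d+);(\d+)R/.exec(s);
    return m && { row : +m[1], col : +m[2] };
}

console.log(parseCursorReport('\x1b[12;40R')); // { row: 12, col: 40 }
```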

node-cursory at github

84. rfb-protocols

RFB-protocols is a node.js module for converting various RFB encodings (RRE, Hextile, Tight, etc.) to RGB buffers. Currently, though, it only implements hextile, and we're not really using it.

rfb-protocols at github

85. node-async

Node-async is an example C++ module that shows how to multiply two numbers asynchronously.

node-async at github

86. node-bufferdiff

Node-bufferdiff is a C++ module for node.js to test if two buffers are equal, fast.
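
What the C++ code speeds up is, conceptually, this loop (a plain JavaScript sketch, not node-bufferdiff's implementation):

```javascript
// Plain-JavaScript sketch of buffer equality (what node-bufferdiff does
// natively in C++ for speed): compare lengths, then every byte.
function buffersEqual(a, b) {
    if (a.length !== b.length) return false;
    for (var i = 0; i < a.length; i++) {
        if (a[i] !== b[i]) return false;
    }
    return true;
}

console.log(buffersEqual(Buffer.from([1, 2, 3]), Buffer.from([1, 2, 3]))); // true
console.log(buffersEqual(Buffer.from([1, 2, 3]), Buffer.from([1, 2, 4]))); // false
```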

node-bufferdiff at github

87. node-jsmin

Node-jsmin is Doug Crockford's original JavaScript minifier ported to node.js.

node-jsmin at github

88. dnode-protocol

This module implements the dnode protocol in a reusable form that is presently used by both the server-side and browser-side dnode code.

dnode-protocol at github

89. python.js

This is an implementation of Python in JavaScript. For the lulz.

python.js at github

90. node-time

I forgot that JavaScript has a Date object, so I wrote node-time, a C++ module that gets time information.

node-time at github

There is more!

Looking for more open-source projects? Check out my two posts: I pushed 30 of my projects to GitHub and I pushed 20 more projects to GitHub!

Love us!

Also, we'd really love it if you followed us on GitHub, Twitter, and Google+.

I am pkrumins on GitHub and @pkrumins on Twitter and Peteris Krumins on Google+.

James is substack on GitHub and @substack on Twitter and James Halliday on Google+.

My next blog post on Browserling will be the announcement of Testling, the automated test framework for Browserling. After that, a video of all Browserling features. Then the announcement of team plans for Browserling. Stay tuned!

So I participated in the 2nd annual 48-hour Node.js Knockout competition together with James Halliday, Joshua Holbrook, and David Wee. It was almost the same team as last year. This year we called ourselves Replicants and we created a real-time code coverage heatmapping application called Heatwave!

If you like our app, please give us a thumbs up; that will really help us out. You can vote for our app at the Replicants team page.

Just for the record, here is a screenshot of our application (try it here):

You can paste a code snippet right on the site and run it, and it will show a live code heatmap as it runs.

Or you can upload your code via a web form and it will get stored on the heatwave server, and you'll get a unique url with your heatwave that you can share. Like this:

You can also use curl to upload your code. That's the smartest hackery I have seen; the idea is James Halliday's. You can just do:

curl -sNT file.js

And that will upload the code to the heatwave server and respond with info on how to see it. Like this:

$ curl -sNT foo.js
Visit this site to run and manage the code:

 To upload more files:
    curl -sNT file.js

And with curl you can even upload multiple files to the same page, which is super neat.

We had really great teamwork. Josh, James, and David hacked from Joyent, I hacked remotely from Latvia, and we communicated over IRC, just like last year. Each of us had a separate github repo and we'd just pull from each other every now and then. That's about it.

The source code of heatwave is on github: heatwave source. Enjoy!

Looking forward to Node.js Knockout 2012!

Excellent news everyone! Last month we launched SSH tunnels for Browserling. SSH tunnels allow you to tunnel your localhost or local network straight through to Browserling, which means you can do cross-browser testing from your internal network!

I made a demo video about this awesome feature:

SSH tunnels are the first major feature that differentiates the paid plans from the free plan. With the free plan you get 5 minutes of Browserling for free and no tunnels. With a paid plan you get tunnels and unlimited time.

Here is a brief technical overview of how the tunnels work.

We run the openssh server. The first time you use tunnels, you'll be asked to choose your ssh password, and a new no-login Unix user will be created for you on the server. The server has ports 50000-60000 open for tunneling, but they are firewalled with iptables.

When you click the "open tunnel" button in the Browserling UI, the tunnel server generates a random port in this range and opens it up with iptables, but only for your Browserling session. For example, it may open up port 55555. Then it generates the ssh command for opening a reverse ssh tunnel for you. For example, if you're tunneling localhost:80, then the generated command will be ssh -N -R 55555:localhost:80.

Now you can just copy and paste this command to the terminal, you'll get prompted for your password, and you're done. The tunnel between Browserling and your localhost:80 has been opened. Now if you visit the tunneled port inside of Browserling, the connection will go through the tunnel and you'll really be accessing localhost:80!

If you're on Windows, you can also easily tunnel your localhost or local network with the plink.exe program from PuTTY. It turns out that the command line arguments for plink.exe are exactly the same as for ssh. In the example above it would be plink.exe -N -R 55555:localhost:80. Really cool!

We are huge fans of open-source at Browserling and we have open-sourced 40 node.js modules! I'll do a blog post about that soon. Then in the next two weeks we are releasing Testling, an automated web testing framework for Browserling, which I am going to announce here as well. And we're adding a ton more web browsers to Browserling in the upcoming week!

If that sounds interesting, you can subscribe to my blog, follow me on Twitter, or add me on Google+. That way you'll be the first to know when we release all this goodness!

Never heard of Browserling? Read the Browserling announcement blog post, and the blog post on How I went to Silicon Valley and raised $55k seed funding for it!

Browserling is an interactive cross-browser testing site that allows you to use Internet Explorer, Firefox, Chrome, Opera and Safari from your browser. We built this amazing technology that brings virtual machines to the web and we built Browserling on top of it!