Today marks 4 years that my co-founder James Halliday and I have been working on Browserling Inc. I thought I'd share what I think are the top 10 inventions at our company.

The first two choices are obvious: they're our products Browserling and Testling. The next eight choices are useful and popular open source projects - browserify, dnode, ploy, seaport, airport and upnode, bouncy, hyperglue with hyperspace and hyperstream, and cipherhub.

1. Browserling

Browserling is our first and most successful commercial product. Browserling lets you interactively test websites in all the browsers - IE, Firefox, Opera, Chrome, and Safari.

We successfully managed to monetize the vnc-to-browser technology that we developed four years ago (see the announcement). Our vnc-to-browser technology doesn't use Flash or Java applets; it uses the HTML5 canvas instead.

Try it out! (Use Chrome)

2. Testling

Testling is our second product. It lets you run cross-browser JavaScript tests on every git push. After the tests run, you get a badge that shows their status in all the browsers:

Learn more about Testling!

3. Browserify

Browserify lets you use node.js modules from npm in your browser. It's a game-changer for front-end development.

Browsers don't have the require method defined, but node.js does. With browserify you can write code that uses require in the same way that you would use it in node.

Here's a tutorial on how to use browserify on the command line to bundle up a simple file called main.js along with all of its dependencies:

var unique = require('uniq');

var data = [1, 2, 2, 3, 4, 5, 5, 5, 6];
console.log(unique(data));

Install the uniq module with npm:

$ npm install uniq

Now recursively bundle up all the required modules starting at main.js into a single file called bundle.js with the browserify command:

$ browserify main.js -o bundle.js

Browserify parses the AST for require calls to traverse the entire dependency graph of your project.
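As an illustration of that idea, here's a simplified sketch. Browserify uses a proper JavaScript AST parser; this toy version merely pattern-matches require('...') calls in source text to build a dependency list:

```javascript
// Simplified sketch of dependency discovery: scan source text for
// require('...') calls. Real browserify walks the parsed AST instead,
// which is robust against comments, strings, and shadowed names.
function findRequires (src) {
    var deps = [];
    var re = /require\(\s*['"]([^'"]+)['"]\s*\)/g;
    var m;
    while ((m = re.exec(src)) !== null) deps.push(m[1]);
    return deps;
}

var src = "var unique = require('uniq');\nvar http = require('http');";
console.log(findRequires(src)); // [ 'uniq', 'http' ]
```

Repeating this on each discovered module's source is what traverses the whole dependency graph.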

Drop a single <script> tag into your HTML and you're done!

<script src="bundle.js"></script>

We use browserify everywhere at Browserling. Browserling and Testling are built out of hundreds of small modules. They're bundled using browserify. If you look at Browserling's or Testling's source code, you'll see that there is just one source file bundle.js. All your Testling tests are browserified before run in the browsers as well.

Learn more about browserify at

4. Dnode

Dnode is an asynchronous RPC system for node.js that lets you call remote functions. Dnode is safe. No code gets passed along when you call a remote function. Only function references and their arguments get passed along.
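The core trick can be sketched like this. Note this is a toy protocol of my own invention, not dnode's actual wire format: functions never cross the wire; they are swapped for numeric ids, and the other side calls back by id:

```javascript
// Toy sketch of dnode-style argument serialization: function arguments
// are replaced by ids before going over the wire, and the remote side
// invokes them by sending the id back.
var callbacks = {};
var nextId = 0;

function serializeArgs (args) {
    return args.map(function (arg) {
        if (typeof arg === 'function') {
            var id = nextId++;
            callbacks[id] = arg;
            return { $cb: id };       // only a reference travels, never code
        }
        return arg;
    });
}

function invokeCallback (id, args) {
    callbacks[id].apply(null, args);  // the "remote" side calls back by id
}

var result;
var wire = serializeArgs(['test', function (s) { result = s }]);
// wire is now plain JSON-safe data: [ 'test', { $cb: 0 } ]
invokeCallback(wire[1].$cb, ['TEST']);
console.log(result); // TEST
```

This is why dnode is safe: the data that travels is plain JSON plus callback ids.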

Here's a tutorial on how to use dnode. First, create server.js that listens on port 5555 and exports a transform function that converts its argument to uppercase:

var dnode = require('dnode');

var server = dnode({
    transform : function (s, cb) {
        cb(s.toUpperCase());
    }
});
server.listen(5555);

Then create client.js that connects to the server over dnode and calls the transform function:

var dnode = require('dnode');

var d = dnode.connect(5555);
d.on('remote', function (remote) {
    remote.transform('test', function (s) {
        console.log('test => ' + s);
    });
});

When you run this, you get test => TEST as output.

We use dnode heavily at Browserling. All the processes communicate with each other using dnode. For example, we have a centralized authentication service that Browserling and Testling use. When someone signs up at Browserling, they can use the same login at Testling. Dnode makes that work behind the scenes.

Learn more about dnode at dnode's github page.

5. Ploy

Ploy is our node.js deployment system. It includes a http(s) router, a git endpoint and a process manager all in one. You just git push your code at ploy, and it deploys it and runs the necessary processes. You can also manage staging branches and monitor process logs with it.

I recently wrote an in-depth article about how we deploy code at Browserling and Testling. It starts as a tutorial and then explains how our production deployment works. Read it!

Learn more about ploy at ploy's github page.

6. Seaport

Seaport is a service registry. Seaport stores (host, port) combos (and other metadata) for you so you won't need to spend so much effort keeping configuration files current as your architecture grows to span many processes on many machines. Just register your services with seaport and then query seaport to see where your services are running!
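Conceptually, the registry is just a map from service names to (host, port) records. A toy in-memory sketch, ignoring seaport's semver queries, live subscriptions, and networking, might look like:

```javascript
// Toy in-memory service registry illustrating the seaport idea:
// services register a (host, port) record under a name, and other
// processes query by name instead of hardcoding ports.
var registry = {};

function register (name, meta) {
    (registry[name] = registry[name] || []).push(meta);
}

function query (name) {
    return registry[name] || [];  // all known instances of that service
}

register('web', { host: '127.0.0.1', port: 8001 });
register('web', { host: '127.0.0.1', port: 8002 });

console.log(query('web').length);   // 2
console.log(query('web')[0].port);  // 8001
```

Real seaport adds the important parts on top: a network protocol, automatic free-port allocation, and semver-aware lookups like 'web@1.2.x'.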

Here's a tutorial on how to set up a seaport server, register a web server at seaport, and connect to the web server from another service. First spin up a seaport server:

$ seaport listen 9000

Then from your web server, connect to seaport and register your web service at seaport by calling ports.register():

var http = require('http');
var seaport = require('seaport');
var ports = seaport.connect('localhost', 9000);

var server = http.createServer(function (req, res) {
    res.end('hello world\r\n');
});

server.listen(ports.register('web@1.2.3'));

Now that your web server has been registered at seaport, just ports.get() that web service from another program:

var seaport = require('seaport');
var ports = seaport.connect(9000);
var request = require('request');

ports.get('web@1.2.x', function (ps) {
    var url = 'http://' + ps[0].host + ':' + ps[0].port;
    request(url).pipe(process.stdout);
});

This program gets the web server's port, connects to it through request, gets the response, and prints hello world. Port 9000 is the only port that you need to know in all of your programs.

We also use seaport heavily at Browserling and Testling. Browserling is built out of many small services, such as a stripe service for taking payments, stats for usage statistics, monitor for monitoring services, status, auth that's a centralized point for authentication, and many others. We don't even know what ports these services run on; seaport takes care of it.

Learn more about seaport at seaport's github page.

7. Airport and Upnode

Airport provides seaport-based port management for upnode. What's upnode? Upnode keeps a dnode (see invention #4 above) connection alive and re-establishes state between reconnects with a transactional message queue.

Here's a tutorial on upnode. First create a server.js that exports the time function:

var upnode = require('upnode');

var server = upnode(function (client, conn) {
    this.time = function (cb) { cb(new Date().toString()) };
});
server.listen(7000);

Now when you want to make a call to the server, guard your connection in the up() function. If the connection is alive the callback fires immediately. If the connection is down the callback is buffered and fires when the connection is ready again.

var upnode = require('upnode');
var up = upnode.connect(7000);

setInterval(function () {
    up(function (remote) {
        remote.time(function (t) {
            console.log('time = ' + t);
        });
    });
}, 1000);

This program will connect to the upnode server on port 7000 and keep printing the time every second. If you take the upnode server down, it will buffer the callbacks, and they'll fire when the server is available again.

Learn more about upnode at upnode's github page.

Airport provides upnode-style dnode connections using service names from a seaport server (see invention #6 above). Instead of connecting and listening on hosts and ports, you can .connect() and .listen() on service names.

Here's a tutorial on airport. First start a seaport server on port 7000:

$ seaport listen 7000

Then write a service called fiver that exports the timesFive function that multiplies its argument by five:

var airport = require('airport');
var air = airport('localhost', 7000);

air(function (remote, conn) {
    this.timesFive = function (n, cb) { cb(n * 5) };
}).listen('fiver');

Now write a client that connects to the fiver service and calls the timesFive function:

var airport = require('airport');
var air = airport('localhost', 7000);

var up = air.connect('fiver');

up(function (remote) {
    remote.timesFive(11, function (n) {
        console.log('timesFive(11) : ' + n);
    });
});

This program outputs timesFive(11) : 55. In case the connection between the client and fiver goes down, upnode will buffer the callbacks until the connection is back.

Learn more about airport at airport's github page.

8. Bouncy

Bouncy is a minimalistic, yet powerful http(s) router that supports websockets.

Here's a bouncy tutorial. Let's say you want to route requests based on the host HTTP header to servers on ports 8001 and 8002. For every http request, bouncy calls function (req, res, bounce) { }, so you can inspect the host header and then call bounce(8001) or bounce(8002), like this:

var bouncy = require('bouncy');

var server = bouncy(function (req, res, bounce) {
    if (req.headers.host === 'beep.example.com') {
        bounce(8001);
    }
    else if (req.headers.host === 'boop.example.com') {
        bounce(8002);
    }
    else {
        res.statusCode = 404;
        res.end('no such host');
    }
});
server.listen(8000);

The bounce(PORT) function redirects the connection to the service running on PORT.
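The routing decision itself boils down to a function from the request's Host header to a backend port. A sketch, with hypothetical hostnames:

```javascript
// Sketch of the host-header routing table that a bouncy callback
// implements: map a request's Host header to a backend port.
var routes = {
    'beep.example.com': 8001,
    'boop.example.com': 8002
};

function pickPort (host) {
    var hostname = (host || '').split(':')[0];  // strip any :port suffix
    return routes[hostname] || null;            // null means "respond 404"
}

console.log(pickPort('beep.example.com'));      // 8001
console.log(pickPort('boop.example.com:80'));   // 8002
console.log(pickPort('unknown.example.com'));   // null
```

Keeping the decision in a plain function like this makes it easy to test routing logic without spinning up servers.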

I personally don't see myself using anything but bouncy for routing http(s) requests, including websockets. It's rock solid and has been used in production for many years.

Learn more about bouncy at bouncy's github page.

9. Hyperglue, Hyperspace, and Hyperstream

Hyperglue lets you update HTML elements by mapping query selectors to attributes, text, and hypertext both in the browser and node.js.

Here's a tutorial on hyperglue. Let's say you have an article template in article.html and you want to fill the .title a, span .author, and span .body elements with data:

<div class="article">
  <div class="title">
    <a name="name"></a>
  </div>
  <div class="info">
    <span class="key">Author:</span>
    <span class="value author"></span>
  </div>
  <div class="body"></div>
</div>

With hyperglue you can write the following code that will do just that:

var hyperglue = require('hyperglue');
var fs = require('fs');
var html = fs.readFileSync(__dirname + '/article.html', 'utf8');

function createArticle (doc) {
    return hyperglue(html, {
        '.title a': {
            href: doc.href,
            _text: doc.title
        },
        'span .author': doc.author,
        'span .body': { _html: doc.body }
    });
}

document.body.appendChild(createArticle({
    author: 'James Halliday',
    href: '/robots',
    title: 'robots are pretty great',
    body: '<h1>robots!</h1>\n\n' +
          '<p>Pretty great basically.</p>'
}));

The createArticle function will fill the article HTML template with data at the right selectors. As a result it will produce the following HTML, and it will append it to document.body:

<div class="article">
  <div class="title">
    <a name="name" href="/robots">robots are pretty great</a>
  </div>
  <div class="info">
    <span class="key">Author:</span>
    <span class="value author">James Halliday</span>
  </div>
  <div class="body">
    <h1>robots!</h1>

    <p>Pretty great basically.</p>
  </div>
</div>

The object returned by hyperglue also has an innerHTML property that contains the generated HTML, so you can also use it on the server side to get the resulting HTML:

console.log(createArticle(doc).innerHTML);

Learn more about hyperglue at hyperglue's github page.

Hyperspace renders streams of HTML on the client and the server. Here's a tutorial on hyperspace. First pick a stream data source that will give you records and let you subscribe to a changes feed. Let's start with the rendering logic in file render.js that will be used on both the client and the server:

var hyperspace = require('hyperspace');
var fs = require('fs');
var html = fs.readFileSync(__dirname + '/row.html');

module.exports = function () {
    return hyperspace(html, function (row) {
        return {
            '.who': row.who,
            '.message': row.message
        };
    });
};

The return value of hyperspace() is a stream that takes lines of JSON as input and returns HTML strings as its output. Text, the universal interface!
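That contract can be illustrated with a plain synchronous function. The real hyperspace is a stream, but the input and output shapes are the same: newline-delimited JSON records in, HTML strings out:

```javascript
// Simplified, synchronous stand-in for hyperspace's contract:
// newline-delimited JSON records in, HTML strings out.
function renderRows (input) {
    return input.split('\n')
        .filter(function (line) { return line.trim().length > 0 })
        .map(function (line) {
            var row = JSON.parse(line);
            return '<div class="row"><div class="who">' + row.who +
                   '</div><div class="message">' + row.message + '</div></div>';
        })
        .join('\n');
}

var input = JSON.stringify({ who: 'substack', message: 'beep boop' }) + '\n' +
            JSON.stringify({ who: 'pkrumins', message: 'h4x' }) + '\n';
console.log(renderRows(input));
```

Because the interface is text, the same renderer can sit behind a websocket on the client or a pipe on the server.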

The row.html template is just a really simple stub:

<div class="row">
  <div class="who"></div>
  <div class="message"></div>
</div>

It's easy to pipe some data to the renderer:

var r = require('./render')();
r.write(JSON.stringify({ who: 'substack', message: 'beep boop' }) + '\n');
r.write(JSON.stringify({ who: 'pkrumins', message: 'h4x' }) + '\n');

Which prints:

<div class="row">
  <div class="who">substack</div>
  <div class="message">beep boop</div>
</div>
<div class="row">
  <div class="who">pkrumins</div>
  <div class="message">h4x</div>
</div>

To make the rendering code work in browsers, you can just require() the shared render.js file and hook that into a stream. In this example we'll use shoe to open a simple streaming websocket connection with fallbacks:

var shoe = require('shoe');
var render = require('./render');

shoe('/sock').pipe(render().appendTo('#rows'));

If you need to do something with each rendered row you can just listen for 'element' events from the render() object to get each element from the data set, including the elements that were rendered server-side.

Learn more about hyperspace at hyperspace's github page and the html streams for the browser and the server section of Stream Handbook.

Hyperstream streams HTML into HTML at css selector keys. Here's a tutorial on hyperstream. Let's say you have the following HTML template in index.html:

    <div id="a"></div>
    <div id="b"></div>

And you have two more files, a.html that contains:

<h1>title</h1>

And b.html that contains:

<b>hello world</b>

And you want to stream HTML from files a.html and b.html into selectors #a and #b. You can write the following code using hyperstream:

var hyperstream = require('hyperstream');
var fs = require('fs');

var hs = hyperstream({
    '#a': fs.createReadStream(__dirname + '/a.html'),
    '#b': fs.createReadStream(__dirname + '/b.html')
});

var rs = fs.createReadStream(__dirname + '/index.html');
rs.pipe(hs).pipe(process.stdout);

And it will do just that! You'll get the following output:

    <div id="a"><h1>title</h1></div>
    <div id="b"><b>hello world</b></div>

Learn more about hyperstream at hyperstream's github page.

10. Cipherhub

Cipherhub is our secure communications tool. It can be frustrating and annoying to communicate with somebody using public key cryptography since setting up PGP/GPG is a hassle, particularly managing key-rings and webs of trust.

Luckily, you can fetch the public ssh keys of anybody on GitHub by going to:

https://github.com/USERNAME.keys

If you just want to send somebody an encrypted message out of the blue and they already have a GitHub account with RSA keys uploaded to it, you can just do:

$ cipherhub USERNAME < secret_message.txt

And it will fetch their public keys from GitHub, store the key locally for next time, and output the encrypted message. You can now send this message to your friend over IRC and they can decode it by running:

$ cipherhub <<< MESSAGE

Just recently we used cipherhub to send the company's credit card information over IRC and I loved how easy that was.

Learn more about cipherhub at cipherhub's github page.

4 Years of Browserling

Happy birthday to Browserling Inc!

You can follow Browserling and Testling on Twitter at @browserling and @testling, and co-founders at @pkrumins and @substack.

We can also be contacted over email at


How many times have you had a situation when you open a file for editing, make a bunch of changes, and discover that you don't have the rights to write the file? This happens to me a lot.

It usually goes like this. You open a file and you forget to use sudo:

$ vim /etc/apache/httpd.conf

You make many changes and then you type:

:w

And you get an error:

"/etc/apache/httpd.conf" E212: Can't open file for writing
Press ENTER or type command to continue

And then you go like, duh. At this point you either quit vim:

:q!

And open the file again with sudo:

$ sudo vim /etc/apache/httpd.conf

And make all the changes again. Or if you're a bit smarter, you save the file to the /tmp directory:

:w /tmp/foo

And then you sudo move the /tmp/foo to the right location:

$ sudo mv /tmp/foo /etc/apache/httpd.conf

Don't do that anymore! Use this command:

:w !sudo tee % >/dev/null

This command will save you hundreds of hours throughout your career. Here's how it works - vim spawns sudo tee FILENAME and pipes the contents of the file to its stdin. The tee command now runs in a privileged environment and redirects its stdin to FILENAME. The >/dev/null discards tee's stdout as you don't need to see it.

In fact, don't use this command as it's too long and complicated to remember! Save another few hundred hours and create a vim alias for this command because you'll use it for the rest of your life. Put this in your ~/.vimrc:

cnoremap sudow w !sudo tee % >/dev/null

Now the next time you're in this situation, just type:

:sudow

See you!

Here's another interesting story about how we do things at Browserling. This time the story is about how we use ploy to deploy code at Browserling and Testling.

First off, what is ploy? Ploy is a deployment system that we created at Browserling, that includes a http router, a git endpoint and a process manager all in one. You just push your code at ploy, and it deploys it and runs the necessary processes.

Ploy overview

Here's a simple overview and an example of how to get started with ploy. First, setup auth.json that ploy will use for git authentication when you git push at it:

$ cat auth.json
{
  "pkrumins" : "password",
  "substack" : "password2"
}

Then setup a git remote in your app's repository:

$ git remote add ploy http://YOUR_SERVER/_ploy/myapp.git

Ploy uses the special /_ploy/NAME.git url as a git endpoint.

Then start ploy:

$ ploy ./myapp -a auth.json --port 80

This will start ploy on port 80 and use the ./myapp directory to store the repository and logs. Of course, you never want to run any code as a privileged user, so see the section below on how to set up ploy on port 8080 and redirect port 80 to 8080 with iptables.

Your app should have a package.json file, such as the following:

{
  "name": "myapp",
  "private": true,
  "version": "0.0.1",
  "scripts": {
    "start": {
      "index": "node server.js",
      "stats": "node stats/server.js",
      "_auth": "node auth/server.js"
    },
    "postinstall": "./"
  }
}

The scripts.start hash lists the processes that ploy will manage. The index process will handle requests to the main domain and the www subdomain, whereas stats will handle requests to the stats subdomain. The process starting with an underscore, _auth, is a non-http process that will simply be run by ploy and won't be mapped to any http requests.

Now git push your code to ploy:

$ git push ploy master

When you push code at a ploy server, your processes will be started and any previous processes running under the same branch name will be killed. Ploy does not detach your server processes. When the ploy server goes down, it takes the processes it started with it. However, when the ploy server is started back up, it will attempt to restart all of the processes it was previously running.

Before starting the processes, ploy will run npm install ., which runs the scripts.postinstall (also install and preinstall) hooks. Ploy will set the PORT environment variable that your application should listen on. It will then route all the http requests on port 80 to your application listening on PORT.

You can also setup staging branches easily with ploy, just push to ploy like this:

$ git push ploy master:staging

This will make the staging branch available at the staging subdomain (given that you have a DNS A record for it that points to the same server as the master branch). The stats process will be available at its own staging subdomain.

Once you're done with your staging branch, you can simply move staging to production (master branch) by running ploy mv staging master. That will re-route the connections and you'll instantly be running your staging in production.

Ploy provides many commands for managing processes, logs, and branches. You can list the processes by running ploy ls from your repository, restart them with ploy restart <process name>, view the logs through ploy log, and manage branches through the ploy mv, ploy rm, and ploy redeploy commands.

Ploy makes it really easy to deploy your apps to production and staging! Read more about it on ploy's project page!

Ploy at Browserling and Testling

Now let's turn to how we use ploy in production to deploy Browserling and Testling. We have a more complex ploy setup, where we use ssl certificates for running https, and a request router for more complicated routing.

Here's how we run ploy for Testling:

ploy ./testling \
  -a auth.json \
  --port 8000 \
  --sslPort 8443 \
  --ca=./certs/bundle.crt \
  --cert=./certs/ci_testling_com.crt \
  --key=./certs/server.key \
  -f testling-router.js

  • The -a argument specifies the authentication info for the git endpoint.
  • The --port argument specifies the http port that ploy will listen on.
  • The --sslPort argument specifies the https port that ploy will listen on.
  • The --ca, --cert, and --key arguments set up the ssl cert.
  • The -f argument specifies the http router that ploy will use to route the connections.

We run ploy as non-root, and it listens on port 8000 for http connections and on port 8443 for https connections. We have the following iptables rules set up on the testling server that redirect port 80 to 8000, and 443 to 8443:

# SERVER_IP below stands in for the server's public IP address
iptables -A PREROUTING -t nat -p tcp --dport 80 -j REDIRECT --to-port 8000
iptables -t nat -I OUTPUT -p tcp -d SERVER_IP --dport 80 -j REDIRECT --to-ports 8000

iptables -A PREROUTING -t nat -p tcp --dport 443 -j REDIRECT --to-port 8443
iptables -t nat -I OUTPUT -p tcp -d SERVER_IP --dport 443 -j REDIRECT --to-ports 8443

Auth file auth.json contains the user access info, in our case it's just me and substack:

{ "pkrumins": "pass1", "substack": "pass2" }

The router file is used for more complicated http routing. If we didn't have the testling-router.js router, all requests to ports 80 and 443 would simply be routed to the app's PORT (the environment variable that ploy set before it started the processes). In our case we redirect all unencrypted http connections to https:

module.exports = function (req, res, bounce) {
    if (!req.connection.encrypted) {
        if (req.headers.host) {
            var hostname = req.headers.host.split(':')[0];
            if (hostname == 'ci.testling.com') {
                for (var i = 0; i < this.bouncers.length; i++) {
                    if (!this.bouncers[i].key) continue;
                    res.statusCode = 302;
                    res.setHeader('location', 'https://ci.testling.com' + req.url);
                    return res.end();
                }
                res.statusCode = 404;
                return res.end('no https endpoint configured\n');
            }
        }
    }
    bounce();
};

What this router does is check whether a connection is encrypted, and if it's not, set the Location header so the browser redirects the visitor to https.

You can do all kinds of crazy things with a router, for example, route based on the source/destination IP, or route based on http headers, route based on date/time, etc.

Here's how the root directory package.json looks for Testling:

{
  "name": "testling-ci",
  "private": true,
  "version": "0.0.1",
  "scripts": {
    "start": {
      "ci": "node ci/server.js",
      "git": "node git/server.js"
    },
    "postinstall": "./"
  }
}

And here's what the postinstall script looks like:

#!/bin/bash
(cd ci; npm install .)
(cd git; npm install .)

What this script does is simply go into the ci/ and git/ directories and npm install the modules there. After it's done, ploy starts the scripts.start.ci and scripts.start.git processes that map to the ci and git subdomains.

Browserling uses pretty much the same arguments to ploy but has a much more complicated router that uses seaport:

var seaport = require('seaport');
var config = require('figc')(__dirname + '/config.json');
var pick = require('deck').pick;

var ports = seaport.connect(config.seaport);
ports.on('connect', function () { console.log('browserling-router.js: connected to seaport') });
ports.on('disconnect', function () { console.log('browserling-router.js: disconnected from seaport') });

module.exports = function (req, res, bounce) {
    if (req.headers.host && req.headers.host.split(':')[0] == 'status.browserling.com') {
        var ps = ports.query('web.status');
        if (ps.length) {
            return bounce(pick(ps), {
                headers: { 'x-forwarded-for': req.connection.remoteAddress }
            });
        }
        res.statusCode = 404;
        return res.end('no matching services are running for status.browserling.com\n');
    }

    if (req.headers.host && req.headers.host.split(':')[0] == 'browserling.com') {
        console.log(req.method + ' request from ' + req.connection.remoteAddress + ' to ' + req.url);
        if (req.connection.encrypted || req.url == '/account/verify.json') {
            var ps = ports.query('web.browserling');
            if (ps.length) {
                return bounce(pick(ps), {
                    headers: { 'x-forwarded-for': req.connection.remoteAddress }
                });
            }
            res.statusCode = 404;
            return res.end('no matching services are running for browserling.com\n');
        }
        else {
            res.statusCode = 302;
            res.setHeader('location', 'https://browserling.com' + req.url);
            return res.end();
        }
    }
    else if (req.headers.host && req.headers.host.split(':')[0] == 'www.browserling.com') {
        res.statusCode = 302;
        res.setHeader('location', 'https://browserling.com' + req.url);
        return res.end();
    }
};

At Browserling all our services are registered with seaport, so the routing no longer happens to the port in the PORT environment variable. Instead, the application's port is requested through seaport's ports.query. I'll write more about how we use seaport at Browserling some time later.

Ploy has been really wonderful to work with. If you haven't used it and you're looking for a deployment system for node (and not just node!), definitely try it out!

Until next time!

We recently added invoices to Browserling. I thought I'd share how we did it as it's another interesting story.

Our customers keep asking for invoices all the time, so we made it simple to create them. They can now just go to their Browserling accounts and download them. Here's what an invoice looks like:

Example invoice. (Download example.)

And here are the implementation details.

We use a node module called invoice. The invoice module takes a hash of invoice details, internally spawns pdflatex that creates a pdf invoice, and then calls a callback with the path to the pdf file, like this:

var invoice = require('invoice');

invoice(
    {
        template: 'browserling-dev-plan.tex',
        from: "Browserling inc\\\\3276 Logan Street\\\\Oakland, CA 94601\\\\USA",
        to: "John Smith\\\\Corporation Inc.",
        period: "2/2013",
        amount: "$20"
    },
    function (err, pdf) {
        if (err) {
            console.log("Failed creating the invoice: " + err);
            return;
        }
        console.log("Pdf invoice: " + pdf);
    }
);

The browserling-dev-plan.tex latex template contains %to%, %from%, %period%, and %amount% placeholders that the invoice module simply replaces with the given data:

\documentclass{article}
\begin{document}

\Large{\textbf{Invoice}} \\
\large{Subscription to Browserling's Developer Plan}

\section*{Invoice from:}
%from%

\section*{Invoice to:}
%to%

Period: %period% \\
Amount: %amount%

\end{document}

Once the pdf invoice is generated, we create a token that maps to the pdf file, and when the invoice is requested, we send it to the customer as application/pdf.

Until next time!

This is going to be a quick tutorial on how to run multiple node versions side by side. There are many different ways to do it but this works well for me.

First I compile node versions from source and set them up in the following directory structure:

/home/pkrumins/installs/node-v0.6.18
/home/pkrumins/installs/node-v0.8.21

When compiling node I simply specify --prefix=/home/pkrumins/installs/node-vVERSION, and make install installs it into that path.

Next, I have this bash function:

function chnode {
  local node_path="/home/pkrumins/installs/node-v$1/bin"
  test -z "$1" && echo "usage: chnode <node version>" && return
  test ! -d "$node_path" && echo "node version $1 doesn't exist" && return
  export PATH="$node_path:$PATH"
}

Now when I want to run node 0.8.21, I run chnode 0.8.21 to update the path:

$ chnode 0.8.21
$ which node
/home/pkrumins/installs/node-v0.8.21/bin/node
$ node --version
v0.8.21

Or if I want to run node 0.6.18, I run chnode 0.6.18:

$ chnode 0.6.18
$ which node
/home/pkrumins/installs/node-v0.6.18/bin/node
$ node --version
v0.6.18

Works for me both locally and in production. Until next time.