This article is part of the article series "MIT Introduction to Algorithms."

MIT Algorithms

This is the tenth post in an article series about MIT's lecture course "Introduction to Algorithms." In this post I will review lecture fifteen, which introduces the concept of Dynamic Programming and applies it to the Longest Common Subsequence problem.

Dynamic programming is a design technique similar to divide and conquer. Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. Dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems. A dynamic-programming algorithm solves every subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subsubproblem is encountered.

Dynamic programming was systematized by Richard E. Bellman, who began its systematic study in 1955. The word "programming," both here and in linear programming, refers to the use of a tabular solution method, not to writing computer code.

Dynamic programming is typically applied to optimization problems. In such problems there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution, as opposed to the optimal solution, since there may be several solutions that achieve the optimal value.

Dynamic programming can be effectively applied to the longest common subsequence (LCS) problem, which is stated as follows: given two sequences (strings) x and y, find a maximum-length common subsequence of x and y. Note that a subsequence, unlike a substring, does not have to be contiguous.

For example, given the two sequences x = "ABCBDAB" and y = "BDCABA", LCS(x, y) = { "BCBA", "BDAB", "BCAB" }. As you can see, there are several optimal solutions.

Lecture fifteen introduces dynamic programming via this longest common subsequence problem. It first gives a brute-force, exponential-time algorithm for solving it. The idea of the algorithm is to check every subsequence of x[1..m] (m is the length of sequence x) to see if it is also a subsequence of y[1..n] (n is the length of sequence y). Checking takes O(n) time, and there are 2^m subsequences of x. The running time is thus exponential, O(n·2^m), which is no good for large sequences. The lecture continues with a simplification - first compute just the length of a longest common subsequence, then extend the algorithm to find the LCS itself. The simplified algorithm is recursive in nature and computes the same subproblems over and over. At this moment two dynamic programming hallmarks are stated:

  • 1. Optimal substructure: an optimal solution to a problem contains optimal solutions to subproblems.
  • 2. Overlapping subproblems: a recursive solution contains a "small" number of distinct subproblems repeated many times.
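
Both hallmarks are visible in the recursive length-only algorithm. Here is a minimal sketch of it (Python is used here just for illustration; the lecture works in pseudocode):

```python
def lcs_length(x, y):
    """Naive recursive LCS length. Exponential time, because the
    same (prefix of x, prefix of y) subproblems are recomputed
    over and over - hallmark #2 in action."""
    if not x or not y:
        return 0
    if x[-1] == y[-1]:
        # Last characters match: they end some LCS (hallmark #1).
        return lcs_length(x[:-1], y[:-1]) + 1
    # Otherwise drop the last character of one sequence or the other.
    return max(lcs_length(x[:-1], y), lcs_length(x, y[:-1]))

print(lcs_length("ABCBDAB", "BDCABA"))  # 4
```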

As the subproblems overlap, the lecture introduces the concept of memoization (note that it's not memorization). A better-known word for memoization is caching: the results of subproblems are cached (memoized) so that they are not recomputed over and over again.
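
Caching the subproblems turns the exponential algorithm into an O(mn) one. A sketch of the memoized version (Python again, with `lru_cache` playing the role of the memo table):

```python
from functools import lru_cache

def lcs_length_memo(x, y):
    """Memoized LCS length: each (i, j) subproblem is solved
    only once, so the running time drops to O(m*n)."""
    @lru_cache(maxsize=None)
    def rec(i, j):
        if i == 0 or j == 0:
            return 0
        if x[i - 1] == y[j - 1]:
            return rec(i - 1, j - 1) + 1
        return max(rec(i - 1, j), rec(i, j - 1))
    return rec(len(x), len(y))

print(lcs_length_memo("ABCBDAB", "BDCABA"))  # 4
```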

The lecture ends by constructing the dynamic programming table for the LCS problem and explaining how to recover an LCS from this table.
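
For completeness, here is a sketch of that bottom-up construction: fill an (m+1)×(n+1) table of LCS lengths, then walk it backwards to recover one actual LCS (Python for illustration):

```python
def lcs(x, y):
    """Bottom-up dynamic programming: c[i][j] is the LCS length
    of the prefixes x[:i] and y[:j]."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Trace back through the table to recover one LCS.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i -= 1
            j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))  # one LCS of length 4
```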

You're welcome to watch lecture fifteen:

Topics covered in lecture fifteen:

  • [00:20] Dynamic programming.
  • [01:47] Longest common subsequence (LCS) problem.
  • [03:55] Example of LCS on sequences "ABCBDAB" and "BDCABA".
  • [06:55] Brute force algorithm for LCS.
  • [07:50] Analysis of brute force algorithm.
  • [11:40] Simplification of LCS problem.
  • [16:20] Theorem about LCS length.
  • [18:25] Proof of the theorem.
  • [30:40] Dynamic programming hallmark #1: Optimal substructure.
  • [32:25] Example of hallmark #1 on LCS.
  • [34:15] Recursive algorithm for longest common subsequence.
  • [36:40] Worst case analysis of the algorithm.
  • [38:10] Recursion tree of algorithm.
  • [42:40] Dynamic programming hallmark #2: Overlapping subproblems.
  • [44:40] Example of hallmark #2 on LCS.
  • [45:50] Memoization algorithm for LCS.
  • [48:45] Time and space analysis of memoized algorithm.
  • [54:30] Dynamic programming algorithm for LCS.
  • [01:01:15] Analysis of dynamic programming algorithm.

Lecture fifteen notes:

MIT Algorithms Lecture 15 Notes Thumbnail. Page 1 of 2.
Lecture 15, page 1 of 2.

MIT Algorithms Lecture 15 Notes Thumbnail. Page 2 of 2.
Lecture 15, page 2 of 2.

Have fun with programming dynamically! The next post will be about graphs, greedy algorithms and minimum spanning trees.

PS. This course is taught from the CLRS book (also called "Introduction to Algorithms"). Chapter 15 is called "Dynamic Programming" and covers the topics in this lecture. It also explains the assembly-line scheduling problem, matrix-chain multiplication problem, elements of dynamic programming and optimal binary search trees.

My Job Interview at Google

Two weeks ago I had an on-site interview at Googleplex! The job interview with Google was an interesting experience and I want to tell you about it.

The position I was interviewing for was a Google SRE. SRE stands for Site Reliability Engineering. Site reliability engineers (SREs) are both software engineers and systems administrators, responsible for Google's production services end to end.

There were eight separate interviews. The first three were phone interviews and the remaining five were on-site. The first interview was with the recruiter and was not very technical but the other seven were very technical.

All the interviews went pretty well, but I just learned that I won't be getting hired. I personally think that I did really well. I answered all the questions, but it seems they were not satisfied. Neither Google nor the recruiter gave me precise reasons; the recruiter only said that "the morning interviews were not that great" and "I should get more experience to work in their mission critical team."

Update: This article has been translated to Japanese.
Update: This article has been translated to German.

Here is how it all happened.

Shortly after I published the "Code Reuse in Google Chrome" post I was contacted by a recruiter at Google. The email said:

I recruit top notch Software Engineering talent at Google. I recently came across your name as a possible world class Engineer and am intrigued to know more about you. I promise to exchange some detailed info about us as well.

Interested to hear more? Want to be an impact player at Google? Then please respond with a current (English) copy of your resume and I'll be happy to call you and discuss.

At first I thought I would be applying for a software developer position, but after we went through my skillset, the recruiter concluded that I would better fit as an SRE. I agreed with him. This seemed like a perfect position for me. I love systems administration as much as I love programming.

First Interview (phone)

The first interview was on 10th of September with the recruiter. He explained the Google recruitment process to me and we went through my skill set. I had to rank myself from 0 - 10 in a bunch of areas such as C programming (8), C++ programming (7), Python programming (6), networking (6), algorithms and data structures (6), distributed systems (5), Linux systems administration (9), and others.

Based on my answers we concluded that SRE was the best position for me. An SRE basically has to know everything: algorithms, data structures, programming, networking, distributed systems, scalable architecture, troubleshooting. It's a great hacker position!

The second half of the interview had some basic technical questions, just to make sure I knew something. The questions were about Linux systems administration, algorithms, computer architecture and C programming. I can't go into any details because I signed a non-disclosure agreement. (Update: NDA expired, so I posted all the interview questions at the bottom of this post.)

I made some factual mistakes but he was satisfied and we scheduled the next phone interview. He warned me that it would be very technical and that I should prepare really well. I asked him to give me plenty of time for the preparation and we scheduled the next interview for the 22nd of September.

He also told me that each phone interview would be 45 minutes to 1 hour long.

I started preparing like crazy. I found three presentations on what SRE is all about:

Then I found all the other blog posts about interviews and interview questions at Google:

I printed and read four Google research papers:

I also went through several books:

As I did not know if I might get specific programming language questions, I went through a few dozen recipes in C++ Cookbook, Python Cookbook, and Perl Cookbook.

Second Interview (phone)

The second phone interview was with an engineer from Google. He worked on the Ads team which is responsible for running AdSense, AdWords and other advertisement stuff.

The interview was very technical and started with an algorithmic problem where the data was too large to fit in computer memory. I had to tell him precisely how I would get around this problem and what data structures and algorithms I would use. He also asked me to think out loud. The interview continued with questions about data structures, DNS, the TCP protocol, a security vulnerability associated with TCP, networking in general, and Google itself.

The questions basically were:

  • You have a 100GB file but only 1GB of memory. How would you sort it?
  • Tell me about your favorite data structure.
  • How does DNS work?
  • Can DNS work over TCP?
  • How do DNS root servers work?
  • How does BGP work?
  • How does TCP work and what is 3-way handshake?
  • How does TCP session spoofing work and how is it prevented?
  • What would you change at Google?
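
The standard answer to the sorting question is an external merge sort: read chunks that fit in memory, sort each chunk, write the sorted runs to disk, then do a k-way merge of the runs. A sketch of the idea (my reconstruction in Python, not what I actually said on the phone; the file handling is simplified):

```python
import heapq
import os
import tempfile
from itertools import islice

def external_sort(input_path, output_path, max_lines_in_memory):
    """Sort a line-based file that is too big for memory by sorting
    chunks in memory and k-way merging the sorted runs."""
    runs = []
    with open(input_path) as f:
        while True:
            chunk = list(islice(f, max_lines_in_memory))
            if not chunk:
                break
            chunk.sort()                       # in-memory sort of one chunk
            run = tempfile.NamedTemporaryFile("w+", delete=False)
            run.writelines(chunk)
            run.seek(0)
            runs.append(run)
    with open(output_path, "w") as out:
        # heapq.merge streams the runs, holding one line per run in memory.
        out.writelines(heapq.merge(*runs))
    for run in runs:
        run.close()
        os.unlink(run.name)
```

With a 100GB file and 1GB of memory this produces about a hundred sorted runs, and the final merge only needs one buffered line per run at a time.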

After the interview the engineer had to write feedback on me. It was positive and I could move on with the interviews.

Third Interview (phone)

I gave myself more time to prepare and the third interview was on the 1st of October. It was with an engineer from the Google traffic team.

In this interview I had a very simple programming question and I had to code over the phone. I was free to choose the language and I chose Perl, as it is my favorite programming language. It was impossible to dictate Perl syntax over the phone ("for my dollar sign element open paren at data close paren open curly brace ... close curly brace"), so I submitted my Perl program over email.

The question was: Write a program to find set difference. Given two sets A and B, find elements in A-B, or in other words, find elements in set A that are not in B.

Then the same problem was taken to the next level: what if the data we are working on is gigabytes or even terabytes in size? How would my program/solution change?
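
The in-memory version is a few lines in any language with hash tables (I wrote it in Perl; here is the same idea sketched in Python):

```python
def set_difference(a, b):
    """Elements of a that are not in b (A - B), preserving a's order.
    Building a hash set of b makes each membership test O(1) on average."""
    b_set = set(b)
    return [item for item in a if item not in b_set]

print(set_difference([1, 2, 3, 4, 5], [2, 4, 6]))  # [1, 3, 5]
```

Once the data no longer fits in memory, the usual moves are to sort both sets externally and scan them in parallel, or to partition both sets into buckets by a hash of the element and take the difference bucket by bucket.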

Finally I had a question about DNS again, then the HTTP protocol, routing, and TCP data transfer.

The questions were:

  • How does DNS work?
  • How does HTTP work?
  • If an HTTP request fails, does the operating system retry it, or does the browser?

The feedback was positive and I could prepare for the on-site interviews. In a conversation with my recruiter I learned that there would be five on-site interviews, each exactly 45 minutes long: one on my previous work experience, one on algorithms and data structures, one on troubleshooting and networking, and two on software development with a focus on C and C++.

My recruiter suggested that I read a few more documents:

Fourth Interview (on-site)

The fourth interview was finally at Googleplex! At 10am I met my recruiter and we had a 15 minute discussion about the interviews. He told me I would have two interviews now, then one of Google's engineers would take me to lunch at one of Google's restaurants, and then I would have three more interviews.

At 10:15am the first on-site interview began. It was about my previous job experience. I have had a lot of job experience and I decided to talk about a physical security notification system that I coded in C on Linux a few years ago. The system would receive messages through the serial port and send out emails and text messages.

In the last minutes of the interview he asked me some basic Unix filesystem questions, such as: What is an inode?

In all the on-site interviews I was writing and drawing on two big whiteboards.

Fifth Interview (on-site)

The fifth interview began at 11am. It was a coding session that began with a trick question rather than a real coding problem. The trick question was: what's the angle between the clock hands when it's 3:15? Then I was asked to implement the solution in C for an arbitrary hour:minute. The solution was a mathematical expression that fit in a one-line return statement - no big coding there. Then I was asked to write an implementation of a binary tree. While coding I made a mistake and forgot to initialize part of a data structure that I had malloc()'ed. In real life the program would have segfaulted and I would have noticed the error. I said there were no errors, but the Google engineer pointed out that I hadn't initialized the data.
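
The clock trick reduces to one expression: the hour hand moves 30 degrees per hour plus half a degree per minute, and the minute hand moves 6 degrees per minute. A sketch of the solution (in Python here, though the interview asked for C):

```python
def clock_angle(hour, minute):
    """Smaller angle, in degrees, between the hour and minute hands."""
    hour_angle = (hour % 12) * 30 + minute * 0.5
    minute_angle = minute * 6
    diff = abs(hour_angle - minute_angle) % 360
    return min(diff, 360 - diff)

print(clock_angle(3, 15))  # 7.5, not 0 - the hour hand has moved past 3
```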

After this interview I was taken to lunch by the engineer who had interviewed me in the second (phone) interview. She told me she had been working at Google for two years and was very happy about it. We went to an Asian restaurant (located in Googleplex). Then she showed me around Googleplex.

Sixth Interview (on-site)

The sixth interview began at 12:45pm. It was a troubleshooting and networking interview. The interviewer drew a network diagram on the whiteboard and imagined a problem somewhere in it. I had to ask a bunch of specific networking questions to locate the problem. He was satisfied, and in the last few minutes of the interview he asked me some specific networking questions, like what's the difference between a router and a switch, and what's the OSI model.

Seventh Interview (on-site)

The seventh interview began at 1:30pm. It was a coding session. I was asked to implement a simple string manipulation subroutine that finds common characters in two C strings. I could use either C or C++. I chose C. Unfortunately I made an off-by-one mistake there - the most common programming mistake in the history of mankind. The whole interview focused on this one problem.

Eighth Interview (on-site)

The last, eighth, interview began at 2:15pm. It was an algorithms and data structures interview. The problem presented here was similar to the problem in the second interview: not only was the data too large to fit in computer memory, but it was also distributed. How do you sort data that doesn't fit in memory when you have 100 computers to sort it? I had to do all kinds of trickery to solve it. The interview was very free-style and we talked back and forth about the problem. I arrived at the correct solution near the end of the interview and he said that not many candidates get that far. I was also asked if I knew MapReduce, and of course I knew MapReduce, as I had read the Google paper. This was basically a MapReduce problem.

After the interview the engineer escorted me out to the lobby and I took a cab back to my hotel.

The End

Overall the Google interviews were super fun. I love trivia questions like the ones they asked in the interviews. The interview questions were technical but not very challenging or difficult.

Thanks for the opportunity Google!

Update: Now that my NDA has expired, here are some of the interview questions that I remember.

  • Tell me about one of the projects on your resume.
  • What technologies did you use to get this project going?
  • What if your project had 5000 or 50000 or 5000000 users?
  • What's an inode?
  • What's the angle between the clock hands when it's 3:15?
  • Write a C function that returns the angle between the clock hands for any (hour, minute).
  • Implement a binary tree.
  • How would you troubleshoot this problem? (A network diagram was presented.)
  • What's the difference between a router and a switch?
  • Implement a routine in C that counts the number of characters in a string.
  • Given a 100GB file and a computer with 1GB of memory, how would you sort it?
  • Can you make it parallel and solve it on 100 computers?
  • What's a priority queue?
  • How does BGP work?
  • Can DNS use TCP? In which cases does DNS use TCP?
  • Implement set difference in any language you like.
  • How does HTTP work?
  • How does 3 way handshake work in TCP?
  • What's void *?
  • What's the system call for creating files?
  • Order by execution time: reading disk, accessing memory, context switch, writing a cpu register.
This article is part of the article series "Musical Geek Friday."

Musical Geek Friday: Monzy - Kill Dash Nine

Musical Geek Friday is back! This week we have a geek song about the most cruel way to kill a process - killing it with kill -9!

The kill -9 song was written by Monzy. Monzy's real name is Dan Maynes-Aminzade and he is a Ph.D. student at Stanford. Here is an article on Wired about him.

In his songs, Monzy bashes another nerdcore hip-hop musician, MC Plus+. I actually published a geek song by MC Plus+ on a previous Musical Geek Friday - MGF #14: Alice and Bob.

The song's title has a lot of meaning. kill -9 sends the SIGKILL signal to a process. SIGKILL is one of the signals that can't be caught - the process just gets killed.
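
This is easy to verify on any Unix system: the kernel refuses to install a handler for SIGKILL, so a process has no way to intercept kill -9. A quick check in Python:

```python
import signal

# An ordinary signal like SIGTERM can be caught...
signal.signal(signal.SIGTERM, lambda signum, frame: None)

# ...but trying to install a handler for SIGKILL fails: the
# underlying sigaction() call is rejected by the kernel.
try:
    signal.signal(signal.SIGKILL, lambda signum, frame: None)
except OSError as e:
    print("cannot catch SIGKILL:", e)
```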

Here it is! The kill dash nine song. The song is strongly NSFW as it contains some explicit language.


Download this song: kill dash nine.mp3 (musical geek friday #15)
Downloaded: 17515 times

Download lyrics: kill dash nine lyrics (musical geek friday #15)
Downloaded: 7917 times

Kill dash nine lyrics (profanity filtered, see the "download lyrics" link above for uncensored version):

I guess I'll have to shut you down for good this time,
Already tried a SIGQUIT, so now it's KILL DASH 9.
You gotta learn when it's time for your thread to yield;
It shoulda slept; instead you stepped and now your fate is sealed.
I'll take your process off the run queue without even asking
'Cause my flow is like reentrant and preemptive multitasking.
Your sad rhymes are spinnin' like you're in a deadlock,
You're like a synchronous sock that don't know when to block;
So I pull out my keyboard and I pull out my glock,
And I dismount your girl and I mount /proc
And I've got your f**kin pid and the bottom line
Is that you best not front or else it's KILL DASH NINE.

No more CPU time.
And your process is mine.
'Cause it's MY time to shine
So don't step outta line or else it's

See it ain't about the Benjamins or Pentiums or Athlons,
But you rappin' 50 meters while I'm spittin' in decathlons.
Your s**t's old and busted, mine's the new hotness;
You're like CLR and I'm like CLRS.
You're running csh and my shell is bash,
You're the tertiary storage; I'm the L1 cache.
I'm a web crawling spider; you an Internet mosquito;
You thought the 7-layer model referred to a burrito.
You're a dialup connection; I'm a gigabit LAN.
I last a mythical man-month; you a one-minute man.
It's like I'm running Thunderbird and you're still stuck with Pine,
Which is why I think it's time for me to KILL DASH NINE.

No more CPU time.
'Cause it's KILL DASH NINE,
And your process is mine.
'Cause it's my time to shine,
So don't step outta line or else it's

My posse throws down like leaky bucket regulators;
I was coding s**t in MIPS while you were playing Space Invaders.
With my finger on the trigger I run ./configure
Yo, this package is big, but MY package is bigger.
I roll my weed with Zig Zag while I zag-zig splay,
And I do a bounds check before I write to an array.
I'm a loc'd out baller writing KLOCS a day,
'Cause it's publish or perish, fool, what can I say?
I'm 26 now, will I live to see 28?
Some days I wonder if I'll survive to graduate.
But hey, that's just fine, I won't ever resign,
And if fools try to step then it's KILL DASH NINE!

From my command line
Sending chills down your spine,
'Cause it's my time to shine,
So don't step outta line or else it's

fs sa rlidwka
I'll chown your home and take your access away
Comin' straight outta Stanford, ain't nobody tougher,
Control-X, Control-C, I'll discard your f**kin' buffer.

You're outside your scope, son, close them curly brackets,
'Cause I drop punk-a** b*****s like a modem drops packets.
Dump your m***********g core, and trace your stack
'Cause where your a** is going, there won't be no callback.
See my style is divine and my code is sublime,
My career's in a climb and yours is in a decline.

I'll write a pound-define and assign you as mine,
So refine those sad rhymes or remove your plus signs,

No more CPU time,
'Cause it's KILL DASH NINE,
And your process is mine,
'Cause it's my time to shine,
Bitch you stepped outta line and now it's

Here is Monzy performing the song at Stanford University:


Have fun and until next geeky Friday!

This article is part of the article series "MIT Introduction to Algorithms."

MIT Algorithms

This is the ninth post in an article series about MIT's lecture course "Introduction to Algorithms." In this post I will review lectures thirteen and fourteen. They are on the theoretical topics of Amortized Analysis, Competitive Analysis and Self-Organizing Lists.

Amortized analysis is a tool for analyzing algorithms that perform a sequence of similar operations. It can be used to show that the average cost of an operation is small, if one averages over a sequence of operations, even though a single operation within the sequence might be expensive. Amortized analysis differs from average-case analysis in that probability is not involved; an amortized analysis guarantees the average performance of each operation in the worst case.

Lecture thirteen looks at the three most common techniques used in amortized analysis - Aggregate Analysis, the Accounting Method (also known as the Taxation Method) and the Potential Method.

Competitive analysis is a method that is used to show how an Online Algorithm compares to an Offline Algorithm. An algorithm is said to be online if it does not know the data it will be executing on beforehand. An offline algorithm may see all of the data in advance.

A self-organizing list is a list that reorders its elements based on some self-organizing heuristic to improve average access time.

Lecture fourteen gives a precise definition of what it means for an algorithm to be competitive and shows a heuristic for a self-organizing list that makes it 4-competitive.

Lecture 13: Amortized Algorithms

Lecture thirteen starts with a question - "How large should a hash table be?" Well, we could make it as large as possible to minimize the search time, but then it might take too much space. We could then try to make it as small as possible, but what if we don't know the number of elements that will be hashed into it? The solution is to use a dynamic hash table that grows whenever it is about to overflow.

This creates a problem: at the moment of overflow the table must be grown, and growing it might take a lot of time. Thus, we need a way to analyze the running time in the long run.

The lecture continues with three methods for this analysis, called amortized arguments - the aggregate method, the accounting method, and the potential method.

In aggregate analysis, we determine an upper bound T(n) on the total cost of a sequence of n operations. The average cost per operation is then T(n)/n. We take the average cost as the amortized cost of each operation, so that all operations have the same amortized cost.

In the accounting method we determine an amortized cost for each operation. When there is more than one type of operation, each type may have a different amortized cost. The accounting method overcharges some operations early in the sequence, storing the overcharge as "prepaid credit" on specific objects in the data structure. The credit is used later in the sequence to pay for operations that are charged less than they actually cost.

The potential method is like the accounting method in that we determine the amortized cost of each operation and may overcharge operations early on to compensate for undercharges later. The potential method maintains the credit as the "potential energy" of the data structure as a whole, instead of associating the credit with individual objects within the data structure.

Each method is applied to dynamic tables to show that average cost per insert is O(1).
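
The table-doubling bound is easy to check empirically. With doubling, n inserts cost at most n element writes plus 1 + 2 + 4 + ... copies (a geometric sum below 2n), so the total stays under 3n and the amortized cost per insert is O(1). A sketch that just counts the work (a simplified cost model, not any real hash table implementation):

```python
def count_insert_cost(n):
    """Total element writes for n inserts into a table that
    doubles its capacity whenever it is full."""
    capacity, size, cost = 1, 0, 0
    for _ in range(n):
        if size == capacity:   # table full: allocate double the space
            cost += size       # copying the old elements over
            capacity *= 2
        cost += 1              # writing the new element
        size += 1
    return cost

for n in (10, 1000, 100000):
    print(n, count_insert_cost(n) / n)  # the average stays below 3
```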

You're welcome to watch lecture thirteen:

Topics covered in lecture thirteen:

  • [01:05] How large should a hash table be?
  • [04:45] Dynamic tables.
  • [06:15] Algorithm of dynamic hash tables.
  • [07:10] Example of inserting elements in a dynamic array.
  • [09:35] Wrong analysis of time needed for n insert operations.
  • [12:15] Correct analysis of n insert operations using aggregate analysis method.
  • [19:10] Definition of amortized analysis.
  • [21:20] Types of amortized arguments.
  • [23:55] Accounting method (taxation method) amortized analysis.
  • [29:00] Dynamic table analysis with accounting (taxation) method.
  • [40:45] Potential method amortized analysis.
  • [54:15] Table doubling analysis with potential method.
  • [01:13:10] Conclusions about amortized costs, methods and bounds.

Lecture thirteen notes:

MIT Algorithms Lecture 13 Notes Thumbnail. Page 1 of 2.
Lecture 13, page 1 of 2.

MIT Algorithms Lecture 13 Notes Thumbnail. Page 2 of 2.
Lecture 13, page 2 of 2.

Lecture 14: Competitive Analysis and Self-Organizing Lists

Lecture fourteen starts with the notion of self-organizing lists. A self-organizing list is a list that reorders itself to improve the average access time. The goal is to find a reordering that minimizes the total access time.

At this point the lecture steps away from this problem for a moment and defines online algorithms and offline algorithms: Given a sequence S of operations that algorithm must execute, an online algorithm must execute each operation immediately but an offline algorithm may see all of S in advance.

Now the lecture returns to the self-organizing list problem and looks at two heuristics for how the list might reorganize itself to minimize the access time. The first is to keep a count of the number of times each element in the list is accessed and reorder the list in order of decreasing count. The other is the move-to-front (MTF) heuristic - after accessing an element, move it to the front of the list.
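
The move-to-front heuristic itself is only a few lines. In the lecture's cost model, accessing the element at position i costs i; a sketch in Python:

```python
class MTFList:
    """Self-organizing list using the move-to-front heuristic."""
    def __init__(self, items):
        self.items = list(items)

    def access(self, x):
        """Return the access cost (the element's 1-based position),
        then move the accessed element to the front."""
        i = self.items.index(x)
        self.items.insert(0, self.items.pop(i))
        return i + 1

lst = MTFList("ABCDE")
print(lst.access("D"))  # 4; the list is now D, A, B, C, E
print(lst.access("D"))  # 1; repeated accesses become cheap
```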

Next the lecture defines what it means for an algorithm to be competitive. An online algorithm is said to be α-competitive if there is a constant k such that, for any sequence of operations, (cost of the online algorithm) <= α·(cost of an optimal offline algorithm, also called "God's algorithm") + k.

The lecture continues with a theorem that move-to-front is 4-competitive for self-organizing lists. The proof of this theorem takes almost an hour!

Topics covered in lecture fourteen:

  • [01:00] Definition of self-organizing lists.
  • [03:00] Example of a self-organizing list (illustrates transpose cost and access cost).
  • [05:00] Definition of on-line and off-line algorithms.
  • [08:45] Worst case analysis of self-organizing lists.
  • [11:40] Average case analysis of self-organizing lists.
  • [17:05] Move-to-front heuristic.
  • [20:35] Definition of competitive analysis.
  • [23:35] Theorem: MTF is 4-competitive.
  • [25:50] Proof of this theorem.

Lecture fourteen notes:

MIT Algorithms Lecture 14 Notes Thumbnail. Page 1 of 2.
Lecture 14, page 1 of 2.

MIT Algorithms Lecture 14 Notes Thumbnail. Page 2 of 2.
Lecture 14, page 2 of 2.

Have fun with amortized and competitive analysis! The next post will be about dynamic programming!

PS. This course is taught from the CLRS book (also called "Introduction to Algorithms").

These two posters show the difference between Professor Edsger Dijkstra and Larry Wall, the author of the Perl programming language.

Edsger Dijkstra, Quick and Dirty
Edsger Dijkstra - Quick and Dirty, I Wouldn't Like It.

Larry Wall, Quick and Dirty
Larry Wall - Quick and Dirty, I'd Hit It.

See also what John McCarthy would say about your code.