My Job Interview at Google

Two weeks ago I had an on-site interview at Google in Mountain View, California! The interview was an interesting experience and I want to tell you about it. (I got the green light from Google to publish this article.)

The position I was interviewing for was SRE, which stands for Site Reliability Engineering. Site reliability engineers (SREs) are both software engineers and systems administrators, responsible for Google's production services end to end.

There were eight separate interviews in total. The first three were over the phone and the remaining five were on-site. The first interview was with the recruiter and was not very technical, but the other seven were very technical.

All the interviews went very well, but I was just notified that I did not get hired. Oh well... I personally think that I did really well. I answered all the questions, but it seems they were not satisfied. The recruiter did not tell me the precise reason, just that the morning interviews were not that great and that I should get more experience before working on their mission-critical team.

Update: This article has been translated into Japanese.
Update: This article has been translated into German.

Here is how it all happened.

Shortly after I published the "Code Reuse in Google Chrome" post I was contacted by a recruiter at Google. The email said:

I recruit top notch Software Engineering talent at Google. I recently came across your name as a possible world class Engineer and am intrigued to know more about you. I promise to exchange some detailed info about us as well.

Interested to hear more? Want to be an impact player at Google? Then please respond with a current (English) copy of your resume and I'll be happy to call you and discuss.

At first I thought I would be applying for a software developer position, but after we went through my skill set, the recruiter concluded that I would be a better fit as an SRE. I agreed with him. This seemed like a perfect position for me. I love systems administration as much as I love programming.

First Interview (phone)

The first interview was on the 10th of September with the recruiter. He explained the Google recruitment process to me and we went through my skill set. I had to rank myself from 0 - 10 in a bunch of areas such as C programming, C++ programming, Python programming, networking, algorithms and data structures, distributed systems, Linux systems administration, and others.

As I said, based on my answers we concluded that SRE was the best position for me. An SRE basically has to know everything: algorithms, data structures, programming, networking, distributed systems, scalable architecture, troubleshooting. It's a great hacker position!

After these questions he asked me where I would like to work - the Google office in Ireland, Zurich, Mountain View, or Australia. I said Mountain View, as that's where the Googleplex is! He explained that if the interviews went OK, I'd have to get an H-1B visa, which allows non-US citizens to work in the US.

The second half of the interview had some basic technical questions, just to make sure I knew something. The questions were about Linux systems administration, algorithms, computer architecture and C programming. I can't go into any details because I signed a non-disclosure agreement and my recruiter kindly asked me not to post the questions!

I made some factual mistakes but he was satisfied and we scheduled the next phone interview. He warned me that it would be very technical and that I should prepare really well. I asked him to give me plenty of time for preparation and we scheduled the next interview for the 22nd of September.

He also told me that each phone interview would be 45 minutes to an hour long.

I started preparing like crazy. I found three presentations on what SRE is all about:

Then I found all the other blog posts about interviews and interview questions at Google:

I printed and read four Google research papers:

I also went through several books:

As I did not know if I might get specific programming language questions, I went through a few dozen recipes in C++ Cookbook, Python Cookbook, and Perl Cookbook.

Second Interview (phone)

The second phone interview was with an engineer from Google. He worked on the Ads team which is responsible for running AdSense, AdWords and other advertisement stuff.

The interview was very technical and started with an algorithmic problem whose input was too large to fit in computer memory. I had to tell him precisely how I would get around this limitation and what data structures and algorithms I would use. He also asked me to think out loud. The interview continued with questions about data structures, DNS, the TCP protocol, a security vulnerability associated with TCP, networking in general, and Google itself. Sorry, but I can't disclose anything in more detail.

After the interview the engineer had to write feedback on me. It was positive and I could move on with the interviews.

Third Interview (phone)

I gave myself more time to prepare and the third interview was on the 1st of October. It was with an engineer from the Google traffic team.

In this interview I had a very simple programming question and I had to code over the phone. I was free to choose the language and I chose Perl, as it is my favorite programming language. It was impossible to dictate Perl syntax over the phone - "for my dollar sign element open paren at data close paren open curly brace ... close curly brace" - so I submitted my Perl program by email.

Then the same problem was taken to the next level: what if the data we are working on is gigabytes or even terabytes in size? How would my program and solution change?

Finally I had a question about DNS again, then the HTTP protocol, routing, and TCP data transfer.

The feedback was positive and I could prepare for the on-site interviews. In a conversation with my recruiter I learned that there would be five on-site interviews, each exactly 45 minutes long: one on my previous work experience, one on algorithms and data structures, one on troubleshooting and networking, and two on software development with a focus on C and C++.

My recruiter suggested that I read a few more documents:

I flew out from Latvia to the USA on the 24th of October at 1pm and arrived in California at 8pm. The flight actually took 14 hours, but it was nice that I was flying west with the sun, so by the local clock it saved me 7 hours. The on-site interview was scheduled for the 27th of October, so I had a good rest before it. It was also nice that Google paid for my trip, hotel, cab and food. I had zero expenses!

Fourth Interview (on-site)

The fourth interview was finally at the Googleplex! At 10am I met my recruiter and we had a 15-minute discussion about the interviews. He told me I would have two interviews now, then one of Google's engineers would take me to lunch at one of Google's restaurants, and then I would have three more interviews.

At 10:15am the first on-site interview began. It was about my previous job experience. I have had a lot of job experience and I decided to talk about a physical security notification system that I coded in C on Linux a few years ago. The system would receive messages through the serial port and send out emails and SMS messages.

In the last minutes of the interview he asked me some basic Unix filesystem questions.

In all the on-site interviews I was writing and drawing on two big whiteboards. Fun!

Fifth Interview (on-site)

The fifth interview began at 11am. It was a coding session and began with a trick question rather than a real coding problem. I was asked to implement the solution in C. The solution was a mathematical expression that fit in a one-line return statement. No big coding there. Then I was asked to write an implementation of a very well known data structure. While coding I made a mistake and forgot to initialize part of a data structure that I had malloc()'ed! The program would have segfaulted in real life and I would have noticed the error, but Google engineers are very serious about this sort of thing! If you have an interview, don't ever make any mistakes!
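To illustrate the kind of mistake I mean, here is a minimal, hypothetical C sketch (not the data structure from the interview, which I can't disclose) of allocating a structure and forgetting to initialize one of its fields:

    #include <stdio.h>
    #include <stdlib.h>

    /* A hypothetical linked-list node - not the structure from the interview. */
    struct node {
        int value;
        struct node *next;
    };

    struct node *make_node(int value)
    {
        struct node *n = malloc(sizeof(*n));
        if (n == NULL)
            return NULL;
        n->value = value;
        /* The mistake: leaving this line out means n->next points at garbage,
           and the first traversal that follows it will likely segfault. */
        n->next = NULL;
        return n;
    }

    int main(void)
    {
        struct node *n = make_node(42);
        if (n != NULL)
            printf("%d\n", n->value);
        free(n);
        return 0;
    }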

After this interview I was taken to lunch by the engineer who interviewed me in the second (phone) interview. She told me she had been working at Google for two years and was very happy about it. We went to an Asian food restaurant (located in the Googleplex) and I had all kinds of delicious food. All free!

Then she showed me around the Googleplex. It was all amazing. Free drinks and candy everywhere, some arcade machines, a beach volleyball court outside, and many other surprising things.

Sixth Interview (on-site)

The sixth interview began at 12:45pm. It was a troubleshooting and networking interview. The interviewer drew a network diagram on the whiteboard and imagined a problem somewhere in it. I had to ask a series of specific networking questions to locate the problem. He was satisfied, and in the last few minutes of the interview he asked me some questions about specific networking devices.

Seventh Interview (on-site)

The seventh interview began at 1:30pm. It was a coding session. I was asked to implement a simple string manipulation subroutine in either C or C++. I chose C. Unfortunately I made an off-by-one mistake there - the most common programming mistake in the history of mankind. The whole interview focused on this one problem.
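As an aside, here is a minimal, hypothetical sketch of that classic mistake in C string handling (not the actual interview problem) - forgetting that a C string needs one extra byte for the terminating '\0':

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Copy a string into a freshly allocated buffer. */
    char *copy_string(const char *s)
    {
        size_t len = strlen(s);
        /* The classic off-by-one is writing malloc(len) here, which leaves
           no room for the terminating '\0'. The correct size is len + 1. */
        char *copy = malloc(len + 1);
        if (copy == NULL)
            return NULL;
        memcpy(copy, s, len + 1);   /* copies the '\0' as well */
        return copy;
    }

    int main(void)
    {
        char *s = copy_string("hello");
        if (s != NULL) {
            printf("%s\n", s);
            free(s);
        }
        return 0;
    }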

Eighth Interview (on-site)

The last, eighth, interview began at 2:15pm. It was an algorithms and data structures interview. The problem presented here was similar to the problem in the second interview. Not only was the data too large to fit in computer memory, it was also distributed. So I had to do all kinds of trickery to solve it. The interview was very free-style and we talked back and forth about the problem. I arrived at the correct solution near the end of the interview and he said that not many candidates get that far. I was happy.

After the interview the engineer escorted me out to the lobby and I took a cab back to my hotel. That's it! :)

The End

Overall the Google interviews were pure fun for me. The interview questions were technical but not very challenging or difficult.

Thanks for the opportunity Google! :)

This article is part of the article series "Musical Geek Friday."

Musical Geek Friday: Monzy - Kill Dash Nine

Musical Geek Friday is back! This week we have a geek song about the cruelest way to kill a process - killing it with kill -9!

The Kill -9 song was written by Monzy. Monzy's real name is Dan Maynes-Aminzade and he is a Ph.D. student at Stanford. Here is an article on Wired about him.

In his songs, Monzy bashes another nerdcore hip-hop musician, MC Plus+. I actually published a geek song by MC Plus+ on a previous Musical Geek Friday - MGF #14: Alice and Bob.

The song's title has a lot of meaning. kill -9 sends the SIGKILL signal to a process. It's one of the signals that can't be caught or ignored - the process just gets killed.
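As a quick illustration (my own sketch, not part of the song): the kernel won't even let you install a handler for SIGKILL - sigaction() rejects the attempt with EINVAL, while a catchable signal like SIGTERM works fine:

    #include <stdio.h>
    #include <string.h>
    #include <signal.h>
    #include <errno.h>

    static void handler(int sig)
    {
        (void)sig;   /* would run for SIGTERM, but never for SIGKILL */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = handler;

        /* SIGTERM can be caught... */
        if (sigaction(SIGTERM, &sa, NULL) == 0)
            printf("SIGTERM handler installed\n");

        /* ...but SIGKILL cannot - this call fails with EINVAL. */
        if (sigaction(SIGKILL, &sa, NULL) == -1)
            printf("SIGKILL handler rejected: %s\n", strerror(errno));

        return 0;
    }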

Here it is! The kill dash nine song. The song is strongly NSFW as it contains some explicit language.

[audio:http://www.catonmat.net/download/monzy-kill_dash_nine.mp3]

Download this song: kill dash nine.mp3 (musical geek friday #15)
Downloaded: 16414 times

Download lyrics: kill dash nine lyrics (musical geek friday #15)
Downloaded: 3797 times

Kill dash nine lyrics (profanity filtered, see the "download lyrics" link above for uncensored version):

I guess I'll have to shut you down for good this time,
Already tried a SIGQUIT, so now it's KILL DASH 9.
You gotta learn when it's time for your thread to yield;
It shoulda slept; instead you stepped and now your fate is sealed.
I'll take your process off the run queue without even asking
'Cause my flow is like reentrant and preemptive multitasking.
Your sad rhymes are spinnin' like you're in a deadlock,
You're like a synchronous sock that don't know when to block;
So I pull out my keyboard and I pull out my glock,
And I dismount your girl and I mount /proc
And I've got your f**kin pid and the bottom line
Is that you best not front or else it's KILL DASH NINE.

KILL DASH NINE,
No more CPU time.
I run KILL DASH NINE,
And your process is mine.
I run KILL DASH NINE,
'Cause it's MY time to shine
So don't step outta line or else it's
KILL DASH NINE!

See it ain't about the Benjamins or Pentiums or Athlons,
But you rappin' 50 meters while I'm spittin' in decathlons.
Your s**t's old and busted, mine's the new hotness;
You're like CLR and I'm like CLRS.
You're running csh and my shell is bash,
You're the tertiary storage; I'm the L1 cache.
I'm a web crawling spider; you an Internet mosquito;
You thought the 7-layer model referred to a burrito.
You're a dialup connection; I'm a gigabit LAN.
I last a mythical man-month; you a one-minute man.
It's like I'm running Thunderbird and you're still stuck with Pine,
Which is why I think it's time for me to KILL DASH NINE.

Yeah it's KILL DASH NINE
No more CPU time.
'Cause it's KILL DASH NINE,
And your process is mine.
I said KILL DASH NINE
'Cause it's my time to shine,
So don't step outta line or else it's
KILL DASH NINE!

My posse throws down like leaky bucket regulators;
I was coding s**t in MIPS while you were playing Space Invaders.
With my finger on the trigger I run ./configure
Yo, this package is big, but MY package is bigger.
I roll my weed with Zig Zag while I zag-zig splay,
And I do a bounds check before I write to an array.
I'm a loc'd out baller writing KLOCS a day,
'Cause it's publish or perish, fool, what can I say?
I'm 26 now, will I live to see 28?
Some days I wonder if I'll survive to graduate.
But hey, that's just fine, I won't ever resign,
And if fools try to step then it's KILL DASH NINE!

Yeah it's KILL DASH NINE,
From my command line
It's KILL DASH NINE
Sending chills down your spine,
I said KILL DASH NINE,
'Cause it's my time to shine,
So don't step outta line or else it's
KILL DASH NINE!

fs sa rlidwka
I'll chown your home and take your access away
Comin' straight outta Stanford, ain't nobody tougher,
Control-X, Control-C, I'll discard your f**kin' buffer.

You're outside your scope, son, close them curly brackets,
'Cause I drop punk-a** b*****s like a modem drops packets.
Dump your m***********g core, and trace your stack
'Cause where your a** is going, there won't be no callback.
See my style is divine and my code is sublime,
My career's in a climb and yours is in a decline.

I'll write a pound-define and assign you as mine,
So refine those sad rhymes or remove your plus signs,

Or it's KILL DASH NINE,
No more CPU time,
'Cause it's KILL DASH NINE,
And your process is mine,
I said KILL DASH NINE
'Cause it's my time to shine,
Bitch you stepped outta line and now it's
KILL DASH NINE!

Here is Monzy performing the song at Stanford University:

Download "kill -9" Song

Download this song: kill dash nine.mp3 (musical geek friday #15)
Downloaded: 16414 times

Download lyrics: kill dash nine lyrics (musical geek friday #15)
Downloaded: 3797 times

Click to listen:
[audio:http://www.catonmat.net/download/monzy-kill_dash_nine.mp3]

Have fun and until next geeky Friday! :)

This article is part of the article series "MIT Introduction to Algorithms."

MIT Algorithms

This is the ninth post in an article series about MIT's lecture course "Introduction to Algorithms." In this post I will review lectures thirteen and fourteen, which cover the theoretical topics of amortized analysis, competitive analysis, and self-organizing lists.

Amortized analysis is a tool for analyzing algorithms that perform a sequence of similar operations. It can be used to show that the average cost of an operation is small, if one averages over a sequence of operations, even though a single operation within the sequence might be expensive. Amortized analysis differs from average-case analysis in that probability is not involved; an amortized analysis guarantees the average performance of each operation in the worst case.

Lecture thirteen looks at the three most common techniques used in amortized analysis - aggregate analysis, the accounting method (also known as the taxation method), and the potential method.

Competitive analysis is a method that is used to show how an Online Algorithm compares to an Offline Algorithm. An algorithm is said to be online if it does not know the data it will be executing on beforehand. An offline algorithm may see all of the data in advance.

A self-organizing list is a list that reorders its elements based on some self-organizing heuristic to improve average access time.

Lecture fourteen gives a precise definition of what it means for an algorithm to be competitive and shows a heuristic for a self-organizing list that makes it 4-competitive.

Lecture 13: Amortized Algorithms

Lecture thirteen starts with a question - "How large should a hash table be?" Well, we could make it as large as possible to minimize the search time, but then it might take too much space. We could then try to make it as small as possible, but what if we don't know the number of elements that will be hashed into it? The solution is to use a dynamic hash table that grows whenever it is about to overflow.

This creates a problem: at the moment of overflow the table must be grown, and growing it might take a lot of time. Thus, we need a way to analyze the running time over a long sequence of operations.

The lecture continues with three methods for this analysis, called amortized arguments - the aggregate method, the accounting method, and the potential method.

In aggregate analysis, we determine an upper bound T(n) on the total cost of a sequence of n operations. The average cost per operation is then T(n)/n. We take the average cost as the amortized cost of each operation, so that all operations have the same amortized cost.
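For example, here is a sketch of the standard aggregate-analysis calculation for a table that doubles its size whenever it fills up, using the usual convention that a plain insert costs 1 and an insert that triggers an expansion also pays to copy the existing elements:

    T(n) = \sum_{i=1}^{n} c_i
         \le n + \sum_{j=0}^{\lfloor \lg n \rfloor} 2^j
         < n + 2n = 3n,
    \qquad \text{so the amortized cost per insert is } \frac{T(n)}{n} < 3 = O(1).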

In accounting method we determine an amortized cost of each operation. When there is more than one type of operation, each type of operation may have a different amortized cost. The accounting method overcharges some operations early in the sequence, storing the overcharge as "prepaid credit" on specific objects in the data structure. The credit is used later in the sequence to pay for operations that are charged less than they actually cost.
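Applied to the same doubling table (again a sketch of the standard argument): charge each insert an amortized $3 - $1 pays for inserting the element itself, $1 is stored on the element to pay for copying it at the next expansion, and $1 pays for copying an older element whose credit has already been spent. The stored credit never goes negative, so the total amortized cost bounds the total true cost:

    \sum_{i=1}^{n} c_i \;\le\; \sum_{i=1}^{n} \hat{c}_i \;=\; 3n,
    \qquad \text{i.e. } O(1) \text{ amortized per insert.}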

Potential method is like the accounting method in that we determine the amortized cost of each operation and may overcharge operations early on to compensate for undercharges later. The potential method maintains the credit as the "potential energy" of the data structure as a whole instead of associating the credit with individual objects within the data structure.
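For the doubling table the standard choice of potential function, and the resulting amortized cost of an insert (at most 3 whether or not the insert triggers an expansion), are:

    \Phi(T) = 2 \cdot \mathit{num}(T) - \mathit{size}(T),
    \qquad
    \hat{c}_i = c_i + \Phi(T_i) - \Phi(T_{i-1}) \le 3 = O(1).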

In the lecture each of these methods is applied to dynamic tables to show that the amortized cost per insert is O(1).
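Here is a minimal C sketch of such a doubling dynamic table - my own illustration of the scheme the lecture analyzes, not code from the lecture:

    #include <stdio.h>
    #include <stdlib.h>

    /* A dynamic table of ints that doubles its capacity when full. */
    struct table {
        int *items;
        size_t num;    /* number of elements stored    */
        size_t size;   /* current capacity (0 = empty) */
    };

    /* Insert x, doubling the table when it is full.
       Returns 0 on success, -1 on allocation failure. */
    int table_insert(struct table *t, int x)
    {
        if (t->num == t->size) {
            size_t new_size = t->size ? 2 * t->size : 1;
            int *p = realloc(t->items, new_size * sizeof(*p));
            if (p == NULL)
                return -1;
            t->items = p;
            t->size = new_size;   /* the O(num) copy is hidden inside realloc */
        }
        t->items[t->num++] = x;
        return 0;
    }

    int main(void)
    {
        struct table t = { NULL, 0, 0 };
        for (int i = 0; i < 100; i++)
            table_insert(&t, i);
        printf("num = %zu, size = %zu\n", t.num, t.size);  /* num = 100, size = 128 */
        free(t.items);
        return 0;
    }

Most inserts just store the element; only the inserts that land exactly on a power of two pay for a full copy, which is why all three amortized arguments give a constant cost per insert.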

You're welcome to watch lecture thirteen:

Topics covered in lecture thirteen:

  • [01:05] How large should a hash table be?
  • [04:45] Dynamic tables.
  • [06:15] Algorithm of dynamic hash tables.
  • [07:10] Example of inserting elements in a dynamic array.
  • [09:35] Wrong analysis of time needed for n insert operations.
  • [12:15] Correct analysis of n insert operations using aggregate analysis method.
  • [19:10] Definition of amortized analysis.
  • [21:20] Types of amortized arguments.
  • [23:55] Accounting method (taxation method) amortized analysis.
  • [29:00] Dynamic table analysis with accounting (taxation) method.
  • [40:45] Potential method amortized analysis.
  • [54:15] Table doubling analysis with potential method.
  • [01:13:10] Conclusions about amortized costs, methods and bounds.

Lecture thirteen notes:

MIT Algorithms Lecture 13 Notes Thumbnail. Page 1 of 2.
Lecture 13, page 1 of 2.

MIT Algorithms Lecture 13 Notes Thumbnail. Page 2 of 2.
Lecture 13, page 2 of 2.

Lecture 14: Competitive Analysis and Self-Organizing Lists

Lecture fourteen starts with the notion of self-organizing lists. A self-organizing list is a list that reorders itself to improve the average access time. The goal is to find a reordering that minimizes the total access time.

At this point the lecture steps away from this problem for a moment and defines online and offline algorithms: given a sequence S of operations that an algorithm must execute, an online algorithm must execute each operation immediately, but an offline algorithm may see all of S in advance.

Now the lecture returns to the self-organizing list problem and looks at two heuristics for how the list might reorganize itself to minimize access time. The first is to keep a count of the number of times each element in the list is accessed and reorder the list in order of decreasing count. The other is the move-to-front (MTF) heuristic - after accessing an element, move it to the front of the list.
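Here is a minimal C sketch of the move-to-front heuristic on a singly linked list (my own illustration with integer keys, not code from the lecture):

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        int key;
        struct node *next;
    };

    /* Push a key onto the front of the list. */
    struct node *push_front(struct node *head, int key)
    {
        struct node *n = malloc(sizeof(*n));
        if (n == NULL)
            exit(1);
        n->key = key;
        n->next = head;
        return n;
    }

    /* Search for key; if found, unlink it and move it to the front.
       Returns the (possibly new) head of the list. */
    struct node *access_mtf(struct node *head, int key)
    {
        struct node *prev = NULL, *cur = head;
        while (cur != NULL && cur->key != key) {
            prev = cur;
            cur = cur->next;
        }
        if (cur == NULL || prev == NULL)   /* not found, or already at the front */
            return head;
        prev->next = cur->next;            /* unlink */
        cur->next = head;                  /* move to front */
        return cur;
    }

    int main(void)
    {
        struct node *list = NULL;
        for (int k = 5; k >= 1; k--)
            list = push_front(list, k);    /* list: 1 2 3 4 5 */

        list = access_mtf(list, 4);        /* list: 4 1 2 3 5 */

        for (struct node *p = list; p != NULL; p = p->next)
            printf("%d ", p->key);
        printf("\n");
        return 0;
    }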

Next the lecture defines what it means for an algorithm to be competitive. It is said that an online algorithm is α-competitive if there is a constant k such that for any sequence of operations the (running time cost of online algorithm) <= α·(running time cost of an optimal offline algorithm (also called "God's algorithm")) + k.
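Written as a formula, where C_A(S) is the cost of online algorithm A on an operation sequence S and C_OPT(S) is the cost of the optimal offline algorithm (this notation is mine, not the lecture's):

    C_A(S) \;\le\; \alpha \cdot C_{OPT}(S) + k
    \qquad \text{for every operation sequence } S.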

The lecture continues with a theorem that move-to-front is 4-competitive for self-organizing lists. The proof of this theorem takes almost an hour!

Topics covered in lecture fourteen:

  • [01:00] Definition of self-organizing lists.
  • [03:00] Example of a self-organizing list (illustrates transpose cost and access cost).
  • [05:00] Definition of on-line and off-line algorithms.
  • [08:45] Worst case analysis of self-organizing lists.
  • [11:40] Average case analysis of self-organizing lists.
  • [17:05] Move-to-front heuristic.
  • [20:35] Definition of competitive analysis.
  • [23:35] Theorem: MTF is 4-competitive.
  • [25:50] Proof of this theorem.

Lecture fourteen notes:

MIT Algorithms Lecture 14 Notes Thumbnail. Page 1 of 2.
Lecture 14, page 1 of 2.

MIT Algorithms Lecture 14 Notes Thumbnail. Page 2 of 2.
Lecture 14, page 2 of 2.

Have fun with amortized and competitive analysis! The next post will be about dynamic programming!

PS. This course is taught from the CLRS book (also called "Introduction to Algorithms"):

These two posters show the difference between professor Edsger Dijkstra and the author of the Perl programming language, Larry Wall.

Edsger Dijkstra, Quick and Dirty
Edsger Dijkstra - Quick and Dirty, I Wouldn't Like It.

Larry Wall, Quick and Dirty
Larry Wall - Quick and Dirty, I'd Hit It.

See also what John McCarthy would say about your code.

Edsger Dijkstra

I found a great video interview with Edsger Wybe Dijkstra. You have probably heard of Dijkstra's algorithm. He invented it.

In the interview professor Dijkstra talks about his thoughts on software development. He compares two very different styles of programming - the Mozart style of programming versus the Beethoven style of programming. When Mozart started to write, the composition was already finished in his head. He wrote the manuscript in elegant handwriting in one go. Beethoven was a doubter and a struggler. He started writing before he had finished the composition and then glued corrections onto the page. In one place he did this nine times. When the corrections were peeled off, the last version proved to be identical to the first one.

From the video one can understand that Edsger preferred Mozart's style of programming. Not just programming, but Mozart style of doing things. He says that the most important thing has been the daily discipline of neatly writing down his thoughts.

His daily discipline led to hundreds of crystal-clear scientific papers, which have now been archived in the EWD Archive.

You are welcome to watch the interview with Edsger Dijkstra:

At the beginning of the video Dijkstra criticizes current software release methodology. He says that version 1.0 should be the finished product. I don't think he's right. It's like Tanenbaum telling Torvalds that Linux is obsolete. Also see "Release Early, Release Often."

Edsger Dijkstra's quotes from video:

  • Computer science is no more about computers than astronomy is about telescopes.
  • The competent programmer is fully aware of the limited size of his own skull. He therefore approaches his task in full humility and avoids clever tricks like the plague.
  • We should not introduce errors through sloppiness but systematically keep them out.
  • Program testing can convincingly show the presence of bugs but it is hopelessly inadequate to show their absence.
  • Elegance is not a dispensable luxury but a factor that decides between success and failure.

I found a funny poster of Dijkstra:

Dijkstra - Quick and Dirty