This article is part of the article series "MIT Introduction to Algorithms."

This is the fourth post in an article series about MIT's lecture course "Introduction to Algorithms." In this post I will review lecture six, which is on the topic of Order Statistics.

The problem of order statistics can be described as follows. Given a set of N elements, find the k-th smallest element in it. For example, the minimum of a set is the first order statistic (k = 1) and the maximum is the N-th order statistic (k = N).

Lecture six addresses the problem of selecting the k-th order statistic from a set of N distinct numbers.

The lecture begins by discussing the naive algorithm for finding the k-th order statistic -- sort the given set (or array) A of numbers and return the k-th element A[k]. The running time of such an algorithm is at best O(n·log(n)), because of the sorting step.

Erik Demaine (professor) presents a randomized divide and conquer algorithm for this problem (my second post covers divide and conquer algorithms), which runs in expected linear time.

The key idea of this algorithm is to call the Randomized-Partition subroutine (from the previous lecture) on the given array and then recurse only into the partition -- to the left or to the right of the chosen pivot -- that contains the k-th smallest element.

Here is the pseudocode of this algorithm:

Randomized-Select(A, p, q, k) // finds k-th smallest element in array A[p..q]
  if p == q
    return A[p]
  r = Randomized-Partition(A, p, q)
  n = r - p + 1
  if k == n
    return A[r]
  if k < n
    return Randomized-Select(A, p, r-1, k)
  else
    return Randomized-Select(A, r+1, q, k-n)
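
The pseudocode is close to runnable already. Here is a direct Python translation of it (a minimal sketch, not code from the lecture: indices are 0-based, k is still the 1-based rank, and the elements are assumed to be distinct, as in the lecture):

import random

def randomized_partition(a, p, q):
    # Swap a random element into position p, then partition a[p..q] around it in place.
    r = random.randint(p, q)
    a[p], a[r] = a[r], a[p]
    x = a[p]
    i = p
    for j in range(p + 1, q + 1):
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[p], a[i] = a[i], a[p]
    return i

def randomized_select(a, p, q, k):
    # Return the k-th smallest element (1-based) of a[p..q] in expected linear time.
    if p == q:
        return a[p]
    r = randomized_partition(a, p, q)
    n = r - p + 1  # rank of the pivot within a[p..q]
    if k == n:
        return a[r]
    elif k < n:
        return randomized_select(a, p, r - 1, k)
    else:
        return randomized_select(a, r + 1, q, k - n)

# The 7th smallest element of the lecture's example array (6, 10, 13, 5, 8, 3, 2, 11) is 11.
print(randomized_select([6, 10, 13, 5, 8, 3, 2, 11], 0, 7, 7))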

The lecture continues with an intuitive and then a mathematically rigorous analysis of the algorithm. It is concluded that the expected running time of this algorithm is O(n), while the worst case (when Randomized-Partition keeps choosing bad pivots) is O(n²).

In the last minutes of the lecture a worst-case linear time order statistics algorithm is presented. This algorithm was invented by five scientists -- M. Blum, R. W. Floyd, V. Pratt, R. Rivest and R. Tarjan. I found the original publication of this algorithm by these gentlemen: Time Bounds for Selection.
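
The lecture only sketches how that algorithm works, so here is a compact (and simplified) Python sketch of the median-of-medians idea behind it -- my own illustration, not the lecture's code. Split the input into groups of 5, take the median of each group, recursively find the median of those medians, and partition around it; this guarantees a reasonably balanced split at every step. It assumes distinct elements and a 1-based k:

def select(a, k):
    # Return the k-th smallest element (1-based) of list a in worst-case linear time.
    if len(a) <= 5:
        return sorted(a)[k - 1]
    # Median of each group of 5 elements.
    groups = [a[i:i + 5] for i in range(0, len(a), 5)]
    medians = [sorted(g)[len(g) // 2] for g in groups]
    # The median of the medians, found recursively, becomes the pivot.
    pivot = select(medians, (len(medians) + 1) // 2)
    lower = [x for x in a if x < pivot]
    upper = [x for x in a if x > pivot]
    if k <= len(lower):
        return select(lower, k)
    elif k == len(lower) + 1:
        return pivot
    else:
        return select(upper, k - len(lower) - 1)

print(select([6, 10, 13, 5, 8, 3, 2, 11], 7))  # prints 11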

You're welcome to watch lecture six:

Lecture six notes:

MIT Algorithms Lecture 6 Notes Thumbnail. Page 1 of 2.
Lecture 6, page 1 of 2.

MIT Algorithms Lecture 6 Notes Thumbnail. Page 2 of 2.
Lecture 6, page 2 of 2.

Topics covered in lecture six:

  • [00:30] The problem of order statistics.
  • [01:50] Naive algorithm for finding k-th smallest element.
  • [04:30] Randomized divide and conquer algorithm.
  • [11:20] Example of algorithm run on array (6, 10, 13, 5, 8, 3, 2, 11) to find 7-th smallest element.
  • [15:30] Intuitive analysis of Randomized-Select algorithm.
  • [20:30] Mathematically rigorous analysis of expected running time.
  • [43:55] Worst-case linear time order statistics algorithm.

Have fun! The next post will be all about hashing!

PS. This course is taught from the CLRS book (also called "Introduction to Algorithms").

This article is part of the article series "MIT Introduction to Algorithms."

This is the third post in an article series about MIT's lecture course "Introduction to Algorithms." In this post I will review lectures four and five, which are on the topic of sorting.

The previous post covered a lecture on "Divide and Conquer" algorithm design technique and its applications.

Lecture four is devoted entirely to a single sorting algorithm which uses this technique. The algorithm I am talking about is the "Quicksort" algorithm. The quicksort algorithm was invented by Charles Hoare in 1962 and it is the most widely used sorting algorithm in practice.

I wrote about quicksort before in "Three Beautiful Quicksorts" post, where its running time was analyzed experimentally. This lecture does it theoretically.

Lecture five talks about theoretical running time limits of sorting using comparisons and then discusses two linear time sorting algorithms -- Counting sort and Radix sort.

Lecture 4: Quicksort

The lecture starts by giving the divide and conquer description of the algorithm:

  • Divide: Partition the array to be sorted into two subarrays around pivot x, such that elements in the lower subarray <= x, and elements in the upper subarray >= x.
  • Conquer: Recursively sort lower and upper subarrays using quicksort.
  • Combine: Since the subarrays are sorted in place, no work is needed to combine them. The entire array is now sorted!

The main algorithm can be written in a few lines of code:

Quicksort(A, p, r) // sorts list A[p..r]
  if p < r
    q = Partition(A, p, r)
    Quicksort(A, p, q-1)
    Quicksort(A, q+1, r)

The key in this algorithm is the Partition subroutine:

Partition(A, p, q)
  // partitions elements in array A[p..q] in-place around element x = A[p],
  // so that A[p..i] <= x and A[i] == x and A[i+1..q] >= x
  x = A[p]
  i = p
  for j = p+1 to q
    if A[j] <= x
      i = i+1
      swap(A[i], A[j])
  swap(A[p], A[i])
  return i
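
If you want to run these two routines, here is a straight Python translation (a sketch with 0-based indices, sorting the list in place):

def partition(a, p, q):
    # Partition a[p..q] in place around the pivot x = a[p]; return the pivot's final index.
    x = a[p]
    i = p
    for j in range(p + 1, q + 1):
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[p], a[i] = a[i], a[p]
    return i

def quicksort(a, p, q):
    # Sort a[p..q] in place.
    if p < q:
        r = partition(a, p, q)
        quicksort(a, p, r - 1)
        quicksort(a, r + 1, q)

nums = [6, 10, 13, 5, 8, 3, 2, 11]  # the lecture's example array
quicksort(nums, 0, len(nums) - 1)
print(nums)  # [2, 3, 5, 6, 8, 10, 11, 13]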

The lecture then proceeds with the analysis of Quicksort's worst-case running time. It is concluded that if the array to be sorted is already sorted or reverse sorted, the running time is O(n²), which is no better than Insertion sort (seen in lecture one)! What happens in the worst case is that all the elements get partitioned to one side of the chosen pivot. For example, given the array (1, 2, 3, 4), the partition algorithm chooses 1 as the pivot, all the elements stay where they were, and no real partitioning happens. To overcome this problem, the Randomized Quicksort algorithm is introduced.

The main idea of randomized quicksort lies in the Partition subroutine. Instead of partitioning around the first element, it partitions around a random element of the array!

Nice things about randomized quicksort are:

  • Its running time is independent of initial element order.
  • No assumptions need to be made about the statistical distribution of input.
  • No specific input elicits the worst case behavior.
  • The worst case is determined only by the output of a random-number generator.

Randomized-Partition(A, p, q)
  swap(A[p], A[rand(p,q)]) // the only thing changed in the original Partition subroutine!
  x = A[p]
  i = p
  for j = p+1 to q
    if A[j] <= x
      i = i+1
      swap(A[i], A[j])
  swap(A[p], A[i])
  return i

The rest of the lecture is very math-heavy and uses indicator random variables to conclude that the expected running time of randomized quicksort is O(n·lg(n)).

In practice quicksort is often two to three times faster than merge sort.

Please see the Three Beautiful Quicksorts post for more information about an industrial-strength version of quicksort.

You're welcome to watch lecture four:

Topics covered in lecture four:

  • [00:35] Introduction to quicksort.
  • [02:30] Divide and conquer approach to quicksort.
  • [05:20] Key idea - linear time (Θ(n)) partitioning subroutine.
  • [07:50] Structure of partitioning algorithm.
  • [11:40] Example of partition algorithm run on array (6, 10, 13, 5, 8, 3, 2, 11).
  • [16:00] Pseudocode of quicksort algorithm.
  • [19:00] Analysis of quicksort algorithm.
  • [20:25] Worst case analysis of quicksort.
  • [24:15] Recursion tree for worst case.
  • [28:55] Best case analysis of quicksort, if partition splits elements in half.
  • [28:55] Analysis of quicksort, if partition splits elements in a proportion 1/10 : 9/10.
  • [33:33] Recursion tree of this analysis.
  • [04:30] Analysis of quicksort, if partition alternates between lucky, unlucky, lucky, unlucky, ...
  • [46:50] Randomized quicksort.
  • [51:10] Analysis of randomized quicksort.

Lecture four notes:

MIT Algorithms Lecture 4 Notes Thumbnail. Page 1 of 2.
Lecture 4, page 1 of 2.

MIT Algorithms Lecture 4 Notes Thumbnail. Page 2 of 2.
Lecture 4, page 2 of 2.

Lecture 5: Sorting Lower Bounds and Linear Sorting

The lecture starts with a question -- How fast can we sort? Erik Demaine (professor) answers and says that it depends on the model of what you can do with the elements.

The previous lectures introduced several algorithms that can sort n numbers in O(n·lg(n)) time. Merge sort achieves this upper bound in the worst case; quicksort achieves it on average. These algorithms share an interesting property: the sorted order they determine is based only on comparisons between the input elements. Such sorting algorithms are called Comparison Sorts, and comparison sorting is one model of sorting.

It turns out that any comparison sort algorithm can be translated into something that is called a Decision Tree. A decision tree is a full binary tree that represents the comparisons between elements that are performed by a particular sorting algorithm operating on an input of a given size.

Erik uses decision trees to derive the lower bound for the running time of comparison based sorting algorithms. The result is that any comparison sort needs Ω(n·lg(n)) comparisons in the worst case, so no comparison-based sort can do asymptotically better than merge sort.
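
The argument fits in a couple of lines: a decision tree that correctly sorts n distinct elements must have at least n! leaves (one for every possible output permutation), and a binary tree of height h has at most 2^h leaves. Therefore 2^h >= n!, so h >= lg(n!) = Ω(n·lg(n)) by Stirling's approximation, and the height of the tree is exactly the worst-case number of comparisons.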

The lecture then bursts outside of the comparison model and looks at sorting in linear time, using no comparisons at all.

The first linear time algorithm covered in the lecture is Counting Sort. The basic idea of counting sort is to determine, for each input element x, the number of elements less than x. This information can be used to place element x directly into its position in the output array. For example, if there are 17 elements less than x, then x belongs in output position 18.
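
Here is a short Python sketch of counting sort (my own illustration, assuming the keys are integers in the range 0..k; the lecture states it for the range 1..k, but the idea is identical), run on the array from the lecture's example:

def counting_sort(a, k):
    # Stable sort of a list of integers in the range 0..k, in O(n + k) time.
    count = [0] * (k + 1)
    for x in a:
        count[x] += 1              # count occurrences of each key
    for i in range(1, k + 1):
        count[i] += count[i - 1]   # count[i] = number of elements <= i
    out = [0] * len(a)
    for x in reversed(a):          # walk backwards to keep equal keys in order (stability)
        count[x] -= 1
        out[count[x]] = x
    return out

print(counting_sort([4, 1, 3, 4, 3], 4))  # [1, 3, 3, 4, 4]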

The second linear time algorithm is Radix Sort, which sorts a list of numbers digit by digit, starting from the least significant digit and using a stable sort (such as counting sort) on each digit position.

Erik ends the lecture by analyzing correctness and running time of radix sort.
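
As an illustration (again my own sketch, not the lecture's code), here is a least-significant-digit radix sort that stably buckets the numbers by one decimal digit at a time, run on the lecture's example array:

def radix_sort(a, digits):
    # Sort non-negative integers with at most `digits` decimal digits,
    # one digit position at a time, least significant digit first.
    for d in range(digits):
        buckets = [[] for _ in range(10)]
        for x in a:
            buckets[(x // 10 ** d) % 10].append(x)   # stable: keeps the previous pass's order
        a = [x for bucket in buckets for x in bucket]
    return a

print(radix_sort([329, 457, 657, 839, 546, 720, 355], 3))
# [329, 355, 457, 546, 657, 720, 839]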

Video of lecture five:

Topics covered in lecture five:

  • [00:30] How fast can we sort?
  • [02:27] Review of running times of quicksort, heapsort, merge sort and insertion sort.
  • [04:50] Comparison sorting (model for sorting).
  • [06:50] Decision trees.
  • [09:35] General description of decision trees.
  • [14:25] Decision trees model comparison sorts.
  • [20:00] Lower bound on decision tree sorting.
  • [31:35] Sorting in linear time.
  • [32:30] Counting sort.
  • [38:05] Example of counting sort run on an array (4, 1, 3, 4, 3)
  • [50:00] Radix sort.
  • [56:10] Example of radix sort run on array (329, 457, 657, 839, 546, 720, 355).
  • [01:00:30] Correctness of radix sort.
  • [01:04:25] Analysis of radix sort.

Lecture five notes:

MIT Algorithms Lecture 5 Notes Thumbnail. Page 1 of 2.
Lecture 5, page 1 of 2.

MIT Algorithms Lecture 5 Notes Thumbnail. Page 2 of 2.
Lecture 5, page 2 of 2.

Have fun sorting! The next post will be about order statistics (given an array find n-th smallest element)!

PS. This course is taught from the CLRS book (also called "Introduction to Algorithms").

This article is part of the article series "Musical Geek Friday."

This week on Musical Geek Friday, a song about the lovely cryptographic couple Alice and Bob!

The Alice and Bob song is written by a guy who calls himself MC Plus+. His real name is Armand Navabi and he's a computer science Ph.D. student at Purdue University. His moniker is a pun on the name C++.

The song is about three different topics in cryptography.

First, the song talks about the various archetypes used in cryptography. They are Alice, Bob, Trent, Mallory and Eve. Alice and Bob are the usual persons trying to communicate securely. They both trust Trent, who helps them communicate. Meanwhile Eve tries to eavesdrop on their communication and Mallory tries to modify their messages!

Then it covers cryptographic algorithms -- the insecure 56-bit Data Encryption Standard (DES) algorithm, the secure Advanced Encryption Standard (AES) algorithm, and the Blowfish algorithm.

Lastly, the song gets to the problem of factoring large numbers, for which no polynomial-time algorithm is known.

This song is similar to the first song I ever posted on Musical Geek Friday -- Crypto.

Here it is! The Alice and Bob song:

[audio:http://www.catonmat.net/download/mc_plus_plus-alice_and_bob.mp3]

Download this song: alice and bob.mp3 (musical geek friday #14)
Downloaded: 21149 times

Download lyrics: alice and bob lyrics (musical geek friday #14)
Downloaded: 3383 times

Alice and Bob lyrics:

Alice is sending her message to Bob
Protecting that transmission is Crypto's job
Without the help of our good friend Trent,
It's hard to get that secret message sent
Work tries to deposit the check of your salary
But with no crypto, it'll be changed by Mallory
You think no one will see what it is, you believe?
But you should never forget, there's always an Eve...

[Chorus]
'Cause I'm encrypting s**t like every single day
Sending data across the network in a safe way
Protecting messages to make my pay
If you hack me, you're guilty under DMCA

DES is wrong if you listen to NIST
Double DES ain't no better man, that got dissed
Twofish for AES, that was Schneier's wish
Like a shot from the key, Rijndael made the swish
But Blowfish is still the fastest in the land
And Bruce used his fame to make a few grand
Use ECB, and I'll crack your ciphertext
Try CFB mode to keep everyone perplexed

[Chorus]
'Cause I'm encrypting s**t like every single day
Sending data across the network in a safe way
Protecting messages to make my pay
If you hack me, you're guilty under DMCA

Random numbers ain't easy to produce...
Do it wrong, and your key I'll deduce
RSA, only public cipher in the game
Creating it helped give Rivest his fame
If we could factor large composites in poly time,
We'd have enough money to not have to rhyme
Digesting messages with a hashing function
Using SHA1 or else it won't cause disfunction

[Chorus]
'Cause I'm encrypting s**t like every single day
Sending data across the network in a safe way
Protecting messages to make my pay
If you hack me, you're guilty under DMCA

Password confirmed. Stand by...

Download "Alice and Bob" Song

Download this song: alice and bob.mp3 (musical geek friday #14)
Downloaded: 21149 times

Download lyrics: alice and bob lyrics (musical geek friday #14)
Downloaded: 3383 times

Click to listen:
[audio:http://www.catonmat.net/download/mc_plus_plus-alice_and_bob.mp3]

Have fun and until next geeky Friday! :)

This article is part of the article series "MIT Introduction to Algorithms."

This is the second post in an article series about MIT's lecture course "Introduction to Algorithms."

I changed my mind a little on how I will be posting the reviews of lectures. In the first post I said that I will be posting reviews of two or three lectures at a time, but I decided to group them by topic instead. This is more logical. Some lectures stand out alone. Lecture 3 is like that.

Lecture three is all about a powerful technique in algorithm design called "Divide and Conquer." I won't be able to describe it better than the CLRS book, which says: "Divide and Conquer algorithms break the problem into several subproblems that are similar to the original problem but smaller in size, solve the subproblems recursively, and then combine these solutions to create a solution to the original problem."

There are three steps to applying the Divide and Conquer technique in practice:

  • Divide the problem into one or more subproblems.
  • Conquer subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.
  • Combine the solutions to the subproblems into the solution for the original problem.

An example of Divide and Conquer is the Merge Sort algorithm covered in lecture one:

  • Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each.
  • Conquer: Sort the two subsequences recursively using Merge Sort.
  • Combine: Merge the two sorted subsequences to produce the sorted answer.

The recursion stops when the sequence to be sorted has length 1, in which case there is nothing else to be done, since every sequence of length 1 is already in sorted order!

Lecture three presents various other algorithms which use the powerful Divide and Conquer technique.

It starts with the running time analysis of the Merge Sort algorithm and shows the general structure of recurrence equations generated by Divide and Conquer algorithms.

Then it moves on to the Binary Search algorithm, which finds an element in a sorted array in time proportional to the logarithm of the array's length, or, speaking in asymptotic notation, its worst-case running time is O(lg(n)).

A naive algorithm that just scanned through all the elements of the array would have a running time of O(n), which is much, much worse!
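
For completeness, here is a minimal iterative binary search in Python (my own sketch; it returns the index of x in the sorted list a, or -1 if x is not present):

def binary_search(a, x):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # compare against the middle element
        if a[mid] == x:
            return mid
        elif a[mid] < x:
            lo = mid + 1           # discard the lower half
        else:
            hi = mid - 1           # discard the upper half
    return -1

print(binary_search([2, 3, 5, 6, 8, 10, 11, 13], 8))  # 4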

The next algorithm presented solves the problem stated as follows: given a number x and an integer n >= 1, compute x raised to the power n.

A naive algorithm would loop from 1 to n, multiplying the result by x each time:

Naive-Power(x, n):
  result = 1
  for 1 to n:
    result = x * result
  return result

This leads to a running time of O(n).

A much more clever approach is to use an algorithm called Recursive Powering. The idea to calculate x^n is to express x^n as:

  • x^(n/2) · x^(n/2), if n is even, or
  • x^((n-1)/2) · x^((n-1)/2) · x, if n is odd.

Here is the pseudocode of this algorithm:

Recursive-Power(x, n):
  if n == 1
    return x
  if n is even
    y = Recursive-Power(x, n/2)
    return y*y
  else
    y = Recursive-Power(x, (n-1)/2)
    return y*y*x

Doing an asymptotic analysis of this algorithm leads to an O(lg(n)) running time, which is a great improvement over O(n).
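
The recurrence can be read straight off the pseudocode: each call makes one recursive call on a problem of half the size plus a constant number of multiplications, so T(n) = T(n/2) + Θ(1) = Θ(lg(n)).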

The lecture continues with four algorithms on how to compute Fibonacci Numbers! You should definitely watch that! Man, it's four algorithms! It's at 17:49 in the lecture!

I won't give the pseudocode here for these ones, but they are: the naive recursive algorithm, the bottom-up algorithm, naive recursive squaring and recursive squaring!
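
Since the last one is the most interesting, here is a small Python sketch of it -- my own illustration, not the lecture's code. It relies on the identity that the n-th power of the matrix [[1, 1], [1, 0]] equals [[F(n+1), F(n)], [F(n), F(n-1)]], and computes that power by recursive squaring in O(lg(n)) matrix multiplications:

def fib(n):
    # Compute the n-th Fibonacci number exactly via recursive squaring of a 2x2 matrix.
    def mat_mult(A, B):
        return [[A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]],
                [A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]]]
    def mat_pow(M, n):
        if n == 1:
            return M
        half = mat_pow(M, n // 2)
        sq = mat_mult(half, half)
        return sq if n % 2 == 0 else mat_mult(sq, M)
    if n == 0:
        return 0
    return mat_pow([[1, 1], [1, 0]], n)[0][1]

print([fib(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]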

The next two topics are Matrix Multiplication done the naive way, giving an O(n³) running time, and the improved Strassen's Algorithm, which reduces the number of multiplications and yields a running time of approximately O(n^2.81).
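
In recurrence form: the straightforward divide and conquer version does eight recursive multiplications of n/2 × n/2 blocks, giving T(n) = 8·T(n/2) + Θ(n²) = Θ(n³) -- no better than the naive method -- while Strassen's trick gets away with seven, giving T(n) = 7·T(n/2) + Θ(n²) = Θ(n^lg(7)) ≈ Θ(n^2.81).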

The last topic covered in this lecture is the VLSI (Very Large Scale Integration) layout problem: given a number of electronic gates and chips, position them on a circuit board so that the area they occupy is minimized.

Now that I have described the lecture in great detail, you're welcome to watch it:

Topics covered in lecture 3:

  • [01:25] Divide and conquer method.
  • [04:54] Example of Divide and conquer applied to Merge sort.
  • [06:45] Running time analysis of Merge sort.
  • [09:55] Binary search algorithm.
  • [12:00] Analysis of binary search running time.
  • [13:15] Algorithm for powering a number (recursive powering).
  • [17:49] Algorithms for computing Fibonacci numbers (FBs).
  • [19:04] Naive recursive algorithm (exponential time) for computing FBs.
  • [22:45] Bottom-up algorithm for computing FBs.
  • [24:25] Naive recursive squaring algorithm for FBs (doesn't work because of floating point rounding errors).
  • [27:00] Recursive squaring algorithm for FBs.
  • [29:54] Proof by induction of recursive squaring FB algorithm
  • [34:00] Matrix multiplication algorithms.
  • [35:45] Naive (standard) algorithm for multiplying matrices.
  • [37:35] Divide and conquer algorithm for multiplying matrices.
  • [40:00] Running time analysis of divide and conquer matrix multiplication algorithm.
  • [43:09] Strassen's matrix multiplication algorithm.
  • [50:00] Analysis of Strassen's algorithm.
  • [55:25] VLSI layout problem.
  • [57:50] Naive embedding algorithm for VLSI.
  • [01:05:10] Divide and conquer algorithm for laying out VLSI.

Lecture 3 notes:

MIT Algorithms Lecture 3 Notes Thumbnail. Page 1 of 2.
Lecture 3, page 1 of 2.

MIT Algorithms Lecture 3 Notes Thumbnail. Page 2 of 2.
Lecture 3, page 2 of 2.

Have fun dividing and conquering! The next post will be about two lectures on sorting algorithms!

PS. The lectures are taught from the CLRS book (also called "Introduction to Algorithms").

This article is part of the article series "MIT Introduction to Algorithms."

I just finished watching the last lecture of MIT's "Introduction to Algorithms" course. Having a great passion for all aspects of computing, I decided to share everything I learned with you, my dear readers! This is the first post in an article series about this course.

As I wrote earlier, I am very serious about watching video lectures. If they are math-intensive, I usually take notes as if I were in the classroom. Lectures in this course were exactly like that -- logarithms, big-o's, thetas, expectations, and all the other math guys fighting with each other on the blackboards.

There are 23 video lectures in total, each around 1 hour and 20 minutes long. I will be posting about 2 - 3 lectures at a time, which will result in approximately 10 blog posts. Each post will contain an annotated lecture, along with an embedded video of the lecture and a time-stamped list of the topics it covers. I will also post the notes I took myself as I watched the lectures (I actually just bought a scanner (Canon CanoScan 4400F) for this purpose!).

Understanding and designing effective algorithms is a very important skill for a top-notch programmer. You can still do well without knowing much about algorithms, but knowing them makes you superior. There are two kinds of people: those who can design effective algorithms and those who can't. ;)

Let's start with Lecture 1 of this course.

Lecture 1: Analysis of Algorithms

The first lecture is given by the famous professor Charles E. Leiserson. He is the L in CLRS. If that doesn't ring a bell -- CLRS is one of the most popular books on algorithms!

He starts the lecture by explaining what this course and algorithms will be all about. He says that this course will be about "Analysis of Algorithms" and states: "Analysis of algorithms is the theoretical study of computer program performance and resource usage".

Designing great software is not just about performance. Charles presents a list of 12 things that can be more important than performance. Just for comparison with the algorithms guru, what do you think can be more important than performance?

Here is the list of things more important than performance that Charles presented:

  • modularity,
  • correctness,
  • maintainability,
  • security,
  • functionality,
  • robustness,
  • user-friendliness,
  • programmer's time,
  • simplicity,
  • extensibility,
  • reliability, and
  • scalability.

He also asks, "Why study algorithms and performance at all?" He and the students answer:

  • Sometimes performance is correlated with user-friendliness.
  • Performance draws the line between the feasible and the infeasible.
  • Algorithms give us a language for talking about program behavior.
  • Performance can be used to "pay" for other things, such as security, features and user-friendliness.

The lecture continues with the definition of the Sorting Problem - given a sequence (a₁, a₂, ..., aₙ) of numbers, permute them in such a way that a₁ <= a₂ <= ... <= aₙ.

There are various algorithms that solve this problem. Two of them are presented in this lecture -- Insertion Sort and Merge Sort.

The running time of these algorithms is analyzed by introducing Asymptotic Analysis and Recursion Trees.
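
To make the first of the two concrete, here is insertion sort in Python (a minimal sketch of the textbook algorithm, not code taken from the lecture):

def insertion_sort(a):
    # Sort the list a in place: worst case Θ(n²), best case Θ(n) on already-sorted input.
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:   # shift larger elements one slot to the right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                 # drop the key into the gap
    return a

print(insertion_sort([8, 2, 4, 9, 3, 6]))  # [2, 3, 4, 6, 8, 9]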

Here is the whole lecture:

Topics covered in lecture 1:

  • [17:15] Main topic of the course - Analysis of algorithms.
  • [19:00] What's more important than performance?
  • [22:03] Why study algorithms and performance?
  • [27:45] The sorting problem.
  • [29:30] Insertion sort algorithm
  • [34:30] Example of Insertion sort.
  • [36:25] Running time of algorithms.
  • [39:39] Definition of worst-case, average-case and best-case types of analysis.
  • [46:50] How to analyze the Insertion sort's worst-case running time?
  • [49:28] BIG IDEA - Asymptotic analysis.
  • [50:49] Asymptotic notation - theta notation.
  • [57:14] Insertion sort analysis.
  • [01:02:42] Is Insertion sort fast?
  • [01:03:40] Merge sort algorithm.
  • [01:05:25] Example of Merge subroutine of Merge sort.
  • [01:08:15] Analysis of Merge sort's running time.
  • [01:10:55] Recurrence equation for Merge sort.
  • [01:13:15] Recursion tree solution of the Merge sort's recurrence equation.

Lecture 1 notes:

MIT Algorithms Lecture 1 Notes Thumbnail. Page 1 of 2.
Lecture 1, page 1 of 2.

MIT Algorithms Lecture 1 Notes Thumbnail. Page 2 of 2.
Lecture 1, page 2 of 2.

Lecture 2: Asymptotic Notation

Lecture 2, on the other hand, is given by the genius professor Erik Demaine. He is the youngest professor in the history of MIT! He became a professor at MIT at 20! Wow!

This lecture is all about the mathematical notation (Asymptotic Notation) used in the analysis of algorithms: big-O notation, big-omega notation, theta notation, and small-o and small-omega notation.
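
Spelled out, the standard set definition of big-O (the one covered in the lecture) is: O(g(n)) is the set of all functions f(n) for which there exist positive constants c and n₀ such that 0 <= f(n) <= c·g(n) for all n >= n₀. Big-omega is the same definition with the inequality flipped to f(n) >= c·g(n), and Θ(g(n)) is the set of functions that belong to both O(g(n)) and Ω(g(n)).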

The second half of the lecture is devoted to solving recurrence equations. Three methods are presented:

  • Substitution method,
  • Recursion-tree method, and
  • The Master method (stated below).
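
For reference, the Master method applies to recurrences of the form T(n) = a·T(n/b) + f(n), with a >= 1 and b > 1, and compares f(n) against n^log_b(a). Case 2 is given here in the slightly more general form with an extra lg^k factor, which is the form used in the lecture; the CLRS textbook states it with k = 0:

  • Case 1: if f(n) = O(n^(log_b(a) - ε)) for some ε > 0, then T(n) = Θ(n^log_b(a)).
  • Case 2: if f(n) = Θ(n^log_b(a) · lg^k(n)) for some k >= 0, then T(n) = Θ(n^log_b(a) · lg^(k+1)(n)).
  • Case 3: if f(n) = Ω(n^(log_b(a) + ε)) for some ε > 0, and a·f(n/b) <= c·f(n) for some constant c < 1, then T(n) = Θ(f(n)).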

Here is the whole lecture:

Topics covered in lecture 2:

  • [01:25] Big-o (upper bounds) notation.
  • [03:58] Set definition of big-o notation.
  • [05:25] The meaning of O(h(n)) in notation f(n) = g(n) + O(h(n)).
  • [10:20] Big-omega (lower bounds) notation.
  • [11:40] Analogies of O, Ω and Θ to comparison operations of real numbers.
  • [12:28] Theta (tight bounds) notation.
  • [13:40] Small-o and small-omega notation.
  • [17:03] Solving recurrences: substitution method.
  • [37:56] Recursion-tree method.
  • [49:00] The Master method.
  • [01:02:00] Proof sketch of the Master method.

Lecture 2 notes:

MIT Algorithms Lecture 2 Notes Thumbnail. Page 1 of 2.
Lecture 2, page 1 of 2.

MIT Algorithms Lecture 2 Notes Thumbnail. Page 2 of 2.
Lecture 2, page 2 of 2.

MIT Algorithms Lecture 2 Notes Thumbnail. Master's Theorem.
Lecture 2. Sketch of Master's theorem proof.

Have fun absorbing all this information! Until next post!

PS. It turned out that the lectures were not available anywhere except MIT's OCW website. I found that they were released under a CC license, which allowed me to upload them to Google Video so that I could embed them in these posts!

PPS. The lectures are taught from the CLRS book (also called "Introduction to Algorithms").