MIT Algorithms

This is the second post in an article series about MIT's lecture course "Introduction to Algorithms."

I have changed my mind a little on how I will be posting the lecture reviews. In the first post I said that I would review two or three lectures at a time, but I have decided to group them by topic instead, which is more logical. Some lectures stand alone, and Lecture 3 is one of them.

Lecture three is all about a powerful technique in algorithm design called Divide and Conquer. I won't be able to describe it better than the CLRS book, which says: "Divide and Conquer algorithms break the problem into several subproblems that are similar to the original problem but smaller in size, solve the subproblems recursively, and then combine these solutions to create a solution to the original problem."

There are three steps to applying the Divide and Conquer technique in practice:

  • Divide the problem into one or more subproblems.
  • Conquer the subproblems by solving them recursively. If the subproblems are small enough, however, just solve them in a straightforward manner.
  • Combine the solutions to the subproblems into the solution for the original problem.

An example of Divide and Conquer is the Merge Sort algorithm covered in lecture one:

  • Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each.
  • Conquer: Sort the two subsequences recursively using Merge Sort.
  • Combine: Merge the two sorted subsequences to produce the sorted answer.

The recursion stops when the sequence to be sorted has length 1, in which case there is nothing else to be done, since every sequence of length 1 is already in sorted order.
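
To make these steps concrete, here is a minimal Python sketch of Merge Sort (my own illustration, not the lecture's pseudo-code):

def merge_sort(seq):
  # Base case: a sequence of length 0 or 1 is already sorted.
  if len(seq) <= 1:
    return seq
  # Divide: split the sequence into two halves of about n/2 elements.
  mid = len(seq) // 2
  # Conquer: sort each half recursively.
  left = merge_sort(seq[:mid])
  right = merge_sort(seq[mid:])
  # Combine: merge the two sorted halves into one sorted sequence.
  merged = []
  i = j = 0
  while i < len(left) and j < len(right):
    if left[i] <= right[j]:
      merged.append(left[i])
      i += 1
    else:
      merged.append(right[j])
      j += 1
  merged.extend(left[i:])
  merged.extend(right[j:])
  return merged

For example, merge_sort([5, 2, 4, 7, 1, 3]) returns [1, 2, 3, 4, 5, 7].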

Lecture three presents various other algorithms which use the powerful Divide and Conquer technique.

It starts with the running time analysis of the Merge Sort algorithm and shows the general structure of the recurrence equations generated by Divide and Conquer algorithms.
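
In general, if an algorithm divides a problem into a subproblems, each of size n/b, and spends f(n) time dividing and combining, its running time is described by the recurrence T(n) = a·T(n/b) + f(n). For Merge Sort, a = b = 2 and f(n) = Θ(n), and the recurrence solves to Θ(n·lg(n)).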

Then it moves on to the Binary Search algorithm, which finds an element in a sorted array in time proportional to the logarithm of the array's length. In asymptotic notation, its worst-case running time is O(lg(n)).

A naive algorithm that just scans through all the elements of the array would have a time complexity of O(n), which is much, much worse.
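
Here is a minimal iterative Python sketch of binary search (my own illustration): at each step it compares the target against the middle element and discards half of the remaining range.

def binary_search(sorted_array, target):
  lo, hi = 0, len(sorted_array) - 1
  while lo <= hi:
    mid = (lo + hi) // 2            # middle of the remaining range
    if sorted_array[mid] == target:
      return mid                    # found it, return its index
    elif sorted_array[mid] < target:
      lo = mid + 1                  # target can only be in the right half
    else:
      hi = mid - 1                  # target can only be in the left half
  return -1                         # target is not in the array

Since the range halves on every iteration, at most about lg(n) elements are ever examined.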

The next algorithm presented solves the following problem: given a number x and an integer n, compute x to the power n.

A naive algorithm would loop from 1 to n, multiplying x into an accumulated result:

Naive-Power(x, n):
  result = 1
  for 1 to n:
    result = result * x
  return result

This leads to a running time of O(n).

A much more clever approach is to use an algorithm called Recursive Powering. The idea is to calculate x^n by expressing it as:

  • x^(n/2)·x^(n/2) if n is even, or
  • x^((n-1)/2)·x^((n-1)/2)·x if n is odd.

Here is the pseudo-code of this algorithm:

Recursive-Power(x, n):
  if n == 1
    return x
  if n is even
    y = Recursive-Power(x, n/2)
    return y*y
  else
    y = Recursive-Power(x, (n-1)/2)
    return y*y*x

Doing asymptotic analysis of this algorithm gives the recurrence T(n) = T(n/2) + Θ(1), since each call makes a single recursive call on a problem of half the size and then does a constant number of multiplications. This solves to an O(lg(n)) running time, which is a great improvement over O(n).
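
For reference, here is the same algorithm as runnable Python (a direct translation of the pseudo-code above, with an added base case for n = 0, which the lecture's version leaves out):

def recursive_power(x, n):
  if n == 0:
    return 1                             # x^0 = 1 (my added base case)
  if n == 1:
    return x
  if n % 2 == 0:
    y = recursive_power(x, n // 2)
    return y * y                         # x^n = x^(n/2) · x^(n/2)
  else:
    y = recursive_power(x, (n - 1) // 2)
    return y * y * x                     # x^n = x^((n-1)/2) · x^((n-1)/2) · x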

The lecture continues with four algorithms for computing Fibonacci numbers. You should definitely watch that part. Man, it's four algorithms! It starts at 17:49 in the lecture.

I won't give the pseudo-code for them here, but the four are: the naive recursive algorithm, the bottom-up algorithm, naive recursive squaring, and recursive squaring.
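
To give a taste of one of them, here is a minimal Python sketch of the bottom-up algorithm, which computes Fibonacci numbers in O(n) time (my own illustration):

def fibonacci_bottom_up(n):
  # Build F(n) iteratively, keeping only the last two values:
  # F(0) = 0, F(1) = 1, F(k) = F(k-1) + F(k-2).
  a, b = 0, 1
  for _ in range(n):
    a, b = b, a + b
  return a

The recursive squaring algorithm improves this to O(lg(n)) by raising the matrix [[1, 1], [1, 0]] to the n-th power using the powering technique described above.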

The next two topics are Matrix Multiplication done the naive way, giving O(n^3) running time, and the improved Strassen's Algorithm, which reduces the number of multiplications and yields a running time of approximately O(n^2.81).
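
For comparison, here is a minimal Python sketch of the naive algorithm (my own illustration): three nested loops over n-by-n matrices, which is where the O(n^3) comes from.

def naive_matrix_multiply(A, B):
  # Multiply two n-by-n matrices given as lists of rows.
  n = len(A)
  C = [[0] * n for _ in range(n)]
  for i in range(n):                # for each row of A
    for j in range(n):              # for each column of B
      for k in range(n):            # n multiplications per entry of C
        C[i][j] += A[i][k] * B[k][j]
  return C

Strassen's insight is that two 2x2 block matrices can be multiplied with 7 multiplications instead of 8, which, applied recursively, gives the running time O(n^lg(7)) ≈ O(n^2.81).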

The last topic covered in this lecture is the VLSI (Very Large Scale Integration) layout problem: given a number of various electronic gates and chips, how to position them on a circuit board to minimize the area they occupy.

Now that I have described the lecture in great detail, you're welcome to watch it:

Topics covered in lecture 3:

  • [01:25] Divide and conquer method.
  • [04:54] Example of Divide and conquer applied to Merge sort.
  • [06:45] Running time analysis of Merge sort.
  • [09:55] Binary search algorithm.
  • [12:00] Analysis of binary search running time.
  • [13:15] Algorithm for powering a number (recursive powering).
  • [17:49] Algorithms for computing Fibonacci numbers (FBs).
  • [19:04] Naive recursive algorithm (exponential time) for computing FBs.
  • [22:45] Bottom-up algorithm for computing FBs.
  • [24:25] Naive recursive squaring algorithm for FBs (doesn't work because of floating point rounding errors).
  • [27:00] Recursive squaring algorithm for FBs.
  • [29:54] Proof by induction of recursive squaring FB algorithm.
  • [34:00] Matrix multiplication algorithms.
  • [35:45] Naive (standard) algorithm for multiplying matrices.
  • [37:35] Divide and conquer algorithm for multiplying matrices.
  • [40:00] Running time analysis of divide and conquer matrix multiplication algorithm.
  • [43:09] Strassen's matrix multiplication algorithm.
  • [50:00] Analysis of Strassen's algorithm.
  • [55:25] VLSI layout problem.
  • [57:50] Naive embedding algorithm for VLSI.
  • [01:05:10] Divide and conquer algorithm for laying out VLSI.

Lecture 3 notes:

Lecture 3, page 1 of 2. Lecture 3, page 2 of 2.

Have fun dividing and conquering! The next post will be about two lectures on sorting algorithms.