This article is part of the article series "MIT Introduction to Algorithms."
<- previous article next article ->
MIT Algorithms

This is the second post in an article series about MIT's lecture course "Introduction to Algorithms."

I changed my mind a little on how I will be posting the reviews of lectures. In the first post I said that I would be posting reviews of two or three lectures at a time, but I decided to group them by topic instead. This is more logical. Some lectures stand alone, and lecture 3 is one of them.

Lecture three is all about a powerful technique in algorithm design called "Divide and Conquer." I won't be able to describe it better than the CLRS book, which says: "Divide and Conquer algorithms break the problem into several subproblems that are similar to the original problem but smaller in size, solve the subproblems recursively, and then combine these solutions to create a solution to the original problem."

There are three steps to applying the Divide and Conquer technique in practice:

  • Divide the problem into one or more subproblems.
  • Conquer subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.
  • Combine the solutions to the subproblems into the solution for the original problem.

An example of Divide and Conquer is the Merge Sort algorithm covered in lecture one:

  • Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each.
  • Conquer: Sort the two subsequences recursively using Merge Sort.
  • Combine: Merge the two sorted subsequences to produce the sorted answer.

The recursion stops when the sequence to be sorted has length 1, in which case there is nothing else to be done, since every sequence of length 1 is already in sorted order!
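To make the three steps concrete, here is a short Python sketch of Merge Sort (my own code, not the lecture's; the lecture uses pseudo-code):

```python
def merge(left, right):
    """Combine: merge two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])   # one of these two is non-empty at most
    result.extend(right[j:])
    return result

def merge_sort(seq):
    """Sort a list by divide and conquer."""
    if len(seq) <= 1:                  # base case: already sorted
        return list(seq)
    mid = len(seq) // 2
    left = merge_sort(seq[:mid])       # divide + conquer left half
    right = merge_sort(seq[mid:])      # divide + conquer right half
    return merge(left, right)          # combine

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```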

Lecture three presents various other algorithms which use the powerful Divide and Conquer technique.

It starts with the running time analysis of the Merge Sort algorithm and shows the general structure of recurrence equations generated by Divide and Conquer algorithms.

Then it moves on to the Binary Search algorithm, which finds an element in a sorted array in time proportional to the logarithm of the array's length; speaking in asymptotic notation, its worst-case running time is O(lg(n)).

A naive algorithm that just scanned through all elements of an array would have a time complexity of O(n), which is much, much worse!
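Here is a small Python sketch of binary search (my own code, assuming the input list is already sorted):

```python
def binary_search(a, x):
    """Return an index of x in sorted list a, or -1 if x is absent. O(lg n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == x:
            return mid
        elif a[mid] < x:
            lo = mid + 1      # x can only be in the right half
        else:
            hi = mid - 1      # x can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```

Each iteration halves the search range, which is exactly where the O(lg(n)) bound comes from.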

The next algorithm presented solves the following problem: given a number x and an integer n, compute x to the power n.

A naive algorithm would loop from 1 to n, multiplying x into the result:

Naive-Power(x, n):
  result = 1
  for 1 to n:
    result = x * result
  return result

This leads to a running time of O(n).

A much more clever approach is to use an algorithm called Recursive Powering. The idea is to express x^n as:

  • x^(n/2) · x^(n/2), if n is even, or
  • x^((n-1)/2) · x^((n-1)/2) · x, if n is odd.

Here is the pseudo-code of this algorithm:

Recursive-Power(x, n):
  if n == 1
    return x
  if n is even
    y = Recursive-Power(x, n/2)
    return y*y
  else
    y = Recursive-Power(x, (n-1)/2)
    return y*y*x

Doing asymptotic analysis of this algorithm leads to O(lg(n)) running time, which is a great improvement over O(n).
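The pseudo-code translates directly into Python (my own translation):

```python
def recursive_power(x, n):
    """Compute x**n with O(lg n) multiplications. Assumes n >= 1."""
    if n == 1:
        return x
    if n % 2 == 0:
        y = recursive_power(x, n // 2)        # one recursive call, reused twice
        return y * y
    else:
        y = recursive_power(x, (n - 1) // 2)
        return y * y * x

print(recursive_power(2, 10))  # 1024
print(recursive_power(3, 5))   # 243
```

Note that the key to the speedup is making the recursive call once and squaring the result; calling Recursive-Power twice would put the running time right back at O(n).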

The lecture continues with four algorithms for computing Fibonacci numbers! You should definitely watch that! Man, it's four algorithms! It's at 17:49 in the lecture!

I won't give the pseudo-code here for these ones, but they are: the naive recursive algorithm, the bottom-up algorithm, naive recursive squaring, and recursive squaring!
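As a teaser, here are Python sketches of two of the four (my own code): the bottom-up algorithm, and recursive squaring via powers of the matrix [[1, 1], [1, 0]], whose n-th power contains F(n) off the diagonal.

```python
def fib_bottom_up(n):
    """Bottom-up Fibonacci: O(n) time, O(1) space. F(0) = 0, F(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_matrix(n):
    """Recursive squaring: O(lg n) matrix multiplications."""
    def mat_mult(A, B):
        return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
                [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]
    def mat_pow(M, k):
        if k == 1:
            return M
        half = mat_pow(M, k // 2)      # recurse once, square the result
        sq = mat_mult(half, half)
        return mat_mult(sq, M) if k % 2 else sq
    if n == 0:
        return 0
    # [[1,1],[1,0]]**n == [[F(n+1), F(n)], [F(n), F(n-1)]]
    return mat_pow([[1, 1], [1, 0]], n)[0][1]

print(fib_bottom_up(10), fib_matrix(10))  # 55 55
```

Unlike the naive recursive squaring done with floating-point arithmetic (which the lecture shows failing due to rounding errors), the matrix version stays in exact integer arithmetic.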

The next two topics are Matrix Multiplication done the naive way, giving O(n^3) running time, and the improved Strassen's Algorithm, which reduces the number of multiplications, yielding a running time of approximately O(n^2.81).
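The heart of Strassen's algorithm is multiplying two 2x2 matrices with 7 multiplications instead of 8; the full algorithm applies the same seven formulas recursively to matrix blocks. Here is the base case in Python (my own sketch, not the lecture's code):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 products instead of 8."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Saving one multiplication per level of recursion is what turns the recurrence T(n) = 8T(n/2) + O(n^2) into T(n) = 7T(n/2) + O(n^2), giving O(n^lg 7) ≈ O(n^2.81).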

The last topic covered in this lecture is the VLSI (Very Large Scale Integration) layout problem: given a number of electronic gates and chips, how to position them on a circuit board to minimize the area they occupy.

Now that I have described the lecture in detail, you're welcome to watch it:

Topics covered in lecture 3:

  • [01:25] Divide and conquer method.
  • [04:54] Example of Divide and conquer applied to Merge sort.
  • [06:45] Running time analysis of Merge sort.
  • [09:55] Binary search algorithm.
  • [12:00] Analysis of binary search running time.
  • [13:15] Algorithm for powering a number (recursive powering).
  • [17:49] Algorithms for computing Fibonacci numbers (FBs).
  • [19:04] Naive recursive algorithm (exponential time) for computing FBs.
  • [22:45] Bottom-up algorithm for computing FBs.
  • [24:25] Naive recursive squaring algorithm for FBs (doesn't work because of floating point rounding errors).
  • [27:00] Recursive squaring algorithm for FBs.
  • [29:54] Proof by induction of the recursive squaring FB algorithm.
  • [34:00] Matrix multiplication algorithms.
  • [35:45] Naive (standard) algorithm for multiplying matrices.
  • [37:35] Divide and conquer algorithm for multiplying matrices.
  • [40:00] Running time analysis of divide and conquer matrix multiplication algorithm.
  • [43:09] Strassen's matrix multiplication algorithm.
  • [50:00] Analysis of Strassen's algorithm.
  • [55:25] VLSI layout problem.
  • [57:50] Naive embedding algorithm for VLSI.
  • [01:05:10] Divide and conquer algorithm for laying out VLSI.

Lecture 3 notes:

Lecture 3 notes, page 1 of 2.

Lecture 3 notes, page 2 of 2.

Have fun dividing and conquering! The next post will be about two lectures on sorting algorithms!

P.S. The lectures are taught from the CLRS book (also called "Introduction to Algorithms").


Comments

Madars Permalink
August 21, 2008, 22:53

Thanks! I'm looking forward to seeing more posts in this series :)

John Permalink
August 22, 2008, 00:55

Shouldn't you link to the original lectures/site as well?

August 22, 2008, 01:31

Nice summary. Your pseudo-code for the recursive power algorithm is not quite right. As written, it is still O(n). You don't get the benefit if you make each recursive call twice. For the O(log n) version, try:

Recursive-Power(x, n):
if n == 1
return x
if n is even
y = Recursive-Power(x, n/2)
return y*y
else
y = Recursive-Power(x, (n-1)/2)
return y*y*x

August 22, 2008, 01:45

John, I don't know. Perhaps I should. I linked to the original lectures from part one.

Steve, oops! What was I thinking! Fixed now. Many thanks!

harsha Permalink
August 22, 2008, 10:00

just wanted to say thanks for putting up these posts. i especially like the fact that you summarise the videos and put up scanned notes.

August 23, 2008, 05:20

Peteris, have a read of the Attribution-Noncommercial-Share Alike license at http://ocw.mit.edu/OcwWeb/web/terms/terms/index.htm

I think you need to make clear the license on the videos (e.g. by linking to that page). I'm not sure about commercial use though. You clearly have advertising that's designed to bring in money - I suppose it depends on whether the adverts and sponsorship on your site are for profit or not and more importantly whether they're incidental to these articles that include MIT's work or not

It's definitely worth a read though and it also encourages you to contact MIT if you have any questions

zhanglei Permalink
August 24, 2008, 04:16

I like the posts, especially the way you learn algorithms.

mave Permalink
November 09, 2008, 03:20

Hi, i saw this question on the other post on Order Statistics, can someone help answer this question

Let X[1..n] and Y[1..n] be two arrays, each containing n numbers already in sorted order.
Give an O(lg n) [in other words, an algorithm better than O(n)] time algorithm to find the nth smallest element of the 2n elements in arrays X and Y, using divide & conquer.

Thanks!!

Mahmood Permalink
March 28, 2011, 18:30

did you get the answer? If yes, tell me, because I have an interest in your question.
I know it is very late to ask!

February 19, 2009, 10:04

I have a very important question... Anyone know the e-mail address of the blond chick sitting to the right of the chick in the red shirt whom asked for clarification of Fibonacci series? They don't have hot CS chicks like that at my school..

Jacob
school with held for personal safety reasons

tabane Permalink
August 13, 2009, 10:26

i want to write c++ code that implements strassen's algorithm for an n*n matrix. My question is how to divide a matrix into sub-matrices and apply strassen's algorithm on them.

den Permalink
November 22, 2009, 04:18

Very informative, thank you very much :)

mandeep Permalink
August 04, 2010, 12:54

that's good....great help....thanks mit.......

Jay Permalink
November 23, 2012, 14:41

I can't access any of these videos on Google. I keep getting a message "This Video is private. Sorry"

anon Permalink
January 30, 2013, 01:44

All the videos as linked from here are private. However, they are all available on iTunes.

Yogender Permalink
September 29, 2013, 16:09

Great work done bro.
Reflects your passion.
Also, thanks for the notes.

sare Permalink
February 28, 2014, 08:22

thanks,very useful.

2cupsoftech Permalink
April 10, 2014, 16:28

I'm not entirely clear on how we get L(n/4) for the VLSI H-tree layout, can anyone explain how to calculate the height/width of the efficient VLSI layout?

ucmsuri Permalink
December 15, 2014, 09:34

i am not able to watch any of those lectures. Is there any problem with your server?

