January 21, 2010

# MIT Linear Algebra, Lecture 5: Vector Spaces and Subspaces

This is the fifth post in an article series about MIT's course "Linear Algebra". In this post I will review lecture five, which finally introduces real linear algebra topics such as **vector spaces**, their **subspaces** and **spaces from matrices**. But before it does that, it closes out the topics started in the previous lecture: **permutations**, **transposes** and **symmetric matrices**.

Here is a list of the previous posts in this article series:

- Lecture 1: Geometry of Linear Equations
- Lecture 2: Elimination with Matrices
- Lecture 3: Matrix Multiplication and Inverse Matrices
- Lecture 4: A=LU Factorization

## Lecture 5: Vector Spaces and Subspaces

The lecture starts by recalling some facts about permutation matrices. Remember from the previous lecture that permutation matrices P execute row exchanges and that they are identity matrices with reordered rows.

Let's count how many permutation matrices there are for an nxn matrix.

For a matrix of size 1x1, there is just one permutation matrix - the identity matrix.

For a matrix of size 2x2 there are two permutation matrices - the identity matrix and the identity matrix with rows exchanged.

For a matrix of size 3x3 we may have the rows of the identity matrix rearranged in 6 ways - {1,2,3}, {1,3,2}, {2,1,3}, {2,3,1}, {3,1,2}, {3,2,1}.

For a matrix of size 4x4 the number of ways to reorder the rows is the same as the number of ways to rearrange numbers {1,2,3,4}. This is the simplest possible combinatorics problem. The answer is 4! = 24 ways.

In general, for an nxn matrix, there are n! permutation matrices.

Another key fact to remember about permutation matrices is that their inverse P^{-1} is their transpose P^{T}. Or algebraically P^{T}·P = I.
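Both facts are easy to check computationally. Here is a quick Python sketch (mine, not from the lecture) that builds all 3x3 permutation matrices by reordering the rows of the identity matrix, and verifies that there are 3! of them and that P^{T}·P = I for each:

```python
from itertools import permutations

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

# The n x n permutation matrices are exactly the identity matrix
# with its rows reordered -- one matrix per permutation of the rows.
identity = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
mats = [[identity[i] for i in p] for p in permutations(range(3))]

print(len(mats))  # 6, which is 3!

# For every permutation matrix, P^T * P is the identity.
print(all(matmul(transpose(p), p) == identity for p in mats))  # True
```

Changing `3` to any `n` produces the n! permutation matrices of size nxn.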

The lecture proceeds to **transpose matrices**. The transpose of a matrix exchanges its columns with rows. Another way to think about it is that it flips the matrix over its main diagonal. The transpose of matrix A is denoted by A^{T}.

Here is an example of transpose of a 3-by-3 matrix. I color coded the columns to better see how they get exchanged:

A matrix does not have to be square for its transpose to exist. Here is another example of transpose of a 3-by-2 matrix:

In algebraic notation transpose is expressed as (A^{T})_{ij} = A_{ji}, which says that an element a_{ij} at position ij gets transposed into position ji.

Here are the rules for matrix transposition:

- The transpose of A + B is (A + B)^{T} = A^{T} + B^{T}.
- The transpose of A·B is (A·B)^{T} = B^{T}·A^{T}.
- The transpose of A·B·C is (A·B·C)^{T} = C^{T}·B^{T}·A^{T}.
- The transpose of A^{-1} is (A^{-1})^{T} = (A^{T})^{-1}.
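If you want to convince yourself of these rules, here is a small Python check (my own sketch, not part of the lecture) of the sum and product rules on a pair of 2x2 matrices:

```python
def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def matadd(a, b):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(a, b)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# (A + B)^T = A^T + B^T
print(transpose(matadd(A, B)) == matadd(transpose(A), transpose(B)))  # True

# (A·B)^T = B^T·A^T -- note the reversed order on the right.
print(transpose(matmul(A, B)) == matmul(transpose(B), transpose(A)))  # True
```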

Next the lecture continues with **symmetric matrices**. A symmetric matrix has its transpose equal to itself, i.e., A^{T} = A. It means that we can flip the matrix along the diagonal (transpose it) but it won't change.

Here is an example of a symmetric matrix. Notice that the elements on opposite sides of the diagonal are equal:

Now check this out. If you have a matrix R that is not symmetric and you multiply it with its transpose R^{T} as R·R^{T}, you get a symmetric matrix! Here is an example:

Are you wondering why it's true? The proof is really simple. Remember that a matrix is symmetric if its transpose is equal to itself. Now what's the transpose of the product R·R^{T}? It's (R·R^{T})^{T} = (R^{T})^{T}·R^{T} = R·R^{T} - the same product, which means that R·R^{T} is always symmetric.

Here is another cool fact - the inverse of a symmetric matrix (if it exists) is also symmetric. Here is the proof. Suppose A is symmetric, then the transpose of A^{-1} is (A^{-1})^{T} = (A^{T})^{-1}. But A^{T} = A, therefore (A^{T})^{-1} = A^{-1}. We have shown (A^{-1})^{T} = A^{-1}, which means A^{-1} is symmetric.
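Here is a little Python sketch (mine, not from the lecture) that checks both facts on small examples - that R·R^{T} comes out symmetric, and that the inverse of a symmetric 2x2 matrix is symmetric:

```python
def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

# R is a 3x2 matrix -- not even square, let alone symmetric.
R = [[1, 3], [2, 3], [4, 1]]
S = matmul(R, transpose(R))
print(S == transpose(S))  # True: R·R^T is symmetric

# The inverse of a symmetric 2x2 matrix is symmetric too.
A = [[2, 1], [1, 2]]                      # symmetric, det = 3
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det, A[0][0] / det]]
print(A_inv == transpose(A_inv))  # True
```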

At this point the lecture finally reaches the fundamental topic of linear algebra - **vector spaces**. As usual, it introduces the topic by examples.

Example 1: Vector space **R**^{2} - all 2-dimensional vectors. Some of the vectors in this space are (3, 2), (0, 0), (π, e) and infinitely many others. These are all the vectors with two components and they represent the xy plane.

Example 2: Vector space **R**^{3} - all vectors with 3 components (all 3-dimensional vectors).

Example 3: Vector space **R**^{n} - all vectors with n components (all n-dimensional vectors).

What makes these sets of vectors vector spaces is that they are closed under multiplication by a scalar and under addition, i.e., a vector space must be closed under linear combinations of vectors. What I mean by that is if you take two vectors and add them together, or multiply a vector by a scalar, the result is still in the same space.

For example, take a vector (1,2,3) in **R**^{3}. If we multiply it by any number α, it's still in **R**^{3} because α·(1,2,3) = (α, 2α, 3α). Similarly, if we take any two vectors (a, b, c) and (d, e, f) and add them together, the result is (a+d, b+e, c+f) and it's still in **R**^{3}.

There are actually 8 axioms that vector addition and scalar multiplication must satisfy for a set of vectors to be a vector space, but they are not listed in this lecture.

Here is an example of not-a-vector-space. It's 1/4 of **R**^{2} (the 1st quadrant). The green vectors are in the 1st quadrant but the red one is not:

An example of not-a-vector-space.

This is not a vector space because the 1st quadrant is not closed under multiplication by a scalar. If we take the vector (3,1) and multiply it by -1, we get the red vector (-3, -1), but it's not in the 1st quadrant; therefore the quadrant is not a vector space.

Next, Gilbert Strang introduces **subspaces** of vector spaces.

For example, any line in **R**^{2} that goes through the origin (0, 0) is a subspace of **R**^{2}. Why? Because if we take any vector on the line and multiply it by a scalar, it's still on the line. And if we take any two vectors on the line and add them together, they are also still on the line. The requirement for a subspace is that the vectors in it do not go outside when added together or multiplied by a number.

Here is a visualization. The blue line is a subspace of **R**^{2} because the red vectors on it can't go outside of the line:

An example of subspace of **R**^{2}.

An example of not-a-subspace of **R**^{2} is any line that does not go through the origin. If we take any vector on the line and multiply it by 0, we get the zero vector, but it's not on the line. Also, if we take two vectors on the line and add them together, their sum is not on the line either. Here is a visualization:

An example of not-a-subspace of **R**^{2}.

Let's list all the subspaces of **R**^{2}. They are:

- **R**^{2} itself,
- any line through the origin (0, 0),
- the zero vector (0, 0).

And all the subspaces of **R**^{3} are:

- **R**^{3} itself,
- any line through the origin (0, 0, 0),
- any plane through the origin (0, 0, 0),
- the zero vector.

The last 10 minutes of the lecture are spent on **column spaces of matrices**.

The column space of a matrix is made out of all the linear combinations of its columns. For example, given this matrix:

The column space C(A) is the set of all vectors {α·(1,2,4) + β·(3,3,1)}. In fact, this column space is a subspace of **R**^{3} and it forms a plane through the origin.
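Here is a small Python sketch (my own, not from the lecture) that plays with this column space. It forms combinations α·(1,2,4) + β·(3,3,1) and checks that every combination lies in the plane through the origin whose normal is the cross product of the two columns:

```python
# The two columns of A from the example.
c1 = (1, 2, 4)
c2 = (3, 3, 1)

def comb(alpha, beta):
    """The linear combination alpha*c1 + beta*c2 -- a vector in C(A)."""
    return tuple(alpha * x + beta * y for x, y in zip(c1, c2))

print(comb(1, 0))   # (1, 2, 4) -- the first column itself
print(comb(0, 0))   # (0, 0, 0) -- the zero vector is always in C(A)

# The plane spanned by c1 and c2 has normal vector c1 x c2 (cross product);
# every combination must be perpendicular to this normal.
normal = (c1[1] * c2[2] - c1[2] * c2[1],
          c1[2] * c2[0] - c1[0] * c2[2],
          c1[0] * c2[1] - c1[1] * c2[0])
v = comb(2, -1)
print(sum(n * x for n, x in zip(normal, v)))  # 0 -- v lies in the plane
```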

More about column spaces in the next lecture.

You're welcome to watch the video lecture five:

Direct link: http://www.youtube.com/watch?v=JibVXBElKL0

Topics covered in lecture five:

- [01:30] Permutations.
- [03:00] A=LU elimination without row exchanges.
- [03:50] How Matlab does A=LU elimination.
- [04:50] PA=LU elimination with row exchanges.
- [06:40] Permutation matrices.
- [07:25] How many permutation matrices are there?
- [08:30] Permutation matrix properties.
- [10:30] Transpose matrices.
- [11:50] General formula for transposes: (A^{T})_{ij} = A_{ji}.
- [13:06] Symmetric matrices.
- [13:30] Example of a symmetric matrix.
- [15:15] R·R^{T} is always symmetric.
- [18:23] Why is R·R^{T} symmetric?
- [20:50] Vector spaces.
- [22:05] Examples of vector spaces.
- [22:55] Real vector space **R**^{2}.
- [23:20] Picture of **R**^{2} - the xy plane.
- [26:50] Vector space **R**^{3}.
- [28:00] Vector space **R**^{n}.
- [30:00] Example of not a vector space.
- [32:00] Subspaces of vector spaces.
- [33:00] A vector space inside **R**^{2}.
- [34:35] A line in **R**^{2} that is a subspace.
- [34:50] A line in **R**^{2} that is not a subspace.
- [36:30] All subspaces of **R**^{2}.
- [39:30] All subspaces of **R**^{3}.
- [40:20] Subspaces of matrices.
- [41:00] Column spaces of matrices C(A).
- [44:10] Example of column space of a matrix with columns in **R**^{3}.

Here are my notes of lecture five:

Have fun with this lecture! The next post is going to be more about column spaces and null spaces of matrices.

PS. This course is taught from Introduction to Linear Algebra textbook. Get it here:

This is the sixth post in the article series "**Vim Plugins You Should Know About**". This time I am going to introduce you to a vim plugin called "**nerd_tree.vim**". It's so useful that I can't imagine working without it in vim.

**Nerd Tree** is a nifty plugin that allows you to explore the file system and open files and directories directly from vim. It opens the file system tree in a new vim window and you may use keyboard shortcuts and mouse to open files in new tabs, in new horizontal and vertical splits, quickly navigate between directories and create bookmarks for your most important projects.

This plugin was written by **Marty Grenfell** (also known as **scrooloose**).

Previous articles in the series:

- Part I: surround.vim - find and edit surrounding parens, quotes, html tags, etc.
- Part II: repeat.vim - repeat the previous surrounding command.
- Part III: matchit.vim - extends the functionality of % key.
- Part IV: snipmate.vim - the best snippet plugin for vim
- Part V: a.vim - quickly change between source and header files (.c/.h).

PS. Please help me reach 10,000 RSS subscribers. I am almost there. If you enjoy my posts and have not yet subscribed, subscribe here!

## How to use nerd_tree.vim?

The Nerd Tree plugin can be activated by the `:NERDTree` vim command. It will open in vim as a new vertical split on the left:

A screenshot of Nerd Tree plugin in action.

Here are the basics of how to use the plugin:

- Use the natural vim navigation keys `hjkl` to navigate the files.
- Press `o` to open the file in a new buffer or open/close a directory.
- Press `t` to open the file in a new tab.
- Press `i` to open the file in a new horizontal split.
- Press `s` to open the file in a new vertical split.
- Press `p` to go to the parent directory.
- Press `r` to refresh the current directory.

All other keyboard shortcuts can be found by pressing `?`. It will open a special help screen with the shortcut listings. Press `?` again to get back to the file tree.

To close the plugin, execute the `:NERDTreeClose` command.

Typing `:NERDTree` and `:NERDTreeClose` all the time is really inconvenient. Therefore I have mapped the toggle command **:NERDTreeToggle** to the F2 key. This way I can quickly open and close Nerd Tree whenever I wish. You can also map it to F2 by putting `map <F2> :NERDTreeToggle<CR>` in your .vimrc file.

## How to install nerd_tree.vim?

To get the latest version:

1. Download NERD_tree.zip.
2. Extract NERD_tree.zip to ~/.vim (on Unix/Linux) or ~\vimfiles (on Windows).
3. Run :helptags ~/.vim/doc (on Unix/Linux) or :helptags ~/vimfiles/doc (on Windows) to rebuild the tags file (so that you can read :help NERD_tree).
4. Restart Vim.

## Have Fun!

Have fun exploring your files with this awesome plugin and until next time!

January 13, 2010

# Using Fibonacci Numbers to Convert from Miles to Kilometers and Vice Versa

I learned an interesting fact about Fibonacci numbers recently while watching a lecture on number theory. Fibonacci numbers can be used to approximately convert from miles to kilometers and back.

Here is how.

Take two consecutive Fibonacci numbers, for example 5 and 8. And you're done converting. No kidding – there are 8 kilometers in 5 miles. To convert back just read the result from the other end - there are 5 miles in 8 km!

Another example. Let's take the consecutive Fibonacci numbers 21 and 34. What this tells us is that there are approximately 34 km in 21 miles and vice versa. (The exact answer is 33.79 km.)

If you need to convert a number that is not a Fibonacci number, just express the original number as a sum of Fibonacci numbers and do the conversion for each Fibonacci number separately.

For example, how many kilometers are there in 100 miles? Number 100 can be expressed as a sum of Fibonacci numbers 89 + 8 + 3. Now, the Fibonacci number following 89 is 144, the Fibonacci number following 8 is 13 and the Fibonacci number following 3 is 5. Therefore the answer is 144 + 13 + 5 = 162 kilometers in 100 miles. This is less than 1% off from the precise answer, which is 160.93 km.

Another example, how many miles are there in 400 km? Well, 400 is 377 + 21 + 2. Since we are going the opposite way now from miles to km, we need the preceding Fibonacci numbers. They are 233, 13 and 1. Therefore there are 233 + 13 + 1 = 247 miles in 400 km. (The correct answer is 248.55 miles.)

Just remember that if you need to convert from km to miles, you need to find the preceding Fibonacci number. But if you need to convert from miles to km, you need the subsequent Fibonacci number.

If the distance you're converting can be expressed as a single Fibonacci number, then for numbers greater than 21 the error is always around 0.5%. However, if the distance needs to be composed as a sum of `n` Fibonacci numbers, then the error will be around sqrt(n)·0.5%.

**Here's why it works.**

Fibonacci numbers have the property that the ratio of two consecutive numbers tends to the Golden ratio as the numbers get bigger and bigger. The Golden ratio happens to be approximately 1.618.

Coincidentally, there are 1.609 kilometers in a mile, which is within 0.5% of the Golden ratio.

Now that we know these two key facts, we can figure out how to do the conversion. If we take two consecutive Fibonacci numbers, F_{n+1} and F_{n}, we know that their ratio F_{n+1}/F_{n} is approximately 1.618. Since the ratio is also almost the same as kilometers per mile, we can write F_{n+1}/F_{n} = [mile]/[km]. It follows that F_{n}·[mile] = F_{n+1}·[km], which translates to English as "n-th Fibonacci number in miles is the same as (n+1)-th Fibonacci number in kilometers".

That's all there is to it. A pure coincidence that the Golden ratio is almost the same as kilometers in a mile.

January 07, 2010

# Famous Perl One-Liners Explained, Part IV: String and Array Creation

This is the fourth part of a nine-part article on **famous Perl one-liners**. In this part I will create various one-liners for **string and array creation**. See part one for introduction of the series.

Famous Perl one-liners is my attempt to create "**perl1line.txt**" that is similar to "awk1line.txt" and "sed1line.txt" that have been so popular among Awk and Sed programmers.

The article on famous Perl one-liners will consist of nine parts:

- Part I: File spacing.
- Part II: Line numbering.
- Part III: Calculations.
**Part IV: String creation and array creation (this part).**- Part V: Text conversion and substitution.
- Part VI: Selective printing and deleting of certain lines.
- Part VII: Handy regular expressions.
- Part VIII: Release of perl1line.txt.
- Part IX: Release of Perl One-Liners e-book.

I decided that there will be two new parts in this series. The most powerful feature in Perl is its regular expressions, therefore I will write a part on "**Handy Perl regular expressions**." I also decided to **publish an e-book** after I am done with the series, so that will be the last part of this series. Subscribe to my blog to know when that happens!

**Awesome news:** I have written an e-book based on this article series. Check it out:

I also updated the previous part on calculations with 14 new one-liners on finding values of constants pi and e, doing date calculations, finding factorial, greatest common divisor, least common multiple, generating random numbers, generating permutations, finding power sets and doing some IP address conversions.

Here are today's one-liners:

## String Creation and Array Creation

**49. Generate and print the alphabet.**

perl -le 'print a..z'

This one-liner prints all the letters from a to z as `abcdefghijklmnopqrstuvwxyz`. The letters are generated by the range operator `..`. The range operator, when used in the list context (which is forced here by `print`) on strings, uses the magical auto-increment algorithm that advances the string to the next character. So in this one-liner the auto-increment algorithm on the range `a..z` produces all the letters from a to z.

I really golfed this one-liner. If you used `strict` it would not work because of the barewords `a` and `z`. A semantically more correct version is this:

perl -le 'print ("a".."z")'

Remember that the range operator `..` produces a list of values. If you wish, you may print them comma-separated by setting the `$,` special variable:

perl -le '$, = ","; print ("a".."z")'

There are many more special variables. Take a look at my special variable cheat sheet for a complete listing.

Syntactically more appealing is to use join to separate the list with a comma:

perl -le 'print join ",", ("a".."z")'

Here the list `a..z` gets joined by a comma before printing.

**50. Generate and print all the strings from "a" to "zz".**

perl -le 'print ("a".."zz")'

Here the range operator `..` is used again. This time it does not stop at "z" as in the previous one-liner, but advances "z" by one character, producing "aa"; then it keeps going, producing "ab", "ac", ..., until it hits "az". At this point it advances the string to "ba", continues with "bb", "bc", ..., until it reaches "zz".

Similarly, you may generate all strings from "aa" to "zz" by:

perl -le 'print "aa".."zz"'

Here it goes like "aa", "ab", ..., "az", "ba", "bb", ..., "bz", "ca", ... "zz".

**51. Create a hex lookup table.**

@hex = (0..9, "a".."f")

Here the array `@hex` gets filled with values `0, 1, 2, 3, 4, 5, 6, 7, 8, 9` and letters `a, b, c, d, e, f`.

You may use this array to convert a number (in variable `$num`) from decimal to hex using the base conversion formula:

perl -le '$num = 255; @hex = (0..9, "a".."f"); while ($num) { $s = $hex[($num%16)&15].$s; $num = int $num/16 } print $s'

Surely, a much easier way to convert a number to hex is to use the `printf` function (or the `sprintf` function) with the `%x` format specifier. (The example above just illustrates a use of the hex lookup table that we created with the range operator.)

perl -le '$hex = sprintf("%x", 255); print $hex'

(See my Perl printf and sprintf format cheat sheet for all the format specifiers.)

To convert the number back from hex to dec, use the `hex` function:

perl -le '$num = "ff"; print hex $num'

The `hex` function takes a hex string (beginning with or without "0x") and converts it to decimal.

**52. Generate a random 8 character password.**

perl -le 'print map { ("a".."z")[rand 26] } 1..8'

Here the `map` function executes the code `("a".."z")[rand 26]` 8 times (because it iterates over the dummy range `1..8`). In each iteration the code chooses a random letter from the alphabet. When `map` is done iterating, it returns the generated list of characters and the `print` function prints it out by concatenating all the characters together.

If you also wish to include numbers in the password, add `0..9` to the list of characters to choose from and change `26` to `36`, as there are now 36 different characters to choose from:

perl -le 'print map { ("a".."z", 0..9)[rand 36] } 1..8'

If you need a longer password, change `1..8` to `1..20` to generate a 20-character password.

**53. Create a string of specific length.**

perl -le 'print "a"x50'

The `x` operator is the repetition operator. This one-liner creates a string of 50 letters "a" and prints it.

If the repetition operator is used in list context, it creates a list (instead of scalar) with the given elements repeated:

perl -le '@list = (1,2)x20; print "@list"'

This one-liner creates a list of twenty repetitions of (1, 2) (it looks like (1, 2, 1, 2, 1, 2, ...)).

**54. Create an array from a string.**

@months = split ' ', "Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec"

Here `@months` gets filled with values from the string containing the month names. As each month name is separated by a space, the `split` function splits them and puts them in `@months`. This way `$months[0]` contains "Jan", `$months[1]` contains "Feb", ..., and `$months[11]` contains "Dec".

Another way to do the same is by using the `qw//` operator:

@months = qw/Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec/

The `qw//` operator takes a space-separated string and creates an array with each word as an array element.

**55. Create a string from an array.**

@stuff = ("hello", 0..9, "world"); $string = join '-', @stuff

Here the values in the array `@stuff` get turned into a string `$string` that has them separated by a hyphen. Turning an array into a string is done by the `join` function, which takes a separator and a list, and concatenates the items in the list into a single string, separated by the separator.

**56. Find the numeric values for characters in the string.**

perl -le 'print join ", ", map { ord } split //, "hello world"'

This one-liner takes the string "hello world", splits it into a list of characters by `split //, "hello world"`, then `map`s the `ord` function onto each of the characters, which returns the numeric, native 8-bit encoding (like ASCII or EBCDIC) of each character. Finally all the numeric values get joined together by a comma and printed out.

Another way to do the same is to use the `unpack` function and specify `C*` as the unpacking template (C means an unsigned character and * means as many characters as there are):

perl -le 'print join ", ", unpack("C*", "hello world")'

**57. Convert a list of numeric ASCII values into a string.**

perl -le '@ascii = (99, 111, 100, 105, 110, 103); print pack("C*", @ascii)'

Just as we unpacked a string into a list of values with the `C*` template in the one-liner above, we can pack them back into a string.

Another way to do the same is to use the `chr` function, which takes a code point value and returns the corresponding character:

perl -le '@ascii = (99, 111, 100, 105, 110, 103); print map { chr } @ascii'

Similar to one-liner #56 above, the `chr` function gets mapped onto each value in `@ascii`, producing the characters.

**58. Generate an array with odd numbers from 1 to 100.**

perl -le '@odd = grep {$_ % 2 == 1} 1..100; print "@odd"'

This one-liner generates an array of odd numbers from 1 to 99 (as 1, 3, 5, 7, 9, 11, ..., 99). It uses the `grep` function, which evaluates the given code `$_ % 2 == 1` for each element in the given list `1..100` and returns only the elements for which the code evaluates to true. In this case the code tests if the remainder of the number divided by 2 is 1. If it is, the number is odd and it gets put in the `@odd` array.

Another way to write the same is by remembering that odd numbers have the low bit set and testing this fact:

perl -le '@odd = grep { $_ & 1 } 1..100; print "@odd"'

The expression `$_ & 1` isolates the low bit, and `grep` selects only the numbers with the low bit set (the odd numbers).

See my explanation of bit-hacks for full explanation and other related bit-hacks.

**59. Generate an array with even numbers from 1 to 100.**

perl -le '@even = grep {$_ % 2 == 0} 1..100; print "@even"'

This is almost the same as the previous one-liner, except that the condition `grep` tests for is "is the number even (is the remainder when dividing by 2 zero)?"

**60. Find the length of the string.**

perl -le 'print length "one-liners are great"'

Just for completeness, the `length` function finds the length of the string.

**61. Find the number of elements in an array.**

perl -le '@array = ("a".."z"); print scalar @array'

Evaluating an array in a scalar context returns the number of elements in it.

Another way to do the same is by adding one to the last index of the array:

perl -le '@array = ("a".."z"); print $#array + 1'

Here `$#array` returns the last index in the array `@array`. Since that is one less than the number of elements, we add 1 to the result to find the total number of elements in the array.

## Perl one-liners explained e-book

I've now written the "Perl One-Liners Explained" e-book based on this article series. I went through all the one-liners, improved explanations, fixed mistakes and typos, added a bunch of new one-liners, added an introduction to Perl one-liners and a new chapter on Perl's special variables. Please take a look:

## Have Fun!

Have fun with these one-liners for now. The next part is going to be about text conversion and substitution.

**Can you think of other string creating and array creation one-liners that I didn't include here?**

This is the fourth post in an article series about MIT's course "Linear Algebra". In this post I will review lecture four on **factorizing** a matrix A into a product of a **lower-triangular** matrix L and an **upper-triangular** matrix U, or in other words A=LU. The lecture also shows how to find the **inverse of matrix product** A·B, how to find the **inverse of transposed matrix** A^{T}, and introduces **permutation matrices**.

Here are the previous lectures:

- Lecture 1: Geometry of Linear Equations
- Lecture 2: Elimination with Matrices
- Lecture 3: Matrix Multiplication and Inverse Matrices

## Lecture 4: A=LU Factorization

Lecture four starts with a small review of inverse matrices. Remember from the previous lecture that the inverse of matrix A is the matrix A^{-1} such that multiplying them together, A·A^{-1}, produces the identity matrix I.

Another key fact to remember is that matrix multiplication is associative (can change parenthesis) and that for square matrices the right-inverse is also the left-inverse A·A^{-1} = A^{-1}·A = I.

The lecture then continues with finding the inverse of the matrix product A·B. The answer is found by reasoning about what we should multiply A·B by to get the identity matrix. Let's try B^{-1}·A^{-1}. If this is the inverse of A·B, then multiplying it with A·B from the left and right sides should produce the identity matrix I. Let's test this.

From the right side: (A·B)·(B^{-1}·A^{-1}). Since we can rearrange parenthesis, this is the same as A·(B·B^{-1})·A^{-1}. But B·B^{-1} is identity, therefore A·(B·B^{-1})·A^{-1} = A·I·A^{-1} = A·A^{-1} = I. Seems good so far.

Now the left side: (B^{-1}·A^{-1})·(A·B). We rearrange parenthesis again, B^{-1}·(A^{-1}·A)·B and just like in the previous example, A^{-1}·A is identity and B^{-1}·I·B is B^{-1}·B = I.

We have found the inverse of (A·B). It's (B^{-1}·A^{-1}).

Next, while we're at it, the lecture continues with finding the inverse of transposed matrix A^{T}. It's found by reasoning again. Let's first look at the equation A·A^{-1} = I. We can transpose both sides of this equation and it will still hold. If we transpose the right side, we get the same identity matrix I, because I^{T}=I. But what about the left-hand side? Transposing the left-hand side we get (A·A^{-1})^{T} = (A^{-1})^{T}·A^{T}. Now left-hand side is still equal to right-hand side, therefore (A^{-1})^{T}·A^{T} = I. But from this equation it's instantly obvious that the inverse of A^{T} is (A^{-1})^{T}.

We can therefore note and remember that (A^{T})^{-1} = (A^{-1})^{T}.
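Here is a quick numeric check of this identity in Python (my own sketch, not from the lecture), using the 2x2 cofactor formula for the inverse:

```python
def transpose(m):
    return [list(row) for row in zip(*m)]

def inv2(m):
    """Inverse of a 2x2 matrix via the cofactor formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[4, 7], [2, 6]]
lhs = inv2(transpose(A))    # (A^T)^{-1}
rhs = transpose(inv2(A))    # (A^{-1})^T
print(lhs == rhs)  # True
```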

Finally the lecture moves on to today's key topic of **A=LU decomposition**. As usual, the topic is introduced by an example.

Let's take a look at this 2x2 matrix A:

And let's try to find the elementary matrix E_{21} that will eliminate the element at row 2, column 1. Multiplying the first row by 4 and subtracting it from the second row will produce a zero at 2, 1.

But look at the right-hand side. It's the **upper-triangular matrix** U (all elements below the diagonal are 0) that we were looking for!

Now if we multiply both sides by the inverse of E_{21}, we get (E_{21})^{-1}·E_{21}·A = (E_{21})^{-1}·U. But (E_{21})^{-1}·E_{21} is identity, therefore A = (E_{21})^{-1}·U. From this equation it's instantly obvious that the **lower-triangular matrix** L is nothing else but (E_{21})^{-1}!

Another form of factorization is A = LDU, where D is the diagonal matrix that contains the pivots. For this example it would be:

Now imagine that we have some arbitrary 3x3 matrix A. To reduce matrix A into upper-triangular matrix U, we first eliminate the element at position 2,1, that is, we multiply A (from the left side) with elimination matrix E_{21}. Then we eliminate the element at 3,1 by multiplying (from the left side) with elimination matrix E_{31}. Finally we eliminate the element at 3,2 by multiplying with elimination matrix E_{32}:

E_{32}·E_{31}·E_{21}·A = U

It follows from this equation that the lower-triangular matrix is the inverse of E_{32}·E_{31}·E_{21}, that is,

L = (E_{32}·E_{31}·E_{21})^{-1} = E_{21}^{-1}·E_{31}^{-1}·E_{32}^{-1}

We have found the factorization of a 3x3 matrix:

A = E_{21}^{-1}·E_{31}^{-1}·E_{32}^{-1}·U = L·U

The algorithm for finding matrices L and U should now be clear. First do the elimination to find matrix U, then invert the product of the elimination matrices used for finding U to get L. Actually it's even easier: you don't need to keep the elimination matrices E or find their inverses. You can just keep the multipliers used in each elimination step. Please see the video to find out how it works.
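The multiplier trick can be sketched in a few lines of Python (my own sketch; the example matrix is an assumption, not necessarily the one used in the lecture). Elimination runs without row exchanges, and each multiplier drops straight into L:

```python
def lu(a):
    """Elimination without row exchanges, keeping the multipliers.

    The multiplier l_ik used to knock out the entry at row i, column k
    goes straight into position i, k of L -- no matrix inversion needed.
    """
    n = len(a)
    U = [row[:] for row in a]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]   # the multiplier for this step
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

A = [[2, 1, 1], [4, -6, 0], [-2, 7, 2]]
L, U = lu(A)
print(L)                  # lower-triangular, 1s on the diagonal
print(U)                  # upper-triangular
print(matmul(L, U) == A)  # True: A = L·U
```

This only works when no pivot comes out zero; otherwise row exchanges (the PA=LU case) are needed.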

Next, the cost of the elimination algorithm is discussed. How many steps does it take to go from matrix A to U? It turns out elimination is an O(n^{3}) process (more precisely, it takes around n^{3}/3 operations). (See my notes on MIT's Introduction to Algorithms for more info about O-notation.)

Since we sometimes need to do row exchanges to do elimination, the last ten minutes of the lecture are spent on permutation matrices. Remember from lecture two that multiplying a matrix from the left side with a permutation matrix exchanges its rows.

The key facts about permutation matrices P are:

- The inverse of P is its transpose: P^{-1} = P^{T}.
- There are n! permutation matrices for nxn matrices.

Here is the video of the fourth lecture:

Direct link: http://www.youtube.com/watch?v=5hO3MrzPa0A

Topics covered in lecture four:

- [01:20] What is the inverse of A·B?
- [05:20] What is the inverse of A^{T}?
- [09:20] A=LU factorization for 2x2 matrices.
- [13:20] A=LDU factorization.
- [14:50] A=LU decomposition for 3x3 matrices.
- [18:30] Why is finding matrix L trivial?
- [27:25] How many operations does elimination take?
- [42:20] Permutation matrices.
- [47:20] What is the inverse of a permutation matrix?
- [48:20] How many P matrices for 3x3 and 4x4 matrices?

Here are my notes of lecture four:

Have fun with this lecture! The next post is going to be about transpose matrices, symmetric matrices, vector spaces, their subspaces and column spaces of matrices.

PS. This course is taught from Introduction to Linear Algebra textbook. Get it here: