awk programming and youtube

Alright. Let's get things going on this blog. This is the first post and I am still getting the design right, so bear with me.

As I mentioned in the lengthy 'About this blog' post, one of the things I love to do is figure out how to get something done with the wrong set of tools. I love it because it teaches me the "dark corners" of those tools.

This will be a tutorial on how to download YouTube videos. I just love watching YouTube videos; one of the latest videos I watched was this brilliant commercial. I also love programming, and one of the languages I learned recently was awk (the name comes from the initials of its designers: Alfred V. Aho, Peter J. Weinberger and Brian W. Kernighan). Would it be possible to download YouTube videos with awk?!

I do not want to go into the language details since that is not my goal today. If you want to learn this cool language, check out this tutorial or these books.

The awk language does not have native networking support, so without some networking tool to create a network connection for us and pipe the contents from YouTube to awk, we would be out of luck. awk is also not well suited to handling binary data, so we will have to figure out how to read large amounts of binary data from the net in an efficient manner.

Let's find out what Google has to say about awk and networking. A quick search for 'awk + networking' gives us an interesting result: "TCP/IP Internetworking With `gawk'". Hey, wow! Just what we were looking for! GNU's awk implementation provides networking support through special files!

Quoting the manual:

The special file name for network access is made up of several fields, all of which are mandatory:
/inet/protocol/localport/hostname/remoteport

Cool! We know that the web talks over the tcp protocol on port 80, and that we are accessing www.youtube.com for videos. So the special file for accessing the YouTube website would be:

/inet/tcp/0/www.youtube.com/80

(localport is 0 because we are a client; the operating system picks a free local port for us)

Now let's test this out and get the banner of YouTube's webserver by making an HTTP HEAD request to the web server and reading the response back. The following script will get the HEAD response from YouTube:

BEGIN {
YouTube = "/inet/tcp/0/www.youtube.com/80"
print "HEAD / HTTP/1.0\r\n\r\n" |& YouTube
while ((YouTube |& getline) > 0)
  print $0
close(YouTube)
}

I saved this script to a file called youtube.head.awk and ran gawk from the command line on my Linux box:

pkrumins@graviton:~$ gawk -f youtube.head.awk
HTTP/1.1 200 OK
Date: Mon, 09 Jul 2007 21:41:59 GMT
Server: Apache
...
[truncated]

Yeah! It worked!

Now, let's find out how YouTube embeds videos on their site. We know that the video is played with a flash player, so the html code which displays it must be present in the page source. Let's find it.
I'll go a little easy here so that less experienced readers can learn something, too. Suppose we did not know how the flash was embedded in the page. How could we find it?

One way would be to notice that the title of the video is 'The Wind' and then search for this string in the html source until we spot something like 'swf', which is the extension for flash files, or 'flash'.

The other way would be to use a better tool, such as the Firefox browser's Firebug extension, and arrive at the right place in the source instantly: instead of searching the source, bring up Firebug's console and inspect the embedded flash movie.

After doing this we would find that YouTube videos are displayed on the page by calling this JavaScript function which generates the appropriate html:

SWFObject("/player2.swf?hl=en&video_id=2mTLO2F_ERY&l=123&t=OEgsToPDskK5DwdDH6isCsg5GtXyGpTN&soff=1&sk=sZLEcvwsRsajGmQF7OqwWAU"

Visiting this URL http://www.youtube.com/player2.swf?hl=en... loads the video player in full screen. Not quite what we want. We want just the video file that is being played in the video player. How does this flash player load the video? There are two ways to find out: use a network traffic analyzer like Wireshark (previously Ethereal), or disassemble the flash player with SoThink's SWF Decompiler (a commercial tool; I don't know of a free alternative) to see the ActionScript which loads the movie. I hope to show how to find the video file url using both of these methods in future posts.

UPDATE (2007.10.21): This is no longer true. Now YouTube gets videos by taking 'video_id' and 't' id from the following JavaScript object:

var swfArgs = {hl:'en',video_id:'xh_LmxEuFo8',l:'39',t:'OEgsToPDskKwChZS_16Tu1BqrD4fueoW',sk:'ZU0Zy4ggmf9MYx1oVLUcYAC'};

UPDATE (2008.03.01): This is no longer true. Now YouTube gets videos by taking 'video_id' and 't' id from the following JavaScript object:

var swfArgs = {"BASE_YT_URL": "http://youtube.com/", "video_id": "JJ51hx3wGgI", "l": 242, "sk": "sZLEcvwsRsajGmQF7OqwWAU", "t": "OEgsToPDskJfAwvlG0JDr8cO-HVq2RaB", "hl": "en", "plid": "AARHZ9SrFgUPvbFgAAAAcADYAAA", "e": "h", "tk": "KVRgpgeftCUWrYaeqpikCbNxXMXKmdUoGtfTNVkEouMjv1SwamY-Wg=="};

UPDATE (2009.08.25): This is also no longer true. Now YouTube gets videos by requesting it from one of the urls specified in 'fmt_url_map', which is located in the following JavaScript object:

var swfArgs = {"rv.2.thumbnailUrl": "http%3A%2F%2Fi4.ytimg.com%2Fvi%2FCSG807d3P-U%2Fdefault.jpg", "rv.7.length_seconds": "282", "rv.0.url": "http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DOF5T_7fDGgw", "rv.0.view_count": "2379471", "rv.2.title": "Banned+Commercials+-+Levis", "rv.7.thumbnailUrl": "http%3A%2F%2Fi3.ytimg.com%2Fvi%2FfbIdXn1zPbA%2Fdefault.jpg", "rv.4.rating": "4.87804878049", "length_seconds": "123", "rv.0.title": "Variety+Sex+%28LGBQT+Part+2%29", "rv.7.title": "Coke_Faithless", "rv.3.view_count": "2210628", "rv.5.title": "Three+sheets+to+the+wind%21", "rv.0.length_seconds": "364", "rv.4.thumbnailUrl": "http%3A%2F%2Fi3.ytimg.com%2Fvi%2F6IjUkNmUcHc%2Fdefault.jpg", "fmt_url_map": "18%7Chttp%3A%2F%2Fv22.lscache3.c.youtube.com%2Fvideoplayback%3Fip%3D0.0.0.0%26sparams%3Did%252Cexpire%252Cip%252Cipbits%252Citag%252Cburst%252Cfactor%26itag%3D18%26ipbits%3D0%26signature%3D41B6B8B8FC0CF235443FC88E667A713A8A407AE7.CF9B5B68E39D488E61FE8B50D3BAEEF48A018A3C%26sver%3D3%26expire%3D1251270000%26key%3Dyt1%26factor%3D1.25%26burst%3D40%26id%3Dda64cb3b617f1116%2C34%7Chttp%3A%2F%2Fv19.lscache3.c.youtube.com%2Fvideoplayback%3Fip%3D0.0.0.0%26sparams%3Did%252Cexpire%252Cip%252Cipbits%252Citag%252Cburst%252Cfactor%26itag%3D34%26ipbits%3D0%26signature%3DB6853342CDC97C85C83A872F9E5F274FE8B7B4A2.2B24E4836216C2F54428509388BC74043DB1782A%26sver%3D3%26expire%3D1251270000%26key%3Dyt1%26factor%3D1.25%26burst%3D40%26id%3Dda64cb3b617f1116%2C5%7Chttp%3A%2F%2Fv17.lscache8.c.youtube.com%2Fvideoplayback%3Fip%3D0.0.0.0%26sparams%3Did%252Cexpire%252Cip%252Cipbits%252Citag%252Cburst%252Cfactor%26itag%3D5%26ipbits%3D0%26signature%3DB84AF2BE4ED222EC0217BA3149456F1164827F0C.1ECC42B7587411B734CC7B37209FDFA9A935391D%26sver%3D3%26expire%3D1251270000%26key%3Dyt1%26factor%3D1.25%26burst%3D40%26id%3Dda64cb3b617f1116", "rv.2.rating": "4.77608082707", "keywords": "the%2Cwind", "cr": "US", "rv.1.url": "http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dmp7g_8rEdg8", "rv.6.thumbnailUrl": 
"http%3A%2F%2Fi1.ytimg.com%2Fvi%2Fx-OqKWXirsU%2Fdefault.jpg", "rv.1.id": "mp7g_8rEdg8", "rv.3.rating": "4.14860864417", "rv.6.title": "best+commercial+ever", "rv.7.id": "fbIdXn1zPbA", "rv.4.url": "http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D6IjUkNmUcHc", "rv.1.title": "Quilmes+comercial", "rv.1.thumbnailUrl": "http%3A%2F%2Fi2.ytimg.com%2Fvi%2Fmp7g_8rEdg8%2Fdefault.jpg", "rv.3.title": "Viagra%21+Best+Commercial%21", "rv.0.rating": "3.79072164948", "watermark": "http%3A%2F%2Fs.ytimg.com%2Fyt%2Fswf%2Flogo-vfl106645.swf%2Chttp%3A%2F%2Fs.ytimg.com%2Fyt%2Fswf%2Fhdlogo-vfl100714.swf", "rv.6.author": "hbfriendsfan", "rv.5.id": "w0BQh-ICflg", "tk": "OK0E3bBTu64aAiJXYl2eScsjwe3ggPK1q1MXf7LPuwIFAjkL2itc1Q%3D%3D", "rv.4.author": "yaquijr", "rv.0.featured": "1", "rv.0.id": "OF5T_7fDGgw", "rv.3.length_seconds": "30", "rv.5.rating": "4.42047930283", "rv.1.view_count": "249202", "sdetail": "p%3Awww.catonmat.net%2Fblog%2Fdownload", "rv.1.author": "yodroopy", "rv.1.rating": "3.66379310345", "rv.4.title": "epuron+-+the+power+of+wind", "rv.5.thumbnailUrl": "http%3A%2F%2Fi4.ytimg.com%2Fvi%2Fw0BQh-ICflg%2Fdefault.jpg", "rv.5.url": "http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dw0BQh-ICflg", "rv.6.length_seconds": "40", "sourceid": "r", "rv.0.author": "kicesie", "rv.3.thumbnailUrl": "http%3A%2F%2Fi4.ytimg.com%2Fvi%2FKShkhIXdf1Y%2Fdefault.jpg", "rv.2.author": "dejerks", "rv.6.url": "http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dx-OqKWXirsU", "rv.7.rating": "4.51851851852", "rv.3.url": "http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DKShkhIXdf1Y", "fmt_map": "18%2F512000%2F9%2F0%2F115%2C34%2F0%2F9%2F0%2F115%2C5%2F0%2F7%2F0%2F0", "hl": "en", "rv.7.url": "http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DfbIdXn1zPbA", "rv.2.view_count": "9744415", "rv.4.length_seconds": "122", "rv.4.view_count": "162653", "rv.2.url": "http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DCSG807d3P-U", "plid": "AARyAMgw_jlMzIA7", "rv.5.length_seconds": "288", "rv.0.thumbnailUrl": 
"http%3A%2F%2Fi4.ytimg.com%2Fvi%2FOF5T_7fDGgw%2Fdefault.jpg", "rv.7.author": "paranoidus", "sk": "I9SvaNetkP1IR2k_kqJzYpB_ItoGOd2GC", "rv.5.view_count": "503035", "rv.1.length_seconds": "61", "rv.6.rating": "4.74616639478", "rv.5.author": "hotforwords", "vq": "None", "rv.3.id": "KShkhIXdf1Y", "rv.2.id": "CSG807d3P-U", "rv.2.length_seconds": "60", "t": "vjVQa1PpcFOeKDyjuF7uICOYYpHLyjaGXsro1Tsfao8%3D", "rv.6.id": "x-OqKWXirsU", "video_id": "2mTLO2F_ERY", "rv.6.view_count": "2778674", "rv.3.author": "stephancelmare360", "rv.4.id": "6IjUkNmUcHc", "rv.7.view_count": "4260"};

We need to extract these two ids and make a request string '?video_id=xh_LmxEuFo8&t=OEgsToPDskKwChZS_16Tu1BqrD4fueoW'. The rest of the article describes the old way YouTube handled videos (before these updates), but the idea is basically the same.

For now, I can tell you that once the video player loads, it gets the FLV (flash video) file from:

http://www.youtube.com/get_video?hl=en&video_id=2mTLO2F_ERY&l=123&t=OEgsToPDskK5DwdDH6isCsg5GtXyGpTN&soff=1&sk=sZLEcvwsRsajGmQF7OqwWAU

Where the query string after 'http://www.youtube.com/get_video' is the same one that followed player2.swf in the previous fragment.

If you now entered this url into a browser, it should pop up a download dialog and you should be able to save the flash movie to your computer. But it's not that easy! YouTube actually 302-redirects you to one or two other urls before the video download starts. So we will have to handle these HTTP redirects in our awk script, because awk does not know anything about the HTTP protocol!

So basically all we have to do is construct an awk script which finds the request string, appends it to 'http://www.youtube.com/get_video', handles the 302 redirects, and finally saves the video data to a file.

Since awk has great pattern matching built in, we can extract the request string by fetching the html source of the video page, searching for a line which contains SWFObject("/player2.swf, and extracting everything after the ? up to the closing ".

So here is the final script. Save it to a file called 'get_youtube_vids.awk' and it can then be used from the command line as follows:

gawk -f get_youtube_vids.awk <http://www.youtube.com/watch?v=ID1> [http://youtube.com/watch?v=ID2 | ID2] ...

For example, to download the commercial which I said was great, you'd call the script as:

gawk -f get_youtube_vids.awk http://www.youtube.com/watch?v=2mTLO2F_ERY

or just using the ID of the video:

gawk -f get_youtube_vids.awk 2mTLO2F_ERY

Here is the source code of the program:

#!/usr/bin/gawk -f
#
# 2007.07.10 v1.0 - initial release
# 2007.10.21 v1.1 - youtube changed the way it displays vids
# 2008.03.01 v1.2 - youtube changed the way it displays vids
# 2008.08.28 v1.3 - added a progress bar and removed need for --re-interval 
# 2009.08.25 v1.4 - youtube changed the way it displays vids
#
# Peteris Krumins (peter@catonmat.net)
# http://www.catonmat.net -- good coders code, great reuse
#
# Usage: gawk -f get_youtube_vids.awk <http://youtube.com/watch?v=ID1 | ID1> ...
# or just ./get_youtube_vids.awk <http://youtube.com/watch?v=ID1 | ID1>
#

BEGIN {
    if (ARGC == 1) usage();

    BINMODE = 3

    delete ARGV[0]
    print "Parsing YouTube video urls/IDs..."
    for (i in ARGV) {
        vid_id = parse_url(ARGV[i])
        if (length(vid_id) < 6) { # havent seen youtube vids with IDs < 6 chars
            print "Invalid YouTube video specified: " ARGV[i] ", not downloading!"
            continue
        }
        VIDS[i] = vid_id
    }

    for (i in VIDS) {
        print "Getting video information for video: " VIDS[i] "..."
        get_vid_info(VIDS[i], INFO)

        if (INFO["_redirected"]) {
            print "Could not get video info for video: " VIDS[i]
            continue 
        }

        if (!INFO["video_url"]) {
            print "Could not get video_url for video: " VIDS[i]
            print "Please go to my website, and submit a comment with an URL to this video, so that I can fix it!"
            print "Url: http://www.catonmat.net/blog/downloading-youtube-videos-with-gawk/"
            continue
        }
        if ("title" in INFO) {
            print "Downloading: " INFO["title"] "..."
            title = INFO["title"]
        }
        else {
            print "Could not get title for video: " VIDS[i]
            print "Trying to download " VIDS[i] " anyway"
            title = VIDS[i]
        }
        download_video(INFO["video_url"], title)
    }
}

function usage() {
    print "Downloading YouTube Videos with GNU Awk"
    print
    print "Peteris Krumins (peter@catonmat.net)"
    print "http://www.catonmat.net  --  good coders code, great reuse"
    print 
    print "Usage: gawk -f get_youtube_vids.awk <http://youtube.com/watch?v=ID1 | ID1> ..."
    print "or just ./get_youtube_vids.awk <http://youtube.com/watch?v=ID1 | ID1> ..."
    exit 1
}

#
# function parse_url
#
# takes a url or an ID of a youtube video and returns just the ID
# for example the url could be the full url: http://www.youtube.com/watch?v=ID
# or it could be www.youtube.com/watch?v=ID
# or just youtube.com/watch?v=ID or http://youtube.com/watch?v=ID
# or just the ID
#
function parse_url(url) {
    gsub(/http:\/\//, "", url)                # get rid of http:// part
    gsub(/www\./,     "", url)                # get rid of www.    part
    gsub(/youtube\.com\/watch\?v=/, "", url)  # get rid of youtube.com... part

    if ((p = index(url, "&")) > 0)      # get rid of &foo=bar&... after the ID
        url = substr(url, 1, p-1)

    return url
}

#
# function get_vid_info
#
# function takes the youtube video ID and gets the title of the video
# and the url to .flv file
#
function get_vid_info(vid_id, INFO,    InetFile, Request, HEADERS, matches, escaped_urls, fmt_urls, fmt) {
    delete INFO
    InetFile = "/inet/tcp/0/www.youtube.com/80"
    Request = "GET /watch?v=" vid_id " HTTP/1.1\r\n"
    Request = Request "Host: www.youtube.com\r\n\r\n"

    get_headers(InetFile, Request, HEADERS)
    if ("Location" in HEADERS) {
        INFO["_redirected"] = 1
        close(InetFile)
        return
    }

    while ((InetFile |& getline) > 0) {
        if (match($0, /"fmt_url_map": "([^"]+)"/, matches)) {
            escaped_urls = url_unescape(matches[1])
            split(escaped_urls, fmt_urls, /,?[0-9]+\|/)
            for (fmt in fmt_urls) {
                if (fmt_urls[fmt] ~ /itag=5/) {
                    # fmt number 5 is the best video
                    INFO["video_url"] = fmt_urls[fmt]
                    close(InetFile)
                    return
                }
            }
            close(InetFile)
            return
        }
        else if (match($0, /<title>YouTube - ([^<]+)</, matches)) {
            # let's try to get the title of the video from the html title tag,
            # which is less likely to be subject to future html design changes
            INFO["title"] = matches[1]
        }
    }
    close(InetFile)
}

#
# function url_unescape
#
# given a string, it url-unescapes it.
# characters such as %20 get converted to their ascii counterparts.
#
function url_unescape(str,    nmatches, entity, entities, seen, i) {
    nmatches = find_all_matches(str, "%[0-9A-Fa-f][0-9A-Fa-f]", entities)
    for (i = 1; i <= nmatches; i++) {
        entity = entities[i]
        if (!seen[entity]) {
            if (entity == "%26") { # special case for gsub(s, r, t), when r = '&'
                gsub(entity, "\\&", str)
            }
            else {
                gsub(entity, url_entity_unescape(entity), str)
            }
            seen[entity] = 1
        }
    }
    return str
}

#
# function find_all_matches
#
# http://awk.freeshell.org/FindAllMatches
#
function find_all_matches(str, re, arr,    j, a, b) {
    j=0
    a = RSTART; b = RLENGTH   # to avoid unexpected side effects

    while (match(str, re) > 0) {
        arr[++j] = substr(str, RSTART, RLENGTH)
        str = substr(str, RSTART+RLENGTH)
    }
    RSTART = a; RLENGTH = b
    return j
}

#
# function url_entity_unescape
#
# given an url-escaped entity, such as %20, return its ascii counterpart.
#
function url_entity_unescape(entity) {
    sub("%", "", entity)
    return sprintf("%c", strtonum("0x" entity))
}

#
# function download_video
#
# takes the url to video and saves the movie to current directory using
# sanitized video title as filename
#
function download_video(url, title,    filename, InetFile, Request, Loop, HEADERS, FOO) {
    title = sanitize_title(title)
    filename = create_filename(title)

    parse_location(url, FOO)
    InetFile = FOO["InetFile"]
    Request  = "GET " FOO["Request"] " HTTP/1.1\r\n"
    Request  = Request "Host: " FOO["Host"] "\r\n\r\n"

    Loop = 0 # make sure we do not get caught in Location: loop
    do {     # we can get more than one redirect, follow them all
        get_headers(InetFile, Request, HEADERS)
        if ("Location" in HEADERS) { # we got redirected, let's follow the link
            close(InetFile)
            parse_location(HEADERS["Location"], FOO)
            InetFile = FOO["InetFile"]
            Request  = "GET " FOO["Request"] " HTTP/1.1\r\n"
            Request  = Request "Host: " FOO["Host"] "\r\n\r\n"
            if (InetFile == "") {
                print "Downloading '" title "' failed, couldn't parse Location header!"
                return
            }
        }
        Loop++
    } while (("Location" in HEADERS) && Loop < 5)

    if (Loop == 5) {
        print "Downloading '" title "' failed, got caught in Location loop!"
        return
    }
    
    print "Saving video to file '" filename "' (size: " bytes_to_human(HEADERS["Content-Length"]) ")..."
    save_file(InetFile, filename, HEADERS)
    close(InetFile)
    print "Successfully downloaded '" title "'!"
}

#
# function sanitize_title
#
# sanitizes the video title, by removing ()'s, replacing spaces with _, etc.
# 
function sanitize_title(title) {
    gsub(/\(|\)/, "", title)
    gsub(/[^[:alnum:]-]/, "_", title)
    gsub(/_-/, "-", title)
    gsub(/-_/, "-", title)
    gsub(/_$/, "", title)
    gsub(/-$/, "", title)
    gsub(/_{2,}/, "_", title)
    gsub(/-{2,}/, "-", title)
    return title
}

#
# function create_filename
#
# given a sanitized video title, creates a nonexisting filename
#
function create_filename(title,    filename, i) {
    filename = title ".flv"
    i = 1
    while (file_exists(filename)) {
        filename = title "-" i ".flv"
        i++
    }
    return filename
}

#
# function save_file
#
# given a special network file and filename reads from network until eof
# and saves the read contents into a file named filename
#
function save_file(Inet, filename, HEADERS,    done, cl, perc, hd, hcl) {
    OLD_RS  = RS
    OLD_ORS = ORS

    ORS = ""

    # clear the file
    print "" > filename

    # here we will do a little hackery to write the downloaded data
    # to file chunk by chunk instead of downloading it all to memory
    # and then writing
    #
    # the idea is to use a regex for the record separator
    # everything that gets matched is stored in RT variable
    # which gets written to disk after each match
    #
    # RS = ".{1,512}" # let's read 512 byte records

    RS = "@" # I replaced the 512-byte block reading with something better.
             # To read blocks I had to force users to specify --re-interval,
             # which made them uncomfortable.
             # I did statistical analysis on YouTube video files and found
             # that the hex value 0x40 ('@') appears pretty often (every
             # 200 bytes or so)!

    cl = HEADERS["Content-Length"]
    hcl = bytes_to_human(cl)
    done = 0
    while ((Inet |& getline) > 0) {
        done += length($0 RT)
        perc = done*100/cl
        hd = bytes_to_human(done)
        printf "Done: %d/%d bytes (%d%%, %s/%s)            \r",
            done, cl, perc, hd, hcl
        print $0 RT >> filename
    }
    printf "Done: %d/%d bytes (%d%%, %s/%s)            \n",
        done, cl, perc, bytes_to_human(done), bytes_to_human(cl)

    RS  = OLD_RS
    ORS = OLD_ORS
}

#
# function get_headers
#
# given a special inet file and the request saves headers in HEADERS array
# special key "_status" can be used to find HTTP response code
# issuing another getline() on inet file would start returning the contents
#
function get_headers(Inet, Request,    HEADERS, matches, OLD_RS) {
    delete HEADERS

    # save global vars
    OLD_RS=RS

    print Request |& Inet

    # get the http status response
    if (Inet |& getline > 0) {
        HEADERS["_status"] = $2
    }
    else {
        print "Failed reading from the net. Quitting!"
        exit 1
    }

    RS="\r\n"
    while ((Inet |& getline) > 0) {
        # we could have used FS=": " to split, but I could not think of a good
        # way to handle header values which contain multiple ": ",
        # so it's better to go with match()
        if (match($0, /([^:]+): (.+)/, matches)) {
            HEADERS[matches[1]] = matches[2]
        }
        else { break }
    }
    RS=OLD_RS
}

#
# function parse_location
#
# given a Location HTTP header value the function constructs a special
# inet file and the request storing them in FOO
#
function parse_location(location, FOO) {
    # location might look like http://cache.googlevideo.com/get_video?video_id=ID
    if (match(location, /http:\/\/([^\/]+)(\/.+)/, matches)) {
        FOO["InetFile"] = "/inet/tcp/0/" matches[1] "/80"
        FOO["Host"]     = matches[1]
        FOO["Request"]  = matches[2]
    }
    else {
        FOO["InetFile"] = ""
        FOO["Host"]     = ""
        FOO["Request"]  = ""
    }
}

# function bytes_to_human
#
# given bytes, converts them to human readable format like 13.2mb
#
function bytes_to_human(bytes,    MAP, map_idx, bytes_copy) {
    MAP[0] = "b"
    MAP[1] = "kb"
    MAP[2] = "mb"
    MAP[3] = "gb"
    MAP[4] = "tb"
   
    map_idx = 0
    bytes_copy = int(bytes)
    while (bytes_copy > 1024) {
        bytes_copy /= 1024
        map_idx++
    }

    if (map_idx > 4)
        return sprintf("%d bytes", bytes)
    else
        return sprintf("%.02f%s", bytes_copy, MAP[map_idx])
}

#
# function file_exists
#
# given a path to file, returns 1 if the file exists, or 0 if it doesn't
#
function file_exists(file,    foo) {
    if ((getline foo <file) >= 0) {
        close(file)
        return 1
    }
    return 0
}

Each function is well documented, so the code should be easy to understand. If you see something that can be improved or optimized, just comment on this page. Also, if you would like me to explain any fragment of the source code in more detail, let me know.

The most interesting function in this script is save_file, which does chunked downloading in a hacky way (see the comments in the source for how).

Download

Download link: gawk youtube video downloader


Comments

August 09, 2007, 07:58

One of the best (or at least brightest) Perl programmers, Randal Schwartz (Merlyn), started as an AWK expert programmer.

I love AWK, Bash and perl with the same intensity!

Keep on writing such interesting articles!
Alberto

August 09, 2007, 08:11

Thanks, Chanio! I will definitely keep writing interesting articles :)

August 29, 2007, 21:12

VERY GOOD

September 04, 2007, 10:34

I am seeing another Torvalds in the making...anyone else also thinks the same way ;)

Nice blog..

September 04, 2007, 16:44

Credence, I am very thankful for your kind comparison! :)

October 19, 2007, 02:58

I have been to your site for just around 30 minutes and you have raised an immense interest in me to learn programming and the basics of internet. I not only see a great programmer in you, but also a great teacher. Hope, I will continue to learn from you and your blogs.

Btw, I already have a query. I've seen tools available to download just the audio from a youtube video, in various formats; but as per your explanation it seems, that the audio is integrated with the video in the .swf file. How can we extract only the audio part and have it converted to a format like mp3?

October 31, 2007, 17:53

i like it?

Werner Permalink
February 21, 2008, 18:20

Hi,

youtube changed it again. To download i made a little change in function get_vid_info

#!/usr/bin/gawk -f
#
# 2007.07.10 v1.0 - initial release
# 2007.10.21 v1.1 - youtube changed the way it displays vids
#
# Peteris Krumins (peter@catonmat.net)
# http://www.catonmat.net - good coders code, great reuse
#
# Usage: gawk --re-interval -f get_youtube_vids.awk  ...

BEGIN {
    if (ARGC == 1) usage();

    if ("fooooo" !~ "o{5}") {
        print "Error: --re-interval option was not specified!"
        print
        usage();
    }

    BINMODE = 3

    delete ARGV[0]
    print "Parsing YouTube video urls/IDs"
    for (i in ARGV) {
        vid_id = parse_url(ARGV[i])
        if (length(vid_id)  ..."
    exit 1
}

#
# function parse_url
#
# takes a url or an ID of a youtube video and returns just the ID
# for example the url could be the full url: http://www.youtube.com/watch?v=ID
# or it could be www.youtube.com/watch?v=ID
# or just youtube.com/watch?v=ID or http://youtube.com/watch?v=ID
# or just the ID
#
function parse_url(url) {
    gsub(/http:\/\//, "", url)                # get rid of http:// part
    gsub(/www\./,     "", url)                # get rid of www.    part
    gsub(/youtube\.com\/watch\?v=/, "", url)  # get rid of youtube.com... part

    if ((p = index(url, "&")) > 0)      # get rid of &foo=bar&... after the ID
        url = substr(url, 1, p)

    return url
}

#
# function get_vid_info
#
# function takes the youtube video ID and gets the title of the video
# and request string to .flv video file
#
function get_vid_info(vid_id, INFO) {
    YouTube = "/inet/tcp/0/www.youtube.com/80"
    Request = "GET /watch?v=" vid_id " HTTP/1.0\r\n\r\n"

    print Request |& YouTube
    while ((YouTube |& getline) > 0) {
        if (match($0, /"video_id":"([^"]+)".+"t":"([^"]+)"/, matches)) {
            # we found the request string
            #
            INFO["request"] = "video_id=" matches[1] "&t=" matches[2]
        }
        else if (match($0, /YouTube - ([^([^ filename

    # here we will do a little hackery to write the downloaded data
    # to file chunk by chunk instead of downloading it all to memory
    # and then writing
    #
    # the idea is to use a regex for the record field seperator
    # everything that gets matched is stored in RT variable
    # which gets written to disk after each match
    #
    RS = ".{1,512}" # let's read 512 byte records

    while ((Inet |& getline) > 0)
        print RT >> filename

    RS  = OLD_RS
    ORS = OLD_ORS
}

#
# function get_headers
#
# given a special inet file and the request saves headers in HEADERS array
# special key "_status" can be used to find HTTP response code
# issuing another getline() on inet file would start returning the contents
#
function get_headers(Inet, Request, HEADERS) {
    # save global vars
    OLD_RS=RS

    print Request |& Inet

    # get the http status response
    if (Inet |& getline > 0) {
        HEADERS["_status"] = $2
    }
    else {
        print "Failed reading from the net. Quitting!"
        exit 1
    }

    RS="\r\n"
    while ((Inet |& getline) > 0) {
        # we could have used FS=": " to split, but i could think of a good
        # way to handle header values which contain multiple ": "
        # so i better go with a match
        if (match($0, /([^:]+): (.+)/, matches)) {
            HEADERS[matches[1]] = matches[2]
        }
        else { break }
    }
    RS=OLD_RS
}

#
# function parse_location
#
# given a Location HTTP header value the function constructs a special
# inet file and the request storing them in FOO
#
function parse_location(location, FOO) {
    # location might look like http://cache.googlevideo.com/get_video?video_id=ID
    if (match(location, /http:\/\/([^\/]+)(\/.+)/, matches)) {
        FOO["InetFile"] = "/inet/tcp/0/" matches[1] "/80"
        FOO["Host"]     = matches[1]
        FOO["Request"]  = matches[2]
    }
    else {
        FOO["InetFile"] = ""
        FOO["Host"]     = ""
        FOO["Request"]  = ""
    }
}

BR,
Werner.

Elmer Fittery Permalink
February 22, 2008, 23:23

With regards to the new code Mr. Werner posted on February 21st, 2008 at 6:20 pm

I think a few lines of code for the function get_vid_info are missing starting with the line:

else if (match($0, /YouTube - ([^([^ filename

Werner Permalink
February 24, 2008, 17:27

There are even more lines missing, sorry. I have no idea why...
However, just take the original code and update the first if statement in get_vid_info with the following:

        if (match($0, /"video_id"[ \t]*:[ \t]*"([^"]+)".+"t"[ \t]*:[ \t]*"([^"]+)"/, matches)) {

BR,
Werner.

February 25, 2008, 20:39

is there just a button that you click and it downloads that program for you? Then you can just download any video from youtube you want??? Please i need this.....

March 02, 2008, 01:09

Elmer Fittery and Werner, I have uploaded a new version of gawk youtube downloader. Thanks for noticing the broken version!

March 16, 2008, 05:50

I wrote a video downloader with a friend in Ruby. YouTube is also handled (18+ urls also). You can check the source for comparison; it's hosted on code.google.com/p/mget
The project site: http://movie-get.org
There are *several* other hosting sites we support, check it out ;)

March 16, 2008, 14:04

That's pretty cool, nice job. I was a little disappointed, though, that you put all the logic in the BEGIN pattern. It seems like you do all the pattern matching in a procedural manner inside the functions themselves rather than using the pattern/action syntax which is a bit more natural for AWK programs.

localhost Permalink
April 22, 2008, 01:25

holy crap this is cool!

this is the only simple youtube dl'er i've found that gives you a human readable filename!

that feature alone makes this one very cool.

the other thing that's nice is gawk is a small package without so many possible config snafus.

python, perl, ruby, etc. are quite large and there's more possibility of some config problem rearing up.

a python dl'er is my 2d fav to this one. but the filename thing is still an annoyance with the python solution i was using.

bsergean Permalink
October 24, 2008, 07:38

Cool post !

I dug into your (nice) code, and came up with another version using curl. The output name is always out.flv though ...

What was interesting was looking at the server headers with curl -i.
The first web page hit is served by Apache, and the videos are served by lighttpd, as they say in the Google video explaining YouTube's guts.

#!/bin/sh

test $# -eq 1 || {
        echo "Usage: mytube.sh <url>"
        exit 1
}
url=$1

# Get video ID.
video_url=`curl -s $url | awk '{ if (match($0, /"video_id": "([^"]+)".+"t": "([^"]+)"/, matches)) { print "video_id=" matches[1] "&t=" matches[2] } }'`

# Download video to output.flv
youtube=http://www.youtube.com
curl -L $youtube/get_video?$video_url -o out.flv
Mars Permalink
December 10, 2008, 06:37

I tried this; it is very wonderful.
I found that some videos are on video.google.com, and this script does not work for those.

James Permalink
June 14, 2009, 18:54

Thank you...

But i found "Zillatube" program download video quickly, and also easy to play those videos too.

found it at http://www.zillatube.com

August 23, 2009, 06:07

It seems to have stopped working, or YouTube may have blocked it. I am getting the messages below. I edited what I pasted below to hide the link I was downloading.

$ gawk -f ./get_youtube_vids.awk http://
Parsing YouTube video urls/IDs...
Getting video information for video: 2sU
Could not get request string for video:

Anybody tried it successfully recently? Thanks.

August 24, 2009, 16:31

Youtube seems to have changed their system at some time last week.
As far as I can tell, there is no more "video_id" in the page, so the script cannot find the video.

August 25, 2009, 17:21

Peter, can you fix the script to work again? It is very useful.

August 25, 2009, 18:11

Atoms, will try to fix it right now. Hang on.

August 26, 2009, 01:08

It's fixed now!

Sujan Permalink
September 27, 2009, 13:59

gawk -f get_youtube_vids.awk http://www.youtube.com/watch?v=317eucCSL3s
Parsing YouTube video urls/IDs...
Getting video information for video: 317eucCSL3s...
Could not get video_url for video: 317eucCSL3s
Please goto my website, and submit a comment with an URL to this video, so that I can fix it!
Url: http://www.catonmat.net/blog/downloading-youtube-videos-with-gawk/

werner Permalink
September 28, 2009, 04:52

$ gawk -f get_youtube_vids-2.awk 317eucCSL3s
Parsing YouTube video urls/IDs...
Getting video information for video: 317eucCSL3s...
Downloading: Paul Young Love of A Common People...
Saving video to file 'Paul_Young_Love_of_A_Common_People-2.flv' (size: 4.71mb)...
Done: 4941449/4941449 bytes (100%, 4.71mb/4.71mb)
Successfully downloaded 'Paul_Young_Love_of_A_Common_People'!

September 28, 2009, 08:16

Sujan, works for me. Perhaps it was a temporary error.

Werner, thanks for catching that it worked. :)

Sujan Permalink
October 04, 2009, 01:52

Thanks!! werner & Peteris

It now works. Yes perhaps that was a temporary error, looks like.

It is a great tool. And Thanks once again to you all.

Regards,
Sujan

AndyJ Permalink
October 05, 2009, 21:51

This won't work for me:
http://www.youtube.com/watch?v=nb1u7wMKywM

Cool utility and good to see another awk power user :-)

Gilles Detillieux Permalink
October 21, 2009, 14:27

I've used this script successfully before, but now it's getting stuck consistently on this one...

$ gawk -f scripts/get_youtube_vids.awk 2lXh2n0aPyw
Parsing YouTube video urls/IDs...
Getting video information for video: 2lXh2n0aPyw...
Could not get video_url for video: 2lXh2n0aPyw
Please goto my website, and submit a comment with an URL to this video, so that I can fix it!
Url: http://www.catonmat.net/blog/downloading-youtube-videos-with-gawk/
$

Any insights? Thanks.

October 23, 2009, 21:20

i'm seeing similar issues as Gilles:

[/home3/epromfou]# ./get_youtube_vids.awk.sh http://www.youtube.com/watch?v=y67HAxbhw48
Parsing YouTube video urls/IDs...
Getting video information for video: y67HAxbhw48...
Could not get video_url for video: y67HAxbhw48
Please goto my website, and submit a comment with an URL to this video, so that I can fix it!
Url: http://www.catonmat.net/blog/downloading-youtube-videos-with-gawk/

Gilles Detillieux Permalink
October 27, 2009, 16:21

It's failing the same way on d2WK44cH2J0, VimkyAWJ0uU and TqI7cGM9mWs (3 different postings of an impressive skipping rope performance). Has YouTube changed the format of the HTML code in which it buries the video URL?

Gorki Permalink
November 06, 2009, 12:56

Every youtube downloader script from your site doesn't extract video titles anymore

rahul kumar Permalink
January 23, 2010, 12:34

Dear Peter,

I've been using this a lot but it has not been able to get the title since the source now has a newline after < title >.

I've made a small change to my file to get it to work with title.

  else if (match($0, /VIDEO_TITLE': '([^']+)'/, matches)) {
            ## lets try to get the title of the video from html tag which is
            ## less likely a subject to future html design changes
            INFO["title"] = matches[1]
            printf " ----> GOT title: %s\n\n", INFO["title"]
        }

Before this I tried catching the text between the h1 tags, but it seems it never gets those lines.

Now it's working fine. Perhaps you could look into this and make a fresh version.

Thanks a lot. rahul.

January 23, 2010, 15:44

rahul kumar, thanks for the fix!

Don Permalink
April 01, 2010, 03:51

I believe YouTube may have changed their site again. The awk script stopped working today for me on any video that I try. For example, http://www.youtube.com/watch?v=KNNWzJpBgAY

April 01, 2010, 12:28

Don, someone will have to fix it and send me a patch. I don't have time to support these scripts anymore. I wrote them for fun, not as serious projects that I wish to support.

April 03, 2010, 22:54

Yes, I see the new page and the script doesn't work anymore.
Peteris, I think the only function we need to change is "get_vid_info", but I've only skimmed the script and right now I don't have time to help you either.
Happy Easter.

rahul kumar Permalink
April 05, 2010, 03:55

oh my apologies, I just read your comment that you won't be supporting this great tool any longer :-(

rahul kumar Permalink
April 05, 2010, 04:41

Youtube has removed the double quotes, so this tiny change works now.

113c113
 	if (match($0, /fmt_url_map=([^"]+)&/, matches)) {

In short: remove the double quotes and the colon, use = instead, and put an ampersand at the end. I will upload the changed file to gist.

http://gist.github.com/356056

re
rahul

p.s. I have run only one test on it, its working, will update if any issues.

rahul kumar Permalink
April 05, 2010, 04:52
if (match($0, /fmt_url_map=([^"]+)&/, matches)) {

Code got eaten in earlier post.
(download gist file posted above)

Peteris, I have always had the progress status reported wrong (OS X). It shows about half the download and finishes at 46 or 50%, even though the file has downloaded completely. I looked at the code but could not figure it out.

It seems the length of the string read is reported wrong?

cheers. rk.

April 05, 2010, 12:34

Download failed for the following link.
Youtube just became smarter! :)

http://www.youtube.com/watch?v=RzTg7YXuy34

shimmen Permalink
May 21, 2010, 20:39

Great, and it worked for me... Is it possible that it works for some videos and not others? I've recently tried similar approaches myself using wget and always failed, even though the technique (if it can be called that) is the same.

swftofla Permalink
May 27, 2010, 12:01

It works for me, great article and thanks to you all for updates.

Yupee Permalink
July 23, 2010, 12:42

[root@snet-ng tmp]# ./get_youtube_vids.awk DItTc223MPM
Parsing YouTube video urls/IDs...
Getting video information for video: DItTc223MPM...
Could not get title for video: DItTc223MPM
Trying to download DItTc223MPM anyway
gawk: ./get_youtube_vids.awk:322: fatal: expression for `|&' redirection has null string value

Glenn Permalink
October 16, 2010, 16:43

Yupee: I get the same error

gloonie@habibi:~$ gawk -f get_youtube_vids.awk h0PcHEdSpnA
Parsing YouTube video urls/IDs...
Getting video information for video: h0PcHEdSpnA...
Could not get title for video: h0PcHEdSpnA
Trying to download h0PcHEdSpnA anyway
gawk: get_youtube_vids.awk:322: fatal: expression for `|&' redirection has null string value

Has anyone found a solution?

Glenn Permalink
October 16, 2010, 16:55

Also, the example file (the "Wind" commercial) failed:

gloonie@habibi:~$ gawk -f get_youtube_vids.awk 2mTLO2F_ERY
Parsing YouTube video urls/IDs...
Getting video information for video: 2mTLO2F_ERY...
Could not get video_url for video: 2mTLO2F_ERY
Please goto my website, and submit a comment with an URL to this video, so that I can fix it!
Url: http://www.catonmat.net/blog/downloading-youtube-videos-with-gawk/

October 17, 2010, 18:10

I fixed the program because I needed it myself. Download the latest version.

Luke Permalink
November 12, 2010, 18:40

Thank you for your program/file. I don't know if it is just my connection, but I don't always get the complete download. I had never seen awk before and cannot determine the effects of a weak connection directly from the code. I tend to believe that my connection is somehow lost, so (Inet |& getline) returns 0 and exits the while loop in the save-file function.

November 08, 2011, 05:38

It would be great if I could get this working!
I used the latest version posted at https://gist.github.com/356056
I hope that's the right one.

~$ gawk -f bin/get_youtube_vids.awk http://www.youtube.com/watch?v=n7gKcfKdeQ4
Parsing YouTube video urls/IDs...
ARGV: n7gKcfKdeQ4...
...........
Getting video information for video: n7gKcfKdeQ4...
Could not get video_url for video: n7gKcfKdeQ4
Please goto my website, and submit a comment with an URL to this video, so that I can fix it!
Url: http://www.catonmat.net/blog/downloading-youtube-videos-with-gawk/

Rakesh Maheshwari Permalink
January 03, 2012, 13:00

Requesting you to please fix the script for the YouTube video below -

# gawk -f get_youtube_vids.awk http://www.youtube.com/watch?v=Uh1k8IcdlSY
Parsing YouTube video urls/IDs...
Getting video information for video: Uh1k8IcdlSY...
Could not get video_url for video: Uh1k8IcdlSY
Please goto my website, and submit a comment with an URL to this video, so that I can fix it!
Url: http://www.catonmat.net/blog/downloading-youtube-videos-with-gawk/
#

Thanks Much!

January 03, 2012, 18:50

I am no longer maintaining this script.

Arnold Robbins Permalink
June 24, 2014, 02:51

Hi. I just tried to use your downloader (awesome program), but I wasn't able to. Here's the URL: http://www.youtube.com/watch?v=0qDqi9mHSyE.

For obvious reasons, I think using gawk is really cool.

Thanks,

Arnold

