reddit river: what flows online!

On my way back home from university (30+ minutes) I love to read news on my favorite social news site, reddit.com. A few weeks ago I saw an 'Ask Reddit' post asking whether we could get a reddit version for mobile phones. Well, I thought, it's a cool project and I can do it quickly.

While scanning through the comments of the 'Ask Reddit' post, I noticed davidlvann's comment, which said that Digg already had an almost plain-text version of Digg, called DiggRiver.com.

It didn't take me long to do a

$ whois redditriver.com
No match for "REDDITRIVER.COM".

to find that the domain RedditRiver.com was not registered! What a great name for a project! I quickly mailed my friend Alexis [kn0thing] Ohanian at Reddit (check out his alien blog) to ask for permission to do the Reddit River project. Sure enough, he registered the domain for me and I was free to make it happen!

I'll describe how I made the site, and I'll release the full source code.

Update: The project is now live!

Update: Full source code is now available! It includes all the scripts mentioned here!

Download full redditriver.com source code (downloaded 7165 times)

My language of choice for this project is Python, the same language reddit.com is written in.

This is actually the first real project I have done in Python (I'm a big Perl fan). I have a good overall understanding of Python, but I had never done a project in it from the ground up! Before starting, I watched a few Python video lectures and read a bunch of articles to get into the mindset of a Pythonista.

Design Stages of RedditRiver.com

The main goal of the project was to create a very lightweight version of reddit which would monitor story changes (as stories get voted up and down) across several pages of the most popular subreddits, and which would find mobile versions of the posted stories. By that I mean rewriting URLs: for example, a link to The Washington Post gets rewritten to the print version of the same article, a link to youtube.com gets rewritten to its mobile version, m.youtube.com, and so on.

The project was done in several separate steps.

  • First, I set up the web server to handle Python applications,
  • Then I created a few Python modules to extract content from the Reddit website,
  • Next, I created an SQLite database and wrote a few scripts to save the extracted data,
  • Then I wrote a Python module to discover mobile versions of given web pages,
  • Finally, I created the web.py application to handle requests to RedditRiver.com!

Setting up the Web Server

I am very lucky to have a full dedicated server sponsored by ZigZap - We Are Tech (I seriously recommend them if you are looking for great hosting!). Being an experienced Linux user, I asked them for a pure Linux server with no software or control panels pre-installed, and that's exactly what I got! Thanks, ZigZap! :)

I already run this blog and picurls.com on the server, and I had chosen the lighttpd web server and the PHP programming language for those two projects. To get RedditRiver running, I had to add Python support to the web server.

I decided to use the web.py web framework to serve the HTML content because of its simplicity, and because the Reddit guys used it themselves after rewriting Reddit from Lisp to Python.

Following the install instructions, getting web.py running on the server was as simple as installing the web.py package!

It was also just as easy to get the lighttpd web server to communicate with web.py and my application. This required installing the flup package, which lets lighttpd talk to web.py over FastCGI.
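To give an idea of what a web.py application looks like, here is a bare-bones sketch in the old 0.2x style used at the time (just an illustration, not the redditriver code); as far as I remember, web.run() figures out by itself whether to serve through the built-in server, CGI or FastCGI (via flup), so the same script works from the command line and behind lighttpd:

# hello.py - a minimal web.py 0.2x-style application (sketch only)
import web

urls = (
    '/(.*)', 'hello'     # map every path to the 'hello' class below
)

class hello:
    def GET(self, name):
        # print the response body (old-style web.py handler)
        print 'Hello,', name or 'world'

if __name__ == '__main__':
    web.run(urls, globals())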

Update: after setting it all up and experimenting a bit with web.py (version 0.23) and Cheetah templates, I found that for some mysterious reason web.py did not handle the "#include" statements in the templates. The problem was in web.py's 'cheetah.py' file, line 23, where it compiles the regular expression for handling "#include" statements:

r_include = re_compile(r'(?!\\)#include \"(.*?)\"($|#)', re.M)

When I tested it out in the interpreter,

>>> r_include = re.compile(r'(?!\\)#include \"(.*?)\"($|#)', re.M)
>>> r_include.search('#include "foo"').groups()
('foo', '')
>>> r_include.search('foo\n#include "bar.html"\nbaz').groups()
('bar.html', '')

it found #include's across multi-line text just fine, but it did not work with my template files. I tested it like five times and just couldn't figure out why it was not working.

As RedditRiver is the only web.py application running on my server, I simply patched that regex on line 23 to something trivial, and it all started working! I dropped all the negative lookahead magic and the check for the end of the line:

r_include = re_compile(r'#include "(.*?)"', re.M)

As I said, I am not sure why the original regex did not work in the web.py application, but did work in the interpreter. If anyone knows what happened, I will be glad to hear from you! :)
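Update: Ralph Corderoy's comment at the bottom of this page suggests the likely explanation: my template files had DOS (CRLF) line endings, and the carriage return sitting between the closing double-quote and the newline keeps the '$' anchor from matching. It is easy to reproduce in the interpreter:

>>> import re
>>> r_include = re.compile(r'(?!\\)#include \"(.*?)\"($|#)', re.M)
>>> r_include.search('#include "common.header.tpl.html"\n') is None
False
>>> r_include.search('#include "common.header.tpl.html"\r\n') is None
True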

Accessing Reddit Website via Python

I wrote several Python modules (which also work as executables) to access information on Reddit: stories across multiple pages of various subreddits (and the front page), and the user-created subreddits.

As Reddit still does not provide an API to access the information on their site, I had to extract the relevant information from the HTML content of the pages.

The first module I wrote is called 'subreddits.py'. It accesses http://reddit.com/reddits and returns (or prints out, if used as an executable) the list of the most popular subreddits (a subreddit is a reddit for a specific topic, for example, programming or politics).

Get this program here: subreddit extractor (redditriver.com project) (downloaded: 5240 times).

This module provides three useful functions:

  • get_subreddits(pages=1, new=False), which gets 'pages' pages of subreddits and returns them as a list of dictionaries. If new is True, it gets 'pages' pages of new subreddits (http://reddit.com/reddits/new),
  • print_subreddits_paragraph(), which prints the subreddit information in a human-readable format, and
  • print_subreddits_json(), which prints it in JSON format. The output is in UTF-8 encoding.

The way this module works can be seen from the Python interpreter right away:

>>> import subreddits
>>> srs = subreddits.get_subreddits(pages=2)
>>> len(srs)
50
>>> srs[:5]
[{'position': 1, 'description': '', 'name': 'reddit.com', 'subscribers': 11031, 'reddit_name': 'reddit.com'}, {'position': 2, 'description': '', 'name': 'politics', 'subscribers': 5667, 'reddit_name': 'politics'}, {'position': 3, 'description': '', 'name': 'programming', 'subscribers': 9386, 'reddit_name': 'programming'}, {'position': 4, 'description': 'Yeah reddit, you finally got it. Context appreciated.', 'name': 'Pictures and Images', 'subscribers': 4198, 'reddit_name': 'pics'}, {'position': 5, 'description': '', 'name': 'obama', 'subscribers': 651, 'reddit_name': 'obama'}]
>>>
>>> from pprint import pprint
>>> pprint(srs[3:5])
[{'description': 'Yeah reddit, you finally got it. Context appreciated.',
  'name': 'Pictures and Images',
  'reddit_name': 'pics',
  'subscribers': 4198},
 {'description': '',
  'name': 'obama',
  'reddit_name': 'obama',
  'subscribers': 651}]
>>>
>>> subreddits.print_subreddits_paragraph(srs[3:5])
position: 4
name: Pictures and Images
reddit_name: pics
description: Yeah reddit, you finally got it. Context appreciated.
subscribers: 4198

position: 5
name: obama
reddit_name: obama
description:
subscribers: 651
>>>
>>> subreddits.print_subreddits_json(srs[3:5])
[
    {
        "position": 4,
        "description": "Yeah reddit, you finally got it. Context appreciated.",
        "name": "Pictures and Images",
        "subscribers": 4198,
        "reddit_name": "pics"
    },
    {
        "position": 4,
        "description": "",
        "name": "obama",
        "subscribers": 651,
        "reddit_name": "obama"
    }
]

Or it can be called from the command line:

$ ./subreddits.py --help
usage: subreddits.py [options]

options:
  -h, --help  show this help message and exit
  -oOUTPUT    Output format: paragraph or json. Default: paragraph.
  -pPAGES     How many pages of subreddits to output. Default: 1.
  -n          Retrieve new subreddits. Default: nope.

This module reuses the awesome BeautifulSoup HTML parser module and the simplejson JSON encoding module.
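The scraping itself follows the usual BeautifulSoup pattern: download the page, parse it, and walk the tags. Here is a rough sketch of the approach (the class name below is made up for illustration; the real markup of http://reddit.com/reddits differs, and subreddits.py also has to handle positions, descriptions, subscriber counts and paging):

# illustrative sketch of the scraping approach; 'subreddit-title' is a
# hypothetical class name, not the real reddit markup
import urllib2
from BeautifulSoup import BeautifulSoup

def scrape_subreddit_names(url='http://reddit.com/reddits'):
    html = urllib2.urlopen(url).read()
    soup = BeautifulSoup(html)
    names = []
    for link in soup.findAll('a', {'class': 'subreddit-title'}):
        names.append({
            'name': link.string,
            'reddit_name': link['href'].rstrip('/').split('/')[-1]
        })
    return names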

The second program I wrote is called 'redditstories.py'. It accesses the specified subreddit and gets the latest stories from it. It is written pretty much the same way I did it for the redditmedia project in Perl.

Get this program here: reddit stories extractor (redditriver.com project) (downloaded: 3437 times).

This module also provides three similar functions:

  • get_stories(subreddit='front_page', pages=1, new=False), which gets 'pages' pages of stories from the given subreddit and returns them as a list of dictionaries. If new is True, it gets only new stories,
  • print_stories_paragraph(), which prints the story information in a human-readable format, and
  • print_stories_json(), which prints it in JSON format. The output is in UTF-8 encoding.

It can also be used as a Python module or executable.

Here is an example of using it as a module:

>>> import redditstories
>>> s = redditstories.get_stories(subreddit='programming')
>>> len(s)
25
>>> s[2:4]
[{'title': "when customers don't pay attention and reply to a "donotreply.com" email address, it goes to Chet Faliszek, a programmer in Seattle", 'url': 'http://consumerist.com/371600/the-man-who-owns-donotreplycom-knows-all-the-secrets-of-the-world', 'unix_time': 1206408743, 'comments': 54, 'subreddit': 'programming', 'score': 210, 'user': 'srmjjg', 'position': 3, 'human_time': 'Tue Mar 25 03:32:23 2008', 'id': '6d8xl'}, {'title': 'mysql --i-am-a-dummy', 'url': 'http://dev.mysql.com/doc/refman/4.1/en/mysql-tips.html#safe-updates', 'unix_time': 1206419543, 'comments': 59, 'subreddit': 'programming', 'score': 135, 'user': 'enobrev', 'position': 4, 'human_time': 'Tue Mar 25 06:32:23 2008', 'id': '6d9d3'}]
>>> from pprint import pprint
>>> pprint(s[2:4])
[{'comments': 54,
  'human_time': 'Tue Mar 25 03:32:23 2008',
  'id': '6d8xl',
  'position': 3,
  'score': 210,
  'subreddit': 'programming',
  'title': "when customers don't pay attention and reply to a "donotreply.com" email address, it goes to Chet Faliszek, a programmer in Seattle",
  'unix_time': 1206408743,
  'url': 'http://consumerist.com/371600/the-man-who-owns-donotreplycom-knows-all-the-secrets-of-the-world',
  'user': 'srmjjg'},
 {'comments': 59,
  'human_time': 'Tue Mar 25 06:32:23 2008',
  'id': '6d9d3',
  'position': 4,
  'score': 135,
  'subreddit': 'programming',
  'title': 'mysql --i-am-a-dummy',
  'unix_time': 1206419543,
  'url': 'http://dev.mysql.com/doc/refman/4.1/en/mysql-tips.html#safe-updates',
  'user': 'enobrev'}]
>>> redditstories.print_stories_paragraph(s[:1])
position: 1
subreddit: programming
id: 6daps
title: Sign Up Forms Must Die
url: http://www.alistapart.com/articles/signupforms
score: 70
comments: 43
user: markokocic
unix_time: 1206451943
human_time: Tue Mar 25 15:32:23 2008

>>> redditstories.print_stories_json(s[:1])
[
    {
        "title": "Sign Up Forms Must Die",
        "url": "http:\/\/www.alistapart.com\/articles\/signupforms",
        "unix_time": 1206451943,
        "comments": 43,
        "subreddit": "programming",
        "score": 70,
        "user": "markokocic",
        "position": 1,
        "human_time": "Tue Mar 25 15:32:23 2008",
        "id": "6daps"
    }
]

Using it from the command line:

$ ./redditstories.py --help
usage: redditstories.py [options]

options:
  -h, --help   show this help message and exit
  -oOUTPUT     Output format: paragraph or json. Default: paragraph.
  -pPAGES      How many pages of stories to output. Default: 1.
  -sSUBREDDIT  Subreddit to retrieve stories from. Default:
               reddit.com.
  -n           Retrieve new stories. Default: nope.

These two programs just beg to be converted into a single Python module. They share the same logic with just a few changes in the parser. But for the moment I am happy with them, and they do the job well. They can also be understood individually without having to inspect several source files.

I think one of my future posts could be a reddit information access library in Python.

I can already think of a hundred ideas for what someone could do with such a library. For example, one could print out the top programming stories in his or her shell:

$ echo "Top five programming stories:" && echo && ./redditstories.py -s programming | grep 'title' | head -5 && echo && echo "Visit http://reddit.com/r/programming to view them!"

Top five programming stories:

title: Sign Up Forms Must Die
title: You can pry XP from my cold dead hands!
title: mysql --i-am-a-dummy
title: when customers don't pay attention and reply to a "donotreply.com" email address, it goes to Chet Faliszek, a programmer in Seattle
title: Another canvas 3D Renderer written in Javascript

Visit http://reddit.com/r/programming to view them!

Creating and Populating the SQLite Database

The database choice for this project is SQLite, as it is fast and light, and the project is so simple that I can't think of any reason to use a more complicated database system.

The database has a trivial structure with just two tables, 'subreddits' and 'stories'.

CREATE TABLE subreddits (
  id           INTEGER  PRIMARY KEY  AUTOINCREMENT,
  reddit_name  TEXT     NOT NULL     UNIQUE,
  name         TEXT     NOT NULL     UNIQUE,
  description  TEXT,
  subscribers  INTEGER  NOT NULL,
  position     INTEGER  NOT NULL,
  active       BOOL     NOT NULL     DEFAULT 1
);

INSERT INTO subreddits (id, reddit_name, name, description, subscribers, position) VALUES (0, 'front_page', 'reddit.com front page', 'since subreddit named reddit.com has different content than the reddit.com frontpage, we need this', 0, 0);

CREATE TABLE stories (
  id            INTEGER    PRIMARY KEY  AUTOINCREMENT,
  title         TEXT       NOT NULL,
  url           TEXT       NOT NULL,
  url_mobile    TEXT,
  reddit_id     TEXT       NOT NULL,
  subreddit_id  INTEGER    NOT NULL,
  score         INTEGER    NOT NULL,
  comments      INTEGER    NOT NULL,
  user          TEXT       NOT NULL,
  position      INTEGER    NOT NULL,
  date_reddit   UNIX_DATE  NOT NULL,
  date_added    UNIX_DATE  NOT NULL
);

CREATE UNIQUE INDEX idx_unique_stories ON stories (title, url, subreddit_id);

The 'subreddits' table contains the information extracted by the 'subreddits.py' module (described earlier). It keeps the information and positions of all the subreddits that have appeared on the most popular subreddits page (http://reddit.com/reddits).

Reddit lists 'reddit.com' as a separate subreddit on the most popular subreddits page, but it turned out that it is not the same as the front page of reddit! That's why, right after creating the table, I insert a fake subreddit called 'front_page', so I can keep track of both the 'reddit.com' subreddit and reddit's front page.

The information in the table is updated by a new program - update_subreddits.py.

View: subreddit table updater (redditriver.com project) (downloaded: 2299 times)

The other table, 'stories', contains the information extracted by the 'redditstories.py' module (also described earlier).

The information in this table is updated by another new program - update_stories.py.

As it is impossible to keep track of all the score, comment and position changes across all the subreddits, the program monitors just a few pages on each of the most popular subreddits.

View: story table updater (redditriver.com project) (downloaded: 2248 times)

These two programs are run periodically by crontab (the Unix task scheduler): update_subreddits.py runs every 30 minutes and update_stories.py every 5 minutes.
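To give an idea of what update_stories.py has to do (this is a hedged sketch, not the actual code), each scraped story either gets inserted, or, if the unique index on (title, url, subreddit_id) says it is already there, has its score, comments and position refreshed:

# illustrative sketch of saving one scraped story; the dictionary keys
# match the output of redditstories.py, the rest is made up
import sqlite3
import time

def save_story(conn, story, subreddit_id):
    try:
        conn.execute("""INSERT INTO stories (title, url, reddit_id,
                            subreddit_id, score, comments, user, position,
                            date_reddit, date_added)
                        VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
                     (story['title'], story['url'], story['id'], subreddit_id,
                      story['score'], story['comments'], story['user'],
                      story['position'], story['unix_time'], int(time.time())))
    except sqlite3.IntegrityError:
        # the story is already in the table; just refresh the changing fields
        conn.execute("""UPDATE stories SET score = ?, comments = ?, position = ?
                        WHERE title = ? AND url = ? AND subreddit_id = ?""",
                     (story['score'], story['comments'], story['position'],
                      story['title'], story['url'], subreddit_id))
    conn.commit()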

Finding the Mobile Versions of Given Websites

This is probably the most interesting piece of software that I wrote for this project. The idea is to find versions of a website suitable for viewing on a mobile device.

For example, most of the stories on the politics subreddit link to the largest online newspapers and news agencies, such as The Washington Post or MSNBC. These websites provide a 'print' version of the page, which is ideally suited for mobile devices.

Another example is websites that have designed a real mobile version of their pages and let the user agent know about it by placing a <link rel="alternate" media="handheld" href="..."> tag in the head section of the HTML document.

I wrote an 'autodiscovery' Python module called 'autodiscovery.py'. This module is used by the update_stories.py program described in the previous section. After getting the list of new reddit stories, update_stories.py tries to autodiscover a mobile version of each story, and if it succeeds, it puts the mobile URL in the 'url_mobile' column of the 'stories' table.

Here is an example run of the module from the Python interpreter:

>>> from autodiscovery import AutoDiscovery
>>> ad = AutoDiscovery()
>>> ad.autodiscover('http://www.washingtonpost.com/wp-dyn/content/article/2008/03/24/AR2008032402969.html')
'http://www.washingtonpost.com/wp-dyn/content/article/2008/03/24/AR2008032402969_pf.html'
>>> ad.autodiscover('http://www.msnbc.msn.com/id/11880954/')
'http://www.msnbc.msn.com/id/11880954/print/1/displaymode/1098/'

And it can also be used from the command line:

$ ./autodiscovery.py http://www.washingtonpost.com/wp-dyn/content/article/2008/03/24/AR2008032402969.html
http://www.washingtonpost.com/wp-dyn/content/article/2008/03/24/AR2008032402969_pf.html

Source: mobile webpage version autodiscovery (redditriver.com project) (downloaded 3946 times)

This module uses a configuration file, 'autodisc.conf', which defines patterns to look for in a web page's HTML code. At the moment the config file is pretty primitive and defines just three configuration options:

  • REWRITE_URL defines a rule for rewriting the URL of a website that makes it hard to autodiscover the mobile link. For example, a page could use JavaScript to pop up the print version of the page. In such a case a REWRITE_URL rule can be used to match the host that uses this technique and rewrite part of the URL.
  • PRINT_LINK defines what a print link might look like. For example, it could say 'print this page' or 'print this article'. This directive lists such phrases to look for.
  • IGNORE_URL defines URLs to ignore. For example, a link to a Flash animation should definitely be ignored, as it has no mobile version at all. You can place the .swf extension in this ignore list to keep such files from being downloaded by autodiscovery.py.

Configuration used by autodiscovery.py: autodiscovery configuration (redditriver.com project) (downloaded 3881 times)
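For the curious, the simplest case, the <link rel="alternate" media="handheld"> tag mentioned above, boils down to something like this (a sketch of the idea, not the actual autodiscovery.py, which also applies the PRINT_LINK, REWRITE_URL and IGNORE_URL rules):

# illustrative sketch of the handheld <link> case only
import urllib2
import urlparse
from BeautifulSoup import BeautifulSoup

def find_handheld_version(url):
    html = urllib2.urlopen(url).read()
    soup = BeautifulSoup(html)
    link = soup.find('link', rel='alternate', media='handheld')
    if link and link.get('href'):
        # the href may be relative, so make it absolute
        return urlparse.urljoin(url, link['href'])
    return None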

Creating the web.py Application

The final part of the project was creating the web.py application.

It was pretty straightforward to create, as it only required writing the correct SQL expressions for selecting the right data out of the database.

Here is what the controller for the web.py application looks like:

urls = (
    '/',                                 'RedditRiver',
    '/page/(\d+)/?',                     'RedditRiverPage',
    '/r/([a-zA-Z0-9_.-]+)/?',            'SubRedditRiver',
    '/r/([a-zA-Z0-9_.-]+)/page/(\d+)/?', 'SubRedditRiverPage',
    '/reddits/?',                        'SubReddits',
    '/stats/?',                          'Stats',
    '/stats/([a-zA-Z0-9_.-]+)/?',        'SubStats',
    '/about/?',                          'AboutRiver'
)

The first version of reddit river implements browsable front-page stories (the RedditRiver and RedditRiverPage classes), browsable subreddit stories (the SubRedditRiver and SubRedditRiverPage classes), a list of the most popular subreddits (the SubReddits class), front-page and subreddit statistics, that is, the most popular stories and most active users (the Stats and SubStats classes), and an about page (the AboutRiver class).
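To give an idea of what these classes do, here is a hedged sketch of a front-page handler (not the actual redditriver code): it runs one SELECT against the 'stories' table, using id 0 of the fake 'front_page' subreddit, and would normally hand the rows to a Cheetah template; in the sketch the template step is replaced by plain printing and the database filename is made up:

# illustrative sketch of a front-page handler class
import sqlite3

class RedditRiver:
    def GET(self):
        conn = sqlite3.connect('redditriver.db')   # filename is hypothetical
        conn.row_factory = sqlite3.Row
        rows = conn.execute("""SELECT title, url, url_mobile, score, comments, user
                               FROM stories
                               WHERE subreddit_id = 0
                               ORDER BY position
                               LIMIT 25""").fetchall()
        conn.close()
        # the real application renders these rows with a Cheetah template
        for row in rows:
            print '%s (%d points, %d comments)' % \
                  (row['title'], row['score'], row['comments'])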

The source code: web.py application (redditriver.com project) (downloaded: 3946 times)

Release

I have put it online! Click redditriver.com to visit the site.

I have also released the source code. Here are all the files mentioned in the article, and a link to the whole website package.

Download Programs which Made Reddit River Possible

All the programs in a single .zip:
Download link: full redditriver.com source code
Downloaded: 7165 times

Individual scripts:

Download link: subreddit extractor (redditriver.com project)
Downloaded: 5240 times

Download link: reddit stories extractor (redditriver.com project)
Downloaded: 3437 times

Download link: subreddit table updater (redditriver.com project)
Downloaded: 2299 times

Download link: story table updater (redditriver.com project)
Downloaded: 2248 times

Download link: mobile webpage version autodiscovery (redditriver.com project)
Downloaded: 3946 times

Download link: autodiscovery configuration (redditriver.com project)
Downloaded: 3881 times

Download link: web.py application (redditriver.com project)
Downloaded: 2884 times

All these programs are released under the GNU GPL license, so you may derive your own stuff from them, but do not forget to share your derivative work with everyone!

Alexis recently sent me a reddit t-shirt for doing the redditmedia project, so I decided to take a few photos wearing it :)

Photo: Peteris Krumins loves reddit.

Have fun, and I hope to hear a lot of positive feedback on the redditriver project :)

Comments

March 26, 2008, 02:51

Great work, I especially love the fact that you try to automatically direct users to the print/mobile version of the remote site.

I had wanted to do something like this myself, so thanks for killing a project of mine ^_^.

Also, while typing I just realized that you're the same developer behind the Digg picture website. All credit to you, keep up the good (and very fast) work.

March 26, 2008, 03:06

Great work, will definitely be using this next time I am out and about.

Rodg
March 26, 2008, 03:30

Nice work. I have also been accessing a mobile version of reddit here:

http://m.phonefavs.com/reddit.com/.rss

The site takes reddit's rss feed through a mobile transcoder so all the links are made mobile.

idonthack
March 26, 2008, 04:11

looking through your autodiscover.py script, i did not see any attempts to search the page's header for a reference to a separate mobile stylesheet with link tags, or a "mobileoptimized" metatag as recognized by microsoft's mobile browser, both of which could be used to determine if the page actually needs autodiscovery

http://dev.mobi/node/403 - syntax for defining mobile and print stylesheets in html

http://msdn2.microsoft.com/en-us/library/bb431690.aspx - msdn page describing rendering modes of microsoft's mobile browser and the mobileoptimized metatag

idonthack
March 26, 2008, 04:13

oops. disregard that, i suck cocks.

i just didn't look very hard

ryan
March 26, 2008, 04:53

idonthack:

Firstly, let me say that I agree with your second comment!

Secondly, MSDN? Microsoft mobile browser? What ARE you talking about? Who uses that crap?

March 26, 2008, 05:27

Shouldn't the website use an infinite scroll if it's called *river.com...?

March 26, 2008, 05:34

Braydon, wow, what a fantastic idea! I'll work on it and see if I can get it done easily! Genius!

March 26, 2008, 10:01

Wonderful work Peteris, just wonderful. I like the way you stuffed things up there, self redirection and many other features.

Maybe do a skinning plugin/sub-application for iPhone users or something?

I really wish you a big gift from the reddit.com people; let's see if they will get you the first Lamborghini.

serkan
March 26, 2008, 10:26

I didn't read the whole article, sorry, but I use diggriver all the time and, well digg is going to crap these days so i've switched to reddit.

thank you so much

Cian
March 26, 2008, 20:44

Oops, maybe http://pypi.pyhthon.org/pypi/flup/1.0 should be http://pypi.python.org/pypi/flup/1.0

Really interesting article. Thanks
C

April 22, 2008, 17:45

Nice work! I know that Google has some service that converts regular pages to mobile versions of them, so you can use it when the page doesn't have a mobile version of its own.

Ralph Corderoy
July 13, 2008, 13:14

WRT the #include regexp, perhaps your template had whitespace after the closing double-quote, or an ASCII CR? We'd need a `grep '#include' template | od -c' to diagnose further.

Ah, OK, having got the ZIP'd source I see you've ASCII CRs.

$ g '#include' * | cat -A
about.tpl.html:#include "common.header.tpl.html"^M$
March 17, 2010, 13:37

Is the site discontinued or temporarily offline?

