One of my developers just sent me some truly incredible stats about Ruby 1.9 and its threading performance.
20 threads * 100,000 iterations
Ruby 1.9 = 1.54 s.
Ruby Enterprise = 3.01 s.
JRuby 1.1.2 = 5.82 s.
Jython 2.2.1 = 11.86 s.
Python 2.5.2 = 12.32 s.
Ruby 1.8.7 = 22.68 s.
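The write-up (linked below) is in Polish, so I can't vouch for exactly what each iteration does, but a bare-bones Python version of that style of test would look roughly like this, assuming each of the 20 threads runs its own 100,000 iterations of trivial work:

```python
# Rough reconstruction of the "20 threads * 100,000 iterations" test.
# The actual per-iteration workload in the linked post may differ;
# here each thread just increments a local counter 100,000 times.
import threading
import time

THREADS = 20
ITERATIONS = 100000

def work():
    total = 0
    for i in xrange(ITERATIONS):
        total += 1

start = time.time()
workers = [threading.Thread(target=work) for _ in xrange(THREADS)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print "%d threads x %d iterations: %.2f s" % (THREADS, ITERATIONS, time.time() - start)
```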
Since our attempt at testing Ruby as a crawler really wasn't all that much slower than Python, it could be really interesting to see what happens with Ruby 1.9.
The blog post about the test (it's in Polish)
Remember the Python crawler NotSleepy built to suck up all your internets and find your affiliate IDs? Well, we kept massaging the code and finally slapped that thing down on a fat pipe. WOW. The stats are rocking now. How about double the throughput!
Latest Stats:
35.6 URLs per second
3.073 Million URLs per day!
What's most promising is that the new fat pipe is still the bottleneck, which means that if anybody really wants to party, all we need to do is lay down some greenbacks and an OC-12 will show us mass terabyte pleasure.
Someone emailed me doubting my crawler could operate at the speeds I posted last week, so here is a video I took this morning. I should have waited a few minutes after launching it before starting the video, as it really starts cranking once all the threads get rocking; you can see that near the end of the video. Also notice my streaming internet radio going in and out thanks to there being no available bandwidth left on my 5 Mbps line.
You can also hear a ticking sound. That is my new 1 TB drive. It makes these weird ticking noises even when it's not in use. It really sounds like the arm hitting something it's not supposed to hit. Hope it's not defective.
Video link
OK, I'm just ecstatic with my new crawler. I think nobody but Google has a better one, and I'm ready for a good old-fashioned show-and-tell. Multi-threaded programming is a bear to deal with, and I've written several crawlers in different languages. For years I've been plagued by several complex problems:
* Complex code that is difficult to maintain and difficult to set up on a server
* Memory leakage
* Configurability
So the latest design is just 192 lines of Python in a single file, has a single configuration file, and takes about 5 minutes to set up on a standard Linux machine (a stripped-down sketch of the general approach follows the stats below). I ran it last night and was delighted with the results:
Test Run
Tested 139,740 URLs
Completed in 2 hrs, 13 mins
3.6 GB of html
Average filesize: 25.05 KB
Averaging
18.2 URLs/second
1.572 million URLs/day
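Quick sanity check on that math: the per-day figure is just the per-second rate times the 86,400 seconds in a day.

```python
>>> 18.2 * 86400      # URLs/second * seconds per day
1572480.0             # ~1.572 million URLs/day
```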
Hardware and Environment
3-year-old Dell PowerEdge SC240
Pentium 4
3.5 GB of RAM
Average CPU load: 0.16
Average physical RAM used: 950 MB
OS: Ubuntu 7.10 (Gutsy Gibbon)
Filesystem: ReiserFS 3
Network connection:
Residential cable modem, 5 Mbps down (100% of which is consumed when it's running, so it would likely be faster on a fatter pipe)
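I'm not going to paste the whole 192-line file here, but the general shape is a pool of worker threads pulling URLs off a shared queue, fetching each page, and dumping the HTML to disk, with the knobs read from the one config file. Here's a stripped-down sketch of that approach; the file name, section, and option names below are invented for illustration, and the real crawler obviously does more (error handling, politeness, logging, etc.):

```python
# Stripped-down sketch only -- NOT the actual 192-line crawler.
# The config file name, section, and option names are invented for illustration.
import ConfigParser
import Queue
import os
import socket
import threading
import urllib2

config = ConfigParser.ConfigParser()
config.read('crawler.conf')                       # e.g. [crawler] threads=40, timeout=10, ...
NUM_THREADS = config.getint('crawler', 'threads')
OUT_DIR     = config.get('crawler', 'output_dir')
URL_LIST    = config.get('crawler', 'url_list')
socket.setdefaulttimeout(config.getint('crawler', 'timeout'))

url_queue = Queue.Queue()

def worker():
    while True:
        url = url_queue.get()
        try:
            html = urllib2.urlopen(url).read()
            # crude one-file-per-URL storage; a real crawler wants something smarter
            path = os.path.join(OUT_DIR, url.replace('/', '_'))
            f = open(path, 'wb')
            f.write(html)
            f.close()
        except Exception:
            pass                                  # skip failures and keep crawling
        url_queue.task_done()

for _ in xrange(NUM_THREADS):
    t = threading.Thread(target=worker)
    t.setDaemon(True)                             # don't block interpreter exit
    t.start()

for line in open(URL_LIST):
    url_queue.put(line.strip())

url_queue.join()                                  # wait until every URL has been handled
```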
Even better, this code is easily extensible. We'll spread it across as many machines as necessary to download the entire internet.
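If we ever do that, the simplest way to split the work would probably be hashing each URL's host so every box only crawls its own slice. Something like this toy example (the machine count and ID are placeholders, not anything we've actually wired up):

```python
# Toy example of splitting the URL space across boxes by hashing the host.
# NUM_MACHINES and MACHINE_ID are placeholders; set MACHINE_ID differently per box.
import urlparse
import zlib

NUM_MACHINES = 4
MACHINE_ID = 0    # 0 .. NUM_MACHINES - 1

def is_mine(url):
    host = urlparse.urlsplit(url)[1]              # the host[:port] part of the URL
    return zlib.crc32(host) % NUM_MACHINES == MACHINE_ID

urls = ['http://example.com/a', 'http://example.org/b', 'http://example.net/c']
print [u for u in urls if is_mine(u)]
```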
Big SEOs with crawlers… what are your stats?