
Ask HN: Has anyone ever crawled over a billion pages? How much did it cost? - outpan
I'm really curious to find out how much it'll cost to crawl a billion pages. Doesn't really matter if you used a SaaS solution or built your own crawler; any info would be really useful.
======
mtmail
There's a discussion about a 2 billion page crawl on the frontpage right now.
[https://news.ycombinator.com/item?id=12486631](https://news.ycombinator.com/item?id=12486631)

Here's the author's comment on the hardware:
[https://news.ycombinator.com/item?id=12487003](https://news.ycombinator.com/item?id=12487003)
Later in the thread he says it costs 300 Euro/month to run the service.

~~~
outpan
That post is what triggered my Ask post.

The problem is the huge contrast with [https://www.quora.com/How-much-would-it-cost-to-crawl-1-bill...](https://www.quora.com/How-much-would-it-cost-to-crawl-1-billion-sites-using-rented-AWS-servers-bandwidth?share=1)

Even taking into account the drop in AWS prices since that answer was written, the gap is huge. Also, if you take a quick look at companies that provide such services, their prices are orders of magnitude higher than DeuSu's costs.

~~~
mtmail
DeuSu's crawl servers are located at [https://www.hosteurope.de/en/Server/Root-Server/](https://www.hosteurope.de/en/Server/Root-Server/), while the website points to his home broadband ISP. Two servers at his specs would be 200 Euro/month total, with 5x more bandwidth than he currently uses. I'd say that's much cheaper than AWS. Of course crawl companies charge more: they run a business, pay system administrators, and have more backup and redundancy.

~~~
outpan
I'm not sure how he manages to crawl at this speed with so few resources.

We benchmarked Nutch and couldn't really get past 10-14 M(B)ps on a $1200/month machine, even though we hired a professional to optimize the setup. The same is roughly true of Heritrix.

Just wondering if there is something missing in his setup, such as per-domain/IP rate limiting (a rough sketch of what I mean follows).
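
To illustrate what per-domain rate limiting looks like, here's a minimal, hypothetical in-process sketch. A real distributed crawler would keep this state somewhere shared like Redis; the class and parameter names here are mine, not from any of the crawlers discussed:

    require 'uri'

    # Tracks the last reserved fetch time per host and spaces requests
    # out by at least min_delay seconds, even across many threads.
    class DomainThrottle
      def initialize(min_delay: 1.0)
        @min_delay = min_delay        # seconds between hits to one host
        @last_slot = Hash.new(0.0)    # host => last reserved fetch time
        @mutex     = Mutex.new
      end

      # Reserves the next polite fetch slot for the URL's host, then
      # sleeps (outside the lock) until that slot arrives.
      def wait(url)
        host  = URI(url).host
        pause = @mutex.synchronize do
          now     = Time.now.to_f
          next_ok = [@last_slot[host] + @min_delay, now].max
          @last_slot[host] = next_ok
          next_ok - now
        end
        sleep(pause) if pause > 0
      end
    end

    # Usage: call throttle.wait(url) before each fetch.

Without something like this, a naive crawler hammers a few large hosts and gets throttled or banned, which caps effective throughput regardless of hardware.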

~~~
detaro
You can check his source if you are curious how it works ;)

------
AznHisoka
I've crawled over a billion pages over a stretch of about 3 years. Crawling is the easy part; just crawling a billion pages wouldn't cost more than a few thousand dollars a month. Add a couple more thousand for storing those pages in a search index and database.
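
For a sense of scale (my own back-of-envelope arithmetic, not figures from the thread), a billion pages over three years is a fairly modest sustained rate:

    # Average throughput needed for 1e9 pages in ~3 years:
    pages   = 1_000_000_000
    seconds = 3 * 365 * 24 * 3600       # ~9.46e7 seconds
    puts pages.fdiv(seconds).round(1)   # => 10.6 pages/s sustained

At roughly ten pages per second on average, fetching itself is cheap, which is consistent with storage and indexing being the bigger cost.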

~~~
outpan
Would you be able to share what your stack was, and the resources it took? Thanks a lot.

~~~
AznHisoka
Ruby, with Sidekiq as the message queue.

Postgres to store the data.

Elasticsearch as the search index.

My ES cluster has around 10 nodes (64 GB RAM, quad-core).

The Postgres cluster is 4 nodes (1 TB, 64 GB RAM, quad-core).

800 crawler threads distributed across 10 dedicated servers.
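
For anyone unfamiliar with that stack, here's a minimal sketch of how the pieces might fit together. This is my illustration, not AznHisoka's actual code; the class and method names are hypothetical:

    require 'sidekiq'
    require 'net/http'
    require 'uri'

    # One URL per job; Sidekiq fans jobs out across worker threads.
    # For example, 80 threads per process x 10 servers would give the
    # 800 crawler threads described above.
    class CrawlPageWorker
      include Sidekiq::Worker
      sidekiq_options queue: 'crawl', retry: 3

      def perform(url)
        response = Net::HTTP.get_response(URI(url))
        return unless response.is_a?(Net::HTTPSuccess)
        persist(url, response.body)
      end

      private

      # Placeholder: in this sketch the raw page would go to Postgres
      # and the extracted text to Elasticsearch.
      def persist(url, body)
      end
    end

    # Enqueue work from anywhere:
    #   CrawlPageWorker.perform_async('https://example.com/')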

~~~
outpan
Thanks a lot! This sounds reasonable. Did you guys look into professional
services for this?

~~~
AznHisoka
Nope. We have lots of custom needs.

------
cdnsteve
I think it would be valuable to have an open dataset of a raw crawl index. It could be distributed via academic torrents, or through a partnership with a hosting provider.

The real innovation won't be in crawling but in working with the index: filtering it, organizing it, trying sorting algorithms, and learning.

If this were available and gained popularity, I could see competition in search again.

