Hacker News

Strange how an unfunded startup with a single founder (DuckDuckGo) is getting better results than a $30 million-backed one composed of a bunch of ex-Googlers.

That's because Cuil was building a search engine from scratch rather than relying on Bing's/Yahoo's index to do the underlying scoring. Maybe they shouldn't have started from scratch, but they were certainly tackling a much harder problem (maybe not the right one).

I've heard through the grapevine that they were able to index and serve 100 billion documents on 100 machines, which is a pretty impressive technical accomplishment if true. I'm surprised they weren't acquired for that. It's unfortunate that their search quality wasn't up to snuff yet.

Those numbers alone really don't mean much.

How many queries per second can they handle on those nodes, and with what latency? What kind of relevancy calculations were they able to do at query time with 1B documents per node? Were they able to support query-time aggregation of structured fields in their documents? Was the index stale, or did they support continuous feeding and indexing of new documents? If the latter, how well did they meet their QPS and latency SLAs while indexing new documents?

I can set up a single search node and fill it up with God knows how many documents any day, but the difference between supporting 10 QPS at ~500ms latency and 3000 QPS with the 99th percentile below 40ms is far more interesting than exactly how many documents I have per node.
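For concreteness, the tail-latency figure mentioned above (99th percentile below 40ms) is computed from raw per-request measurements; a minimal sketch using the nearest-rank method and made-up numbers:

```python
def percentile(latencies_ms, pct):
    """Nearest-rank percentile: the value at rank ceil(pct/100 * N)."""
    ordered = sorted(latencies_ms)
    rank = -(-len(ordered) * pct // 100)  # ceiling division, 1-indexed
    return ordered[max(rank, 1) - 1]

# Hypothetical sample: 990 fast responses and 10 slow outliers.
latencies = [20.0] * 990 + [300.0] * 10
p99 = percentile(latencies, 99)
# Note that p99 still looks fast here: the slowest 1% of requests
# sit entirely beyond the 99th percentile, which is why SLAs often
# quote p99.9 or p99.99 as well.
```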

The harder problem IMHO is customer acquisition. Many companies have built (and still have) link graphs of reasonable quality, but they all struggle to gain real customers.

I started out before those APIs existed, and did all my own crawling & indexing. When they came out, I decided to focus on my value-adds because I thought that was a quicker path to customer acquisition.

Furthermore, I don't use Yahoo/Bing straight up, e.g. I re-rank, omit, etc. I also mix them with my own index and a negative spam index from my own crawling efforts.

"I don't use Yahoo/Bing straight up, e.g. I re-rank, omit, etc."

Any re-ranking you can do on top of these services is very limited because they are black boxes: you can't see what factors went into ranking a page the way it was ranked, you can't tweak the weights of different factors, and you can't add new factors. All you can have is hardcoded rules like "if there is a Wikipedia page in the first 20 results, bring it to the top" that don't really add much value, because if that Wikipedia page is any good it would be at the top already. Spam results are similar: you can provide impressive customer service by blacklisting spam on user request, but the major search engines are already so good at down-ranking spam that you don't see it whenever there are other meaningful results.
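A hardcoded rule of the kind described, e.g. the Wikipedia one, might look like this minimal sketch (the dict-based result format is hypothetical, and the point stands that such a rule adds little over the upstream ranking):

```python
def promote_wikipedia(results, window=20):
    """If a Wikipedia page appears in the first `window` results,
    move it to the top; otherwise leave the upstream order alone.
    Each result is assumed to be a dict with a 'url' key."""
    for i, r in enumerate(results[:window]):
        if "wikipedia.org" in r["url"]:
            return [r] + results[:i] + results[i + 1:]
    return results

results = [
    {"url": "http://example.com/a"},
    {"url": "http://en.wikipedia.org/wiki/Search_engine"},
    {"url": "http://example.com/b"},
]
reranked = promote_wikipedia(results)
```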

Your marketing here on HN has been brilliant, you have some very interesting UI decisions and possibilities that Google doesn't have, but your added value is definitely not in improved ranking of results.

Thx. I can't reveal too much here, but I do a lot more in the reranking area such that the top 20 will look very different for many queries when compared. I think these improvements go a long way to improving search UX, but ranking is subtle and so doesn't get noticed much (except when failing miserably).

That's almost certainly the right strategy for you. The people who founded Cuil had a solution in search of a problem: they came from the search infrastructure teams at Google and wanted to use those skills. It turns out that running a search engine on very few machines isn't much of a competitive advantage, nor is even having a very large index.

They also got a lot of marketing out of their "ex-Googlers take on Google" narrative, which probably wouldn't have worked as well if they had used something like your strategy.

"That's because Cuil was building a search engine from scratch rather than relying on Bing's/Yahoo's index to do the underlying scoring."

I've heard this in other discussions of DuckDuckGo here, and I don't understand why Bing/Yahoo allow a potential competitor free access to data that is so important to their search businesses. What's in it for Yahoo/Microsoft? Or is DDG paying for the privilege?

It's probably the same reason google allows free app engine accounts. If someone builds something cool, it's easy to integrate upon acquisition.

At the moment DDG is effectively a customer, not a competitor. If DDG ever became large enough to show up on Bing's radar (Bing currently has 600x as much traffic), you can bet that the terms would change.

As far as I know, DDG isn't paying for access, but I might be wrong about that. Maybe they are paying for Bing but not Yahoo?

Yahoo have recently announced that they will soon be charging for BOSS.


"We are exploring a potential fee-based structure as well as ad-revenue models that will enable BOSS developers to monetize their offerings. When we roll out these changes, BOSS will no longer be a free service to developers."

Yahoo's search API is about to go premium. It's been free to date, but they have always said they'll start charging for it at some point.

They can be the long tail of search engines, an army of Google beaters that are used by people that would never consider using Yahoo search.

I don't think the ordering of the results is Google's competitive advantage anymore; it's branding and habit.

"100 billion documents on 100 machines"

I think Cuil should sell their index as a service, over which businesses could implement PageRank and the related algorithms listed at http://en.wikipedia.org/wiki/Pagerank#See_also.
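The core of PageRank over such a link graph is a short iteration; a minimal sketch, assuming the graph fits in memory as a dict of outlinks (real implementations use sparse matrices and convergence checks, and every outlink here is assumed to point at a page in the graph):

```python
def pagerank(links, damping=0.85, iters=50):
    """Iterative PageRank over a link graph {page: [outlinked pages]}.
    Each page's rank is split among its outlinks; dangling pages
    (no outlinks) spread their rank evenly over the whole graph."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling node
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Toy graph: "a" and "b" link to each other, "c" links only to "a".
ranks = pagerank({"a": ["b"], "b": ["a"], "c": ["a"]})
```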

Or just make their crawl data accessible.

OK, so the problem they were solving was more difficult, but they are going after similar markets (or at least segments of the same market).

Technical achievements are great; but Gabriel is much better placed. He is self funded, he is building on existing tools (always good advice), he leveraged us, the hacker crowd, who can be very loyal, he clearly listens to his customers etc.

Cuil, on the other hand, produced some very confusing (if technically interesting) things and then ranted about those who criticised them. They had a lot of big bucks VC money (always a warning sign) and didn't appear to be leveraging loyalty from any user base.

Even if the problems these two startups are facing are different, there is a lesson here: one is how not to build a product, and one is :)

I voted you up but I don't find it strange. yegg only answers to himself and his users while operating with tight constraints. Cuil, however, had a bumper budget and a cadre of smart people with a plethora of ideas and directions in mind.

To me, Cuil looked like a prime example of design by committee whereas DDG is clearly opinionated but thrives because of it.

DuckDuckGo is Cuiler. http://cuiler.com

IMHO DDG's success stems from the fact that yegg listens to the users, is small yet big, and addresses a niche market - the one that demands quality results and settles for nothing less.

By being small, DDG can address issues that others will not even bother thinking about, e.g. enhanced privacy controls, Tor utilization, etc.

I believe that Cuil was a dream that went south. Unfortunately, that dream had a hefty bill ($33m).

Strangely enough, I visited that site once before and could not put a name to it until I saw a screenshot.

"IMHO DDG's success stems from the fact that yegg listens to the users"

I once contacted Cuil about some worthless search results and got a standard reply asking me to be patient since they were a small company. But there wasn't any hint in the email that they would actually address the issue, so I drastically reduced my use of them and never bothered to contact them again. Listening to users would probably have helped, if my experience indicates a pattern.

Alexa says Cuil and DDG are about the same. (I know Alexa sucks, etc. etc.)
