When Altavista launched, it was an impressive showcase of the DEC Alpha's power. Intel only became usable for serious servers (with the exception of exotic stuff like Sequents) years later, and the same was true of Linux. Google had the good fortune to be in the right place at the right time, when Lintel became a commodity in the datacentre. Five years earlier, they'd probably have been on Sun.
Altavista launched in 1995 and Google began as a research project in 1996. At my own startup, in 1996, we used Intel because with Sun servers you paid an extreme markup for unnecessary reliability.
I was VP of Engineering at Altavista in 2000, and I started the project to move to Linux. It wasn't easy because search engineering was populated by Alpha fans who were unswayed by the 10x cost advantage.
As late as 2001, I sat in multiple focus groups where all the enterprise customers said Linux was not yet ready for the datacenter. IBM's penguin campaigns were just beginning at that time.
Google's large-scale use of Linux was groundbreaking when they launched in 1998.
It would have affected their cost base certainly, and probably their entire datacentre strategy. With SPARC kit, you wouldn't build assuming that machines will often fail and simply be swapped out, for example, something that Google is famous for.
> With SPARC kit, you wouldn't build assuming that machines will often fail
In the late 90s SPARCs did fail. Yes, they were more reliable than commodity x86 boxes, but they failed often enough that it was an issue if you had 100 or so, and search engines hit that level very quickly.
Right, but look at what Google do, their boxes are basically disposable. Why invest in dual-redundant-hotswappable-everything boxes when you just throw the entire thing away if any bit of it breaks, 'cos it's cheaper to replace it than to even try to repair it in-place.
> Right, but look at what Google do, their boxes are basically disposable
We're talking matter of degree.
The claim was that building a search engine out of 90s sparcs meant that you didn't have to worry about things dying.
That claim is not true - reasonable search engines of that era required enough machines that the failure rate of 90s sparcs, while better than x86 of the time, was enough to require folks to handle frequent failures.
It's reasonable to argue that the cost/benefit tradeoff of SPARC's extra reliability vs x86 wasn't worth it for those companies, but that's a different argument.
Close. Lots of other companies were hiring pretty high-tier talent as well, and had intense focus. Google's success came down to effectively executing across typically disparate disciplines. You have hard-core research-level CS eggheads, you have top-tier software engineers, and you have state-of-the-art data center operations. In a typical organization these groups have competing interests; they fight amongst themselves, and in the end some sort of compromise is reached that allows everyone to grudgingly get along.
At Google these three groups worked hand in hand and complemented each other's work. The eggheads came up with PageRank, the coders figured out how to make PageRank scale through massive parallelism via sharding and MapReduce, and the data center folks figured out how to make sharding cheap and fast through commodity PC-based servers and massive amounts of automation for management. In the end everyone was working at the top of their game to help everyone else. The result was that Google was able to deliver better results (PageRank) faster (MapReduce) and cheaper (automated commodity-hardware datacenters) than the competition.
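The sharding idea described above can be sketched in miniature. This is a toy illustration (the class names and hash-based routing are my own assumptions, not Google's actual design): documents are hashed across shards, each shard keeps its own inverted index, and a query fans out to all shards before the hits are merged.

```python
# Toy sketch of a document-sharded search index. Each shard holds an
# inverted index over a disjoint subset of documents; a query is sent
# to every shard and the per-shard hits are merged. Illustrative only.

from collections import defaultdict

class Shard:
    def __init__(self):
        self.index = defaultdict(set)   # term -> set of doc ids

    def add(self, doc_id, text):
        for term in text.lower().split():
            self.index[term].add(doc_id)

    def search(self, term):
        return self.index.get(term, set())

class ShardedIndex:
    def __init__(self, n_shards):
        self.shards = [Shard() for _ in range(n_shards)]

    def add(self, doc_id, text):
        # Route each document to exactly one shard by hashing its id.
        self.shards[hash(doc_id) % len(self.shards)].add(doc_id, text)

    def search(self, term):
        # Fan out to every shard, then merge (union) the results.
        hits = set()
        for shard in self.shards:
            hits |= shard.search(term)
        return hits
```

The payoff of this layout is that both indexing and query serving scale horizontally: adding cheap commodity machines means adding shards, which is exactly why disposable hardware fit the architecture so well.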
There were lots of other fine details that led to google's success, but in the end those core factors are what allowed them to deliver a better search experience to users (better/faster) and to be more competitive in the marketplace (lower cost per search means more profit even with lower per search ad revenue).
No one else in search was pushing on all the right pressure points the way google was, and the rest is history.
I agree, but I'd ascribe that to Google being run by technical founders rather than MBAs. The main benefit of technical company leadership is the ability to "see" across and coordinate the disparate areas.
Infighting and begrudging compromises only happen when the leadership is blind to the details.
Having technical founders is a necessary but not sufficient condition, I think. Lots of companies have had technical founders who haven't managed a level of success as impressive (regardless of scale) as Google's.
From the article: "In short, Google had realized that a search engine wasn't about finding ten links for you to click on. It was about satisfying a need for information. For us engineers who spent our day thinking about search, this was obvious. Unfortunately, we were unable to sell this to our executives. Doug built a clutter-free UI for internal use, but our execs didn't want to build a destination search engine to compete with our customers. I still have an email in which I outlined a proposal to build a snippets and caching cluster, which was nixed because of costs."
The engineers here had more than an inkling of what needed to be done. The problem was that this understanding didn't spread through the entire company.
If members of a group can reclaim a slur for themselves and not be insulted by it, can geeks call themselves eggheads and not be insulted? Not that I condone reclaiming slurs, but if anyone can use them, the people they target can. So why not geeks and the word egghead? Heck, even "geek" was a derogatory word in high school.
The better abstracts are the reason I use DuckDuckGo at home.
If I just want to know when the next episode of Big Bang Theory is out or what the weather is today I rarely need to even click on a result.
For more obscure technical searches at work, Google still finds more answers.
But remember - the barrier to change for a search engine's customers is very very low
With the new DuckDuckHack project, it's also a lot easier to get very quick 'cheap' results. For more complex queries I do still find myself !g'ing them. It's getting better though; it has improved over the 3 months I've been using it.
I very often miss the top box too.
I have learned how relevant this top box is, but I still miss it very often.
My eyes usually go right to the link with "Official site" tag.
I've heard the effect is called banner blindness.
However, I use Adblock in every browser, so I have less training than others at ignoring ads. When I do see one, it hits me harder (a side effect of Adblock).
I would modify that to read (since this article even stated that the engineers there found this to be obvious):
>The Google CEOs have the point of view of an engineer.
And, more generally:
>The Google CEOs have the point of view of the people doing the actual work.
The lesson to take away from this is that one shouldn't try to manage what one can't do oneself. The disconnect between the manager and the problem domain becomes too great, and they end up making ridiculous decisions because they are acting on the wrong information.
First, engineers very often build crappy products when left to their own devices. The products they do best at are ones where they are also the users and are more or less representative of the target audience. Google's a great example, and so is Firefox. So I think the Google execs' perspective as users was much more important. Consider as a contrast Google Plus, which definitely was not built because its executives were avid but dissatisfied Facebook and Twitter users.
I think your second point is almost right. You should never try to control things you don't understand. It's ok to manage things you don't understand, because good management in that case is not directing the people who understand, but supporting them in achieving common goals.
As a non-tech example, few hospital administrators can perform brain surgery. But that's fine as long as they ask the brain surgeons what they need rather than telling them how to work.
Did Google actually have revenue at that point in time? They bought Applied Semantics (for adsense / adwords) in what, 2003? (edit: mlinksva points out that they introduced adwords at the end of 2000)
Inktomi management would probably have had to raise capital on a risky pivot whilst at the same time dropping all of their existing revenue streams in order to compete head to head with Google who at that point didn't even have a way to monetise their technology.
That's a hard thing for any company to do: In this case it would have been the right choice, but it's far easier to say that with the benefit of perfect hindsight.
Inktomi killed Inktomi long before Google helped put the nail in the coffin.
What the article doesn't say is Inktomi had a dual sided business. One side was in Caching Proxies the other was licensing a search API.
Inktomi decided to focus on the caching proxy business and de-emphasized their search product, only to watch the proxy business evaporate as internet bandwidth became cheaper/better.
The focus on a shrinking market (proxies) and the lack of focus on growing market (search) killed them. Had search been a priority from the beginning things may have ended very differently with Inktomi creating their own front end.
Indeed. Inktomi also tried to position themselves as an arms supplier to the CDN business. It didn't help that the CDN business basically disappeared from 2001-2004, and that CDNs, to this day, rarely buy software.
I was going to mention this. It seems like the management at Inktomi let it slide once their own engineers started using Google's search engine. Their response is a likely bellwether of the attitude of the time.
It was clear that Yahoo.com was the definitive result for the query "yahoo" so it would score a 10. Other Yahoo pages would be ok (perhaps a 5 or 6). Irrelevant pages stuffed with Yahoo-related keywords would be spam.
As someone who worked on search quality at Google for some time, this bit jumped out at me as a terrible mistake. The correct way to judge results for the query [yahoo] is:
(a) Where is yahoo.com? At the top?
(b) There is no (b).
It seems like a slight difference, but it leads to the wrong priorities. For the query [yahoo], it does not matter if spam or non-spam is in spot #5. The only thing that matters is where you put yahoo.com.
After I switched to Google, I never understood why all of the competition just disappeared overnight. You would think they would have put up a fight, but that never seemed to happen. At least this article gives a little insight into that. I still wonder what happened to Altavista.
AltaVista tried to jump on the 'portal' bandwagon. I remember at the time how stupid I thought they were, trying to beat Excite and Yahoo at something that was already old hat, a concept the Internet had outgrown. Then they screwed up pretty much the same way Inktomi did.
AltaVista still exists. It's awful, and it's powered by the Yahoo search engine. Which is pretty much the same thing, I suppose.
Yahoo! bought Overture (which owned AltaVista) way back in 2003. They only replaced the search results with Yahoo!'s a year ago after announcing the site would be shut down. Seemed like they were using it as an occasional test bed in the meantime.
I just realized that Google won on search the way Apple has won on smartphones. They control the full stack -- frontend, relevance, indexing, advertising -- and tightly couple these pieces. Inktomi couldn't control the user interface, much as Google can't really control the interface on Android.
You are right, but IMO you miss the real point. Both Google's search and Apple's iPhone were about delivering a wholly satisfying product/service, and controlling the whole stack was needed to do that for those cases. This is not always true (though it often is).
- users always want faster, more direct answers (rather than controlling the filtering/categorization of their searches)
That's a very power-user-centric attitude, don't you think? As a power user I preferred to type long, complicated Sabre queries to find exactly which flight I wanted. It was much faster, and I had memorized all of the complicated mnemonics. But that's not what a casual user would want to use.
Asking users to specify categories for what they want means requiring a certain orientation in their thinking which is shared by computer scientists and trained librarians. But to an average user, that's extra work. And think about how this might work if you're talking to an actual human librarian: if you start asking about TV shows, and then mention "The Big Bang Theory", do you think the librarian will ask you, "Did you mean the scientific theory, or the TV show?" That's only something a stupid computer would do. A smart librarian would take the context of the previous queries that you've made of him or her, and provide the right answer quickly and efficiently. Wouldn't you want the same thing from a search engine?
To be fair, faster answers + the ability to undo a do-what-I-mean guess lets me correct Google's assumptions pretty fast. The tools at the left allow for some quick refining as well; that's pretty useful when I need particularly fresh results, or a time window from when some news was breaking. And the fast completion is useful to refine a query before it returns results (though occasionally annoying when it erases quotes and the like).
True. In this case, I forgot to mention that not having a world-facing UI deprived us of vital signals that Google used to improve their experience. We had query logs, but we had no idea what our customers' users were doing within a session, no user history, etc.
According to a friend of mine who worked on the search team, Inktomi shifted its (management and CapEx) focus away from search and onto other projects. He thought at the time that even with the constraints of not competing with their own customers, there were things they could have done to better compete if management had chosen to do so.
Yes, that's true. They didn't think search would be a huge business. Back then the model was to sell search to portals charging by query volume, and it was a race to the bottom. Our Solaris servers were more expensive than Google's Linux boxes.
Ok, so the timing on this is really amazing. Techcrunch, reporting on Facebook's S1, mentioned that Yahoo! has suggested another 12 patents they may try to throw against Facebook and their Open Compute project. The Yahoo! letter is here: http://www.scribd.com/doc/92280387/TechCrunch-Letter-From-Ya... and the source of many of these patents? Inktomi!
After reading the "Tale" it appears that Inktomi killed itself. It is a good example of what happens when top-dog companies fail to innovate in the face of sudden superior competition. RIM is another example, but it would be wrong to say that Apple is killing them. Apple is just making and selling superior products.
That was a good read. I remember I used Yahoo for searching the web. Due to the relevancy factor I moved to Altavista (though it didn't improve anything until the day I found out about Google, which I still use). I didn't know that Inktomi was powering search at that time. If Yahoo was so dependent on Inktomi or Google for its search, I wonder why they didn't work on search themselves. After all, they were an information-organization tool. Why did they ignore such a huge market? VCs were going crazy funding search engines, and search-engine companies were either getting funded or going public. Based on these signals and the traffic they had during the dot-com era, they could have easily built a substantially good search engine; yet they ignored it. Can anyone shed light on this?
Yahoo owned Overture so their results were pure pay-for-placement. If anyone wanted to pay $100 per click to have the #1 spot on "beef" go to a site about chicken, that was a-OK with Yahoo management. When Yahoo realized their auction system was stupid, they had "project Panama" which was also a joke and by that time Google had the market to themselves anyway.
When I started at Yahoo Search (in 2005; it was Inktomi), I quickly learned that "just Google it" was frowned upon, and modified my vocabulary to say "just use web search". To this day, I still use this phrase.
Most engineers that I knew in YST did not use Google at all. We preferred to eat our own dogfood, and filed query triages against bad results (and only used Google to compare).
Oh yes. I remember back in early 1998 I was using Copernic to aggregate and filter results from a whole bunch of search engines, because individually their results were so poor and irrelevant. Then I discovered http://google.stanford.edu and never looked back.
As an Australian I used to use 'Anzwers' as it seemed to give great relevance for Australian-specific information. I think it used Inktomi?
I'm not sure why I switched to Google. Not to discredit the Google UX, but I think I switched because the name 'Google' was so catchy and eased its way into my university's vernacular. "Just Google it" rolls off the tongue nicely.
I remember why I switched to Google: 1. speed, 2. simplicity. Relevance, meh. Never noticed much difference there. This led me to believe that PageRank was more about marketing (brilliant marketing) than technical edge.
Are you kidding? I was using Altavista, IIRC, and would routinely have to page down to the second or third page to find any kind of relevant link. When I switched to Google, I almost immediately stopped looking at more than the first page of results.
There was nothing wrong with Altavista's algorithm for "finding stuff" - it was just too vulnerable to SEO pollution. I remember the quality of it declining almost overnight once spammers (yes that's what SEOs are) figured it out.
PageRank was of paramount importance back then. At the time, AltaVista/Inktomi were easy to spam, and Google wasn't.
The Google spam that actually works (or at least, worked a few years ago, before Panda) requires setting up lots of sites on lots of independent IP blocks. That was much harder when PageRank appeared; hosting was damn expensive, and VPSes were nowhere to be found.
PageRank was a huge thing. I used "and" queries in altavista at the time. It was no match.
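For anyone curious what PageRank actually computes, here is a minimal power-iteration sketch over a toy link graph. This is illustrative only (the damping factor of 0.85 and the dangling-page handling are conventional textbook choices, not Google's production implementation):

```python
# Minimal power-iteration PageRank over a toy link graph.
# links: dict mapping page -> list of pages it links to.
# Textbook sketch, not a production implementation.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a (1 - damping) baseline share.
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # Split this page's rank evenly among its outlinks.
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page: spread its rank over all pages.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

The key property, and the reason it resisted early spam, is that a page's score comes from the scores of the pages linking to it, so inflating your rank requires controlling many independently ranked sites rather than just stuffing keywords.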
This is purely subjective, but Google is the only one that links to the Netlib 'real' LAPACK, straight to the file/documentation in question. The others have mixtures of Java packages and other examples...