I guess there's still something human editors (which Yahoo still uses for its news page) can do better than machines, for now. At least to me.
I also use my.yahoo.com; I still like it more than Google's iGoogle. Especially since I don't want the overhead of loading iGoogle every time I hit google.com, when I just want to search.
Yeah, flickr, too, although sooner or later I'm going to get around to putting up my own gallery thing for my pictures. I'm paying for hosting anyway, might as well use it.
The funny thing is that while coding that stuff, the bigger problems were financial ones and the enormous amount of cruft that is the web. The actual search engine wasn't that hard at all.
But I think that once you have enough customers, the cost of crawling goes down with every new customer you sign up, because you only need to crawl a page once and can sell the crawled result to many customers. Or am I misreading your model, and is every page crawled over and over again for every user?
Could you define "good"?
Of course my broker supplies all that data and more, but they don't understand the complexities of corporate firewalls and HTTP tunneling. Not to mention information stovepiping -- you know, one page for news, a separate page for quotes, another page for charts ... Yahoo integrates all that for free.