I work in search and I can't recall EVER being frustrated with what Google gave me - frustrated enough to go to another engine, that is. And I live, eat, & breathe the thing.
I understand mine is a notoriously small sample (compared to the entire world of search - and I generally search for terms that have been optimized by fellow search professionals), but the point is that ten good search results is pretty much impossible anyway - what matters is that there are always three or four relevant results on every page. And without fail, Google continues to deliver this - at least for me.
This is what happens with every monopoly. We hate the Yankees, Microsoft, whatever. It's natural to want to bring down something so dominant. But from where I sit, Google is doing an absolutely incredible job.
The only chance I see of something like DDG unseating them (or even gaining legitimate market share) is if Google loses the ability to pivot. If that happens, the algorithm could get diluted, which could turn their product to crap.
BUT, given their history of 200+ algorithm changes per year, they have shown that not to be the case. Because of this, I imagine Google will be with us - & strongly with us - for the entirety of our lifetimes.
>> I really think this is a case of confirmation bias. We see Google approaching this dominating mass that's impossible to stop, so we begin looking for ways to tear down big brother and look for holes.
Lately, I've found myself feeling like search is back in 1999 again.
When I joined Google in 2000, one thing I did was run a bunch of searches and save the results. When I went back and looked at the data, Google2000 can't compare to Google2010. Google2000 still had lots of spam--it was much, much better than Altavista or other competitors at the time, but the results weren't nearly as good as they are now. I keep meaning to do a blog post and show some examples.
One example category does come to mind, which is a frequent pet peeve: lyrics searches. They definitely return sort of crappy results currently, but I can't say for sure what they returned in 2000, since I didn't save any results.
Imo [joy division walked in line lyrics] should return a page like http://www.joydiv.org/shadowplay/joyd/walkedline.html. Instead it returns a bunch of sites like lyricsfreak.com, sing365.com, lyricsdepot.com, metrolyrics.com, etc.
The case is even stronger if there's actually an official site for the musician with official lyrics. An example of that: [vnv nation fragments lyrics] should return http://www.vnvnation.com/Webfiles/lyrics/fragments.htm, but instead it again returns the usual lyrics-site suspects.
My recollection is that I used to get fan sites more than lyrics aggregators in 2000. That might be due more to web changes than Google changes, admittedly, since many of these lyrics aggregators didn't exist in 2000, and hobbyist fan sites may have been more vibrant. Still, the fact that the same lyrics-aggregator sites dominate almost all of these searches seems weird; basically every lyrics search returns "big lyrics site" rather than a site specializing in the band I searched for.
You're spoiled then.
Spoiled? I remember search being horrendous back in those days. It was almost exact word matching, and it took hours to find relevant results. Google was a breath of fresh air back then (especially since they didn't run all those X10 ads and popups à la Yahoo!).
I switched to Bing a few months ago for tin-foil-hat reasons, and also out of disappointment with the way Google interpreted some of my queries (for some reason the plus sign doesn't mean anything anymore).
I occasionally use Google when I give up on finding something on Bing, but 99% of the time 4-5 results on the first page are pages I already found on Bing, with the rest being irrelevant.
I too wasn't aware of the search tools Matt Cutts mentioned above; I'll give them a try the next time I'm having a hard time finding something with a date restriction.
But it doesn't actually move the conversation forward at all.
Certainly Google isn't perfect, but they do a pretty good job combating web spam that is explicitly targeted at them.
Google has power search operators, and we introduce new tools like the left-hand-side search tools, but we also have to stay accessible to the folks who aren't that tech-savvy.
My vote: "Syntax error on line 1" with no further hints. ;)
At one point I was debugging, I think, an NSTableView object. Google says "eh, fuck it, iPhone dev is much more popular, here's UITableView. Have fun with that."
And so then I have to wrap NSTableView in quotes to force Google to use my input as I've provided it.
I wish I could turn this kind of stuff off.
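For what it's worth, the quoting workaround described above can also be scripted. A small sketch that builds a quoted-query URL against the standard /search?q= endpoint (nothing here beyond the thread's own example):

```python
from urllib.parse import urlencode

def quoted_query_url(term):
    """Build a Google search URL with the term wrapped in double quotes,
    which asks the engine to match the input verbatim."""
    return "https://www.google.com/search?" + urlencode({"q": '"%s"' % term})

print(quoted_query_url("NSTableView"))
# the quotes survive as %22 in the encoded query string
```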
I know a little of Google's syntax, like the + and " workarounds, but having literal search as the default would be worthwhile for many users. I recall switching to Altavista back in the mid-90s because it offered a feature like that.
My ultimate Google fantasy: An account setting called "2008 mode."
No instant search.
No fancy, annoying endless scroll Google image search.
No word clustering/auto-substitution.
It would be awesome.
The image search has been a pet peeve of mine, especially because of the way the results are provided to the end users. It seems they do everything they can to stop people from visiting actual websites through the image search.
Instant search was fun for about 30 seconds. Then it was back to work, and it became rather annoying. At least you can disable it (just to the right of the search bar there is a little settings menu).
The word clustering and auto-substitution are a real pain in the ass, especially when it keeps coming up with stuff that just isn't right but is more popular. I really can't stand that. I find myself using quoted queries far more frequently than in the past.
2008 sounds just about right, they can market it as 'google classic' for all I care.
It makes me wonder if, perhaps, Google2010 is just Google's New Coke ;-)
At the very least, please give us an "allintext:" that actually works:
absolutely no stemming whatsoever
no "words were found in links pointing to this page" (this is the most infuriating thing ever)
and... no "you're a bot" craziness (or captchas that normal people can solve...)
For instance: หาเมนบอร์ดช๊อคเกต775 returns 42k results, and is not exactly a common combination of words on google.com.
Extra points if you know what it stands for :)
'Haa men bord chok ket 775' = 'Find motherboard socket 775.'
I would have thought that 'socket' would be a direct English transliteration, and use ซ (s) not ช (ch) though.
Are you in Thailand, Jacques? Also, what makes you think this is such an uncommon phrase? It may not rival English, but there are plenty of Thai speakers out there, in many countries.
On google.com it would be an uncommon phrase. On the Thai version of google not so much I guess.
I don't know what that is supposed to be a reference to.
As proof, look up any SEO-play company on Alexa around April 2010.
Google Image Search also breaks horribly with FF and Ghostery installed.
Google Instant still does horribly at my favorite test search word: Cardinal
Do they know if I'm looking for
a) the bird
b) the team
c) guy with the funny hat
? A: No.
Too much ambiguity.
I actually tried this search on DDG yesterday while trying some other tests and DDG's results were much more useful IMO.
Something like "view advanced members" or something...
Are you sure you didn't mistype? If I type in NSTableView, it shows me two UITableView hits at the top, before the exact matches.
Not that I suspect it helps anything, but at least if someone finds my computer or something, that data isn't two clicks away.
All I did was point out that that data is stored on a server somewhere else and that there are lots of eyes that could be prying there.
I'm sure that there are plenty of people that are concerned about their internet use being monitored, for instance, dissidents and other people that have legitimate reasons to be afraid of having their search history lifted in the future.
Interesting reading on the subject of subpoenas of search records:
The less information that is being held on you by third parties the less chance that one day you'll be part of some drag-net operation.
Google for "what we have here is failure to communicate" http://www.google.com/search?sourceid=chrome&ie=UTF-8...
The first hit is the wiki article on the quote (good), but the exact search string is not present on the page.
A big issue with Google is that, behind the scenes, things change without any outward guidance to longterm users on how to adjust.
I've been complaining about these changes for years.
"These terms only appear in links pointing to this page: what we have here is failure to communicate"
"Brand Model 1234 broken control panel"
And get an aggregate review page with a mention of broken control panels for another model and brand on which my model and brand are not listed....
I'm glad that my search term is in a link pointing to this page, but that is worth less than nothing to me...
The interface on my webmail software feels like a mail client should -- easy navigation, threaded conversations, multiple window panes, and it's fast. Google on the other hand took years before they could be bothered to add buttons to Gmail, and even now Gmail's interface is an ugly monument to 90's era design principles.
Uh, in 2004 when Gmail launched it was one of the first widespread AJAX apps. It made Hotmail / Yahoo feel like molasses, gave you 200x storage, and added the idea of labels instead of folders. They obsessed tremendously over performance to make it usable by people who hitherto had only been able to tolerate desktop mail clients.
Now I realize a lot of things change in 6 years, but hell, Dreamhost is still using SquirrelMail. The author just throws it out there like amazing webmail is a foregone conclusion, yet I've not seen any webmail that beats Gmail even by a little bit. Can someone enlighten me?
It feels pretty much like a desktop application, actually...
I think Gmail does hit a sweet spot for most power emailers. It took months for me to switch from my previous custom mutt + .procmail setup to Gmail. Do I still miss procmail sometimes? Sure. But overall, I save a ton of time using Gmail instead of rolling my own email solution.
That's not to say it's bad, but rather that -- given Google's size and the ridiculous amount of smarts on its payroll -- it could be a lot better. It just doesn't seem to be what Google is focusing on these days.
I'll grant though that Gmail has, hands-down, the very best spam filtering in existence. I really don't know how you guys do that.
But ... just to put things in perspective, I think that's the only killer feature that Gmail has left. Otherwise, it's pretty vulnerable; any service provider could launch the same setup I have, on the really cheap VPS services available these days, and readily compete with Google's business mail hosting.
But I do think that Gmail Labs is pretty hard to beat. Labs like "Send and Archive," "Undo Send," and "Quick Links" save my bacon on a pretty regular basis.
The engineering problem is a pretty interesting one, but the headache of ensuring your users can actually get their mail someplace makes it seem dull or insurmountable to solo developers.
Users start to get annoyed if you do any worse than 99.9%, and they don't like receiving any spam at all, and to make matters worse, "the other end" can be anything from a creaky old sendmail server to an Exchange monstrosity, either of which can communicate in odd ways or drop a message altogether.
I keep expecting to see email go the way of Usenet, but it keeps refusing to die.
Wrt spam sites: DDG often looks a lot different because I maintain a large database of spam sites that I remove from results. I see these crop up all the time in the API feeds I use. It's over 60M in just the main TLDs (non-country-level domains).
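A minimal sketch of what filtering raw API results against such a blocklist might look like. The domains and result structure here are hypothetical, not DDG's actual implementation:

```python
from urllib.parse import urlparse

# Hypothetical blocklist; the real one described above is 60M+ domains.
SPAM_DOMAINS = {"lyricsfreak.com", "made-for-adsense.example"}

def filter_results(results):
    """Drop any result whose host is on the blocklist."""
    kept = []
    for r in results:
        host = urlparse(r["url"]).netloc.lower()
        # strip a leading "www." so www.spam.com matches spam.com
        if host.startswith("www."):
            host = host[4:]
        if host not in SPAM_DOMAINS:
            kept.append(r)
    return kept

results = [
    {"url": "http://www.joydiv.org/shadowplay/joyd/walkedline.html"},
    {"url": "http://www.lyricsfreak.com/j/joy+division/"},
]
print(filter_results(results))  # only the joydiv.org result survives
```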
I suspect that of the two, yahoo's has a much bigger impact given the size of the teams involved.
Probably because few people report sites.
Ironically, of all Google's services, search is the one I could most easily replace and would most like to replace, if an equally good competitor emerges. On the other hand, giving up Gmail, Google Calendar, Google Docs and even Buzz at this point would be hard as my life is interwoven into them in many complex ways. The fact that my whole life is stored in these accounts but the same account lets google track my searches and associate them to me scares me and actually makes me want to replace Google as my search engine.
Not that I'd want to go around with safe search disabled all the time...
Edit: FWIW, I fixed this: http://duckduckgo.com/?q=breast+shimming should now work as well.
And not long before, I searched for my oldest still-surviving site (on Angelfire). I discovered that while Google returned 20 garbage sites, they didn’t include mine; my site was the only one DDG returned.
I've created a subscribed link to mimic the UNIX cal command, for example: http://www.mattcutts.com/blog/add-calendar-shortcut-to-googl...
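For anyone unfamiliar with cal: Python's standard calendar module prints essentially the same monthly grid (with the caveat that Python's default week starts on Monday, while UNIX cal starts on Sunday):

```python
import calendar

# Roughly the same monthly grid that `cal 9 2010` prints in a terminal.
print(calendar.month(2010, 9))
```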
- [avaya 103r manual] It's a fair complaint that some low-quality results are returned, but there's a reason. Do that search and Google says "About 1,510 results." That's a minuscule number of results--it usually means the web has very little content that matches that query in any way. That's why the lower-quality and foreign results show up: we're scraping the bottom of the barrel of the web to find any results at all, and there aren't many pages that have those words. If you go to avaya.com and search for [103r], they don't find any results either. It's hard for Google to return useful Avaya results when avaya.com doesn't have that word anywhere on the site. :)
- You were looking for specs on a Gateway mt6840 motherboard, specifically the socket. Instead of trying to solve that in one query, I'd go for doing it in two steps. I did the query [site:gateway.com mt6840] to see if there was any authoritative result, and the #1 result was http://support.gateway.com/s/Mobile/2007/Oasis/1014554R/1014... which has specs for that motherboard, but not the socket. But that spec sheet mentions that the motherboard uses an Intel® Core™ 2 Duo processor T2450. So then I searched for [T2450 socket] and got a Wikipedia page that says the T2450 uses Socket M. Just to be safe, I searched for [MT6840 "socket m"], which returns a page on computing.net that's a forum with some ads, but the page mentions "945GM-based laptops support Socket M Core 2 Duo CPUs." The Gateway spec sheet says the MT6840 uses a 945GM chipset, so Socket M seems like a safe bet.
- The last search was about an OpenSolaris machine that wouldn't boot and that said "Error 16" instead. The complaint was that the results were stale/old. I did the search [opensolaris boot "error 16"]. Then over on the left-hand side, click "Show search tools" and click "Past year" to get results from the past year. The #1 result shows a long discussion about debugging this (which implies it's not a trivial issue). The #2 result is a discussion that points into opensolaris.org, which then points to this bug: http://bugs.opensolaris.org/view_bug.do?bug_id=6774616 . My point on this query is that you can use the left-hand search bar to restrict the results to a certain time range (e.g. only results from the last week, month, or year).
We do try to provide tools (e.g. estimated number of results, or the ability to restrict results by date) to help find the answers--or to find out if there aren't good answers on the web.
And in some cases like your #2 search, we could do a better job of synthesizing information. If page A says that a motherboard uses this chip/chipset and page B says that this chip/chipset uses this socket, then we could infer what socket the motherboard uses. Inferring information like this is really tricky though because the web can be really noisy.
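The two-hop inference described here is essentially a join over extracted facts. A minimal sketch using the example values from this thread; the extraction step, which is the genuinely noisy part, is assumed away:

```python
# Facts an extractor might pull from the two pages in the example above.
board_to_chipset = {"MT6840": "945GM"}       # from the Gateway spec sheet
chipset_to_socket = {"945GM": "Socket M"}    # from the computing.net forum page

def infer_socket(board):
    """Two-hop join: board -> chipset -> socket."""
    chipset = board_to_chipset.get(board)
    return chipset_to_socket.get(chipset) if chipset else None

print(infer_socket("MT6840"))  # Socket M
```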
I'm not saying Google shouldn't do better on these searches. You're clearly a power searcher, and I share your frustration when it's really hard to find what you want using Google. I'll pass this article around within the search quality group at Google and see if we could do better on searches like these--thanks for the feedback.
So, this whole thing started out as a brief rant that's been in my head for a couple of months. Also, I wanted to get rid of those tabs that had been on the far left side of Firefox for ages. I'm not even sure why I posted it to HN; I just did it and then intended to head out the door afterward, except that all of a sudden my web server became unresponsive.
Anyway: the ability to get results from the past year is great, and I was totally unaware of it. I'll update my post shortly with a note about that. Having something like that sticking right out on the left-hand side would be even better. :-)
I think I often see people complaining about search quality (here, and in the other HN thread I linked to from about a year back, as examples), but if you ask them about the specifics of the search, they don't remember. I think that's a problem that needs to be solved somehow. Although it's at least partly laziness on the user's part, I think Google could view this as a huge amount of potentially beneficial information that they're missing out on. Trying to improve search results without knowing what problems people are having with them is challenging, at least.
It would be nifty if there were some kind of "these were not the search results I was looking for" quick form linked from certain types of search results.
We do have a "Give us feedback" link at the bottom of each search results page which does some neat AJAX-y things. But even then, we get a lot of "Hi, I'm trying to find my great-great-great-grandfather. His last name was Smith. Thanks!" sort of submissions. Also submissions like "My computer keeps humming. I opened it up and cleaned out the dust, but it still hums. Do I need to clean out the dust again?" And some submissions that say "I would like to rank #1 for all my keywords. How do I do that?"
I've wondered whether a Chrome extension or something similar would give higher quality data for spam or bug reports. It might be worth thinking about offering something like that.
Okay, here goes. Lately I have been doing a lot of C# searches. Google suggest strips out the hash (#) from the suggest items, so I keep getting faked out by likely C programming matches. Can you please stop stripping off the "#"? Otherwise, I'll need to attach a hot wire to my down arrow to break the habit. Thanks.
Normally Google doesn't allow searches for punctuation, because doing so would swell the index size a lot for very little return in terms of helpful searches. But since we're engineers, we do support some punctuation, e.g. underscores so you can search for sprintf_s, and terms like C# and F#. I'll ask whether the # can carry through into Suggest.
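A rough sketch of how a whitelist of punctuation-bearing terms might work at tokenization time. The term list here is purely illustrative, not Google's actual list:

```python
import re

# A tiny, hypothetical whitelist of terms whose punctuation is meaningful.
PUNCT_TERMS = {"c#", "f#", "c++", "sprintf_s"}

def tokenize(query):
    """Split on whitespace; keep punctuation only for whitelisted terms,
    strip it (the default index behavior) everywhere else."""
    tokens = []
    for raw in query.lower().split():
        if raw in PUNCT_TERMS:
            tokens.append(raw)  # survives with its punctuation intact
        else:
            tokens.append(re.sub(r"[^\w]", "", raw))  # punctuation dropped
    return tokens

print(tokenize("C# generics sprintf_s foo-bar"))
```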
I think I've tried more than a couple of times to give some feedback about the search results. Seeing that nothing happened, and getting nothing but an automated response, I've stopped doing that.
Reporting flashing ads to the ad team worked better, they were removed later the same day.
Overall I agree with you, but even if we had that channel, many times the feedback would be: "Yup, that's a bad set of search results. We'll try to come up with a way to make it better, but it might be a while. Getting an algorithm to do better on this search will be hard."
$ wget http://bmtcinfo.com/
--2010-09-14 09:01:43-- http://bmtcinfo.com/
Resolving bmtcinfo.com... failed: Name or service not known.
wget: unable to resolve host address `bmtcinfo.com'
The url http://bmtcinfo.com/ just doesn't work in a browser or with wget. And trying to fetch the "www" version of the website, it does a 302 redirect to a deeper url. 302 redirects are the ones that don't pass PageRank. The last time we tried to crawl the page, we got an error--probably from the non-www version. I'll see what we can do to find this site better, but returning errors instead of content to Googlebot really hurts that effort.
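The 301-vs-302 distinction above reduces to a one-line check; this is just a restatement of the comment's claim (permanent 301 redirects pass PageRank, temporary 302s don't), not an official crawler rule:

```python
def passes_pagerank(status_code):
    """Per the explanation above: a permanent 301 redirect passes PageRank
    to its target, while a temporary 302 does not."""
    return status_code == 301

print(passes_pagerank(301), passes_pagerank(302))  # True False
```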
By the way, we provide a free webmaster console that would let the owner of this domain self-diagnose these issues. The "Fetch as Googlebot" feature lets the owner send Googlebot to fetch a url on their site and see what Googlebot saw, including errors and redirects. If the owner of bmtcinfo.com were to use that tool, they'd probably notice the errors pretty quickly.
Overall I still love Google, I must use the search service 50 times a day so complaints about shortcomings are a bit like grumbling about the paintwork on my free new car :-) So I'll try and identify two recurring (and relatively new-feeling) headaches:
1. quotes. When I put it "in quotes" I don't want it spell-checked, or cleaned, or made case-insensitive, or whatever else. I would rather get no results and experiment with other strings than think I've got results that turn out not to be exactly what I searched for. It seems like punctuation often gets stripped even if it's inside quotes. For some kinds of searches involving bits of source code, this can be a drag.
2. Content farming. I know you're constantly struggling against people gaming the system and so forth. I don't blame Google when I get 50 results of generic junk for obscure search terms... the "find [niche product] resellers, hints, tips!" types that are totally generic. But what does piss me off is that a few months ago Google offered a button that let me zap such results and label clutter as clutter. Now, on a deep search, I often spend several minutes trying to think of terms common enough to content farms that -excluding them will prune the results to the point where the remainder are worth checking one by one.
You remember that Bing ad where someone says something random and their geeky partner starts hypnotically chanting associated but unhelpful phrases, implying an overbroad result set? Sorry, but they had a point.
On content farms, we've definitely heard that feedback. One point up for debate is whether to respond with algorithms-only, or whether we should update our quality guidelines to call out low-quality content farms as a type of webspam. Both DuckDuckGo and Blekko seem delighted to remove sites that most people don't like from the search results. Here's a link for DDG for example: http://www.technologyreview.com/blog/mimssbits/25532/ . The question is: would you feel comfortable if Google removed results that a lot of people don't like from our index? And how do you balance the goal of reducing clutter and junk with the goal of being fair and comprehensive?
And if people want a fuller result set, you could say "more lower-quality links found - click here to view all," similar to the way searching in Gmail lets you search in Trash & Spam.
Farming-wise, I think you should probably keep all those results in your index, even the ones composed of nothing more than your top searches separated by random phrases! Even if it's not possible now, in the future it will be possible to score page content on whether it has semantic value and draw inferences about sites or entire domains that are filled with junk. That will be interesting and useful from security, economic, and scientific points of view. In the meantime, people will find useful analyses to run against that 'bad' data in your results which would not be practical if you purged too aggressively.
What I had envisioned (which might be a tad ambitious, but bear in mind that you already have 5% of my local CPU for the asking with the desktop tool installed) is per-user search filtering. I may like sculpture but hate politics, so I would always search for 'statue -liberty', you are the other way around so your searches tend more towards 'liberty -statue'; I would very much like to be able to have complex filters on the client side, either locally or on the client-facing parts of your servers - and not just for spam sites.
DDG takes the approach of allowing regex, which is a neat thing for the people who know enough to want it, and it would be interesting if search patterns and/or selective exclusions (as described above) could be stored locally, as either weighting tables or some sort of white/blacklist - always include wikipedia, never include about.com. I'm already running chrome and using a Nexus one, so perhaps some hashing could take place on my computers rather than increasing the load on your servers.
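A client-side filter of the kind described might look like the sketch below. The whitelist, blacklist, and exclusion pattern are hypothetical examples taken from this comment ("always include wikipedia, never include about.com", "statue -liberty"):

```python
import re
from urllib.parse import urlparse

# Hypothetical per-user preferences of the kind described above.
WHITELIST = {"wikipedia.org"}   # always include
BLACKLIST = {"about.com"}       # never include
EXCLUDES = [re.compile(r"\bliberty\b", re.I)]  # personal topic filter

def personal_filter(results):
    """Client-side pruning: whitelist wins, then the blacklist and
    per-user exclusion patterns prune the rest."""
    kept = []
    for r in results:
        host = urlparse(r["url"]).netloc.lower()
        if host.startswith("www."):
            host = host[4:]
        if any(host == d or host.endswith("." + d) for d in WHITELIST):
            kept.append(r)  # whitelist overrides the other filters
            continue
        if any(host == d or host.endswith("." + d) for d in BLACKLIST):
            continue
        if any(p.search(r["title"]) for p in EXCLUDES):
            continue
        kept.append(r)
    return kept

results = [
    {"url": "http://en.wikipedia.org/wiki/Statue", "title": "Statue of Liberty"},
    {"url": "http://www.about.com/statues", "title": "Statues"},
    {"url": "http://example.com/art", "title": "Liberty in sculpture"},
]
print([r["url"] for r in personal_filter(results)])
```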
The other reason besides spam is that lately I find myself wanting to do specialized searches, but I don't know how to specify the bounds of the search space. For example, I'm interested in law. But a lot of legal terms are in popular currency, so even if I search for "theft +legal" I may get tons of results for cheap car alarms or something. It would be fantastic if I could get a large set of results by specifying a large number of domain-specific terms - say, "tortfeasor privity precedent appellate" - and then hash and save that result set as a 'search space'. So then I could do more specific searches and know that my results would mostly come from websites devoted to the subject, with few that mention it only incidentally.
In actual fact, the legal resources searchable via Scholar and Books are fantastic. I just picked law as an example of where you might want to temporarily limit your search set because most people can appreciate the difference between writing specifically about legal topics vs things that just mention the subject in passing. If you want to learn how to write a good disclaimer, "legal disclaimer" is not a good start because every 3rd landing page on the net includes that phrase as boilerplate. If users could save and reuse result sets, we'd get more actionable results, you'd (maybe) get lower server loads but more importantly, every successful hit (where the user doesn't search again or try another result for several minutes) is an implicit vote for the relevance of the result to the set, and thus a valid input to a semantic classifier.
I am amazed at how many users still don't know about this feature. It is one of the best features in google search (that was added recently). Google should promote it better to its users.
While Google Instant is technically impressive, the feeling of speed is undone whenever I have to next manually restrict the results to the past year or 6 months or whatever (and sometimes experiment with multiple durations) in order to filter out discussions from 2008 and earlier. Same with using regular search as well.
Hence, might I suggest you guys consider something like inverting the default search, to only include pages created or modified within the past year (or whatever duration you calculate most optimal for returning relevant results)? Or perhaps changing the PageRank weighting of recency, at least for domains that change rapidly. I know this could cause some problems, and I'm not sure the web is quite at the age where this is necessary yet, but I think there's an inflection point coming soon where more searchers will expect recent results in their top 10 rather than years-old pages.
It might also be worth moving the duration filter up one tier of UI so it is clearly visible on the search results page and takes only one click instead of two to activate. The OP isn't the only one who recently needed searches restricted by age and didn't realize from the UI that this could be done. A few months ago I made a similar complaint on another forum and was informed of the same thing - use Search Tools -> Past X. But it's not obvious from the current UI.
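For reference, the "Past year" restriction maps to a URL parameter; historically this was tbs=qdr:y, though the parameter was never a documented, stable API, so treat its name as an assumption:

```python
from urllib.parse import urlencode

# qdr:d / qdr:w / qdr:m / qdr:y map to past day / week / month / year.
def recent_search_url(query, period="y"):
    return "https://www.google.com/search?" + urlencode(
        {"q": query, "tbs": "qdr:" + period}
    )

print(recent_search_url('opensolaris boot "error 16"'))
```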
QFT. I spent the beginning of the year teaching myself Rails, and I can't count the number of times I cursed Google's search results. Multiple different copies of the same mailing list post, just from different mirrors. And almost everything pointed to some old version of Ruby or Rails that wasn't relevant anymore.
And searching on Google Groups is worse.
The only real downside is the formatting of the results - I can get 11-12 results in a Google search in a full-height window and about 8 in DDG. Sure, some of that is due to the very nice "disambiguator" box (or whatever they call it), and the results are indeed generally higher quality, but it is also due to large title fonts and designey white space, so I personally would like a bit of that back.
(No I haven't sent this to DDG feedback because for all I know it is a personal peeve and everybody else likes it this way)
The way I see it, Gabriel's job is to decide whether it's a personal peeve or not. Your job is to ask for new features and "complain" about the old features, so that he can hear what real users think.
The design is continually in flux based on user feedback, but recently I've been stepping back to take a more holistic approach and really push it forward.
I will be working with a professional designer on this and the duck.co community (if anyone wants to participate). I hope this will become evident (and useful) within a few months from now. So by all means, give design feedback!
It's a shame, as it's not that hard a problem to solve. They should just copy the search results Google had in the early days :)
(Web apps get a pass, but if your site only needs to display text it should not require JS.)
I'll fix it soon but right now I have a server down.
(BTW, do you have any idea how much extra time you noscript guys add to web development efforts? Ugg.)
> (BTW, do you have any idea how much extra time you
> noscript guys add to web development efforts? Ugg.)
On this site, that wasn't really that hard. On more complex sites, it can be very hard.
...and I'm one of the "silly" web developers who actually tries to work with the 0.5% or so of people who have JS turned off. With jQuery and everything else out there, most web developers don't seem to bother.
The accepted way of doing this is to build your links to point to URLs, then use script to override them with whatever dynamic wackiness you think is helpful. You get all sorts of bonuses by doing it that way (such as ctrl-click, save target as, etc.) without any of the downside you go on about.
And it's not any harder to implement.
See, in order for the layout to work with a minimum number of images, there's a CSS trick in some overlapping layers. The JS in the page coincidentally extends one of those layers when it inserts the fairly unobtrusive text control links that allow you to scale the page text as large or as small as is comfortable to read on your display (something the noscript folks don't even realize isn't there). Without that element, the content layer wasn't being resized correctly.
I'm on your guys' side here. I know all the "accepted ways"; I have to explain them to my clients when I justify the costs they're charged, and the benefits. Sometimes they want to know how many people this will actually affect, and I have to tell them, "maybe a few", and then try to justify doing it anyway.
And, it is harder to implement. I can build a site that will look exactly right no matter what size display you're reading it on; fonts and images will all scale, and the site will look right at 800x600 or 1600x1200, without lots of scrolling or empty space. The catch is, to do that, I need JS to work, and spending time trying to figure out the least ugly way to display the site sans JS is not "not any harder".
Feel free to browse my page source, it's fairly easy to read, if unconventional in places.
By doing it any other way, you run into the issues you're running into.
Anyways, I'd love to switch to DDG, but too often it tells me "No more results. Try Google.", and almost every time a subsequent search on Google provides me with many more (relevant) links for the same query.
If DDG could be configured to basically be a proxy and fetch all of its results from Google, that'd be cool.
It's brilliant that there is still quality innovation going on in the search space. It's an old industry now, after all.
Maybe yegg could drop a comment on how he manages his time to do all this?
--All feedback gets pushed to my personal email inbox--gmail :)
--I try to do 0-inbox, i.e. keep my inbox at zero messages. This is usually not attainable, but means it functions essentially as a to-do list.
--I respond to all feedback ASAP (unless anonymous).
--If it is something simple that I know how to fix I try to do it that day.
--If it is something a bit more complicated, I respond, and put it on the bug queue. I'm currently using http://speckleapp.com for that, made by an HNer (http://elliottkember.com/).
--Every few days I set aside a large block of time to go through that bug queue. If easily fixable, I fix it and respond back: "Fixed!" If not, I put it in another category, and explain why it is a complex issue and how and when it might get fixed (or not).
If someone wants to add Duck Duck Go to search engines in Opera, here's how (my own screenshot): http://twitpic.com/24ex2e
No such hackery required - it's Opera after all! There's a "Create Search" context menu these days, and it works on any text box.
In summary, just right click the search text box, click "Create Search". Even the DDG homepage says so!
As for Google, they have gone past their prime. As the other comments note, 2008 was the peak. Yahoo is in the midst of what may be the longest running identity crisis ever to hit a company. And the name of the game for both Google and Bing is to keep you on their site with their advertisements for as long as they possibly can. Like some tentacled monster, they don't want to just serve you and let you go. The cheap tricks masquerading as a flashy UI ruin the user experience and make me not want to go back. Duck Duck Go actually keeps you coming by serving salient information (aka 'value') and then lets you get on with life. Kinda reminds me of the difference between the personality ethic and the character ethic from Covey's 7HHEP.
This was a great post. Thanks for putting it forward.
Instant search has not saved me any time and a couple times it has gotten in my way.
I searched for this a while ago:
'impact of basic research on GDP'
Google returned no relevant results, while the first result on Bing was relevant (and likewise on Duck Duck Go and Yahoo, which use Bing).
Edit: It seems that now google also gives back relevant result on my search query, so this example is not relevant anymore.
Oh well, whatever. I'm sure they logged an exception and will fix that soon.
As for people gaming the results, it is a huge problem, but it's a problem that is bound to plague any market leader in search.
I do really wish I could have a "NO" button when it asks "Did you mean this?". It might give some feedback.
Was Yahoo! ever in the search business, among the tech-savvy?
This is a serious question. I've been webbing since '93, but I never made any serious use of the Yahoo! search box. I thought I was typical. Am I wrong?
I wonder why google can't store a list of results I simply never want returned.
Actually, they're doing it for other things too:
[web based email]
[adverts for publishers]
I'm sure there's more.
Wow. That truly is an epic fail. Google have removed Google from the SERPs.
LOL! I love(d?) Google, but they are definitely screwing something up somewhere. These rubbishy results occur when I'm logged out too (well, I don't have Web History or personalized search enabled so I guess that's n/a)
In any case, I still love gmail, everything else has been crap.
Also, try replacing Google EVERYWHERE on an Android device - it's quite deeply embedded and hard to change in 100 places, and browsers still use Google as the search engine.