Hacker News

My searching has split pretty evenly between Google and Bing. There was a relatively recent change to Google's search functionality that really reduces its usefulness: I can type an exact phrase in double quotes like I have for years, but often it will "correct" it to something else. These corrections involve either changing a term into another word or dropping words from my query altogether. It completely defeats the point of double quotes and often delivers wrong results. When this happens, I switch to Bing and often get the results I'm looking for.



I think that double quotes have had their Eternal September. Maybe "regular people" used them for mundane search tasks and didn't find what they wanted, Google detected this through their search metrics, and now they've decided to nudge double-quoted search terms towards synonyms and typo corrections... verbatim no longer.

For the majority of searchers, Google may actually be performing better, but for a minority of hobbyists/specialists/experts looking for niche and jargon terms, it's anything but an improvement.


Presumably there's a technical challenge with offering to match exact strings at this scale?

After all, the ability to do it was Google's main improvement over AltaVista at the time.

I'd love for someone in-the-know to share some of the technical details. I'd bet the sloppiness with quoted strings could be about dumbing down, but it could equally be a consequence of some implementation choice at this scale and the way the data is indexed.


My humble guess is you'll never get these sorts of technical details from an insider, so don't get your hopes up :-)


You never know if you don't ask :) I've never worked on large-scale text indexing, so it wouldn't have to be a particularly sophisticated insight.

I'm speculating that the majority of use cases need some kind of word equivalency, so the main index is built that way. Whether that's just for the search terms of the average user, or for the ranking/clustering used to derive which pages are important. Then any additional index is a trade-off against the magnitude of resources it would consume.
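A minimal sketch of the trade-off being speculated about here: if the main inverted index stores normalized (stemmed) tokens, the exact surface forms are gone, so a quoted exact-match query can only be answered from a separate verbatim index that costs extra storage. The stemmer, documents, and names below are all made up for illustration; this is not how Google's index actually works.

```python
# Illustrative sketch: a normalized (stemmed) inverted index vs. a verbatim one.
from collections import defaultdict

def stem(token):
    # Toy normalizer: lowercase, then crudely strip a few suffixes.
    t = token.lower()
    for suffix in ("ing", "es", "s"):
        if t.endswith(suffix) and len(t) > len(suffix) + 2:
            return t[: -len(suffix)]
    return t

docs = {
    1: "Indexing strategies for search engines",
    2: "The engine indexes documents nightly",
}

normalized = defaultdict(set)  # stem -> doc ids (compact, but lossy)
verbatim = defaultdict(set)    # exact token -> doc ids (extra storage)

for doc_id, text in docs.items():
    for token in text.split():
        normalized[stem(token)].add(doc_id)
        verbatim[token].add(doc_id)

# A plain query for "indexes" matches both docs via the shared stem "index"...
print(sorted(normalized[stem("indexes")]))      # [1, 2]
# ...but an exact-match query can only be served from the verbatim index.
print(sorted(verbatim.get("indexes", set())))   # [2]
```

Keeping both indexes roughly doubles the token-level storage, which is one plausible reason a search engine might serve quoted queries from the normalized index anyway and only approximate exactness.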


I've thought so too but it goes deeper.

If this were the entire reason, they shouldn't have botched the verbatim option as well.

My guess is incompetence or arrogance or something.

I don't mean incompetence as in dumb, and I'm not saying I'm smarter, but clearly something the previous maintainers had is now missing.

My best guess is they threw out a bunch of integration tests somewhere around 2006 and never recovered ;-)


More search engines and/or algorithms is the answer IMO. Having one singular page and answer per query describing the world's online information is far from ideal, particularly when some search engines have a predisposition to show their own datasets over organic results wherever possible.

Anecdotally, Google tends to favour newer content so there's a lot of content recycling going on serving no purpose other than pleasing the algorithm.

And of course if you're, say, a UK retail site and you suffer an algorithmic penalty, that's potentially half of your traffic gone (Google has 95% of the UK search market, and around 50% of information discovery is via search). A lot of power for one company to have; too much.


I use Presearch, which is a decentralized search engine built on blockchain tech. It has nice links in the sidebar so you can open your search on Google etc. if the results aren't stellar. It's actually pretty good.

https://presearch.org/signup?rid=

Use the above if you want, or you can attach my ref id: 2320736 if you so choose.

You only get like 25 creds, but I'm trying to get to 1k so I can start staking without having to pay for the privilege.

Or you might just be able to use https://presearch.org/ to search without signing up at all.


What a bunch of nonsense that is.



