How is search so bad? A case study (svilentodorov.xyz)
319 points by Tenoke 4 months ago | 394 comments



I have been thinking about the same problem for a few weeks. The real problem with search engines is that so many websites have hacked SEO that there is no meritocracy left. Results are not sorted by relevance or quality but by how well SEO experts have made the results favor themselves. I can hardly find anything in depth on any topic by searching on Google anymore. It's just surface-level knowledge from competing websites that only want to make money off pageviews.

It kills my curiosity and intent with fake knowledge and a bad experience. I need something better.

However, it will be interesting to figure out the heuristics that could deliver better-quality search results today. When Google started, it had a breakthrough algorithm: rank pages based on the number of pages linking to them. That is completely meritocratic as long as people don't game it for higher rankings.

A new breakthrough heuristic today would look totally different, but it should be just as meritocratic and, ideally, resistant to gaming.


> I can hardly find anything in depth on any topic by searching on Google anymore.

> It kills my curiosity and intent with fake knowledge and a bad experience. I need something better.

It's hard for me to take this seriously when Wikipedia exists and almost always ranks very highly in search results for "knowledge topics". Between Wikipedia and the sources cited on Wikipedia, I find the depth of almost everything worth learning about to be far greater than I can remember in, say, the early 2000s, which seems like the "peak" of Google before SEO became so influential.

In general, I think a lot of people in this thread are looking at the "old internet" through rose-tinted glasses. The only thing that has maybe gotten worse is "commercial" queries, like those for products and services. Everything else is leaps and bounds better.


There is a lot of stuff you won't find on Wikipedia that is now buried, one example being old forum threads containing sage wisdom from fellow enthusiasts on any given topic. You used to be able to search for an interest and half a dozen relevant forums would come up on page 1.

These days I rarely see a forum result appear unless I know the specific name of the forum to begin with and use the site: operator.

Another problem these days, unrelated to search but dooming it in the process, is all these companies naming themselves or their products after existing English words rather than making up something unique. It's usually fine for major companies, but I think a lot of smaller companies/products shoot themselves in the foot with this and don't realize it. I was once looking for some good discussion and comparison of a bibliography tool called Papers, and getting anything relevant at all with a name like that was just a pit of suffering.


Add inurl:forum to the query. Google used to have a filter "discussions", but they removed it for some reason. Nowadays I usually start with https://hn.algolia.com/ and site:reddit.com when I want to find a discussion.
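
For illustration, a minimal sketch of skipping the general-purpose engine entirely and hitting Algolia's public HN Search API directly. The endpoint and parameter names follow Algolia's published HN API, but treat the exact field names as assumptions worth checking against the docs:

  import time
  import requests

  def search_hn(query, days=30):
      """Return recent HN stories matching `query`, newest first."""
      cutoff = int(time.time()) - days * 86400
      resp = requests.get(
          "https://hn.algolia.com/api/v1/search_by_date",
          params={
              "query": query,
              "tags": "story",
              "numericFilters": "created_at_i>%d" % cutoff,
          },
          timeout=10,
      )
      resp.raise_for_status()
      return [(h.get("title"), h.get("url")) for h in resp.json().get("hits", [])]

  for title, url in search_hn("best phone to buy"):
      print(title, url)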


The fact that Wikipedia exists, is frequently (though not always) quite good, has citations and references, and ranks highly or is used directly for "instant answers" ...

... still does nothing to answer the point that Web search itself is unambiguously and consistently poorer now than it was 5-10 years ago.

Yes, I find myself relying far more on specific domain searches, either in the Internet sense or by searching for books/articles on topics rather than Web pages. Because so much more traditionally published information is online, the net of online search has actually improved, but mostly not because Web-native resources (webpages, discussions, etc.) got better; it's because old-school media is now Web-accessible.


This. More and more, I have been finding that good books provide better learning than the internet.

You search for quality books online, mostly through discussion forums (since search fails here) or by following the references in other books and articles, and then spend time digesting them.


Wikipedia is surface-level knowledge. Try using Wikipedia to find the AM4 socket's pinout. Do a Google search and you find several people asking the question but no answers. On the other hand, you can easily find that for an old 8086 CPU.

What's sad is that Google has generally indexed the pages I want; it's just getting harder to actually find them.


Did you ever find the pinout manual for AM4? Your comment sent me down a Google hole...

Clearly, they don't want it available, because the tech docs they host stop at AM3b. I was hoping an X470 (or other flavor) motherboard manufacturer would have something floating around...


Basically, I think you can divide search into commercial-interest searches and non-commercial-interest searches. I can find deep discussions of algorithms curated quite nicely. But information on, say, curtains will be as bad as the OP says.


> A new breakthrough heuristic today would look totally different, but it should be just as meritocratic and, ideally, resistant to gaming.

I wonder how much of this could be won back by penalizing:

1. The number of JavaScript dependencies
2. The number of ads on the page, or the depth of the ad network

This might start a virtuous circle, but in the end this is just a game of cat-and-mouse, and websites might optimize for this as well.

What we might need to break this is a variety of search engines that use different criteria to rank pages. I suspect it would be pretty hard, if not impossible, to optimize for all of them.

And in any case, frequently change the ranking algorithms to combat over-optimization by the websites (as is classically done against ossification in protocols, or against any overfitting to outside forces in a competitive system).
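
A toy sketch of the penalty idea above, just to make the shape of it concrete. The weights and the feature fields are invented; a real ranker would obviously need real signals:

  from dataclasses import dataclass

  @dataclass
  class PageFeatures:
      relevance: float       # textual relevance score from the existing ranker
      js_dependencies: int   # number of distinct external script origins
      ad_slots: int          # number of detected ad slots on the page
      ad_network_depth: int  # redirect hops the ad requests go through

  def penalized_score(p, js_w=0.05, ad_w=0.10, depth_w=0.15):
      penalty = js_w * p.js_dependencies + ad_w * p.ad_slots + depth_w * p.ad_network_depth
      return p.relevance - penalty

  heavy = PageFeatures(relevance=1.0, js_dependencies=12, ad_slots=8, ad_network_depth=3)
  lean = PageFeatures(relevance=0.9, js_dependencies=1, ad_slots=0, ad_network_depth=0)
  assert penalized_score(lean) > penalized_score(heavy)  # leaner page wins despite lower relevance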


You could even have all this under one roof: one common search spider that feeds this ensemble of different ranking algorithms to produce a set of indices, and then a search engine front end that round-robins queries out between the different indices. (Don’t like your query? Spin the algorithm wheel! “I’m Feeling Lucky” indeed.)
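
Something like this hypothetical dispatcher, where the tiny document set and the two rankers stand in for a shared crawl feeding several independently tuned indices:

  import itertools

  def rank_by_links(query, docs):
      return sorted(docs, key=lambda d: d.get("inlinks", 0), reverse=True)

  def rank_by_text_match(query, docs):
      terms = query.lower().split()
      return sorted(docs, key=lambda d: sum(t in d.get("text", "").lower() for t in terms),
                    reverse=True)

  class RoundRobinSearch:
      def __init__(self, docs, rankers):
          self.docs = docs
          self._wheel = itertools.cycle(rankers)

      def search(self, query):
          ranker = next(self._wheel)  # spin the algorithm wheel
          return ranker(query, self.docs)

  engine = RoundRobinSearch(
      docs=[{"url": "a", "text": "cheap phones review", "inlinks": 3},
            {"url": "b", "text": "phone buying guide", "inlinks": 10}],
      rankers=[rank_by_links, rank_by_text_match],
  )
  print([d["url"] for d in engine.search("phone review")])  # this call ranked by links
  print([d["url"] for d in engine.search("phone review")])  # this call ranked by text match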


The Common Crawl is a thing already. Unfortunately, a "full" text crawl of the internets is a YUUUGE amount of data to manage, and I can't think of anything that could change that in the foreseeable future. That's why I think providing a federated Web directory standard, a la ODP/DMOZ except not limited to a single source, would be a far more impactful development.


Unfortunately, a "full" text crawl of the internets is a YUUUGE amount of data to manage

Maybe instead of a problem, there is an opportunity here.

Back before Google ate the intarwebs, there used to be niche search engines. Perhaps that is an idea whose time has come again.

For example, if I want information from a government source, I use a search engine that specializes in crawling only government web sites.

If I want information about Berlin, I use a search engine that only crawls web sites with information about Berlin, or that are located in Berlin.

If I want information about health, I use a search engine that only crawls medical web sites.

Each topic is still a wealth of information, but siloed enough that the amount of data could be manageable for a small or medium-sized company. And the market would keep the niches from getting so small that they stop being useful. A search engine dedicated to Hello Kitty lanyards isn't going to monetize.


I'd be happy with something like Searx [1,2,3]

[1] https://en.wikipedia.org/wiki/Searx
[2] https://asciimoo.github.io/searx/
[3] https://stats.searx.xyz/

featuring the semantic map of [4] https://swisscows.ch/

incorporating [5] https://curlie.org/ and Wikipedia, and something like Yelp/Yellow Pages embedded in OpenStreetMap for businesses and points of interest, with a no-frills interface showing the history (via timeslide?) of edits.

Bang! Done!


That's the problem that web directories solve. It's not that you're wrong, it's just largely orthogonal to the problem that you'd need a large crawl of the internets for, i.e. spotting sites about X niche that you wouldn't find even from other directly-related sites, and that are too obscure, new, etc. to be linked in any web directory.


That's the problem that web directories solve

Not really. A web directory is a directory of web sites. I can't search a web directory for content within the web sites, which is what a niche search engine would do.


On the other hand, the niche search engine depends upon having such a web directory (the list of sites to index).



Don't forget the search engine search engine!


You don’t really need to store a full text crawl if you’re going to be penalizing or blacklisting all of the ad-filled SEO junk sites. If your algorithm scores the site below a certain threshold then flag it as junk and store only a hash of the page.

Another potentially useful approach is to construct a graph database of all these sites, with links as edges. If one page gets flagged as junk then you can lower the scores of all other pages within its clique [1]. This could potentially cause a cascade of junk-flagging, cleaning large swathes of these undesirable sites from the index.

[1] https://en.wikipedia.org/wiki/Clique_(graph_theory)
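
A rough sketch of the clique idea, assuming an undirected link graph and a hard flag rather than a score (networkx used for brevity; the graph and the cascade rule are invented for illustration):

  import networkx as nx

  def propagate_junk(graph, seed_junk):
      junk = set(seed_junk)
      changed = True
      while changed:  # keep cascading until no clique adds new junk
          changed = False
          for clique in nx.find_cliques(graph):
              if junk.intersection(clique) and not junk.issuperset(clique):
                  junk.update(clique)
                  changed = True
      return junk

  G = nx.Graph()
  # a tight cluster of SEO sites cross-linking each other, plus one honest page
  G.add_edges_from([("seo1", "seo2"), ("seo2", "seo3"), ("seo1", "seo3"), ("seo3", "blog")])
  print(propagate_junk(G, {"seo1"}))
  # Note the cascade also reaches "blog" through the seo3-blog edge, which is why a
  # real system would probably lower scores rather than hard-flag entire cliques.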


JavaScript, which Google coincidentally pushed and still pushes for, doesn't exactly make the web easier to crawl either.


What if SEO consultants aren't gaming the system, but search and the web are being optimized for “measurable immediate economic impact”, which at this moment means ad revenue, because the web itself is otherwise unmonetizable and unable to generate value?

I don't like the whole concept of SEO, and I don't like the way the web is today, but I think we should stop and think before resorting to "an immoral few are destroying things; if we unfuck it we reclaim what we deserve" type simplifications.


> search and the web are being optimized for “measurable immediate economic impact”, which at this moment means ad revenue

So much is obvious. The discussion is about whether there is a less shitty metric.


Merging js deps into one big resource isn't difficult. The number of ads point is interesting though. How would one determine what is an ad and what is an image? I have my ideas, but optimizing on this boundary sounds like it would lead to weird outcomes.


Adblockers have to solve that problem already. And it's actually really easy, because unfortunately "ads" aren't just ads; they're also third-party code that tries to track you as you browse the site. So it's reasonably easy to spot them and filter them out.
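
As a very naive sketch of the core decision (is this request third-party relative to the page?), ignoring the public-suffix subtleties that real blockers and their curated lists handle:

  from urllib.parse import urlparse

  def registrable_domain(url):
      # crude two-label guess; wrong for suffixes like .co.uk
      host = urlparse(url).hostname or ""
      parts = host.split(".")
      return ".".join(parts[-2:]) if len(parts) >= 2 else host

  def is_third_party(request_url, page_url):
      return registrable_domain(request_url) != registrable_domain(page_url)

  print(is_third_party("https://cdn.adnetwork.example/pixel.js",
                       "https://news.example.com/article"))  # True
  print(is_third_party("https://static.example.com/app.js",
                       "https://news.example.com/article"))  # False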


Though advertisers are already using first-party redirection. The future of adblockers is bleak.

https://github.com/uBlockOrigin/uBlock-issues/issues/780/


Back in the early days of banner ads, a CSS-based approach to blocking was to target images by size. Since advertising revolved around specific standards of advertising "units" (effectively: sizes of images), those could be identified and blocked. That worked well, for a time.

This is ultimately whack-a-mole. For the past decade or so, point-of-origin based blockers have worked effectively, because that's how advertising networks have operated. If the ad targets start getting unified, we may have to switch to other signatures:

- Again, sizes of images or DOM elements.

- Content matching known hash signatures, or constant across multiple requests to a site (other than known branding elements / graphics).

- "Things that behave like ads behave" as defined by AI encoded into ad blockers.

- CSS / page elements. Perhaps applying whitelist rather than blacklist policies.

- User-defined element subtraction.

There's little in the history of online advertising that suggests users will simply give up.
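
For the first signature in that list, a sketch of size matching against a few well-known IAB ad unit dimensions; the element sizes below are hypothetical (width, height) pairs you'd pull from a rendered DOM:

  COMMON_AD_SIZES = {
      (728, 90),   # leaderboard
      (300, 250),  # medium rectangle
      (160, 600),  # wide skyscraper
      (320, 50),   # mobile banner
      (300, 600),  # half page
  }

  def looks_like_ad(width, height, tolerance=2):
      return any(abs(width - w) <= tolerance and abs(height - h) <= tolerance
                 for w, h in COMMON_AD_SIZES)

  elements = [(728, 90), (640, 360), (301, 249)]
  print([looks_like_ad(w, h) for w, h in elements])  # [True, False, True]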


Some of those techniques will make the whole experience slow compared to the current network request filters and DNS blockers.

And that will probably be blocked or severely locked down by the most popular browser, Chrome.

I don't need to give advertisers data myself when someone else I know can. I really doubt it is easy to throw off the Chrome monopoly at this stage. I presume we will see a chilling effect before anything moves the way IE did.


At this point, I'll take slow over shitshow.


That has been fixed since 1.24.3b7 / https://github.com/gorhill/uBlock/releases ?


In the early days of DMOZ, some editors would rank sites lower based on the number of ads they had.


I don't think DMOZ had ranking per se? They could mark "preferred" sites for any given category, but only a handful of them at most, and with very high standards, i.e. it needed to be the official site or "THE" definitive resource about X.


You are correct, the sites weren't "ranked" the same way that Google ranks sites now. But there were preferred sites, and each site had a description written by an editor who could be fairly unpleasant if they wanted to.

I had a site that appeared in DMOZ, and the description was written in such a way that nobody would want to visit it. But it was one of only a few sites on the internet at the time with its information, so it was included.


Given that Google makes money off the ads, that would be hard. DuckDuckGo could pull it off. You need another revenue stream though.


Google has taken on so many markets that I don't think they can do anything reasonably well (or disruptive) without conflicting interests. A breakup is overdue: if they didn't control both search and ads, the web would be a lot better nowadays. If they didn't control web browsers as well, standards would be much more important.


> What we might need to break this is ...

Create a core protocol at the same level as DNS etc., that web servers can use to offer an index of everything they serve/relay. A multitude of user-side apps may then query that protocol, with each app using different algorithms, heuristics and offering different options.
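
As a hypothetical sketch of what a server-published index might contain: a plain term-to-URLs mapping the site generates for its own pages and serves at some agreed-upon path (the "/.well-known/site-index.json" name below is made up, not an existing standard):

  import json
  import re
  from collections import defaultdict

  def build_site_index(pages):
      """pages maps URL -> extracted plain text."""
      index = defaultdict(list)
      for url, text in pages.items():
          for term in sorted(set(re.findall(r"[a-z0-9]+", text.lower()))):
              index[term].append(url)
      return {"version": 1, "index": index}

  pages = {
      "https://example.org/post/1": "comparing budget phones in 2020",
      "https://example.org/post/2": "repairing an old amplifier",
  }
  # what the server would publish, e.g. at /.well-known/site-index.json (hypothetical path)
  print(json.dumps(build_site_index(pages), indent=2))

Client-side apps could then fetch many of these small indices and merge them with whatever ranking heuristics they prefer.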


I've been thinking along similar lines for a year or so now.

There are several puzzling omissions from Web standards, particularly given that keyword-based search was part of the original CERN WWW discussion:

http://info.cern.ch/hypertext/WWW/Addressing/Search.html

IF we had a distributable search protocol, index, and infrastructure ... the entire online landscape might look rather different.

Note that you'd likely need some level of client support for this. And the world's leading client developer has a strongly-motivated incentive to NOT provide this functionality integrally.

A distributed self-provided search would also have numerous issues -- false or misleading results (keyword stuffing, etc.) would be harder to vet than the present situation. Which suggests that some form of vetting / verifying provided indices would be required.

Even a provided-index model would still require a reputational (ranking) mechanism. Arguably, Google's biggest innovation wasn't spidering, but ranking. The problem now is that Google's ranking ... both doesn't work, and incentivises behaviours strongly opposed to user interests. Penalising abusive practices has to be built into the system, with those penalties being rapid, effective, and for repeat offenders, highly durable.

The problem of potential for third-party malfeasance -- e.g., engaging in behaviours appearing to favour one site, but performed to harm that site's reputation through black-hat SEO penalties, also has to be considered.

As a user, the one thing I'd most like to be able to do is specify blacklists of sites / domains I never want to have appear in my search results. Without having to log in to a search provider and leave a "personalised" record of what those sites are.

(Some form of truly anonymised aggregation of such blocklists would, of course, be of some use, and facilitating this is an interesting challenge.)
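
The local blocklist part, at least, is almost trivially sketchable; the domains below are placeholders, and the point is that the filtering happens entirely on the client, so no preference is stored with the search provider:

  from urllib.parse import urlparse

  BLOCKLIST = {"contentfarm.example", "scraper-mirror.example"}

  def allowed(url):
      host = (urlparse(url).hostname or "").lower()
      return not any(host == d or host.endswith("." + d) for d in BLOCKLIST)

  results = [
      "https://www.contentfarm.example/10-best-anything",
      "https://forum.example.org/thread/678",
  ]
  print([r for r in results if allowed(r)])  # only the forum thread survives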


I too have been thinking about these things for a long time, and I also believe a better future is going to include "aggregation of such blocklists would, of course, be of some use, and facilitating this is an interesting challenge."

I decided it is time for us to have a bouncer-bots portal (or multiple). This would help not only with search results but could also help people when using Twitter or similar - good for both the decentralized and the centralized web.

My initial thinking was that these would be 'pull' bots, but I think they would be just as useful, and more used, if they were active browser extensions.

This way people can choose which type of censoring they want, rather than relying on a few others to choose.

I believe in creating some portals for these, similar to ad-block lists; people could choose to use Pete's TooManyAds bouncer and/or Sam's ItsTooSexyForWork bouncer.

Ultimately I think the better bots will have switches where you can turn on and off certain aspects of them and re-search, or pull the latest Twitter/Mastodon things.

I can think of many types of blockers that people would want, and some that people would want part of - so either varying degrees of blocking sexual things, or varying bots for varying types of things. Maybe some have sliders instead of switches.

Make them easy to form and comment on, and provide that info to the world.

I'd really like to get this project started. I'm not sure what the tooling should be, or what the backup would be if it started out as a browser extension but then got booted from the Chrome store or whatever.

Should this / could this be a good browser extension? What languages / skills are required for making this? It's definitely on my future to-do list.


There are some ... "interesting" ... edge cases around shared blocklists, most especially where those:

1. Become large.

2. Are shared.

3. And not particularly closely scrutinised by users.

4. Via very highly followed / celebrity accounts.

There are some vaguely similar cases of this occurring on Twitter, though some mechanics differ. Celebs / high-profile users attract a lot of flak and take to using shared blocklists. Those get shared not only among celeb accounts but also among their followers, and because celebs themselves are a major amplifying factor on the platform, being listed effectively means disappearing from it. That is particularly critical for those who depend on Twitter reach (some artists, small businesses, and others).

Names may be added to lists in error or out of malice.

This blew up in the summer of 2018 and carried over to other networks.

Some of the mechanics differ, but a similar situation playing out over informally shared Web / search-engine blocklists could have similar effects.


Create a core protocol at the same level as DNS etc., that web servers can use to offer an index of everything they serve/relay

Isn't that pretty much a site map?

https://en.wikipedia.org/wiki/Sitemaps


A sitemap simply tells you what pages exist, not what's on those pages.

Systems such as lunr.js are closer in spirit to a site-oriented search index, though that's not how they're presently positioned; instead, they offer JS-based, client-side search for otherwise static websites.

https://lunrjs.com
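
The underlying idea is simple enough to sketch (this shows the spirit of a client-side site index, not lunr.js's actual API): build an inverted index over page text once, then answer queries locally by intersecting posting lists.

  import re
  from collections import defaultdict

  def tokenize(text):
      return re.findall(r"[a-z0-9]+", text.lower())

  def build_index(pages):
      index = defaultdict(set)
      for url, text in pages.items():
          for term in tokenize(text):
              index[term].add(url)
      return index

  def search(index, query):
      postings = [index.get(t, set()) for t in tokenize(query)]
      return set.intersection(*postings) if postings else set()

  idx = build_index({"/about": "a static site about amateur radio",
                     "/logs": "contact logs and antenna notes"})
  print(search(idx, "antenna logs"))  # {'/logs'}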


How would this help anything? It would make the blackhat SEO even easier, if anything.


The results could be audited.

Fail an audit, lose your reputation (ranking).

The basic principle of auditing is to randomly sample results. BlackHat SEO tends to rely on volume in ways that would be very difficult to hide from even modest sampling sizes.
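
A sketch of what that sampling audit could look like, assuming the self-reported index boils down to (term, url) claims; the fetch step is deliberately crude:

  import random
  import requests

  def fetch_text(url):
      # crude: a real auditor would extract text from HTML properly
      return requests.get(url, timeout=10).text.lower()

  def audit(claims, sample_size=50):
      """claims: list of (term, url) pairs from a site's self-reported index.
      Returns the fraction of sampled claims whose term actually appears on the page."""
      sample = random.sample(claims, min(sample_size, len(claims)))
      ok = sum(1 for term, url in sample if term in fetch_text(url))
      return ok / len(sample)

A site whose audit score drops below some threshold would lose reputation, and keyword stuffing at volume is hard to hide from even a modest sample.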


How do you stop the server lying?

If a good site is on shared hosting, will it always be dismissed because of the signal of the other [bad] sites on that same host? (You did say at DNS level, not domain level.)


> Create a core protocol at the same level as DNS etc., that web servers can use to offer an index of everything they serve/relay.

So, back to gopher? That might actually work!


Google already penalizes based on payload size/download speed.


> 1. The number of javascript dependencies

How about we don't start looking at /how/ a site is made when it's already difficult to sort out /what/ it is.


The real reason why search is so bad is that Google is downranking the internet.

I should know - I blew the whistle on the whole censorship regime and walked 950 pages to the DOJ and media outlets.

--> zachvorhies.com <--

What did I disclose? That Google was using a project called "Machine Learning Fairness" to rerank the entire internet.

Part of this beast has to do with a secret Page Rank score that Google's army of workers assign to many of the web pages on the internet.

If wikipedia contains cherry picked slander against a person, topic or website then the raters are instructed to provide a low page rank score. This isn't some conspiracy but something openly admitted by Google itself:

https://static.googleusercontent.com/media/guidelines.raterh...

See section 3.2 for the "Expertise, Authoritativeness and Trustworthiness" score.

Despite the fact that I've had around 50 interviews and countless articles written about my disclosure, my website zachvorhies.com doesn't show up on Google's search index, even when using the exact url as a query! Yet bing and duckduckgo return my URL just fine.

Don't listen to the people who say that it's some emergent behavior from bad SEO. This is deliberate sabotage of Google's own search engine in order to achieve the political agenda of its controllers. The stockholders of Google should band together in a class-action lawsuit and sue the C-level executives for negligence.

If you want your internet search to be better then stop using Google search. Other search engines don't have this problem: I'm looking at qwant, swisscows, duckduckgo, bing and others.

~Z~


Google's search rankings are based on opinions held by other credible sources. This isn't really blowing the whistle when, as you admitted, Google admits this openly.

And maybe your site doesn't get ranked well because it's directly tied to Project Veritas. I don't like being too political, especially on HN and on an account tied to my real identity, but Project Veritas and its associates exhibit appalling behavior in duplicity and misdirection. I would hope that trash like this does get pushed to the bottom.


In a political context, "credible" is often a synonym for "agrees with me". Anyone ranking "page quality" should be conscious of and try to avoid that, and yet the word "bias" doesn't even appear in the linked guidelines for Search Quality Raters.

Of course Google's own bias (and involvement in particular political campaigns) is well known, and opposed to Project Veritas, so it's quite possible that you are right and Google is downranking PV.

Would that be good? Well, that's an opinion that depends mostly on the bias of the commentator.


https://en.wikipedia.org/wiki/Project_Veritas

I doubt this affected search rankings but Project Veritas does have a ton of credibility issues.


And so does wikipedia.


Project Veritas does not have credibility issues. They have never issued a single retraction.

If this surprises you, then welcome to the systematic bias of wikipedia.


[flagged]


Two wrongs don't make a right – but more importantly, Project Veritas deliberately uses lies and deception to obtain its footage, which it then edits to remove context. That crosses a line far beyond run-of-the-mill reporting bias.


The point of that story isn't "two wrongs make a right", but that the left wing still accepts CNN just as the right wing still accepts PV.

"Gotchas" and "gaffes" are taken out of context all the time. And when reporters lie to go undercover and obtain a story, they're usually hailed as heroes.

These objections are only used to "discredit" opposing viewpoints. People don't object when sources they agree with use the same tactics.


Why the assumption of left or right, when at least in the US most people identify as independents?

And to be even more fair, you aren’t talking about “people” here but the OP who can clearly state their own personal preferences, without the need for you to construct a theory of hidden bias.


> Why the assumption of left or right, when at least in the US most people identify as independents?

Most people don’t identify as independents, and, anyhow, studies of voting behavior show that most independents have a clear party leaning between the two major parties and are just as consistently attached to the party they lean toward as people who identify with the party.

So not only is it the case that most people don't identify as independents, most of those who do aren't distinguishable from self-identified partisans when it comes to voting behavior.


On the contrary, according to relatively recent Pew numbers from 2018 [1], around ~38% of Americans identify as independents, with ~31% Democrat and ~26% Republican. To be fair, many of them lean towards one party or another (your last point acknowledged), but independents are much more important in politics than they are usually given credit for.

[1] https://www.pewresearch.org/fact-tank/2019/05/15/facts-about...


> On the contrary, according to relatively recent Pew numbers from 2018 [1], around ~38% of Americans identify as independents, with ~31% Democrat and ~26% Republican.

That's not “on the contrary”; 38% < 50%; most Americans don't identify as independents.

Heck, even 43% (the number that don't identify as either Republicans or Democrats) is less than 50%; most Americans specifically identify as either Democrats or Republicans.

> but independents are much more important in politics than they are usually given credit for.

On the contrary, independents are given outsize importance by the major media, because they are treated as just as large of a group as they actually are, but are treated as if their voting behavior was much more different from that of self-identified partisans than it actually is.


You got me in a technically correct way, which is, as they say, the best kind of correct. I think the crux is the keyword "majority" being defined as more than 50%, so I should have worded my statement better with "most", "plurality" or "relative majority" instead.

> most Americans specifically identify as either Democrats or Republicans

Now you just did the same thing I did but in reverse! See my comments about more specific words for plurality such as "most".

> independents are given outsize importance by the major media

I don't think independents are given much importance at all by the major media, but I have increasingly disconnected from that circus too so I might not be a good judge of it.


> Now you just did the same thing I did but in reverse!

No, I didn't.

> See my comments about more specific words for plurality such as "most".

“Most” (as an adjective applied to a group) is for a majority, not a plurality, but that's okay, because 57% is an absolute majority, anyway, which is why my reference, which did use “most” for a majority, was not what you did.


I disagree with your definition of most.

https://electowiki.org/wiki/Majority

Maybe not the best source, but quite a few sites returned similar verbiage about most usually but not always referring to a plurality. I'm open to correction on this point, and am genuinely curious about this meta argument now. Having a hard time finding a statistical dictionary that references the words we are using here (majority, most, etc).


Can I even reply beyond this point?


Are you asking rhetorically in a facetious way or a real question? If the former am I missing something (like maybe I said something dumb, it's happened before)?


  Why the assumption of left or right, when at least
  in the US most people identify as independents?
Do you wish to cite a source for this or do you wish to gaslight here?

  Voters declaring themselves as independent of a major
  political party rose nationally by 6 percentage points
  to 28 percent of the U.S. electorate between 2000 and 
  last year’s congressional midterm contest in states 
  with party registration, analyst Rhodes Cook wrote 
  in Sabato’s Crystal Ball political newsletter at the
  University of Virginia Center for Politics.

  But as few as 19 percent of Americans identifying as
  independent truly have no leaning to a party, according
  to the Pew Research Center.[1]
[1] 'Difference-maker' independent voters in U.S. presidential election crosshairs
https://www.reuters.com/article/us-usa-election-deciders-ind...


The pew polling numbers posted higher in the thread are a fine source.

https://www.pewresearch.org/fact-tank/2019/05/15/facts-about...

The gist of it is that for any given party, more people are not members of it (e.g. 70%+ of Americans are not Democrats), so you are not safe in guessing someone's political affiliation. Stats-wise, you will likely be wrong.

Secondarily, saying that someone who leans to a party is basically in that party is wrong.

For example, I am an independent. I lean Republican at the local level, but in federal elections I lean Democrat. If you polled me, I would say I lean Republican, since that's usually what's on my mind.


No, I'm not only talking about the OP here.

Look up the story I'm talking about and you'll see that it was ignored by left-wing sources:

https://duckduckgo.com/?q=cnn+edited+video+of+sister&t=ffab&...

https://www.google.com/search?hl=en&q=cnn%20edited%20video%2...

The entire media establishment ignores these tactics when used by other organizations they agree with.


I'm sorry, whataboutism doesn't actually lend credence.

You're free to select the occasional time when a news agency gets it wrong (and apologizes and issues a correction)...but to try and compare it to an intentionally biased organization that always, intentionally gets it wrong (by willfully setting up people, then editing their responses, to paint things in a particular light), and never backs down, never apologizes, well...at best, your bias is showing.


Actually I'm not a fan of PV. Gotcha journalism doesn't appeal to me. But I see that low quality "journalism" every day, so I don't see PV as particularly noteworthy. Certainly not worth altering search results for.

And I'd say you are mistaken when you call changing "take that [violence] to the suburbs" into "a call for peace" merely getting it wrong. What CNN did could hardly be anything other than intentional deception from an organization with a strong left-wing bias[1].

1: https://mediabiasfactcheck.com/cnn/


PV isn't "gotcha journalism". They've repeatedly committed felonies and completely fabricated things (ex. when they tried to trick WaPo into pushing a fake #metoo story about Roy Moore) in attempts to create their content.


> They've repeatedly committed felonies

That's quite a claim. If that's true, why aren't they in jail?

I'm sure they have many powerful enemies who would love to see them convicted of these multiple felonies.


https://web.archive.org/web/20100531174024/http://neworleans...

O'Keefe pleaded out to a misdemeanor and served probation plus a fine. But the crime he committed was a felony.

> In January 2012, O'Keefe released a video of associates obtaining a number of ballots for the New Hampshire Primary by using the names of recently deceased voters.

That's probably a felony in most parts of the US, although O'Keefe claims it wasn't since he didn't actually vote.


Michael Moore films and edits his documentaries in a similar fashion.


I agree. I watched one of his newer docs and anytime I heard something that just seemed too crazy to be true, I researched it. And in all cases, I found that important context was missing and in some cases, the evidence for the point being made was almost completely fabricated.


> In a political context, "credible" is often a synonym for "agrees with me".

Not among credible sources.


Meta-comment, but this is partly why search and the internet are so bad now; there are a large number of political disinformation campaigns which are getting increasingly blatant, and getting better at finding believers on the internet.

People have a vested interest in destroying the idea that anything can be a non-partisan "fact". Anything can become a smear. Only the most absolutely egregious ones can be reined in by legal action (e.g. Alex Jones libelling the Sandy Hook parents).

(This is not just internet, of course; the British press mendacity towards the Royal family is playing out at the moment.)


His website contains gems like "Things got political in June 2017 when Google deleted "covfefe" out of it's arabic translation dictionary in order to make a Trump tweet become nonsense." (No, covfefe doesn't mean anything in Arabic.)

Here is someone who believes that a private company's open attempts to rank websites by quality amount to "seditious behaviour" deserving of criminal prosecution, and the only people willing to pay attention were Project Veritas. Google has plenty of ethics issues, but this guy's claims are absurd.


Not only is it a word, but Google had to delete the word twice.

https://www.zachvorhies.com/covfefe.html


> Despite the fact that I've had around 50 interviews and countless articles written about my disclosure, my website zachvorhies.com doesn't show up on Google's search index, even when using the exact url as a query!

I just tried it; it's just showing results for "zach vorhies" instead, which it thinks you meant. I tried a few other random "person's name as URL" websites I could find; sometimes it does this, sometimes it doesn't.

Furthermore, the results that do appear are hardly unsympathetic to you. If google is censoring you/your opinions, they're doing a very poor job of it.


(I work at Google, but don't work on search)

> If wikipedia contains cherry picked slander against a person, topic or website then the raters are instructed to provide a low page rank score

This sounds like a good thing to me. Sites that contain lies, fabrications, and falsehoods should not be as highly ranked as those which do not.

Why should shareholders sue Google for, as far as I can tell from your argument, trying to provide users with a more useful product?


Why do your colleagues get to decide what is fact and what is fiction? It's our right, as humans, to be able to make that decision on our own after we encounter information. If Wikipedia gains a reputation for libel, then the onus should be on the public to stop trusting them.

Google does not have the moral authority to censor the internet, and it's absolutely wrong for them to attempt this. Information should be free, and you don't have the right to get in the way of that.


They don't get to decide any such thing, and in fact, can't. Google (fortunately for all of us) doesn't run the Internet.

They do run a popular search page, and have to decide what to do with a search like "Is the Earth flat?".

Personally, I would prefer they prominently display a "no". Others would disagree, but a search engine is curation by definition, that's what makes it useful.


> Personally, I would prefer they prominently display a "no".

Oh hey! I just tried this, and it does! image: https://i.imgur.com/OqqxSq3.png


You would, and fortunately Google agrees with you, but imagine for a moment that they didn't. 90% of the internet would suddenly see a 'yes' to that answer, even if 99% of websites disagree.


They get to decide because it is their algorithm, and the whole point of a search function is to discriminate among inputs for relevance. They aren't "getting in the way" - they are using it as they please.

What you ask for isn't freedom but control over everyone else - there is nothing stopping you from running your own spiders, search engines, and rankings.


Who said anything about censorship? The topic of discussion is what order results are in. Are you saying Google would be more useful if it returned the 100 million results randomly and left you to sort them out?


They get to decide what they display as results. What do you suggest they do instead? Display all of the internet and let the user filter things for themselves?


> decide what is fact

The fun thing about facts is that nobody needs to decide whether or not they are true. Perhaps the fact that you can honestly claim to think otherwise means you need to take a step back and examine your reasoning.


> Perhaps the fact that you can honestly claim to think otherwise

This is an example of a "fact" that I'm talking about. It's not a fact, it's an opinion being presented as fact. I guess if you present yourself this way online you have no problem with Google controlling what "facts" are found when you use their search engine.

I guess I'll just have to wait until they start peddling a perspective you disagree with.

> If wikipedia contains cherry picked slander against a person, topic or website

Just remember that this is the comment we're discussing... how does one determine if a statement is slander? Are you telling me Google has teams of investigative journalists following up on each of their search results? Or did someone at Google form an opinion, then decide their opinion is the one that should be presented as "fact" on the most popular search engine in the world?


> Despite the fact that I've had around 50 interviews and countless articles written about my disclosure, my website zachvorhies.com doesn't show up on Google's search index, even when using the exact url as a query!

I am not sure what is happening, but I directly searched for your website, zachvorhies.com, on Google (in Australia, if that matters), which returned the website as the first result:

https://i.imgur.com/Z7RTsuE.png


I'm in the US and Google does not display zachvorhies.com when searching for "zachvorhies.com".


Does for me. USA.

[0] https://i.imgur.com/S2rPywz.png

Also, is this guy a Q follower or something? The favicon is a Q.


> Also, is this guy a Q follower or something? The favicon is a Q.

Actually the favicon is a headshot.

https://www.zachvorhies.com/favicon.ico

And that's also the icon in the page source:

<link rel=”shortcut icon” href="favicon.ico" type=”image/x-icon” />

I wonder why Google shows a Q.


> I wonder why Google shows a Q

A placeholder used if the algorithm thinks the favicon is not appropriate for some reason.


I would think the quotes would make a difference. I was not using quotes; does using quotes in the US still return the website? I would use a VPN and test it out, but I am at work right now.


Not using quotes, the first result I get is his twitter account, and the second is a link to a Project Veritas piece about this document leak he describes. Hardly seems like he's getting buried.


Additionally, if you search "site:zachvorhies.com" you get the site. So it isn't de-indexed. It just isn't ranking well.


[removed, for some reason I thought australia was part of the EU. I'm in the US, also did not see the site link through google]


I think the only time Australia has been part of some sort of European organization or group is when Australia competed in the Eurovision.

Anyhow, I would be interested to know what results you get in the EU.


I'm in the US and I get the same search results. Although if I put in your name, I get your Twitter instead. Not sure why anyone would be searching for a dot com instead of going to it.


Same for me in Australia: searching for the name results in the Twitter handle showing up, and then a WikiSpooks website and so forth. The website isn't even on the first page. I think that's rather concerning, that searching for a person's name won't return their website but rather a Twitter feed and other websites that have possibly optimized for SEO.


Australia isn't in the EU.

For what it's worth, I'm in Australia and have the same search results as erklik.


Ahh, Zachary Vorhies. I remember your bizarre internal posts literally claiming that Obama's birth certificate was fake. I wasn't surprised at all when you leaked a pile of confidential information to a far right conspiracy theorist group (Project Veritas), and were subsequently fired.

I wouldn't trust a single word that comes out of this guy's mouth.


FWIW I see your website as the first result when I google it:

https://imgur.com/a/jhx7N9D


For what it's worth, when I search for your site it's the first result. You have to click past the "did you mean", which searches on your name originally, but then it's there.


Highly polarized views, like the one you hold, are a result of a multiplicity of communications between entities on the Internet. Those humans who are more prone to spontaneous visualizations or audio tend to follow patterns which use biased arguments over reasoned arguments. That nets you comments like:

> Don't listen to the people who say

> this beast has to do with a secret Page Rank score that Google's army of workers

Anyone who tells you not to listen to others intends to tell you gossip about why you shouldn't gather data that conflicts with their own views. They'll SAY all sorts of things to try to make you "see" it their way. Rational people DO things to prove or disprove a given belief. (Just to note, SAYING a bunch of stuff does not equal DOING a bunch of stuff.)

For anyone rational and interested, Google "this video will make you angry" and bump the speed to 75%. The idea is that memes compete for "compute" space, both in people's minds and in the space they occupy on the Internet. Those who get "infected" with a given meme will go to all sorts of lengths to rationalize why that meme is true, even though the meme they are arguing against is just as irrational as their own.


>my website zachvorhies.com doesn't show up on Google's search index, even when using the exact url as a query!

Just searched it and it's literally the first result.


Goodhart‘s Law in action. I wonder how we make a new measure that buys us more time?

https://en.wikipedia.org/wiki/Goodhart's_law


The backtick in your link broke it:

https://en.wikipedia.org/wiki/Goodhart's_law

(Were you on mobile and using a smart keyboard?)


I manually added the comma on a mobile smart keyboard. :) Didn't know that doesn't work haha.


That damn ‘Smart Quotes’ misfeature is still causing havoc even after 30 years.


Nitpick: It's actually an apostrophe " ' ", not a backtick/grave accent " ` " or comma " , " :D

https://en.wikipedia.org/wiki/Apostrophe

https://en.wikipedia.org/wiki/Grave_accent#Use_in_programmin...

https://en.wikipedia.org/wiki/Comma


Oh yes sorry I’m full of mistakes today. Of course, not a comma!


In the specific case of date-based searches, they are pretty difficult because of how pages are ranked. For a long time (and still to a large extent) Google has ranked 'newer' pages higher than 'relevant' pages. At Blekko [1] there was a lot of code that tried to figure out the actual date of the document (be it a forum post, news article, or blog post). That date would often be months or years earlier than the 'last change' information would have you think.

Sometimes it's pretty innocuous: a CMS updates every page with a new copyright notice at the start of each year. Other times it's less innocuous, where the page simply updates the "related links" or sidebar material and refreshes the content.

It is still an unsolved ranking-relevance problem where a student-written, 3-month-old description of how AM modulation works ranks higher than a 12-year-old, professor-written description. There isn't a ranking signal for 'author authority'. I believe it is possible to build such a system, but doing so doesn't align well with the advertising goals of a search engine these days.

[1] disclaimer I worked at Blekko.
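
As a rough sketch of the date problem described above: collect every date-like string in the HTML and prefer the earliest plausible one over the freshest, on the theory that copyright bumps and sidebar refreshes only ever move dates forward. The regex covers a single common format; real systems need far more than this:

  import re
  from datetime import date

  DATE_RE = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

  def candidate_dates(html):
      out = []
      for y, m, d in DATE_RE.findall(html):
          try:
              out.append(date(int(y), int(m), int(d)))
          except ValueError:
              continue
      return out

  def probable_publication_date(html):
      plausible = [d for d in candidate_dates(html) if date(1995, 1, 1) <= d <= date.today()]
      return min(plausible) if plausible else None

  html = ('<meta property="article:published_time" content="2008-03-14">'
          '<footer>Copyright 2020-01-01</footer>')
  print(probable_publication_date(html))  # 2008-03-14, not the copyright bump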


“I can hardly find anything in depth on any topic by searching on Google anymore. It's just surface-level knowledge from competing websites that only want to make money off pageviews.”

Is it possible that there is no site providing non-fluffy content for your query? For a lot of niche subjects, there really is very little, if any, substantial content.


> Is it possible that there is no site providing non-fluffy content for your query? For a lot of niche subjects, there really is very little, if any, substantial content.

“Very little, if any, substantial”

The problem is that Google won't even show me the very few anymore. It's just fluff upon fluff, and depth (or real insight, at least) is buried in Twitter threads, Reddit/HN comments, and GitHub issue discussions.

I fear the SEO problem has not only killed knowledge propagation but also thoroughly disincentivized smart people from even trying. And that makes me sad.


I mirror your sentiment. It used to be that you could use your Google-fu and find a dozen relevant forum posts or mail chains in plain text. It's much, much harder to get the same standard of results. "Pasting stack traces and error messages. Needle in a haystack phrases from an article or book. None of it works anymore."

Yeah, if I know where I'm looking (the sites) then Google is useful, since I can narrow it to that domain. But if I don't know where to look then I'm SOL.

The serendipity of good results on Google.com is no longer there. And given the talent at Google, you have to wonder why.


The devil is in this detail: Regular users don’t want those “weird looking” results. Normies prefer the fluff.

And guess what: most users are normal. We here on HN are the weird ones.


This is such an interesting and common anti-pattern on the internet, though:

1. Something is or provides access to good quality content.

2. Because of this quality, it gets more and more popular.

3. As popularity grows, and commercialization takes over, the incentive becomes to make things "more accessible" or "appealing" to the "average" user. More users is always better right!?

4. This works, and quality plummets.

5. The thing begins to lose popularity. Sometimes it collapses into total unprofitability. Sometimes it remains but the core users that built the quality content move somewhere else, and then that new thing starts to offer tremendous value in comparison to the now low quality thing.


It is only solvable for a short period of time. Then, when whatever replaces the current search is successful enough, there will be an incentive to game the new system. So the only way to really solve this is by radical fragmentation of the search market or by randomizing algorithms.


Someone in a previous thread that I unfortunately can't remember suggested that it might not just be the SEO but the internet itself that changed. Google used to be really good at ascertaining meaning from a panoply of random sources, but those sites are all gone now. The Wild West of random blogs and independent websites is basically dead, in favor of content farms and larger-scale media companies.


More and more, I've found myself reverting to books instead of search when I want deeper knowledge. The only issue is that it is easy to publish low-quality books now. Depending on the topic you are looking into, if a book stands the test of time it is often a worthwhile read. With tech books you have to focus on the author's credentials.


> However, it will be interesting to figure the heuristics to deliver better quality search results today.

If only there were some kind of analog for effective ways to locate information. Like if everything were written on paper, bound into collections, and then tossed into a large holding room.

I guess it's past the Internet's event horizon now, but crawler-primary searching wasn't the only evolutionary path to search.

Prior to Google (technically: AdWords revenue funding Google) seizing the market, human-curated directories were dominant [1, Virtual Library, 1991] [2, Yahoo Directory, 1994] [3, DMOZ, 1998].

Their weakness was always cost of maintenance (link rot), scaling with exponential web growth, and initial indexing.

Their strength was deep domain expertise.

Google's initial success was fusing crawling (discovery) with PageRank (ranking), where the latter served as an automated "close enough" approximation of human directory building.

Unfortunately, in the decades since, we seem to have forgotten how useful hand-curated directories were, in our haste to build more sophisticated algorithms.

Add to that that the very structure of the web has changed. When PageRank first debuted, people were still manually tagging links to their friends' / other useful sites on their own. Does that sound like the link structure we have in the web now?

Small surprise that results are getting worse and worse.

IMHO, we'd get a lot of traction out of creating a symbiotic ecosystem whereby crawlers cooperate with human curators, both of whose enriched output is then fed through machine learning algorithms. Aka a move back to supervised web-search learning, vs. the currently dominant unsupervised approach.

[1] https://en.m.wikipedia.org/wiki/World_Wide_Web_Virtual_Libra... , http://vlib.org/

[2] https://en.m.wikipedia.org/wiki/Yahoo!_Directory

[3] https://en.m.wikipedia.org/wiki/DMOZ , https://www.dmoz-odp.org/
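
A toy sketch of that crawler-plus-curator fusion, with invented features and a handful of human labels generalized by an off-the-shelf classifier (scikit-learn used here purely for brevity):

  from sklearn.linear_model import LogisticRegression

  # crawler-derived features per page: [inlink_count, ad_slots, text_length_kb, listed_in_directory]
  X_labeled = [
      [120, 0, 14, 1],  # curated reference site -> good
      [3, 12, 2, 0],    # thin affiliate page    -> junk
      [45, 1, 9, 1],    # hobbyist forum         -> good
      [1, 9, 1, 0],     # scraped content mirror -> junk
  ]
  y_labeled = [1, 0, 1, 0]  # 1 = curator says good, 0 = curator says junk

  model = LogisticRegression().fit(X_labeled, y_labeled)

  # score an uncurated page the crawler just found
  print(model.predict_proba([[30, 2, 6, 0]])[0][1])  # probability it is "good"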


Mixing human curation with crawlers is probably something that'd help with search results quality, but the issue comes in trying to get it to scale properly. Directories like the Open Directory Project/DMOZ and Yahoo's directory had a reputation for being slow to update, which left them miles behind Google and its ilk when it came to indexing new sites and information.

This was problematic when entire categories of sites were basically left out of the running, since the directory had no way to categorise them. I had that problem with a site about a video game system the directory hadn't added yet, and I suspect others would have it for, say, a site about a newer TV show/film or a new JavaScript framework.

You've also got the increase in resources needed (you need tons of staff for effective curation), and the issues with potential corruption to deal with (another thing which significantly affected the ODP's usefulness in its later years).


Federation would help with both breadth and potential corruption, compared to what we had with ODP/DMOZ. A federated Web directory (with common naming/categorization standards, but very little beyond that) would probably have been infeasible back then simply because the Internet was so much smaller and fewer people were involved (and DMOZ itself partially made up for that lack by linking to "awesome"-like link lists where applicable) - but I'm quite sure that it could work today, particularly in the "commercial-ish" domain where corruption worries are most relevant.


The results are human-curated, as much as Google would like to publicly pretend otherwise.

I think a more fundamental problem is that a large portion of content production is now either unindexable or difficult to index - Facebook, Instagram, Discord, and YouTube, to name a few. Pre-Facebook, the bulk of new content was indexable.

YouTube is relatively open, but the content and context of what is being produced are difficult to extract, if only because people talk differently than they write. That doesn't mean, in my opinion, that the quality of a YouTube video is lower than what would have been written in a blog post 15 years ago, but it makes it much more difficult to extract snippets of knowledge.

Ad monetization has created a lot of noise too, but I’m not sure without it, there would be less noise. Rather it’s a profit motive issue. Many, many searches I just go straight to Wikipedia and wouldn’t for a moment consider using Google for.

Frankly I think the discussion here is way better than the pretty mediocre to terrible “case study” that was posted.



Immediately before Google were search engines like AltaVista https://en.wikipedia.org/wiki/AltaVista (1995) and Lycos https://en.wikipedia.org/wiki/Lycos (1994), which were not directories like Yahoo. Google won by not being cluttered with non-search web-portal features, by the effectiveness of PageRank, and because by the late 1990s the web was too big to be indexed by a manually curated directory.


"Halt And Catch Fire" had a cool way of taking these 2 approaches of search into their plot line.


“When a measure becomes a target, it ceases to be a good measure” - Charles Goodhart


Perhaps it’s not always a new heuristic that is needed, but a better way to manage the externalities around current/preceding heuristics.

From a “knowledge-searching” perspective, at a very rudimentary level, it makes sense to look to sites/pages that are often cited (linked to) by others as better sources to rank higher up in the search results. It’s a similar concept to looking at how often academic papers are cited to judge how “prominent” of a resource they are on a particular topic.

However, as with academia, even though this system could work pretty well for a long time at its best (science has come a long way over hundreds of years of publications), that doesn't mean it's perfect. There's interference that can skew results in one's favor, there's funneling money into pseudoscience to turn it into citable sources, there's leveraging connections and credibility for individual gain; the list goes on.

The heuristic itself is not innately the problem. The incentive system that exists for people to use the heuristic in their favor creates the issue. Because even if a new heuristic emerges, as long as the incentive system exists, people will just alter course to try to be a forerunner in the "new" system and grab as big a slice of the pie as they can.

That's a tough nut for Google (or anyone) to crack. As a company, how could they actually curate, maintain, and evaluate the entire internet on a personal level while pursuing profitability? That seems near impossible. Wikipedia does a pretty damn good job of managing its knowledge base as a nonprofit, but even then it is always capped by the amount of donations.

It’s hard to keep the “shit-level” low on search results when pretty much anyone, anywhere, anytime could be adding in more information to the corpus and influencing the algorithms to alter the outcomes. It gets to a point where getting what you need is like finding a needle in a haystack that the farmer got paid for putting there.
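
For concreteness, here is the bare link-based heuristic being discussed, as PageRank-style power iteration over an invented toy graph. Notice that the isolated two-page link ring keeps a share of rank just by linking to itself, which is the gaming incentive in miniature:

  def pagerank(links, damping=0.85, iters=50):
      pages = list(links)
      rank = {p: 1.0 / len(pages) for p in pages}
      for _ in range(iters):
          new = {p: (1 - damping) / len(pages) for p in pages}
          for p, outs in links.items():
              targets = outs if outs else pages  # dangling pages spread rank evenly
              for q in targets:
                  new[q] += damping * rank[p] / len(targets)
          rank = new
      return rank

  toy_web = {
      "review-blog": ["manufacturer", "forum"],
      "forum": ["review-blog", "manufacturer"],
      "manufacturer": [],
      "seo-farm-a": ["seo-farm-b"],  # two pages that only link to each other
      "seo-farm-b": ["seo-farm-a"],
  }
  print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))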


> I have been thinking about the same problem for a few weeks. The real problem with search engines is that so many websites have hacked SEO that there is no meritocracy left.

That's not actually the problem described here. His problem is actually a bit deeper rooted, since he specified the exact parameters of what he wants to see but got terrible results. He specified a search for "site:reddit.com", but the results he got were irrelevant and worse than the results he would have gotten by searching Reddit directly.

I'm not saying that SEO, sites that copy content and only want to generate clicks, and large sites that aggregate everything are bad for the internet of today; but the level of results we get out of search engines today is, in one word, abysmal.


Wrong. The site query worked. The issue is that there is no clear way to determine the information's date, as the pages themselves change. Since more recent results are favored, the SEO strategy of freshness throws off date queries. https://www.searchenginejournal.com/google-algorithm-history...


Wrong. In the article he elaborates.

> As you can see, I didn’t even bother clicking on all of them now, but I can tell you the first result is deeply irrelevant and the second one leads to a 4 year old thread.

He also wrote

> At this point I visibly checked on reddit if there’ve been posts about buying a phone from the last month and there are.

DuckDuckGo even recognized the date to be 4 years old, and Reddit doesn't hide the age of posts. There are newer, more fitting posts, but they aren't shown. And again a quote:

> Why are the results reported as recent when they are from years ago, I don’t know - those are archived post so no changes have been made.

So your argument (though it really is a problem) is a red herring in this case. The problem lies deeper, since Google seems to be unable to do something as simple as extracting the right date, and DDG ignores it. Also, since all the results are years old, it adds to the confusion as to why the results don't match the query. (He also wrote that the better matches were indeed indexed, but not found.)


You said this wrong thing: He specified a search for "site:reddit.com" and claimed it was irrelevant. THAT IS NOT A RELEVANCY TERM. It correctly scopes searches to the specified site.

The entirety of the problem is that the date query is not working, because of SEO for freshness. You also said this other wrong thing: "That's not actually the problem described here". That is the problem here. The page shows up as recently updated because of SEO.

The date in a search engine is the date the page was last judged to be updated, not one of the many dates that may be present on the page. When was the last reddit design update? Do you think that didn't cause the pages to change? Wake up.


>but the results he got were irrelevant and worse than what he would have gotten by searching reddit directly

Wrong. Internal reddit search has bad results too, and then why even let you filter by last month?


Not to totally detract from your point, but my previous experience with SEO people showed that some SEO strategies not only improve page ranking but also actual usability.

The first, and the most important perhaps was page load speed. We adopted a slightly more complicated pipeline on the server side, reduced the amount of JS required by the page, and made page loading faster. That improved both the ranking and actual usability.

The second was that SEO people told us our homepage contained too many graphics and too little text, so search engines didn't extract much content from our pages. We responded by adding more text in addition to the fancy eye-catching graphics. That improved both the ranking and the actual accessibility of the site.


I have noticed that most HN comments mentioning SEO treat it as bad, bad, bad -- the reason for the death of good search, the need for powerful whatever...

I really wish everyone would qualify that, and not just as black-hat vs. white-hat SEO; there are many types of SEO, often with different intentions.

I understand there has been a lot of google koolaid (and others') about how SEO is evil, it's poisoning the web, etc...

But now (or has it been a couple of years already?) google has put out a video saying an SEO firm is okay if they tell you it takes 6 months... they have upgraded their PageSpeed tool, which helps with SEO, and they were quite public about wanting SSL/HTTPS on everything and about that helping with google rankings.

So there are different levels of SEO. Someone mentioned an SEO plugin I was using on a site as being a negative indicator, and I chuckled - the only thing that plugin does is try to fix some of the inherent, obvious screwups of wordpress out of the box... things like missing meta descriptions, which google flags in webmaster tools as multiple identical meta descriptions. It also tries to alleviate duplicate content penalties by no-indexing archives and categories and whatnot.

So there is SEO that is trying to work with google, and then there is SEO where someone goes out and puts comments on 10,000 web sites only for the sake of ranking higher... to me that would be kind of grey hat if it were a few comments, but shady if it's hundreds, and especially if it's automated.

But the real blackhat stuff - hacking other web sites and adding links, or having a site that sells breast-enlargement pills try to catch people who type in a keyword for britney spears - that is trying to fool people.

I have built sites with good info and had to do things to make them prettier for the ranking bot, but they give the surfer what they are looking for when they type 'whatever phrase'... I have also made web sites genuinely better while trying to get them to show up in top results.

So it's not always seo=bad; sometimes seo=good, for the algorithm and the users.

And sometimes it's madness - like the extra fluff added to keep a visitor on the page longer to keep google happy, as with recipes - haha. Many different flavors of it, and different intentions.


I've often thought one approach, though one I wouldn't necessarily want to be the standard paradigm, would be exclusivity based on usefulness.

So for example duckduckgo is still trying to use various providers to emulate essentially "early google without the modern google privacy violations", but when I start to think about many of the most successful company-netizens, one thing that stands out is that early-days exclusivity has major appeal.

So I imagine a search engine that only crawls the most useful websites and blogs and works on a whitelist basis. Instead of trying to order search results to push bad results down, just don't include them at all or give them a chance to taint the results. It would have more overhead, and would take a certain amount of time to make sure it was catching non-major websites that are still full of good info... but once that was done it would probably be one of the best search experiences in existence. I have also thought about how to streamline this, and I know it's cliche, but surely some ML analysis could be applied to figuring out which sites are SEO-gaming or click-baiting regurgitators and weeding them out.

Just something I've been mulling over for a while now.


And how do you determine which websites are good, other than checking whether they are doing SEO? Is reddit.com good or bad? If a good site does SEO, should it be taken out?

And what if what you're searching for exists only on a non-good website? Isn't it better to show a result from a non-good website than to show nothing?


> It's just surface-level knowledge that I get from competing websites who just want to make money off pageviews.

Can you give some examples of queries/topics? Not that I disagree, I often have the same problem, but I have found ways to mitigate it.


Can you elaborate on some of these ways? I just keep a big minus-list of sites.


I would need to hear examples: queries or topics that result in solely superficial information, as OP stated.


I too am asking people to start making lists of lame query results. I have taken screenshots of some and even made a video about one... but a solid list, in a spreadsheet perhaps, would be helpful. Of course, with results varying for different people/locations and from month to month and year to year, having some screenshots would be helpful too. I'm not sure if there is a good program for snapping some screenshots, pulling out some key phrases, and putting it all together well...


This is very much misguided.

Many websites do have "hacked" (blackhat/shady) SEO, but these websites do not last long, and are entirely wiped out (read: de-ranked) with every major algorithm update.

The major players you see on the top rankings today do utilize some blackhat SEO, but it's not at a level that significantly impacts their rankings. Blackhat SEO is inherently dangerous, because Google's algorithm will penalize you at best when it finds out -- and it always does -- and at worst completely unlist your domain from search results, giving it a scarlet letter until it cools off.

However, the bulk of all major websites primarily utilize whitehat SEO, i.e. "non-hacked," i.e. "Google-approved" SEO to maintain their rankings. They have to, else their entire brand and business would collapse, either from being out-ranked or by being blacklisted for shady practices.

Additionally, Google's algorithm hasn't changed much at all from PageRank, in the grand scheme of things. If you can read between their lines, the biggest SEO factor is: how many backlinks from reputable domains do you have pointing at your website? Everything else, including blackhat SEO, is a small optimization for breaking ties. Sort of like PED usage in competitive sports; when you're at the elite level, every little bit extra can make a difference.
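
For anyone who hasn't seen the mechanics spelled out, here is a minimal toy sketch of the PageRank idea -- score flowing along backlinks -- using a made-up link graph and the standard textbook damping factor, not anything resembling Google's production system:

    # Toy PageRank: a page's score comes mostly from who links to it.
    # The link graph below is made up purely for illustration.
    def pagerank(links, damping=0.85, iterations=50):
        pages = set(links) | {t for targets in links.values() for t in targets}
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1 - damping) / len(pages) for p in pages}
            for page, targets in links.items():
                for target in targets:
                    # Each page shares its current score among its outlinks.
                    new_rank[target] += damping * rank[page] / len(targets)
            rank = new_rank
        return rank

    links = {
        "reputable.org": ["smallblog.net", "shop.example"],
        "smallblog.net": ["shop.example"],
        "shop.example": [],
    }
    print(sorted(pagerank(links).items(), key=lambda kv: -kv[1]))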

Google's algorithm works for its intended purposes, which is to serve pages that will benefit the highest amount of people searching for a specific term. If you are more than 1 SD from the "norm" searching for a specific term, it will likely not return a page that suits you best.

Google's search engine is based on virality and pre-approval. "Is this page ranked highly by other highly ranked pages, and does this page serve the most amount of people?" It is not based on accuracy or informational integrity -- as many would believe from the latest Medic update -- but simply "does this conform to normal human biases the most?"

If you have a problem with Google's results, then you need to point the finger at yourself or at Google. SEO experts, website operators, etc. are all playing a game that's set on Google's terms. They would not serve such shit content if Google did not: allow it, encourage it, and greatly reward it.

Google will never change the algorithm to suit outliers, the return profile is too poor. So, the next person to point a finger at is you: the user. Let me reiterate, Google's search engine is not designed for you; it is designed for the masses. So there is no logical reason for you to continue using it the way you do.

If you wish to find "deep enough" sources, that task is on you, because it cannot be readily or easily monetized; thus, the task will not be fulfilled for free by any business. So, you must look at where "deep enough" sources lie: books, journals, and experts.

Books are available from libraries, and a large assortment of them are cataloged online for free at Library Genesis. For any topic you can think of, there is likely to be a book that goes into excruciating detail that satisfies your thirst for "deep enough."

Journals, similarly. Library Genesis or any other online publisher, e.g NIH, will do.

Experts are even better. You can pick their brains and get even more leads to go down. Simply, find an author on the subject -- Google makes this very easy -- and contact them.

I'm out of steam, but I really felt the need to debunk this myth that Google is a poor, abused victim, and not an uncaring tyrant that approves of the status quo.


> Google's algorithm works for its intended purposes, which is to serve pages that will benefit the highest amount of people searching for a specific term.

Does it? So for any product search, thrown-together comparison sites without actual substance but lots of affiliate links are really among the best results? Or maybe they are the most profitable result, and thus the one most able to invest in optimizing for ranking? Similarly, do we really expect results on (to a human) clearly hacked domains to be the best for anything, but Google will still put them in the top 20 for some queries? "Normal people want this crap" is a questionable starting point in many cases.


Over the long-term, Google's algorithm will connect the average person to the page most likely to benefit them, more than it won't.

There is no "best result."

Any pages falling under "thrown-together comparison sites without actual substance but lots of affiliate links" are temporary inefficiencies that get removed after each major update.

Will more pop up? Yes, and they will take advantage of any inefficiency or edge cases in the algorithm to boost their rankings to #1.

Will they stay there for more than a few months? No. They will be squashed out, and legitimate players will over time win out.

This is the dichotomy between "churn and burn" businesses and "long term" businesses. You will make a very lucrative, and quick, buck going full blackhat, but your business won't last and you will consistently need to adapt to each successive algo update. Long-standing "legit" businesses, meanwhile, only need to maintain market dominance -- something much easier to do than breaking into the market from ground zero, which churn-and-burners will have to do in perpetuity until they burn out themselves.

If you want to test this, go and find 10 websites you think are shady, but have top 5 rankings for a certain search phrase. Mark down the sites, keyword, and exact pages linked. Now, wait a few months. Search again using that exact phrase. More likely than not, i.e. for more than 5 out of 10, they will no longer be in the top 5 for their respective phrases, and a couple of domains will have been shuttered. I should note that "not deep info" is not "shady," because the results are for the average person. Ex. WebMD is not deep, but it's not shady either.

I implore people to try and get a site ranked with blackhat tricks and lots of starting capital, and see just how hard it is to keep it ranked consistently using said tricks. It's easy to speculate and make logical statements, but they don't hold much weight without first-hand experience and observation.


>Will they stay there for more than a few months? No. They will be squashed out, and legitimate players will over time win out.

This isn't true at all in my experience. As a quick test I tried searching for "best cordless iron", on the first page there is an article from 2018 that leads to a very broken page with filler content and affiliate links. [1] There are a couple of other articles with basically the exact same content rewritten in various ways also on the first page.

A quick SERP history check confirms that this page has returned in the top 10 results for various keywords since late 2018.

>It's easy to speculate and make logical statements, but they don't hold much weight without first-hand experience and observation.

This statement is a bit ironic given that it took me 1 keyword and 5 seconds of digging to find this one example.

[1] https://www.theironingroom.com/best-cordless-iron-reviews-of...


Here's an xkcd [0] inspired idea. We have several search engines, each with some level of bias. We're not looking to crawl the whole internet because we can't compete with their crawlers. However, we could make a crawler to crawl their results, and re-rank the top N from each engine according to our own metric. Maybe even expose the parameters in an "advanced" search. I'm assuming this would violate some sort of eula though. Any idea if someone has tried this approach?

Edit: thinking more about this post's specific issue, I'm not sure what to do if all the crawlers fail. Could always hook into the search apis for github, reddit, so, wiki, etc. Full shotgun approach.

[0] https://xkcd.com/927/
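
For the curious, a rough sketch of what that re-ranking could look like, using reciprocal-rank fusion over the pooled top N. The fetchers here are hypothetical placeholders (real ones would need each engine's API or scraping, which may run into terms-of-service issues), and the weights are arbitrary knobs of the kind an "advanced" search could expose:

    # Sketch of a meta-search re-ranker: pool the top N results from several
    # engines, then re-rank with our own metric. Fetchers are placeholders.
    from collections import defaultdict

    def rerank(query, fetchers, top_n=20, engine_weights=None):
        engine_weights = engine_weights or {}
        scores = defaultdict(float)
        for engine, fetch in fetchers.items():
            weight = engine_weights.get(engine, 1.0)
            for position, url in enumerate(fetch(query)[:top_n]):
                # Reciprocal-rank fusion: a URL listed high by several
                # engines beats one listed high by only one.
                scores[url] += weight / (position + 1)
        return sorted(scores, key=scores.get, reverse=True)

    # Dummy fetchers standing in for real engines:
    fetchers = {
        "engine_a": lambda q: ["https://a.example/1", "https://b.example/2"],
        "engine_b": lambda q: ["https://b.example/2", "https://c.example/3"],
    }
    print(rerank("best cordless iron", fetchers))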


Isn't this basically what DuckDuckGo does?


Good to hear your concerns.

> The real problem with search engines is the fact that so many websites have hacked SEO that there is no meritocracy left.

I intend to announce the alpha test of my search engine here on HN.

My search engine is immune to all SEO efforts.

> I can possibly not find anything deep enough about any topic by searching on Google anymore.

In simple terms my search engine gives users content with the meaning they want and in particular stands to be very good, by far the best, at delivering content with "deep" meaning.

> I need something better.

Coming up.

> However, it will be interesting to figure the heuristics to deliver better quality search results today.

Uh, sorry, it's not fair to say that my search engine is based on "heuristics".

I'm betting on my search engine being successful and would have no confidence in heuristics.

Instead of heuristics I took some new approaches:

(1) I get some crucial, powerful new data.

(2) I manipulate the data to get the desired results, i.e., the meaning.

(3) The search engine likely has by far the best protections of user privacy. E.g., search results are the same for any two users doing the same query at essentially the same time and, thus, in particular, independent of any user history.

(4) The search engine is fully intended to be safe for work, families, and children.

For those data manipulations, I regarded the challenge as a math problem and took a math approach complete with theorems and proofs.

The theorems and proofs are from some advanced, not widely known, pure math with some original applied math I derived. Basically the manipulations are as specified in math theorems with proofs.

> A new breakthrough heuristic today will look something totally different, just as meritocratic and possibly resistant to gaming.

My search engine is "something totally different".

My search engine is my startup. I'm a sole, solo founder and have done all the work. In particular I designed and wrote the code: It's 100,000 lines of typing using Microsoft's .NET.

The typing was without an IDE (integrated development environment) and, instead, was just into my favorite general purpose text editor KEdit.

It's my first Web site: I got a good start on Microsoft's .NET and ASP.NET (for the Web pages) from

Jim Buyens, Web Database Development, Step by Step, .NET Edition, ISBN 0-7356-1637-X, Microsoft Press.

The code seems to run as intended. The code is not supposed to be just a "minimum viable product" but is intended for first production to peak usage of about 100 users a second; after that I'll have to do some extensions for more capacity. I wrote no prototype code. The code needs no refactoring and has no technical debt.

While users won't be aware of anything mathematical, I regard the effort as a math project. The crucial part is the core math that lets me give the results. I believe that that math will be difficult to duplicate or equal. After the math and the code for the math, the rest has been routine.

Ah, venture capital and YC were not interested in it! So I'm like the story "The Little Red Hen" that found a grain of wheat, couldn't get any help, then alone grew that grain into a successful bakery. But I'm able to fund the work just from my checkbook.

The project does seem to respond to your concerns. I hope you and others like it.

How should I announce the alpha test here at HN?


This sounds extremely implausible; especially claiming "immune to SEO" is like declaring encryption "unbreakable". A lot of human effort would be devoted to it if your engine became popular.


How to be immune to SEO? Easy once you see how.

More details are in my now old HN post at

https://news.ycombinator.com/item?id=12404641

For a short answer, SEO has to do with keywords. My startup has nothing to do with keywords or even the English language at all. In particular, I'm not parsing the English language or any natural language; I'm not using natural language understanding techniques. In particular, my Web pages are so just dirt simple to use (user interface and user experience) that a child of 8 or so who knows no English should be able to learn to use the site in about 15 minutes of experimenting and about three minutes of watching someone use the site, e.g., via a YouTube video clip of screen captures. E.g., lots of kids of about that age get good with some video games without much or any use of English.

E.g., how are you and your spouse going to use keywords to look for an art print to hang on the wall over your living room?

Keyword search essentially assumes (1) you know what you want, (2) know that it exists, and (3) have keywords that accurately characterize it. That's the case, and commonly works great, for a lot of search, enough for a Google, Bing, and more in, IIRC, Russia and China. Also it long worked for the subject index of an old library card catalog.

But as in the post I replied to, it doesn't work very well when trying to go "deep".

Really, what people want is content with some meaning they have at least roughly in mind, e.g., a print that fits their artistic taste, sense of style, etc., for over the living room sofa. Well, it turns out there's some advanced pure math, not widely known, and still less widely really understood, for that.

Yes I encountered a LOT of obstacles since I wrote that post. The work is just fine; the obstacles were elsewhere. E.g., most recently I moved. But I'm getting the obstacles out of the way and getting back to the real work.


> Really, what people want is content with some meaning they have at least roughly in mind

Yes, but capturing meaning mathematically is somewhat of an unsolved problem in mathematics, linguistics, and semiotics alike. Your post claims you have some mathematics but (obviously, as it's a secret) doesn't explain what.

SEO currently relies on keywords, but SEO as a practice is humans learning. There is a feedback loop between "write page", "user types string into search engine" and "page appears at certain rank in search listing". Humans are going to iteratively mutate their content and see where it appears in the listing. That will produce a set of techniques that are observed to increase the ranking.


> Yes, but capturing meaning mathematically is somewhat an unsolved problem in both mathematics, linguistics and semiotics.

I've been successful via my search math. For your claim, as far as I know, you are correct, but actually that does not make my search engine and its math impossible.

> That will produce a set of techniques that are observed to increase the ranking.

Ranking? I can't resist, to borrow from one of the most famous scenes in all of movies: "Ranking? What ranking? We don't need no stink'n ranking".

Nowhere in my search engine is anything like a ranking.


So, do you only ever display one single result? Or do you display multiple results? Because if you display multiple results, they will be in a textual order, whether that's top to bottom or left to right, and that is a ranking.

People pay tens or even hundreds of thousands of dollars to move their result from #2 to #1 in the list of Google results.


> #2 to #1 in the list of Google results.

My user interface is very different from Google's, so different there's no real issue of #1 or #2.

Actually that #1 or #2, etc. is apparently such a big issue for Google, SEO, etc. that it suggests a weakness in Google, one that my work avoids.

You will see when you play a few minutes with my site after I announce the alpha test.

Google often works well; when Google works well, my site is not better. But the post I responded to mentions some ways Google doesn't work well, and for those and some others my site stands to work much better. I'm not really in direct competition with Google.


Stop vaguebooking and post it up on HN. If you're comfortable with where the product is at the current moment, then share it. It will never be finished, so share it today.


I was not really announcing a new Web site, which I do intend to do at HN once my omelet is ready for actual eating, but just replying to the post

https://news.ycombinator.com/item?id=22092248

of rahulchhabra07.

His post was interesting to me since it mentioned some of the problems that I saw and that got me to work on my startup. And my post might have been interesting to him since it confirms that (i) someone else also sees the same problems and (ii) has a solution on the way.

For explaining my work fully, maybe even going open source, lots of people would say that I shouldn't do that. Indeed, that anyone would do a startup in Internet search seems just wack-o since they would be competing with Google and Bing, some of the most valuable efforts on the planet.

So that my efforts are not just wack-o, (i) I'm going for a part of search, e.g., solving the problems of rahulchhabra07, not currently solved well; (ii) my work does not really replace Google or Bing when they work well, and they do, what, some billions of times a second or some such?; (iii) my user interface is so different from that of Google and Bing that at least first cut my work would be like combining a racoon and a beaver into a racobeaver or a beavorac; and (iv) at least to get started I need the protection of trade secret internals.

Or, uh, although I only just now thought of this, maybe Google would like my work because it might provide some evidence that they are not all of search and don't really have a monopoly, an issue in recent news.


Nah - unbreakable encryption is actually possible with one time pads.

The only way SEO could be impossible is if there was no capacity to change search ranking no matter what - which would be both useless and impossible.


Get feedback before you launch. I'd be happy to test it.


Thanks. I intend to announce the alpha test here at HN, and I will have an e-mail address for feedback (already do -- at least got that little item off my TODO list although it took 36 hours of on the phone mud wrestling with my ISP to set it up).


[flagged]


Ouch, please don't be a jerk on HN. I know it's common on the rest of the internet, but it will get an account banned on Hacker News. Would you mind reviewing the site guidelines and taking the spirit of this site to heart? We'd be grateful. https://news.ycombinator.com/newsguidelines.html

graycat is an elder here. He has an interesting history in technology, I dare say more interesting than most of us ever will. He deserves better than your reply (as would any other HN user). Check out these posts:

https://news.ycombinator.com/item?id=12404220

https://news.ycombinator.com/item?id=5600111


An AI crawler is needed.


From my end, it looks like google search is very strongly prioritising paid clients and excluding references to everything else. Try a search or a view from maps; it shows a world that only includes google ad purchasers.

Google has become a not very useful search - certainly not the first place I go when looking for anything except purchases. They've broken their "core business".


It also favors itself. I searched for "work music". First 9 results are from youtube.


This should probably be a separate submission but why is search so bad everywhere?

- Confluence: Native search is horrible IME

- Microsoft Help (Applications): .chm files. Need I say more?

- Microsoft Task Bar: Native search okay and then horrible beyond a few key words and then ... BING :-(

- Microsoft File Search: Even with full disk indexing (I turned it on) it still takes 15-20 minutes to find all jpegs with an SSD. What's going on there?

- Adobe PDFs: Readers all versions. What? You mean you want to search for TWO words. Sacrilege. Don't do it.

Seriously though, with all the interview code tests (bubble sort, quick sort, bloom filters, etc.), why can't companies or even websites get this right?

And I agree with other commenters that as far as Google, Bing, DDG, or other search sites go, it's been going downhill, and the speed of uselessness is picking up.

The other nagging problem (at least for me) is that explicit searches which used to yield more relevant results are now front-loaded with garbage. If I'm looking for a datasheet on an STM (STMicroelectronics) chip and I start the search with STM, as of today STM is no longer treated as relevant (it is relevant, it just shows up after a few pages). Wow, it seems like the SEOs are winning, but companies that use this technique won't get my business.


Or MacOS Spotlight. Good lord. Most common occurrence: searching for Telegram, an app I have open 24/7 and interact with dozens of times a day.

CMD+Space

"T": LaTeXIT.app (an app I have used fewer than a dozen times in two years)

"E": LaTeXIT.app

"L": Telegram.app

"E": Electrum.app (how on earth??)

"G": telemetry.app (an app which cannot even be run)

"RAM" : Telegram

Similar experience searching for most apps, files, and words. It's horrendous.

MacOS Mojave 10.14.6 on a MacBook Pro (Retina, 15-inch, Mid 2015)


Anything Microsoft other than Office+outlook sucks. I don't know about azure though as I have not endured it yet.

Adobe wants to have you by your balls the moment you install their installer :-) I keep a separate computer for Adobe stuff just for that reason. Actually to run some MS junk too.

>Seriously though with all the interview code tests bubble sort, quick sort, bloom filters, etc. Why can't companies or even websites get this right?

I have seen some of the stinkiest stuff created by people who will appear smartest in any test these companies can throw at them. Some people are always gambling/gaming and winging it. They leave a trail... unfortunately.


I envy your luck if you think office and outlook don't suck! Performance and reliability are terrible in my experience.


Performance and reliability are indeed terrible. It is a mystery that a word processor crashes so often and takes tens of seconds just to quit. But the fact is that they get the job done, and I haven't seen any decent alternatives to word, excel, and for that matter even outlook. If you know of something reasonably close, then please share.


> - Microsoft File Search: Even with full disk indexing (I turned it on) it still takes 15-20 minutes to find all jpegs with an SSD. What's going on there?

I use this software utility called Search Everywhere; it's surprisingly good, fast, and fairly accurate most of the time :)


> - Microsoft File Search: Even with full disk indexing (I turned it on) it still takes 15-20 minutes to find all jpegs with an SSD. What's going on there?

Does turning it off speed it up? I think disk indexing (the way Windows does it) is a remnant from HDD times, and might make things worse when used together with a modern SSD.

> Adobe PDFs: Readers all versions. What? You mean you want to search for TWO words. Sacrilege. Don't do it.

If you're just viewing and searching PDFs (and don't have to fill out PDF forms on a regular basis), check out SumatraPDF. It's the fastest PDF reader on Windows I've come across so far.


> it still takes 15-20 minutes to find all jpegs with an SSD. What's going on there?

What is going on there? I'm working on a file system indexer in golang, and walking the tree and mapping file extensions to MIME types runs at several thousand images a second, over NFS. Windows is full of head-scratchers: "why is this taking so long?"


My gut feeling is search is bad everywhere because no one provides a pure-text API to the content. Cleaning data is hard, and it's easier to chuck in all the text blasted off an HTML page than to exclude everything non-signal.

I have no answers for Microsoft File Search; it never returns any results for me. I sometimes wonder if they even tested it.


Google has definitely stopped being able to find the things I need.

Pasting stack traces and error messages. Needle in a haystack phrases from an article or book. None of it works anymore.

Does this mean they are ripe for disruption or has search gotten harder?


> Pasting stack traces and error messages.

I cannot fathom the number of times I've pasted an error message enclosed in quotes and got garbage results, and then, an hour of troubleshooting and searching later, I come across a Github/bugtracker issue, which was nowhere in the search results, where the exact error message appears verbatim.

The garbage results are generally completely unrelated stuff (a lot of Windows forum posts) or pages where a few of the words, or similar words, appear. Despite the search query being a fixed string, not only does Google fail to find a verbatim instance of it, but instead of admitting this, they return nonsense results.

> Needle in a haystack phrases from an article or book.

I can confirm this part as well, searching for a very specific phrase will generally find anything but the article in question, despite it being in the search index.

Zero Recall Search.


It seems like pasting a very large search query should actually make it easier for the search engine to find relevant results, but the fact that this doesn't happen suggests that the search query handler is being too clever and getting in the way.


> a Github/bugtracker issue, which was nowhere in the search results, where the exact error message appears verbatim.

Did you put the error message in quotes? I've never had this problem.


I, too, have pasted error messages verbatim into Google queries only to have garbage returned. I did include the error message in quotes. I started filtering sites from the results, e.g. `-site:quora.com site:stackoverflow.com site:github.com`, etc., to start to get a hint of other developers with similar issues and/or some bug reports and/or documentation and/or source code.


> -site:quora.com site:stackoverflow.com site:github.com

A mock-google that excludes quora and optionally targets stackoverflow/github sounds useful.
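
It would be easy enough to wrap around an existing engine. A minimal sketch, just rewriting the query string with the usual site: operators before handing it off -- the block/prefer lists and the choice of DuckDuckGo are illustrative, and engines differ in how strictly they honor these operators:

    # Sketch: rewrite a query with site: operators before sending it to an
    # existing engine. The domain lists are illustrative defaults.
    from urllib.parse import quote_plus

    BLOCKED = ["quora.com"]
    PREFERRED = ["stackoverflow.com", "github.com"]

    def build_query(terms, prefer=False):
        parts = [terms] + [f"-site:{d}" for d in BLOCKED]
        if prefer:
            # Engines differ on whether multiple site: terms are ORed or
            # ANDed; an explicit OR is the safer bet.
            parts.append(" OR ".join(f"site:{d}" for d in PREFERRED))
        return " ".join(parts)

    def search_url(terms, prefer=False):
        return "https://duckduckgo.com/?q=" + quote_plus(build_query(terms, prefer))

    print(search_url('"NullPointerException at com.example.Foo"', prefer=True))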


Could it be that there just isn't a single page in the web with the exact error?


My guess is that suppressing spammy pages got too hard. So they applied some kind of big hammer that has a high false positive rate. You're getting the best of what's left.

Maybe also some quality decline in their gradual shift to less hand weighted attributes and more ML.


My guess is that Google et al are all hell-bent on not telling you that your search returned zero results. They seem to go to great lengths to make sure that your results page has something on it by any means necessary, including: searching for synonyms for words I searched for instead of the specific words I chose, excluding words to increase the number of results (even though the words they exclude are usually the most important to the query), trying to figure out what it thinks I asked for instead of what I actually asked for.

I further suppose a lot of that is that The Masses(tm) don't use Google like I do. I put in key words for something I'm looking for. I suspect that The Masses(tm) type in vague questions full of typos that search engines have to try to parse into a meaningful search query. If you try to change your search engine to cater to The Masses(tm), then you're necessarily going to annoy the people that knew what they were doing, since the things that they knew how to do don't work like they used to (see also: Google removing the + and - operators).


I was going to reply with something along the same lines. Dropping the keyest keywords is a particularly big pet peeve of mine.

For those "needle in a haystack" type queries, instead of pages that include both $keyword1 and $keyword2, I often get a mix of the top results for each keyword. The problem is compounded by news sites that include links to other recent stories in their sidebars. So I might find articles about $keyword1 that just happen to have completely unrelated but recent articles about $keyword2 in the sidebar.

It also appears that Google and DDG both often ignore "advanced" options like putting exact phrase searches in quotation marks, using a - sign to exclude keywords, etc.

None of this seems to have cut down on SEO spam results either, especially once you get past the first page or two of results.

I suspect it all comes down to trying to handle the most common types of queries. Indeed, if I'm searching for something uncomplicated, like the name of the CEO of a company or something like that, the results come out just fine. Longtail searches probably aren't much of a priority, especially when there's not much competition.


Surely most engineers want the power of strict searching and less of the comfort of always getting filler results, right?

So... is there an internal service at Google that works correctly but they're hiding from the world?

It might be useful for Google to make different search engines for different types of people. The behaviors of people are probably multi-modal, rather than normally distributed along some continuum where you should just assume the most common behavior and preferences.

It would even be easier to target ads...

Or maybe this doesn't exist and spam is too hard.


> They seem to go to great lengths to make sure that your results page has something on it by any means necessary

You just described how YouTube's search has been working lately. When you type in a somewhat obscure keyword - or any keyword, really - the search results include not only the videos that match, but videos related to your search. And searches related to your keywords. Sometimes it even shows you a part of the "for you" section that belongs to the home page! The search results are so cluttered now.


Searching gibberish to try to get as few results as possible.

I got down to one with "qwerqnalkwea"

"AEWRLKJAFsdalkjas" returns nothing, but youtube helpfully replaces that search with the likewise nonsensical "AEWR LKJAsdf lkj as" which is just full of content.


> I put in key words for something I'm looking for. I suspect that The Masses(tm) type in vague questions full of typos that search engines have to try to parse into a meaningful search query.

Yeeaap, sometime in gradeschool - I think somewhere around 5th grade, age 11 or so, which would be around 1999 - we had a section on computers, where we'd learn the basics about how to use them. One of the topics I remember was "how to do web searches", where a friend was surprised at how easily I found what I was looking for - the other kids had to be trained to use keywords instead of asking it questions.


It's surprisingly easy to get zero results returned pasting cryptic error messages. It doesn't mean there is nothing, though. Omit half the string, and there's the dozen stack overflow threads with the error. Maybe it didn't read over the line break on stack overflow or something, but I haven't tested anything.


Tyranny of the minimum viable user.


Two anecdotes: It’s really fascinating.

1. My work got some attention at CES so I tried to find articles about it. Filtering for items that were from the last X days and searching for a product name found pages and pages of plagiarized content from our help center. Loading any one of the pages showed an OS appropriate fake “your system is compromised! Install this update” box.

What’s the game here? Is someone trying to suppress our legit pages, or piggybacking on the content, or is that just what happens now?

2. I was looking for some OpenCV stuff and found a blog walking through a tutorial - except my spidey sense kept going off because the write-up simply didn't make sense with the code. Looking a bit further I found that some guy's really well-written blog had been completely plagiarized and posted on some "code academy tutorial" sort of site - with no attribution. What have we come to?


The first seems big right now, on weird subdomains of clearly hacked sites. E.g. some embedded Linux tutorial on a subdomain of a small-town football club.


Yup. Entertainingly I just saw an example of the “lying date” the original article pointed out: according to google the page is from 17 hours ago. However right next to this it says June, xx 2018. Really?


Well that “big hammer” so to speak is that they tend to favor sites that have a lot of trust and authority.

Someone mentioned that the sites that have the answer are typically buried in the results. That's because they tend to favor big brands and authoritative sites. And those sites oftentimes don't have the answer to the search query.

Google’s results have gotten worse and worse over the years.


This! I think this is the biggest piece of the puzzling issue.

Was it the Panda update, or that one plus the one after? It took out so much of the web and replaced it with "better netizens" who weren't doing this bad thing or that bad thing.

Several problems with that: (1) they took out a lot of good sites. Many good sites did things to get ranked, and did things to be better once they got traffic.

The overbroad ban hammer took many down, including many people who likely paid an seo firm not knowing that seo firms were bad in google's eyes (at the time). So lots of mom-and-pops and larger businesses got smacked down and put out of the internet business, just like how many blogs have shut down.

Of course, local results taking a lot of search space, and the instant answers (50% of searches never get a click because google gives them the answer right on the results page, often stolen from a site), are compounding this.

They tried having the disavow tool to make amends, but the average small business doesn't know about these things, and getting help on the webmaster forum is a joke even if you are tech-inclined; imagine what an experience it is for small business owners.

I miss the days of Matt Cutts warning people "get your Press Releases taken down or nofollowed or it's gonna crush you soon". The problem is most of the people who were profiting from no-longer-allowed seo techniques were not reading Matt's words.

I also appreciated his saying 'tell your users to bookmark you, they may not find you in google results soon' - yeah, at least we were warned about it.

The web has not been the same since those updates, and it's gotten worse since. This does help adwords sales, and the big companies that can afford them, though.

In these ways google has been kind of like the walmart of the internet, coming in, taking out small businesses, taking what works from one place and making it cheap at their place.

I'd much rather have the pre-Penguin results and let the surfers decide by choosing to remain on a site that may be good even though it also had press releases and blog links... rather than losing all the sites that had links on blogs. I am betting most of the users out there would prefer the results of days past as well.


I've been using DDG as a good enough search engine for most things, but when I sometimes fall back to Google, it blows me away how many ads are on the page pretending to be results!


'if not ddg(search) { ddg("!g " + search) }' has been my go-to method for awhile now; but as time has progressed, the results from DuckDuckGo have either been getting better, or the Google results have been getting worse; because usually if I can't find it on DDG now, I can't find it on Google either.


I use DDG by default, but I can feel myself mentally flinching unless I basically know what I'm looking for already (i.e. I know I'll end up on StackOverflow). When I'm actually _searching_, it's useless, and I'll always !g.


Same here, I actually prefer DDG to Google now, even for regional (Germany) results.

When I switched, about a year and a half ago, I felt like I was switching to a lesser-quality search engine (it was an ethical choice, made because I could), which, however, gradually and constantly got better, whereas Google went down the opposite path.

Nowadays I only really use Google to leech bandwidth off their maps services. Despite there being a very good alternative available, OpenStreetMap, they unfortunately appear to have limited (or at least way less than Google) bandwidth at their disposal... A pity, because their maps are so awesome; the bicycle map layer with elevation lines is any boy scout's wet dream... but yeah, to find the nearest hairdresser, Google'll do.

Speaking of bandwidth and OSM reminds me, is there an "SETI-but-for-bandwidth-not-CPU-cycles" kind of thing one could help out with? Like a torrent for map data?

EDIT: Maybe their bandwidth problems are also more the result of a different philosophy about these things. OSM is likely "Download your own offline copy, save everybody's bandwidth and resources" (highly recommended for smartphones, especially in bandwidth-poor Germany) whereas Google is "I don't care about bandwidth, your data is worth it".


> Speaking of bandwidth and OSM reminds me, is there an "SETI-but-for-bandwidth-not-CPU-cycles" kind of thing one could help out with? Like a torrent for map data?

OSM used to have tiles@home, a distributed map rendering stack, but that shut down in 2012. There is currently no OSM torrent distribution system, but I'd like to set that up.


Google images isn't even worth using at all anymore, after that Getty lawsuit that made them remove links to images (the entire damn point of image search as far as I'm concerned..)


I think the Web just kind of stopped being full of searchable information.


Imagine if instead of kneecapping XHTML and the semantic web properties it had baked in, Google had not entered into the web browser space. We might be able to mark articles up with `<article>`, and set their subject tags to the URN of the people, places, and things involved. We could give things a published and revised date with change logs. Mark up questions, solutions, code and language metadata. All of that is extremely computer friendly for ingestion and remixing. It not only turned search into a problem we could all solve, but gave us rails to start linking disparate content into a graph of meaningful relationships.

But instead Google wanted to make things less strict, less semantic, harder to search, and easier to author whatever the hell you wanted. I'm sure it has nothing to do with making it difficult for other entrants to find their way into search space or take away ad-viewing eyeballs. It was all about making HTML easy and forgiving.

It's a good thing they like other machine-friendly semantic formats like RSS and Atom...

"Human friendly authorship" was on the other end of the axis from "easy for machines to consume". I can't believe we trusted the search monopoly to choose the winner of that race.


I work for Google but not on search.

I think in this case the semantic web would not work, unless there was some way to weed out spam. There are currently multiple competing microdata formats out there that enable you to specify any kind of metadata, but they still won't help if spammers fill those in too.

Maybe some sort of webring of trust where trusted people can endorse other sites and the chain breaks if somebody is found endorsing crap? (as in, you lose trust and everybody under you too)
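
Just to illustrate the mechanics of that idea, here is a toy sketch where trust flows from a root set along endorsements, decays with each hop, and flagging a site that endorsed crap cuts off that site and everything reachable only through it. All names and the decay rule are made up:

    # Toy endorsement chain: trust flows from roots along endorsements and
    # decays with depth; a flagged endorser loses trust along with
    # everything reachable only through it. Names are made up.
    from collections import deque

    ENDORSEMENTS = {
        "root.example": ["goodblog.net", "shadyseo.biz"],
        "goodblog.net": ["forum.example"],
        "shadyseo.biz": ["spamfarm.click"],
    }

    def trust_scores(roots, flagged=frozenset(), decay=0.5):
        scores = {}
        queue = deque((r, 1.0) for r in roots if r not in flagged)
        while queue:
            site, trust = queue.popleft()
            if site in scores or site in flagged:
                continue  # already reached via a shorter chain, or revoked
            scores[site] = trust
            for endorsed in ENDORSEMENTS.get(site, []):
                queue.append((endorsed, trust * decay))
        return scores

    print(trust_scores({"root.example"}))
    print(trust_scores({"root.example"}, flagged={"shadyseo.biz"}))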


> I think in this case semantic web would not work, unless there was some way to weed out spam.

That's not so hard. It's one of the first problems Google solved.

PageRank, web of trust, pubkey signing articles... I'd much rather tackle this problem in isolation than the search problem we have now.

The trust graph is different from the core problem of extracting meaning from documents. Semantic tags make it easy to derive this from structure, which is a hard problem we're currently trying to use ML and NLP to solve.


>Semantic tags make it easy to derive this from structure

HTML has a lot of structure already (for example all levels of heading are easy to pick out, lists are easy to pick out), and Google does encourage use of semantic tags (for example for review scores, or author details, or hotel details). For most searches I don't think the problem lies with being able to read meaning - the problem is you can't trust the page author to tell you what the page is about, or link to the right pages, because spammers lie. Semantic tags don't help with that at all and it's a hard problem to differentiate spam and good content for a given reader - the reader might not even know the difference.


> PageRank, web of trust, pubkey signing articles...

What prevents spammers from signing articles? How do you implement this without driving authors to throw their hands in the air and give up?


In the interests of not causing a crisis when Top Level Trust Domain endorses the wrong site and the algorithm goes, "Uh uh," (or the endorsement is falsely labeled spam by malicious actors, or whatever), maybe the effect decreases the closer you are to that top level.

But that's hierarchical in a very un-web-y way... Hm.


The internet is still kind of a hierarchy though, "changing" "ownership" from the government DARPA to the non-profit ICANN.

And that has worked... quite fine. I have no objections (maybe they're a bit too liberal with the new TLDs).

Most of the stuff that makes the hierarchies seem bad are actually faults of for-profit organizations (or other unsuited people/entities) being at the top, and not just that someone is at the top per se. In fact, in my experience, and contrary to popular expectation, when a hierarchy works well, an outsider shouldn't actually be able to immediately recognize it as such.


> Imagine if instead of kneecapping XHTML and the semantic web properties it had baked in, Google had not entered into the web browser space. We might be able to mark articles up with `<article>`, and set their subject tags to the URN of the people, places, and things involved. We could give things a published and revised date with change logs. Mark up questions, solutions, code and language metadata.

Can you explain in technical details what you think was lost by Google launching a browser or what properties were unique to XHTML?

Everything you listed above is possible with HTML5 (see e.g. schema.org) and has been for many years so I think it would be better to look at the failure to have market incentives which support that outcome.


Good machine-readable ("semantic") information will only be provided if incentives aren't misaligned against it, as they are on much of the commercial (as opposed to academic, hobbyist, etc.) Web. Given misaligned incentives, these features will be subverted and abused, as we saw back in the 1990s with <meta description="etc."> tags and the like.


I don't think there's any reason to think google was responsible for the semantic web not taking off. People just didn't care that much. It may have been a generally useful idea, but it didn't solve anyone's problem directly enough to matter.


It wouldn’t matter. 0.0001% of content authors would employ semantic markup. Everyone else would continue to serve up puréed tag soup.


If WordPress output semantic markup, that would instantly give you a lot more than 0.0001%. The rest would follow as soon as it improved the discoverability of their content.


Wordpress can't magically infer semantic meaning from user input any better than Google can. The whole point of the semantic web is to have humans specifically mark their intention. A better UI for semantic tagging would help for that, but it would still be reliant on the user clicking the right buttons rather than just using whichever thing results in the correct visual appearance.


> 0.0001% of content authors would employ semantic markup.

You don't think we'd have rich tooling to support it and make it easy to author?

Once people are using it with success, others will follow.


The breakthrough would be when Google were to rank pages with proper semantic markup higher. Just look at AMP.

(Of course that won't ever happen, but that's what would be needed.)


Did you try putting them in quotes?

EDIT: I don't know why this is being downvoted. This is a genuine question to understand whether the problem is the size of the index or the fuzzy matching that search engines do.


Quotes don't work reliably anymore; this is a big part of the problem. Googlers have been really busy the last 10 years doing everything except:

- fixing search (it has become more and more broken since 2009, possibly before. Today it works more or less like their competitors worked before: a random mix of results containing some of my keywords.)

- fixing ads (Instagram should have way less data on me and yet manages to present me with ads that I sometimes click instead of ads that are so insulting I go right ahead and enable the ad blocker I had forgotten.)

- saving Reader

- etc


> I don't know why this being downvoted.

tbh, it's one of those "Are you sure you're not an idiot?" replies.


Google blatantly disregard quotes.


I think the behavior is more complex. I do get disregarded quotes from time to time so I typically leave them off. However, for the query 'keyword1 keyword2', if I get a lot of keyword1 results with keyword2 struck through, and I search again with keyword2 in quotes, it works as expected.


Reference?


Will you take my word for it?

They not only disregard quotes but also their own verbatim setting.


Asking for a reference helps:

- Establish the behaviour as documented.

- In representing and demonstrating this to others.

It's not that I doubt your word, but that I'd like to see a developed and credible case made. Because frankly that behaviour drives me to utter frustration and distraction. It's also a large part of the reason I no longer use, nor trust, Google Web Search as my principle online search tool.


I see. I'll try to make a habit out of collecting those again.

That said, I might have something on an old blog somewhere. I'll see if I can find it before work starts...

Edit: found it here http://techinorg.blogspot.com/2013/03/what-is-going-on-with-... . It is from 2013 and had probably been going on for a while already at that point.

Edit2: For those who are still relying on Google, here's a nice hack I discovered that I haven't seen mentioned by anyone else:

Sometimes you might feel that your search experience is even worse than usual. In those cases, try reporting anything in the search results and then retry the same search 30 minutes later.

Chances are it will now magically work.

It took quite a while for me to realize this and I think in the beginning I might not have realized how fast it worked.

It seemed totally unrealistic, however, that a fix would have been created and a new version deployed in such a short time, so my best explanation is that they are A/B-testing some really dumb changes and then pulling whoever complains out of the test group.

Thinking about it this might also be a crazy explanation for why search is so bad today compared to ten years ago:

There's no feedback whatsoever, so most sane users probably give up talking to the wall after one or two attempts. This leaves Google with the impression that everyone is happy, so they just continue on the path back to becoming the search engines they replaced.


Thanks.

I'm getting both mixed experiences and references myself looking into this. Which is if anything more frustrating than knowing unambiguously that quoting doesn't work.

I've run across numerous A/B tests across various Google properties. Or to use the technical term: "user gaslighting".


If they were ripe for disruption, and it were easy to do this disrupting just by returning better search results, and returning better search results were an easily doable thing, then I suppose all the other functioning businesses that have a stake in web search would already be doing that disrupting.

Search disrupted catalogs. What will disrupt search?


Boutique hand crafted artisanal catalogs?

Not joking: I have a feeling subject-specific topics will be further distributed based on expertise & trust.


That's exactly what github's Awesome lists are: Decentralized, democratized handcrafted subject-specific catalogs


If they became important sources of information outside technically competent people I suppose we would end up with a bunch of Awesome lists of Content Farms!


Return of the Yahoo! Directory and DMOZ? Heh.


> Boutique hand crafted artisanal catalogs?

I think those are called books. ;-)


Pubmed is an excellent example of a boutique search engine.


So the old Yahoo web index, basically.


Are you sure there's a page on the web that has the stack trace you search for? Maybe there just isn't anything.


Perhaps expectations have risen over time


In my opinion, Google is getting worse constantly, which boils down to basically the following aspects for me:

1. I don’t like the UI anymore. I preferred the condensed view, with more information and less whitespace.

2. Popping up some kind of menu when you return from a search results page shifts down the rest of the items resulting in me clicking search links I am not interested in.

3. It tries to be smarter than me, and fails at understanding what I am searching for. And by "understanding" I basically mean honoring what I typed and not replacing it with other words.

I try to use DDG more often but Google gives me the best results most of the time if I put in more time.


Yeah, number 3 really pisses me off recently. If I type in 3 words I would like to search by those 3 words. What ends up happening is Google just decides that it's too much of a hassle or that I've made a mistake and just searches using 2. So now I have to input all the words in quotes so that it works like it's supposed to in the first place.

This functionality has literally never helped me during a search. Not once.


"try" "putting" "the" "words" "in" "respective" "quotes" "like" "this"


That's what I'm doing, sorry it wasn't clear ;)


You don't even seem to be able to simply get the URL of a search result any more: some hierarchical token thing is used instead to display each search result's address, and copying any link just gives you:

https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&c...

with the url buried somewhere in the GET params.


Everything that you're looking for is (fortunately) still in the HTML source code. When Google started messing with the URLs I wrote a filter for my MITM proxy to put them back. Then it recently changed the format so I had to change the filter again. It's annoying for sure.


Agreed on all three but especially #1. The "modern" web is full of so much whitespace it's infuriating


I've been thinking about this for years[1]. The truth is, what Google solved was parsing the search query, not identifying the best results. In fact, Google is not incentivized to give you the best results; they are designed to maximize their revenue, which derives from getting you to view / click ads.

Google is not a search company, they are an advertising company. The more searches you make, the more revenue they make. Their goal is to quickly and often get you to search things. As long as you keep using their platform, the more you search the better.

[1] https://austingwalters.com/is-search-solved/


The ‘Google is an advertising company’ line is said often on HN. I agree to some extent, but doesn’t that imply that every newspaper company is also just an advertising company? Google solves a real problem, and this works well for them with an advertising-based revenue model. Do they compromise their search to that end? Probably. Do newspapers? Hopefully not, but maybe. To me, that doesn’t make them advertising companies. Apologies if this is pedantic.


Of course they are. Page ads are probably as old as print. Most reviews are ads. Travel sections are ads. There are real estate sections, and classifieds as well. Perhaps the most honest reporting is found in the local sports section.


> but, doesn’t that imply that every newspaper company is also just an advertising company?

Historically, at least, they sold subscriptions.


https://www.opendemocracy.net/en/opendemocracyuk/why-i-have-...

HSBC demanded The Telegraph pull negative stories or they would pull their advertising.

All newspapers are full of stories about property "investment" and have a separate section once a week for paid property advertising.


Subscriptions for the “modern newspaper” did not pay the bills, but were proof that people were actually reading the newspaper.

Prior to that there were papers which did indeed make their money from subscriptions. But their content was different as well: explicitly ideological and argumentative. The NYT or WaPo idea of neutral journalism was a later development.


According to this, newspaper subscriptions only covered about 18%; the rest came from advertising.

https://idiallo.com/blog/we-never-paid-for-journalism


I used to work for a company that did product search. Search is hard. The original idea behind PageRank was really insightful and made search a lot easier, at least until SEO arrived and people stopped linking as much as they used to. The other trick Google figured out was good ML on query results, so all the popular queries have decent results. That still leaves you with search being hard.
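
For readers who haven't seen it, here is a minimal textbook sketch of the PageRank idea in Python (plain power iteration over the link graph). This is for illustration only; Google's actual implementation is vastly more elaborate.

    # Textbook power-iteration PageRank, for illustration only.
    # links maps each page to the list of pages it links to.
    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new = {p: (1 - damping) / len(pages) for p in pages}
            for p, outs in links.items():
                share = rank[p] / len(outs) if outs else 0.0
                for q in outs:
                    new[q] += damping * share
            rank = new
        return rank

    # Tiny made-up link graph: pages with more (and better) inbound links rank higher.
    links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    print(pagerank(links))  # "c" ends up with the largest share of rank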


Is it time for paid search engines? Make users vote with their wallets and pay for the eternal arms race. Problem is, whoever is behind something like that would have to start with an already sufficiently superior experience (or massive geek cred) to make people pay from the early days. Maybe going back to manually curated results of a subset of topics would work? Or some Stack Overflow-esque model of user powered metadata generation?


No. Just because you pay for something doesn't mean you aren't also the product. The other ideas are specifically ones which failed. It didn't even work that well in the 90s. User metadata isn't even a solution in itself but an optimization layer at best.


Simpler idea:

Paid search engine that ranks sites based on how often the users click results from that site (and didn't bounce, of course). The fact that it's paid prevents sybil attacks (or, at least, turns a sybil attack into a roundabout way of buying ads).

Of course, at this point, you are now the product even though you paid. But it's a tactic that worked for WoW for ages.
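

A minimal, purely hypothetical sketch of that signal in Python: the smoothed fraction of a site's clicks that did not bounce, so low-traffic sites aren't ranked on a handful of clicks. The site names and counts below are made up for illustration.

    # Hypothetical sketch of the proposed signal: rank a site by how often users
    # click its results and stay (i.e. don't bounce back to the results page).
    # The +1/+2 terms are Laplace smoothing so sites with little data sit near 0.5.
    def engagement_score(clicks: int, bounces: int) -> float:
        satisfied = clicks - bounces           # clicks where the user did not bounce
        return (satisfied + 1) / (clicks + 2)  # smoothed fraction of "good" clicks

    sites = {"example-forum.net": (120, 30), "content-farm.biz": (400, 360)}
    ranked = sorted(sites, key=lambda s: engagement_score(*sites[s]), reverse=True)
    print(ranked)  # the forum outranks the farm despite far fewer total clicks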


Google already includes clicks and bounces in its ranking factors.


Probably not. The problem with current search engines is that they need some way to rank pages, and are hence inherently susceptible to clever SEO (which curates pages to get ranked high for certain types of queries).

So having a paid search engine does not fully solve the SEO problem. Having no ads does not take away the SEO problem of boosting pages to the top.


If the paid-for search engine ignores SEO "optimizations" and actually ranks on content, AND these results prove better and attract actual paying users, then we don't have to wait for SEO to die. Just as SEO rose to prominence to win Google's result ranking because that's where the users were, sites would drop the SEO crap once the users moved.

Getting this hypothetical site to get users is the real problem. Same thing with getting users to a Facebook alternative.


I somehow doubt that all of Google’s internal decisions are as simple as “well we’re an ads company, let’s just not worry about search quality.”


Why do you doubt this? At the company level, I believe all decisions are made on the basis of "will this sell more ads?" At the same time, I believe some of the ideas from the software devs are born from wanting to do something cool. It then gets rolled into "how can we use this new thing to sell more ads?"


Because search quality drives ads.

There were search engines around, but when Google came out with superior search results everyone switched and those search engines quickly vanished. There are search engines in direct competition with Google today. If Google does not provide the best service, there's an extremely low switching cost. Bing, Baidu, DuckDuckGo et al. would be happy to take your traffic.

