It kills my curiosity and intent with fake knowledge and bad experience. I need something better.
However, it will be interesting to figure out the heuristics that could deliver better-quality search results today. When Google started, it had a breakthrough algorithm: rank pages based on the number of pages linking to them. That is completely meritocratic as long as people don't game it for higher rankings.
A new breakthrough heuristic today would look totally different, but it could be just as meritocratic and possibly resistant to gaming.
I should know - I blew the whistle on the whole censorship regime and walked 950 pages to the DOJ and media outlets.
--> zachvorhies.com <--
What did I disclose? That Google was using a project called "Machine Learning Fairness" to rerank the entire internet.
Part of this beast has to do with a secret page rank score that Google's army of workers assigns to many of the web pages on the internet.
If Wikipedia contains cherry-picked slander against a person, topic, or website, then the raters are instructed to assign a low page rank score. This isn't some conspiracy but something openly admitted by Google itself:
See section 3.2 for the "Expertise, Authoritativeness and Trustworthiness" score.
Despite the fact that I've had around 50 interviews and countless articles written about my disclosure, my website zachvorhies.com doesn't show up in Google's search index, even when using the exact URL as a query! Yet Bing and DuckDuckGo return my URL just fine.
Don't listen to the people who say that it's some emergent behavior from bad SEO. This is deliberate sabotage of Google's own search engine in order to achieve the political agenda of its controllers. The stockholders of Google should band together in a class action lawsuit and sue the C-level executives for negligence.
If you want your internet search to be better, then stop using Google Search. Other search engines don't have this problem: I'm looking at Qwant, Swisscows, DuckDuckGo, Bing, and others.
And maybe your site doesn't get ranked well because it's directly tied to Project Veritas. I don't like being too political, especially on HN and on an account tied to my real identity, but Project Veritas and its associates exhibit appalling duplicity and misdirection. I would hope that trash like this does get pushed to the bottom.
Of course Google's own bias (and involvement in particular political campaigns) is well known, and opposed to Project Veritas, so it's quite possible that you are right and Google is downranking PV.
Would that be good? Well, that's an opinion that depends mostly on the bias of the commentator.
I doubt this affected search rankings but Project Veritas does have a ton of credibility issues.
If this surprises you, then welcome to the systematic bias of wikipedia.
Not among credible sources.
People have a vested interest in destroying the idea that anything can be a non-partisan "fact". Anything can become a smear. Only the most absolutely egregious ones can be reined in by legal action (e.g. Alex Jones libelling the Sandy Hook parents).
(This is not just internet, of course; the British press mendacity towards the Royal family is playing out at the moment.)
Here is someone who believes that a private company's open attempts to rank websites by quality amounts to "seditious behaviour" deserving of criminal prosecution, and the only people willing to pay attention were Project Veritas. Google has plenty of ethics issues, but this guy's claims are absurd.
I just tried it, it's just showing results for "zach vorhies" instead, which it thinks you meant. I just tried a few other random "people's names as URL" websites I could find, sometimes it does this, sometimes it doesn't.
Furthermore, the results that do appear are hardly unsympathetic to you. If google is censoring you/your opinions, they're doing a very poor job of it.
> If wikipedia contains cherry picked slander against a person, topic or website then the raters are instructed to provide a low page rank score
This sounds like a good thing to me. Sites that contain lies, fabrications, and falsehoods should not be as highly ranked as those which do not.
Why should shareholders sue Google for, as far as I can tell from your argument, trying to provide users with a more useful product?
Google does not have the moral authority to censor the internet, and it's absolutely wrong for them to attempt this. Information should be free, and you don't have the right to get in the way of that.
They do run a popular search page, and have to decide what to do with a search like "Is the Earth flat?".
Personally, I would prefer they prominently display a "no". Others would disagree, but a search engine is curation by definition, that's what makes it useful.
Oh hey! I just tried this, and it does! image: https://i.imgur.com/OqqxSq3.png
What you ask for isn't freedom but control over everyone else - there is nothing stopping you from running your own spiders, search engines, and rankings.
The fun thing about facts is that nobody needs to decide whether or not they are true. Perhaps the fact that you can honestly claim to think otherwise means you need to take a step back and examine your reasoning.
This is an example of a "fact" that I'm talking about. It's not a fact, it's an opinion being presented as fact. I guess if you present yourself this way online you have no problem with Google controlling what "facts" are found when you use their search engine.
I guess I'll just have to wait until they start peddling a perspective you disagree with.
> If wikipedia contains cherry picked slander against a person, topic or website
Just remember that this is the comment we're discussing... how does one determine if a statement is slander? Are you telling me Google has teams of investigative journalists following up on each of their search results? Or did someone at Google form an opinion, then decide their opinion is the one that should be presented as "fact" on the most popular search engine in the world?
I am not sure what is happening, but I directly searched for your website, zachvorhies.com, on Google (in Australia, if that matters), which returned the website as the first result:
Also, is this guy a Q follower or something? The favicon is a Q.
Actually the favicon is a headshot.
And that's also the icon in the page source:
<link rel="shortcut icon" href="favicon.ico" type="image/x-icon" />
I wonder why Google shows a Q.
A placeholder used if the algorithm thinks the favicon is not appropriate for some reason.
Anyhow, I would be interested to know what results you get in the EU.
For what it's worth, I'm in Australia and have the same search results as erklik.
I wouldn't trust a single word that comes out of this guy's mouth.
> Don't listen to the people who say
> this beast has to do with a secret Page Rank score that Google's army of workers
Anyone who tells you not to listen to others intends to feed you gossip about why you shouldn't gather data that conflicts with their own views. They'll SAY all sorts of things to try to make you "see" it their way. Rational people DO things to prove or disprove a given belief. (Just to note: SAYING a bunch of stuff does not equal DOING a bunch of stuff.)
For anyone rational and interested, Google "this video will make you angry" and bump the speed to 75%. The idea is that memes compete for "compute" space both in people's minds and in the space they occupy on the Internet. Those who get "infected" with a given meme will go to all sorts of lengths to rationalize why that meme is true, even though the meme they're arguing against is just as irrational as their own.
Just searched it and it's literally the first result.
> It kills my curiosity and intent with fake knowledge and bad experience. I need something better.
It's hard for me to take this seriously when Wikipedia exists and almost always ranks very highly in search results for "knowledge topic" searches. Between Wikipedia and the sources cited on it, I find the depth of almost everything worth learning about to be far greater than I remember in, say, the early 2000s, which seems like the "peak" of Google before SEO became so influential.
In general, I think there are a lot of people wearing rose tinted glasses looking at the "old internet" in this thread. The only thing that has maybe gotten worse is "commercial" queries like those for products and services. Everything else is leaps and bounds better.
These days I rarely see a forum result appear unless I know the specific name of the forum to begin with and use the search-by-site-domain operator.
Another problem these days, unrelated to search but dooming it in the process, is all these companies naming themselves or their products after irrelevant existing English words rather than making up something unique. It's usually fine for major companies, but I think a lot of smaller companies/products shoot themselves in the foot with this and don't realize it. I was once looking for some good discussion and comparison of a bibliography tool called Papers, and it was just a pit of suffering trying to get anything relevant at all with a name like that.
... still does nothing to answer the point that Web search itself is unambiguously and consistently poorer now than it was 5-10 years ago.
Yes, I find myself relying far more on specific domain searches, either in the Internet sense, or by searching for books / articles on topics rather than Web pages. Because so much more traditionally-published information is online, this actually means the net of online-based search has improved, but not for the most part because of improved Web-oriented resources (Webpages, discussions, etc.), but because old-school media is now Web-accessible.
You search for quality books online, mostly through discussion forums (as search fails here) or by following the references of other books and articles. Then you spend time digesting them.
What’s sad is Google has generally indexed the pages I want, it’s just getting harder to actually find them.
Clearly, they don’t want it available because their tech docs they host stop at AM3b. I was hoping an X470 (or other flavor) motherboard manufacturer would have something floating around...
I wonder how much of this could be obtained back by penalizing:
2. The number of ads on the page, or the depth of the ad network
This might start a virtuous circle, but in the end this is just a game of cat and mouse, and websites might optimize for this as well.
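To make the idea concrete, here is a toy sketch in Python. The signals (ad count, ad-network depth) and the weights are invented for illustration; a real engine would combine hundreds of signals.

```python
def penalized_score(base_relevance, ad_count, ad_network_depth,
                    ad_weight=0.05, depth_weight=0.1):
    """Demote a page's relevance score as its ad load grows.

    base_relevance: relevance from normal ranking (0..1)
    ad_count: number of ad slots detected on the page
    ad_network_depth: nesting depth of the ad networks serving them
    The weights are illustrative, not taken from any real engine.
    """
    penalty = ad_weight * ad_count + depth_weight * ad_network_depth
    return max(0.0, base_relevance - penalty)

# An ad-free page keeps its score; an ad-heavy page is demoted.
clean = penalized_score(0.9, ad_count=0, ad_network_depth=0)
heavy = penalized_score(0.9, ad_count=10, ad_network_depth=3)
```

As the thread notes, any published formula like this would immediately become a new optimization target, which is the cat-and-mouse problem.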
What we might need to break this is a variety of search engines that use different criteria to rank pages. I suspect it would be pretty hard, if not impossible, to optimize for all of them.
And in any case, frequently change the ranking algorithms to combat over-optimization by the websites (as that's classically done against ossification for protocols, or any overfitting to outside forces in a competitive system).
Maybe instead of a problem, there is an opportunity here.
Back before Google ate the intarwebs, there used to be niche search engines. Perhaps that is an idea whose time has come again.
For example, if I want information from a government source, I use a search engine that specializes in crawling only government web sites.
If I want information about Berlin, I use a search engine that only crawls web sites with information about Berlin, or that are located in Berlin.
If I want information about health, I use a search engine that only crawls medical web sites.
Each topic is still a wealth of information, but siloed enough that the amount of data could be manageable for a small or medium-sized company. And the market would keep the niches from getting so small that they become useless. A search engine dedicated to Hello Kitty lanyards isn't going to monetize.
featuring the semantic map of  https://swisscows.ch/
incorporating  https://curlie.org/ and Wikipedia and something like Yelp/YellowPages embedded in Open Streetmaps for businesses and points of interest, with a no frills interface showing the history (via timeslide?) of edits.
Not really. A web directory is a directory of web sites. I can't search a web directory for content within the web sites, which is what a niche search engine would do.
Another potentially useful approach is to construct a graph database of all these sites, with links as edges. If one page gets flagged as junk then you can lower the scores of all other pages within its clique . This could potentially cause a cascade of junk-flagging, cleaning large swathes of these undesirable sites from the index.
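A minimal sketch of that cascade, assuming pages and links have already been extracted. For simplicity it penalizes the whole connected component around a flagged page, a looser notion than a strict clique:

```python
from collections import defaultdict, deque

def propagate_junk(edges, flagged, penalty=0.5):
    """Given link edges between pages and a set of pages flagged as
    junk, return a score multiplier per page: every page reachable
    from a flagged page (links treated as undirected) is penalized."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    multipliers = {}
    queue = deque(flagged)
    seen = set(flagged)
    while queue:
        page = queue.popleft()
        multipliers[page] = penalty
        for neighbor in graph[page]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return multipliers

# Two junk farms linking to each other drag each other down;
# an unrelated page is untouched.
edges = [("spam1", "spam2"), ("spam2", "spam3"), ("blog", "wiki")]
scores = propagate_junk(edges, flagged={"spam1"})
```

A real system would want damping with distance and a way to protect innocent pages that merely got linked to by spam, otherwise the cascade itself becomes an attack vector.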
I don’t like the whole concept of SEO, and I don’t like the way the web is today, but I think we should stop and think before resorting to “an immoral few are destroying things; we’ll unfuck it and reclaim what we deserve” type simplifications.
So much is obvious. The discussion is about whether there is a less shitty metric.
This is ultimately whack-a-mole. For the past decade or so, point-of-origin based blockers have worked effectively, because that's how advertising networks have operated. If the ad targets start getting unified, we may have to switch to other signatures:
- Again, sizes of images or DOM elements.
- Content matching known hash signatures, or constant across multiple requests to a site (other than known branding elements / graphics).
- "Things that behave like ads behave" as defined by AI encoded into ad blockers.
- CSS / page elements. Perhaps applying whitelist rather than blacklist policies.
- User-defined element subtraction.
There's little in the history of online advertising that suggests users will simply give up.
And that will probably be blocked or severely locked down by the most popular browser, Chrome.
I don't need to give advertisers data myself when someone else I know can. I really doubt it's easy to throw off the Chrome monopoly at this stage. I presume we will see a chilling effect before anything moves, as happened with IE.
I had a site that appeared in DMOZ, and the description was written in such a way that nobody would want to visit it. But it was one of only a few sites on the internet at the time with its information, so it was included.
Create a core protocol at the same level as DNS etc., that web servers can use to offer an index of everything they serve/relay. A multitude of user-side apps may then query that protocol, with each app using different algorithms, heuristics and offering different options.
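No such protocol exists today, so everything below is invented for illustration: suppose each server published a keyword-to-URL inverted index for what it serves, and a client merged the indices from many servers and applied its own ranking heuristics.

```python
# Hypothetical: each server publishes its own inverted index
# (keyword -> list of URLs it serves). A client merges several
# such indices and then ranks however it likes.
def merge_indices(indices):
    merged = {}
    for index in indices:
        for keyword, urls in index.items():
            merged.setdefault(keyword, []).extend(urls)
    return merged

# Two toy self-published indices from two hypothetical servers.
site_a = {"gardening": ["a.example/soil"], "tools": ["a.example/spades"]}
site_b = {"gardening": ["b.example/compost"]}
merged = merge_indices([site_a, site_b])
```

The interesting part is not the merge but everything around it: transport, freshness, and (as discussed below) vetting self-reported indices, since a server that lies about its own keywords is the first thing spammers would build.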
There are several puzzling omissions from Web standards, particularly given that keyword-based search was part of the original CERN WWW discussion:
IF we had a distributable search protocol, index, and infrastructure ... the entire online landscape might look rather different.
Note that you'd likely need some level of client support for this. And the world's leading client developer has a strongly-motivated incentive to NOT provide this functionality integrally.
A distributed self-provided search would also have numerous issues -- false or misleading results (keyword stuffing, etc.) would be harder to vet than the present situation. Which suggests that some form of vetting / verifying provided indices would be required.
Even a provided-index model would still require a reputational (ranking) mechanism. Arguably, Google's biggest innovation wasn't spidering, but ranking. The problem now is that Google's ranking ... both doesn't work, and incentivises behaviours strongly opposed to user interests. Penalising abusive practices has to be built into the system, with those penalties being rapid, effective, and for repeat offenders, highly durable.
The problem of potential for third-party malfeasance -- e.g., engaging in behaviours appearing to favour one site, but performed to harm that site's reputation through black-hat SEO penalties, also has to be considered.
As a user, the one thing I'd most like to be able to do is specify blacklists of sites / domains I never want to have appear in my search results. Without having to log in to a search provider and leave a "personalised" record of what those sites are.
(Some form of truly anonymised aggregation of such blocklists would, of course, be of some use, and facilitating this is an interesting challenge.)
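Client-side, the blacklist part is straightforward to sketch. The result format here (dicts with a `url` field) is hypothetical; the point is that filtering happens locally, so the search provider never sees the list:

```python
from urllib.parse import urlparse

def filter_results(results, blocked_domains):
    """Drop any result whose host is a blocked domain or a subdomain
    of one. Runs entirely on the client, so no 'personalised' record
    of the blocklist ever reaches the search provider."""
    def blocked(url):
        host = urlparse(url).hostname or ""
        return any(host == d or host.endswith("." + d)
                   for d in blocked_domains)
    return [r for r in results if not blocked(r["url"])]

results = [{"url": "https://pinterest.com/pin/1"},
           {"url": "https://example.org/article"}]
kept = filter_results(results, blocked_domains={"pinterest.com"})
```

The harder problem is the one flagged above: aggregating such lists across users without deanonymizing anyone.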
I've decided it's time for us to have a bouncer-bots portal (or multiple). This would help not only with search results, but could also help people using Twitter or similar: good for both the decentralized and the centralized web.
My initial thinking was that these would be 'pull' bots, but I think they would be just as useful, and more used, if they were active browser extensions.
This way people can choose which type of censoring they want, rather than relying on a few others to choose.
I imagine creating some portals for these, similar to ad-block lists: people could choose to use Pete'sTooManyAds bouncer, and/or Sam'sItsTooSexyForWork bouncer.
Ultimately I think the better bots will have switches where you can turn certain aspects of them on and off and re-search, or pull the latest Twitter/Mastodon things.
I can think of many types of blockers that people would want, and some that people would want part of - so either varying degrees of blocking sexual things, or varying bots for varying types of things.. maybe some have sliders instead of switches..
Make them easy to create and comment on, and provide that info to the world.
I'd really like to get this project started, not sure what the tooling should be - and what the backup would be if it started out as a browser extension but then got booted from the chrome store or whatever.
Should this / could this be a good browser extension? What language / skills are required to make it? It's definitely on my future to-do list.
1. Become large.
2. Are shared.
3. And not particularly closely scrutinised by users.
4. Via very highly followed / celebrity accounts.
There are some vaguely similar cases of this occurring on Twitter, though some mechanics differ. Celebs / high-profile users attract a lot of flack, and take to using shared blocklists. Those get shared not only among celeb accounts but their followers, though, because celebs themselves are a major amplifying factor on the platform, being listed effectively means disappearing from the platform. Particularly critical for those who depend on Twitter reach (some artists, small businesses, and others).
Names may be added to lists in error or malice.
This blew up summer of 2018 and carried over to other networks.
Some of the mechanics differ, but a similar situation playing out over informally shared Web / search-engine blocklists could have similar effects.
Isn't that pretty much a site map?
Systems such as lunr.js are closer in spirit to a site-oriented search index, though that's not how they're presently positioned, but instead offer JS-based, client-implemented site search for otherwise static websites.
Fail an audit, lose your reputation (ranking).
The basic principle of auditing is to randomly sample results. BlackHat SEO tends to rely on volume in ways that would be very difficult to hide from even modest sampling sizes.
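The sampling audit could look something like this sketch, with page fetching stubbed out by a dict and a seeded RNG for reproducibility (the names and the index format are invented for illustration):

```python
import random

def audit_index(claimed_index, fetch_page, sample_size=3, rng=None):
    """Spot-check a self-reported index: for a random sample of
    (keyword, url) claims, verify the keyword actually appears on
    the page. Returns the fraction of sampled claims that check out."""
    rng = rng or random.Random(0)  # seeded here for reproducibility
    claims = [(kw, url)
              for kw, urls in claimed_index.items() for url in urls]
    sample = rng.sample(claims, min(sample_size, len(claims)))
    ok = sum(1 for kw, url in sample if kw in fetch_page(url).lower())
    return ok / len(sample)

# Stub "web": one honest page, one keyword-stuffed claim.
pages = {"x.example/a": "All about gardening soil",
         "x.example/b": "Buy pills now"}
index = {"gardening": ["x.example/a", "x.example/b"]}
score = audit_index(index, pages.get, sample_size=2)
```

Because black-hat stuffing relies on volume, even a small sample drives the pass rate down fast, which is the point made above: fail the audit, lose the ranking.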
If a good site is on shared hosting will it always be dismissed because of the signal of the other [bad] sites on that same host? (you did say at DNS level, not domain level)
So, back to gopher? That might actually work!
How about we don't start looking at the /how/ a site is made, when it's already difficult to sort out the /what/ it is.
(Were you on mobile and using a smart keyboard?)
Sometimes it's pretty innocuous: a CMS updates every page with a new copyright notice at the start of each year. Other times it's less innocuous, where the page simply updates the "related links" or sidebar material and refreshes the content.
It is still an unsolved ranking-relevance problem when a student-written, 3-month-old description of how AM modulation works ranks higher than a 12-year-old, professor-written one. There isn't a ranking signal for 'author authority'. I believe it is possible to build such a system, but doing so doesn't align well with the advertising goals of a search engine these days.
(Disclaimer: I worked at Blekko.)
Is it possible that there is no site providing non-fluffy content for your query? For a lot of niche subjects, there really are very few, if any, substantial sources on the topic.
“Very few if any substantial”
The problem is that Google won't even show me the very few anymore. It's just fluff upon fluff, and depth (or at least real insight) is buried in Twitter threads, Reddit/HN comments, and GitHub issue discussions.
I fear the seo problem has not only killed knowledge propagation, but also thoroughly disincentivized smart people from even trying. And that makes me sad.
Yeah, if I know where I'm looking (the sites) then google is useful since I can narrow it to that domain. But if I don't know where to look then I'm SOL.
The serendipity of good results on Google.com is no longer there. And given the talent at google you have to wonder why.
And guess what: most users are normal. We here on HN are weird:
1. Something is or provides access to good quality content.
2. Because of this quality, it gets more and more popular.
3. As popularity grows, and commercialization takes over, the incentive becomes to make things "more accessible" or "appealing" to the "average" user. More users is always better right!?
4. This works, and quality plummets.
5. The thing begins to lose popularity. Sometimes it collapses into total unprofitability. Sometimes it remains but the core users that built the quality content move somewhere else, and then that new thing starts to offer tremendous value in comparison to the now low quality thing.
If only there were some kind of analog for effective ways to locate information. Like if everything were written on paper, bound into collections, and then tossed into a large holding room.
I guess it's past the Internet's event horizon now, but crawler-primary searching wasn't the only evolutionary path to search.
Prior to Google (technically: AdWords revenue funding Google) seizing the market, human-curated directories were dominant [1, Virtual Library, 1991] [2, Yahoo Directory, 1994] [3, DMOZ, 1998].
Their weaknesses were always maintenance cost (link rot), scaling with exponential web growth, and initial indexing.
Their strength was deep domain expertise.
Google's initial success was fusing crawling (discovery) with PageRank (ranking), where the latter served as an automated "close enough" approximation of human directory building.
Unfortunately, in the decades since, we seem to have forgotten how useful hand-curated directories were, in our haste to build more sophisticated algorithms.
Add to that that the very structure of the web has changed. When PageRank first debuted, people were still manually adding links to their friends' and other useful sites on their own pages. Does that sound like the link structure of the web now?
Small surprise results are getting worse and worse.
IMHO, we'd get a lot of traction out of creating a symbiotic ecosystem whereby crawlers cooperate with human curators, with both of their enriched outputs then fed through machine learning algorithms. I.e., a move back toward supervised web-search learning, versus the currently dominant unsupervised approach.
 https://en.m.wikipedia.org/wiki/World_Wide_Web_Virtual_Libra... , http://vlib.org/
 https://en.m.wikipedia.org/wiki/DMOZ , https://www.dmoz-odp.org/
You've also got the increase in resources needed (you need tons of staff for effective curation) and the issue of potential corruption to deal with (another thing which significantly affected the ODP's usefulness in its later years).
I think a more fundamental problem is a large portion of content production is now either unindexable or difficult to index - Facebook, Instagram, Discord, and YouTube to name a few. Pre-Facebook the bulk of new content was indexable.
YouTube is relatively open, but the content and context of what is being produced is difficult to extract, if only because people talk differently than they write. That doesn’t mean, in my opinion, that the quality of a YouTube video is lower than what would have been written in a blog post 15 years ago, but it makes it much more difficult to extract snippets of knowledge.
Ad monetization has created a lot of noise too, but I’m not sure there would be less noise without it; rather, it’s a profit-motive issue. For many, many searches I just go straight to Wikipedia and wouldn’t for a moment consider using Google.
Frankly I think the discussion here is way better than the pretty mediocre to terrible “case study” that was posted.
From a “knowledge-searching” perspective, at a very rudimentary level, it makes sense to look to sites/pages that are often cited (linked to) by others as better sources to rank higher up in the search results. It’s a similar concept to looking at how often academic papers are cited to judge how “prominent” of a resource they are on a particular topic.
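That citation heuristic is essentially PageRank. A minimal power-iteration sketch over a toy link graph (damping factor 0.85, as in the original PageRank paper; the graph itself is made up):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to.
    Returns an approximate PageRank score per page via power iteration.
    (No dangling-node handling: every page here has outlinks.)"""
    pages = set(links) | {p for ts in links.values() for p in ts}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

# "hub" is linked to by everyone, so it ends up ranked highest,
# which is exactly the citation-count intuition from academia.
ranks = pagerank({"a": ["hub"], "b": ["hub"], "hub": ["a"]})
```

And, exactly as with citation counts in academia, the moment rankings carry money or prestige, the incentive to manufacture "citations" (link farms, paid backlinks) follows.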
However, as with academia, even though this system could work pretty well for a long time at its best (science has come a long way over hundreds of years of publications), that doesn’t mean it’s perfect. There’s interference that could be done to skew results in one’s favor, there’s funneling money into pseudoscience to turn into citable sources, there’s leveraging connections and credibility for individual gain, - the list goes on.
The heuristic itself is not innately the problem. The incentive system that exists for people to use the heuristic in their favor creates the issue. Because even if a new heuristic emerges, as long as the incentive system exists, people will just alter course to try to be a forerunner in the “new” system and grab as big a slice of the pie as they can.
That’s a tough nut for google (or anyone) to crack. As a company, how could they actually curate, maintain, and evaluate the entire internet on a personal level while pursuing profitability? That seems near impossible. Wikipedia does a pretty damn good job at managing their knowledge base as a nonprofit, but even then they are always capped by amount of donations.
It’s hard to keep the “shit-level” low on search results when pretty much anyone, anywhere, anytime could be adding in more information to the corpus and influencing the algorithms to alter the outcomes. It gets to a point where getting what you need is like finding a needle in a haystack that the farmer got paid for putting there.
That's not actually the problem described here. His problem is actually a bit deeper rooted, since he specified the exact parameters of what he wants to see but got terrible results. He specified a search for "site:reddit.com", but the results he got were irrelevant and worse than the results he would have gotten searching Reddit directly.
I'm not denying that SEO, sites that copy content and only want to generate clicks, and large sites that aggregate everything are bad for the internet of today, but the level of results we get from search engines today is, in a word, abysmal.
> As you can see, I didn’t even bother clicking on all of them now, but I can tell you the first result is deeply irrelevant and the second one leads to a 4 year old thread.
He also wrote
> At this point I visibly checked on reddit if there’ve been posts about buying a phone from the last month and there are.
DuckDuckGo even recognized the date as being 4 years old, and Reddit doesn't hide the age of posts. There are newer, more fitting posts, but they aren't shown. And again, a quote:
> Why are the results reported as recent when they are from years ago, I don’t know - those are archived post so no changes have been made.
So your argument (though it really is a problem) is in this case a red herring. The problem lies deeper, since Google seems to be unable to do something as simple as extracting the right date, and DDG ignores it. Also, since all the results are years old, it adds to the confusion about why the results don't match the query. (He also wrote that the better matches were indeed indexed, but not found.)
The entirety of the problem is that the date query is not working, because of SEO for freshness. You also said this other wrong thing: "That's not actually the problem described here." That is the problem here. The page shows up as being updated because of SEO.
The date in a search engine is the date the page was last judged to have been updated, not one of the many dates that may be present on the page. When was the last Reddit design update? Do you think that didn't cause the pages to change? Wake up.
Wrong. Internal reddit search has bad results too, and why even let you filter by last month.
The first, and perhaps the most important, was page load speed. We adopted a slightly more complicated pipeline on the server side, reduced the amount of JS required by the page, and made page loading faster. That improved both the ranking and actual usability.
The second was that SEO people told us our homepage contained too many graphics and too little text, so search engines didn't extract much content from our pages. We responded by adding more text alongside the fancy eye-catching graphics. That improved both the ranking and the actual accessibility of the site.
I really wish everyone would qualify which kind, and not just black-hat vs. white-hat: there are many types of SEO, often with different intentions.
I understand there has been a lot of Google Kool-Aid (and others') about how SEO is evil, it's poisoning the web, etc...
But now (or has it been a couple of years already?) Google had a video come out saying an SEO firm is okay if it tells you it takes 6 months... They have upgraded their PageSpeed tool, which helps with SEO, and were quite public about wanting SSL/HTTPS on everything and about that helping with Google rankings.
So there are different levels of SEO. Someone mentioned an SEO plugin I was using on a site as being a negative indicator, and I chuckled: the only thing that plugin does is try to fix some of the inherent, obvious screwups of WordPress out of the box... things like missing meta descriptions, which Google flags in Webmaster Tools as multiple identical meta descriptions. It also tries to alleviate duplicate-content penalties by no-indexing archives, categories, and the like.
So there is SEO that tries to work with Google, and then there is SEO where someone goes out and puts comments on 10,000 websites for no reason other than ranking higher. To me that is kind of grey-hat if it's a few comments, but shady if it's hundreds, especially if it's automated.
But real blackhat stuff: hacking other websites and adding links, or a site selling breast-enlargement pills trying to catch people who type in a keyword for Britney Spears. That is trying to fool people.
I have built sites with good info and had to do things to make them prettier for the ranking bot, but they give the surfer what they're looking for when they type 'whatever phrase'. I have also made websites better while trying to get them to show up in the top results.
So it's not always seo=bad; sometimes seo=good, for the algorithm and the users.
And sometimes it's madness, like adding extra fluff to keep a visitor on the page longer to keep Google happy (recipes, haha). Many different flavors of it, and different intentions.
So, for example, DuckDuckGo is still trying to use various providers to emulate essentially "early Google without the modern Google privacy violations", but when I start to think about many of the most successful company-netizens, one thing that stands out is that early-days exclusivity has a major appeal.
So I imagine a search engine that crawls only the most useful websites and blogs and works on a whitelist basis. Instead of trying to order search results to push bad results down, just don't include them at all or give them a chance to taint the results. It would have more overhead, and it would take a certain amount of time to make sure it was catching non-major websites that are still full of good info... but once that was done it would probably be one of the best search experiences in existence. To streamline this, and I know it's cliche, surely some ML analysis could be applied to figuring out which sites are SEO-gaming or click-baiting regurgitators and weeding them out.
Just something I've been mulling over for a while now.
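A minimal sketch of the whitelist idea above: the crawler consults a curated allowlist and refuses everything off it, so bad sites never enter the index at all. The domain names here are placeholders, and a real allowlist would be a large, versioned, curated dataset rather than a hardcoded set.

```python
from urllib.parse import urlparse

# Hypothetical curated allowlist; stands in for a real, maintained dataset.
ALLOWED_DOMAINS = {"example-blog.org", "goodinfo.net"}

def should_crawl(url: str) -> bool:
    """Only crawl URLs whose host is an allowlisted domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

print(should_crawl("https://example-blog.org/post/1"))   # True
print(should_crawl("https://spam.example.com/buy-now"))  # False
```

The appeal of this design is that ranking spam down becomes unnecessary: exclusion happens before indexing, at the cost of the curation overhead the comment mentions.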
And what if what you're searching for exists only on a low-quality website? Isn't it better to show a result from a low-quality website than to show nothing?
Can you give some examples of queries/topics? Not that I disagree, I often have the same problem, but have found ways to mitigate.
Many websites do have "hacked" (blackhat/shady) SEO, but these websites do not last long, and are entirely wiped out (see: de-ranked) with every major algorithm update.
The major players you see on the top rankings today do utilize some blackhat SEO, but it's not at a level that significantly impacts their rankings. Blackhat SEO is inherently dangerous, because Google's algorithm will penalize you at best when it finds out -- and it always does -- and at worst completely unlist your domain from search results, giving it a scarlet letter until it cools off.
However, the bulk of all major websites primarily utilize whitehat SEO, i.e. "non-hacked," i.e. "Google-approved" SEO to maintain their rankings. They have to, else their entire brand and business would collapse, either from being out-ranked or by being blacklisted for shady practices.
Additionally, Google's algorithm hasn't changed much at all from PageRank, in the grand scheme of things. If you can read between their lines, the biggest SEO factor is: how many backlinks from reputable domains do you have pointing at your website? Everything else, including blackhat SEO, amounts to small optimizations for breaking ties. Sort of like PED usage in competitive sports; when you're at the elite level, every little bit extra can make a difference.
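The backlink intuition described here is essentially the original PageRank: a page is reputable in proportion to the rank of the pages linking to it. A toy power-iteration version (the three-page link graph is made up for illustration) looks like:

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy PageRank by power iteration; `links` maps page -> list of outbound links."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Every page keeps a baseline share, plus rank flowing in from backlinks.
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:
                continue
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

# Hypothetical three-page web: B collects the most backlinks, so it ranks highest.
graph = {"A": ["B"], "B": ["C"], "C": ["A", "B"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # B
```

This is of course only the 1998-era core; the point of the comment stands that everything layered on since acts mostly as tie-breaking on top of this backlink signal.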
Google's algorithm works for its intended purposes, which is to serve pages that will benefit the highest amount of people searching for a specific term. If you are more than 1 SD from the "norm" searching for a specific term, it will likely not return a page that suits you best.
Google's search engine is based on virality and pre-approval. "Is this page ranked highly by other highly ranked pages, and does this page serve the most people?" It is not based on accuracy or informational integrity -- as many would believe from the latest Medic update -- but simply "does this conform to normal human biases the most?"
If you have a problem with Google's results, then you need to point the finger at yourself or at Google. SEO experts, website operators, etc. are all playing a game that's set on Google's terms. They would not serve such shit content if Google did not: allow it, encourage it, and greatly reward it.
Google will never change the algorithm to suit outliers, the return profile is too poor. So, the next person to point a finger at is you: the user. Let me reiterate, Google's search engine is not designed for you; it is designed for the masses. So there is no logical reason for you to continue using it the way you do.
If you wish to find "deep enough" sources, that task is on you, because it cannot be readily or easily monetized; thus, the task will not be fulfilled for free by any business. So, you must look at where "deep enough" sources lay: books, journals, and experts.
Books are available from libraries, and a large assortment of them are cataloged online for free at Library Genesis. For any topic you can think of, there is likely to be a book that goes into excruciating detail that satisfies your thirst for "deep enough."
Journals, similarly. Library Genesis or any other online publisher, e.g NIH, will do.
Experts are even better. You can pick their brains and get even more leads to go down. Simply, find an author on the subject -- Google makes this very easy -- and contact them.
I'm out of steam, but I really felt the need to debunk this myth that Google is a poor, abused victim, and not an uncaring tyrant that approves of the status quo.
Does it? So for any product search, thrown-together comparison sites without actual substance but lots of affiliate links are really among the best results? Or maybe they are the most profitable result, and thus the one most able to invest in optimizing for ranking? Similarly, do we really expect results on (to a human) clearly hacked domains to be the best for anything, but Google will still put them in the top 20 for some queries? "Normal people want this crap" is a questionable starting point in many cases.
There is no "best result."
Any page falling under "thrown-together comparison sites without actual substance but lots of affiliate links" is a temporal inefficiency that gets removed after each major update.
Will more pop up? Yes, and they will take advantage of any inefficiency or edge-cases in the algorithm to boost their rankings to #1.
Will they stay there for more than a few months? No. They will be squashed out, and legitimate players will over time win out.
This is the dichotomy between "churn and burn" businesses and "long term" businesses. You will make a very lucrative, and quick, buck going full blackhat, but your business won't last and you will consistently need to adapt to each successive algo update. Meanwhile, long-standing "legit" businesses only need to maintain market dominance -- something much easier to do than breaking into the market from ground zero, which churn and burners will have to do in perpetuity until they burn out themselves.
If you want to test this, go and find 10 websites you think are shady, but have top 5 rankings for a certain search phrase. Mark down the sites, keyword, and exact pages linked. Now, wait a few months. Search again using that exact phrase. More likely than not, i.e. more than 5 out of 10 will no longer be in the top 5 for their respective phrases, and a couple of domains will have been shuttered. I should note that "not deep info" is not "shady," because the results are for the average person. Ex. WebMD is not deep, but it's not shady either.
I implore people to try and get a site ranked with blackhat tricks and lots of starting capital, and see just how hard it is to keep it ranked consistently using said tricks. It's easy to speculate and make logical statements, but they don't hold much weight without first-hand experience and observation.
This isn't true at all in my experience. As a quick test I tried searching for "best cordless iron", on the first page there is an article from 2018 that leads to a very broken page with filler content and affiliate links.  There are a couple of other articles with basically the exact same content rewritten in various ways also on the first page.
A quick SERP history check confirms that this page has returned in the top 10 results for various keywords since late 2018.
>It's easy to speculate and make logical statements, but they don't hold much weight without first-hand experience and observation.
This statement is a bit ironic given that it took me 1 keyword and 5 seconds of digging to find this one example.
Edit: thinking more about this post's specific issue, I'm not sure what to do if all the crawlers fail. Could always hook into the search apis for github, reddit, so, wiki, etc. Full shotgun approach.
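The "full shotgun" fallback mentioned here can be sketched as a fan-out aggregator: send the query to every backend in parallel and merge whatever comes back. The three searchers below are stand-ins; real ones would call the GitHub, Reddit, or Stack Overflow search APIs.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in searchers; real ones would hit the GitHub/Reddit/Stack Overflow APIs.
def search_github(q): return [f"github:{q}"]
def search_reddit(q): return [f"reddit:{q}"]
def search_wiki(q):   return [f"wiki:{q}"]

def shotgun_search(query, backends):
    """Fan the query out to every backend in parallel and merge the result lists."""
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(lambda fn: fn(query), backends)
    return [hit for hits in result_lists for hit in hits]

print(shotgun_search("segfault", [search_github, search_reddit, search_wiki]))
```

Because `ThreadPoolExecutor.map` preserves input order, results come back grouped by backend; a real aggregator would then need some cross-backend ranking or interleaving step.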
> The real problem with search engines is the fact that so many websites have hacked SEO that there is no meritocracy left.
I intend to announce the alpha test of my search engine here on HN. My search engine is immune to all SEO.
> I can possibly not find anything deep enough about any topic by searching on
In simple terms my search engine gives users content with the meaning they want and in particular stands to be very good, by far the best, at delivering content with "deep" meaning.
> I need something better.
> However, it will be interesting to figure the heuristics to deliver better quality search results today.
Uh, sorry, it's not fair to say that my search engine is based on "heuristics". I'm betting on my search engine being successful and would have no confidence in heuristics. Instead of heuristics I took some new approaches:
(1) I get some crucial, powerful new data.
(2) I manipulate the data to get the desired results, i.e., the meaning.
(3) The search engine likely has by far the best protections of user privacy. E.g., search results are the same for any two users doing the same query at essentially the same time and, thus, in particular, independent of any user data.
(4) The search engine is fully intended to be safe for work, families, and children.
For those data manipulations, I regarded the challenge as a math problem and took a math approach complete with theorems and proofs. The theorems and proofs are from some advanced, not widely known, pure math with some original applied math I derived. Basically the manipulations are as specified in math theorems with proofs.
> A new breakthrough heuristic today will look something totally different, just as meritocratic and possibly resistant to gaming.
My search engine is "something totally different".
My search engine is my startup. I'm a sole, solo founder and have done all the work. In particular I designed and wrote the code: it's 100,000 lines of typing using Microsoft's .NET.
The typing was without an IDE (integrated development environment) and, instead, was just into my favorite general purpose text editor.
It's my first Web site: I got a good start on Microsoft's .NET and ASP.NET (for the Web pages) from Jim Buyens, Web Database Development, Step by Step, .NET Edition, ISBN 0-7356-1637-X, Microsoft Press.
The code seems to run as intended. The code is not supposed to be just a "minimum viable product" but is intended for first production to peak usage of about 100 users a second; after that I'll have to do some extensions for more capacity. I wrote no prototype code. The code needs no refactoring and has no technical debt.
While users won't be aware of anything mathematical, I regard the effort as a math project. The crucial part is the core math that lets me give the results. I believe that that math will be difficult to duplicate or equal. After the math and the code for the math, the rest has been
Ah, venture capital and YC were not interested in it! So I'm like the story "The Little Red Hen" that found a grain of wheat, couldn't get any help, then alone grew that grain into a successful bakery. But I'm able to fund the work just from my
The project does seem to respond to your concerns. I hope you and others like it.
How should I announce the alpha test here?
More details are in my now old HN post at
For a short answer, SEO has to do with keywords. My startup has nothing to do with keywords or even the English language at all. In particular, I'm not parsing the English language or any natural language; I'm not using natural language understanding techniques. In particular, my Web pages are so just dirt simple to use (user interface and user experience) that a child of 8 or so who knows no English should be able to learn to use the site in about 15 minutes of experimenting and about three minutes of watching someone use the site, e.g., via a YouTube video clip of screen captures. E.g., lots of kids of about that age get good with some video games without much or any use of English.
E.g., how are you and your spouse going to use keywords to look for an art print to hang on the wall over your living room?
Keyword search essentially assumes (1) you know what you want, (2) know that it exists, and (3) have keywords that accurately characterize it. That's the case, and commonly works great, for a lot of search, enough for a Google, Bing, and more in, IIRC, Russia and China. Also it long worked for the subject index of an old library card catalog.
But as in the post I replied to, it doesn't work very well when trying to go "deep".
Really, what people want is content with some meaning they have at least roughly in mind, e.g., a print that fits their artistic taste, sense of style, etc., for over the living room sofa. Well, it turns out there's some advanced pure math, not widely known, and still less widely really understood, for that.
Yes I encountered a LOT of obstacles since I wrote that post. The work is just fine; the obstacles were elsewhere. E.g., most recently I moved. But I'm getting the obstacles out of the way and getting back to the real work.
Yes, but capturing meaning mathematically is somewhat of an unsolved problem in mathematics, linguistics, and semiotics alike. Your post claims you have some mathematics but (obviously, as it's a secret) doesn't explain what.
SEO currently relies on keywords, but SEO as a practice is humans learning. There is a feedback loop between "write page", "user types string into search engine" and "page appears at certain rank in search listing". Humans are going to iteratively mutate their content and see where it appears in the listing. That will produce a set of techniques that are observed to increase the ranking.
I've been successful via my search math. For your claim, as far as I know, you are correct, but actually that does not make my search engine and its math impossible.
> That will produce a set of techniques that are observed to increase the ranking.
Ranking? I can't resist, to borrow from one of the most famous scenes in all of movies: "Ranking? What ranking? We don't need no stink'n ranking".
Nowhere in my search engine is anything like a ranking.
People pay tens or even hundreds of thousands of dollars to move their result from #2 to #1 in the list of Google results.
My user interface is very different from Google's, so different there's no real issue of #1 or #2.
Actually that #1 or #2, etc. is apparently such a big issue for Google, SEO, etc. that it suggests a weakness in Google, one that my work avoids.
You will see when you play a few minutes with my site after I announce the alpha test.
Google often works well; when Google works well, my site is not better. But the post I responded to mentions some ways Google doesn't work well, and for those and some others my site stands to work much better. I'm not really in direct competition with Google.
His post was interesting to me since it mentioned some of the problems that I saw and that got me to work on my startup. And my post might have been interesting to him since it confirms that (i) someone else also sees the same problems and (ii) has a solution on the way.
For explaining my work fully, maybe even going open source, lots of people would say that I shouldn't do that. Indeed, that anyone would do a startup in Internet search seems just wack-o since they would be competing with Google and Bing, some of the most valuable efforts on the planet.
So that my efforts are not just wack-o, (i) I'm going for a part of search, e.g., solving the problems of rahulchhabra07, not currently solved well; (ii) my work does not really replace Google or Bing when they work well, and they do, what, some billions of times a second or some such?; (iii) my user interface is so different from that of Google and Bing that at least first cut my work would be like combining a racoon and a beaver into a racobeaver or a beavorac; and (iv) at least to get started I need the protection of trade secret internals.
Or, uh, although I only just now thought of this, maybe Google would like my work because it might provide some evidence that they are not all of search and don't really have a monopoly, an issue in recent news.
The only way SEO could be impossible is if there was no capacity to change search ranking no matter what - which would be both useless and impossible.
Google has become a not very useful search - certainly not the first place I go when looking for anything except purchases. They've broken their "core business".
- Confluence: Native search is horrible IME
- Microsoft Help (Applications): .chm files Need I say more.
- Microsoft Task Bar: Native search okay and then horrible beyond a few key words and then ... BING :-(
- Microsoft File Search: Even with full disk indexing (I turned it on) it still takes 15-20 minutes to find all jpegs with an SSD. What's going on there?
- Adobe PDFs: Readers all versions. What? You mean you want to search for TWO words. Sacrilege. Don't do it.
Seriously though, with all the interview code tests (bubble sort, quick sort, bloom filters, etc.), why can't companies or even websites get this right?
And I agree with other commenters as far as Google, Bing, DDG, or other search sites it's been going down hill but the speed of uselessness is picking up.
The other nagging problem (at least for me) is that explicit searches which used to yield more relevant results now are front loaded with garbage. If I'm looking for a datasheet on an STM (STMicroelectronics) chip and I start the search with STM, as of today STM is no longer relevant (it is, meaning it shows up after a few pages). But wow, it seems like the SEOs are winning, but companies that use this technique won't get my business.
"T": LaTeXIT.app (an app I have used fewer than a dozen times in two years)
"E": Electrum.app (how on earth??)
"G": telemetry.app (an app which cannot even be run)
"RAM" : Telegram
Similar experience searching for most apps, files, and words. It's horrendous.
MacOS Mojave 10.14.6 on a MacBook Pro (Retina, 15-inch, Mid 2015)
Adobe wants to have you by your balls the moment you install their installer :-) I keep a separate computer for Adobe stuff just for that reason. Actually to run some MS junk too.
>Seriously though with all the interview code tests bubble sort, quick sort, bloom filters, etc. Why can't companies or even websites get this right?
I have seen some of the stinkiest stuff created by people who will appear smartest in any test these companies can throw at them. Some people are always gambling/gaming and winging it. They leave a trail...unfortunately.
I use this software utility called Search Everywhere; it's surprisingly good, fast, and fairly accurate most of the time :)
Does turning it off speed it up? I think disk indexing (the way Windows does it) is a remnant from HDD times, and might make things worse when used together with a modern SSD.
> Adobe PDFs: Readers all versions. What? You mean you want to search for TWO words. Sacrilege. Don't do it.
If you're just viewing and searching PDFs (and don't have to fill out PDF forms on a regular basis), check out SumatraPDF. Fastest PDF reader on Windows I've come across so far.
What is going on there? I'm working on a file system indexer in golang and to walk and parse extension to a mimetype runs at several thousand images a second, over NFS. Windows is full of lots of headscratchers "why is this taking so long?"
I have no answers for Microsoft File Search, it never returns any results for me, I wonder if they even tested it sometimes.
Pasting stack traces and error messages. Needle in a haystack phrases from an article or book. None of it works anymore.
Does this mean they are ripe for disruption or has search gotten harder?
I cannot fathom the number of times I've pasted an error message enclosed by quotes and got garbage results, and then an hour of troubleshooting and searching later I come across a Github/bugtracker issue, which was nowhere in the search results, where the exact error message appears verbatim.
The garbage results are generally completely unrelated stuff (a lot of Windows forum posts) or pages where a few of the words, or similar words, appear. Despite the search query being a fixed string, not only does Google fail to find a verbatim instance of it, but instead of admitting this, they return nonsense results.
> Needle in a haystack phrases from an article or book.
I can confirm this part as well, searching for a very specific phrase will generally find anything but the article in question, despite it being in the search index.
Zero Recall Search.
Did you put the error message in quotes? I've never had this problem.
A mock-google that excludes quora and optionally targets stackoverflow/github sounds useful.
Maybe also some quality decline in their gradual shift to less hand weighted attributes and more ML.
I further suppose a lot of that is that The Masses(tm) don't use Google like I do. I put in key words for something I'm looking for. I suspect that The Masses(tm) type in vague questions full of typos that search engines have to try to parse into a meaningful search query. If you try to change your search engine to caters to The Masses(tm), then you're necessarily going to annoy the people that knew what they were doing, since the things that they knew how to do don't work like they used to (see also: Google removing the + and - operators).
For those "needle in a haystack" type queries, instead of pages that include both $keyword1 and $keyword2, I often get a mix of the top results for each keyword. The problem is compounded by news sites that include links to other recent stories in their sidebars. So I might find articles about $keyword1 that just happen to have completely unrelated but recent articles about $keyword2 in the sidebar.
It also appears that Google and DDG both often ignore "advanced" options like putting exact phrase searches in quotation marks, using a - sign to exclude keywords, etc.
None of this seems to have cut down on SEO spam results either, especially once you get past the first page or two of results.
I suspect it all comes down to trying to handle the most common types of queries. Indeed, if I'm searching for something uncomplicated, like the name of the CEO of a company or something like that, the results come out just fine. Longtail searches probably aren't much of a priority, especially when there's not much competition.
So... is there an internal service at Google that works correctly but they're hiding from the world?
It might be useful for Google to make different search engines for different types of people. The behaviors of people are probably multi-modal, rather than normally distributed along some continuum where you should just assume the most common behavior and preferences.
It would even be easier to target ads...
Or maybe this doesn't exist and spam is too hard.
You just described how YouTube's search has been working lately. When you type in a somewhat obscure keyword - or any keyword, really - the search results include not only the videos that match, but videos related to your search. And searches related to your keywords. Sometimes it even shows you a part of the "for you" section that belongs to the home page! The search results are so cluttered now.
I got down to one with "qwerqnalkwea"
"AEWRLKJAFsdalkjas" returns nothing, but youtube helpfully replaces that search with the likewise nonsensical "AEWR LKJAsdf lkj as" which is just full of content.
Yeeaap, sometime in gradeschool - I think somewhere around 5th grade, age 11 or so, which would be around 1999 - we had a section on computers, where we'd learn the basics about how to use them. One of the topics I remember was "how to do web searches", where a friend was surprised at how easily I found what I was looking for - the other kids had to be trained to use keywords instead of asking it questions.
1. My work got some attention at CES so I tried to find articles about it. Filtering for items that were from the last X days and searching for a product name found pages and pages of plagiarized content from our help center. Loading any one of the pages showed an OS appropriate fake “your system is compromised! Install this update” box.
What’s the game here? Is someone trying to suppress our legit pages, or piggybacking on the content, or is that just what happens now?
2. I was looking for some OpenCV stuff and found a blog walking through a tutorial - except my spidey sense kept going off because the write up simply didn’t make sense with the code. Looking a bit further I found that some guys really well written blog had been completely plagiarized and posted on some “code academy tutorial” sort of site - with no attribution. What have we come to?
Someone mentioned that the sites that have the answer are typically buried in the results. That's because they tend to favor big brands and authoritative sites. And those sites oftentimes don't have the answer to the search query.
Google’s results have gotten worse and worse over the years.
Was it Panda update or that one plus the one after - it took out so much of the web and replaced it with "better netizens" who weren't doing this bad thing or that bad thing.
Several problems with that - 1 - they took out a lot of good sites. Many good sites did things to get ranked and did things to be better once they got traffic.
The overbroad ban hammer took many down - and many people that likely paid an seo firm not knowing that seo firms were bad in google's eyes (at the time) - so lots of mom and pops and larger businesses got smacked down and put out of the internet business - just like how many blogs have shut down.
Of course local results taking a lot of search space and the instant answers (50% of searches never get a click because Google gives them the answer right on the results page, often stolen from a site) are compounding this.
They tried having the disavow tool to make amends - but the average small business doesn't know about these things, and getting help on the webmaster forum is a joke if you are tech inclined, imagine what an experience it is for small business owners.
I miss the days of Matt Cutts warning people "get your Press Releases taken down or nofollowed or it's gonna crush you soon" - problem is most of the people who were profiting from no-longer-allowed seo techniques were not reading Matt's words.
I also appreciated his saying 'tell your users to bookmark you, they may not find you in google results soon' - yeah, at least we were warned about it.
The web has not been the same since those updates, and it's gotten worse since. This does help adwords sell and the big companies that can afford them though.
In these ways google has been kind of like the walmart of the internet, coming in, taking out small businesses, taking what works from one place and making it cheap at their place.
I'd much rather have the results of pre-Penguin and let the surfers decide by choosing to remain on a site that may be good that also had press releases and blog links... rather than losing all the sites that had links on blogs. I am betting most of the users out there would prefer the results of days past as well.
When I switched, about a year and a half ago, I felt like I was switching to a lesser quality search engine (it was an ethical choice and done because I can), that, however, gradually and constantly got better, whereas Google went the opposite path.
Nowadays I only really use Google to leech bandwidth off their maps services. Despite there being a very good alternative available, OpenStreetMap, they unfortunately appear to have limited (or at least, way less than Google) bandwidth at their disposal... A pity though, because their maps are so awesome; the bicycle map layer with elevation lines is any boy scout's wet dream... but yeah, to find the nearest hairdresser, Google'll do.
Speaking of bandwidth and OSM reminds me, is there an "SETI-but-for-bandwidth-not-CPU-cycles" kind of thing one could help out with? Like a torrent for map data?
EDIT: Maybe their bandwidth problems are also more the result of a different philosophy about these things. OSM is likely "Download your own offline copy, save everybody's bandwidth and resources" (highly recommended for smartphones, especially in bandwidth-poor Germany) whereas Google is "I don't care about bandwith, your data is worth it".
OSM used to have tiles@home, a distributed map rendering stack, but that shut down in 2012. There is currently no OSM torrent distribution system, but I'd like to set that up.
But instead Google wanted to make things less strict, less semantic, harder to search, and easier to author whatever the hell you wanted. I'm sure it has nothing to do with making it difficult for other entrants to find their way into search space or take away ad-viewing eyeballs. It was all about making HTML easy and forgiving.
It's a good thing they like other machine-friendly semantic formats like RSS and Atom...
"Human friendly authorship" was on the other end of the axis from "easy for machines to consume". I can't believe we trusted the search monopoly to choose the winner of that race.
I think in this case semantic web would not work, unless there was some way to weed out spam. There are currently multiple competing microdata formats out there than enable you to specify any kind of metadata but they still won't help if spammers fill those too.
Maybe some sort of webring of trust where trusted people can endorse other sites and the chain breaks if somebody is found endorsing crap? (as in, you lose trust and everybody under you too)
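The cascading-revocation idea sketched here (lose trust and everybody under you loses it too) can be modeled as reachability in an endorsement graph: a site is trusted only if there is an endorsement chain from a root that passes through no banned endorser. The graph and site names below are hypothetical.

```python
def trusted(endorsements, roots, banned):
    """Sites reachable from a trusted root via endorsement edges, where a
    banned endorser cuts off everything endorsed only through it."""
    ok = set()
    stack = [r for r in roots if r not in banned]
    while stack:
        site = stack.pop()
        if site in ok:
            continue
        ok.add(site)
        for child in endorsements.get(site, []):
            if child not in banned:
                stack.append(child)
    return ok

# Hypothetical webring: root endorses a and b; a endorses spam.
graph = {"root": ["a", "b"], "a": ["spam"], "b": []}
print(trusted(graph, roots=["root"], banned={"a"}))  # {'root', 'b'}
```

Banning `a` for endorsing crap automatically drops `spam` as well, which is exactly the chain-breaking behavior proposed; the open question the replies raise is how the root set gets chosen without recreating a gatekeeper.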
That's not so hard. It's one of the first problems Google solved.
PageRank, web of trust, pubkey signing articles... I'd much rather tackle this problem in isolation than the search problem we have now.
The trust graph is different from the core problem of extracting meaning from documents. Semantic tags make it easy to derive this from structure, which is a hard problem we're currently trying to use ML and NLP to solve.
HTML has a lot of structure already (for example all levels of heading are easy to pick out, lists are easy to pick out), and Google does encourage use of semantic tags (for example for review scores, or author details, or hotel details). For most searches I don't think the problem lies with being able to read meaning - the problem is you can't trust the page author to tell you what the page is about, or link to the right pages, because spammers lie. Semantic tags don't help with that at all and it's a hard problem to differentiate spam and good content for a given reader - the reader might not even know the difference.
What prevents spammers from signing articles? How do you implement this without driving authors to throw their hands in the air and give up?
But that's hierarchical in a very un-web-y way... Hm.
And that has worked... quite fine. I have no objections (maybe they're a bit too liberal with the new TLDs).
Most of the stuff that makes the hierarchies seem bad are actually faults of for-profit organizations (or other unsuited people/entities) being at the top, and not just that someone is at the top per se. In fact, in my experience, and contrary to popular expectation, when a hierarchy works well, an outsider shouldn't actually be able to immediately recognize it as such.
Can you explain in technical details what you think was lost by Google launching a browser or what properties were unique to XHTML?
Everything you listed above is possible with HTML5 (see e.g. schema.org) and has been for many years so I think it would be better to look at the failure to have market incentives which support that outcome.
You don't think we'd have rich tooling to support it and make it easy to author?
Once people are using it with success, others will follow.
(Of course that won't ever happen, but that's what would be needed.)
EDIT: I don't know why this being downvoted. This is a genuine question to understand if the problem is the size of the index or the fuzzing matching that search engines do.
- fixing search (it has become more and more broken since 2009, possibly before. Today it works more or less like their competitors worked before: a random mix of results containing some of my keywords.)
- fixing ads (Instagram should have way less data on me and yet manages to present me with ads that I sometimes click instead of ads that are so insulting I go right ahead and enable the ad blocker I had forgotten.)
- saving Reader
tbh, it's one of those "Are you sure you're not an idiot?" replies.
They not only disregard quotes but also their own verbatim setting.
- Establish the behaviour as documented.
- In representing and demonstrating this to others.
It's not that I doubt your word, but that I'd like to see a developed and credible case made. Because frankly that behaviour drives me to utter frustration and distraction. It's also a large part of the reason I no longer use, nor trust, Google Web Search as my principle online search tool.
That said, I might have something on an old blog somewhere. I'll see if I can find it before work starts...
Edit: found it here http://techinorg.blogspot.com/2013/03/what-is-going-on-with-... . It is from 2013 and had probably been going on for a while already at that point.
Edit2: For those who are still relying on Google, here's a nice hack I discovered that I haven't seen mentioned by anyone else:
Sometimes you might feel that your search experience is even worse than usual. In those cases, try reporting anything in the search results and then retry the same search 30 minutes later.
Chances are it will now magically work.
It took quite a while for me to realize this and I think in the beginning I might not have realized how fast it worked.
It seemed totally unrealistic, however, that a fix would have been created and a new version deployed in such a short time, so my best explanation is that they are A/B-testing some really dumb changes and then pulling whoever complains out of the test group.
Thinking about it this might also be a crazy explanation for why search is so bad today compared to ten years ago:
There's no feedback whatsoever so most sane users probably give up talking to the wall after one or two attempts. This leaves them with the impression that everyone is happy, so they just continue on the path back to becoming the search engines they replaced.
I'm getting both mixed experiences and references myself looking into this. Which is if anything more frustrating than knowing unambiguously that quoting doesn't work.
I've run across numerous A/B tests across various Google properties. Or to use the technical term: "user gaslighting".
Search disrupted catalogs. What will disrupt search?
Not joking I have a feeling subject specific topics will be further distributed based on expertise & trust.
I think those are called books. ;-)
1. I don’t like the UI anymore. I preferred the condensed view, with more information and less whitespace.
2. Popping up some kind of menu when you return from a search results page shifts down the rest of the items, resulting in me clicking search links I am not interested in.
3. It tries to be smarter than me, but fails to understand what I am searching for. And by “understanding” I basically mean honoring what I typed instead of replacing it with other words.
I try to use DDG more often but Google gives me the best results most of the time if I put in more time.
This functionality has literally never helped me during a search. Not once.
with the url buried somewhere in the GET params.
Google is not a search company, they are an advertising company. The more searches you make, the more revenue they make. Their goal is to get you searching quickly and often. As long as you keep using their platform, the more you search, the better for them.
Historically, at least, they sold subscriptions.
Hsbc demanded The Telegraph pull negative stories or they would pull their advertising.
All newspapers are full of stories about property "investment" and have a separate segment once a week for paid property advertising.
Prior to that there were papers which did indeed make their money from subscriptions. But their content was different as well: explicitly ideological and argumentative. The NYT or Wapo idea of neutral journalism was a later development.
Paid search engine that ranks sites based on how often the users click results from that site (and didn't bounce, of course). The fact that it's paid prevents sybil attacks (or, at least, turns a sybil attack into a roundabout way of buying ads).
Of course, at this point, you are now the product even though you paid. But it's a tactic that worked for WoW for ages.
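The click-without-bounce scoring described above can be sketched in a few lines. The log format and the weighting here are assumptions for illustration, not anything a real engine is known to use:

```python
from collections import defaultdict

def rank_sites(click_log):
    """Rank sites by clicks that did not bounce.

    click_log: iterable of (site, bounced) tuples, where `bounced`
    is True if the user quickly returned to the results page.
    (Hypothetical log format for illustration.)
    """
    clicks = defaultdict(int)
    good = defaultdict(int)
    for site, bounced in click_log:
        clicks[site] += 1
        if not bounced:
            good[site] += 1
    # Score: fraction of clicks that stuck, weighted by their volume,
    # so a site needs both a good ratio and real traffic to rank high.
    scores = {s: good[s] / clicks[s] * good[s] for s in clicks}
    return sorted(scores, key=scores.get, reverse=True)

log = [("a.com", False), ("a.com", False), ("b.com", True),
       ("b.com", False), ("c.com", True)]
print(rank_sites(log))  # ['a.com', 'b.com', 'c.com']
```

A real system would of course need to defend this signal against click farms, which is where the paid-account idea comes in.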
So having a paid search engine does not fully solve the SEO problem. Having no ads does not remove the SEO incentive to boost pages to the top.
Getting this hypothetical site to get users is the real problem. Same thing with getting users to a Facebook alternative.
There were search engines around but when Google came out with superior search results everyone switched and those search engines quickly vanished. There are search engines in direct competition with Google today. If Google does not provide the best service there's an extremely low switching cost. Bing, Baidu, DuckDuckGo et al would be happy to take your traffic.
Maybe if you want to search reddit, the best search engine is the search bar on reddit.com.
Which is... reasonable.
Google COULD offer more time-machine features and perform diffing on pages. But a reddit "page" will always have content changes, as everything is generated from a database and kept fresh on the page. The ONLY signal Google could therefore use would be whatever meta tag or header that reddit provides.
That doesn't explain why Google lists the old search results as being from this month, while Duck correctly lists them as being from years past.
I've always wondered how search engines get hold of timestamps. Locally, with a cached sample like I explained above? By parsing a page's content or some metadata? It's not like the HTTP protocol sends me a "file created/last modified" date along with the payload, does it?
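For what it's worth, HTTP responses can optionally carry a `Last-Modified` header, though dynamically generated pages often omit it or set it to the time of the request, which is presumably why crawlers can't lean on it alone. A minimal sketch of parsing one (the header value here is just an example):

```python
from email.utils import parsedate_to_datetime

# HTTP responses *can* carry a Last-Modified header, but it's optional,
# and database-driven pages often omit it or stamp it with "now",
# which makes it useless as an age signal for a crawler.
headers = {"Last-Modified": "Wed, 21 Oct 2015 07:28:00 GMT"}  # example value

if "Last-Modified" in headers:
    dt = parsedate_to_datetime(headers["Last-Modified"])
    print(dt.year)  # 2015
```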
That could explain the first screenshot, but definitely not the second, where google has it tagged as years old.
One function takes the output of the page and renders it so that only what's user-visible actually gets indexed. So no headers, no JSON data, no nothing, unless it's actually in the final outcome of the page when rendered. This would require jsdom or some other DOM implementation. Hardly hard for Google (Chrome) to achieve, and it's been done multiple times.
The second function does the same call twice, passing the page to the first function each time, then compares the two. If you make two calls right next to each other and some data is different, you discard that from your search index. Instead you only index data that appears in both calls.
Now you don't have the issue of "dynamic content" anymore...
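The two-call idea above can be sketched roughly like this; `fake_page` is a toy stand-in for a real fetch-and-render step:

```python
def stable_text(render):
    """render: a callable returning the user-visible text of a page.
    Fetch twice and keep only the lines present in both renders,
    discarding dynamic content (timestamps, ads, vote counts)."""
    first = set(render().splitlines())
    second = set(render().splitlines())
    return first & second

# Toy page whose last line changes on every request, like a
# "served at" footer or a rotating ad slot.
counter = iter(range(100))
def fake_page():
    return f"Best phone thread\nGreat article body\nserved at t={next(counter)}"

print(stable_text(fake_page))  # only the two stable lines survive
```

Line-level diffing is crude (a single edit anywhere in a paragraph drops the whole line), so a real implementation would probably diff at the DOM-node or token level, but the principle is the same.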
But I do like your idea.
To go a bit further on your idea - you could apply machine learning to analyse the changes. So for example, ML could determine what is probably the "content area" of the page simply by having built out a NN for each website that self-expires the training data at about 1 month (to account for redesigns over time).
The major problem will still be "ads" in the middle of the content, especially odd scroll designs ads that have a different "picture" at each scroll position, as well as video ads that are likely to be different at each screen shot.
Another form of ad being the "linked" words like when you see random words of the paragraphs becoming links that go to shitty websites that define the word but show a bunch of other ads.
I suppose Google could simply install uBlock in its training-data collector harness to help with that stuff. >:)
Obviously, it doesn't need to consider the upvotes directly but maybe the text inside the page. Or the date.
Initially I had the impression that search was hard to implement. However, spending a work week figuring it out with ElasticSearch, Solr and Sphinx changed my mind. Getting the solution to work with the scale of the website would take more work, but all the know-how is there, and they could put a whole team to the task for a month.
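The core data structure behind those engines, an inverted index, really is small; what Elasticsearch, Solr and Sphinx add on top is text analysis, relevance scoring and scale. A toy sketch:

```python
from collections import defaultdict

class TinyIndex:
    """Minimal inverted index: the core data structure behind
    Elasticsearch/Solr/Sphinx, minus analysis, scoring and scale."""
    def __init__(self):
        self.postings = defaultdict(set)  # token -> set of doc ids
        self.docs = {}

    def add(self, doc_id, text):
        self.docs[doc_id] = text
        for token in text.lower().split():
            self.postings[token].add(doc_id)

    def search(self, query):
        # AND semantics: return docs containing every query token.
        sets = [self.postings[t] for t in query.lower().split()]
        return set.intersection(*sets) if sets else set()

idx = TinyIndex()
idx.add(1, "best phone to buy")
idx.add(2, "piano repair guide")
print(idx.search("phone buy"))  # {1}
```

Ranking the matches well is the genuinely hard part, which is exactly what the comment suggests dedicating a team to.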
Google does use it, last time I used it there was even tooling for it in the 'check how my site is to crawl' console, whatever it's called.
Yet I haven't seen even one instance of this anywhere. :(
If Google gives you what you want straight away then you leave; sure, you come back, but they want to be bad enough to keep you on there and good enough to be better than other searches. Their reach and resources cure the latter.
I think, unfortunately, this kind of curated, social approach to search will never be compatible with monetization by ads. I'm not quite sure how to make a search engine profitable without significantly distorting its results. Maybe, depressingly, Google is the best thing possible given the constraint of making a profit?
I guess whatever sauce Google applies to the query maybe works better according to some metrics for some users, but it is a source of endless frustration for me.
A second problem is that now that Google likes to rudely assume it knows what you want, i.e. ignoring quotes and negation in search or even modifying keywords, it's even harder to find what you want, especially if it's not on the first page or two of results. Because of this interference, even changing your search parameters doesn't change the results much and you see essentially the same links. What I'd like to see is a search engine that will do a delta between, say, Google and Bing and drop the links common to the two services. This might lead to uncovering the more esoteric or hidden links buried by the assumptions of the algorithm.
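The delta idea boils down to a set difference over result URLs. A sketch, with hypothetical result lists standing in for real API calls:

```python
def uncommon_results(results_a, results_b):
    """Return the results unique to each engine, preserving order.
    results_a / results_b: lists of URLs from two search engines
    (hypothetical data; actually fetching them is out of scope)."""
    common = set(results_a) & set(results_b)
    return [u for u in results_a + results_b if u not in common]

google = ["https://a.example", "https://b.example", "https://c.example"]
bing = ["https://b.example", "https://d.example"]
print(uncommon_results(google, bing))
# ['https://a.example', 'https://c.example', 'https://d.example']
```

In practice you'd also want to normalize URLs (trailing slashes, http vs https, tracking parameters) before comparing, or the "common" set will be nearly empty.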
Finally, a last problem that I see right now is the filter bubble effect. I had to search for how to spell "kardashians" in the above paragraph. Now my searches and ads for the next 2-3 weeks will be poisoned by articles or ads about the Kardashians. Taking one for team to make my point, I suppose.
LOL, was that by chance the time that single from Green Day was released? ;)
["Eternal September or the September that never ended is Usenet slang for a period beginning in September 1993"]: https://en.wikipedia.org/wiki/Eternal_September
EDIT: TIL Green Day's single has nothing to do with that September. Huh. I gave them more nerdcred than they deserved...
Want to get in touch with someone? name job site:linkedin.com. Want to find how to solve a tech issue? "issue" site:stackoverflow.com, and so on.
Google's search of sites like this is pretty good (although the recency working well would be really good... but perhaps impossible to solve well), and often better than in site search. But that + very basic fact finding "how much is a lb in kg", "where is restaurant X" are pretty much all I feel it's good for. Then again, I guess it's not supposed to be an encyclopaedia (or it can be, site:wikipedia.org!)
I don't know what that is, but there needs to be paradigmatic change.
best cell phone to buy site:reddit.com
Honestly, my second theory is that google knows exactly when a given reddit article was posted but doesn't trust the user to judge the relevancy of that information. Which might even be reasonable in many, many cases. But it's also annoying. I'm definitely seeing a trend towards "editorializing" search results on Google, where often the first half of the page isn't even websites anymore but some random info box and whatnot, and your search terms are interpreted very liberally, even when in quotes. It's one of those things that is probably better for 99% of users/uses but super annoying when you actually want precise results.
[reddit phone to buy]
What is astonishing is that the most obvious part of the non-progress in search over the last two decades is just accepted as the starting point to the way things are.
I haven't noticed it before, is it a recent bug? It certainly seems a significant one, but not representative of my experience using Google - which I also acknowledge is skewed according to the data they have on you and SEO gaming.
But generally - those constraints acknowledged - I still find Google's search to be one of the modern wonders of the world, still the go to, and - yes - not perfect.
As a contrast, take a look at how Diffbot, an AI system that understands the content of the page by using computer vision and NLP techniques on it, interprets the page in question:
It can reliably extract the publication date on each post, without resorting to using site-specific rules. (You can try it on other discussion threads and article pages, that have a visible publication date).
- Spammers had basically two ways to verify their efficacy – they could either sign up to every provider under the stars and test each email with each of them individually, or they could use the absence of a signal as "proof" of being caught by a filter. But neither of these are very efficient. An SEO expert can simply wait for the search engine to detect their changes and verify the result with two or three search engines quickly and automatically.
- For practical purposes whether an email is spam is answered in a binary form: either it ends up in your spam box or it does not. Removing spam-looking things from search results entirely would be devastating for any site victim of a false positive. And how do you implement the equivalent of a spam box in a search engine in a useable way?
- Spam filtering was implemented in different ways on every mail provider, so the bar to entry was "randomized" and spammers would have to be quite careful to pass the filters on a large subset of providers. ISPs and users currently have nowhere near the resources to implement their own ranking rules, but maybe this could be a solution in the mid to long term with massively cheaper hardware.
I was searching for "what phone should i buy in 2020 site:reddit.com" and while a few results were from a year ago, most were from January 2020.
what phone should i buy nov 2019 site:reddit.com
Only one link on the first page, the wikipedia entry for "piano", had anything to do with pianos (i.e., the instrument invented in Italy 300+ years ago that has hammers, strings, and an iron frame).
So I'm curious whether the issue is that there are too many shopping/business related pages (which is fair, but at least those seem to be piano related), or whether you're getting something completely different.
Then the wikipedia page, then a couple of "online" non-pianos, then a company that happens to be called piano.io
Shopping pages are fine, if we'd get links to, say Steinway, Yamaha, and Bosendorfer, or links to Lang Lang's home page, or something that has more to do with _pianos_.
Point is, it is not a trivial task at all to automatically find out the time that corresponds to the intuitive understanding of the "age" of a web page.
Although it's not using the "This Month" dropdown, doing the vanilla search still gets you the "most correct" answer imho.
For those who might be misled, like I used to be: DuckDuckGo is just a proxy for Bing.
> DuckDuckGo's results are a compilation of "over 400" sources, including Yahoo! Search BOSS, Wolfram Alpha, Bing, Yandex, its own Web crawler (the DuckDuckBot) and others.
Could you elaborate on your comment, I am sincerely interested in learning the details.
Every time the topic of search comes up on HN, someone always jumps in and says this.
Then there are a bunch of other people who jump in and say that Duck is much more than that.
So, which is correct?
Interpret it as you wish. To me it sounds like they are using 400 sources and their own crawler for the Instant Answers stuff but get all their "traditional links in the search result" from Verizon(?) and Bing.
You can run an experiment: would you ever have to be persuaded that, for instance, Google is much more than just X? Well, no, because it usually proves itself without someone on the internet having to tell you about it.
Crawling most of the web on a regular basis is incredibly difficult and requires resources that only a company like Google can provide.
1. SEO has totally warped result rankings. Now instead of getting results which naturally match my keywords because of content, I'm presented with almost exclusively commercial websites which are trying to sell me something. Gone are the days where you could search for technical terms and not be bombarded by marketing websites.
2. Google's AI is far too aggressive for technical searching. It is clear that Google is using NLP to parse queries and substitute synonyms based on some sort of BERT-like encoding. The problem is that a given word may have synonyms that are actually orthogonal in meaning space. For example, if I search for trunk, Google may return results for "boot" as in car trunk, instead of anything related to SVN. Contrived example, sure - but here's where the real problem is: Google's AI is regressing to the layman's mean. It is effectively overfitting to Grandma's average search query. Think of it as the endless summer of search...and since there's no way to usefully customize your search now (can't give people too many options or they might get confused!), you're stuck combing through unrelated results and it is increasingly difficult to disambiguate your search query. Remember when advanced search existed and typing in a question to search was terrible practice? That shouldn't have changed - but as more and more non-technical people started searching, Google (rightly, from a marketing perspective) seized the opportunity aggressively.
3. Primarily because of a combination of points 1 and 2 above, and the endless summer of non-technical users, informative websites have all but disappeared in search results, replaced by shitty SEO optimized blog spam and commercial websites which offer high level summaries primarily to generate traffic and sell you shit. Curious about how to repair your own roof in detail? Well don't bother searching "roof repair" (and quotes seem to be broken too btw) because the first two pages will be full of roof repair company websites.
So what is the result? The portal to the greatest asset in the history of the civilization, the internet, has gradually turned into a neutered, commercialized corporate service where users are a product. It's tragic to see all of that empowerment thrown away in the name of profit. As they say, if the user doesn't see it, it isn't there, and for this reason Google is effectively killing the internet.
I haven't even gotten into the demonstrated potential for search curation and autocomplete abuse, where Google becomes an effective, centralized arbiter of truth as the defacto portal to the internet, and how dangerous such a concentrated power over society can be.
Google really was admirable when it wasn't evil - now I'm about convinced that it needs to die.
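The synonym-drift point in (2) above can be illustrated with a toy example. The vectors below are hand-made stand-ins, not real embeddings, but they show how the nearest "synonym" of an ambiguous word can belong to the wrong sense entirely:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm

# Hand-made toy vectors (NOT real embeddings): axis 0 ≈ "cars",
# axis 1 ≈ "version control", axis 2 ≈ noise.
vectors = {
    "trunk":  (0.7, 0.7, 0.0),    # genuinely ambiguous word
    "boot":   (0.9, 0.0, 0.0),    # car sense
    "branch": (0.0, 0.85, 0.1),   # SVN sense
}

query = vectors["trunk"]
neighbors = sorted((w for w in vectors if w != "trunk"),
                   key=lambda w: cosine(query, vectors[w]), reverse=True)
print(neighbors)  # ['boot', 'branch']: the car sense wins even for an SVN query
```

An engine that blindly expands "trunk" with its nearest neighbor would inject car results into a version-control search, which is the regression-to-the-layman's-mean effect the comment describes.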
Second, Google shifted about 7-10 years ago from searching for webpages to searching for answers. This was reflected in how they communicated about search and also in stuff like showing infoboxes and AMP results. I think this was the move into mobile, but it leaves the actual web underserved.
The worst part is they have been removing tons of older content from the web that is still there and is still valuable but has become no longer surfaceable, even with direct quoted phrases.
Drives me mad, Especially when I only want results from a certain year/month.
Now back to the search quality, I wanted to find out when exactly world economic forum 2020 is happening and google won hands down (search term = world economic forum 2020 dates). But for now and for most of my day to day search terms duckduckgo is doing okay. I know this won't last for long as they also are looking at advertising dollars but I hope then someone else will stand up to challenge duckduckgo+google combined.
Try Yandex for an example of a much smaller company doing a fine job at search:
Produces exactly what OP was looking for. QED.
Qwant isn't bad either; don't remember if they piggyback off of other search results:
I also have the experience that Google is much worse than some years ago, and I'm also frustrated by everything, but still, it does the job much better than anyone else, probably -- otherwise someone would have better results overall.
(Also, yes, I use DuckDuckGo and I don't think it's so much worse than Google, which is good, because I used to think no one would ever be able to be as good as Google, but today there are many competitors that come close.)
I never heard a clear explanation as to why, I just imagined that it was some sort of A/B tested paternalism. Maybe most users really want fuzzy searches when using the commands I use for a strict search.
Fantasy-future: Mozilla could "widen out" their library and hire 20,000 librarians to curate the "New Web" in a non-wiki-format. If you paid each librarian about $150K/annum, that's about $0.60/annum from 5B subscribers, just for their salaries for the advertising-free library.
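The salary arithmetic above checks out:

```python
# Back-of-the-envelope check of the librarian figures in the comment.
librarians = 20_000
salary = 150_000               # $/year per librarian
subscribers = 5_000_000_000

cost_per_subscriber = librarians * salary / subscribers
print(cost_per_subscriber)     # 0.6  ($ per subscriber per year)
```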
I guess another committee to paint the whole garden shed is easy, talking about what paint to use is hard.
I suspect Reddit needs to add meta data of the publishing date.
It is complicated in a forum: is it the publish date or the last comment date? But Google is still getting the basics wrong, i.e. every comment is dated a year ago, yet the result still shows as less than a month old in search.
It still doesn't help many time based issues (News always displays new headlines on old articles. So you'll see Iran funeral crowd crush on 'old' news articles in search) but it's a start.
Why hasn't a superior competitor emerged yet?
I get that the web is super broad and indexing the entire thing is an enormous task. But perhaps there's room for more niche search engines (eg. focused on tech) to stab away at this Google search monopoly.
Whether this is Google or Reddit causing this is another issue.
Much like resolving DNS queries, I could then ask every server near me for specific terms. If they serve that content, they'll return a list of links containing those search terms.
We could have different apps with different algorithms to sort the bare results according to the criteria that's most relevant to each individual user.
I'm not saying we regress back to the past... I'm sure there is some hidden underlying directory structure in web search today. What I'm suggesting is making it more accessible to users as a way of navigating the web for good content.
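A DNS-like fan-out could look roughly like this; the "servers" here are mock dictionaries standing in for real hosts, and ranking is deliberately left to the client, as the comment suggests:

```python
def federated_search(servers, terms):
    """Ask each nearby 'server' for links matching all terms and
    merge the raw results; sorting/ranking is left to the client app.
    servers: dict name -> {url: text} (mock stand-ins for real hosts)."""
    terms = [t.lower() for t in terms]
    hits = []
    for name, pages in servers.items():
        for url, text in pages.items():
            if all(t in text.lower() for t in terms):
                hits.append(url)
    return sorted(set(hits))

servers = {
    "node-a": {"https://a.example/phones": "best phone reviews 2020"},
    "node-b": {"https://b.example/pianos": "grand piano restoration"},
}
print(federated_search(servers, ["phone"]))  # ['https://a.example/phones']
```

The hard open problems such a design punts on are the same ones Google solves centrally: who crawls and serves each shard, and how to stop a malicious node from answering "yes" to every query.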
Makes sense that it is so. Clearly there's incentives to show up first on a "where to buy phone" search. The actual answer without vested interests has nobody to pay for it. I bet there's some street in Berlin where there's a number of phone shops, but without coordination (eg a shopping centre) there's no way any of them will tell you that the selection is best if you just show up somewhere on that street.
Also on a deeper note, I fear the internet is being flooded with terrible authority sites. Superficial articles that sound right and have lots of affiliate links. But the incentive is not to be informative and correct, rather to look right to people who don't know better - that's why they're there! - in order to funnel them to certain shops. I can just imagine there's an anti-vaxx site somewhere that purports to have real evidence and sells vaginal anti-cancer eggs.
Similarly, a lot of inbound/offsite SEO tactics are theoretically things that help the user as well. Providing content people want to link to, getting authorities to link to said relevant content, etc are all things a user would appreciate.
Using SEO tactics as an anti signal would boost poorly designed sites, inaccessible sites, etc. What really needs to be done is something that filters out sites creating thin content just for the purposes of getting traffic, and that's harder to filter out.
Turtles all the way down.
And you are just making unjustified bold statements
"Reddit best 2020 phone" works for me much better, though ...
Is it user friendly? No.
Also it's funny how they lie about having about a quadrillion results, and then when you're on page 12, suddenly that's all the 150 results they have, sorry.
I have not hit captchas very often. Even IP bans only last hours. It is annoying, but low risk. I do searching from shell prompt, never browser.
What always amazed me about Google is that they are not willing to let users to skip pages 1-11 and immediately jump to, say, page 12.
Sometimes queries are non-commercial and there are no ads. Still, jumping straight to page 12 can trigger a captcha.
I wrote a script that reverses or randomises the order of Google results, as an experiment.
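A minimal version of such a reordering step might look like this (scraping the results themselves is out of scope here):

```python
import random

def reorder(results, mode="reverse", seed=None):
    """Reverse or shuffle a list of search result URLs; a tiny
    stand-in for the experiment described in the comment."""
    out = list(results)
    if mode == "reverse":
        out.reverse()
    elif mode == "shuffle":
        random.Random(seed).shuffle(out)  # seeded for reproducibility
    return out

print(reorder(["r1", "r2", "r3"]))  # ['r3', 'r2', 'r1']
```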
I was doing some deep searching in very specific date ranges with lots of modifiers and google gave me the "your query reminds us of bots or some shit try again later" and I had to stop because it wouldn't let the queries through anymore. Is that what that was?
I've encountered Google ReCaptcha as well a few times when searching from a logged out browser.
So it's NOT a case study...
My broad take is that previously search worked (Altavista era through early-mid Google) because it referenced organic links put in place by real humans and keywords, plus basic metadata like physical location of servers, freshness of content, frequency of update, metadata behind domains, etc.
Since the mid 1990s that has increasingly been gamed heavily, PageRank style approaches have come and sort of gone, and a vast majority of content accessed by consumers has moved to one of a small number of platforms or walled gardens, often mobile applications. I don't know for sure, but I'd assume with confidence that the majority of result inclusion decisions made by Google are now based on rejection blacklists, 'known good' safe hits and effectively minimizing anomalous results above the fold. Simultaneously, the internet has become an international place and the bar has been raised for new entrants such that an incapacity to return meaningful results in multiple languages bars a search engine from any significant market position. A huge percentage of results are either Wikipedia/reference pages, local news or Q&A sites. Further, huge amounts of what is out there is behind Cloudflare or similar firewalls which will probably frustrate new and emerging spiders.
The existing monopolies, having some established capacity and reputation in this regard, may have become somewhat entrenched and lazy, and do not care enough about improvement. They are literally able to sail happily on market inertia, while generating ridiculous advertising revenues. In China we have Baidu, and in most rest of the world, Google.
Who will bring about a new search engine? Greg Lindahl https://news.ycombinator.com/user?id=greglindahl who formerly made Blekko is apparently working on another one.
I once wrote a small one (~2001) which was based upon the concept of multilingual semantic indices (a sort of non-rigorously obtained language-neutral epistemology was the core database). I still think this would be a meaningful approach to follow, since so much is lost in translation, particularly around current events. One problem with evolving public utilities in this area is that such approaches border on open-source intelligence (OSINT), and most people with linguistic or computational chops in that area leave academia and get eaten up by the military-industrial complex or Google.
Now we have https://commoncrawl.org/the-data/get-started/ which makes reasonable quality sample crawl data super-available. Now "all" we need is people to hack on algorithms and a means to commercialize them as alternatives to the status quo.
Not saying you're wrong about the dates but... I dunno... seems like an odd query. "Phone to buy?" And why not just search Reddit?
This site feels like we're just complaining about nothing these days. (Downvote away!)
He wasn't looking for where to buy a phone in Berlin, and besides, it's an old reddit thread.
The other problem is that websites provide very inconsistent meta-data, and worse, are actively trying to game the system by abusing that metadata. So, things like timestamps are not standardized at all (well, a little bit via things like microformats). So recency of data is important as one of many relevance signals but not necessarily super accurate. And given that it's a relevance signal, you have people doing SEO trying to game that as well.
Anyway, Hacker News could also do with some search improvements to its ranking. It always pulls up some ancient article as the most relevant thing as opposed to the article from last week that I remembered and wanted to find back. I consult people on building search engines with Elasticsearch, so I have some idea what I'm talking about. It seems the ranking is basically "sort by points". Probably not that hard to fix with some additional ranking signals. I just searched for "search" expecting to find this article near the top 5 (because it is recent and has "search" in the title). Nope; not a thing.
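One common way to add such signals in Elasticsearch is a `function_score` query that multiplies text relevance by a recency decay and a points factor. The sketch below just builds the query body; the field names (`created_at`, `points`) and the tuning constants are assumptions about the index mapping, not HN's actual schema:

```python
import json

# Sketch of an Elasticsearch function_score query body mixing text
# relevance with a Gaussian recency decay and a (dampened) points
# signal. Field names and constants are illustrative assumptions.
query = {
    "query": {
        "function_score": {
            "query": {"match": {"title": "search"}},
            "functions": [
                # Results lose half their boost ~30 days out.
                {"gauss": {"created_at": {"origin": "now",
                                          "scale": "30d",
                                          "decay": 0.5}}},
                # log1p dampens runaway scores for 1000-point posts.
                {"field_value_factor": {"field": "points",
                                        "modifier": "log1p",
                                        "factor": 0.1}},
            ],
            "score_mode": "multiply",
            "boost_mode": "multiply",
        }
    }
}
print(json.dumps(query, indent=2)[:60])
```

With something like this, a recent, moderately upvoted article outranks an ancient high-point one for the same text match, which is exactly the behaviour the comment asks for.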