It kills my curiosity and intent with fake knowledge and bad experience. I need something better.
However, it will be interesting to figure out the heuristics that could deliver better-quality search results today. When Google started, it had a breakthrough algorithm: ranking pages based on the number of pages linking to them, which is completely meritocratic as long as people don't game it for higher rankings.
A new breakthrough heuristic today would look totally different, while being just as meritocratic and, ideally, resistant to gaming.
> It kills my curiosity and intent with fake knowledge and bad experience. I need something better.
It's hard for me to take this seriously when Wikipedia exists and almost always ranks very highly in search results for "knowledge topics". Between Wikipedia and the sources cited on it, I find the depth of almost everything worth learning about to be far greater than I can remember in, say, the early 2000s, which seems like the "peak" of Google before SEO became so influential.
In general, I think a lot of people in this thread are looking at the "old internet" through rose-tinted glasses. The only thing that has maybe gotten worse is "commercial" queries, like those for products and services. Everything else is leaps and bounds better.
These days I rarely see a forum result appear unless I know the specific name of the forum to begin with and utilize the search by site domain operator.
Another problem these days, unrelated to search but dooming it in the process, is all these companies naming themselves or their products after irrelevant existing English words rather than making up something unique. It's usually fine for major companies, but I think a lot of smaller companies/products shoot themselves in the foot with this and don't realize it. I was once looking for some good discussion and comparison of a bibliography tool called Papers, and getting anything relevant at all with a name like that was just a pit of suffering.
... still does nothing to answer the point that Web search itself is unambiguously and consistently poorer now than it was 5-10 years ago.
Yes, I find myself relying far more on specific domain searches, either in the Internet sense, or by searching for books / articles on topics rather than Web pages. Because so much more traditionally-published information is online, this actually means the net of online-based search has improved, but not for the most part because of improved Web-oriented resources (Webpages, discussions, etc.), but because old-school media is now Web-accessible.
You search for quality books online, mostly through discussion forums (since search fails here) or by following references in books and articles, and then spend time digesting them.
What’s sad is Google has generally indexed the pages I want, it’s just getting harder to actually find them.
Clearly, they don’t want it available, because the tech docs they host stop at AM3b. I was hoping an X470 (or other flavor) motherboard manufacturer would have something floating around...
I wonder how much of this could be regained by penalizing:
2. The number of ads on the page, or the depth of the ad network
This might start a virtuous circle, but in the end this is just a game of cat-and-mouse, and websites might optimize for this as well.
What we might need to break this is a variety of search engines that use different criteria to rank pages. I suspect it would be pretty hard, if not impossible, to optimize for all of them.
And in any case, frequently change the ranking algorithms to combat over-optimization by the websites (as that's classically done against ossification for protocols, or any overfitting to outside forces in a competitive system).
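As a toy illustration of the ad-penalty idea, here is a sketch of a re-ranking pass that demotes ad-heavy pages. The scoring formula, the `ad_weight` knob, and all of the URLs and numbers are made-up assumptions, not anything a real engine uses:

```python
def rerank(results, ad_weight=0.1):
    """Re-rank (url, relevance, ad_count) tuples, demoting pages
    with many ads. ad_weight is an illustrative tuning knob."""
    def score(result):
        url, relevance, ad_count = result
        # Divide raw relevance by a penalty that grows with ad count
        return relevance / (1.0 + ad_weight * ad_count)
    return sorted(results, key=score, reverse=True)

results = [
    ("blog.example/deep-dive", 0.80, 2),    # solid content, few ads
    ("seo-farm.example/top-10", 0.90, 30),  # keyword-stuffed, ad-heavy
]
reranked = rerank(results)
# The ad-light page now outranks the ad farm despite lower raw relevance
```

Of course, sites would immediately start gaming whatever ad-count signal the crawler measures, so this only shifts the cat-and-mouse game rather than ending it.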
Maybe instead of a problem, there is an opportunity here.
Back before Google ate the intarwebs, there used to be niche search engines. Perhaps that is an idea whose time has come again.
For example, if I want information from a government source, I use a search engine that specializes in crawling only government web sites.
If I want information about Berlin, I use a search engine that only crawls web sites with information about Berlin, or that are located in Berlin.
If I want information about health, I use a search engine that only crawls medical web sites.
Each topic is still a wealth of information, but siloed enough that the amount of data could be manageable for a small or medium-sized company. And the market would keep the niches from getting so small that they become useless: a search engine dedicated to Hello Kitty lanyards isn't going to monetize.
featuring the semantic map of  https://swisscows.ch/
incorporating https://curlie.org/ and Wikipedia and something like Yelp/YellowPages embedded in OpenStreetMap for businesses and points of interest, with a no-frills interface showing the history (via timeslide?) of edits.
Not really. A web directory is a directory of web sites. I can't search a web directory for content within the web sites, which is what a niche search engine would do.
Another potentially useful approach is to construct a graph database of all these sites, with links as edges. If one page gets flagged as junk then you can lower the scores of all other pages within its clique . This could potentially cause a cascade of junk-flagging, cleaning large swathes of these undesirable sites from the index.
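A minimal sketch of that cascade, assuming the link graph is stored as adjacency sets and each page carries a quality score; the penalty factor and threshold are invented tuning knobs:

```python
from collections import deque

def flag_junk(graph, scores, seed, penalty=0.5, junk_threshold=0.6):
    """Flag `seed` as junk, penalize its neighbors, and cascade:
    any page whose score falls below the threshold is flagged too."""
    queue = deque([seed])
    flagged = set()
    while queue:
        page = queue.popleft()
        if page in flagged:
            continue
        flagged.add(page)
        scores[page] = 0.0
        for neighbor in graph[page]:
            if neighbor not in flagged:
                scores[neighbor] *= penalty
                if scores[neighbor] < junk_threshold:
                    queue.append(neighbor)
    return flagged

# Toy graph: a, b, c form a tight junk ring; d is an innocent site
# that happens to be linked from c.
graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
scores = {"a": 1.0, "b": 1.0, "c": 1.0, "d": 1.0}
flagged = flag_junk(graph, scores, "a")
# The cascade wipes out the ring, but also sweeps in the innocent d
```

Note the failure mode in the toy run: with an aggressive threshold the cascade also takes out the legitimate site d, which is exactly why this kind of propagation would need careful tuning or human review of the flagged set.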
I don’t like the whole concept of SEO, and I don’t like the way the web is today, but I think we should stop and think before resorting to “an immoral few are destroying things; we unfuck it and reclaim what we deserve” type simplifications.
So much is obvious. The discussion is about whether there is a less shitty metric.
This is ultimately whack-a-mole. For the past decade or so, point-of-origin based blockers have worked effectively, because that's how advertising networks have operated. If the ad targets start getting unified, we may have to switch to other signatures:
- Again, sizes of images or DOM elements.
- Content matching known hash signatures, or constant across multiple requests to a site (other than known branding elements / graphics).
- "Things that behave like ads behave" as defined by AI encoded into ad blockers.
- CSS / page elements. Perhaps applying whitelist rather than blacklist policies.
- User-defined element subtraction.
There's little in the history of online advertising that suggests users will simply give up.
And that will probably be blocked or severely locked down by your most popular browser, Chrome.
I don't need to give advertisers data myself when someone else I know can. I really doubt it is easy to throw off the Chrome monopoly at this stage. I presume we will see a chilling effect before anything moves, as eventually happened with IE.
I had a site that appeared in DMOZ, and the description was written in such a way that nobody would want to visit it. But it was one of only a few sites on the internet at the time with its information, so it was included.
Create a core protocol at the same level as DNS etc., that web servers can use to offer an index of everything they serve/relay. A multitude of user-side apps may then query that protocol, with each app using different algorithms, heuristics and offering different options.
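To make the idea concrete, a server-side index at its simplest could be a term-to-URL map that the protocol serves and clients query. A toy inverted index in Python (the page data and format are purely illustrative, not a proposed wire format):

```python
def build_index(pages):
    """Build a toy inverted index: term -> set of URLs on this server.
    A real protocol would also need tokenization rules, versioning, etc."""
    index = {}
    for url, text in pages.items():
        for term in set(text.lower().split()):
            index.setdefault(term, set()).add(url)
    return index

pages = {
    "/about": "we build search tools",
    "/blog/ranking": "notes on search ranking heuristics",
}
index = build_index(pages)
# index["search"] -> both pages; index["heuristics"] -> just the blog post
```

Client apps could then merge such per-server indices and apply their own ranking heuristics on top.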
There are several puzzling omissions from Web standards, particularly given that keyword-based search was part of the original CERN WWW discussion:
IF we had a distributable search protocol, index, and infrastructure ... the entire online landscape might look rather different.
Note that you'd likely need some level of client support for this. And the world's leading client developer has a strongly-motivated incentive to NOT provide this functionality integrally.
A distributed self-provided search would also have numerous issues -- false or misleading results (keyword stuffing, etc.) would be harder to vet than the present situation. Which suggests that some form of vetting / verifying provided indices would be required.
Even a provided-index model would still require a reputational (ranking) mechanism. Arguably, Google's biggest innovation wasn't spidering, but ranking. The problem now is that Google's ranking ... both doesn't work, and incentivises behaviours strongly opposed to user interests. Penalising abusive practices has to be built into the system, with those penalties being rapid, effective, and for repeat offenders, highly durable.
The problem of potential for third-party malfeasance -- e.g., engaging in behaviours appearing to favour one site, but performed to harm that site's reputation through black-hat SEO penalties, also has to be considered.
As a user, the one thing I'd most like to be able to do is specify blacklists of sites / domains I never want to have appear in my search results. Without having to log in to a search provider and leave a "personalised" record of what those sites are.
(Some form of truly anonymised aggregation of such blocklists would, of course, be of some use, and facilitating this is an interesting challenge.)
I decided it is time for us to have a bouncer-bots portal (or multiple) - this would help not only with search results, but also could help people when using twitter or similar - good for the decentralized and centralized web.
My initial thinking was these would be 'pull' bots, but I think they would be just as useful, and more used, if they were perhaps active browser extensions..
This way people can choose which type of censoring they want, rather than relying on a few others to choose.
I believe in creating some portals for these, similar to ad-block lists: people can choose to use Pete's TooManyAds bouncer and/or Sam's ItsTooSexyForWork bouncer..
ultimately I think the better bots will have switches where you can turn on and off certain aspects of them and re-search.. or pull latest twitter/mastodon things.
I can think of many types of blockers that people would want, and some that people would want part of - so either varying degrees of blocking sexual things, or varying bots for varying types of things.. maybe some have sliders instead of switches..
make them easy to form and comment on and provide that info to the world.
I'd really like to get this project started, not sure what the tooling should be - and what the backup would be if it started out as a browser extension but then got booted from the chrome store or whatever.
Should this / could this be a good browser extension? What language / skills required for making this? It's on my definite to do future list.
1. Become large.
2. Are shared.
3. And not particularly closely scrutinised by users.
4. Via very highly followed / celebrity accounts.
There are some vaguely similar cases of this occurring on Twitter, though some mechanics differ. Celebs / high-profile users attract a lot of flack, and take to using shared blocklists. Those get shared not only among celeb accounts but their followers, though, because celebs themselves are a major amplifying factor on the platform, being listed effectively means disappearing from the platform. Particularly critical for those who depend on Twitter reach (some artists, small businesses, and others).
Names may be added to lists in error or out of malice.
This blew up summer of 2018 and carried over to other networks.
Some of the mechanics differ, but a similar situation playing out over informally shared Web / search-engine blocklists could have similar effects.
Isn't that pretty much a site map?
Systems such as lunr.js are closer in spirit to a site-oriented search index, though that's not how they're presently positioned; instead they offer JS-based, client-implemented site search for otherwise static websites.
Fail an audit, lose your reputation (ranking).
The basic principle of auditing is to randomly sample results. BlackHat SEO tends to rely on volume in ways that would be very difficult to hide from even modest sampling sizes.
If a good site is on shared hosting will it always be dismissed because of the signal of the other [bad] sites on that same host? (you did say at DNS level, not domain level)
So, back to gopher? That might actually work!
How about we don't start looking at /how/ a site is made when it's already difficult to sort out /what/ it is.
I should know - I blew the whistle on the whole censorship regime and walked 950 pages to the DOJ and media outlets.
--> zachvorhies.com <--
What did I disclose? That Google was using a project called "Machine Learning Fairness" to rerank the entire internet.
Part of this beast has to do with a secret Page Rank score that Google's army of workers assign to many of the web pages on the internet.
If wikipedia contains cherry picked slander against a person, topic or website then the raters are instructed to provide a low page rank score. This isn't some conspiracy but something openly admitted by Google itself:
See section 3.2 for the "Expertise, Authoritativeness and Trustworthiness" score.
Despite the fact that I've had around 50 interviews and countless articles written about my disclosure, my website zachvorhies.com doesn't show up on Google's search index, even when using the exact url as a query! Yet bing and duckduckgo return my URL just fine.
Don't listen to the people who say that it's some emergent behavior from bad SEO. This is deliberate sabotage of Google's own search engine in order to achieve the political agenda of its controllers. The stockholders of Google should band together in a class action lawsuit and sue the C-level executives for negligence.
If you want your internet search to be better then stop using Google search. Other search engines don't have this problem: I'm looking at qwant, swisscows, duckduckgo, bing and others.
And maybe your site doesn't get ranked well because it's directly tied to Project Veritas. I don't like being too political, especially on HN and on an account tied to my real identity, but Project Veritas and its associates exhibit appalling duplicity and misdirection. I would hope that trash like this does get pushed to the bottom.
Of course Google's own bias (and involvement in particular political campaigns) is well known, and opposed to Project Veritas, so it's quite possible that you are right and Google is downranking PV.
Would that be good? Well, that's an opinion that depends mostly on the bias of the commentator.
I doubt this affected search rankings but Project Veritas does have a ton of credibility issues.
If this surprises you, then welcome to the systematic bias of wikipedia.
"Gotchas" and "gaffes" are taken out of context all the time. And when reporters lie to go undercover and obtain a story, they're usually hailed as heroes.
These objections are only used to "discredit" opposing viewpoints. People don't object when sources they agree with use the same tactics.
And to be even more fair, you aren’t talking about “people” here but the OP who can clearly state their own personal preferences, without the need for you to construct a theory of hidden bias.
Most people don’t identify as independents, and, anyhow, studies of voting behavior show that most independents have a clear party leaning between the two major parties and are just as consistently attached to the party they lean toward as people who identify with the party.
So not only is it the case that most people don't identify as independents, most of those who do aren't distinguishable from self-identified partisans when it comes to voting behavior.
That's not “on the contrary”; 38% < 50%; most Americans don't identify as independents.
Heck, even 43% (the number that don't identify as either Republicans or Democrats) is less than 50%; most Americans specifically identify as either Democrats or Republicans.
> but independents are much more important in politics than they are usually given credit for.
On the contrary, independents are given outsize importance by the major media, because they are treated as just as large of a group as they actually are, but are treated as if their voting behavior was much more different from that of self-identified partisans than it actually is.
> most Americans specifically identify as either Democrats or Republicans
Now you just did the same thing I did but in reverse! See my comments about more specific words for plurality such as "most".
> independents are given outsize importance by the major media
I don't think independents are given much importance at all by the major media, but I have increasingly disconnected from that circus too so I might not be a good judge of it.
No, I didn't.
> See my comments about more specific words for plurality such as "most".
“Most” (as an adjective applied to a group) is for a majority, not a plurality, but that's okay, because 57% is an absolute majority, anyway, which is why my reference, which did use “most” for a majority, was not what you did.
Maybe not the best source, but quite a few sites returned similar verbiage about most usually but not always referring to a plurality. I'm open to correction on this point, and am genuinely curious about this meta argument now. Having a hard time finding a statistical dictionary that references the words we are using here (majority, most, etc).
Why the assumption of left or right, when at least in the US most people identify as independents?

> Voters declaring themselves as independent of a major political party rose nationally by 6 percentage points to 28 percent of the U.S. electorate between 2000 and last year's congressional midterm contest in states with party registration, analyst Rhodes Cook wrote in Sabato's Crystal Ball political newsletter at the University of Virginia Center for Politics.

> But as few as 19 percent of Americans identifying as independent truly have no leaning to a party, according to the Pew Research Center.

From "'Difference-maker' independent voters in U.S. presidential election crosshairs".
The gist of it is that for any given party, more people are not members of it (e.g. 70%+ of Americans are not Democrats), so you are not safe in guessing someone's political affiliation. Statistically, you will likely be wrong.
Secondarily, saying that someone who leans to a party is basically in that party is wrong.
For example, I am an independent. I lean Republican at the local level, but in federal elections I lean Democrat. If you polled me, I would say I lean Republican, since that's usually what's on my mind.
Look up the story I'm talking about and you'll see that it was ignored by left-wing sources:
The entire media establishment ignores these tactics when used by other organizations they agree with.
You're free to select the occasional time when a news agency gets it wrong (and apologizes and issues a correction)... but to try to compare that to an intentionally biased organization that always, intentionally gets it wrong (by willfully setting people up, then editing their responses to paint things in a particular light), and never backs down, never apologizes, well... at best, your bias is showing.
And I'd say you are mistaken when you call changing "take that [violence] to the suburbs" into "a call for peace" merely getting it wrong. What CNN did could hardly be anything other than intentional deception from an organization with a strong left-wing bias.
That's quite a claim. If that's true, why aren't they in jail?
I'm sure they have many powerful enemies who would love to see them convicted of these multiple felonies.
O'Keefe pleaded out to a misdemeanor and served probation plus a fine. But the crime he committed was a felony.
> In January 2012, O'Keefe released a video of associates obtaining a number of ballots for the New Hampshire Primary by using the names of recently deceased voters.
That's probably a felony in most parts of the US, although O'Keefe claims it wasn't since he didn't actually vote.
Not among credible sources.
People have a vested interest in destroying the idea that anything can be a non-partisan "fact". Anything can become a smear. Only the most absolutely egregious ones can be reined in by legal action (e.g. Alex Jones libelling the Sandy Hook parents).
(This is not just internet, of course; the British press mendacity towards the Royal family is playing out at the moment.)
Here is someone who believes that a private company's open attempts to rank websites by quality amounts to "seditious behaviour" deserving of criminal prosecution, and the only people willing to pay attention were Project Veritas. Google has plenty of ethics issues, but this guy's claims are absurd.
I just tried it, it's just showing results for "zach vorhies" instead, which it thinks you meant. I just tried a few other random "people's names as URL" websites I could find, sometimes it does this, sometimes it doesn't.
Furthermore, the results that do appear are hardly unsympathetic to you. If google is censoring you/your opinions, they're doing a very poor job of it.
> If wikipedia contains cherry picked slander against a person, topic or website then the raters are instructed to provide a low page rank score
This sounds like a good thing to me. Sites that contain lies, fabrications, and falsehoods should not be as highly ranked as those which do not.
Why should shareholders sue Google for, as far as I can tell from your argument, trying to provide users with a more useful product?
Google does not have the moral authority to censor the internet, and it's absolutely wrong for them to attempt this. Information should be free, and you don't have the right to get in the way of that.
They do run a popular search page, and have to decide what to do with a search like "Is the Earth flat?".
Personally, I would prefer they prominently display a "no". Others would disagree, but a search engine is curation by definition, that's what makes it useful.
Oh hey! I just tried this, and it does! image: https://i.imgur.com/OqqxSq3.png
What you ask for isn't freedom but control over everyone else - there is nothing stopping you from running your own spiders, search engines, and rankings.
The fun thing about facts is that nobody needs to decide whether or not they are true. Perhaps the fact that you can honestly claim to think otherwise means you need to take a step back and examine your reasoning.
This is an example of a "fact" that I'm talking about. It's not a fact, it's an opinion being presented as fact. I guess if you present yourself this way online you have no problem with Google controlling what "facts" are found when you use their search engine.
I guess I'll just have to wait until they start peddling a perspective you disagree with.
> If wikipedia contains cherry picked slander against a person, topic or website
Just remember that this is the comment we're discussing... how does one determine if a statement is slander? Are you telling me Google has teams of investigative journalists following up on each of their search results? Or did someone at Google form an opinion, then decide their opinion is the one that should be presented as "fact" on the most popular search engine in the world?
I am not sure what is happening but I directly searched for your website : zachvorhies.com on Google (in Australia, if that matters), which returned the website as the first result:
Also, is this guy a Q follower or something? The favicon is a Q.
Actually the favicon is a headshot.
And that's also the icon in the page source:
<link rel="shortcut icon" href="favicon.ico" type="image/x-icon" />
I wonder why Google shows a Q.
A placeholder used if the algorithm thinks the favicon is not appropriate for some reason.
Anyhow, I would be interested to know what results you get in the EU.
For what it's worth, I'm in Australia and have the same search results as erklik.
I wouldn't trust a single word that comes out of this guy's mouth.
> Don't listen to the people who say
> this beast has to do with a secret Page Rank score that Google's army of workers
Anyone who tells you not to listen to others intends to tell you gossip about why you shouldn't gather data that conflicts with their own views. They'll SAY all sorts of things to try to make you "see" it their way. Rational people DO things to prove or disprove a given belief. (Just to note: SAYING a bunch of stuff does not equal DOING a bunch of stuff.)
For anyone rational and interested, Google "this video will make you angry" and bump the speed to 75%. The idea is that memes compete for "compute" space, both in people's minds and in the space they occupy on the Internet. Those who get "infected" with a given meme will go to all sorts of lengths to rationalize why that meme is true, even though the meme they're arguing against is just as irrational as their own.
Just searched it and it's literally the first result.
(Were you on mobile and using a smart keyboard?)
Sometimes it's pretty innocuous: a CMS updates every page with a fresh copyright notice at the start of each year. Other times it's less innocuous: the page simply updates the "related links" or sidebar material and refreshes the content.
It is still an unsolved ranking-relevance problem where a student-written, 3-month-old description of how AM modulation works ranks higher than a 12-year-old, professor-written description. There isn't a ranking signal for 'author authority'. I believe it is possible to build such a system, but doing so doesn't align well with the advertising goals of a search engine these days.
Disclaimer: I worked at Blekko.
Is it possible that there is no site providing non-fluffy content for your query? For a lot of niche subjects, there really are very few, if any, substantial sources on that topic.
“Very few if any substantial”
The problem is that google won’t even show me the very few anymore. It’s just fluff upon fluff and depth (or real insight at least) is buried in twitter threads and reddit/hn comments, and github issue discussion.
I fear the seo problem has not only killed knowledge propagation, but also thoroughly disincentivized smart people from even trying. And that makes me sad.
Yeah, if I know where I'm looking (the sites) then google is useful since I can narrow it to that domain. But if I don't know where to look then I'm SOL.
The serendipity of good results on Google.com is no longer there. And given the talent at google you have to wonder why.
And guess what: most users are normal. We here on HN are weird:
1. Something is or provides access to good quality content.
2. Because of this quality, it gets more and more popular.
3. As popularity grows, and commercialization takes over, the incentive becomes to make things "more accessible" or "appealing" to the "average" user. More users is always better right!?
4. This works, and quality plummets.
5. The thing begins to lose popularity. Sometimes it collapses into total unprofitability. Sometimes it remains but the core users that built the quality content move somewhere else, and then that new thing starts to offer tremendous value in comparison to the now low quality thing.
If only there were some kind of analog for effective ways to locate information. Like if everything were written on paper, bound into collections, and then tossed into a large holding room.
I guess it's past the Internet's event horizon now, but crawler-primary searching wasn't the only evolutionary path to search.
Prior to Google (technically: AdWords revenue funding Google) seizing the market, human-curated directories were dominant [1, Virtual Library, 1991] [2, Yahoo Directory, 1994] [3, DMOZ, 1998].
Their weakness was always cost of maintenance (link rot), scaling with exponential web growth, and initial indexing.
Their strength was deep domain expertise.
Google's initial success was fusing crawling (discovery) with PageRank (ranking), where the latter served as an automated "close enough" approximation of human directory building.
Unfortunately, in the decades since, we seem to have forgotten how useful hand-curated directories were, in our haste to build more sophisticated algorithms.
Add to that the fact that the very structure of the web has changed. When PageRank first debuted, people were still manually adding links to their friends' sites and other useful sites on their own pages. Does that sound like the link structure we have on the web now?
Small surprise results are getting worse and worse.
IMHO, we'd get a lot of traction out of creating a symbiotic ecosystem whereby crawlers cooperate with human curators, both of whose enriched output is then fed through machine learning algorithms. Aka a move back to supervised web-search learning, vs. the currently dominant unsupervised.
 https://en.m.wikipedia.org/wiki/World_Wide_Web_Virtual_Libra... , http://vlib.org/
 https://en.m.wikipedia.org/wiki/DMOZ , https://www.dmoz-odp.org/
You've also got the increase in resources needed (you need tons of staff for effective curation), and the issues of potential corruption to deal with (another thing which significantly affected the ODP's usefulness in its later years).
I think a more fundamental problem is a large portion of content production is now either unindexable or difficult to index - Facebook, Instagram, Discord, and YouTube to name a few. Pre-Facebook the bulk of new content was indexable.
YouTube is relatively open, but the content and context of what is being produced is difficult to extract, if only because people talk differently than they write. That doesn't mean, in my opinion, that the quality of a YouTube video is lower than what would have been written in a blog post 15 years ago, but it makes it much more difficult to extract snippets of knowledge.
Ad monetization has created a lot of noise too, but I’m not sure without it, there would be less noise. Rather it’s a profit motive issue. Many, many searches I just go straight to Wikipedia and wouldn’t for a moment consider using Google for.
Frankly I think the discussion here is way better than the pretty mediocre to terrible “case study” that was posted.
From a “knowledge-searching” perspective, at a very rudimentary level, it makes sense to look to sites/pages that are often cited (linked to) by others as better sources to rank higher up in the search results. It’s a similar concept to looking at how often academic papers are cited to judge how “prominent” of a resource they are on a particular topic.
However, as with academia, even though this system could work pretty well for a long time at its best (science has come a long way over hundreds of years of publications), that doesn’t mean it’s perfect. There’s interference that could be done to skew results in one’s favor, there’s funneling money into pseudoscience to turn into citable sources, there’s leveraging connections and credibility for individual gain, - the list goes on.
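The citation analogy above is essentially PageRank. A minimal power-iteration sketch over a toy citation graph (the damping factor 0.85 is the classic choice; the paper names and simplified dangling-node handling are illustrative):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal power-iteration PageRank over page -> outbound links.
    A dangling page spreads its rank evenly across all pages."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outs in links.items():
            if outs:
                share = damping * rank[page] / len(outs)
                for target in outs:
                    new[target] += share
            else:
                for target in pages:
                    new[target] += damping * rank[page] / n
        rank = new
    return rank

links = {
    "paper_a": ["paper_c"],
    "paper_b": ["paper_c"],
    "paper_c": [],  # heavily cited, cites nothing
}
ranks = pagerank(links)
# paper_c, cited by both others, ends up with the highest rank
```

The same gaming pressure applies here: citation rings are the academic analogue of link farms.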
The heuristic itself is not innately the problem. The incentive system that exists for people to use the heuristic in their favor creates the issue. Because even if a new heuristic emerges, as long as the incentive system exists, people will just alter course to try to be a forerunner in the "new" system and grab as big a slice of the pie as they can.
That’s a tough nut for google (or anyone) to crack. As a company, how could they actually curate, maintain, and evaluate the entire internet on a personal level while pursuing profitability? That seems near impossible. Wikipedia does a pretty damn good job at managing their knowledge base as a nonprofit, but even then they are always capped by amount of donations.
It’s hard to keep the “shit-level” low on search results when pretty much anyone, anywhere, anytime could be adding in more information to the corpus and influencing the algorithms to alter the outcomes. It gets to a point where getting what you need is like finding a needle in a haystack that the farmer got paid for putting there.
That's not actually the problem described here. His problem is actually a bit deeper rooted, since he specified the exact parameters of what he wants to see but got terrible results. He specified a search for "site:reddit.com", but the results he got were irrelevant and worse than the results he would have gotten by searching reddit directly.
I don't say that SEO, sites that copy content and only want to genrate clicks and large sites that culminate everything are bad fkr the internet of today, but the level of results we get off of search engines today is with one word abysmal.
> As you can see, I didn’t even bother clicking on all of them now, but I can tell you the first result is deeply irrelevant and the second one leads to a 4 year old thread.
He also wrote
> At this point I visibly checked on reddit if there’ve been posts about buying a phone from the last month and there are.
Duckduckgo even recognized the date as 4 years old, and reddit doesn't hide the age of posts. There are newer, more fitting posts, but they aren't shown. And again, a quote:
> Why are the results reported as recent when they are from years ago, I don’t know - those are archived post so no changes have been made.
So your argument (although it really is a problem) in this case is a red herring. The problem lies deeper, since google seems to be unable to do something as simple as extracting the right date, and ddg ignores it. Also, since all the results are years old, it adds to the confusion as to why the results don't match the query. (He also wrote that the better matches were indeed indexed, but not found.)
The entirety of the problem is that the date query is not working, because of SEO for freshness. You also got this other thing wrong: "That's not actually the problem described here". That is the problem here. The page shows up as recently updated because of SEO.
The date in a search engine is the date the page was last judged to be updated, not one of the many dates that may be present on the page. When was the last reddit design update? Do you think that didn't cause the pages to change? Wake up.
Wrong. Internal reddit search has bad results too, and why even offer a filter by last month then?
The first, and the most important perhaps was page load speed. We adopted a slightly more complicated pipeline on the server side, reduced the amount of JS required by the page, and made page loading faster. That improved both the ranking and actual usability.
The second was that SEO people told us our homepage contained too many graphics and too little text, so search engines didn't quite extract as much content from our pages. We responded by adding more text in addition to the fancy eye-catching graphics. That improved both the ranking and actual accessibility of the site.
I really wish everyone would qualify which kind they mean, and not just black-hat seo / whitehat - there are many types of SEO, often with different intentions.
I understand there has been a lot of google koolaid (and others') about how seo is evil and poisoning the web, etc...
But now - or has it been a couple years now? - google had a video come out saying an SEO firm is okay if they tell you it takes 6 months... they have upgraded their pagespeed tool, which helps with seo, and were quite public about how they wanted ssl/https on everything and that it would help with google seo..
so there are different levels of SEO. someone mentioned an seo plugin I was using on a site as being a negative indicator, and I chuckled - the only thing that plugin does is try to fix some of the inherent obvious screwups of wordpress out of the box... things like missing meta descriptions, which google flags on webmaster tools as multiple identical meta descriptions.. it also tries to alleviate duplicate content penalties by no-indexing archives and categories or whatever.
So there is SEO that is trying to work with google, and then there is SEO where someone goes out and puts comments on 10,000 web sites only for the reason of ranking higher.. to me that is kind of grey hat if it was a few comments, but shady if it's hundreds and especially if it's automated..
but real blackhat stuff - hacking other web sites and adding links.. or having a site that is selling breast enlarging pills and trying to get people who type in a keyword for britney spears.. that is trying to fool people.
I have built sites with good info and had to do things to make them prettier for the ranking bot, but they are giving the surfer what they are looking for when they type 'whatever phrase'... I have also made web sites better when trying to also get them to show up in top results.
So it's not always seo=bad, sometimes seo=good for the algorithm and the users.
and sometimes it's madness - like extra fluff to keep a visitor on page longer to keep google happy like recipes - haha - many different flavors of it - and different intentions.
So for example duckduckgo is still trying to use various providers to emulate essentially "early google without the modern google privacy violations", but when I start to think about many of the most successful company-netizens, one thing that stands out is that early-day exclusivity has major appeal.
So I imagine a search engine that is only crawling the most useful websites and blogs and works on a whitelist basis. Instead of trying to order search results to push bad results down, just don't include them at all or give them a chance to taint the results. It would have more overhead, and would take a certain amount of time to make sure it was catching non-major websites that are still full of good info ... but once that was done it would probably be one of the best search panes in existence. I have also thought to streamline this, and I know it's cliche, but surely there could be some ml analysis applied to figuring out which sites are SEO gaming or click-baiting regurgitators and weed them out.
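The "don't include them at all" idea amounts to hard filtering rather than re-ranking. A minimal sketch of that filter step (the three whitelisted domains here are placeholders; a real engine would load thousands of vetted domains):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// allowed is a hypothetical hand-curated whitelist of trusted sites.
var allowed = map[string]bool{
	"en.wikipedia.org":  true,
	"stackoverflow.com": true,
	"lwn.net":           true,
}

// keepWhitelisted drops any result whose host is not on (or under) a
// whitelisted domain, instead of merely down-ranking it.
func keepWhitelisted(results []string) []string {
	var kept []string
	for _, r := range results {
		u, err := url.Parse(r)
		if err != nil {
			continue
		}
		host := u.Hostname()
		for d := range allowed {
			// exact match, or a subdomain of a whitelisted domain
			if host == d || strings.HasSuffix(host, "."+d) {
				kept = append(kept, r)
				break
			}
		}
	}
	return kept
}

func main() {
	fmt.Println(keepWhitelisted([]string{
		"https://en.wikipedia.org/wiki/PageRank",
		"https://spammy-seo-farm.example/best-10-things",
	}))
}
```

The trade-off the comment mentions shows up directly: recall for anything outside the whitelist is zero, which is exactly the objection raised in the reply below it.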
Just something I've been mulling over for a while now.
And what if what you're searching exists only in a non good website? Isn't it better to show a result from a non good website than showing nothing?
Can you give some examples of queries/topics? Not that I disagree, I often have the same problem, but have found ways to mitigate.
Many websites do have "hacked" (blackhat/shady) SEO, but these websites do not last long, and are entirely wiped out (see: de-ranked) every major algorithm update.
The major players you see on the top rankings today do utilize some blackhat SEO, but it's not at a level that significantly impacts their rankings. Blackhat SEO is inherently dangerous, because Google's algorithm will penalize you at best when it finds out -- and it always does -- and at worst completely unlist your domain from search results, giving it a scarlet letter until it cools off.
However, the bulk of all major websites primarily utilize whitehat SEO, i.e "non-hacked," i.e "Google-approved" SEO to maintain their rankings. They have to, else their entire brand and business would collapse, either from being out-ranked or by being blacklisted for shady practices.
Additionally, Google's algorithm hasn't changed much at all from pagerank, in the grand scheme of things. If you can read between their lines, the biggest SEO factor is: how many backlinks from reputable domains do you have pointing at your website? Everything else, including blackhat SEO, amounts to small optimizations for breaking ties. Sort of like PED usage in competitive sports; when you're at the elite level, every little bit extra can make a difference.
Google's algorithm works for its intended purposes, which is to serve pages that will benefit the highest amount of people searching for a specific term. If you are more than 1 SD from the "norm" searching for a specific term, it will likely not return a page that suits you best.
Google's search engine is based on virality and pre-approval. "Is this page ranked highly by other highly ranked pages, and does this page serve the most amount of people?" It is not based on accuracy or informational integrity -- as many would believe after the latest Medic update -- but simply "does this conform to normal human biases the most?"
If you have a problem with Google's results, then you need to point the finger at yourself or at Google. SEO experts, website operators, etc. are all playing a game that's set on Google's terms. They would not serve such shit content if Google did not: allow it, encourage it, and greatly reward it.
Google will never change the algorithm to suit outliers, the return profile is too poor. So, the next person to point a finger at is you: the user. Let me reiterate, Google's search engine is not designed for you; it is designed for the masses. So there is no logical reason for you to continue using it the way you do.
If you wish to find "deep enough" sources, that task is on you, because it cannot be readily or easily monetized; thus, the task will not be fulfilled for free by any business. So, you must look at where "deep enough" sources lie: books, journals, and experts.
Books are available from libraries, and a large assortment of them are cataloged online for free at Library Genesis. For any topic you can think of, there is likely to be a book that goes into excruciating detail that satisfies your thirst for "deep enough."
Journals, similarly. Library Genesis or any other online publisher, e.g NIH, will do.
Experts are even better. You can pick their brains and get even more leads to go down. Simply, find an author on the subject -- Google makes this very easy -- and contact them.
I'm out of steam, but I really felt the need to debunk this myth that Google is a poor, abused victim, and not an uncaring tyrant that approves of the status quo.
Does it? So for any product search, thrown-together comparison sites without actual substance but lots of affiliate links are really among the best results? Or maybe they are the most profitable result, and thus the one most able to invest in optimizing for ranking? Similarly, do we really expect results on (to a human) clearly hacked domains to be the best for anything, but Google will still put them in the top 20 for some queries? "Normal people want this crap" is a questionable starting point in many cases.
There is no "best result."
Any pages falling under "thrown-together comparison sites without actual substance but lots of affiliate links" are temporary inefficiencies that get removed after each major update.
Will more pop up? Yes, and they will take advantage of any inefficiency or edge cases in the algorithm to boost their rankings to #1.
Will they stay there for more than a few months? No. They will be squashed out, and legitimate players will over time win out.
This is the dichotomy between "churn and burn" businesses and "long term" businesses. You will make a very lucrative, and quick, buck going full blackhat, but your business won't last and you will consistently need to adapt to each successive algo update. Meanwhile long-standing "legit" businesses will only need to maintain market dominance -- something much easier to do than break into the market from ground zero, which churn and burners will have to do in perpetuity until they burn out themselves.
If you want to test this, go and find 10 websites you think are shady, but have top 5 rankings for a certain search phrase. Mark down the sites, keyword, and exact pages linked. Now, wait a few months. Search again using that exact phrase. More likely than not, i.e more than 5 out of 10, will no longer be in the top 5 for their respective phrases, and a couple domains will have been shuttered. I should note that "not deep info" is not "shady," because the results are for the average person. Ex. WebMD is not deep, but it's not shady either.
I implore people to try and get a site ranked with blackhat tricks and lots of starting capital, and see just how hard it is to keep it ranked consistently using said tricks. It's easy to speculate and make logical statements, but they don't hold much weight without first-hand experience and observation.
This isn't true at all in my experience. As a quick test I tried searching for "best cordless iron", on the first page there is an article from 2018 that leads to a very broken page with filler content and affiliate links.  There are a couple of other articles with basically the exact same content rewritten in various ways also on the first page.
A quick SERP history check confirms that this page has returned in the top 10 results for various keywords since late 2018.
>It's easy to speculate and make logical statements, but they don't hold much weight without first-hand experience and observation.
This statement is a bit ironic given that it took me 1 keyword and 5 seconds of digging to find this one example.
Edit: thinking more about this post's specific issue, I'm not sure what to do if all the crawlers fail. Could always hook into the search APIs for github, reddit, SO, wiki, etc. Full shotgun approach.
> The real problem with search engines is the fact that so many websites have hacked SEO that there is no meritocracy left.
I intend to announce the alpha test of my search engine here on HN.
My search engine is immune to all SEO.
> I can possibly not find anything deep enough about any topic by searching on
In simple terms my search engine gives users content with the meaning they want and in particular stands to be very good, by far the best, at delivering content with "deep" meaning.
> I need something better.
> However, it will be interesting to figure the heuristics to deliver better quality search results today.
Uh, sorry, it's not fair to say that my search engine is based on "heuristics". I'm betting on my search engine being successful and would have no confidence in heuristics.
Instead of heuristics I took some new approaches:
(1) I get some crucial, powerful new data.
(2) I manipulate the data to get the desired results, i.e., the meaning.
(3) The search engine likely has by far the best protections of user privacy. E.g., search results are the same for any two users doing the same query at essentially the same time and, thus, in particular, independent of any user.
(4) The search engine is fully intended to be safe for work, families, and children.
For those data manipulations, I regarded the challenge as a math problem and took a math approach complete with theorems and proofs. The theorems and proofs are from some advanced, not widely known, pure math with some original applied math I derived. Basically the manipulations are as specified in math theorems with proofs.
> A new breakthrough heuristic today will look something totally different, just as meritocratic and possibly resistant to gaming.
My search engine is "something totally different".
My search engine is my startup. I'm a sole, solo founder and have done all the work. In particular I designed and wrote the code: It's 100,000 lines of typing using Microsoft's .NET.
The typing was without an IDE (integrated development environment) and, instead, was just into my favorite general purpose text editor.
It's my first Web site: I got a good start on Microsoft's .NET and ASP.NET (for the Web pages) from Jim Buyens, Web Database Development, Step by Step, .NET Edition, ISBN 0-7356-1637-X, Microsoft Press.
The code seems to run as intended. The code is not supposed to be just a "minimum viable product" but is intended for first production to peak usage of about 100 users a second; after that I'll have to do some extensions for more capacity. I wrote no prototype code. The code needs no refactoring and has no technical debt.
While users won't be aware of anything mathematical, I regard the effort as a math project. The crucial part is the core math that lets me give the results. I believe that that math will be difficult to duplicate or equal. After the math and the code for the math, the rest has been routine.
Ah, venture capital and YC were not interested in it! So I'm like the story "The Little Red Hen" that found a grain of wheat, couldn't get any help, then alone grew that grain into a successful bakery. But I'm able to fund the work just from my own funds.
The project does seem to respond to your concerns. I hope you and others like it.
How should I announce the alpha test here on HN?
More details are in my now old HN post at
For a short answer, SEO has to do with keywords. My startup has nothing to do with keywords or even the English language at all. In particular, I'm not parsing the English language or any natural language; I'm not using natural language understanding techniques. In particular, my Web pages are so just dirt simple to use (user interface and user experience) that a child of 8 or so who knows no English should be able to learn to use the site in about 15 minutes of experimenting and about three minutes of watching someone use the site, e.g., via a YouTube video clip of screen captures. E.g., lots of kids of about that age get good with some video games without much or any use of English.
E.g., how are you and your spouse going to use keywords to look for an art print to hang on the wall over your living room?
Keyword search essentially assumes (1) you know what you want, (2) know that it exists, and (3) have keywords that accurately characterize it. That's the case, and commonly works great, for a lot of search, enough for a Google, Bing, and more in, IIRC, Russia and China. Also it long worked for the subject index of an old library card catalog.
But as in the post I replied to, it doesn't work very well when trying to go "deep".
Really, what people want is content with some meaning they have at least roughly in mind, e.g., a print that fits their artistic taste, sense of style, etc., for over the living room sofa. Well, it turns out there's some advanced pure math, not widely known, and still less widely really understood, for that.
Yes I encountered a LOT of obstacles since I wrote that post. The work is just fine; the obstacles were elsewhere. E.g., most recently I moved. But I'm getting the obstacles out of the way and getting back to the real work.
Yes, but capturing meaning mathematically is somewhat of an unsolved problem in mathematics, linguistics, and semiotics. Your post claims you have some mathematics but (obviously, as it's a secret) doesn't explain what.
SEO currently relies on keywords, but SEO as a practice is humans learning. There is a feedback loop between "write page", "user types string into search engine" and "page appears at certain rank in search listing". Humans are going to iteratively mutate their content and see where it appears in the listing. That will produce a set of techniques that are observed to increase the ranking.
I've been successful via my search math. For your claim, as far as I know, you are correct, but actually that does not make my search engine and its math impossible.
> That will produce a set of techniques that are observed to increase the ranking.
Ranking? I can't resist, to borrow from one of the most famous scenes in all of movies: "Ranking? What ranking? We don't need no stink'n ranking".
Nowhere in my search engine is anything like a ranking.
People pay tens or even hundreds of thousands of dollars to move their result from #2 to #1 in the list of Google results.
My user interface is very different from Google's, so different there's no real issue of #1 or #2.
Actually that #1 or #2, etc. is apparently such a big issue for Google, SEO, etc. that it suggests a weakness in Google, one that my work avoids.
You will see when you play a few minutes with my site after I announce the alpha test.
Google often works well; when Google works well, my site is not better. But the post I responded to mentions some ways Google doesn't work well, and for those and some others my site stands to work much better. I'm not really in direct competition with Google.
His post was interesting to me since it mentioned some of the problems that I saw and that got me to work on my startup. And my post might have been interesting to him since it confirms that (i) someone else also sees the same problems and (ii) has a solution on the way.
For explaining my work fully, maybe even going open source, lots of people would say that I shouldn't do that. Indeed, that anyone would do a startup in Internet search seems just wack-o since they would be competing with Google and Bing, some of the most valuable efforts on the planet.
So that my efforts are not just wack-o, (i) I'm going for a part of search, e.g., solving the problems of rahulchhabra07, not currently solved well; (ii) my work does not really replace Google or Bing when they work well, and they do, what, some billions of times a second or some such?; (iii) my user interface is so different from that of Google and Bing that at least first cut my work would be like combining a racoon and a beaver into a racobeaver or a beavorac; and (iv) at least to get started I need the protection of trade secret internals.
Or, uh, although I only just now thought of this, maybe Google would like my work because it might provide some evidence that they are not all of search and don't really have a monopoly, an issue in recent news.
The only way SEO could be impossible is if there was no capacity to change search ranking no matter what - which would be both useless and impossible.
graycat is an elder here. He has an interesting history in technology, I dare say more interesting than most of us ever will. He deserves better than your reply (as would any other HN user). Check out these posts:
Google has become a not very useful search engine - certainly not the first place I go when looking for anything except purchases. They've broken their "core business".
- Confluence: Native search is horrible IME
- Microsoft Help (Applications): .chm files. Need I say more?
- Microsoft Task Bar: Native search okay and then horrible beyond a few key words and then ... BING :-(
- Microsoft File Search: Even with full disk indexing (I turned it on) it still takes 15-20 minutes to find all jpegs with an SSD. What's going on there?
- Adobe PDFs: Readers all versions. What? You mean you want to search for TWO words. Sacrilege. Don't do it.
Seriously though, with all the interview code tests (bubble sort, quick sort, bloom filters, etc.), why can't companies or even websites get this right?
And I agree with other commenters: as far as Google, Bing, DDG, or other search sites, it's been going downhill, but the speed of uselessness is picking up.
The other nagging problem (at least for me) is that explicit searches which used to yield more relevant results are now front loaded with garbage. If I'm looking for a datasheet on an STM (STMicroelectronics) chip and I start a search with STM, as of today STM is no longer relevant (it is, meaning it shows up after a few pages). But wow, it seems like the SEOs are winning - but companies that use this technique won't get my business.
"T": LaTeXIT.app (an app I have used fewer than a dozen times in two years)
"E": Electrum.app (how on earth??)
"G": telemetry.app (an app which cannot even be run)
"RAM" : Telegram
Similar experience searching for most apps, files, and words. It's horrendous.
MacOS Mojave 10.14.6 on a MacBook Pro (Retina, 15-inch, Mid 2015)
Adobe wants to have you by your balls the moment you install their installer :-) I keep a separate computer for Adobe stuff just for that reason. Actually to run some MS junk too.
>Seriously though with all the interview code tests bubble sort, quick sort, bloom filters, etc. Why can't companies or even websites get this right?
I have see some of the stinkiest stuff created by people who will appear smartest in any test these companies can throw at them. Some people are always gambling/gaming and winging it. They leave a trail...unfortunately.
I use this software utility called Search Everywhere; it's surprisingly good, fast, and fairly accurate most of the time :)
Does turning it off speed it up? I think disk indexing (the way Windows does it) is a remnant from HDD times, and might make things worse when used together with a modern SSD.
> Adobe PDFs: Readers all versions. What? You mean you want to search for TWO words. Sacrilege. Don't do it.
If you're just viewing and searching PDFs (and don't have to fill out PDF forms on a regular basis), check out SumatraPDF. Fastest PDF reader on Windows I've come across so far.
What is going on there? I'm working on a file system indexer in golang and to walk and parse extension to a mimetype runs at several thousand images a second, over NFS. Windows is full of lots of headscratchers "why is this taking so long?"
I have no answers for Microsoft File Search; it never returns any results for me. I wonder if they even tested it.
Pasting stack traces and error messages. Needle in a haystack phrases from an article or book. None of it works anymore.
Does this mean they are ripe for disruption or has search gotten harder?
I cannot fathom the number of times I've pasted an error message enclosed by quotes and got garbage results, and then an hour of troubleshooting and searching later I come across a Github/bugtracker issue, which was nowhere in the search results, where the exact error message appears verbatim.
The garbage results are generally completely unrelated stuff (a lot of Windows forum posts) or pages where a few of the words, or similar words, appear. Despite the search query being a fixed string, not only does Google fail to find a verbatim instance of it, but instead of admitting this, they return nonsense results.
> Needle in a haystack phrases from an article or book.
I can confirm this part as well, searching for a very specific phrase will generally find anything but the article in question, despite it being in the search index.
Zero Recall Search.
Did you put the error message in quotes? I've never had this problem.
A mock-google that excludes quora and optionally targets stackoverflow/github sounds useful.
Maybe also some quality decline in their gradual shift to less hand weighted attributes and more ML.
I further suppose a lot of that is that The Masses(tm) don't use Google like I do. I put in key words for something I'm looking for. I suspect that The Masses(tm) type in vague questions full of typos that search engines have to try to parse into a meaningful search query. If you try to change your search engine to caters to The Masses(tm), then you're necessarily going to annoy the people that knew what they were doing, since the things that they knew how to do don't work like they used to (see also: Google removing the + and - operators).
For those "needle in a haystack" type queries, instead of pages that include both $keyword1 and $keyword2, I often get a mix of the top results for each keyword. The problem is compounded by news sites that include links to other recent stories in their sidebars. So I might find articles about $keyword1 that just happen to have completely unrelated but recent articles about $keyword2 in the sidebar.
It also appears that Google and DDG both often ignore "advanced" options like putting exact phrase searches in quotation marks, using a - sign to exclude keywords, etc.
None of this seems to have cut down on SEO spam results either, especially once you get past the first page or two of results.
I suspect it all comes down to trying to handle the most common types of queries. Indeed, if I'm searching for something uncomplicated, like the name of the CEO of a company or something like that, the results come out just fine. Longtail searches probably aren't much of a priority, especially when there's not much competition.
So... is there an internal service at Google that works correctly but they're hiding from the world?
It might be useful for Google to make different search engines for different types of people. The behaviors of people are probably multi-modal, rather than normally distributed along some continuum where you should just assume the most common behavior and preferences.
It would even be easier to target ads...
Or maybe this doesn't exist and spam is too hard.
You just described how YouTube's search has been working lately. When you type in a somewhat obscure keyword - or any keyword, really - the search results include not only the videos that match, but videos related to your search. And searches related to your keywords. Sometimes it even shows you a part of the "for you" section that belongs to the home page! The search results are so cluttered now.
I got down to one with "qwerqnalkwea"
"AEWRLKJAFsdalkjas" returns nothing, but youtube helpfully replaces that search with the likewise nonsensical "AEWR LKJAsdf lkj as" which is just full of content.
Yeeaap, sometime in gradeschool - I think somewhere around 5th grade, age 11 or so, which would be around 1999 - we had a section on computers, where we'd learn the basics about how to use them. One of the topics I remember was "how to do web searches", where a friend was surprised at how easily I found what I was looking for - the other kids had to be trained to use keywords instead of asking it questions.
1. My work got some attention at CES so I tried to find articles about it. Filtering for items that were from the last X days and searching for a product name found pages and pages of plagiarized content from our help center. Loading any one of the pages showed an OS appropriate fake “your system is compromised! Install this update” box.
What’s the game here? Is someone trying to suppress our legit pages, or piggybacking on the content, or is that just what happens now?
2. I was looking for some OpenCV stuff and found a blog walking through a tutorial - except my spidey sense kept going off because the write-up simply didn’t make sense with the code. Looking a bit further I found that some guy’s really well-written blog had been completely plagiarized and posted on some “code academy tutorial” sort of site - with no attribution. What have we come to?
Someone mentioned that the sites that have the answer are typically buried in the results. That’s because search engines tend to favor big brands and authoritative sites. And those sites oftentimes don’t have the answer to the search query.
Google’s results have gotten worse and worse over the years.
Was it the Panda update, or that one plus the one after? It took out so much of the web and replaced it with "better netizens" who weren't doing this bad thing or that bad thing.
Several problems with that: 1 - they took out a lot of good sites. Many good sites did things to get ranked and did things to be better once they got traffic.
The overbroad ban hammer took many down - and many people that likely paid an seo firm not knowing that seo firms were bad in google's eyes (at the time) - so lots of mom and pops and larger businesses got smacked down and put out of the internet business - just like how many blogs have shut down.
Of course local results taking up a lot of search space and the instant answers (50% of searches never get a click cuz google gives them the answer right on the results page, often stolen from a site) are compounding this.
They tried having the disavow tool to make amends - but the average small business doesn't know about these things, and getting help on the webmaster forum is a joke if you are tech inclined, imagine what an experience it is for small business owners.
I miss the days of Matt Cutts warning people "get your Press Releases taken down or nofollowed or it's gonna crush you soon" - problem is most of the people who were profiting from no-longer-allowed seo techniques were not reading Matt's words.
I also appreciated his saying 'tell your users to bookmark you, they may not find you in google results soon' - yeah, at least we were warned about it.
The web has not been the same since those updates, and it's gotten worse since. This does help adwords sell and the big companies that can afford them though.
In these ways google has been kind of like the walmart of the internet, coming in, taking out small businesses, taking what works from one place and making it cheap at their place.
I'd much rather have the results of pre-penguin and let the surfers decide by choosing to remain on a site that may be good that also had press releases and blog links... rather than losing all the sites that had links on blogs. I am betting most of the users out there would prefer the results of days past as well.
When I switched, about a year and a half ago, I felt like I was switching to a lesser quality search engine (it was an ethical choice and done because I can), that, however, gradually and constantly got better, whereas Google went the opposite path.
Nowadays I only really use Google to leech bandwidth off their maps services. Despite there being a very good alternative available, OpenStreetMap, they unfortunately appear to have limited bandwidth at their disposal (or at least way less than Google)... A pity, because their maps are so awesome; the bicycle map layer with elevation lines is any boy scout's wet dream. But yeah, to find the next hairdresser, Google'll do.
Speaking of bandwidth and OSM reminds me, is there an "SETI-but-for-bandwidth-not-CPU-cycles" kind of thing one could help out with? Like a torrent for map data?
EDIT: Maybe their bandwidth problems are also more the result of a different philosophy about these things. OSM is likely "Download your own offline copy, save everybody's bandwidth and resources" (highly recommended for smartphones, especially in bandwidth-poor Germany), whereas Google is "I don't care about bandwidth, your data is worth it".
OSM used to have tiles@home, a distributed map rendering stack, but that shut down in 2012. There is currently no OSM torrent distribution system, but I'd like to set that up.
But instead Google wanted to make things less strict, less semantic, harder to search, and easier to author whatever the hell you wanted. I'm sure it has nothing to do with making it difficult for other entrants to find their way into search space or take away ad-viewing eyeballs. It was all about making HTML easy and forgiving.
It's a good thing they like other machine-friendly semantic formats like RSS and Atom...
"Human friendly authorship" was on the other end of the axis from "easy for machines to consume". I can't believe we trusted the search monopoly to choose the winner of that race.
I think in this case the semantic web would not work, unless there was some way to weed out spam. There are currently multiple competing microdata formats out there that enable you to specify any kind of metadata, but they still won't help if spammers fill those too.
Maybe some sort of webring of trust where trusted people can endorse other sites and the chain breaks if somebody is found endorsing crap? (as in, you lose trust and everybody under you too)
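A minimal sketch of what that endorsement chain could look like (the site names and the single-root model here are made up for illustration): a site is trusted only if it is reachable from a trusted root through endorsements, and banning an endorser cuts off everything below it.

```python
# Sketch of a "webring of trust": sites are trusted if reachable from a
# root through endorsement edges; banning a site also drops its subtree,
# since nothing below it remains reachable through trusted endorsers.

def trusted_sites(endorsements, roots, banned):
    """endorsements: dict mapping endorser -> list of endorsed sites."""
    trusted = set()
    stack = [r for r in roots if r not in banned]
    while stack:
        site = stack.pop()
        if site in trusted:
            continue
        trusted.add(site)
        for endorsed in endorsements.get(site, []):
            if endorsed not in banned:
                stack.append(endorsed)
    return trusted

web = {"root": ["alice"], "alice": ["bob", "spammer"], "spammer": ["crap"]}
print(trusted_sites(web, roots=["root"], banned={"spammer"}))
# "spammer" is banned, so "crap" (endorsed only by it) drops out too
```

The interesting design question is what happens when a site is reachable through two chains, only one of which is tainted; in this sketch it survives, which may or may not be what you want.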
That's not so hard. It's one of the first problems Google solved.
PageRank, web of trust, pubkey signing articles... I'd much rather tackle this problem in isolation than the search problem we have now.
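For context, the core PageRank computation is genuinely simple to sketch. Here's a toy version (the damping factor is the commonly cited 0.85; the graph and iteration count are illustrative, not anything like production scale):

```python
# Minimal PageRank sketch: a page's rank is the sum of rank "shares"
# flowing in from pages that link to it, plus a small uniform base.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page in pages:
            targets = links.get(page, [])
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
# "c" ranks highest: three pages link to it, including the well-ranked "a"
```

Which is exactly why the trust graph alone isn't the hard part; the hard part is that spammers learned to manufacture the links.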
The trust graph is different from the core problem of extracting meaning from documents. Semantic tags make it easy to derive this from structure, which is a hard problem we're currently trying to use ML and NLP to solve.
HTML has a lot of structure already (for example all levels of heading are easy to pick out, lists are easy to pick out), and Google does encourage use of semantic tags (for example for review scores, or author details, or hotel details). For most searches I don't think the problem lies with being able to read meaning - the problem is you can't trust the page author to tell you what the page is about, or link to the right pages, because spammers lie. Semantic tags don't help with that at all and it's a hard problem to differentiate spam and good content for a given reader - the reader might not even know the difference.
What prevents spammers from signing articles? How do you implement this without driving authors to throw their hands in the air and give up?
But that's hierarchical in a very un-web-y way... Hm.
And that has worked... quite fine. I have no objections (maybe they're a bit too liberal with the new TLDs).
Most of the stuff that makes the hierarchies seem bad are actually faults of for-profit organizations (or other unsuited people/entities) being at the top, and not just that someone is at the top per se. In fact, in my experience, and contrary to popular expectation, when a hierarchy works well, an outsider shouldn't actually be able to immediately recognize it as such.
Can you explain in technical details what you think was lost by Google launching a browser or what properties were unique to XHTML?
Everything you listed above is possible with HTML5 (see e.g. schema.org) and has been for many years so I think it would be better to look at the failure to have market incentives which support that outcome.
You don't think we'd have rich tooling to support it and make it easy to author?
Once people are using it with success, others will follow.
(Of course that won't ever happen, but that's what would be needed.)
EDIT: I don't know why this is being downvoted. This is a genuine question to understand whether the problem is the size of the index or the fuzzy matching that search engines do.
- fixing search (it has become more and more broken since 2009, possibly before. Today it works more or less like their competitors worked before: a random mix of results containing some of my keywords.)
- fixing ads (Instagram should have way less data on me and yet manages to present me with ads that I sometimes click instead of ads that are so insulting I go right ahead and enable the ad blocker I had forgotten.)
- saving Reader
tbh, it's one of those "Are you sure you're not an idiot?" replies.
They not only disregard quotes but also their own verbatim setting.
- Establish the behaviour as documented.
- In representing and demonstrating this to others.
It's not that I doubt your word, but I'd like to see a developed and credible case made. Frankly, that behaviour drives me to utter frustration and distraction. It's also a large part of the reason I no longer use, nor trust, Google Web Search as my principal online search tool.
That said, I might have something on an old blog somewhere. I'll see if I can find it before work starts...
Edit: found it here http://techinorg.blogspot.com/2013/03/what-is-going-on-with-... . It is from 2013 and had probably been going on for a while already at that point.
Edit2: For those who are still relying on Google, here's a nice hack I discovered that I haven't seen mentioned by anyone else:
Sometimes you might feel that your search experience is even worse than usual. In those cases, try reporting anything in the search results and then retry the same search 30 minutes later.
Chances are it will now magically work.
It took quite a while for me to realize this and I think in the beginning I might not have realized how fast it worked.
It seemed totally unrealistic, however, that a fix would have been created and a new version deployed in such a short time, so my best explanation is that they are A/B testing some really dumb changes and then pulling whoever complains out of the test group.
Thinking about it this might also be a crazy explanation for why search is so bad today compared to ten years ago:
There's no feedback whatsoever so most sane users probably give up talking to the wall after one or two attempts. This leaves them with the impression that everyone is happy, so they just continue on the path back to becoming the search engines they replaced.
I'm getting both mixed experiences and references myself looking into this. Which is if anything more frustrating than knowing unambiguously that quoting doesn't work.
I've run across numerous A/B tests across various Google properties. Or to use the technical term: "user gaslighting".
Search disrupted catalogs. What will disrupt search?
Not joking: I have a feeling subject-specific topics will be further distributed based on expertise and trust.
I think those are called books. ;-)
1. I don’t like the UI anymore. I preferred the condensed view, with more information and less whitespace.
2. Popping up some kind of menu when you return from a search results page shifts down the rest of the items resulting in me clicking search links I am not interested in.
3. It tries to be smarter than me, and fails at understanding what I am searching for. And by "understanding" I basically mean honoring what I typed and not replacing it with other words.
I try to use DDG more often but Google gives me the best results most of the time if I put in more time.
This functionality literally never helped me during search. Not once.
with the url buried somewhere in the GET params.
Google is not a search company, they are an advertising company. The more searches you make, the more revenue they make. Their goal is to quickly and often get you to search things. As long as you keep using their platform, the more you search the better.
Historically, at least, they sold subscriptions.
HSBC demanded The Telegraph pull negative stories or they would pull their advertising.
All newspapers are full of stories about property "investment" and have a separate segment once a week for paid advertising of property.
Prior to that there were papers which did indeed make their money from subscriptions. But their content was different as well: explicitly ideological and argumentative. The NYT or Wapo idea of neutral journalism was a later development.
Paid search engine that ranks sites based on how often users click results from that site (and didn't bounce, of course). The fact that it's paid prevents Sybil attacks (or at least turns a Sybil attack into a roundabout way of buying ads).
Of course, at this point, you are now the product even though you paid. But it's a tactic that worked for WoW for ages.
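A toy version of that ranking signal might look like the following (the site names, the 10-second bounce threshold, and the penalty weight are all invented for illustration): clicks earn a site credit, but quick bounces back to the results page count against it.

```python
# Toy click-based ranking: a clicked result earns a point if the user
# stays, and loses a fraction of a point if they bounce straight back.
from collections import defaultdict

BOUNCE_SECONDS = 10  # assumed threshold: shorter visits count as bounces

def rank_sites(click_log):
    """click_log: list of (site, dwell_seconds) pairs from search sessions."""
    score = defaultdict(float)
    for site, dwell in click_log:
        score[site] += 1.0 if dwell >= BOUNCE_SECONDS else -0.25
    return sorted(score, key=score.get, reverse=True)

log = [
    ("good-docs.example", 120),
    ("good-docs.example", 90),
    ("seo-spam.example", 3),   # user bounced straight back
    ("seo-spam.example", 2),
    ("okay-blog.example", 45),
]
print(rank_sites(log))
# ['good-docs.example', 'okay-blog.example', 'seo-spam.example']
```

The obvious weakness is that dwell time and clicks can themselves be botted, which is where the paid-account requirement is supposed to raise the cost of gaming it.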
So having a paid search engine does not fully solve the SEO problem. Having no ads does not take out the SEO problem of boosting pages to the top.
Getting this hypothetical site to get users is the real problem. Same thing with getting users to a Facebook alternative.
There were search engines around but when Google came out with superior search results everyone switched and those search engines quickly vanished. There are search engines in direct competition with Google today. If Google does not provide the best service there's an extremely low switching cost. Bing, Baidu, DuckDuckGo et al would be happy to take your traffic.