Every Google result now looks like an ad (twitter.com/craigmod)
3592 points by cmod on Jan 21, 2020 | 972 comments



I've been in this A/B test for a couple of months now, so I've had time to adjust, and I still hate it. I've just become so used to seeing the complete URL in green. The complete URL! If you hover over the results, you'll see that they like to take bits like numeric components or the query string out.

This is part of Google's attempt to de-prioritise the URL. Their destructive AMP service confusingly shows you Google's domain instead of the website's — and as they can't fix that without losing out on tracking, they're trying to change what the URL means.

Thanks for ruining the Web, Google.


google amp is a plague on the internet and should have been shut down by regulators. it is definitely anti-competitive. i am sick and tired of our government being completely subservient to the big tech companies when it comes to regulations and anti-competitive behavior.

google amp overrides link-handling behavior on mobile, even on android: instead of web links opening in the respective selected default app, the amp link opens within google search or the browser. there is no way to turn off this feature in any browser, chrome or not.


Unless I misunderstood, that is something you disable in each (Google) app that has that "feature".

In Gmail, for instance: Settings → General settings → "Open links in Gmail. Turn on for 'faster' browsing."

Default is on. Tip: use Firefox Focus as the default browser to open all links. If a link warrants a further look, copy the URL to your main browser.

The above has nothing to do with amp.

But not to detract from the point: AMP truly is a plague.


Thanks for the tip. Most of the people here would fix that somehow, e.g. by tweaking browser settings or finding and following comments like yours. But why should someone who doesn't deal with tech have to think about that? And will they, should they?

I find their UX approach to SEO more concerning than, say, scoring. For example, AMP pages will receive the same score as equally performant non-AMP content. However, the non-AMP pages won't benefit from the carousel view, instant page loads (prefetch), etc...

Small changes like this, applied at a huge scale to a user base that just can't afford to constantly fight hostile UX, are damaging, regardless of whether they come from Google, an airline site, your mailbox switching to "Promotions" at random intervals or Twitter not allowing you to permanently change the feed order.

Internet feels more hostile than it used to. I don't mind the trolls (I can always block them) but I do mind that we depend on services that deploy hostile UX practices.


> Internet feels more hostile than it used to.

It's not Internet, it's software.

You can really feel it when switching from Android to LineageOS, or from Windows to Linux.

It's the difference between software where "you are the product" and software that has been created to serve the user, to be the best tool it can be.

I'm not talking about UX in the sense that open source software can sometimes feel clunky and unfinished; I'm talking about the breath of freedom you feel when you switch, and suddenly ... you notice that the current you have been forced to swim against just isn't there any more.

I helped a friend get LineageOS on his bootlooped phone, and he's so happy with it. All these features in the settings are actually helpful (whereas in stock Android you always have to second-guess; you know that feeling when a setting's description is so vague and uninformative that you think "this is probably going to spy on me", and you can't tell whether the switch is the positive or the negative option, because it can go either way). All the very basic features that would otherwise require an ad-filled app are, of course, already there. You get privacy controls that aren't mystifying or reset on updates.

Similarly, I've now been on Instagram for about half a year. Always avoided FB, but there's some art I wanted to put out there, so I gave it a try. And oh my god, Instagram is probably the shittiest software ever? Or at least the most user hostile. It's a social thing where you can share images, comments and messages. Except it isn't, it just appears like one. Literally every interaction feels forced, to show me more ads, make me spend more time in this app (??) and mainly constantly throwing up barriers against interacting with any kind of software or data outside its ecosystem. You can't upload from your laptop, many links are not clickable, many text fields are not copyable, most features in the browser are locked unless you make it pretend to be an iPhone (!!), you can't post or reply to comments. The chat is such basic functionality that it seems hard to fuck up but they did. I should stop, but I can go on ...

This is that feeling of constantly swimming against a current, and somehow we're tricked into believing that is how it's supposed to be, because ... I don't know, some people told me when I complained, that most people don't use instagram in the way I do. Well I guess, but that doesn't seem to be most people's choice.


I think you're definitely misunderstanding what Instagram is. I'm not saying it isn't user hostile and full of ads, because it definitely is, but you're complaining about all the wrong things. All of those things you're complaining about are conscious design choices and have been from the very beginning.

Instagram is not some generic photo sharing software that tries to be open and modular and integrate with everything and proliferate arbitrary visual media with a rich tightly coupled messaging system. It was never that and won't ever be that.

Instagram from the start was just about taking low res pictures with your phone camera, putting a filter on them so they look less terrible and then sharing them with your friends. Every other feature was begrudgingly added to increase accessibility and hence DAUs. So you were never supposed to be able to interact with anything outside of the app. You can't cross-post your posts to facebook or twitter, you can't post from your computer, you can't spam links in your photo descriptions. All of this is literally the point of Instagram. It was like this before it and slightly after it got acquired and people loved it, not in spite of the restrictions, but because of them.

Then Zuck crammed it full of ads and a terrible glommed on messaging system and ruined it.


And Twitter was a service to share SMS text messages of max 140 characters.

I understand that what you describe is what Instagram was, but given what they have both become, they actually have little functional difference.


Amen.

You haven't tried snapchat btw. It keeps getting worse. I'm sure many of us in here are feeling like dinosaurs, unable to fit in.


I did try snapchat but I couldn't stand it, I only had one contact (and "Team Snapchat"! yay) and ultimately got rid of it.


I installed an SSH client on my phone and my life instantly got better.


In fairness, this applies equally to all things one can install SSH clients on.


Totally agree with the Instagram part. Worst software ever.


WhatsApp is second after that :)


This is not AMP. It’s a feature originally called Chrome Custom Tabs [1], and it’s a faster way to open a webpage than a regular “open URL” intent (that would have to launch the entire browser app) or a custom WebView (which sucks for other reasons). The thing is, this feature takes into account your default browser, so if you set Focus as your system default browser, the custom tabs will launch in Focus, with all the privacy protections it offers.

[1] https://developer.chrome.com/multidevice/android/customtabs


if you use google search as your default search, then there is no way i know of to disable amp, no matter which browser you use, even if you request the browser to handle opening the link. google search passes the amp link to the browser.


> Unless I misunderstood, that is something you disable in each (google)app that has that "feature".

> In Gmail for instance: settings: General settings:

> Open links in gmail. Turn on for "faster" browsing.

I changed this setting, and when I now click a link in Gmail it indeed opens via Firefox, but it still gets redirected through a Google URL before getting to the real page. I want to disable that behaviour most of all.


Then use a generic email client instead of Gmail mobile app or web email interface.


The Fastmail app is very good, for example.


> Tip: Use firefox focus instead as the default browser to open all links.

Unfortunately iOS doesn't support this nearly as well as Android does. On Android you can enforce this pretty much everywhere whereas on iOS a bunch of apps still open links in Safari no matter what you do.


It's been enough to make me change my iPhone's search default to DDG. I've been using it for about a month and I'm surprisingly happy with the results so far.


Me too, but lately I am experimenting with qwant.com, also love the results.


Semi-off-topic, but I recently found out that there is an even worse version of AMP, called Google Web Light. Running Firefox on an ARM device (e.g. Raspberry Pi), the results speak for themselves [1].

Notably, there is absolutely no way for the end user to disable it, short of spoofing your user agent.

[1] https://googleweblight.com/i?u=https://news.ycombinator.com/...


Truthful name, though? Google, we blight.


The Opera Mini MITM has returned!


> there is no way to turn off this feature on any browser, chrome or not.

Here is a way for Firefox: https://addons.mozilla.org/en-US/firefox/addon/amp2html/


>Thanks for ruining the Web, Google.

I posted the solution below, which I found a few days ago. The script works with Greasemonkey and Tampermonkey; it will give you results similar to the ones before the change. If you also use uMatrix, there will be no ads.

https://greasyfork.org/en/scripts/395257-better-google


> I posted the solution below,

Sorry to be "that guy" but for me the solution is to use DuckDuckGo.

It's not perfect, but neither is trying to play arms race with Google's JavaScript.


DuckDuckGo is great, but Searx[1] is even better. It's a metasearch engine that aggregates several search providers; you can self-host it or access it via one of several public instances.

I run an instance on my local network, but you can run it for free on Heroku, AWS or GCP or even on a Raspberry Pi. There are several Docker images you can choose from.

[1] https://github.com/asciimoo/searx


Ironic that I once switched from Dogpile to Google. We have come full circle.

Edit: Looks like Dogpile still works, and flags the google ads properly. I'm switching back!


Extra irony: I went to try Dogpile, but got redirected to some anti-something site (seems to not like my VPN IP) that wants me to do a Google Recaptcha.


You got a laugh out of me :)


Anyone know what DDG's !bang operator is for Searx?



Yeah, it’s pretty straightforward


I use DDG as much as I can, but when I need to find something very specific, or find a solution to a bug I've encountered, it takes 2x as long on DDG vs Google.


I find that Google ignores my attempts to be more specific more and more often.

Just as a contrived example, a search like "R33 RB25DET Motorsport ECU" typically got me specific links relating to the exact car, engine and topic. But for the past 5 years or so it seems like it is weighting the more common and more general terms, often excluding the target topic altogether and just giving general motorsport results. Perhaps it's a consequence of every SEO specialist and their dogbot hammering general search terms and gumming up the machine with cruft.


That's something that really drives me nuts about it lately. If I just wanted general results, I'd do the lazy thing and not put in the additional terms. Worse, I've been finding that it still ignores some of the terms even when I put them in quotes.

The results just seem really bad lately, especially for anything technical. Just now I'd been looking for "html5 canvas torture test". The top result is a video called "Torture Testing my Nut Sac!!" and then some videos about testing Glock guns. Umm, no, that's not even close to what I'd wanted. (Bing does way better here and DDG is somewhere in between.)

I'm not sure what Google engineers are using to find technical information on the web these days, but I can't imagine it's the public Google search.


I personally find it most helpful to just ask Google a question like I'm a complete idiot. I got the idea from the meme about "that guy wot painted them melty clocks", which works extremely well in my opinion. Looking in my history "how to multilingual in java please", worked fine. You get a laugh out of it, 90% of the time Google figures out what you need, and the rest of the time it's going to show whatever the hell it wants to, any way.

At least both of us are pretending the same level of intelligence, which takes away a lot of the irritation.

I also tried talking to DDG like a duck, but it doesn't give as good results as talking to Google like an idiot.


animate on scroll "responsive"

Top result doesn't have the word responsive, very handy.


Just to be clear, I still use !g pretty frequently.

But psychologically it's rather different. If you find the Google search page to be visually aversive then your goal is to get in and get out quickly. That's a bit harder if Google is your default search.


On DDG, type:

    !s your search terms


In case anyone else is wondering, this sends your query to startpage, which appears to query google for you anonymously.


wtf... that's cool... I'll start trying that


I agree. The moment I saw this was the moment I configured my browser to use DuckDuckGo instead of Google. Good riddance.


If your browser is Chrome it’s still working for Google, not for you


Firefox has been my main browser for about 15 years. Never saw the advantage in Chrome, aside from using it once in a while when a webpage didn't work correctly on Firefox. These latest years we are seeing very aggressive behaviour from Chrome (reducing effectiveness of ad blocking, for example) and that just reinforces my decision.


I think I agree. Google still seems like my first choice mentally, but I've used DDG a lot more as my first search lately (past month or so).

It is getting more and more annoying having to find workarounds for everything though. It is more annoying because I liked Google's layout, default colors and so on. It was cleaner.

Now not only is their search quality getting worse (I got what I asked for on Bing of all places), their presentation has managed to degrade too.

If this is the result of testing, I would be curious to see the data that informed that decision.


I wish Google would enable infinite scroll like DDG. Internet search results are one of the few cases where it is more user friendly than pagination.

Just being able to keep scrolling and seeing more results is very helpful. I guess it increases the value of the "first page" for Google.


I usually start with DDG (it's my default search engine on all the devices I use) and then quickly move on to !s (for Startpage) since DDG still is lacking in the quality of results for many searches. It's a bit rare that I go to !g (Google search).


> "the solution"

That's a solution, but not a particularly great one since it requires too much from the user to see mass adoption.


That still requires somewhat less from the user than my solution: a filtering proxy. On the other hand, the latter enables a far more customised browsing experience and one that isn't restricted to a single browser on a single computer.

...which brings me to another great point this illustrates: if you want to customise your experience, if you want to be able to control how you see the Web, then you need to make an effort, and the amount you exert is essentially proportional to how much you can change.

Yet the majority of users have shown that they are willing to take whatever Google throws at them with little opposition. I find that a little sad and ironic in this era of "everyone can code" propaganda (I've even seen Google advertise something like that on its homepage); or perhaps the latter is just an attempt to increase the population of intelligent yet docile and obedient corporate drones... I know developers --- web developers --- who really hated the changes yet made no effort to fix them themselves, despite almost certainly having the skills to.


What kind of filtering proxy are you using?


Proxomitron.


Good call, I think you hit it right on the money. We have all seen these problems (basically Google attempting to MITM the entire Internet) getting worse for years along with all of the very real malvertising threats. We have partnered with the Privoxy project to do exactly what you are doing with Proxomitron, but as a system that will scale to enterprise environments. We have it running in corporate and educational environments already w/out SSL inspection. What we will be able to do with SSL inspection will be a game changer. Check out the virtual appliance! Any feedback or ideas are appreciated. https://www.nextvectorsecurity.com


wow does that ... still exist? is it maintained? (I thought the creator stopped 15+ years ago?) does the user interface still look weird and green? :)


The creator died 16 years ago... but the (rather small) community has made a lot of patches and continues to work on filters. Given that it's basically the equivalent of running all the sites you visit through sed, with a syntax that's more suited for filtering HTML than plaintext, the strength lies in its flexibility and generality.

Yes, the UI is still skinnable, and the default skin is rather... psychedelic.


Ohhhh, right! I remember when that happened, forgot ... Amazing that the software is still in use. And that it's still useful given the temporary nature of these filters. Its syntax was pretty neat; writing your own cosmetic filters was pretty easy. And I suppose that the community wrote some code to auto convert public block lists maybe?

There's something to be said about the adblocker being a filtering proxy, it can really get anything before it hits the browser.

Do you know how it compares to Privoxy nowadays? Way back then it was the open source but harder to configure alternative, that didn't quite work as well as Proxomitron. But maybe Privoxy continued development and got better, I don't know what direction that project took.

Oh and I personally always really liked the default skin :D


Haha I just took a look, expecting a cool Github project page.

Nope - it's green, and hasn't been updated since June... of 2003. I have no idea how it is able to be effective against the modern web, considering in 2003 the biggest issue was annoying pop-up Flash ads which no longer exist. Maybe there are updated plugins or something.


I'm confused. Isn't AMP mostly an issue on mobile devices? Can you use this on a phone?


If using mobile Firefox (not FF Preview or FF Focus), then Greasemonkey is available as an add-on.


This is available only on Android though. On iOS, Firefox Focus (or any Firefox or other browser) is tied to Apple's restrictions. So there's no scope for browser extensions as we generally think of it.


And Mozilla is about to roll out Firefox Preview soon with all add-ons but uBlock Origin disabled until they can fix up and test the add-on ecosystem on the new browser. Unfortunately there are more add-ons for privacy/security/ethics than uBlock (Decentraleyes, tracking token strippers, et al.), including add-ons that redirect from AMP to the original source.


Thank you. Solved it for us, at least on desktop.


I can't mentally parse the results anywhere near as fast as I used to be able to. It's horrible.


Mission accomplished?


Depends on what Google’s mission is. If it’s to show you ads, it doesn’t matter how long you’re on their page as the destination will (almost certainly) have more. If it’s to help you, it’s not successful.


>mission

Making a profit, obviously. Making you less efficient at distinguishing ads from results increases the time you look at ads and the chances of you interacting with them.


A few weeks ago, I changed my browsing and searching habits purely to get out of that A/B test bucket. Now it looks like we're all in it.


I am very curious how you achieved that!


Now that is an admission.


Google hates strong SEO sites, because they won't make them any money. So that's a clever way of pushing them further down. I wonder when all results on the first page will be ads only.


Google decides the layout. You can have the 'strongest SEO' in the world and Google still decides whether they put 1 ad or 9 in front of the result.

Strength of SEO is irrelevant to the ads. The only thing Google hates is when sites manipulate themselves to rank higher and offer a worse user experience.


> Strength of SEO is irrelevant to the ads.

It wouldn't be very surprising if Google varies the number of ads in a search results page based on the search term. For sites that have strong SEO for all of their key search terms that would be indistinguishable from Google placing more ads in pages where that site ranks highly.


My understanding is it's very linked to 1) Profitability: search terms around things like lawyers and credit cards, where you'll almost always see 4 ads. 2) Genuine relevance: Google knows that for certain searches you're not likely looking to buy something and, to keep credibility, doesn't show ads.

Occasionally you can find pockets of less competitive searches that 1) allow ads, 2) relate to your product via the algorithm even if they don't to a human brain, and 3) align with your desired audience, and these can give great returns.


Do they really? Wouldn't a "strong SEO" site be ideal for their spiders?


I guess sites with real, useful content (e.g. Wikipedia) don't need strong SEO since they have a ton of back-links from other sites that validate their high ranking, so "strong SEO" is really about making a less useful site look more useful, which makes sense for them to hate.

SEO really translates to "How to fool Google into boosting your ranking artificially".


In my experience, Wikipedia has been relegated lower and lower in search results.

So much so that I now specifically use their search tool rather than go through google just in case some interesting thing pops up.


Having to search for “xxxxxxx wiki” more and more now. No I don’t want Healthline and Medicinenet links above the Wikipedia entry, thank you Google.

I wish there were a button by each search result that said "banish this domain to oblivion, I never want to see it again."

You could improve search really fast that way if you still cared about things like that.


There used to be such a feature on the results page. I just went looking for it and only got 'Cached' and 'Similar' when I clicked the little drop-down arrow. Nice feature that appears to have been removed. How did removing that feature benefit users?


It used to be an extension (an official one from Google, Personal Blocklist). It was never part of the vanilla search results page itself.


No, it used to be part of the official results before the personal blocklist extension ever existed. Then some features were removed from the search results and then partially reimplemented in that extension.


Personal Blocklist by Google has been forked and reimplemented by a number of people:

https://chrome.google.com/webstore/detail/personal-blocklist...


Yes, a ban button has been on my search wish list since Google existed!


I do the same thing when looking for product reviews or useful discussion- generally put "xxxx forum" or "xxxx reddit" etc.


If you know which site you want to search, and its search feature is as decent as wikipedia’s, I suggest adding it as a search keyword. Saves me some time to type e.g. ”wk turtle” in the address bar instead of going through the front page or lazily searching via some third party search engine.


try <foo> !w on DuckDuckGo


Or add "w" keyword for https://en.m.wikipedia.org/wiki/Special:Search?search=%s&go=... bookmark on Android Firefox and then search with "w searchstring" from its address bar.

It's silly that it's necessary to create separate bookmark for this, though. Native search engines, surprisingly, don't support keywords there.


Why silly? It is in fact a bookmark - just a parametrized one. :)


They do have native search engines, with a neat option to add them from a site's search field. Why not add keywords too, as they did on desktop?

I feel silly adding bookmarks for things I already have in the search engine list.


I use duckduckgo, and for the most part, you know what you are searching for, so the !tags are really good. !w search term just takes you right to wikipedia. When I really have no idea what or where I'm looking for something, I still find myself looking on google a bit, but for the most part, !youtube, !arch, !git, !stack get me exactly what I want about 99 percent of the time.

Check it out, because it sounds like it might start to match your workflow: https://duckduckgo.com/bang?q=


For YouTube you can use !yt, much shorter.


Is there a shorter domain name for ddg?


There is a https://ddg.gg which redirects to the main domain, so I'm not sure if it's what you're looking for.


https://duck.com should redirect you.


Duck.com?


Which is complete fucking bullshit. It's driving me mental that when I search for something, Wikipedia usually isn't on the front page. It's almost always the best result for most things, it should be on top.


Shouldn't you then just go to Wikipedia and search there? You know, to stop the "F* bullshit" and "save your mental state"?


Wikipedia's search is totally inferior to google. It requires correct spelling within one or two edit distances and the SERP is far less informative. This is a common enough action that those seconds add up. If ddg wants to be competitive they need to fix this.


Wait, what?

Google used to put the Wiki article right at the top of the results list. It virtually never does that anymore. This is what's bullshit.

The point of a good search engine is that it is supposed to conglomerate good results, relevant results - let's say I'm looking up 'Phillip J. Fry' from 'Futurama', but I still want wiki information. Wikipedia won't even spellcheck for you if you don't know how to spell something correctly, like a city name.

If I use a search engine, I'll get this Wikipedia result: https://en.wikipedia.org/wiki/Philip_J._Fry

But I will also get the far more informative Futurama-wiki result: https://futurama.fandom.com/wiki/Philip_J._Fry

Wikipedia is not a search engine. Although, at this point, Google is barely one, so plastered with sponsored results it can be hard to find the result you're looking for, and with this change, I've finally made the long-needed jump to DuckDuckGo.

Yes, it's time to 'stop the fucking bullshit', and save all our mental states - searching Wiki isn't going to solve that - but not using Google can help. ;)

Comparing a search engine to wikipedia search is like comparing a search engine to a local file search.

If you're looking for a specific driver on your computer that you know the exact spelling and version number of, a local file search will help you find that. A search engine will return many results with download links as well as potentially other drivers, or other versions of drivers for your product - and will generally forgive you if you misspell something.


You are perhaps joking, but that has been my tack for a while now: having specific sites I use to search through. I used to web search, then pick from the offerings presented, not caring what site it was so long as it had the information I needed. But now I really value a good website that respects UX and has good, honest content with low commercial influence.


I would love to be able to overrule Google and always sort Wikipedia as the first result. Maybe this can be done with a browser extension.


Wikipedia seeming to lose relevance is like my old high school teachers finally getting their way over a decade later...

Do you find their search tool more effective than appending "wikipedia" to your Google search?


I think it's Google's doing to deprioritize them; on other search engines it still shows up first. DDG gives it special treatment by highlighting its summary, which is often what I'm looking for.


At this point I have moved to another search tool as my baseline.

I also use hoogle a lot for work so I'm used to switching search tool.


I noticed that as well over the years. Also, one thing that really drives me crazy is that Google is trying to steer me into using the german wikipedia, even though I am already explicitly searching for the english article name. I really prefer reading the english version for techie topics, no matter if there might be an article in my native language. This is the sort of "smart" behaviour that really feels dumb.


I remember a thread where everyone complained about too many Wikipedia results popping up. Now we complain the other way.


I think the idea is that a strong SEO site doesn't have to pay to be seen through searches.


There will still be 2-4+ ads above the fold of the real "good SEO" results anyway.


If you have good SEO and drift to the top of queries that actually are relevant for your site then you don't need to buy as many Google ads.


The way Google defends its income stream against that is simple: They allow your competitors to buy ads on your own names and trademarks, then you're forced to do so as well because otherwise your organic link is below the ads. It used to be that the organic ranking still mattered, since only dumb users would click the ads. But now that they're unshaded and look 99% identical, only super nerds bypass ads. Meaning your #1 organic result is basically only good for bragging rights and nothing more if there's anyone willing to pay even a small amount to jump up above you.


Wouldn’t having strong SEO incentive competition to buy ads? Of course it also incentives them to work on SEO, but the only way to ‘get above’ a top ranking site would be to buy an ad, no?


Increasing costs for your competition is a net win for you.


And Google


Depends on the industry, but if you can profit at lower prices you can force your competition to spend less on advertising. That’s bad for Google.


That makes sense. I meant increasing the ad buy spend of a firm's competitors helps Google


> I've been in this A/B test for a couple of months now, so I've had time to adjust, and I still hate it

Me too. It just looks ugly.


I think half of it is the ugly font they are now using. Ugly is the perfect way to describe it.

Google is really not good at creating beautiful products...


My understanding is that they are optimizing for revenue per visitor per month; they are not trying to make the most visually appealing product.


Clearly. It looks like dogshit....


That's a scary optimization to make if it starts costing you visitors.


They'll make it up in volume /s


How is hijacking the domain with Google for AMP not anti-competitive? I'm surprised a class action lawsuit hasn't erupted because of that.


It is not a requirement for AMP. CDNs now let you roll your own domains on the AMP standard: https://blog.cloudflare.com/announcing-amp-real-url/

Bing also uses the AMP standard: https://blogs.bing.com/Webmaster-Blog/September-2018/Introdu...


> It is not a requirement for AMP. CDNs now let you roll your own domains on the AMP standard

All these certificates do is make it so Google's browser (and only Google's browser) will mask the fact you're on Google's domains if you sign the file a certain way.

If anything, this shows more anti-competitive practices -- they're adding features into their browser that specifically benefit features of their search engine.


That's not true. CDNs also use their own non-Google domains and infrastructure for AMP hosting:

https://amp.cloudflare.com/


Effectively 0 AMP sites are using anything other than Google's CDN.



Yes, sites just host the original copy submitted to Google. You can see all the resources are loaded from https://cdn.ampproject.org

If you visit the page from search results (which is the only place it would be linked) then it would never leave Google's domain.

Here's the actual URL used from search results: https://amp-businessinsider-com.cdn.ampproject.org/v/s/amp.b...


But as long as it's possible it doesn't qualify as lock-in.


You don't need lock-in to be anti-competitive. The requirement of extra work to implement AMP to get that higher search results page placement is the issue.


At what point does pushing for new technologies as a private entity become anti-competitive rather than moving technology forward?

If the criteria is just "needs extra work" then unfortunately almost nothing can change and we're all going to live with the existing technology. Change inherently has friction and requires "extra work" with the hope that's an investment which provides returns long term.

In other words, say you are a large Internet company that is trying to improve web page loading times. You profile why most web pages are slow and identify issues. You publicly report on those issues and develop guidelines and criteria. Nobody bothers because "extra work". You develop new technology that directly addresses those issues, this technology works within the existing environment but it requires both client and server support to be most effective. Do you think anyone cares? No, because of "extra work". That's why there needs to be incentives. Now you have a "penalty" for not doing that "extra work". You can file it under "it's anti-competitive" (maybe it is) but if you do the "extra work" then suddenly the anti-competitive part works for you, not against you. IMO that's why it's not anti-competitive.

Other examples: why do you think there are so many people that complained when the iPhone was released without Flash support? "extra work". Similarly when it removed the audio jack. Change is friction and friction is extra work. But most of the time that's not anti-competitive...


Let me explain based on my 15 years of adtech experience:

HTML is already fast (see HN for an example). HTML is already universal across devices and browsers. HTML is already published and easily cached for billions of pieces of content.

AMP is a fork of HTML that only targets mobile browsers specifically linking from Google search results. It's useless on its own, but AMP is required to get higher placement on search results pages, so publishers are effectively forced to spend technical resources to output an entirely new format just to maintain that ranking.

If Google wanted faster pages then it can do what it always does and incentivize that behavior by ranking results based on loading speed. These signals are already collected and available in your Google webmaster console. There's nothing new to build, just tweak ranking calculation based on existing data. Sites would get faster overnight, and they would be faster for every single user because HTML is universal.

Do you know why they didn't do that? Because it's the ads and tracking that's slow, not the HTML. Google's Doubleclick and Google Analytics are the biggest adserver and tracking systems used on the web. This entire AMP project is created to circumvent their own slow systems. It creates more work for publishers while increasing data collection by running a "free" CDN that never leaves a Google-owned domain and thereby always supports first-party cookies. It's a perfect solution to protect against anti-tracking browsers and why Chrome now will also block 3rd-party cookies, because it won't affect AMP hosted on Google domains.


This makes sense. With all that and browser fingerprinting and accounts and "other" mechanisms, do they even need cookies anymore?


First party storage won't be affected without some major AI tech in browsers so cookies are still the best deterministic connection, especially since most people are logged into a Google service already (gmail, chrome, android, youtube, etc).

Probabilistic techniques are used for anonymous users or environments like iOS Safari that are very strict.


So the user sees your URL, you're getting the revenue from the ads that are shown, sharing will share your URL, your statistics work flawlessly.

In other words: if it behaves exactly as a page hosted on your site (just faster), why do you care?

I'm getting the impression that HN users care a whole lot about seeing the request in the nginx log they are tailing.


Well, as a user, I care about not announcing loudly to Google every single step I take on the web.


Then why are you searching on Google? That's where you would see an AMP page served from a Google AMP cache. If you searched on Bing, you would get AMP pages served from a Bing AMP cache instead.


In the past year, I've seen amp pages increasingly often linked from all sorts of places (reddit, FB, here, etc) besides Google's search results.


AMP pages hosted by the publisher, Google's AMP cache, Bing's AMP cache, or some other company's AMP cache? GGP was complaining about sending any information to Google. Only one of those options does so.


I am obviously not.


It's not always faster. There are plenty of performance and usability issues with AMP pages, not to mention all the extra development effort needed to maintain a different version of the site just for a few mobile browsers.


It's anticompetitive af.


The content is still served from their CDN regardless of the domain. There is no way to serve AMP sites from your own servers and appear in the search carousel among other AMP articles.

Google is strong-arming the entire web to switch to AMP in order to increase their control over the distribution of content, and to be in a better position for tracking users.

The fact that Microsoft and Cloudflare have joined the party does not change the fact that you're about to lose control over your own content if this is not stopped.


That's not true. CDNs host AMP sites on their own domains and infrastructure, independent of Google:

https://amp.cloudflare.com/


By "their CDN" I meant Google, Cloudflare and Microsoft. Can we set up our own CDN to serve our own content from our own servers and receive the AMP badge in search results?

Please disclose your affiliation to Google either in your bio or in comments, and don't post the same comment in multiple places.


> Can we set up our own CDN to serve our own content from our own servers and receive the AMP badge in search results

This...doesn't make sense. You lose the value of a CDN (both to you and to the consumer of your content, in this case Google and the end user) if you're rolling your own.


It no longer makes sense to be able to serve our own content without it being pushed down in search results?

We were talking about CDNs because your colleague mentioned AMP CDNs, but the main point doesn't change: we cannot serve our own content from our own servers and get the same placement in search results as AMP content, even if our content loads verifiably fast and is as performant as an AMP page on the client.


> We were talking about CDNs because your colleague mentioned AMP CDNs

I have no clue who bdeurst is. They certainly aren't a colleague of mine.

> even if our content loads verifiably fast and is as performant as an AMP page on the client.

Can you explain to me how your page load time is 0ms? My understanding is that a correctly functioning AMP-cached page will load for the user in a whopping 0ms, because it can be preloaded.

The entire design of AMP starts from a fairly straightforward premise: "How do we reduce (user-visible) page load times to 0, safely, cross origin?" If your page's user-visible loading time is longer than 0, you're failing to keep up with AMP.


AMP pages are preloaded; that's how you get the 0ms load time. If Google instructed the browser to preload other search results the same way, those would also be available in 0ms when the user accesses them.


See my statement about secure, cross origin preloads. You're asking a search engine to XSS attack its own users.


I think the correct term I was looking for is prefetching. That's a secure way to tell the browser to start loading search result links in the background.
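To make that concrete, here's a minimal sketch in TypeScript of what a results page could do to hint prefetching of its top organic links. The "a.result-link" selector is invented for this sketch, not Google's actual markup:

    // Hypothetical illustration only: inject <link rel="prefetch"> hints for the
    // first few organic results so the browser fetches them in the background.
    const topResults = Array.from(
      document.querySelectorAll<HTMLAnchorElement>("a.result-link")
    ).slice(0, 3);

    for (const anchor of topResults) {
      const hint = document.createElement("link");
      hint.rel = "prefetch";   // low-priority background fetch into the HTTP cache
      hint.href = anchor.href; // fetched directly from the result's own origin
      document.head.appendChild(hint);
    }

Note that those background requests go straight to each result's origin, carrying whatever cookies the browser already holds for it.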


> That's a secure way to tell the browser to start loading search result links in the background

prefetching isn't private cross origin:

> Along with the referral and URL-following implications already mentioned above, prefetching will generally cause the cookies of the prefetched site to be accessed.[1]

IDK about you, but I'd generally prefer that my cookies and IP not be exposed to all of the links that happen to be in the first page of search results.

[1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Link_prefe...


Browser specs can be improved, and new ones introduced. And even if safe prefetching is deemed technically impossible, the question remains: should we give Google and a handful of other companies disproportionate control over how we publish and consume content, for 50ms of load time?


> Browser specs can be improved, and new ones introduced.

Yes, for example Signed Exchanges, which on a technical level solves all of the problems of rel=prefetch (and a number of the problems with AMP, like link pasting and copying).

> Should we allow a handful of companies to be pinged every time we load a page on the web

I'm hopelessly confused here: you're only going to "ping" one of the handful of companies if you were referred by that company. (In a world with signed exchanges) You're not going to come across an AMP-cache link organically. You'll navigate to example.com directly, without anyone except example.com (and your DNS provider) knowing. The cache provider will only know if you navigate to the cached site via the cache provider. Concretely, you don't go to the Google amp-cache unless you're navigating there directly from Google's search results. Same for Microsoft/Bing.

So if your metric is

> Should we allow a handful of companies to be pinged every time we load a page on the web

Then yes, absolutely, because nothing changes!

Edit: To address your other question,

> should we give Google and a handful of other companies disproportionate control over how we publish and consume content, for 50ms of load time?

Alright: how is AMP materially different from <whatever other algorithmic choices rated search results before>?

You seem to be claiming that AMP is harmful to someone, but to whom? It's not harmful to competitors or to end users, and it's only harmful to developers if you make the most strained argument.

My premises here are that users actually prefer AMP results. You may not, but my understanding is that most users do. So from the perspective of an end user browsing the internet, AMP leads to an improved experience.

So it's good for users.

No one has yet been able to explain to me how its actually harmful to a web developer who now has an incentive to make AMP-compatible sites. Like sure, you now have to work with a framework you may not like, but that's not a compelling argument when people are claiming that AMP is a threat to the sanctity of the internet.

So it's not like bad for web developers, it's just sort of a lateral move.

That leaves competitors to the giants. But AMP is an open standard, and DDG could, if they wished, implement an AMP cache themselves today and it would just work. And they'd, if anything, benefit from the bigger players pushing that ecosystem. There's the potential for abuse via the caches.json registry, but the AMP project is aware of this and notes that the registry could be decentralized using Subresource Integrity or similar, if such a standard was adopted[1].

So again: I'm confused by how exactly it's bad, beyond the "I am forced to develop in a way I don't want to if I want to appear near the top of the search page", which isn't new.

[1]: https://github.com/ampproject/amphtml/pull/18495/commits/c5d...


I've actually removed that second question before I noticed your answer, because I knew you would then skip over the first one. Feel free to address the main point I was making in all my posts in this thread, whenever you feel ready.

> So if your metric is

Google being pinged obviously isn't my main metric, as you can see from all my posts in our discussion. My main concern is that publishers will be forced to use specific publishing mechanisms (AMP, Signed Exchanges) to appear at the top of Google Search results. That loss of control puts publishers in a vulnerable position, and hurts innovation across the web.

> Then yes, absolutely, because nothing changes!

Everything changes. Google's influence and control won't end at the moment the user navigates away to a top result on Google Search.


I updated the prior post, but to be a bit more concrete, what differentiates AMP signed exchanges from, say, HTTPS?


Google could serve an older version without the user knowing.


The signed exchanges protocol requires that the content be signed with a key with a short expiry date (< 1 week), sites are free to make it shorter. In extreme cases, the providing site could sign with an expiry of < 1 day or even something like 1 hour.

And iiuc, sites are still free to revoke their certs. So this is actually probably more secure compared to something like https in that regard.

[0]: https://github.com/WICG/webpackage/blob/6cc3237b36c2f9ce7534...


I see what you're saying but I'm not sure that's a problem - how many people are building CDNs in their garages that need to be certified?

Also, everything I say on HN reflects my own opinion and not that of any organization, which is what my profile states. I do not hide behind an anonymous username precisely for this reason. Poisoning the well by doxxing me doesn't change how the AMP standard works for CDNs either, and only serves to derail the conversation.


>Everything I say on HN reflects my own opinion and not any organizations, which is what my profile states

Sorry, but I agree with dessant. Your profile doesn't disclose your affiliation and nor do your comments. It's absolutely relevant to the discussion, because whether you want to admit it or not (or try your best to act neutral), your day job will have some influence over your opinions on these sort of projects.

You're right that it doesn't change objective facts about the specification, but I think it's misleading to suggest that, in general, external third-party CDNs are first class citizens in the AMP ecosystem when they're not treated the same within search.


Because the Department of Justice doesn't enforce anti-trust law anymore. Read the book "Chickenshit" for more details.


Also what is the technical benefit? On android it feels like a downgrade


It seems faster for me across devices, but there are usability issues, at least on some sites/pages. In my experience "find on the page" is spotty or impossible on mobile sometimes.


What makes you think it feels like a downgrade? AMP results seem to, on average, load faster with a better UI to me.


Not sure why this is being downvoted. I encourage people to visit the non-AMP version of smaller news providers.

Enjoy the popups, video ads, autoplay, bloated sizes, etc.


Whenever I get an AMP page, I always go to the non-AMP version, even for smaller news providers.

I find the non-AMP version to be superior every single time, but I use NoScript so I don't see the popups, video ads, autoplay, etc.


If you use adblocking addons you don't get many of those.


Or even reader mode - I don't mind flashing the cruft for a few seconds while I turn it on, and it's the best of all worlds. The site gets their ads (briefly) loaded, and I get a clean page to read.


AMP isn't being forced on websites, they choose to enable it or not.


That's an arguable point if not enabling it means lower rankings on search... Which is problematic when both are being offered by the same company.


You either choose AMP and appear in the search carousel at the top of search results in the biggest search engine in the world. Or you choose not to implement AMP and you don't appear there.

Your "choice".


Cake or death.


Yeah and death row prisoners choose their last meal /s


There are alternative search engines of course.

I use StartPage which has the benefit of local results (but can be fully anonymous) but still uses Google search results. DDG is a great alternative, but isn't for me. And then there's Bing


Just FYI, Startpage was recently acquired by a shady company.


Huh, you're right, this flew completely below my radar.

https://restoreprivacy.com/startpage-system1-privacy-one-gro...


Wow. Did not know that, time to move along then. DDG here we go. That said, their privacy policy still looks pretty strong: https://www.startpage.com/en/search/privacy-policy.html

...and at the end of the day, who out there is actually better and can prove that no data is leaking? Maybe running your own searx is the only option? (http://asciimoo.github.io/searx/)


Startpage person here. Maybe I can shed some light on this. Last year, Startpage announced an investment in Startpage by System1 through Privacy One Group, a wholly-owned subsidiary of System1. With this investment, we hope to further expand our privacy features & reach new users. Rest assured, the Startpage founders have control over the privacy components of Startpage (https://support.startpage.com/index.php?/Knowledgebase/Artic...).

Also, a couple of things that set Startpage apart from DDG: 1) We're HQ'ed in the Netherlands, ensuring all our users are protected by stringent Dutch & EU privacy laws, 2) we give you Google results without tracking, 3) with Anonymous View you can visit results in full privacy.


> Their destructive AMP service confusingly shows you Google's domain instead of the website's

No, it doesn't. It's actually served from Google's URL, but it (the AMP service) shows you the original site URL (well, it shows the domain by default but that's a button that expands to the URL if you click it.)

Your address bar shows you the Google URL, but that's not misleading, either, since what the address bar has always shown is what location content is being served from, not a content identifier distinct from the mechanics of content distribution.

> they can't fix that without losing out on tracking

Nah, they could track if they worked like a classic CDN.


I mean, I generally get the gist of what you are saying, but you are saying "no, you're confused, it's not misleading..." It's kind of like saying "no, you don't have hypochondria, it's all in your head!"


I'd love to learn what % of A/B tests get rejected after the test has concluded.

I suspect there's naturally a laundry list of biases toward "all the work we designed and implemented needs to succeed, or boy do we look silly."


If you want a favicon fix, use this CSS userstyle https://userstyles.org/styles/179230/google-search-old-style with the Chrome/Firefox extension "Stylus".


>Thanks for ruining the Web, Google.

When does the mass-migration to DuckDuckGo go mainstream?



Might be now? I just swapped my default search to DuckDuckGo.

I have been having several issues with google search recently, this just seems like a good time to make the jump.


Yeah for a long while I felt like an idiosyncratic person for using it and it did feel like a minor sacrifice. Nowadays it really does seem like a competitive platform on quality. Slightly less good at parsing the semantics, but much better at actually showing me search results instead of ads.


I switched yesterday. I love that it does not try to guess what I search for.


I use DuckDuckGo on mobile because it doesn't prevent me from saving images.


Why not just use Bing then? Why drink the same wine from different bottle?


It slowly moves a little bit more each day.

Results that aren't useful will change habits.


What a bad thing this AMP is... it is very hard to go directly to the website. Good that only shitty news outlets use it though


I hate to be that guy, but have you considered moving to another search engine? For example, DuckDuckGo is very decent once you adapt to it. And for the few cases where you absolutely need Google to read your mind, just add "!g" to your query and you're automatically there.


The thing which I use Google most for is typing some random place’s name in and it gives me that little card which shows how busy it typically is, what time it’s open, directions, contact number, etc. DDG just gives me a search result and 90% of the time that’s not what I want. I’m looking for information, not a list of URLs.


You can go straight to Google Maps for that. Or use the "!gm" bang in ddg :)


> I still hate it

I left strongly worded feedback on their form.


Just wait until their feedback forms get ads.


I'm sure a sentiment analysis bot looked at it and added a '-1' somewhere in their databases.


It's funny to think that, of all the products Google kills on the regular, the one thing most everyone wants them to be done with (AMP) is probably gonna stick around forever.


> This is part of Google's attempt to de-prioritise the URL

And people wonder why phishing is a thing?


> I've been in this A/B test for a couple of months now, so I've had time to adjust, and I still hate it.

I have been trying to switch from google to duckduckgo for years, but it's only in the past few months that I have been successful, and I have google to thank for that.


They could fix this for publishers who point a subdomain that way. Or URL-rewrite the address to the publisher's.


Google is satanic. They are a mirror of the Soviet Union. Google also works with 3rd-party websites to blacklist IP addresses so you can never post on a forum. And people always say you get blacklisted because you spam. I think that's bull crap. I think people get blacklisted for their political beliefs.


To be fair, didn't internet marketers ruin things first by cramming keywords into URLs and page titles to game search engines?


Two wrongs...


Just saying, IMO Google isn't ruining the internet. Of course, with every decision being scrutinized, they can't have a perfect record. They have a search engine that, due to its popularity, is the target of all kinds of stats gaming and optimizing, while at the same time they are trying to grow a profitable product. So it's not like there are simple solutions, and easy quips like "Google ruined the Internet" sound like a lazy argument.


> Thanks for ruining the Web, Google.

Wow...


It's not just ruining the web, it's essentially dishonesty. Or if you are a pragmatist: lying.


> Thanks for ruining the Web, Google.

So they helped make the web a far easier place to look things up for a few decades, and after one layout change you say they ruined the web?

It's easy to be on the barking side.


It's likely distaste at the culmination of a lot of anti-customer moves that Google has made over the last few years.

Barking side, indeed


Google gave us a lot, but that does not mean they should not be criticized. Without emotion I can say that using Google is far worse today than it was in the early 2000s. Besides delivering the results in a much more readable format, at that time we were able to search specifically in forums and there were a lot of advanced features like "linkto:" that are not supported anymore.

The problem I see is that Google does not care about us Geeks anymore. They are 100% focused on consumers now, and that sucks a lot.


> They are 100% focused on consumers now, and that sucks a lot.

Focused on exploiting consumers, that is. Consumers are the livestock in Google’s factory farm. They are actively hostile to end users now.


Criticizing is fine, sure, but saying they ruined the web? A bit of a stretch?


Google has done away with most of its original competitive advantages. Its biggest advantages now are its name recognition and its size. If Google had started off with paid search results and what-you-search-for-is-not-what-you-get we'd probably all still be using Altavista.


that's a rather simplistic and faulty argument. there are plenty of things wrong with "lure and then abuse" scenarios, and it isn't really an argument at all to say "but look at all the good the lure phase did".


AMP obscuring the URL is a side effect of the current technical implementation, not a goal – and they are working to fix it: https://searchengineland.com/google-announces-signed-exchang...


To users, side effects are the same as goals. We have nothing to judge a vendor on except observed behavior.


It's the intended side effect. For the longest time, the AMP team wouldn't even acknowledge it's a problem ("oh, we just provide the standard, it's the browsers' fault").

Then Google relented and provided a non-solution in the form of an obscure bar on top of AMP pages (in which the link to the original page is deliberately designed to not look like a link).

Signed exchanges are a bone thrown to the standards committees after all the damage has already been done.

And the "solution" has been directly called by Mozilla harmful, they are not going to implement it. Safari shares Mozilla's concern.


The parent mentioned that: "as they can't fix [the URL appearance] without losing out on tracking, they're trying to change what the URL means."


Actually, that so-called "fix" obscures the actual Google URL where the content is served from.


Yes, and that fix is worse than the original problem.


Cloudflare has an option for native AMP urls, at your own domain, as well.


Is any site using this? It requires a special certificate and signing your content (which admittedly Cloudflare will take care of for you) but even then it’s only for Chrome and Firefox and Apple have said they won’t support it. Over a year after announcing it I’ve yet to find a single site that does this.


> and as they can't fix that without losing out on tracking

Tracking what users click on as the result of a search is critical feedback for training your models/algorithms; it's not just about "hey, let's see where this user goes to fine-tune ad targeting for them". And, AFAIK, every search engine out there does it(?)


They can track that with a simple JS onclick handler, or a simple redirect on their server; that's not the problem. (They do both, btw.) The problem is that they want to track what I do on the pages I've already clicked through to, which is absolutely none of their business.
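
For the curious, here's a minimal sketch of the redirect-style tracking described above. The endpoint and parameter names are made up, and real search engines' click trackers are obviously far more elaborate:

    // Minimal sketch of result-click tracking via a server-side redirect.
    // The /click endpoint logs which result was chosen for which query,
    // then forwards the user to the real destination.
    import { createServer } from "http";

    createServer((req, res) => {
      const url = new URL(req.url ?? "/", "http://localhost:8080");
      if (url.pathname === "/click") {
        const dest = url.searchParams.get("url") ?? "/";
        const query = url.searchParams.get("q") ?? "";
        console.log(`search=${query} clicked=${dest}`);   // the "training signal"
        res.writeHead(302, { Location: dest });           // user barely notices
      } else {
        res.writeHead(404);
      }
      res.end();
    }).listen(8080);

The user lands on the destination as usual; none of this requires following them once they're there.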


Perhaps so, but I still find it highly objectionable.


Just search in a private window then. The part of the tracking that I assume you find objectionable (gathering info on what sites the current user visits) goes away when you close the window, while the part that helps the search engine (and thus results in better search results for everyone) still works.

Now sure, we can argue that maybe the company should provide options where you can say "you can use what I click on for search training but not for targeted advertising" (I think Google does provide a set of options that pretty much disable all web history/targeted ad collection), assuming you believe they follow through. But the company needs to pay for its services somehow so I can't blame them for tying the two types of tracking together, I still have tools as a user (private window) to avoid it if I care enough to.


> This is part of Google's attempt to de-prioritise the URL.

URLs have always been an implementation detail and not a user feature. From the very beginning it was intended that users would follow links, not type in URLs. HTML was built on hiding URLs behind text. Then AOL keywords happened. Then the search explosion happened. And short URLs. And QR codes for real-world linking. And bookmarks, because yet again typing in URLs is not a major driving use case.

Typing in un-obfuscated URLs has almost never been a key feature or use-case of the web. If anything URL obfuscation is a core building block of the web and is a huge reason _why_ the web skyrocketed in popularity & usage. Don't pretend that somehow AMP obfuscating URLs will be the death of the web. The web exploded in growth despite massive, wide-spread URL obfuscation over the last 20 years. Nothing is actually changing here.


I am not comfortable with someone else's domain becoming the de facto front door to my website.

There's nothing I can do about it, but I tend to hate it.

If my name is Mike and someone powerful calls me Chucklehead, I will have to start answering to that name in order to continue doing business.

But what REALLY concerns me is if a year later, that powerful someone calls someone ELSE Chucklehead.


Well then don't use AMP? It's your domain, it's under your control. You at least have a choice here, whereas you can't block most other forms of URL obfuscation when being linked elsewhere.


Google is deprioritizing sites without AMP in search to force them to use it.

So if I want to search for something on Reddit, I've already learned to use DuckDuckGo, because then I don't have to edit the URL to get rid of the AMP part or scroll up and down to find the real link.


> Google is deprioritizing sites without amp in search to force them to use it.

Isn't that just a protection racket?


That's a super valid complaint and also completely orthogonal to any of this nonsense about the web being killed because a URL was obfuscated.


Isn't Google already the front door to everyone's website? Are the AMP URLs really that crazy? If you have a .com address, isn't that roughly the URL shown in search results?

Your link is being shared from Google's search results and their application, so you might not like it, but they have every right to control how it's displayed. Is it difficult to accept traffic from an AMP link? Are there technical downsides besides being called a name you don't want?


The "web" is built around "human readable" technologies. Even actual implementation details that the user doesn't care about - like the application layer protocol (HTTP) and the source code for pages (HTML, CSS) - is human readable.

The "point" of the web was to serve humans, not machines. If we wanted to serve machines, we'd just throw binary blobs around, which would be orders of magnitude more efficient.

That said, I still have a bunch of "ancient" tech magazines that had directories of URLs for (then) popular websites, grouped by category. That's how we found things then.

People forget that there was a world before Google.


> From the very beginning it was intended that users would follow links, not type in URLs.

so, about that

    4.6 Locators are human transcribable.

   Users can copy Internet locators from one medium to another (such as
   voice to paper, or paper to keyboard) without loss or corruption of
   information.  This process is not required to be comfortable.

you can't copy a page title into a client and expect it to find the right resource, now, can you?

accessing the URL is listed as one of the fundamental use cases for them, and for good reasons, detailed elsewhere in the same RFC


I agree with most of this... but have somewhat of a counterpoint to "you can't copy a page title".

You (often) can copy a page title into a client (Google) and expect it to find the right resource. This is usually done with articles, etc.

Even your comment provides a perfect example - you didn't link to RFC 1736, but the text "Locators are human transcribable" is unique enough that the first Google result is correct.

So you didn't have to provide a URL to lead someone to this page, just 'enough' unique text for it to be findable.

Which is kind of amazing - and maybe not what the original RFC intended.


counterpoint: https://i.imgur.com/ORLixJa.png

none of the results here are pointers to the source I used

Sure, the content is the same, but the resource isn't. If anything, this demonstrates how easy it is to mislead a user, directing them to a different resource while they think it's the same one.

compare with stack overflow result:

https://i.imgur.com/maJi47G.png

Luckily, prominent sites get pushed to the top of the results (I had to cut the screenshot because it was submerged in advertising), but the attack vector is evident.


>You (often) can copy a page title into a client (Google) and expect it to find the right resource. This is usually done with articles, etc.

So the web was designed with Google in mind? Woot.


> you can't copy a page title into a client and expect it to find the right resource, now, do you?

You can type it into Google, though...


My mother opens every webpage by typing the URL into Google.


Because web browsers' default "home screen" has a large text box. People think that's where the address goes, but it's actually a web page that does search. Shame on us for designing it that way.

But on another note, it will be the day when your mother (or anyone else for that matter) types in 'www.walmart.com' and actually goes to 'www.target.com' (names taken from other examples that I've seen on Twitter).


That is already happening

I made a new webpage for my father, and my mother cannot open it, since Google does not know about it and there are no links to it.


Maybe Firefox should change the default screen from containing search to containing the address bar instead. That way if you type in a proper URL there it should go directly to it rather than doing a search.


Not going to happen; that very search box is Firefox's main source of income.

There is unfortunately a tax on stupidity, and in this day and age it is paid with one's personal data to multinationals.



Thank you.


> Typing in un-obfuscated URLs has almost never been a key feature or use-case of the web

Making people type in URLs is not a key feature of the web. Letting people interact with their various signifiers has always been.

Too many people involved in UX and product and commentary have forgotten that "don't make me think" doesn't mean "don't let me think."

The URL is a set of half a dozen affordances.


URLs have always meant something to users. They're how you trust the content, and indeed the link. And now, with so many exploits and so much phishing, it's even more important. Typing in a URL might be rarer, an honour reserved only for Google, Facebook, etc., but reading URLs is very important.


Let's distinguish between obfuscating the query parameters, the path, and the domain. They are different things, and the domain name especially is a major security boundary.


Most of what I mentioned did obfuscate the domain name. Users do not care about domain name security boundaries - that's yet again an implementation detail. An important one, but still not something most users recognize or care about. Hence, you know, why phishing is so successful.


Users used to care about paths and other descriptors. A decade ago, we were fighting for human-readable resource locators. However, users have been progressively taught to ignore them as too complex. (Just as happened with any advanced but user-accessible browser controls.) Why is it that 25 years into web usage, any controlled access is deemed too complicated? (Mind that other technologies were already approaching the end of their lifespan at a similar age. The web is neither new nor disruptive anymore.)


> Why is it that 25 years into web usage, any controlled access is deemed too complicated?

Two words: corporate greed. It's so much easier to persuade and herd them to where they can be "monetised" when they don't know how things work, nor where to learn how.


Somehow the knowledge industry has turned dumbing down into a business. There's a TV ad for a smart-speaker-integrated car, showing a hip but managerial-type man downloading the route to his workplace to his car by voice command from home and subsequently happily arriving at work. How do you manage to make a living if you can't memorize your daily drive from home to work? How is this a product? Has augmentation of human intellect turned into a zero-sum game?


I use navigation to avoid heavy traffic. Most of the time it changes nothing but sometimes saves 30 minutes of driving.


This is categorically wrong. The URL is a fundamental part of the web, and without it the web couldn't be decentralized. Imagine if I had to reach Twitter by typing "twitter" somewhere; that somewhere then becomes the gatekeeper. The URL allows distributed gatekeepers. A mechanism with a distributed structure needs something like URL infrastructure.


You're talking about writing URLs as if that's the only purpose they serve. You also read URLs.

Just as I'm not going to type a long path to a file in my file browser or CLI, I'm not going to type a full URL character by character. But being able to see the path still provides extremely useful information.


Sidenote: your comment gets downvoted, huh. Though I don't agree with you, I'm happy to see someone with opposite arguments. Have an upvote, as you're contributing greatly to this conversation.


It's funny, because if you run AdSense on your website, Google has very strict guidelines about not misleading users and making a clear distinction between advertisements and your content. However, when Google shows ads on their site, they don't need to follow those rules, they blend them in as closely as possible.

Also, what's the deal with showing an advertisement for the same result that's number one? See the below screenshot.

https://i.imgur.com/f0Kolfv.png

Doesn't this seem wrong? For a lot of people, Google has become a site to not only search the internet, but to simply navigate it. It's normal for someone wanting to visit Expedia to search "expedia.com" or "expedia". They are trying to navigate to that website, Expedia is the first organic result, and yet Expedia is pressured into paying for an advertisement to prevent one of their competitors from appearing first. Even when a competitor hasn't advertised, they're still stuck paying like the above screenshot. To me, this feels inappropriate. Google is getting a hefty payday by simply redirecting someone searching for "expedia.com" to the Expedia website.


They display both because Google is selling ad space on searches like these, where people search the name of a site. If that site doesn't buy the ad, their competitor will. So sites are being forced to buy ads on their own trademark.


I understand why it's happening and I mentioned it in my comment. However, I just find it incredibly inappropriate that...

1. When I search an exact domain Google will take money from a competitor and show their "advertisement" first. I say that in quotations, because it looks like they're showing a search result, not an advertisement. At this point it feels like companies are paying for their search placement. Pay enough money and you can be the first result for any search term.

2. Does Google give Expedia the option to not pay for an advertisement when there is no competition? I don't think so, and in the example I posted, Google has basically scared Expedia into outbidding no one.

The whole thing feels like extortion. Pay us money or we'll send people trying to navigate to your website to one of your competitors.


I 100% agree with you. It is extortion.


Google = the new Yelp


That's what happens when MBAs and not engineers run your tech company


There are enough unethical engineers in the world (in fact, many of them work at Google!) that you don’t need to make this accusatory statement.


Your statement has the unquantified "enough" and "many" and the unqualified "in fact". It also completely misses the point: it is the proportion of unethical decision makers that matters, not the absolute numbers.


To be fair this exact behaviour has been happening since pretty much day 1, Google no longer being run by engineers is a very recent development.


The difference now is gatekeeping. Prior behavior did still leave one free to make choices.

Google just made their favored ones easy.

Now that fails to deliver the expected results: too many people aren't making the choices Google would have them make.


I agree with your point #1: if you search for a specific term / domain, it should always appear first if there's a direct match.

But for #2 there's a pro-competition argument here. If you search for Expedia and all you ever get is Expedia and Expedia pages underneath that, in theory that's good. But what if you don't know about other online travel sites? You'll never see them, so it kind of makes sense that you are shown other sites in there.

Google should recognise that a search for Expedia is either: a) For Expedia b) For a travel holiday

and let other competitors rank for b), showing Expedia as the biggest and main CTA on the page.


> so it kind of makes sense that you are shown other sites in there

Why? If I search for a specific thing (Expedia), why would Google assume I want to see other options that I did not ask for other than "it's more profitable for Google"? More specifically, in the event I want other options, why is the correct answer "you want to see the other options that are paying Google the most"? That's not "pro-competition", that's "pro-Google".

> Google should recognise that a search for Expedia is either: a) For Expedia b) For a travel holiday

Why, short of mind reading (I assume they're working on it, but it's not in the 10Q), should Google ever assume B?


What about generic or almost generic trademarks? If I search for something like bandaid I am certainly _not_ expecting Google to rank Johnson and Johnson (the owner of the trademark for bandaid) above other potentially more relevant results.

And the same applies to other terms like Kleenex, Ziploc, aspirin, etc.


> But what if you don't know about other online travel sites? You'll never see them, so it kind of makes sense that you are shown other sites in there.

I don't think that makes sense. Or, at least, such results should be below all the actually relevant results.

If I'm searching for Expedia, then what I want is results about Expedia. Nothing else. If I want to know about other travel sites, I'd be searching for "travel sites" instead.


And if you don't know the name for the generic product to search for?

Jacuzzi, styrofoam, and Super Glue are all brand specific trademarked items[0], but I doubt most users care about that when they're searching for those terms.

[0] https://en.wikipedia.org/wiki/List_of_generic_and_genericize...


To play devil's advocate, these kinds of brand ads are usually very cheap and frequently generate some incrementality - you're basically paying to have your links take over a larger part of page 1. Even if competitors weren't going to advertise, it might be worth it.


I pay five figures a month for my "$domain" ad. It's not cheap by any means, but we're forced to buy, otherwise our competitors steal our traffic.


Similarly with gTLDs. Many companies have to register their own .sucks domain so someone else doesn't stand a website up there. What a pointless waste of money.


And .sucks is expensive for a TLD. I also hate .co. They're basically a top-level typo-squatter IMO.


"Steal"?

If customers searching for your brand are happy to buy from your competitor with a different name, maybe your brand isn't so strong?

And if you are profiting from advertising your brand to people who don't choose your product over alternatives, isn't your marketing just "stealing" purchases from competitors?


Yeah, that's not how it is working right now. It's more like:

User: "Google, how do I get to Costco?"

Google: "Here are directions to Target."

User:"But I asked for directions to Costco."

Google: "Target gave us a lot of money. So, firstly here are directions to Target. If you still want Costco, keep scrolling, but Target is great. You should shop here."

When you literally search for one brand and the first result is a competitor, just because they bought out the ad space for your brand name's keywords, that is theft, imo. Or extortion if you'd prefer. It's at the very least, extremely disingenuous and sleazy. Customer loyalty is looking up the preferred brand to begin with.


I imagine it kind of like this...

Two stores, A and B are competitors. I visit company A and say that I'll stand at the entrance of their competitor, store B. I'll tell each customer trying to enter store B about store A and attempt to refer them, as long as store A pays me $0.50 for each person I try to send their way.

This works, some people that drove to visit store B now go to store A instead. Now, store B is getting annoyed at me "stealing" their customers, but I have a solution. I tell store B the more money they pay me, the less likely it is that I'll refer their next customer to store A. So, store B starts paying me $1.00 each time someone tries to enter their own store, and in return, I do nothing but stand there.

A year goes by and I'm standing at the entrance of every store in the country. I collect a dollar when someone tries to enter every store, and I do nothing but let them pass.


That's a naive way of looking at things. Do you know how many brands and trademarks exist in the world? It's over 40 million! Do you know how many brands easily get mixed up with normal English? When someone searches for "best buy", do they always want to go to the Best Buy store? When I type "subway", do I want a sandwich shop or a subway station? Doing things at scale is very different. And you can't be unfair to small brands while only taking care of big brands. While I understand your issue, there is no clear-cut solution here.


To be fair, they don't do that on Google Maps. However, they will push their own Google Flights service when you search for United, e.g. a "Book United" link that redirects to Google Flights.


Why does Google owe you traffic?


If I promise to give you a truthful answer to a question you ask, "What's 2+2?", and then I say, "5" (because math teachers pay me to give that answer), the same argument could be made.

Why do I owe you a fulfilled promise? Because I said I'd be honest? Why do I owe you honesty about my honesty?

Reductio ad absurdum

If I build a search engine and promise that it will give accurate and honest results to your search queries, and then it doesn't do that... why do I owe you? "I didn't force you to trust me" is the most childish way to try and weasel out of broken promises there is.

To answer your question though - they don't, but it makes them a shitty company to lie to/manipulate their users.


It's lying insofar as grocery stores having two brands of chips on an endcap that you see first before you get to the actual aisle with the other 30 brands is lying.


More like specifically asking a clerk where the Lays Salt & Vinegar chips are, and they walk you to the chip aisle and respond with "Here are the Doritos", but sure...


Except this store only makes money on showing you where the chips are and not on the sales so if they can't show you the Doritos first, there's no store and no chips of any kind. If that's what you'd prefer, then pretend the store doesn't exist and go to a different one.


That's not the only way they make money, though. And even if it were, giving accurate results in those cases is the cost of keeping your users around for the cases where you can serve them relevant ads. The only reason they got big was their accurate results, so it is a shitty thing to do.


Google owes mostly-unbiased accuracy to the internet at large. They built their brand and reputation on that. It should not be possible to purchase a shortcut on the accuracy, especially someone else's.


It doesn't owe anyone anything. It trades this unbiased accuracy for eyeballs on its advertising, the same advertising that many folks in this comment section are condemning. Without the advertising, Google doesn't owe anyone anything. It's not a charity.


It's a sad world you live in where businesses don't owe it to you to behave ethically.

You'll probably figure out at some point that they do, and that this whole law thing is a proxy for ethics we put in place so that we can punish people that don't.

Until then, live in your world.


I live in the same world you do, minus that smug sense of entitlement to products and services others provide.


Google shouldn’t be a purveyor of brand quality in the case where the user has made an explicit request. If a user types in “shoes” should they 100% get shoes.com as the first result? Of course not. But Expedia? It feels wrong not to make that the first result.


Expedia is indeed the first result for "Expedia", after the 0-2 slots of advertising that may or may not be Expedia. If Google can't monetize the search results page, why are they obligated to provide the search results in the first place? Until Google recoups the costs by showing ads, the relationship between Expedia and Google is one sided: Google provides a lot of traffic to Expedia with nothing in return. How is that fair?


It doesn't. And I totally understand from a business point of view why they do what they do. This is an unfortunate side effect of the monopoly they have on web search, which due to network effects and aggregation is unlikely to go away.

The only real solution is to regulate this to make competition fairer again.


I can't speak for the OP, but in our situation, the customer doesn't realize they are buying from a competitor. We regularly get support requests from customers we can't find an order history for. It's almost always the case that they ordered from someone else thinking they were ordering from us. They searched our name, clicked the first result, and placed the order.

We've even seen this happen with repeat customers of ours. It's tough since Google allows the ads to be very confusing, but then we also have competitors who have blatantly copied our style, content, and have even named themselves in a similar manner.

We also spend just under five figures a month on branded keywords to combat this. We rank #1 organically for all of them, but there are always 3-4 competitors bidding on them so we only get a small portion of that traffic if we don't pay for those keywords and get the #1 paid spot as well.

Even with that in place people will still get duped and click ad #2, #3, or #4 that go to competitor sites.


If I were a competitor, and it’s that cheap and effective for Expedia, then I’d never allow Expedia to have that real estate.


It's absolutely extortion, but it's cheaper for Expedia to buy than a competitor due to Google's auction rules.

Essentially, the bid to be #1 is discounted by the click-through rate. If 90% of people who search "expedia.com" will click an ad for Expedia, and only 10% of people who search "expedia.com" will click an ad for travelsite.com, travelsite.com has to pay 10x what Expedia does to make up for the lost revenue.
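
A toy sketch of that discounting, under the simplifying assumption that ads are ranked purely by bid times predicted CTR (the real auction uses a broader quality score and second-price rules, so treat this as an illustration only):

    // Toy model: rank ads by expected revenue per impression = bid * predicted CTR.
    // Numbers mirror the 90% vs 10% example above.
    const adRank = (bidPerClick: number, ctr: number) => bidPerClick * ctr;

    const expedia = { bidPerClick: 1.0, ctr: 0.9 };   // brand owner, high CTR
    const competitorCtr = 0.1;                        // rival bidding on "expedia.com"

    // Bid the rival would need for the same expected revenue per impression:
    const rivalBid = adRank(expedia.bidPerClick, expedia.ctr) / competitorCtr;
    console.log(rivalBid); // 9 -> roughly an order of magnitude more per click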


Devil's advocate - isn't this a neat way of automatically being anti-incumbent and pro-competition?


It is a tax on customers. Incumbents can generally afford to pay more than upstarts, and of course will just pass that cost on.


? I don't follow the mechanism you're thinking of here.


No. It's a currently non-illegal way to extort businesses.


It might be, if it were more transparent. So many people are misled by these "ads" on branded search keywords, though, that it really is extortionate.


Wait, I'm confused. Are you saying that Expedia pays Google less when people click on the ad vs when they click on something else?


No, I'm saying Expedia pays less for ads for expedia.com than ExpediaCompetitor pays for ads for expediacompetitor.com.

This gives Expedia an advantage when purchasing ads on the search term "expedia.com".

Of course, Expedia would probably prefer not to purchase ads on "expedia.com" at all.



> 2. Does Google give Expedia the option to not pay for an advertisement when there is no competition?

If nobody bids against Expedia, they'll end up paying a penny to win the auction. Source: https://support.google.com/google-ads/answer/2996564?hl=en&r...


Why are users entering brand names into Google instead of going directly to the site? How can we change this behavior?



A shakedown, not sure how it's been legal for so long.


Does this create an opportunity for 'number one' sites to band together and create a competing search engine?

Would it be possible to fund a search engine startup by selling shares to all of those sites who now have to pay to stay at the top? In the long run, it should be cheaper to fund a competitor.


I'm helping this effort along -- all of my sites specifically exclude Google's spiders, but allow those of other search engines.


I'm not sure regulators would look kindly on a pan-industry cartel explicitly designed to suppress smaller businesses from appearing in the marketplace.

That's a pure antitrust play.


HERE is an industry consortium created to offer an alternative map service.


That sounds like Google to me.


Do you have an example where the competitor website shows up first because the searched company hasn't bought the ad? Haven't seen that before.



This reply reads like a robot wrote it. OP obviously understood all this when making the original comment.


This is the correct answer, not the other answers below. We call it branded search, or brand protection placements.

This is the sort of practice that will get Google in trouble with Congress.


The url they're using for the ad is interesting:

www.expedia.com/Expedia/Official_Site

To me (which may well be a minority view) that actually makes it feel less official, because "/Expedia/Official_Site" is the sort of tactic a scammer would use.

I'd worry about it having the unintended consequence of building people's trust in something like "/Official_Site" vs focusing on the domain name. At the very least it's muddying the waters in this respect.


And that URL results in a 404! Good riddance.


Had the exact same thought. Had to double check the domain name twice since it looked so fishy!


Scammers use the messaging of legit companies, and vice versa, because by assumption consumers can't tell the difference and companies use the most effective messaging they know.


I always scroll past the ad to click on the "real" URL. I don't want to add to the statistics showing support for that behaviour.


Indeed. That's why Google is obscuring the boundaries between "ads" and "results", until you no longer have the option to not be a statistic.


You do you, but (1) you aren't moving anyone's needle on revenue or expenses, and (2) you're using the search engine for its main business proposition and then circumventing the compensation model, so you are not being ethical either. You could use another engine or spend some money to compensate the search provider.


> and then circumventing the compensation model

> spend some money to compensate the search provider

what?

Wait so you're saying everyone owes it to Google to click the ads? How much? Always? Sometimes?

Does this mean that if you go to the movies you are literally obligated to buy popcorn and soda? I agree with you that if no one clicked ads then Google wouldn't "be able to operate it" for free. But I think that, like any business, they operate in the aggregate. Some people make you money, some don't. In Google's case, everyone I have ever watched clicks ads 100% of the time, so I don't think they're hurting.

And also, I think if everyone stopped clicking ads and Google was forced to just charge users say $12 a month for search as an informational service with zero ads, I would gladly pay it. And the Internet and the world would be a LOT better.


Is it plausible they've created a Church/State-style separation between the Ads and Results? Seems like that would be desirable in a lot of cases, though perhaps not when someone is Googling for something with an obvious first result.


This amounts to racketeering, given Google's power. If you want to be found for a particular brand name, that brand has to pay up for the ad space or its competitors will take it, and either way, Google makes money.


Notice the first search result URL is different from the ad URL. That probably explains why they display both.


>what's the deal with showing an advertisement for the same result that's number one?

They paid for the ad. They probably leave it up as the first organic result because some people think clicking ads is malignant.


Fully agreed - and the necessity for the client here can't be overstated: I was doing some research into nearby apartment buildings, and searching the exact name of the building I was looking for returned an ad for their neighboring competitor as the top result, above the site of the building I was looking for.


Almost every average PC user I have worked with does not know where to enter a URL and always searches for it instead.


For most PC users, the URL goes in the same place as the search, so they absolutely know.


Maybe people who can block that ad wouldn't get any result for Expedia, which would upset Expedia a lot.


Google could keep the search result and just hide the ad.


https://www.google.com/about/honestresults/

It's a little confusing to read now, so for context: at the time Google published this, it only put ads in the sidebar to the right of search results. This post was written to criticize the practice of putting ads atop search results, which competitors sometimes formatted almost indistinguishably from organic search results.


This bit from the 1998 paper by young researchers Brin and Page, in which they introduce a novel search engine called 'google,' is also fun and instructive:

"It is clear that a search engine which was taking money for showing cellular phone ads would have difficulty justifying the page that our system returned to its paying advertisers. For this type of reason and historical experience with other media [Bagdikian 83], we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers."

http://infolab.stanford.edu/~backrub/google.html


The post was written to criticize "paid placement" search engines like Goto.com/Overture (see https://www.searchenginewatch.com/2002/03/04/how-overture-go... for details). I believe Google has put ads above search results for as long as AdWords has existed (since 2000).


No, at the very beginning, Google only had the ads in little yellow boxes over to the right. They were very distinct. Then they started putting ads above the search results, but they still had a yellow background. Then they got rid of the background, but the results were still fairly visually distinct from the ads, and there would only be one or two ads above the search results. Now they just put this tiny little box that says "Ad" next to the ad, and they're no longer easily visually separable from search results. Also, very often the first screenful of results is entirely ads. The overall experience has massively degraded.

The other thing I've noticed is that more and more of the top results are from garbage content farms. Filtering this kind of crap out was the original reason Google existed and everyone switched to them, but they're failing at it now. IMO Google is overripe to be replaced with something better.


I have also observed people totally ignoring the top link and clicking a lower one even on searches where the top link was not an ad and was the correct page.


They don't show the full URL, and a few times when I clicked on the "correct" link I ended up on a shitty landing page I could not escape. Of course most of the blame is on shitty website design, but I still want to see where the link actually goes, because most of the time it's confusing.


Top ads in fact came first, in 2000, and were there through the launch of AdWords. History: https://searchengineland.com/google-adwords-turns-15-a-look-...


This suggests otherwise:

“For example, entering the query "buy domain" into the search box on Google’s home page produces search results and an AdWords text advertisement that appears to the RIGHT of Google’s search results” [1]

It is from October, 2000. It is so ancient, it is even before Schmidt happened. I don’t believe anything preceded that, but to be sure we’ll have to wait for Larry to chime in and clarify.

[1] http://googlepress.blogspot.com/2000/10/google-launches-self...


That press release also says:

> Google’s quick-loading AdWords text ads appear to the right of the Google search results and are highlighted as sponsored links, clearly separate from the search results. Google’s premium sponsorship ads will continue to appear at the top of the search results page.


Indeed. Missed that.

Verdict is this: they started at the top, then things went sideways ;)


Vendors pay for placement throughout Google's interface and search results, even if the 'list' of links is still 'organic'. For example, the placement of buy links for movies, flights, hotels, etc. Now with instant articles, Google places content higher if it allows Google to track and advertise within the article.


You can tell that's old because it's only ~200kb...


And even that can be considered excessive for what it is. Sad.


Oh my god, that page is going to disappear soon now.



Off-topic:

The load speed difference between these two archived pages is huge.


I searched for "google search advertisement" in the period 2000 to 2005, looking for screenshots of how Google ads have changed in style and placement over the years. Turns out the first result was this page... oh, the irony.


Amazing. File under not being evil.


Oh wow, the irony.


Your link is gold. A historic artifact.

Amazing how clear the writing is, how simple the message. That’s, like, totally not the corpspeak Goog emits now on a daily basis.

So, let’s do some digging.

The earliest version of the URL dates back 4+ years. https://web.archive.org/web/20151213182805/https://www.googl...

Things were a little better then, but not by much. This has to be earlier.

Ah, here it is:

https://books.google.ca/books?id=oNT3AwAAQBAJ&pg=PA289&lpg=P...

This page is from a book by Douglas Edwards, employee number 59, published in 2011.

The content of the OP's URL, written by the same person, is dated March 2002.

That company no longer exists. Goog should remove it from their website.


Excellent sleuthing -- makes sense that the message is so old. There's no way they would produce something like this today.


They'll fire all the people who made them this way, and get back to that, if they want to survive the next decade at their scale.


(And OT, or maybe not, as far as I can recall, AdWords was still CPM back then)


The google books link appears to have died: "You have either reached a page that is unavailable for viewing or reached your viewing limit for this book"

(I'm fairly certain that I have not viewed anything from this book, nor any book for that matter, this year.)



Is it ironic that the websites often involved in freeing up knowledge end up getting hosted in Russia?


The three IPs they return when you do a DNS lookup originate from Romania, Switzerland, and the island country of Saint Kitts and Nevis.

The site was created by Russians, though; I'm just not sure it is hosted there anymore.


Imagine the sheer sociopathic effort it took to overtake Google. The power plays, the backstabbing, the greed. Oh, to be a fly on those walls.


Oops - I did the same thing before reading your post...



With each AdWords display change, Google's been adding billions of dollars to their revenue by confusing and fooling their users and blurring the line between the content and ads.

A visual guide: "A (mostly comprehensive) history of Google's ad shading and labeling" https://i.imgur.com/0RxdzBE.png


For branded keywords, it's not just shady, it's racketeering. You wouldn't want your competitor to show up first when someone searches for your brand, would you? Then pay up.

E.g. https://i.imgur.com/SfomkdQ.png; the second result is a competitor's ad, the third result is the organic result.


This is what Basecamp has been complaining about for a while now too [1]. They even testified in front of Congress about it [2].

[1] https://www.cnbc.com/2019/09/04/google-paid-search-ads-shake...

[2] https://www.nytimes.com/2020/01/17/technology/antitrust-hear...



Thanks for the link. That's useful and entertaining at the same time.


This is a tricky proposition. Google has free speech rights to display whatever they want on pages served from their own servers. These aren't statements of fact, so it doesn't fall under libel & co.

It's like saying you want to regulate what advertising companies can display on billboards, e.g. that they cannot display competitors' ads near your company's offices. Since the billboard space is private and owned by the ad company (similar to the ad space on pages Google serves), they get to decide what to put there (barring regulations limiting free speech).


If my Michelin alternative Yalp became influential and I allowed restaurants to bubble up to the top based on money, would I be engaging in racketeering?


The difference being that you pay orders of magnitude less for your branded keywords than the competition does, through the quality score. Allowing companies to advertise against their peers actually creates more competition in the marketplace (which is a good thing).

Edit: Disclosure - I work at Google but this is my own opinion on multi-sided markets


Advocating strongly for your employer and their business practices without disclosing the conflict of interest is essentially astroturfing.


You're right; I edited my comment to be clearer that this is my opinion only. I don't use an anonymous username on HN because, as you saw in my profile before searching me, everything I say reflects my opinion only.


Put that way, Google is a market maker and hence should be regulated. Right now it's an unsupervised private regulator, which doesn't make sense.


More competition is not synonymous with good; it is a thing that can have good effects or bad ones.

If, for example, changes to the marketplace make it harder to determine who is a fraud and who is not, or simply make it difficult to judge the quality of products, then competition between the companies with quality products and the fraudsters may increase, and that would be a bad thing.


Never seen this image before. Fascinating. A damning visualisation of Google's slow but steady descent into a soulless money-making machine.


A little while back I started experimenting with changing the user agent string and found that there are actually many different variations of Google's search UI that are currently accessible. For example, I was surprised to find that by setting the user agent string to Netscape Navigator 4, I could get a lightweight, no-JS version of Google that looked like it was from the early 2000s. By using a user agent string from IE6/IE9, I could get a version of Google that looked like it was from around 2010 (the former with a simple white navigation bar, and the latter with a more complex black navigation bar). I found it interesting that these UIs seem to be almost frozen in time: many of their navigation bars contain outdated links that either redirect or 404. I assume this means that old browser versions are stuck in time in terms of the Google search UI as well.

Many of these UIs don’t have the controversial changes that Google has recently been implementing, including adding favicons and hiding full URLs.

I also found that there were several different mobile UIs for Google with different navigation schemes and search box styles.

I implemented what I found in a simple Firefox extension that changes the user agent string for Google searches [1].

[1]: https://addons.mozilla.org/en-US/firefox/addon/google-search...
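
For anyone curious how small the core of such an extension can be, here's a minimal sketch of a Manifest V2 background script that rewrites the UA header on Google Search requests. It assumes the "webRequest", "webRequestBlocking" and google.com host permissions are declared in manifest.json, and the UA string below is just one of the ones reported in this thread (as noted downthread, Google has since broken some of these legacy UIs):

    // Background script: rewrite the User-Agent header on requests to Google
    // Search so the server returns one of its legacy, lightweight result pages.
    declare const browser: any;  // WebExtension API object provided by the browser

    const LEGACY_UA = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)";

    browser.webRequest.onBeforeSendHeaders.addListener(
      (details: any) => {
        for (const header of details.requestHeaders ?? []) {
          if (header.name.toLowerCase() === "user-agent") {
            header.value = LEGACY_UA;
          }
        }
        return { requestHeaders: details.requestHeaders };
      },
      { urls: ["*://www.google.com/search*"] },
      ["blocking", "requestHeaders"]
    );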


They intentionally show a broken UI if your user agent is Vivaldi. [1] In my opinion this is really concerning.

[1]: https://www.youtube.com/watch?v=QkayN3xiRDc


Crazy. User agent strings are such a broken concept where every browser pretends to be another. How anyone is still attempting to use them to do anything useful is beyond me.

If you are building a webapp, use feature detection, not user agent strings.


User agents ARE feature detection, but for the server.

Now I tend to agree with the general advice that anywhere you can use JS based feature detection, you should prefer that.

But hell, there are times JS-based feature detection REALLY doesn't cut it.

For example, here's the Chromium team recommending UA based feature detection for handling the SameSite kerfuffle: https://www.chromium.org/updates/same-site/incompatible-clie...
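
A hedged sketch of that idea: the server picks cookie attributes per request based on the UA, since no client-side feature detection can run before the Set-Cookie header is sent. The isSameSiteNoneIncompatible predicate below is just a placeholder for the client list the Chromium page describes:

    // Server-side "feature detection" via the User-Agent header.
    // Some older browsers reject or misinterpret cookies marked SameSite=None,
    // so the cookie attributes are chosen per request, before any JS can run.
    function isSameSiteNoneIncompatible(userAgent: string): boolean {
      // Placeholder check: the real list of affected clients is enumerated
      // on the Chromium page linked above.
      return /Chrome\/5[1-9]|Chrome\/6[0-6]/.test(userAgent);
    }

    function sessionCookieHeader(userAgent: string, value: string): string {
      const base = `session=${value}; Secure; HttpOnly; Path=/`;
      return isSameSiteNoneIncompatible(userAgent)
        ? base                          // omit the attribute for broken clients
        : `${base}; SameSite=None`;     // modern clients get the explicit attribute
    }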


UA is advertisement (heh) of features. Why shouldn’t you use what the browser tells you?


Think about the mapping between features and UA strings. Over time features are added or removed. So those mappings grow stale.

Check out feature detection vs browser detection; it explains some of the challenges of relying on user agent strings.

I think the rule is: detect features, don't rely on unreliable proxies for features.

https://mobile.htmlgoodies.com/html5/client/browser-and-feat...
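
A quick sketch of the contrast: infer nothing from the browser's name, and probe for the capability itself instead.

    // Browser detection: brittle, because the UA <-> feature mapping goes stale.
    const uaSaysModern = /Chrome\/\d{2,}/.test(navigator.userAgent);

    // Feature detection: ask for the capability directly.
    const hasIntersectionObserver = "IntersectionObserver" in window;
    const hasLocalStorage = (() => {
      try {
        localStorage.setItem("__probe", "1");
        localStorage.removeItem("__probe");
        return true;
      } catch {
        return false;   // e.g. blocked in some private-browsing modes
      }
    })();

    // Pick a code path from the capability checks, not the UA guess.
    if (hasIntersectionObserver) {
      // lazy-load images natively
    } else {
      // fall back to a scroll-listener polyfill
    }
    console.log({ uaSaysModern, hasIntersectionObserver, hasLocalStorage });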


I agree with that advice for most sites, especially SPAs, 100%.

But Google search has some weirdly specific requirements. It needs to know whether it can show a result that requires a polyfill mere milliseconds after it gets your request, for example, or whether it would be better to just send fewer bytes and a scaled-down version of the same result that might need different data to assemble. It's not perfect, but a UA string is one of the only practical ways to do this. Having QA and engineers keep track of this mapping for the most commonly used browsers may make sense for search, to shave off a bunch of time shipping JS and doing feature detection to figure out what it needs next, but it may not make sense for your website.

In general there are no blanket general guidelines that apply to every single site regardless of usage patterns or business needs.

Disclaimer: Former web search eng, current Googler.


> Many of these UIs don’t have the controversial changes that Google has recently been implementing, including adding favicons and hiding full URLs

I used the UA workaround up until near the end of last year, when they broke it and replaced it with an even more dumbed-down mobile UI. (If you know of a UA which can still "unlock" the old JS-less full-featured UI, please say so!)


I just realized that most of these UIs unfortunately now redirect to a simplified mobile UI. Thanks to those who pointed this out in the comments; after Google had left the UIs in place for 15+ years, I wasn't expecting them to be removed in the few months since I put together this extension. I'll look into whether there are any user agent strings that still work in the future.


I did a little testing, and found that Google Images still uses the old UI with these user agent strings.

I also found another, even simpler mobile UI available with this user agent string, though results seem to be sent through a mobile-formatting proxy operated by Google:

Mozilla/5.0 (PlayBook; U; RIM Tablet OS 2.1.0; en-US) AppleWebKit/536.2+ (KHTML, like Gecko) Version/7.2.1.0 Safari/536.2+


Your post got me excited; sadly it doesn't work :( I installed "User-Agent Switcher for Chrome" and played with strings from this extension. I even verified in the dev console that "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)" was being sent, with cookies/localStorage deleted. At best what I was able to get is the "Second Alternate Mobile UI". I was really hoping for "Last-gen, Black Bar", but it seems unattainable :(


try this: Mozilla/5.0 (PlayBook; U; RIM Tablet OS 2.1.0; en-US) AppleWebKit/536.2+ (KHTML, like Gecko) Version/7.2.1.0 Safari/536.2+

from: https://news.ycombinator.com/item?id=22114207


Very much a way to mitigate the FTC's requirements on making advertising distinct from organic results.

It is an unpopular opinion but I believe Google is dying. They have been for a long time. The cancer is that nothing other than search ads generates the revenue and margins they need and the margins on search ads are now down 90% from where they were in 2010.

Personally I'm long on Microsoft/Bing as a candidate for the surviving English-language web index. My prediction (which isn't shared by many, so don't be surprised if you disagree :-)) is that once Google's dying becomes mainstream and they start heading into the ground, Apple will buy their assets, keep Maps, Search, and maybe Waymo, and throw the rest away.


They're just turning the big dial that squeezes more money out of every user. They resisted for so long, but now they can't grow YoY without turning it. It's a one-way dial too, you can never turn it back.

Yahoo did this, most easily seen back on Yahoo Messenger, where I believe they fit an ad in 6 point type between two elements.

I've seen it at plenty of places that do revenue-based A/B testing. You do an A/B test, measuring $ vs user retention. Money goes up, and retention maybe is affected, but over a few weeks it looks like it's the same. So you go on to the next test. Eventually, you find you have to take drastic measures to stop bleeding money. I wish I could say how long you need to measure retention for, but it's probably too long for the test to make sense.


We've reached Peak Google. It's time for newcomers to reinvent video, reinvent search, reinvent email and start all over again.


You're not the only one. It seems like G itself thinks that, hence all the desperate attempts to diversify - which I see as an attempt to parlay a short-term cash and talent surplus into something, anything, that will have value in the long run.

It will be interesting to see what a desperate Goog might do with access to everyone's email, calendars, and docs.


To be honest, I'm not sure what everyone else is searching for on Google the search engine anymore. I personally use it as a Wikipedia search engine in 90% of cases, while 9% is for Stack Overflow queries; other than that, I get all the info I'm interested in from dedicated FB groups and different subreddits.

And I'm also a 40-year-old guy; from what I can see, people younger than me spend almost all their web-related time on Facebook properties: Instagram, WhatsApp groups, Facebook private groups (FB the main app is also dying), with TikTok coming on strong from behind. All of these are places where Google Search doesn't have any reach. So one could say that the 2011 mantra "all arrows behind the same big arrow" (or whatever it was) was quite correct: Google+ was Google the company's major chance of still remaining relevant in 10 or 15 years' time. They still have YT; too bad they don't know how to manage it.


Huh, I've just realised I'm much the same. I use Google primarily for (in decreasing proportion): Wikipedia search, Stack Overflow search, company name search, medical-related search, current event search (sports scores, election results), and (the smallest minority of the time) other random queries.

Everything else I get from HN, Reddit, FB, etc.

The instant Google results are, the vast majority of the time, taken from Wikipedia. Sports scores and election results are taken from other sites that Google has started to present quite nicely, but that I could easily get elsewhere. Company name searching is basically just using Maps indirectly.

Google as a raw search query product is dying. A competitor could probably service 90%+ of queries just by being great at serving instant result previews at the top for a set of the most popular data (mass-media, wikipedia, local maps).


I use DDG as my default search and try !g when I'm not finding something, and it's rare that I find something on Google that I couldn't find elsewhere.

Usually I find success when it's an extremely niche result (usually a rare error message) that has only a few hits on Google on some random forum that the other crawlers just didn't index.


You might like DuckDuckGo and “bangs”. https://duckduckgo.com/bang


For anyone reading this and thinking "what a killer feature, I need to switch to DDG now": all browsers do this natively now. In Chromium engines it's found under "Manage Search Engine". In Firefox it's under "One Click Search Engine".


My Firefox install has seven search engines by default in the "one-click search engine" options. By contrast, DuckDuckGo has 13000 bang shortcuts. Sure, you don't need all of them, but I use dozens: some of them almost daily (like !w for Wikipedia and !wt for Wiktionary), some sporadically (like !tw for Twitter or !a for Amazon) and some of them only rarely (e.g. I picked up a Pokémon game recently and consequently have used !bulba to quickly search Bulbapedia, a Pokémon fan wiki.) Of course I can add all those to Firefox, but DuckDuckGo already has them set up, and has many more that I can use without thinking about them. I think that still counts as a killer feature.


Firefox is a little lacking in that department. Chromium-based browsers, on the other hand, add a site's search engine to the list in your browser's options any time you use a search on that page.

For example, I just used HN's search bar, and there it showed up in the options, ready for me to customise the keyword to my liking (if I wanted to change it).

Before anybody claims it's an additional step, it's not. DDG requires you to know what the keyword is before you use it, which is the same as having to use the search on the site beforehand. So I guess I agree it's a killer feature if you exclusively use Firefox, for the time being.


In Chrome you can also just start typing the name of the site until it autocompletes, and then hit tab.

For example, if I want to search Wikipedia, I type "en.w" and hit tab.


The point of bangs is that I don't have to set up anything. And it's pretty exhaustive: I often use !gten, !gtfr or !gtes to specify the target language for Google Translate. I'm glad I don't have to set all this up by myself every time I change browsers.


You can use keyword searches as native features of Chrome and Firefox.


> "I personally use it as a Wikipedia search engine in 90% of the cases"

I use search keywords, such that when I put "wiki example" in the search bar, my browser pulls up https://en.wikipedia.org/wiki/Special:Search?search=example

I also do this with several other sites that I commonly use. In doing this, I've dramatically reduced the number of times I use a general purpose search engine in a day.


The web has been dying since Facebook came along. The internet is growing like gangbusters though.


I just looked at my browser history, and it looks like I performed roughly 110 searches during the past 24 hours, and that's probably because I'm having a bit of a slow day. The vast majority of searches were in fact work related (research papers, deep learning, coding, Python, PyTorch, etc.). I can't speak for others, but I feel ultra-accelerated because of tools like Google. Knowledge hunting would be approximately 3-5 orders of magnitude slower otherwise.


Makes me wonder how much Wikipedia would be worth as a privately held company.


Thank the heavens it's not. Imagine a company controlling the most accessed information aggregation website in the world and trying to make it return a profit.


Based on these dark patterns, the penny pinching at Google is real. Penny pinching -- both spending less on employees and nickel-and-diming customers -- is a sure sign a company is struggling to grow. When you get to be Google's size, how do you continue the growth trajectory of your past? It gets harder and harder to do so. Their offers to engineers are less competitive, YouTube serves tons of ads, and Chrome is going to ban uBlock. It's obvious where Google is at: strict monetization. They don't see any further growth, or they wouldn't be degrading their services.


> "the margins on search ads are now down 90% from where they were in 2010."

Do you have a source for this?


This is publicly available data, published quarterly in the financial statement... I don't recall ever hearing about that kind of a drop.


I presume he means the increase in traffic acquisition costs, like the $12 billion (!!) they pay Apple to remain the default search engine in Safari.

I always found this baffling business logic from Google. I think Google is a strong enough brand that a lot of people would switch to it even if it wasn't the default. Maybe not everyone would switch, but surely that's not worth $12 billion a year.


Buying that traffic also bids up the cost for any competitor who tries to do the same.


The quarterly financial statements, and a knowledge of how Google accounting works[1]. Revenue from ads is correlated to CPC (cost per click), ad inventory is correlated to ads on Google sites vs ads on "other" sites, and margin pressure is correlated with operational expenses and traffic acquisition costs. The fuzziest number to model is the ad inventory and relies on estimates of search traffic, ads serviced, and reported revenue and CPC.

[1] I worked there and paid attention at their 'life of a dollar' class they used to give Nooglers. :-)


Funny you say that; I was just thinking the same. Their recent user-hostile moves seem like desperation more than part of an evil plan.


Google’s biggest opportunity is Cloud. They’re playing catch up but with the culture and staff they have, that’s where I’d be focussing my efforts. I think the 2020s will be the biggest decade for cloud adoption.


I don't think it is. Their hard-earned reputation for canceling services is really hurting them now. They never accounted for the effect of all the users they've been continually pissing off moving up the ranks at companies and deciding never to use Google services. The leaked communication suggesting that they could actually cancel GCP doomed any chance of it becoming a big player next to AWS and Azure.


>The leaked communication that suggests that they could actually cancel GCP...

People need to stop coupling their applications to cloud services for this very reason! It really bothers me that people architect their systems tightly coupled to AWS - Amazon LOVES it, of course, because this is the kind of vendor lock-in that Oracle could only dream about. (Amazon's vendor lock in is strictly superior because it is at runtime. I don't think Oracle ever had that power.)

It's incredible that so many companies have embraced AWS so completely, never giving thought to the fact that they are giving Bezos total power over their technology investments. The sunk costs are only going to snowball, and AWS can and will raise prices almost arbitrarily, because opting out of AWS will mean a rewrite, which is some multiple of those sunk costs. That is, a company has to re-spend ALL the money they paid to build software ALL AT ONCE. This is an existential threat to all small businesses, and an easy "just pay extra" grumble for medium and big businesses. Again, great for Amazon, bad for everyone else.

Google can and should get into cloud by pushing cross-cloud technology that lets companies have their cake and eat it too: cloud-hosted auto-scaling applications, AND the ability to pick a new cloud vendor without a rewrite. I suspect K8s is a good push in that direction. So, yeah, it's counter-intuitive, but I think G could win big by pushing vendor-independent cloud tech, starting with K8s. The dream is that Google can cancel whatever it wants, and customers will just make an account at another cloud provider and keep rolling.


> I don't think it is. Their hard-earned reputation for canceling services is really hurting them now

I've only ever heard people complain about this on here, nowhere else.


They don't typically complain, but when asked, it's common knowledge among all the programmers at work and all the ones I know. Google can't be counted on. Furthermore, it's tech types who make the decision to use GCP or not, not average Joes who use their other services, so you'd want to look more at places like this for the prevailing opinions.


> it's tech types who make the decision to use GCP or not

Perhaps at startups. At big companies, the call is made by directors and VPs who are well wined and dined by sales people - and that's where the real money is.


There is the old "nobody ever got fired for choosing IBM", and I think Google has actually succeeded in forging a "shouldn't we avoid Google?" mentality.

I saw it in real time with Maps changing its pricing model, where an international company reworked their entire maps display. It was a lot of money, and it's not something you brush under the rug by explaining that the contracts are otherwise marvellous.

Then there was the whole Google Chat -> Hangouts -> Meet transition with every variation in between, and messaging applications are one of the most sensitive tools to change in an enterprise environment. In particular, the bigger the company, the worse these changes will be perceived.

When the time came to choose between AWS and GCP, the choice was clearly political more than it was about technical merits.


Stadia, from what I've heard, appears to be near-dead on arrival, and a big part of that is skepticism of streaming video games and skepticism around Google killing the service (and rendering any purchased games totally unplayable).


Nah, Google has never been good at B2B enterprise sales. Microsoft has a massive enterprise sales network already and will find it easy to push Azure. AWS has first mover advantage.

I'm short on Google Cloud.


Nope. Cloud is being commoditized as we speak, which means there will be a race to the bottom when it comes to margins. Bezos has already boasted that he is willing to turn the margin dial all the way down if it comes to that. Right now there is still a lot to eat for everyone, but soon the scarcity of new customers will take hold. Then you will find companies cutting prices to win customers, and the huge margins we see now will evaporate very quickly. In that setting, only Amazon has good survival skills.

In my view, the biggest opportunity is AI. If I were CEO of Google, I would make a massive investment not just in research but in engineering turn-key AI systems that can give businesses a massive boost.


Amazing they spent so much time chasing social with Google+ while AWS was allowed to grow with no competition.

They could have owned the cloud.


You know why they're dying? Because I have to add "reddit" or "stackoverflow" or "wiki" to my queries because Google often gives shit results, and the mix of ads and results is hilariously confusing. If Reddit and SO wake up and realize how well they can monetize their own search, Google is gonna be panicking.


I searched the comments for FTC and yours was the only one that popped up.

For the record, I don’t think this is FTC compliant. It also reeks of the kind of thing that regulators would adjust their laws for. Google is playing with a situation where regulators may write very explicit rules that say exactly how much labeling their ads need. A 50%-75% chop in outgoing paid clicks would be material to their earnings. They’ve been in a situation for a decade where a large chunk of their most valuable demographic had no idea they’ve been clicking on ads. That isn’t a metric they should be getting more greedy on.

Regulators could just set up a test where, if a large enough percent of users don’t know they’ve clicked on an ad, paid disclosure has to be more explicit.

The low hanging revenue fruit for Google at this point is very sparse. Financially incentivized executives are scrambling to do questionable things to hit bonuses that probably shouldn’t be attainable. Kind of like Boeing.


They are not dying, but the perception has long been shifting from Google as tech's darling to being critical of Google by default.


As an advertising company, where are they losing money?


That would be a huge mistake for Apple, and I cannot imagine it passing the scrutiny of antitrust regulators.


Not only this, but they are also no longer near the top of the market for software pay.


What would you say is top of the market then? Google seems to pay pretty highly.


It's been clear for a little while that Google no longer cares about giving the best experience with a lot of their tools and is just focused on maximizing revenue. More and more, Google is the modern equivalent of Microsoft in the early 00s, still good enough that most people use it, but each successive "version" piles on more frustrations than benefits. It's so ironic that Google has become that which they most despised when they started.

The dominance of Google and Facebook is turning the web into a toxic wasteland.


I watched this clip of Steve Jobs[0] recently, where he was speaking about promoting/empowering Product vs Marketing people. Some of his comments seem especially applicable:

"[...] the companies forget what it means to make great products. The product sensibility and the product genius that brought them to that monopolistic position gets rotted out, by people running these companies who have no conception of a good product vs a bad product." ... "They really have no feeling in their hearts, usually, about wanting to really to help the customers."

[0] https://www.youtube.com/watch?v=-AxZofbMGpM


This rings true in my experience. In fact, I'd say that some companies that survive this trend tend to repeat it in a cyclical manner: good products and customer orientation, then profit-motivated, short-sighted decision making. Rinse and repeat.


Ironically, Jobs installed Tim Cook as Apple CEO and had publicly said that he is "not a product person".


Oh, interesting.. did he say how he perceives Tim, or what role he sees him filling?


Your fate is sealed the moment you go public. No public company can withstand the shareholder pressure forever.


It's not fate. Google has more cash in the bank than any other company in the world (to the tune of $130 billion). They could choose to use this cash to invest in creating new businesses and new products, and the market would reward them handsomely. This is how Amazon's stock had a meteoric rise despite poor margins and a series of failures. Amazon has spent a massive amount of cash trying to build one business after another, everything from the Fire phone to Alexa to kids' tablets to Prime Video to grocery delivery to the pharmacy and so on. Many of these have failed, and Bezos has pushed on with eyes on the long horizon. People don't realize it, but Amazon is not its web site or AWS but a product factory, a sort of meta product. This keeps the market hopeful that they will have another big hit, and the stock keeps going up on speculation.

The majority of CFOs/CEOs of big tech haven't understood this, and they keep twisting the arm of existing cash cows to squeeze out more milk while a mountain of cash waits to be utilized. It's also not that there is a dearth of new opportunity. Every sector from health to education to transportation to energy to finance is waiting for the next revolution. It's only gutsy leaders like Elon Musk who come out of nowhere and start things like the Boring Company, Tesla or SolarCity. The market rewards them for taking risks and being visionary (Tesla's market cap is now more than GM and Ford combined despite selling 98% fewer vehicles!). So the market is working fine, and in fact doing exactly what it should do.

So don't blame fate. Don't blame the market. Don't blame Wall Street. Don't confuse a lack of visionary CEOs, or of willingness to take risks, with fate or a dysfunctional market.


What would happen if they threw their hands up, declared that OK, this is as much revenue as we can squeeze out, and instead just coasted at current levels?


The board would fire all of the execs and hire a bunch of yes men who promise continued growth.


Do the founders still retain majority votes?


Prefer DuckDuckGo for searching, if possible. If you don't like DuckDuckGo's results and can't tolerate them, then prefer Startpage to Google, which will give you the same algorithm minus tracking/customization. Even post-acquisition, Startpage is still a more privacy-conscious engine than Google, and their ads are better labeled.

And while I'm sure I'm preaching to the choir here, quick reminder that unless you're running Lynx or some crap, literally everyone on this blog should have an adblocker installed (preferably uBlock Origin).

I appreciate there are multiple perspectives people have on whether adblocking should be a scorched-earth policy, or whether it's better to just target the worst actors. But disguising ads as native content is abusive enough behavior that you should be blocking those ads no matter where you fall on that spectrum -- and the UI changes here are very clearly, very obviously meant to make ads blend in with normal page results. The 'ad' indicator is meant to look like just another favicon.

I'm seeing people here suggest Greasemonkey scripts, and maybe there's something I'm missing, but I just really don't understand that. Don't restyle the ads, block them! Block advertisers that are abusive.


I like DDG, but have been using Ecosia for the past 6 months or so. I like that I've contributed to planting 25 trees (by the averages) just in my normal course of searching.


Ecosia serves Bing results (iirc). The search result quality is awful.


DDG also uses Bing. I've never used Ecosia before, but it's interesting to me that the experience would be so dramatically different when they're both using the same source. Must be some interesting factors involved in DDG's search to assist in pulling more relevant results out of Bing.


DDG isn't just a skin over bing. I heard it consumes multiple sources and applies its own ordering rules.


I do not know about Ecosia, but DDG does not use only Bing. According to Wikipedia, it uses over 400 different sources. So in theory, given they have a smart enough algorithm, DDG should be able to produce better results than Bing.


Pretty sure the search results are from Bing, while those other 399 sources are for other things, like quick answers, dictionaries, translations, currency conversion, flight search, etc.


It's Bing. I've noticed the slightly lower quality.

It's something of a conscious choice: unfettered tracking and advertising practices but good search results, or less tracking and OK results with profits used for a cause.


Startpage is now owned by an advertising company.[Y]

[Y] https://www.startpage.com/blog/company-updates/startpage-and...


Correct, but it's still preferable over Google.

Startpage's parent company is (currently) less integrated with Startpage than Google's Advertising department is with Search. Their search results are also better formatted, and ads are more clearly marked. Startpage also doesn't do link wrapping if you turn off Javascript, which is such a huge privacy win over Google Search that I would still tell you to prefer Startpage even if I knew that they were slurping up other data.

On the marketing side, Startpage's parent company also doesn't own the vast majority of the online ecosystem, and distributing power among multiple advertisers is better than consolidating all of the power behind a single company who also owns a dominant web browser. System1, for all of its faults, isn't actively proposing web standards with the goal of deprecating URLs.

If you can't use DuckDuckGo, you should still prefer Startpage to Google. Google Search is also owned by an advertising company, one that openly mines literally every single data point of every single search they do. You can't get worse than that. I see people who are saying they want to break free of Google, but that they can't give up Google's results. I'm not going to tell them to stick with Google if an almost universally better alternative exists.


I really wish discussions would better distinguish between tracking, advertising, and camouflaged advertising.

I actually like advertising as one avenue of finding stuff.

I just hate when tracking is bundled with advertising. And I hate when advertising is camouflaged.

In practical terms:

When I’m searching for guitars, I like seeing both: third party results and ads that are not camouflaged.

I also like seeing guitar ads on a guitar related website.

But I hate seeing guitar ads on an unrelated website, just because I looked at a guitar website a few days ago. And I hate when a guitar ad is camouflaged as an independent review.

An advertising company that doesn’t track or camouflage gets my support. And I just hope that StartPage is and remains that kind of company.


> I just hate when tracking is bundled with advertising. And I hate when advertising is camouflaged.

AFAIK pretty much all customers that pay for internet advertising WANT tracking (a cookie or URL session ID that tracks you from the advertising platform to the advertised site destination) as a way to verify that their money is actually producing results; otherwise they have little way of knowing that.
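Concretely, the URL part of that tracking usually looks something like this (a made-up example; the utm_* names are the common analytics convention, and ad platforms append a click ID in the same spirit -- the domain and values here are invented):

  https://guitar-shop.example/sale?utm_source=search&utm_medium=cpc&utm_campaign=spring&click_id=abc123

The advertiser's analytics reads those parameters on the landing page, which is how a sale gets attributed back to the ad spend.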


Greasemonkey could also be used to remove the ads, not just restyle them.


I guess, but what would be the point? Are there really people who are fine blocking ads on Google, but are not OK just letting Gorhill/EasyList handle the implementation?

Bear in mind, there are a lot of other sites beyond just Google where you should probably be blocking ads. Do you really want to deal with the performance/maintenance costs of Greasemonkey on every one of them? Just install an adblocker already and be done with it.


I guess I would be one of those people - my reasoning being that I don't mind ads that much, but in this case the Google changes are pissing me off. Many media organizations are not doing too well; they run ads, which is maybe not smart, but that's how they pay for it. Google is doing very well; what they have already pays their bills and makes them rich, but they want to lessen the experience of their users to get richer.

So in this one case I'm the kind of guy who would like to block ads on Google and nowhere else currently. (Also, if I install an ad-blocker I am at the mercy of the ad-blocker not reaching a deal with one of the richest companies in the world to let their content through because it's ok.)


I like DuckDuckGo, but their ads are even more intrusive than the Google ones. Since DuckDuckGo puts the (Ad) symbol at the end of the title rather than the beginning, they are not lined up and so are much harder to quickly scan and ignore.


Others have mentioned that this is almost certainly the result of a long course of A/B testing.

The problem with this kind of aggressive A/B testing is that it's a game of "how far can we push the user?" So instead of having enthusiastic fans, they have people who begrudgingly use them. Sure, Google picks up an extra nickel here or there, and I'm sure some PMs got a raise. But I don't know any strong Google boosters any more, and there are hordes of people ready to switch over once something tolerable comes along.

(And from the comments, it seems like many of you have already found tolerable replacement search engines. I think I'm going to join you.)


Haha, I remember when they announced Chrome like 15 years ago; hyper-excited 13-year-old me literally emailed ceo@google.com with how excited I was. I uninstalled Firefox and installed Chrome.

And now I exclusively use Firefox again.


Ha you are totally right, Chrome was such a joy to use back in the day...

Remember when google was cool?

https://youtu.be/SC-2VGBHFQI


These were brilliant ads. I remember installing Chrome not long prior to this period and it had its issues, but it was so fast that I couldn't not use it.


My employer just banned Firefox. Unless you theoretically have a "business reason" that they accept.


How do they enforce this? For that matter, why do they care?


Not unheard of on enterprise distributed machines. Before I was an engineer all my jobs were like this - two dudes (the only ones not in suits) tucked away in a dark room, changing our passwords for us when we forgot, setting up our email accounts, tweaking an internal firewall (so we couldn't watch nba games at work lol), shit like that. Machines were locked down, we couldn't install anything at all, they had to install it for us.


That is crazy.


"The problem with this kind of aggressive A/B testing is that it's a game of "how far can we push the user?""

Google also does this with Chrome. Users are subjected to "field trials". No explicit consent is requested. Most users are probably unaware. While it is possible to see which trials one is a part of, Google is not transparent about what exactly these trials entail.

https://www.ghacks.net/2013/04/05/field-trials-in-chrome-how...

https://textslashplain.com/2017/10/18/chrome-field-trials/

https://chromium.googlesource.com/chromium/src/+/master/test...

https://blog.chromium.org/2012/05/changes-to-field-trials-in...

http://raeknowler.com/wtf-chromium


Bing results still look like Google's used to, and although its index is smaller, that is slightly less important when Google seems to try its hardest to avoid showing you what you actually searched for. I've been using it more as a result, and also occasionally DDG and Yahoo; whereas for a long time my searches were exclusively on Google, now I find myself using other search engines because of the lack of quality of results and these other stupid changes that happened to Google within the past year or two.


With nearly every browser supporting multiple search engines, and keywords, there's just no reason to exclusively use a single search engine anymore. You can have your browser's address bar search duckduckgo just by using a keyword like "DDG:" (or something even shorter D:) and then typing in your query. Same goes for other engines, or even super niche searches on any website. Makes workflow much easier too, instead of going from Bing/Google/DDG -> StackOverflow, set up a keyword for SO directly. Of course, if the website's search method is mediocre then you're SOL but that's more and more rare these days.

If Bing's results aren't good, head back to the address bar and use another engine, if that's not good try another. So easy nowadays that it's kind of insane to me that there's still so many people who aggressively will ONLY use Google Search.


I'm convinced that most A/B testing is harmful in the long run. It moves metrics that make investors look good, but often at the cost of long-term brand likability and, for lack of a better word, the company's "soul".


The technically accurate term for A/B testing is "user gaslighting".


I wish there was a way to directly help Bing/DuckDuckGo/Yandex improve their search results. I've tried both, and it's just not the same.

On Google I can bang in cryptic queries like

  centos 7 tuned no daemon

and get the 3rd link about how to run tuned in no-daemon mode. Bing/DuckDuckGo have the article at around 7th or 8th place, but prefaced by a lot of "while technically not wrong, not what I'm looking for" links. It's even worse for more niche errors or code snippets.

We cannot, as a healthy internet, let Google control so much of the web.


I felt the same about Google alternatives up until about 3 months ago. Google's results have been declining in quality for a decade, with much more rapid decline over the past year or three.

Google's results are uglier and blatantly revenue based. They have now lapsed behind DuckDuckGo in usefulness for me. I fall back to Google a few times per week, with inconsistent results when I need a "second opinion."

I'd suggest giving DDG another try.

I plan to remove Google from my life this year, at least as a central dependency. Search is already behind me. Mail, calendars, docs, and drive will be taken care of throughout the year. And my Android phone will be replaced with an iPhone.


Do you (or anyone else) have any recommendations for an email service? I've cut pretty much everything Google out of my life, minus Gmail, mostly due to not wanting to go through all the trouble of transferring everything over to a service that I end up not liking. I've heard ProtonMail is good, but other than that, I'm not sure.


I've used Fastmail for many years now, and I have nothing but good things to say about the service. In particular, it's insane how well notifications work in Fastmail, especially compared to Gmail (which I use at work). (Honestly, you'd think Fastmail was the giant multi-billion-dollar super-advanced tech company, if you look at the quality of their email experience vs. Gmail.)

However, some folks are a little spooked by the privacy implications of it being run out of Australia, so be sure to research that if you're interested in Fastmail.


I'm leaning toward Tutanota, but I can't claim to have experience with them yet.

Proton has appeared somewhat bumpy to me -- I can't say for sure why, but they give me some spidey tingles.

Migrating / transferring is indeed a problem. I would suggest using Google Takeout, their data export tool, and permanently archiving your data with a third party service and / or physical backups. See https://takeout.google.com/. You probably won't be able to import into your new provider.


Thanks for this, I didn't realize you can actually export all your data from Google.


ProtonMail isn’t supported by iOS’s default mail app, which for me personally meant I hardly used it. It’s just too much hassle for everyday email.

I ended up trying fastmail and found the transition much easier.


If you own a domain through Gandi (possibly others) you get email included for free. This is what I use. I don’t know why I don’t see this recommended more often?


I would do this through bluehost but it just straight up doesn't work. No matter the tutorials followed or time spent with support, I can't receive emails at my domain.


Migadu is great because they bill based on emails sent rather than billing by how many accounts or domains you have.


Google always says there are X many results, but when you click to page 5 or 6 it is usually empty... nowhere near as many results as the count displayed on the first page.


Done. Let's see how it works out.


Er, who do you think Apple pays for search on iPhones?


Pretty sure google pays them for search


Google pays them to be the default search engine. Users do have the option to switch.


My bad, thanks for the catch.


I used to work at Bing. If you really want Bing to improve, the best thing you can do is just use it: clicks on search results, plus backs and dwell times, are vital training data.

Ideally you could use Bing as your default engine, then fall back to Google whenever there's a search that doesn't yield good results. If you have the time, you can also use the Feedback link on the bottom-right of the page to report bad search results; people do actually triage and read those.


>Ideally you could use Bing as your default engine, then fall back to Google whenever there's a search that doesn't yield good results.

This is what I do with DDG. Unless I have a pretty specific search, or want Google's really nice live sports scores widgets, DDG is usually pretty good.


DDG doesn't track what you click on, so unfortunately you are not contributing much to improving its and Bing's search quality by using it instead of Bing.


Thanks for posting, it's nice to hear from someone on the inside!

Downvoters: why on earth are you downvoting Analemma's post? It's constructive, on point and overall the kind of post which makes HN comments valuable.


What I'm about to say will probably get downvoted as well, but I've noticed HN getting more polarized in terms of votes in the last maybe 2 years.

Because of the way that downvoting works on HN, the few times I use it, my thought is "nobody should see this comment". Because enough downvotes delegitimize and hide a comment, it's effectively telling other people that it's not worth being read.

I don't know if that's actually going through people's minds when they downvote, but that's what I think which is exactly why I rarely use it. But too many people now are simply using it on posts they don't like, even if a post deserves to be there. For some reason, it's an unpopular opinion to believe that an idea or opinion you think is wrong deserves to be seen and rebutted rather than silenced.


Downvoting for dislike is supported by HN/pg. Their choice of fading out comments, when combined with downvotes-for-opposition, is really terrible for a high quality discussion site. Just goes to show the community around a site makes all the difference.


I still don’t understand the hive mind downvote patterns.


> I used to work at Bing. If you really want Bing to improve, the best thing you can do is just use it: clicks on search results, plus backs and dwell times, are vital training data.

A million flies are attracted to shit. I don't believe this sort of training data will ever become useful if it's in the same pool with the rest of the world. See also: voting with your wallet against the tyranny of the majority of uninformed consumers who buy whatever is most marketed. Those pennies don't matter.

In fact, I believe this sort of training and optimization for the mainstream plays a role in allowing bad results to proliferate.

This is before we even consider the fact that clicking on many results can indicate that they're bad (I click another result because the previous wasn't good), or because they're good (I'm browsing choices). Dwelling long can be bad (crappy & slow site, it takes me long to find the information I want or turn away) or it can be good (I found good stuff and I'm spending a while on it). Whatever conclusion your training system draws might be completely wrong. And probably prone to being gamed.


Clicks are noisy and have their problems (especially clickbait), but as far as ranking goes they still blow everything else you can use out of the water. Analemma_ is right on the money. We need more Lemon Pledge^W^W clicks!

Speaking as someone who worked for another big web search engine in the past (not G, not B)

And counting good/bad clicks for a given (query, url) pair is just the tip of the iceberg. There's a lot of other interesting stuff you can do with them _if_ you have the data. Deep learning/NLP with clicks as the training signal is probably the most exciting area to me, to name one. Unfortunately almost all of the data currently gets nabbed by Google; other search engines are just getting by on the scraps. And it's very hard to bootstrap a competitor from scratch - for example Cliqz had to resort to some shady deals with Mozilla just to get any data to start with.


The above heuristic arguments seem inherently weaker than the direct experience of someone who used to be on the team that improves the results. They are well-funded and should be able to back out the effects you mention.


How are they ever going to know whether they improved results for me or not? You might as well train an AI to play a game without ever checking its score. Oh, it's spending 20% more time in each room now and firing fewer bullets than before. Surely it is a better AI now.

Sorry, appeal to authority is no argument. Direct experience is valuable if there's an argument or some real scenario we can dissect, otherwise it's nothing more than a baseless claim. Without concrete examples, it's not even an anecdote. And there are plenty of anecdotes about search results for in-depth content becoming harder and harder to find.

The person I was responding to posted that they're using this training data to order the top 10 results or so. That's already an indication that it's not very helpful for me. I don't get frustrated if the top few results are in suboptimal order, I get frustrated when I get pages and pages of garbage and irrelevant results and can't seem to get anything useful out of it.


A lot of this does, in fact, get solved through machine learning. If one search query is conducted a hundred times by different people with otherwise similar profiles, the machine learning can constantly modify the results until the hundredth person sees the best results first. For example, if the first 25 people all click on the 2nd, 4th, 5th, and 8th result and then stop, it can modify the order of the results for the next 25 people, and randomize the order, so that some see 8-4-2-5 first, and others see 2-5-8-4, etc. Then it re-assesses the results and gets smarter.

The gaming part is where a lot of the importance lies, since it forces the machine to learn which users are acting nefariously.
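Purely as a toy illustration of that explore/exploit loop (my own sketch, not anything a real engine runs; the epsilon, counters and function names are all made up):

  # Toy epsilon-greedy reordering: most users see the click-learned order,
  # a small fraction see a shuffled order so the system keeps learning.
  import random
  from collections import defaultdict

  EPSILON = 0.1                  # fraction of impressions used for exploration
  clicks = defaultdict(int)      # observed clicks per result URL
  impressions = defaultdict(int)

  def order_results(results):
      if random.random() < EPSILON:
          shuffled = results[:]
          random.shuffle(shuffled)   # explore: try a different order
          return shuffled
      # exploit: rank by click-through rate observed so far
      return sorted(results, key=lambda r: clicks[r] / max(impressions[r], 1), reverse=True)

  def record(shown, clicked):
      for r in shown:
          impressions[r] += 1
      clicks[clicked] += 1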


Shuffling the top 10 results is not fixing the big problem in search for me. I don't care too much whether the company I search for is in first or fourth place. But I'm really irritated when I get pages upon pages of useless results and nothing remotely relevant.


See the replies elsewhere. The metrics go beyond simple, naive "click counting."

Specifically they also measure metrics like "regret" - if you went back to the search results and chose something else this indicates it wasn't a useful search result for the query and this counts against it.

In theory this is quite a nice defense against clickbaity, content free results.


> Specifically they also measure metrics like "regret"

I explicitly addressed this:

> clicking on many results can indicate that they're bad (I click another result because the previous wasn't good), or because they're good (I'm browsing choices).

You don't know the reason why I clicked back. Maybe I got what I needed at a glance, and I'm browsing to the next thing to see if there's more of what I need.

You don't know that the Average Joe (and a million other Average Joes) wasted 15 minutes on clickbaity content free crap despite not getting anything useful out of it. You know, those same SEO spam sites are trying to find ways to make the user stay as long as possible; they are in the game. And I'm sure there are plenty of gullible users who will stay. Who knows, maybe SEO spam is working and that's precisely why we see SEO spam sites in results?

Again, I think training like this is pretending the problem is much simpler than it is. And I think that there is a real danger that training like this just optimizes for 1) average users 2) sites that managed to game the system with their SEO. Neither is optimizing for quality results.

That theory has so many holes and pitfalls.


I just think you're assuming these search engines are using way fewer factors than they actually use. It's probably exceptionally easy to tie your search behavior to someone else who's much more like you than "the average", unless you're using a search engine like DDG, Qwant, etc.

Maybe the best bet is for a search engine to try to include a simple, non-intrusive method of getting user input to determine the usefulness of a search result. Like, if it determines you came back as a "regret", maybe a ThumbsUp/ThumbsDown icon next to the last link you clicked to get you to say whether that link was useful in any way, or maybe a "block" option as well to say that you never want sites like that again.


> I just think you're assuming these search engines are using way fewer factors than they actually use. It's probably exceptionally easy to tie your search behavior to someone else who's much more like you than "the average", unless you're using a search engine like DDG, Qwant, etc.

I should, then, expect dramatically different results when I use someone else's PC or a public computer. I have not witnessed dramatic differences.


It's just you pretending search engines are simpler than they are. No one's just ranking the most clicked sites first - that's too naive and encourages clickbait. Clicks are an ingredient in the overall ranking system, which is generally also trained on relevancy labels by human annotators/raters/assessors. These human labels are far more reliable and can penalize all the bad clickbaity stuff which clicks can't, but are unfortunately also very expensive to obtain and thus low volume.

We may not know exactly why you clicked back, or why Joe wasted 15 minutes on a site, but from all clicks in aggregate and from the human-labelled data, we can tell how that correlates with the clicked site's relevancy and quality, and utilize your clicks accordingly to improve the ranking.

Finding such correlations is still by and large what all search engines do, rather than trying to truly understand the query. It's only now starting to change with recent advances in deep learning and NLP.


> clicks on search results, plus backs and dwell times, are vital training data

Interesting. Can you explain more on what is tracked after a user clicks a result and how this data is used?


(Disclaimer: as I said, I don't work at Bing anymore. Things may have changed in the intervening period and I could be mistaken or plain wrong about things)

Obviously ranking search results is immensely complicated with lots of accumulated insider wisdom, but the short answer is that "static" page features (PageRank, how well the query matches text on the page, etc.) are good for pruning down zillions of pages to generate a candidate list of 10 or so results to show the user, but not great at ordering those results, and serving the results in the best order is really important for user satisfaction. For generating the final top-10 ordering, "dynamic" features like how many people clicked that link, how long did they spend on the page, etc. are the most useful.

Regarding what is tracked, obviously Bing can only track what you're doing on Bing itself, so clicks on blue links and backs are tracked, but they can't see what you do on the destination page. This data is used to train models/NNs/etc. but the raw click data isn't kept for very long: it's too large and it has to be removed for GDPR compliance.


> For generating the final top-10 ordering, "dynamic" features like how many people clicked that link, how long did they spend on the page, etc. are the most useful.

Do Google do this as well?


I've never worked there and can't say for sure, but I have to assume that the broad strokes of their search engine are the same as Bing's. Bear in mind my response above was a very very high-level description of a really complex system with tens of thousands of person-years of accumulated knowledge and experience inside it.


Yandex does that. At the bottom of search results page there are links to other search engines. So if you aren't satisfied with results, you're one click away from Google. I believe they track these clicks and know which results are bad.


> We cannot, as a healthy internet, let Google control so much of the web.

It's getting worse too.

Back in the day, you could Google something for a manufacturer and include the name of the manufacturer, like "Pioneer 10" Subwoofer", and automatically the first result would be a link to Pioneer's subwoofer page or their main ecommerce site.

You type that into Google today? You will get 15 results for AMAZON pages with Pioneer speakers. No, I want to buy it from Pioneer, not Amazon. Oh yeah? The actual link to the actual company, who actually makes those speakers? They're on Page 2.

When you have the actual manufacturer being buried in the results, we have a major problem.


IMO Google is doing the best they can to serve you relevant results; it's just that everyone and their uncle is spamming the internet to get up in the ranking game.

Think of modern Google as a question-answering machine, not a search engine. And do "Show me Pioneer 10 subwoofer manufacturer specs" instead of just jamming keywords and hoping for the best.


> IMO Google is doing the best they can to serve you relevant results

Maybe, but if so then they're singularly bad at it. Google consistently gives me worse results than DDG and Bing, anyway.

I really think it's because Google is trying to figure out what I "really mean" by my search queries, and it's really terrible at guessing.


What you really mean in a google search is you want to buy something. You only think you don't want to buy. It doesn't matter that you've lost the leaflet for your Pioneer 10 Subwoofer and want to remember which wire goes where in your new house. Or just want to remember some detail of something you had years ago. Google is there to assure you that you really want to buy a shiny new set.

Not buying stuff you already own again serves neither the economy nor Google's advertising profit. Go on, buy another set. Infinite growth depends on it.

That really is their attitude to every product and service.


I'm genuinely asking: can you share an example where the Google results are worse in your opinion than DDG or Bing? Ideally with a screenshot in case there is some personalization going on. I just want to see a very clear example of it so I can try it out for myself.


Because you are being swamped by the order of magnitude more people who want to find a price or buy it.


I work at Google (not Search though sorry), opinions are my own.

> When you have the actual manufacturer being buried in the results, we have a major problem.

How do you know though? What if the majority of users actually do what you describe in hopes of buying something, and thus the shopping results are more relevant for them?

I imagine either there is a bug or this is the case, because I'm sure the links people actually click feed back into the algorithms and the results are modified accordingly.

This being said, it seems like the fact that your results aren't personalized enough to your liking is a shortcoming, assuming you were signed in.


It's a self-fulfilling prophecy. Third-party shopping results are shown on page one and the manufacturer's store is on page two, and people rarely go beyond page one so naturally they will click the Amazon results. This in turn feeds the algorithm and reinforces its original assumption.


> Bing/DuckDuckGo have the article at around 7th or 8th place,

You should take a closer look at DDG, it has the answer from serverfault.com in an instant answers box for your query and it highlights the required setting in a perfectly chosen excerpt from the correct answer:

  As of CentOS 7.2 tuned now has a no-daemon mode which  
  can be turned on by setting daemon = 0 in 
  /etc/tuned/tuned-main.conf. This is mentioned in the 
  RedHat Performance Tuning Guide.


Weird, I do not get that when running the query on DDG.

Do you have the URL?



> I wish there was a way to directly help Bing/DuckDuckGo/Yandex improve their search results.

I am amazed that no search engine gives me an easy way to blacklist domains from my results. That would make the usefulness of any of them increase by orders of magnitude. (And if they are careful, they might even be able to use the blacklist data to adjust their general results, not only my personal results.)

And I can't help to add qwant.com to the list of alternative search engines. No affiliation whatsoever, but I am pretty happy to use that as my daily search engine here in Europe.


Google used to have that, but of course they stopped it. There are browser addons that have the same effect though.


> Google used to have that, but of course they stopped it. There are browser addons that have the same effect though.

Yep. Ironically enough, that feature was killed soon after I found out about it. And anyway, it was far from easy to use. It should be right behind the green arrow that gives the options to see the cached version etc. And obviously with a cross-platform effect on all my devices as long as I am logged in.



I would pay a subscription fee to my search engine to implement this feature. Currently I use DuckDuckGo with an add-on to filter blacklisted sites, but would love for this to be integrated.


In my opinion, the results on DDG et al. only seem worse because users have been trained by a Google that knows too much about them. If you search "Django" on Google and get relevant results, it's because Google knows you. On DDG you need to search "Django framework".


To me it is not so much result quality as integration. Currently, with Google no matter what I am searching for, I just type it into my location bar and hit enter. Whether I am looking for a new bar to check out on a Friday night or what does a specific compile error mean in Haskell. So in the first case, I will get a map with bars around me and in the second case I will get a link to stackoverflow. With DDG these become functionally separate. To do the Haskell error search I do the same thing, but to do a local bar search I have to open a separate tab, go to Google and do a search there. Same goes for looking for things like theater plays (Google will give me reviews, showtimes and a link to buy tickets all right there at the top of my search page), address or place name searches (map, directions, open hours and website link all come up right away) etc. The only thing I can do with DDG is the old fashioned "find me links relevant to these keywords" searches.


Feels like a perfect opportunity for you to try out search engine keywords. On Chrome, it's under Search Engine > Manage Search Engines. This allows you to type (keyword) -tab- (query). For example, my work computer (on which I can only use Chrome or Edge) is force-defaulted to Google, but I have Bing set to keyword "b" and DDG keyworded to "d", so I just type:

b(tab) Whatever I want to search for

and it goes to Bing with that query. It's a great example of what you're looking for. On my personal devices, I try to use Bing for as much as I can to see whether its personalized results will ever marry up to Google (it's gotten extremely close, lately), but I often prefer searching Google Maps, so I have gMaps added as a search engine with keyword "m", so I can search for anything with "m (tab) place" and immediately see gMaps results.


It's very easy to add new engines: just take the query URL from any site, pop in %s in place of the query, and choose a keyword (same on FF).

I have about 50 keywords.
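For example, a couple of entries might look like this (the keywords are whatever you pick; the Wikipedia URL is the same pattern mentioned upthread):

  w   https://en.wikipedia.org/wiki/Special:Search?search=%s
  d   https://duckduckgo.com/?q=%s

Typing "w linux kernel" in the address bar then goes straight to Wikipedia's search for "linux kernel".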


> Google I can bang in cryptic queries like > centos 7 tuned no daemon

Not only can you bang in whatever query you want to DDG, you can bang it in via Google, using bangs (https://duckduckgo.com/bang). I think that "!g <query>" is the second stop for many DDG'ers, when DDG itself disappoints. (My understanding is that it still offers some anonymisation over searching directly through Google, but I'm not sure.)


I have noticed that when trying to find the most recent solution Google shows me an answer that is 10 years old.

E.g., Adobe updates come faster than I can keep up with, so I run into issues, and finding the most recent answer is what I am looking for, not a similar issue from Adobe pre-CC.


I find that a lot too. Trying to search for anything Rails related always shows Rails 3/4 results before Rails 5/6. If Google really knew that much about me, they should notice I always end my searching by clicking on a Rails 5/6 result (because those are the versions I use) and every time I land on a Rails 3 or 4 result I almost always click back and try again. Same with Python when I get 2.x results, with all their data they should know I've used 3.x for years.

If companies want to track me and build profiles on me, I accept that I can't stop that without refusing to use their products and every site that's integrated with it (Google Analytics, Adwords, etc). But if they're going to do it, at least use that information to help me. What's the point otherwise?


or at least filter out results that are irrelevant. They are lacking the ability to remove dated results when you are looking for the most recent.

I know you can manually filter results, but why show 10-year-old answers for software that gets updated so frequently! I am sure there are other areas where this happens too; dated results should be pushed back a few pages.


You click back.

That means you see a second helping of ads.

You still use Google.

From their perspective this is probably working perfectly.


Do they factor in the number of clicks on each result? If so then it would presumably improve over time.

I know Google also factors in how long people spend on the resulting webpage.


Yes, they do factor in number of clicks. They also consider "regret" -- returning to the search list and picking a different result.


I can pretty much only use Google in verbatim mode now


My latest annoyance: never mind them showing pages that don't have "quoted" keywords, they now also rewrite the search query in the textbox and remove the quotes. Just when you thought it couldn't get more annoying.


Oh yeah, that kills me. I often have one word that is the important one, and instead they show me the low-value responses that do not contain that one differentiating word, which is the reason I'm doing a search in the first place.


Does verbatim mode actually work for you? It doesn't seem to for me.


yeah there is no verbatim mode any longer


Do you have a region set? I got an answer on the right from stackoverflow (don't know if it's the correct answer.)


I've been using duckduckgo and bing for a while now. Google is just a fallback.


I switch to DDG precisely because I stopped being able to quickly distinguish ads from results in Google. Even the "results" felt like ads. Now when I have to fallback onto Google, the results just feel so spammy compared to DDG.

edit: For example, I just needed to look up the current version of Scala. DDG correctly has scala-lang.org as the top result. Google has a snippet from sourabhbajaj.com (incorrectly) stating 2.10 is the current version taking up the top 1/3 of the page.


This thread is so enlightening. I too switched to DDG a while ago and occasionally I have to use Google for a search and your description is exactly how I felt: the Google results felt “spammy”.

I had assumed I had simply adjusted to DDG and just forgot what Google had always looked like, but this post confirms that the Google results have in fact changed.


yeah duck duck go is actually great for coding!


Heck yeah! DDG results have gotten noticeably better in the past ~2 years.

Side note - most browsers offer 'search shortcut' customization options, so you can start an address-bar search with !g or !b or whatever to seamlessly swap to your fallback(s).


> Side note - most browsers offer 'search shortcut' customization options, so you can start an address-bar search with !g or !b or whatever to seamlessly swap to your fallback(s).

While you indeed can configure browsers to do this, I think you may be erroneously attributing a DDG feature to -- or duplicating a DDG feature in -- your browser, because those are built-in DDG bang commands.


I know for a fact that Firefox can also implement bangs like DuckDuckGo's, although you have to set them up manually.


They used to have some default keywords, "g", "w", etc. (you enter "g key words" to search Google for the keywords). If they're not default anymore, they're very easy to set up.

I like the way Chrome shows it has recognised the keyword after you press space though, neat feedback, good design IMO.


Firefox has a few builtin keyword searches like "@g" (Google) and "@w" (Wikipedia):

https://searchfox.org/mozilla-central/rev/cfd1cc461f1efe0d66...


This was a feature in Opera from something like 2002. That browser was ahead of its time.


Konqueror was even further ahead. After you searched for something via a search engine and then opened a page, it offered you buttons which could be used to search for those same terms on the result page. Very useful, but I never saw it elsewhere.


Maybe once a month I use Google as a fallback, and then am utterly disappointed. DDG is just plain better at my programming and random factoid searches than Google.

Which I assume is because Google is allowing their search results to be of poorer quality in the pursuit of higher ad revenue.


I've used DDG for many years now, and I'm admittedly one of those weirdos for whom political statements about freedom and privacy matter, so I'm a "loyal customer" in a sense.

However, 10 years ago I honestly felt DDG was not really worse than Google. And here's the problem: DDG has hardly changed over the last I-don't-know-how-many years, while Google surely has. And while it seems primary Google users are constantly upset about the nature of the changes, I, on the contrary, quite often appreciate how it behaves, especially on mobile. In fact, when I bought a new phone this autumn, for the first time I didn't change away from Chrome+Google Search as my defaults.

I'm still mostly content with DDG on PC, but Google is actually better at answering the questions I mostly want to ask on the go: to look up a badly misheard brand or person's name, find some local shop and show me the time it closes, or — the thing DDG is most useless for — brief me on the real-time news everybody has been talking about for the last couple of hours.


Yes, been using it for several years now. I don't even think of it as DuckDuckGo any more, it's just search.


The only thing I can’t fully rely on DDG for is on mobile if I search a business on iPhone - I need the address and phone number instantly.

DDG still isn’t as good as google at this.


You can always prepend „!g“ before your query in DDG to fall back to Google.


No need to prepend. You can bang it at the end and retry your search if it doesn’t work with DDG.


I wonder why everybody (including me previously) thinks you have to prepend.


Because some search terms get interpreted by the browser as trying to hit a specific site. If you prepend the browser probably won't make that mistake. I tend to use !g on the end of my search terms and hit this every now and then; I feel like I've hit this issue in Chrome more than Firefox.


Yea, but in my scenario I’m driving and need the right result right now. I’m not in any sense pro-google, so it pains me they are still the best at this.

I can’t append or prepend, I had time for one search and need a result now before I miss a turn.


What the fuck are you doing typing on your phone whilst driving?!


Being an adult whilst living in the real world.


Needlessly endangering other people's lives. (And hopefully breaking the law, depending on where you live.)

Seriously, which parts of adulthood and 'the real world' make this seem necessary and okay to you? It really is super dangerous. (Google the statistics if that might convince you, or news stories to make the risk emotionally real.)


There are adults living in the real world which drink and drive, but that doesn’t mean we should encourage it.

What you are doing is posing a significant risk to other pedestrians and drivers.

You should reconsider your irresponsible “real world” behaviour.


Typing that on mobile is not easy


All you need is the !g


still too hard (due to the !)


I switched a couple months back because Google started messing around with image search. If you click on an image it now displays a small version of it on the side, instead of enlarging it. It's so useless.

There are some things I miss about Google (I feel like I got better results when searching for programming questions), but I don't miss it too much.


I don't currently have an alternative I use daily, but programming questions seem to me to be the most annoying use case for Google search. I frequently use 4 or 5 keywords to try to find information on a topic, and it gets a cluster of results which ignores one or more of the words that is absolutely essential, because what I want has few if any hits. But things that are everywhere on the internet, I don't need to search for, dammit!


I’ve been using DDG for almost three years.

Going back to Google is actually frustrating for me.

Their results are so badly presented and the number of ads so onerous to scroll through...


I am most frustrated with how almost every link from Search or Gmail displays the real URL, but actually routes you through a Google URL first (so they can track it, of course). That extra hop sometimes takes a second or more.

I don't understand why they don't just track my click with JavaScript. I suspect this new URL-hiding thing might have to do with circumventing scripts, like ones I have used before, that rewrite the link so that when you click, you go directly to what you clicked on.


If it's named according to the true URL, those same scripts would work, albeit differently.


Currently, the real URL is URL-encoded into the Google link. But if they stopped displaying the URL and started encrypting it or using a DB pointer instead, which is what I think they might be moving toward, then you wouldn't be able to do that.
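As a quick illustration, today you can still recover the destination with a couple of lines, assuming the wrapper keeps its current /url?q=<target> (or url=<target>) shape, which is what I've seen so far:

  # Sketch: pull the destination back out of a Google redirect link,
  # assuming it still follows the /url?q=<target> (or url=<target>) pattern.
  from urllib.parse import urlparse, parse_qs

  def unwrap(google_link):
      params = parse_qs(urlparse(google_link).query)
      target = params.get("q") or params.get("url")
      return target[0] if target else google_link

  print(unwrap("https://www.google.com/url?q=https%3A%2F%2Fexample.com%2Fpage&sa=D"))
  # -> https://example.com/page

If they move to an opaque DB pointer, as you suggest, that trick obviously dies.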


DDG uses an aggregate of results from various search engines. I really do hope that at some point (if they haven't already) they build out a native search team. I know it's hard computer science, but they really need to start rolling their own search engine.


DDG has its own crawler and index too; they've had it for a long time:

https://help.duckduckgo.com/duckduckgo-help-pages/results/du...


Interesting, I did not know that. Thanks for the link!


> I've been using duckduckgo and bing

Bing’s UI looks exactly like Google’s with a two-year delay.

Their current ads have a single distinguisher from an organic result: the circled word ‘Ad’ in the same color and font size as the description.

Same as Google before the change. A worse distinguisher than the one it had before, where ‘Ad’ used a distinct color.

Do we really trust that Bing won’t copy the new design in two years?


> Do we really trust that Bing won’t copy the new design in two years?

Do you need to? Worst case you have delayed the bad for two years.


My biggest beef with DDG is that they do not do worldwide results very well. Their results are overwhelmingly US centric.

As an example.... I search on DDG, "buy cricket equipment near me". I am not behind a VPN and whatsmyip correctly identifies my hostname with the correct European country suffix. However, the top results on DDG for this query is "Dick's Sporting Goods". Click on that and I get a nasty "GDPR DSG Sorry. Can't help you message." Second result is "Cricket Best Buy," who help "to connect North American cricket fans with the best possible cricket equipment." Next, AA Sports. Fourth, First Choice Cricket, "#1 USA Cricket Retailer".

If instead I try to be more precise with the query, "buy cricket equipment "austria"". First result is US Sports Direct.

Nothing here is relevant to my query. At least try to give me some UK results in the first few.


I use DDG, but I have the same exasperation. Even appending 'Melbourne' to a search, I'm liable to get results from a tiny town in Florida rather than a metropolis in Australia.

It's frustrating, because for geographical based results, I'm forced to drop back to Google consistently, and Google's getting worse and more user hostile/advertiser friendly.


There is a box at the top of the results page which you can tick to get country specific results


Yeah, but it's just not very good. Set the UK tick box and it will still throw a lot of US results into the mix. Adding "UK" and "manchester" or whatever to the search does a much better job than DDG's country-specific tick box. Of course, even that isn't great, as it'll often give a business in Manchester, NH (pop. 100k) over the city of 3m just down the motorway, even with keywords and DDG's tick box set.

It's a minor annoyance that I've got used to, and hasn't persuaded me to go back to Google (I've used DDG as first choice for probably a little over 5 years now).

If DDG can't deliver the goods, !b (bing) is second choice, !mill (Million Short) probably third, then as appropriate, a mix of !azuk (amazon UK), !w, !ox (oxford dictionary) and a mix of others including !g as last resort. !g is so rarely better or helpful that I wonder why I still sometimes bother.

Edit: I think the real problem is with ICANN. .com should have been kept for global or multinational sites, and the USA should have used .co.us and .us generally for sites of national interest, as France uses .fr, Germany .de, and the UK .uk, etc.


Is there a bang command for turning this on and off? I could not find any in their list.


yes, likewise, I'm in New Zealand, and trying to find local stuff on DDG is a very poor experience. I'll swap to google for this.


Startpage shows more relevant local results for me; it's essentially a proxy to Google Search. DDG supports redirecting to Startpage with "!s".


I like the configuration options DuckDuckGo offers. For example, you can use your own custom font on the results page.

I use the Google !g fallback for about 5% of my searches. Sometimes the google results are simply better, but the appearance just seems chaotic nowadays.


DDG and Bing are so bad; there is nothing on the market right now that can replace Google Search for accurate results. Google's autocomplete, for instance, has no match.


What does DDG use for an algorithm? Originally Google just used PageRank, but that was too easy to game with SEO. That means DDG needed to address this too, and apparently they have (e.g., search for 'favicon' as in the post). How long can we rely on DDG staying neutral in their algorithm?


From wikipedia:

"DuckDuckGo's results are a compilation of "over 400" sources, including Yahoo! Search BOSS, Wolfram Alpha, Bing, Yandex, its own Web crawler (the DuckDuckBot) and others. It also uses data from crowdsourced sites, including Wikipedia, to populate knowledge panel boxes to the right of the results."

https://en.wikipedia.org/wiki/DuckDuckGo


this brings back memories of metacrawler


They use this little known "algorithm" as their backend: https://www.bing.com/.

It's remarkable, BTW, that most people at the same time believe that Bing sucks, and DDG is "just as good as Google". For me that hasn't been my experience at all: Google is far, far ahead of Bing (and by extension DDG) in terms of relevance.


It seems possible to hold both views without bias: DDG went all in on instant answers. I guess Bing does those now too, I don't know, but the choice of scope and which resources to use can be enough of a distinction to colour people's perceptions.


AFAIK they don't have their own algo, they are just a front end for other search engines (Bing etc.) depending on where/what you're looking for.


They have to have some (possibly trivial) algorithm for aggregating the sources they combine.


If they're doing that, sure. I'm not sure it's that sophisticated. Last I heard they sourced different engines for different types of things/geographies not necessarily mixing results from multiple sources for a single query. But this is all based on things I've read in passing rather than any serious research/insider knowledge.


I think DDG is mostly just a front-end on Bing, at least that's how they've been doing things in the past.


It's a nicer skin with better privacy around mainly this search engine: bing.com from Microsoft.


Same here. I may hit Google at most once a month when I'm looking for something really unusual.


Didn't DuckDuckGo have the favicon next to the search results before Google did?


Same. I do research and the URL matters.

This reeks of AOL keyword BS.


Don't forget about Wikipedia citations!


The FTC has spoken on this before. https://www.ftc.gov/news-events/blogs/business-blog/2013/06/...

I believe nearly all the search engines are still guilty of this one.

I also think firms should be able to buy "blank space." For example, Facebook or Amazon could pay NOT to have an ad above their result. Maybe they already do; I don't see an ad when I search Facebook, however I do see an ad for Amazon above the top Amazon result. Google should just be smart enough to see that the top result and the ad are the same link, and handle the situation more appropriately, like tucking the ad text underneath the result, or signifying that the top result owner has paid to hide ads. I have to say, I don't find these results differentiated ENOUGH from the ad. https://i.imgur.com/8Dhr1mj.png
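
The de-duplication itself would be trivial. A sketch of the idea, with made-up data shapes rather than anything resembling Google's real pipeline:

    # Hypothetical sketch: if the top ad points at the same domain as the top
    # organic result, fold the ad away (or tuck it under the result) instead of
    # showing the same site twice. Result/ad structures are invented here.
    from urllib.parse import urlparse

    def host(url):
        h = urlparse(url).netloc.lower()
        return h[4:] if h.startswith("www.") else h

    def dedupe_ads(ads, organic):
        if ads and organic and host(ads[0]["url"]) == host(organic[0]["url"]):
            return ads[1:]  # drop the redundant top ad
        return ads

    ads = [{"url": "https://www.amazon.com/", "text": "Ad - Amazon.com"}]
    organic = [{"url": "https://www.amazon.com/", "text": "Amazon.com: Online Shopping"}]
    print(dedupe_ads(ads, organic))  # -> []

Of course the reason it doesn't happen is incentive, not difficulty: the organic click is free to Google, the ad click is not.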


>I also think firms should be able to buy "blank space." For example, Facebook or Amazon could pay NOT to have an ad above their result. Maybe they already do; I don't see an ad when I search Facebook, however I do see an ad for Amazon above the top Amazon result.

Backblaze had a competitor who bought ads for the search term "backblaze". The CEO contacted the competitor's CEO, and they agreed they'd just end up in a pissing match throwing money at Google if the practice and the retaliation (Backblaze buying ads on the competitor's term) continued. The competitor promptly stopped the ad campaign.

https://www.backblaze.com/blog/save-marketing-money-nice/


If the competitor was smaller but with more money to spend, or let's say had higher margins, it was stupid of them to agree to this type of (possible?) collusion.


I get what you're saying. It does have an anticompetitive/colluding feel to it.

I don't know exactly where my opinion lies in this case. I think the cloud backup space currently has many players, and that even if all players agreed to this, the net result would be Google earning less and each of those companies having less ad spend, and therefore greater profitability.

Heck, imagine if a 3rd party existed to "bypass" the Google ad auction through collusion on generic terms like 'computer backup'. (I'm not in this space, so some of the feasibility is speculative.) Each company puts in its bid parameters for search terms. The 3rd party evaluates the bids, submits 2 slightly different bids to minimize ad spend, and since the whole market colluded, Google makes less money.
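
To make the arithmetic concrete, here's a toy sketch of that coordinator under textbook second-price rules. All names and numbers are invented, and real ad auctions (quality scores, reserve prices, outside bidders) would complicate or break it:

    # Toy illustration: the member with the highest true value still "wins",
    # but the coordinator only submits two slightly different lowball bids,
    # so the second-price clearing cost collapses to the floor.
    def coordinated_bids(true_values, floor=0.05, step=0.01):
        winner = max(true_values, key=true_values.get)
        runner_up = next(name for name in true_values if name != winner)
        return {winner: floor + step, runner_up: floor}

    def second_price_outcome(bids):
        ordered = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ordered[0][0]
        price = ordered[1][1] if len(ordered) > 1 else 0.0
        return winner, price

    values = {"acme_backup": 3.20, "rival_backup": 2.80}   # made-up willingness to pay
    print(second_price_outcome(values))                    # head-to-head: ('acme_backup', 2.8)
    print(second_price_outcome(coordinated_bids(values)))  # colluding:    ('acme_backup', 0.05)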

It's fragile, it's collusion. It's easily bypassable by anyone going to Google directly. Was anyone harmed?


For example facebook or amazon could pay NOT to have an ad above their result

That sounds an awful lot like a shakedown practice. "Nice search result you got there. Would be a shame if it would be obscured by an ad..."


Their current behavior is no different in that regard, AND is more confusing for internet users and businesses, AND is based on the twisted "market price" of the search keywords.


Basically yes. It does appear a bit like "pay us this much to keep the top result, or we will resell it to a competitor", but realistically, that's already the case (the ad being the "top result").

Deduplication is the lesser of two confusings.


I worry about the implications of the blank space idea. For example, if I owned coolstuff.com, when should I expect for my standard search result to be shown below the "blank space". Today, if someone searched "cool stuff", my site shows up as the top search result, but my arch-nemesis, the owner of neatstuff.com, has an ad tailored to that query. Would his ad go away since mine is the top search result for "cool stuff"? Similarly, if my site is preferable to Google enough that my site shows up every time someone searches for something like "cool shit" or "awesome things", would I also benefit from this "blank space" program, at the expense of the owners of coolshit.com and awesomethings.com?


If you BUY the entire whitespace above the first organic result (which could cost north of 1-3 ads), then the first organic result would be the top of the list. It wouldn't matter if you are the top organic result or not; you've just paid not to have ads above that result.

If at some point you slipped down the organic list, you would have the option to buy a normal ad again.


> Google should just be smart enough to see the top result and the ad are the same link

They are smart enough, but they don't get any money from people clicking on search results, only ads, so of course they're going to show the ad - some people will click it.


You would build that into the cost of buying "no ad above my top result."


What's the benefit to whitespace over their own domain name?


Reduction of confusion, and information density.

Why does Amazon need to appear twice in a row - https://i.imgur.com/8Dhr1mj.png


By "whitespace" they mean "whitespace instead of an advertisement to a competitor's website".


I really hate AMP. To the point where I started making an iOS browser whose sole purpose was to bounce me from AMP links to the original link and delete the history step in between. I wish Apple would offer this in Safari: a simple ‘ignore AMP pages’ check box.

AMP is Google's worst attempt yet at taking over the web. It's so user-hostile. It breaks lots of sites with its fake scroll and fake back button at the top of Google News. I hate it so bad!
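
For what it's worth, the core of such a bouncer is just URL rewriting. A rough Python sketch of the idea; the patterns below are the common Google AMP cache shapes (google.com/amp/... and *.cdn.ampproject.org/c/...), not an exhaustive list, and a real iOS version would do the rewrite in Swift inside the browser:

    # Rough sketch of the AMP "bounce": turn a Google AMP cache URL back into
    # a URL on the publisher's own domain. Anything unrecognised is returned
    # unchanged.
    from urllib.parse import urlparse

    def unamp(url):
        parsed = urlparse(url)
        path = parsed.path

        if parsed.netloc in ("www.google.com", "google.com") and path.startswith("/amp/"):
            rest = path[len("/amp/"):]
        elif parsed.netloc.endswith(".cdn.ampproject.org") and (
                path.startswith("/c/") or path.startswith("/v/")):
            rest = path[3:]
        else:
            return url  # not an AMP cache URL we recognise

        scheme = "http"
        if rest.startswith("s/"):   # the "s/" segment marks an HTTPS origin
            scheme, rest = "https", rest[2:]
        query = f"?{parsed.query}" if parsed.query else ""
        return f"{scheme}://{rest}{query}"

    print(unamp("https://www.google.com/amp/s/example.com/story.amp.html"))
    # -> https://example.com/story.amp.html

Strictly, that lands you on the publisher's own AMP copy; getting to the canonical article needs a fetch of the page's rel=canonical link, which is where a real browser or extension has to do the remaining work.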


There is a feature to do this in Safari already, but it's not automatic. If you force press or long press on a link so the preview window pops up, then tap on the preview to load the link, it will skip the AMP page.


Are you suggesting that Google is evil?

Because you would be totally right. As an 8.5-year employee of Google, I had to resign and blow the whistle.

I gathered all of the disclosure documents and stored them here:

--> https://www.zachvorhies.com <--

It's worse than you can possibly imagine. Everyone else seems to be waking up to this just recently (last 6 months).


Mad respect to you. Odd this is the first I've heard about you. I've got some reading to do.


Thanks for your efforts


Many don't like Google's new design. Rather than resort to hacks, try an alternative search engine. There are many and you might even find one you like.

https://www.qwant.com/

https://www.ecosia.org/

https://duckduckgo.com/

https://www.startpage.com/

https://swisscows.ch/

...


Qwant, Ecosia, and (to an extent) DDG share the same Bing backend for web results, so if you're particularly interested in trying out alternatives, here are some others. All of these have their own search indexes.

https://yippy.com - ugly, but probably the best independent search engine outside the "big ones" and DDG

https://private.sh/ - Run by PIA as a proxy for Gigablast, small index but rapidly getting better

https://mojeek.com - UK-based, worth trying but has a spam problem

https://beta.cliqz.com - German based, their technical blog has been posted frequently on HN. Will eventually require a browser extension or their own browser to search.

https://yandex.com - Ought to be mentioned, but certainly not privacy focused.


https://beta.cliqz.com/ is currently available like any other webpage, and will continue to be available in the same way in the future. I think what's being referred to is another search product (search as you type) that we have in the Cliqz Browser. I work on these.

The blog referred to: https://0x65.dev/


I'd love to see a write-up about the differences between all of these. "Find one you like" makes it seem like a good search engine is more subjective than it is. It would be cool if these companies collaborated and shared their indexing strategies, algorithms, etc. to make the whole alternative search engine space better in general.


I've been wondering what it is about Google Search results recently, in that they seem _substantially worse_ than ever before. I hadn't quite noticed what the difference was, but I was really surprised, remembering how Google Search results used to be the very best. Now I know what the difference is: making normal results look the same as Ad results.

"Don't be evil" :)


also the habit of putting "missing: <search term>" and showing you stuff that isn't as targeted as what you were actually looking for.


I've been stuck with this for two weeks now, and it's bad enough that for the first time ever I've considered using something other than Google. It's just so much harder for my eyes to read; I feel I can't skim through the results like I used to (and I believe the old layout would often give the date for things like Stack Exchange and Reddit, which helps with a wide variety of issues).

I'm pretty sure that for the layout itself I'll eventually just get a Tampermonkey script to make it look like the old one, but this is the first thing that has truly made me look for a Google alternative. They have severely damaged their main product, in my opinion.


It is strangely stressful to even read through the results, let alone find the right one anymore. Switching to DDG is pleasant, it even seems to respect a dark mode setting.


I remember when Google made a point of being ethical by putting ads on the right rail with a light blue background so it was clear which results were ads and which were organic.


Seems quaint now, eh?


I noticed this on a co-workers screen recently and my immediate thought was "what dodgy search extensions have they been installing?". Now that it's on my results as well I can't help but strongly dislike the change for some reason. The icons are both very small and very distracting at the same time and don't aid in adding authority or any important meta information about the site.

The changes seem to have added enough noise to make parsing the page annoying, but maybe it's one of those things your brain learns to ignore after a while.


>I noticed this on a co-workers screen recently and my immediate thought was "what dodgy search extensions have they been installing?".

My legit first reaction when I saw it last week on my daily driver was "I wonder what extension is trying to cash out".


(tangent) i really hate what browser extensions have become.


They’re returning to their roots — 3rd party toolbars.


If you must use Google, I suggest you add a couple of filters in uBlock/Adblock, as detailed in this four-day-old Lifehacker article [1].

From the article,

- To remove the favicon: google.com##.xA33Gc

- To remove the URL: google.com##.iUh30.bc.rpCHfe

- To remove the arrow next to the URL: google.com###am-b0 and google.com##.GHDvEf.ab_button

- To remove everything: google.com##.TbwUpd and google.com###am-b0 and google.com##.GHDvEf.ab_button

[1] https://www.lifehacker.com.au/2020/01/how-to-fix-googles-ugl...


Yes, I think Google reached "peak search" a while back, and we're now on a downward trend. The search results are increasingly degraded by commercial intervention, by Google and its paying customers. There has always been a conflict of interest between Google and its public consumers, and Google is now leveraging its near-monopoly market position to shift the balance of that conflict to its financial advantage.


This GreaseMonkey userscript worked for me to revert the search results to their previous style:

https://greasyfork.org/en/scripts/395257-better-google/code


I wonder what the tipping point will be. At what point will Google Search revenue have peaked, pushing Google to accelerate the pace of experiments and new solutions to make more out of fewer users?

Like many around here, I (re)started using alternatives to Google products last year. We're early adopters, so it will take a while for Google to be affected by a mass exodus, but what will happen when it does start? What medium will they use to fill the gap? The only (currently) untapped options matching Search's reach to display ads are Gmail, Android, and Google Photos. Probably nothing else. What happens to advertisers specifically targeting users like me who stop using Google Search (e.g., how do you reach a high earner from the Bay Area if they have completely stopped using Search)? Because this will happen first, and these users are valuable.

The required scale of any alternative is critical. Compensating for a Search revenue decline is no easy feat. So much so that, until now, nothing else has generated anything even close to Search's revenue. If you talk profit, it's even worse, as YouTube is probably not as profitable as Google would like it to be (YouTube Premium, anyone?).

So, the future will probably come from outside of Google's own properties, and that is why they are slowly killing competition in the ad tech space (3rd-party cookies & Chrome). That is why they have been trying to diversify and, wisely enough, are pushing very hard with GCP and other proven revenue streams like subscriptions (YouTube Music, YouTube Premium, G Suite).

Probably also why the founders really left.

It will be an interesting decade, for sure!


Anyone here have a pro-Google stance? Because at this point I'm vehemently against the company and most of its products.

There's no good replacement for Calendar or Docs/Sheets as of now, that I'm aware of. Microsoft's suite, as mentioned by therealdrag0, is an obvious alternative, and perhaps less advertiser-oriented, but still not a great in-browser option IMO.

Especially when considering the interoperability of the "platform," it's clear Google is streets ahead of the competition.

It's a shame that the best featured tools in this space are also not open-source, and used (probably) to mine massive amounts of data.

I'd be ok if you mined my data while I'm on your servers, but only if you allow me to host my own version of your software for when I don't want to be on your servers.


I have a pro-Google stance, but I usually wouldn't talk about it on HN because people just love to assume the worst about Google and anyone defending them just gets attacked. Personally, I think that Google's pushing websites to use HTTPS has done more to improve privacy than anything else I can think of in the last decade.


- HTTPS eliminates proxy-based ad blocking.

- DoH eliminates DNS-based ad blocking.

- eSNI eliminates the last network-level option for host-based ad blocking.

- CDNs eliminate IP-based ad blocking.

- Chrome eliminates browser-based ad blocking.

IMHO they're securing the web _against_ us, not _for_ us.