Every Google result now looks like an ad (twitter.com)
3592 points by cmod 29 days ago | 969 comments



I've been in this A/B test for a couple of months now, so I've had time to adjust, and I still hate it. I've just become so used to seeing the complete URL in green. The complete URL! If you hover over the results, you'll see that they like to take bits like numeric components or the query string out.

This is part of Google's attempt to de-prioritise the URL. Their destructive AMP service confusingly shows you Google's domain instead of the website's — and as they can't fix that without losing out on tracking, they're trying to change what the URL means.

Thanks for ruining the Web, Google.


Google AMP is a plague on the internet and should have been shut down by regulators. It is definitely anti-competitive. I am sick and tired of our government completely caving to the big tech companies when it comes to regulation and anti-competitive behavior.

Google AMP overrides link handling on mobile, even on Android, so instead of web links opening in the selected default app, the AMP link opens within Google Search or the browser. There is no way to turn off this feature in any browser, Chrome or not.


Unless I misunderstood, that is something you disable in each (Google) app that has that "feature".

In Gmail, for instance: Settings > General settings >

Open links in Gmail. Turn on for "faster" browsing.

Default is on. Tip: use Firefox Focus as the default browser to open all links. If a link warrants a further look, copy the URL to your main browser.

The above has nothing to do with amp.

But to not detract, amp truly is a plague.


Thanks for the tip. Most of the people here would fix that somehow, e.g. by tweaking browser settings or finding and following comments like yours. But why does someone who doesn't deal with tech have to think about that? Will they, and should they?

I find their UX approach to SEO more concerning than, say, scoring. For example, AMP pages will receive the same score as equally performant non-AMP content, but the non-AMP content won't benefit from the carousel view, instant page loads (prefetch), etc.

Small changes like this, applied at a huge scale to a user base that just can't afford to constantly fight hostile UX, are damaging, regardless of whether they come from Google, an airline site, your mailbox switching to "Promotions" at random intervals or Twitter not allowing you to permanently change the feed order.

The Internet feels more hostile than it used to. I don't mind the trolls (I can always block them) but I do mind that we depend on services that deploy hostile UX practices.


> The Internet feels more hostile than it used to.

It's not the Internet, it's software.

You can really feel it when switching from Android to LineageOS, or from Windows to Linux.

It's the difference between software where "you are the product" and software that has been created to serve the user, to be the best tool it can be.

I'm not talking about UX in the sense that open source software can sometimes feel clunky and unfinished; I'm talking about the breath of freedom you feel when you switch, and suddenly ... you notice that this current you have been forced to swim against just isn't there any more.

I helped a friend get LineageOS on his bootlooped phone, and he's so happy with it. All these features in the settings that are actually helpful (whereas in Android you always have to second-guess; you know that feeling when a setting's description is so vague and uninformative that "this is probably going to spy on me" could apply whichever way the switch goes). All the very basic features that would otherwise require an ad-filled app are, of course, already there. You get privacy controls that aren't mystifying or reset on updates.

Similarly, I've now been on Instagram for about half a year. Always avoided FB, but there's some art I wanted to put out there, so I gave it a try. And oh my god, Instagram is probably the shittiest software ever? Or at least the most user hostile. It's a social thing where you can share images, comments and messages. Except it isn't, it just appears like one. Literally every interaction feels forced, to show me more ads, make me spend more time in this app (??) and mainly constantly throwing up barriers against interacting with any kind of software or data outside its ecosystem. You can't upload from your laptop, many links are not clickable, many text fields are not copyable, most features in the browser are locked unless you make it pretend to be an iPhone (!!), you can't post or reply to comments. The chat is such basic functionality that it seems hard to fuck up but they did. I should stop, but I can go on ...

This is that feeling of constantly swimming against a current, and somehow we're tricked into believing that is how it's supposed to be, because ... I don't know; some people told me, when I complained, that most people don't use Instagram the way I do. Well I guess, but that doesn't seem to be most people's choice.


I think you're definitely misunderstanding what Instagram is. I'm not saying it isn't user hostile and full of ads, because it definitely is, but you're complaining about all the wrong things. All of those things you're complaining about are conscious design choices and have been from the very beginning.

Instagram is not some generic photo sharing software that tries to be open and modular and integrate with everything and proliferate arbitrary visual media with a rich tightly coupled messaging system. It was never that and won't ever be that.

Instagram from the start was just about taking low res pictures with your phone camera, putting a filter on them so they look less terrible and then sharing them with your friends. Every other feature was begrudgingly added to increase accessibility and hence DAUs. So you were never supposed to be able to interact with anything outside of the app. You can't cross-post your posts to Facebook or Twitter, you can't post from your computer, you can't spam links in your photo descriptions. All of this is literally the point of Instagram. It was like this before, and for a short while after, it got acquired, and people loved it, not in spite of the restrictions, but because of them.

Then Zuck crammed it full of ads and a terrible glommed on messaging system and ruined it.


And Twitter was a service to share SMS text messages of max 140 characters.

I understand that what you describe is what Instagram was, but given what they have both become, they actually have little functional difference.


Amen.

You haven't tried snapchat btw. It keeps getting worse. I'm sure many of us in here are feeling like dinosaurs, unable to fit in.


I did try snapchat but I couldn't stand it, I only had one contact (and "Team Snapchat"! yay) and ultimately got rid of it.


I installed an SSH client on my phone and my life instantly got better.


In fairness, this applies equally to all things one can install SSH clients on.


Totally agree with the Instagram part. Worst software ever.


WhatsApp is second after that :)


If you use Google Search as your default search, there is no way I know of to disable AMP, no matter which browser you use, even if you ask the browser to handle opening the link. Google Search passes the AMP link to the browser.


> Unless I misunderstood, that is something you disable in each (google)app that has that "feature".

> In Gmail for instance: settings: General settings:

> Open links in gmail. Turn on for "faster" browsing.

I changed this setting, and when I click a link in Gmail it now indeed opens via Firefox, but it still gets redirected through a Google URL before getting to the real page. I want to disable that behaviour most of all.


Then use a generic email client instead of Gmail mobile app or web email interface.


The Fastmail app is very good, for example.


> Tip: Use firefox focus instead as the default browser to open all links.

Unfortunately iOS doesn't support this nearly as well as Android does. On Android you can enforce this pretty much everywhere whereas on iOS a bunch of apps still open links in Safari no matter what you do.


It's been enough to make me change my iPhone's search default to DDG. I've been using it for about a month and I'm surprisingly happy with the results so far.


Me too, but lately I am experimenting with qwant.com and also love the results.


Semi-off-topic, but I recently found out that there is an even worse version of AMP, called Google Web Light. Running Firefox on an ARM device (e.g. Raspberry Pi), the results speak for themselves [1].

Notably, there is absolutely no way for the end user to disable it, short of spoofing your user agent.

[1] https://googleweblight.com/i?u=https://news.ycombinator.com/...


Truthful name, though? Google, we blight.


The Opera Mini MITM has returned!


> there is no way to turn off this feature on any browser, chrome or not.

Here is a way for Firefox: https://addons.mozilla.org/en-US/firefox/addon/amp2html/


>Thanks for ruining the Web, Google.

I posted the solution below, which I found a few days ago. The script works with Greasemonkey and Tampermonkey and gives you results similar to the ones before the change. If you also use uMatrix, there will be no ads.

https://greasyfork.org/en/scripts/395257-better-google
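
For anyone wondering what such a userscript amounts to, here is a minimal sketch of the general technique (not the linked script itself): inject your own CSS into the results page to restore a plainer layout. The selector names below are placeholders; Google's class names change frequently, so inspect the page and substitute current ones.

    // ==UserScript==
    // @name         Plainer Google results (sketch)
    // @match        https://www.google.com/search*
    // @grant        none
    // ==/UserScript==
    (function () {
      'use strict';
      // Hypothetical selectors -- replace with whatever Google is using this week.
      const css = `
        .RESULT_FAVICON_PLACEHOLDER { display: none !important; }
        cite { color: #006621 !important; } /* old-style green URL */
      `;
      const style = document.createElement('style');
      style.textContent = css;
      document.head.appendChild(style);
    })();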


> I posted the solution below,

Sorry to be "that guy" but for me the solution is to use DuckDuckGo.

It's not perfect, but neither is trying to play arms race with Google's JavaScript.


DuckDuckGo is great, but Searx[1] is even better. It's a metasearch engine that aggregates several search providers; you can self-host it or access it via one of several public instances.

I run an instance on my local network, but you can run it for free on Heroku, AWS or GCP or even on a Raspberry Pi. There are several Docker images you can choose from.

[1] https://github.com/asciimoo/searx


Ironic that I once switched from Dogpile to Google. We have come full circle.

Edit: Looks like Dogpile still works, and flags the google ads properly. I'm switching back!


Extra irony: I went to try Dogpile, but got redirected to some anti-something site (seems to not like my VPN IP) that wants me to do a Google Recaptcha.


You got a laugh out of me :)


Anyone know what DDG's !bang operator is for Searx?



Yeah, it’s pretty straightforward


I use DDG as much as I can, but when I need to find something very specific, or find a solution to a bug I've encountered, it takes 2x as long on DDG vs Google.


I find Google ignores my attempts to be more specific more and more often.

Just as a contrived example, a search like "R33 RB25DET Motorsport ECU" typically got me specific links relating to the exact car, engine and topic. But for the past 5 years or so it seems like it weights the more common and more general terms, often excluding the target topic altogether and just giving general motorsport results. Perhaps it's a consequence of every SEO specialist and their dogbot hammering general search terms and gumming up the machine with cruft.


That's something that really drives me nuts about it lately. If I just wanted general results, I'd do the lazy thing and not put in the additional terms. Worse, I've been finding that it still ignores some of the terms even when I put them in quotes.

The results just seem really bad lately, especially for anything technical. Just now I'd been looking for "html5 canvas torture test". The top result is a video called "Torture Testing my Nut Sac!!" and then some videos about testing Glock guns. Umm, no, that's not even close to what I'd wanted. (Bing does way better here and DDG is somewhere in between.)

I'm not sure what Google engineers are using to find technical information on the web these days, but I can't imagine it's the public Google search.


I personally find it most helpful to just ask Google a question like I'm a complete idiot. I got the idea from the meme about "that guy wot painted them melty clocks", which works extremely well in my opinion. Looking in my history, "how to multilingual in java please" worked fine. You get a laugh out of it, 90% of the time Google figures out what you need, and the rest of the time it's going to show whatever the hell it wants to anyway.

At least both of us are pretending the same level of intelligence, which takes away a lot of the irritation.

I also tried talking to DDG like a duck, but it doesn't give as good results as talking to Google like an idiot.


animate on scroll "responsive"

Top result doesn't have the word responsive, very handy.


Just to be clear, I still use !g pretty frequently.

But psychologically it's rather different. If you find the Google search page to be visually aversive then your goal is to get in and get out quickly. That's a bit harder if Google is your default search.


On DDG, type:

    !s your search terms


In case anyone else is wondering, this sends your query to startpage, which appears to query google for you anonymously.


wtf... that's cool... I'll start trying that


I agree. The moment I saw this was the moment I configured my browser to use DuckDuckGo instead of Google. Good riddance.


If your browser is Chrome it’s still working for Google, not for you


Firefox has been my main browser for about 15 years. Never saw the advantage in Chrome, aside from using it once in a while when a webpage didn't work correctly on Firefox. These latest years we are seeing very aggressive behaviour from Chrome (reducing effectiveness of ad blocking, for example) and that just reinforces my decision.


I think I agree. Google still seems like my first choice mentally, but I've used DDG a lot more as my first search lately (the past month or so).

It is getting more and more annoying finding workarounds for everything, though. It is more annoying because I liked Google's layout, default colors and so on. It was cleaner.

Now not only is their search quality getting worse (I got what I asked for on Bing, of all places), their presentation has managed to degrade too.

If this is the result of testing, I would be curious to see the data that informed that decision.


I wish Google would enable infinite scroll like DDG. Internet search results are one of the few cases where it is more user friendly than pagination.

Just being able to keep scrolling and seeing more results is very helpful. I guess it increases the value of the "first page" for Google.


I usually start with DDG (it's my default search engine on all the devices I use) and then quickly move on to !s (for Startpage) since DDG still is lacking in the quality of results for many searches. It's a bit rare that I go to !g (Google search).


> "the solution"

That's a solution, but not a particularly great one since it requires too much from the user to see mass adoption.


That still requires somewhat less from the user than my solution: a filtering proxy. On the other hand, the latter enables a far more customised browsing experience and one that isn't restricted to a single browser on a single computer.

...which brings me to another great point this illustrates: if you want to customise your experience, if you want to be able to control how you see the Web, then you need to make an effort, and the amount you exert is essentially proportional to how much you can change.

Yet the majority of users have shown that they are willing to take whatever Google throws at them with little opposition. I find that a little sad and ironic in this era of "everyone can code" propaganda (I've seen even Google advertise something like that on its homepage); or perhaps the latter is just an attempt to increase the population of intelligent yet docile and obedient corporate drones... I know developers --- web developers --- who really hated the changes yet made no effort to fix it themselves, despite almost certainly having the skills to.


What kind of filtering proxy are you using?


Proxomitron.


Good call, I think you hit it right on the money. We have all seen these problems (basically Google attempting to MITM the entire Internet) getting worse for years, along with all of the very real malvertising threats. We have partnered with the Privoxy project to do exactly what you are doing with Proxomitron, but with a system that will scale to enterprise environments. We have it running in corporate and educational environments already w/out SSL inspection. What we will be able to do with SSL inspection will be a game changer. Check out the virtual appliance! Any feedback or ideas are appreciated. https://www.nextvectorsecurity.com


wow does that ... still exist? is it maintained? (I thought the creator stopped 15+ years ago?) does the user interface still look weird and green? :)


The creator died 16 years ago... but the (rather small) community has made a lot of patches and continues to work on filters. Given that it's basically the equivalent of running all the sites you visit through sed, with a syntax that's more suited for filtering HTML than plaintext, the strength lies in its flexibility and generality.

Yes, the UI is still skinnable, and the default skin is rather... psychedelic.


Ohhhh, right! I remember when that happened, forgot ... Amazing that the software is still in use. And that it's still useful given the temporary nature of these filters. Its syntax was pretty neat; writing your own cosmetic filters was pretty easy. And I suppose that the community wrote some code to auto convert public block lists maybe?

There's something to be said about the adblocker being a filtering proxy, it can really get anything before it hits the browser.

Do you know how it compares to Privoxy nowadays? Way back then it was the open source but harder to configure alternative, that didn't quite work as well as Proxomitron. But maybe Privoxy continued development and got better, I don't know what direction that project took.

Oh and I personally always really liked the default skin :D


Haha I just took a look, expecting a cool Github project page.

Nope - it's green, and hasn't been updated since June... of 2003. I have no idea how it is able to be effective against the modern web, considering that in 2003 the biggest issue was annoying pop-up Flash ads, which no longer exist. Maybe there are updated plugins or something.


I'm confused. Isn't AMP mostly an issue on mobile devices? Can you use this on a phone?


If using mobile Firefox (not FF Preview or FF Focus), then Greasemonkey is available as an add-on.


This is available only on Android though. On iOS, Firefox Focus (or any Firefox or other browser) is tied to Apple's restrictions. So there's no scope for browser extensions as we generally think of it.


And Mozilla is about to roll out Firefox Preview soon, with all add-ons but uBlock Origin disabled until they can fix up and test the add-on ecosystem on the new browser. Unfortunately there are more add-ons for privacy/security/ethics than uBlock (Decentraleyes, tracking token strippers, et al.), which includes add-ons that redirect to the original source from AMP.


Thank you. Solved it for us, at least on desktop.


I can't mentally parse the results anywhere near as fast as I used to be able to. It's horrible.


Mission accomplished?


Depends on what Google’s mission is. If it’s to show you ads, it doesn’t matter how long you’re on their page as the destination will (almost certainly) have more. If it’s to help you, it’s not successful.


>mission

Making profit, obviously. It makes you less efficient at distinguishing ads from results, increasing the time you look at ads and the chances of you interacting with ads.


A few weeks ago, I changed my browsing and searching habits purely to get out of that A/B test bucket. Now it looks like we're all in it.


I am very curious how you achieved that!


Now that is an admission.


Google hates strong SEO sites, because they won't make it any money. So this is a clever way of pushing them further down. I wonder when all results on the first page will be ads only.


Google decides the layout. You can have the 'strongest SEO' in the world and Google still decides whether to put 1 ad or 9 in front of the result.

Strength of SEO is irrelevant to the ads. The only thing Google hates is when sites manipulate themselves to rank higher and offer a worse user experience.


> Strength of SEO is irrelevant to the ads.

It wouldn't be very surprising if Google varies the number of ads in a search results page based on the search term. For sites that have strong SEO for all of their key search terms that would be indistinguishable from Google placing more ads in pages where that site ranks highly.


My understanding is it's very linked to 1) Profitability. For search terms around things like lawyers and credit cards, you'll almost always see 4 ads. 2) Genuine relevance. Google knows that for certain searches you're not likely looking to buy something, and to keep credibility it doesn't show ads.

Occasionally you can find pockets of less competitive searches that 1) allow ads, 2) relate to your product via the algorithm even if they don't to a human brain, and 3) align with your desired audience, and these can give great returns.


Do they really? Wouldn't a "strong SEO" site be ideal for their spiders?


I guess sites with real, useful content (e.g. Wikipedia) don't need strong SEO since they have a ton of back-links from other sites that validate their high ranking, so "strong SEO" is really about making a less useful site look more useful, which makes sense for them to hate.

SEO really translates to "how to fool Google into boosting your ranking artificially".


In my experience, Wikipedia has been relegated lower and lower in search results.

So much so that I now specifically use their search tool rather than go through google just in case some interesting thing pops up.


Having to search for “xxxxxxx wiki” more and more now. No I don’t want Healthline and Medicinenet links above the Wikipedia entry, thank you Google.

I wish by each search result there was a button that said “banish this domain to oblivion, I never want to see it again.”

You could improve search really fast that way if you still cared about things like that.


There used to be such a feature in the results page. I just went looking for it and I got 'Cached' and 'Similar' when I click on the little drop-down arrow. A nice feature that appears to have been removed. How did removing that feature benefit users?


It used to be an extension (an official one from Google, Personal Blocklist). It was never part of the vanilla search results page itself.


No, it used to be part of the official results before the personal blocklist extension ever existed. Then some features were removed from the search results and then partially reimplemented in that extension.


Personal Blocklist by Google has been forked and reimplemented by a number of people:

https://chrome.google.com/webstore/detail/personal-blocklist...


Yes, a ban button has been on my search wish list since Google existed!


I do the same thing when looking for product reviews or useful discussion- generally put "xxxx forum" or "xxxx reddit" etc.


If you know which site you want to search, and its search feature is as decent as Wikipedia's, I suggest adding it as a search keyword. Saves me some time to type e.g. "wk turtle" in the address bar instead of going through the front page or lazily searching via some third party search engine.


try <foo> !w on DuckDuckGo


Or add "w" keyword for https://en.m.wikipedia.org/wiki/Special:Search?search=%s&go=... bookmark on Android Firefox and then search with "w searchstring" from its address bar.

It's silly that it's necessary to create separate bookmark for this, though. Native search engines, surprisingly, don't support keywords there.


Why silly? It is in fact a bookmark - just a parametrized one. :)


They do have native search engines, with a neat option to add them from a site's search field. Why not add keywords too, as they did on desktop?

I feel silly adding bookmarks for things I already have in the search engine list.


I use DuckDuckGo, and for the most part you know what you are searching for, so the !tags are really good. "!w search term" just takes you right to Wikipedia. When I really have no idea what or where I'm looking for something, I still find myself looking on Google a bit, but for the most part !youtube, !arch, !git, !stack get me exactly what I want about 99 percent of the time.

Check it out, because it sounds like it might start to match your workflow: https://duckduckgo.com/bang?q=


For YouTube you can use !yt, much shorter.


Is there a shorter domain name for ddg?


There is a https://ddg.gg which redirects to the main domain, so I'm not sure if it's what you're looking for.


https://duck.com Should redirect you.


Duck.com?


Which is complete fucking bullshit. It's driving me mental that when I search for something, Wikipedia usually isn't on the front page. It's almost always the best result for most things, it should be on top.


Shouldn't you then just go to Wikipedia and search there? You know, to stop the "F* bullshit" and "save your mental state"?


Wikipedia's search is totally inferior to google. It requires correct spelling within one or two edit distances and the SERP is far less informative. This is a common enough action that those seconds add up. If ddg wants to be competitive they need to fix this.


Wait, what?

Google used to put the Wiki article right at the top of the results list. It virtually never does that anymore. This is what's bullshit.

The point of a good search engine is that it is supposed to conglomerate good results, relevant results - let's say I'm looking up 'Phillip J. Fry' from 'Futurama', but I still want wiki information. Wikipedia won't even spellcheck for you if you don't know how to spell something correctly, like a city name.

If I use a search engine, I'll get this Wikipedia result: https://en.wikipedia.org/wiki/Philip_J._Fry

But I will also get the far more informative Futurama-wiki result: https://futurama.fandom.com/wiki/Philip_J._Fry

Wikipedia is not a search engine. Although, at this point, Google is barely one, so plastered with sponsored results it can be hard to find the result you're looking for, and with this change, I've finally made the long-needed jump to DuckDuckGo.

Yes, it's time to 'stop the fucking bullshit', and save all our mental states - searching Wiki isn't going to solve that - but not using Google can help. ;)

Comparing a search engine to wikipedia search is like comparing a search engine to a local file search.

If you're looking for a specific driver on your computer that you know the exact spelling and version number of, a local file search will help you find that. A search engine will return many results with download links as well as potentially other drivers, or other versions of drivers for your product - and will generally forgive you if you misspell something.


You are perhaps joking, but that has been my tack for a while now: having specific sites I use to search through. I used to web search and then pick from the offerings presented, not caring what site it was so long as it had the information I needed. But now I really value a good website that respects UX and has good, honest content with low commercial influence.


I would love to be able to overrule Google and always sort Wikipedia as the first result. Maybe this can be done with a browser extension.
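
As a rough sketch, a userscript (or the content script of a small extension) could do something like the following, assuming Google's organic results still sit in div.g containers under #search; those selectors change often, so treat them as placeholders.

    // ==UserScript==
    // @name         Wikipedia first (sketch)
    // @match        https://www.google.com/search*
    // ==/UserScript==
    (function () {
      'use strict';
      const container = document.querySelector('#search');             // assumed results container
      if (!container) return;
      const results = Array.from(container.querySelectorAll('div.g')); // assumed per-result wrapper
      const wiki = results.find(r => r.querySelector('a[href*="wikipedia.org"]'));
      if (wiki && results[0] && wiki !== results[0]) {
        results[0].parentNode.insertBefore(wiki, results[0]);          // move the Wikipedia hit to the top
      }
    })();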


Wikipedia seeming to lose relevance is like my old high school teachers finally getting their way over a decade later...

Do you find their search tool more effective than appending "wikipedia" to your Google search?


I think it's Google's doing to deprioritize them; on other search engines it still shows up first. DDG gives it special treatment by highlighting its summary, which is often what I'm looking for.


At this point I have moved to another search tool as my baseline.

I also use hoogle a lot for work so I'm used to switching search tool.


I noticed that as well over the years. Also, one thing that really drives me crazy is that Google is trying to steer me into using the German Wikipedia, even though I am already explicitly searching for the English article name. I really prefer reading the English version for techie topics, no matter if there might be an article in my native language. This is the sort of "smart" behaviour that really feels dumb.


I remember a thread where everyone complained over too many wikipedia results popping up. Now we complain the other way.


I think the idea is that a strong SEO site doesn't have to pay to be seen through searches.


There will still be 2-4+ ads above the fold, ahead of the real "good SEO" results, anyway.


If you have good SEO and drift to the top of queries that actually are relevant for your site then you don't need to buy as many Google ads.


The way Google defends its income stream against that is simple: They allow your competitors to buy ads on your own names and trademarks, then you're forced to do so as well because otherwise your organic link is below the ads. It used to be that it mattered since only dumb users would click the ads. But now that they're unshaded and look 99% identical, only super nerds bypass ads. Meaning your #1 organic result is basically only good for bragging rights and nothing more if there's anyone willing to pay even a small amount to jump up above you.


Wouldn’t having strong SEO incentive competition to buy ads? Of course it also incentives them to work on SEO, but the only way to ‘get above’ a top ranking site would be to buy an ad, no?


Increasing costs for your competition is a net win for you.


And Google


Depends on the industry, but if you can profit at lower prices you can force your competition to spend less on advertising. That’s bad for Google.


That makes sense. I meant increasing the ad buy spend of a firm's competitors helps Google


> I've been in this A/B test for a couple of months now, so I've had time to adjust, and I still hate it

Me too. It just looks ugly.


I think half of it is the ugly font they are now using. Ugly is the perfect way to describe it.

Google is really not good at creating beautiful products...


My understanding is that they are optimizing for revenue per visitor per month- they are not trying to make the most visually appealing product.


Clearly. It looks like dogshit....


That's a scary optimization to make if it starts costing you visitors.


They'll make it up in volume /s


How is hijacking the domain with Google for AMP not anti-competitive? I'm surprised a class action lawsuit hasn't erupted because of that.


It is not a requirement for AMP. CDNs now let you roll your own domains on the AMP standard: https://blog.cloudflare.com/announcing-amp-real-url/

Bing also uses the AMP standard: https://blogs.bing.com/Webmaster-Blog/September-2018/Introdu...


> It is not a requirement for AMP. CDNs now let you roll your own domains on the AMP standard

All these certificates do is make it so Google's browser (and only Google's browser) will mask the fact you're on Google's domains if you sign the file a certain way.

If anything, this shows more anti-competitive practices -- they're adding features into their browser that specifically benefit features of their search engine.


That's not true. CDNs also use their own non-Google domains and infrastructure for AMP hosting:

https://amp.cloudflare.com/


Effectively 0 AMP sites are using anything other than Google's CDN.



Yes, sites just host the original copy submitted to Google. You can see all the resources are loaded from https://cdn.ampproject.org

If you visit the page from search results (which is the only place it would be linked) then it would never leave Google's domain.

Here's the actual URL used from search results: https://amp-businessinsider-com.cdn.ampproject.org/v/s/amp.b...


But as long as it's possible it doesn't qualify as lock-in.


You don't need lock-in to be anti-competitive. The requirement of extra work to implement AMP to get that higher search results page placement is the issue.


At which point does pushing for new technologies as a private entity stop being moving technology forward and become anti-competitive?

If the criteria is just "needs extra work" then unfortunately almost nothing can change and we're all going to live with the existing technology. Change inherently has friction and requires "extra work" with the hope that's an investment which provides returns long term.

In other words, say you are a large Internet company that is trying to improve web page loading times. You profile why most web pages are slow and identify issues. You publicly report on those issues and develop guidelines and criteria. Nobody bothers because "extra work". You develop new technology that directly addresses those issues, this technology works within the existing environment but it requires both client and server support to be most effective. Do you think anyone cares? No, because of "extra work". That's why there needs to be incentives. Now you have a "penalty" for not doing that "extra work". You can file it under "it's anti-competitive" (maybe it is) but if you do the "extra work" then suddenly the anti-competitive part works for you, not against you. IMO that's why it's not anti-competitive.

Other examples: why do you think so many people complained when the iPhone was released without Flash support? "extra work". Similarly when it removed the audio jack. Change is friction and friction is extra work. But most of the time that's not anti-competitive...


Let me explain based on my 15 years of adtech experience:

HTML is already fast (see HN for an example). HTML is already universal across devices and browsers. HTML is already published and easily cached for billions of pieces of content.

AMP is a fork of HTML that only targets mobile browsers specifically linking from Google search results. It's useless on its own, but AMP is required to get higher placement on search results pages, so publishers are effectively forced to spend technical resources to output an entirely new format just to maintain that ranking.

If Google wanted faster pages then it can do what it always does and incentivize that behavior by ranking results based on loading speed. These signals are already collected and available in your Google webmaster console. There's nothing new to build, just tweak ranking calculation based on existing data. Sites would get faster overnight, and they would be faster for every single user because HTML is universal.

Do you know why they didn't do that? Because it's the ads and tracking that's slow, not the HTML. Google's Doubleclick and Google Analytics are the biggest adserver and tracking systems used on the web. This entire AMP project is created to circumvent their own slow systems. It creates more work for publishers while increasing data collection by running a "free" CDN that never leaves a Google-owned domain and thereby always supports first-party cookies. It's a perfect solution to protect against anti-tracking browsers and why Chrome now will also block 3rd-party cookies, because it won't affect AMP hosted on Google domains.


This makes sense. With all that and browser fingerprinting and accounts and "other" mechanisms, do they even need cookies anymore?


First party storage won't be affected without some major AI tech in browsers so cookies are still the best deterministic connection, especially since most people are logged into a Google service already (gmail, chrome, android, youtube, etc).

Probabilistic techniques are used for anonymous users or environments like iOS Safari that are very strict.


So the user sees your URL, you're getting the revenue from the ads that are shown, sharing will share your URL, your statistics work flawlessly.

In other words: if it behaves exactly as a page hosted on your site (just faster), why do you care?

I'm getting the impression that HN users care a whole lot about seeing the request in the nginx log they are tailing.


Well, as a user, I care about not announcing loudly to Google every single step I take on the web.


Then why are you searching on Google? That's where you would see an AMP page served from a Google AMP cache. If you searched on Bing, you would get AMP pages served from a Bing AMP cache instead.


In the past year, I've seen amp pages increasingly often linked from all sorts of places (reddit, FB, here, etc) besides Google's search results.


AMP pages hosted by the publisher, Google's AMP cache, Bing's AMP cache, or some other company's AMP cache? GGP was complaining about sending any information to Google. Only one of those options does so.


I am obviously not.


It's not always faster. There are plenty of performance and usability issues with AMP pages, not to mention all the extra development effort needing to maintain a different version of the site just for a few mobile browsers.


It's anticompetitive af.


The content is still served from their CDN regardless of the domain. There is no way to serve AMP sites from your own servers and appear in the search carousel among other AMP articles.

Google is strong-arming the entire web to switch to AMP in order to increase their control over the distribution of content, and to be in a better position for tracking users.

The fact that Microsoft and Cloudflare have joined the party does not change the fact that you're about to lose control over your own content if this is not stopped.


That's not true. CDNs host AMP sites on their own domains and infrastructure, independent of Google:

https://amp.cloudflare.com/


By "their CDN" I meant Google, Cloudflare and Microsoft. Can we set up our own CDN to serve our own content from our own servers and receive the AMP badge in search results?

Please disclose your affiliation to Google either in your bio or in comments, and don't post the same comment in multiple places.


> Can we set up our own CDN to serve our own content from our own servers and receive the AMP badge in search results

This...doesn't make sense. You lose the value of a CDN (both to you and to the consumer of your content, in this case Google and the end user) if you're rolling your own.


It no longer makes sense to be able to serve our own content without it being pushed down in search results?

We were talking about CDNs because your colleague mentioned AMP CDNs, but the main point doesn't change: we cannot serve our own content from our own servers and get the same placement in search results as AMP content, even if our content loads verifiably fast and is as performant as an AMP page on the client.


> We were talking about CDNs because your collegue mentioned AMP CDNs

I have no clue who bdeurst is. They certainly aren't a colleague of mine.

> even if our content loads verifiably fast and is as performant as an AMP page on the client.

Can you explain to me how your page load time is 0ms? My understanding is that a correctly functioning AMP-cached page will load for the user in a whopping 0ms, because it can be preloaded.

The entire design of AMP starts from a fairly straightforward premise: "How do we reduce (user-visible) page load times to 0, safely, cross origin?" If your page's user-visible loading time is longer than 0, you're failing to keep up with AMP.


AMP pages are preloaded, that's how your get the 0ms load time. If Google would instruct the browser to preload other search results the same way, those would also be available in 0ms when the user accesses them.


See my statement about secure, cross origin preloads. You're asking a search engine to XSS attack its own users.


I think the correct term I was looking for is prefetching. That's a secure way to tell the browser to start loading search result links in the background.
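
For reference, this is roughly what a prefetch hint amounts to; a results page could emit one per organic link. A minimal sketch (the URL is a made-up placeholder, and note the cross-origin caveats raised in the reply below):

    // Sketch: ask the browser to quietly fetch a result link in the background.
    // Plain rel=prefetch, not what AMP does; it exposes the user's cookies/IP
    // to the prefetched site, as pointed out below.
    const hint = document.createElement('link');
    hint.rel = 'prefetch';
    hint.href = 'https://example.com/some-result';  // hypothetical result URL
    document.head.appendChild(hint);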


> That's a secure way to tell the browser to start loading search result links in the background

prefetching isn't private cross origin:

> Along with the referral and URL-following implications already mentioned above, prefetching will generally cause the cookies of the prefetched site to be accessed.[1]

IDK about you, but I'd generally prefer that my cookies and IP not be exposed to all of the links that happen to be in the first page of search results.

[1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Link_prefe...


Browser specs can be improved, and new ones introduced. And even if safe prefetching is deemed technically impossible, the question remains: should we give Google and a handful of other companies disproportionate control over how we publish and consume content, for 50ms of load time?


> Browser specs can be improved, and new ones introduced.

Yes, for example Signed Exchanges, which on a technical level solves all of the problems of rel=prefetch (and a number of the problems with AMP, like link pasting and copying).

> Should we allow a handful of companies to be pinged every time we load a page on the web

I'm hopelessly confused here: you're only going to "ping" one of the handful of companies if you were referred by that company. (In a world with signed exchanges) You're not going to come across an AMP-cache link organically. You'll navigate to example.com directly, without anyone except example.com (and your DNS provider) knowing. The cache provider will only know if you navigate to the cached site via the cache provider. Concretely, you don't go to the Google amp-cache unless you're navigating there directly from Google's search results. Same for Microsoft/Bing.

So if your metric is

> Should we allow a handful of companies to be pinged every time we load a page on the web

Then yes, absolutely, because nothing changes!

Edit: To address your other question,

> should we give Google and a handful of other companies disproportionate control over how we publish and consume content, for 50ms of load time?

Alright: how is AMP materially different from <whatever other algorithmic choices rated search results before>?

You seem to be claiming that AMP is harmful to someone, but who? It's not harmful to competitors or to end users, and it's only harmful to developers if you make the most strained argument.

My premises here are that users actually prefer AMP results. You may not, but my understanding is that most users do. So from the perspective of an end user browsing the internet, AMP leads to an improved experience.

So it's good for users.

No one has yet been able to explain to me how it's actually harmful to a web developer who now has an incentive to make AMP-compatible sites. Like sure, you now have to work with a framework you may not like, but that's not a compelling argument when people are claiming that AMP is a threat to the sanctity of the internet.

So it's not like bad for web developers, it's just sort of a lateral move.

That leaves competitors to the giants. But AMP is an open standard, and DDG could, if they wished, implement an AMP cache themselves today and it would just work. And they'd, if anything, benefit from the bigger players pushing that ecosystem. There's the potential for abuse via the caches.json registry, but the AMP project is aware of this and notes that the registry could be decentralized using Subresource Integrity or similar, if such a standard was adopted[1].

So again: I'm confused by how exactly it's bad, beyond the "I am forced to develop in a way I don't want to if I want to appear near the top of the search page", which isn't new.

[1]: https://github.com/ampproject/amphtml/pull/18495/commits/c5d...


I've actually removed that second question before I noticed your answer, because I knew you would then skip over the first one. Feel free to address the main point I was making in all my posts in this thread, whenever you feel ready.

> So if your metric is

Google being pinged obviously isn't my main metric, as you can see from all my posts in our discussion. My main concern is that publishers will be forced to use specific publishing mechanisms (AMP, Signed Exchanges) to appear at the top of Google Search results. That loss of control puts publishers in a vulnerable position, and hurts innovation across the web.

> Then yes, absolutely, because nothing changes!

Everything changes. Google's influence and control won't end at the moment the user navigates away to a top result on Google Search.


I updated the prior post, but to be a bit more concrete, what differentiates AMP signed exchanges from, say, HTTPS?


Google could serve an older version without the user knowing.


The signed exchanges protocol requires that the content be signed with a key with a short expiry date (< 1 week), sites are free to make it shorter. In extreme cases, the providing site could sign with an expiry of < 1 day or even something like 1 hour.

And iiuc, sites are still free to revoke their certs. So this is actually probably more secure compared to something like https in that regard.

[0]: https://github.com/WICG/webpackage/blob/6cc3237b36c2f9ce7534...


I see what you're saying but I'm not sure that's a problem - how many people are building CDNs in their garages that need to be certified?

Also, everything I say on HN reflects my own opinion and not any organization, which is what my profile states. I do not hide behind an anonymous username precisely for this reason. Poisoning the well by doxxing me doesn't change how the AMP standard works for CDNs either, and only serves to derail the conversation.


>Everything I say on HN reflects my own opinion and not any organizations, which is what my profile states

Sorry, but I agree with dessant. Your profile doesn't disclose your affiliation and nor do your comments. It's absolutely relevant to the discussion, because whether you want to admit it or not (or try your best to act neutral), your day job will have some influence over your opinions on these sort of projects.

You're right that it doesn't change objective facts about the specification, but I think it's misleading to suggest that, in general, external third-party CDNs are first class citizens in the AMP ecosystem when they're not treated the same within search.


Because the Department of Justice doesn't enforce anti-trust law anymore. Read the book "Chickenshit" for more details.


Also, what is the technical benefit? On Android it feels like a downgrade.


It seems faster for me across devices but there’s usability issues, at least on some sites/pages. In my experience “find on the page” is spotty or impossible on mobile sometimes.


What makes you think it feels like a downgrade? AMP results seem to, on average, load faster with a better UI to me.


Not sure why this is being downvoted. I encourage people to visit the non-AMP version of smaller news providers.

Enjoy the popups, video ads, autoplay, bloated sizes, etc.


Whenever I get an AMP page, I always go to the non-AMP version, even for smaller news providers.

I find the non-AMP version to be superior every single time, but I use NoScript so I don't see the popups, video ads, autoplay, etc.


If you use adblocking addons you don't get many of those.


Or even reader mode - I don't mind flashing the cruft for a few seconds while I turn it on, and it's the best of all worlds. The site gets their ads (briefly) loaded, and I get a clean page to read.


AMP isn't being forced on websites, they choose to enable it or not.


That's an arguable point if not enabling it means lower rankings on search... Which is problematic when both are being offered by the same company.


You either choose AMP and appear in the search carousel at the top of search results in the biggest search engine in the world. Or you choose not to implement AMP and you don't appear there.

Your "choice".


Cake or death.


Yeah and death row prisoners choose their last meal /s


There are alternative search engines of course.

I use StartPage which has the benefit of local results (but can be fully anonymous) but still uses Google search results. DDG is a great alternative, but isn't for me. And then there's Bing


Just FYI, Startpage was recently acquired by a shady company.


Huh, you're right, this flew completely below my radar.

https://restoreprivacy.com/startpage-system1-privacy-one-gro...


Wow. Did not know that, time to move along then. DDG here we go. That said, their privacy policy still looks pretty strong: https://www.startpage.com/en/search/privacy-policy.html

...and at the end of the day, who is actually better out there, and that can prove that no data is leaking? Maybe running your own searx is the only option? (http://asciimoo.github.io/searx/)


Startpage person here. Maybe I can shed some light on this. Last year, Startpage announced an investment in Startpage by System1 through Privacy One Group, a wholly-owned subsidiary of System1. With this investment, we hope to further expand our privacy features & reach new users. Rest assured, the Startpage founders have control over the privacy components of Startpage (https://support.startpage.com/index.php?/Knowledgebase/Artic...).

Also, a couple of things that set Startpage apart from DDG: 1) We're HQ'ed in the Netherlands, ensuring all our users are protected by stringent Dutch & EU privacy laws, 2) we give you Google results without tracking, 3) with Anonymous View you can visit results in full privacy.


> Their destructive AMP service confusingly shows you Google's domain instead of the website's

No, it doesn't. It's actually served from Google's URL, but it (the AMP service) shows you the original site URL (well, it shows the domain by default but that's a button that expands to the URL if you click it.)

Your address bar shows you the Google URL, but that's not misleading, either, since what the address bar has always shown is what location content is being served from, not a content identifier distinct from the mechanics of content distribution.

> they can't fix that without losing out on tracking

Nah, they could track if they worked like a classic CDN.


I mean, I generally get the gist of what you are saying, but you are saying "no, you're confused, it's not misleading..." It's kind of like saying "no, you don't have hypochondria, it's all in your head!"


I'd love to learn what % of A/B tests get rejected after the test has concluded.

I suspect there's naturally a laundry list of biases along the lines of "all the work we designed and implemented needs to succeed, or boy do we look silly."


If you want a favicon fix, use this CSS userstyle https://userstyles.org/styles/179230/google-search-old-style with the Chrome/Firefox extension "Stylus".


>Thanks for ruining the Web, Google.

When does the mass-migration to DuckDuckGo go mainstream?



Might be now? I just swapped my default search to DuckDuckGo.

I have been having several issues with google search recently, this just seems like a good time to make the jump.


Yeah for a long while I felt like an idiosyncratic person for using it and it did feel like a minor sacrifice. Nowadays it really does seem like a competitive platform on quality. Slightly less good at parsing the semantics, but much better at actually showing me search results instead of ads.


I switched yesterday. I love that it does not try to guess what I search for.


I use DuckDuckGo on mobile because it doesn't prevent me from saving images.


Why not just use Bing then? Why drink the same wine from different bottle?


It slowly moves a little bit more each day.

Results that aren't useful will change habits.


What a bad thing this AMP is... it is very hard to go directly to the website. Good that only shitty news outlets use it though


I hate to be that guy, but have you considered moving to another search engine? For example, DuckDuckGo is very decent once you adapt to it. And for the few cases where you absolutely need Google to read your mind, just add "!g" to your query and you're automatically there.


The thing which I use Google most for is typing some random place’s name in and it gives me that little card which shows how busy it typically is, what time it’s open, directions, contact number, etc. DDG just gives me a search result and 90% of the time that’s not what I want. I’m looking for information, not a list of URLs.


You can go straight to Google Maps for that. Or use the "!gm" bang in ddg :)


> I still hate it

I left a strongly worded feedback on their form


Just wait until their feedback forms get ads.


I'm sure a sentiment analysis bot looked at it and added a '-1' somewhere in their databases.


It's funny to think that, of all the products Google kills on the regular, the one thing most everyone wants them to be done with (AMP) is probably gonna stick around forever.


> This is part of Google's attempt to de-prioritise the URL

And people wonder why phishing is a thing?


> I've been in this A/B test for a couple of months now, so I've had time to adjust, and I still hate it.

I have been trying to switch from Google to DuckDuckGo for years, but it's only in the past few months that I have been successful, and I have Google to thank for that.


Google is satanic. They are a mirror of the Soviet Union. Google also works with 3rd party websites to blacklist IP addresses so you can never post on a forum. And people always say you get blacklisted because you spam. I think that's bull crap. I think people get blacklisted for their political beliefs.


They could fix this for publishers who point a subdomain that way. Or URL-rewrite the address to the publisher's.


> Thanks for ruining the Web, Google.

Wow...


It's not just ruining the web, it's essentially dishonesty. Or if you are a pragmatist: lying.


> Thanks for ruining the Web, Google.

So they helped make the web a far easier place to look around for things for a few decades, and after one layout change you say they ruined the web?

It's easy to be on the barking side.


It's likely a distaste at the culmination of a lot of anti-customer moves that Google has made over the last few years

Barking side, indeed


Google has done away with most of its original competitive advantages. Its biggest advantages now are its name recognition and its size. If Google had started off with paid search results and what-you-search-for-is-not-what-you-get we'd probably all still be using Altavista.


Google gave us a lot, but that does not mean they should not be criticized. Without emotion I can say that using Google is far worse today than it was in the early 2000s. Besides delivering the results in a much more readable format, at that time we were able to search specifically in forums and there were a lot of advanced features like "linkto:" that are not supported anymore.

The problem I see is that Google does not care about us Geeks anymore. They are 100% focused on consumers now, and that sucks a lot.


> They are 100% focused on consumers now, and that sucks a lot.

Focused on exploiting consumers, that is. Consumers are the livestock in Google’s factory farm. They are actively hostile to end users now.


Criticizing is fine, sure, but saying they ruined the web? A bit of a stretch?


That's a rather simplistic and faulty argument. There are plenty of things wrong with "lure and then abuse" scenarios, and it isn't really an argument at all to say "but look at all the good the lure phase did".


AMP obscuring the URL is a side effect of the current technical implementation, not a goal – and they are working to fix it: https://searchengineland.com/google-announces-signed-exchang...


To users, side effects are the same as goals. We have nothing to judge a vendor on except observed behavior.


It's the intended side-effect. For the longest time ever AMP wouldn't even acknowledge it's a problem (oh, we just provide the standard, it's the browsers' fault).

Then Google relented and provided a non-solution in the form of an obscure bar on top of AMP pages (in which the link to the original page is deliberately designed to not look like a link).

The signed exchanges is a bone thrown towards standards committees after all the damage has already been done.

And the "solution" has been called harmful by Mozilla directly; they are not going to implement it. Safari shares Mozilla's concern.


The parent mentioned that: "as they can't fix [the URL appearance] without losing out on tracking, they're trying to change what the URL means."


Actually, that so-called "fix" obscures the actual Google URL where the content is served from.


Yes, and that fix is worse than the original problem.


Cloudflare has an option for native AMP urls, at your own domain, as well.


Is any site using this? It requires a special certificate and signing your content (which, admittedly, Cloudflare will take care of for you), but even then it only works in Chrome and Firefox, and Apple have said they won't support it. Over a year after it was announced, I've yet to find a single site that does this.


> and as they can't fix that without losing out on tracking

Tracking what users click on as the result of a search is critical feedback for training your models/algorithms; it's not just "hey, let's see where this user goes to fine-tune ad targeting for them". And, AFAIK, every search engine out there does it(?)


They can track that with a simple JS onclick handler or a simple redirect on their server; that's not the problem. (They do both, btw.) The problem is that they want to track what I do on the pages I've already clicked through to, which is absolutely none of their business.
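
(To illustrate the distinction, here is a minimal sketch of the two click-tracking mechanisms in TypeScript; the ".result" selector and the "/click" endpoint are made up for the example, not Google's actual markup.)

    // Variant 1: swap the href for a tracking redirect at the moment of the click,
    // so the status bar shows the clean destination URL until then.
    document.querySelectorAll<HTMLAnchorElement>("a.result").forEach((a) => {
      a.addEventListener("mousedown", () => {
        const dest = a.href;
        // Hypothetical endpoint: logs the click server-side, then redirects to dest.
        a.href = "/click?dest=" + encodeURIComponent(dest);
      });
    });

    // Variant 2: skip the handler entirely and have every result link point at
    // /click?dest=... from the start; the server logs the hit and redirects.

Neither variant needs to know anything about what happens after you land on the destination page.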


Perhaps so, but I still find it highly objectionable.


Just search in a private window, then. The part of the tracking I assume you find objectionable (gathering info on what sites the current user visits) goes away when you close the window, while the part that helps the search engine (and thus results in better search results for everyone) still works.

Now sure, we can argue that maybe the company should provide options where you can say "you can use what I click on for search training but not for targeted advertising" (I think Google does provide a set of options that pretty much disable all web history/targeted ad collection), assuming you believe they follow through. But the company needs to pay for its services somehow, so I can't blame them for tying the two types of tracking together; I still have tools as a user (a private window) to avoid it if I care enough to.


> This is part of Google's attempt to de-prioritise the URL.

URLs have always been an implementation detail and not a user feature. From the very beginning it was intended that users would follow links, not type in URLs. HTML was built on hiding URLs behind text. Then AOL keywords happened. Then the search explosion happened. And short URLs. And QR codes for real-world linking. And bookmarks, because yet again typing in URLs is not a major driving use case.

Typing in un-obfuscated URLs has almost never been a key feature or use-case of the web. If anything URL obfuscation is a core building block of the web and is a huge reason _why_ the web skyrocketed in popularity & usage. Don't pretend that somehow AMP obfuscating URLs will be the death of the web. The web exploded in growth despite massive, wide-spread URL obfuscation over the last 20 years. Nothing is actually changing here.


I am not comfortable with someone else's domain becoming the de facto front door to my website.

There's nothing I can do about it, but I tend to hate it.

If my name is Mike and someone powerful calls me Chucklehead, I will have to start answering to that name in order to continue doing business.

But what REALLY concerns me is if a year later, that powerful someone calls someone ELSE Chucklehead.


Well then don't use AMP? It's your domain, it's under your control. You at least have a choice here, whereas you can't block most other forms of URL obfuscation when being linked elsewhere.


Google is deprioritizing sites without amp in search to force them to use it.

So if I want to search for something on reddit, I've already learned to use DuckDuckGo, because then I don't have to edit the URL to get rid of the AMP part or scroll up and down to find the link.
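
(The URL editing, for what it's worth, is mechanical enough to script. A rough sketch in TypeScript, assuming the common google.com/amp/s/<domain>/<path> shape; real AMP cache URLs come in more variants than this handles.)

    // Best-effort: turn https://www.google.com/amp/s/example.com/article
    // back into https://example.com/article.
    function deAmp(link: string): string {
      const url = new URL(link);
      if (url.hostname.endsWith("google.com") && url.pathname.startsWith("/amp/")) {
        let rest = url.pathname.slice("/amp/".length);
        const scheme = rest.startsWith("s/") ? "https://" : "http://";
        rest = rest.replace(/^s\//, "");
        return scheme + rest + url.search;
      }
      return link; // not an AMP viewer link; leave it alone
    }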


> Google is deprioritizing sites without amp in search to force them to use it.

Isn't that just a protection racket?


That's a super valid complaint and also completely orthogonal to any of this nonsense about the web being killed because a URL was obfuscated.


Isn't Google already the front door to everyone's website? Are the AMP URLs really that crazy? If you have a .com address, is that roughly the URL shown in search results?

Your link is being shared from Google's search results and their application, so you might not like it, but they have every right to control how it's displayed. Is it difficult to accept traffic from an AMP link? Are there technical downsides besides being called a name you don't want?


The "web" is built around "human readable" technologies. Even actual implementation details that the user doesn't care about - like the application layer protocol (HTTP) and the source code for pages (HTML, CSS) - is human readable.

The "point" of the web was to serve humans, not machines. If we wanted to serve machines, we'd just throw binary blobs around, which would be orders of magnitude more efficient.

That said, I still have a bunch of "ancient" tech magazines that had directories of URLs for (then) popular websites, grouped by category. That's how we found things then.

People forget that there was a world before Google.


> From the very beginning it was intended that users would follow links, not type in URLs.

so, about that

    4.6 Locators are human transcribable.

    Users can copy Internet locators from one medium to another (such as
    voice to paper, or paper to keyboard) without loss or corruption of
    information.  This process is not required to be comfortable.

you can't copy a page title into a client and expect it to find the right resource, now, can you?

accessing the URL is listed as one of the fundamental use cases for them, and for good reasons, detailed elsewhere in the same RFC


I agree with most of this... but have somewhat of a counterpoint to "you can't copy a page title".

You (often) can copy a page title into a client (Google) and expect it to find the right resource. This is usually done with articles, etc.

Even your comment provides a perfect example - you didn't link to RFC 1736, but the text "Locators are human transcribable" is unique enough that the first Google result is correct.

So you didn't have to provide a URL to lead someone to this page, just 'enough' unique text for it to be findable.

Which is kind of amazing - and maybe not what the original RFC intended.


counterpoint: https://i.imgur.com/ORLixJa.png

none of the results here point to the source I used

Sure, the content is the same, but the resource isn't. If anything, this demonstrates how easy it is to mislead users, directing them to a different resource while they think it's the same.

Compare with the Stack Overflow result:

https://i.imgur.com/maJi47G.png

Luckily, prominent sites get pushed to the top of the results (I had to crop the screenshot because it was submerged in advertising), but the attack vector is evident.


>You (often) can copy a page title into a client (Google) and expect it to find the right resource. This is usually done with articles, etc.

So the web was designed with Google in mind? woot


> you can't copy a page title into a client and expect it to find the right resource, now, do you?

You can type it into Google, though...


My mother opens every webpage by typing the URL into google


Because web browsers' default "home screen" has a large text box. People think that's where the address goes, but it's actually a web page which does search. Shame on us for designing it that way.

But on another note, it will be the day when your mother (or anyone else for that matter) types in 'www.walmart.com' and actually goes to 'www.target.com' (names taken from other examples that I've seen on Twitter).


That is already happening.

I made a new webpage for my father, and my mother cannot open it, since Google does not know about it and there are no links to it.


Maybe Firefox should change the default screen from containing search to containing the address bar instead. That way if you type in a proper URL there it should go directly to it rather than doing a search.


Not going to happen, that very search box is Firefox's main source of income.

There is unfortunately a tax on stupidity, and in this day and age it is paid with one's personal data to multinationals.



Thank you.


> Typing in un-obfuscated URLs has almost never been a key feature or use-case of the web

Making people type in URLs is not a key feature of the web. Letting people interact with their various signifiers has always been.

Too many people involved in UX and product and commentary have forgotten that "don't make me think" doesn't mean "don't let me think."

The URL is a set of half a dozen affordances.


URLs have always meant something to users. It's how you trust the content, and indeed the link. And now, with so many exploits and so much phishing, it's even more important. Typing in a URL might be rarer, an honour reserved only for google, facebook, etc., but reading URLs is very important.


Let's distinguish between obfuscating the query parameters, the path, and the domain. They are different things, and the domain name especially is a major security boundary.
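
(Concretely, a small TypeScript sketch using the standard URL API; the example address is made up.)

    const u = new URL("https://news.example.com/2020/01/article?utm_source=feed#top");
    // The security boundary: who you are actually talking to.
    console.log(u.hostname);  // "news.example.com"
    // The resource on that host: useful context, but no trust decision hangs on it.
    console.log(u.pathname);  // "/2020/01/article"
    // Tracking and state: the part that results pages most often hide or strip.
    console.log(u.search);    // "?utm_source=feed"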


Most of what I mentioned did obfuscate the domain name. Users do not care about domain name security boundaries - that's yet again an implementation detail. An important one, but still not something most users recognize or care about. Hence, you know, why phishing is so successful.


Users used to care about paths and other descriptors. A decade ago, we were fighting for human readable resource locators. However, users have been progressively taught to ignore them as too complex. (Just the same as happened with any advanced but user-accessible browser controls.) Why is it that 25 years into web usage, any controlled access is deemed too complicated? (Mind that other technologies were already approaching the end of their lifespan at a similar age. The web is neither new nor disruptive anymore.)


> Why is it that 25 years into web usage, any controlled access is deemed too complicated?

Two words: corporate greed. It's so much easier to persuade and herd people to where they can be "monetised" when they don't know how things work and can't figure out how and where to learn.


Somehow the knowledge industry has turned dumbing down into a business. There's a TV ad for a smart-speaker-integrated car, showing a hip but managerial type downloading the route to his workplace to his car by a voice command from home and subsequently happily arriving at work. How do you manage to make a living if you can't memorize your daily drive from home to work? How is this a product? Has augmentation of human intellect turned into a zero-sum game?


I use navigation to avoid heavy traffic. Most of the time it changes nothing, but sometimes it saves 30 minutes of driving.


This is categorically wrong. The URL is a fundamental part of the web, and without it the Internet couldn't be decentralized. Imagine if I had to reach Twitter by typing "twitter" somewhere; that somewhere now becomes the gatekeeper. The URL allows distributed gatekeepers. A mechanism with a distributed structure needs something like URL infrastructure.


You're talking about writing URLs as if that's the only purpose they serve. You also read URLs.

Just as I'm not going to type a long path to a file in my file browser or CLI, I'm not going to type the full URL character by character. But being able to see the path still provides extremely useful information.


Sidenote: your comment is getting downvoted, huh. Though I don't agree with you, I'm happy to see someone with opposing arguments. Have an upvote, as you're contributing greatly to this conversation.


To be fair, didn't internet marketers ruin things first by cramming keywords into URLs and page titles to game search engines?


Two wrongs...


Just saying that IMO Google isn't ruining the internet. Of course, with every decision being scrutinized, they can't have a perfect record. They have a search engine that, due to its popularity, is the target of all kinds of stats gaming and optimizing, while at the same time they're trying to grow a profitable product. So it's not like there are simple solutions, and easy quips like "Google ruined the internet" sound like a lazy argument.

