This is part of Google's attempt to de-prioritise the URL. Their destructive AMP service confusingly shows you Google's domain instead of the website's — and as they can't fix that without losing out on tracking, they're trying to change what the URL means.
Thanks for ruining the Web, Google.
Google AMP overrides link handling on mobile, even on Android, so instead of web links opening in the user's chosen default app, the AMP link opens within Google Search or the browser. There is no way to turn off this behaviour in any browser, Chrome or not.
In Gmail, for instance: Settings → General settings → "Open links in Gmail" (turn on for "faster" browsing).
The default is on. Tip: use Firefox Focus as the default browser to open all links; if a link warrants a further look, copy the URL to your main browser.
The above has nothing to do with amp.
But not to detract from the point: AMP truly is a plague.
I find their UX approach to SEO more concerning than, say, scoring. For example, non-AMP pages will receive the same score as equally performant AMP content; however, they won't benefit from the carousel view, instant page loads (prefetch), and so on.
Small changes like this, applied at huge scale to a user base that just can't afford to constantly fight hostile UX, are damaging, regardless of whether they come from Google, an airline site, your mailbox switching to "Promotions" at random intervals, or Twitter not allowing you to permanently change the feed order.
The Internet feels more hostile than it used to. I don't mind the trolls (I can always block them), but I do mind that we depend on services that deploy hostile UX practices.
It's not the Internet, it's the software.
You can really feel it when switching from Android to LineageOS, or from Windows to Linux.
It's the difference between software where "you are the product" and software that has been created to serve the user, to be the best tool it can be.
I'm not talking about UX in the sense that open-source software can sometimes feel clunky and unfinished; I'm talking about the breath of freedom you feel when you switch and suddenly notice that this current you have been forced to swim against just isn't there any more.
I helped a friend get LineageOS on his bootlooped phone, and he's so happy with it. All these features in the settings that are actually helpful, whereas on stock Android you always have to second-guess: you know that feeling when a setting's description is so vague and uninformative ("this is probably going to spy on me") that you can't tell whether it's positive or negative, because the switch can go either way. All the very basic features that would otherwise require an ad-filled app are, of course, already there. You get privacy controls that aren't mystifying and don't get reset on updates.
Similarly, I've now been on Instagram for about half a year. I always avoided FB, but there's some art I wanted to put out there, so I gave it a try. And oh my god, Instagram is probably the shittiest software ever? Or at least the most user-hostile. It's a social thing where you can share images, comments and messages. Except it isn't; it just appears to be one. Literally every interaction feels engineered to show me more ads, to make me spend more time in the app (??), and mainly to throw up constant barriers against interacting with any software or data outside its ecosystem. You can't upload from your laptop, many links are not clickable, many text fields are not copyable, most features in the browser are locked unless you make it pretend to be an iPhone (!!), and you can't post or reply to comments there either. The chat is such basic functionality that it seems hard to fuck up, but they did. I should stop, but I could go on...
This is that feeling of constantly swimming against a current, and somehow we're tricked into believing that this is how it's supposed to be, because... I don't know. When I complained, some people told me that most people don't use Instagram the way I do. Well, I guess, but that doesn't seem to be most people's choice.
Instagram is not some generic photo sharing software that tries to be open and modular and integrate with everything and proliferate arbitrary visual media with a rich tightly coupled messaging system. It was never that and won't ever be that.
Instagram from the start was just about taking low-res pictures with your phone camera, putting a filter on them so they look less terrible, and then sharing them with your friends. Every other feature was begrudgingly added to increase accessibility and hence DAUs. So you were never supposed to be able to interact with anything outside the app. You can't cross-post your posts to Facebook or Twitter, you can't post from your computer, you can't spam links in your photo descriptions. All of this is literally the point of Instagram. It was like this before it got acquired, and for a short while after, and people loved it, not in spite of the restrictions but because of them.
Then Zuck crammed it full of ads and a terrible glommed on messaging system and ruined it.
I understand that what you describe is what Instagram was, but given what they have both become, they actually have little functional difference.
You haven't tried snapchat btw. It keeps getting worse. I'm sure many of us in here are feeling like dinosaurs, unable to fit in.
> In Gmail, for instance: Settings → General settings → "Open links in Gmail" (turn on for "faster" browsing).
I changed this setting, but now I clicked a link in gmail and indeed it opens via Firefox, but it still gets redirected through a google-URL before getting to the real page. I want to disable that behaviour most of all.
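For what it's worth, that redirect just wraps the destination in a query parameter, so it can be stripped mechanically. A minimal Python sketch, assuming the destination sits in the q parameter (which is what Gmail's google.com/url links have used; some variants use url= instead):

    from urllib.parse import urlparse, parse_qs

    def unwrap_google_redirect(link: str) -> str:
        """Return the real destination if `link` is a google.com/url redirect."""
        parsed = urlparse(link)
        if parsed.netloc.endswith("google.com") and parsed.path == "/url":
            params = parse_qs(parsed.query)
            for key in ("q", "url"):  # destination parameter (assumed)
                if key in params:
                    return params[key][0]
        return link  # not a redirect; pass through unchanged

    print(unwrap_google_redirect(
        "https://www.google.com/url?q=https://example.com/article&sa=D"))
    # -> https://example.com/article

An extension doing this to every clicked link would remove the hop entirely; the sketch only shows the string surgery involved.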
Unfortunately iOS doesn't support this nearly as well as Android does. On Android you can enforce this pretty much everywhere whereas on iOS a bunch of apps still open links in Safari no matter what you do.
Notably, there is absolutely no way for the end user to disable it, short of spoofing the user agent.
Here is a way for Firefox: https://addons.mozilla.org/en-US/firefox/addon/amp2html/
I posted the solution below, which I found a few days ago. The script works with Greasemonkey and Tampermonkey, and it will give you results similar to the ones before the change. If you also use uMatrix, there will be no ads.
Sorry to be "that guy" but for me the solution is to use DuckDuckGo.
I run an instance on my local network, but you can run it for free on Heroku, AWS or GCP or even on a Raspberry Pi. There are several Docker images you can choose from.
Edit: Looks like Dogpile still works, and flags the google ads properly. I'm switching back!
Just as a contrived example: a search like "R33 RB25DET Motorsport ECU" typically got me specific links relating to the exact car, engine and topic. But for the past 5 years or so it seems to weight the more common and more general terms, often excluding the target topic altogether and just giving general motorsport results. Perhaps it's a consequence of every SEO specialist and their dogbot hammering general search terms and gumming up the machine with cruft.
The results just seem really bad lately, especially for anything technical. Just now I'd been looking for "html5 canvas torture test". The top result is a video called "Torture Testing my Nut Sac!!" and then some videos about testing Glock guns. Umm, no, that's not even close to what I'd wanted. (Bing does way better here and DDG is somewhere in between.)
I'm not sure what Google engineers are using to find technical information on the web these days, but I can't imagine it's the public Google search.
At least both of us are pretending to the same level of intelligence, which takes away a lot of the irritation.
I also tried talking to DDG like a duck, but it doesn't give as good results as talking to Google like an idiot.
Top result doesn't have the word responsive, very handy.
But psychologically it's rather different. If you find the Google search page to be visually aversive then your goal is to get in and get out quickly. That's a bit harder if Google is your default search.
!s your search terms
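(For anyone unfamiliar: !s is DuckDuckGo's bang for Startpage. The mechanics are nothing magical; here's a toy Python sketch of what a bang expansion amounts to, with a made-up two-entry shortcut table standing in for DDG's real list of thousands:)

    from urllib.parse import quote_plus

    # Illustrative shortcut table, not DDG's actual data.
    BANGS = {
        "!s": "https://www.startpage.com/do/search?query={}",
        "!w": "https://en.wikipedia.org/wiki/Special:Search?search={}",
    }

    def expand_bang(query: str) -> str:
        bang, _, terms = query.partition(" ")
        template = BANGS.get(bang)
        if template:
            return template.format(quote_plus(terms))
        return "https://duckduckgo.com/?q=" + quote_plus(query)

    print(expand_bang("!s your search terms"))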
It is getting more and more annoying needing workarounds for everything, though. It's more annoying because I liked Google's layout, default colors and so on. It was cleaner.
Now not only is their search quality getting worse (I got what I asked for on Bing, of all places), their presentation has managed to degrade too.
If this is the result of A/B testing, I'd be curious to see the data that informed the decision.
Just being able to keep scrolling and seeing more results is very helpful. I guess it increases the value of the "first page" for Google.
That's a solution, but not a particularly great one since it requires too much from the user to see mass adoption.
...which brings me to another great point this illustrates: if you want to customise your experience, if you want to be able to control how you see the Web, then you need to make an effort, and the amount you exert is essentially proportional to how much you can change.
Yet the majority of users have shown that they are willing to take whatever Google throws at them with little opposition. I find that a little sad and ironic in this era of "everyone can code" propaganda (I've seen even Google advertise something like that on its homepage); or perhaps the latter is just an attempt to increase the population of intelligent yet docile and obedient corporate drones... I know developers (web developers!) who really hated the changes yet made no effort to fix things themselves, despite almost certainly having the skills to.
Yes, the UI is still skinnable, and the default skin is rather... psychedelic.
There's something to be said about the adblocker being a filtering proxy, it can really get anything before it hits the browser.
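To make that concrete: a toy filtering proxy in Python. The point is just that the proxy sees the full request before the browser gets anything back, so it can refuse matching URLs outright. (Plain HTTP only; HTTPS needs CONNECT handling and certificate tricks, which is exactly what makes modern filtering proxies harder to build.)

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import urllib.request

    # Illustrative blocklist; a real filter would load proper rule lists.
    BLOCKLIST = ("doubleclick.net", "google-analytics.com")

    class FilterProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # When the browser uses us as an HTTP proxy, self.path is
            # the absolute URL of the request.
            if any(host in self.path for host in BLOCKLIST):
                self.send_response(403)
                self.end_headers()
                return
            with urllib.request.urlopen(self.path) as resp:
                body = resp.read()
                status = resp.status
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8888), FilterProxy).serve_forever()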
Do you know how it compares to Privoxy nowadays? Way back then, Privoxy was the open-source but harder-to-configure alternative that didn't quite work as well as Proxomitron. But maybe it has continued development and got better; I don't know what direction that project took.
Oh and I personally always really liked the default skin :D
Nope: it's green, and hasn't been updated since June... of 2003. I have no idea how it manages to be effective against the modern web, considering that in 2003 the biggest issue was annoying pop-up Flash ads, which no longer exist. Maybe there are updated plugins or something.
Making profit, obviously. They want you to be less efficient at distinguishing ads from results, increasing the time you spend looking at ads and the chances of you interacting with them.
The strength of a site's SEO is irrelevant to the ads. The only thing Google hates is when sites manipulate themselves to rank higher while offering a worse user experience.
It wouldn't be very surprising if Google varies the number of ads in a search results page based on the search term. For sites that have strong SEO for all of their key search terms that would be indistinguishable from Google placing more ads in pages where that site ranks highly.
Occasionally you can find pockets of less competitive searches that 1) allow ads, 2) relate to your product via the algorithm even if they don't to a human brain, and 3) align with your desired audience. These can give great returns.
SEO really translates to "how to fool Google into boosting your ranking artificially".
So much so that I now specifically use their search tool rather than go through google just in case some interesting thing pops up.
I wish by each search result there was a button that said “banish this domain to oblivion, I never want to see it again.”
You could improve search really fast that way if you still cared about things like that.
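Until a search engine offers that button, you can approximate it client-side by post-filtering whatever results you get. A toy sketch; the result shape and the blocklist are made up for illustration:

    from urllib.parse import urlparse

    BANISHED = {"pinterest.com", "quora.com"}  # domains banished to oblivion

    def visible(results):
        """Drop results whose host is (or is under) a banished domain."""
        for r in results:
            host = urlparse(r["url"]).hostname or ""
            if not any(host == d or host.endswith("." + d) for d in BANISHED):
                yield r

    results = [
        {"title": "Useful post", "url": "https://example.com/post"},
        {"title": "Login wall", "url": "https://www.pinterest.com/pin/123"},
    ]
    print([r["title"] for r in visible(results)])  # ['Useful post']

This is essentially what the various "personal blocklist" browser extensions do by rewriting the results page in place.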
It's silly that it's necessary to create a separate bookmark for this, though. Native search engines, surprisingly, don't support keywords there.
I feel silly adding bookmarks for the things I already have in search engine list.
Check it out, because it sounds like it might start to match your workflow: https://duckduckgo.com/bang?q=
Google used to put the Wiki article right at the top of the results list. It virtually never does that anymore. This is what's bullshit.
The point of a good search engine is that it is supposed to conglomerate good results, relevant results - let's say I'm looking up 'Phillip J. Fry' from 'Futurama', but I still want wiki information. Wikipedia won't even spellcheck for you if you don't know how to spell something correctly, like a city name.
If I use a search engine, I'll get this Wikipedia result:
But I will also get the far more informative Futurama-wiki result:
Wikipedia is not a search engine. Although, at this point, Google is barely one either: it's so plastered with sponsored results that it can be hard to find the result you're looking for. With this change, I've finally made the long-needed jump to DuckDuckGo.
Yes, it's time to 'stop the fucking bullshit', and save all our mental states - searching Wiki isn't going to solve that - but not using Google can help. ;)
Comparing a search engine to wikipedia search is like comparing a search engine to a local file search.
If you're looking for a specific driver on your computer that you know the exact spelling and version number of, a local file search will help you find that. A search engine will return many results with download links as well as potentially other drivers, or other versions of drivers for your product - and will generally forgive you if you misspell something.
Do you find their search tool more effective than appending "wikipedia" to your Google search?
I also use hoogle a lot for work so I'm used to switching search tool.
Me too. It just looks ugly.
Google is really not good at creating beautiful products...
Bing also uses the AMP standard: https://blogs.bing.com/Webmaster-Blog/September-2018/Introdu...
All these certificates do is make it so Google's browser (and only Google's browser) will mask the fact you're on Google's domains if you sign the file a certain way.
If anything, this shows more anti-competitive practices: they're adding features into their browser that specifically benefit features of their search engine.
If you visit the page from search results (which is the only place it would be linked) then it would never leave Google's domain.
Here's the actual URL used from search results: https://amp-businessinsider-com.cdn.ampproject.org/v/s/amp.b...
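Side note: that cache subdomain is a mechanical encoding of the publisher's domain (dots become dashes, pre-existing dashes are doubled), so it can be reversed. A rough Python sketch of the common case, ignoring the punycode/IDN corner cases the real scheme also covers:

    from urllib.parse import urlparse

    def amp_cache_to_origin(url: str) -> str:
        """Best-effort decode of an ampproject.org cache host, e.g.
        amp-businessinsider-com -> amp.businessinsider.com."""
        host = urlparse(url).hostname or ""
        suffix = ".cdn.ampproject.org"
        if not host.endswith(suffix):
            return url
        encoded = host[: -len(suffix)]
        # '--' encodes a literal '-', a lone '-' encodes a '.'
        sentinel = "\x00"
        decoded = (encoded.replace("--", sentinel)
                          .replace("-", ".")
                          .replace(sentinel, "-"))
        return "https://" + decoded

    print(amp_cache_to_origin(
        "https://amp-businessinsider-com.cdn.ampproject.org/v/s/..."))
    # -> https://amp.businessinsider.com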
If the criteria is just "needs extra work" then unfortunately almost nothing can change and we're all going to live with the existing technology. Change inherently has friction and requires "extra work" with the hope that's an investment which provides returns long term.
In other words, say you are a large Internet company that is trying to improve web page loading times. You profile why most web pages are slow and identify issues. You publicly report on those issues and develop guidelines and criteria. Nobody bothers because "extra work". You develop new technology that directly addresses those issues, this technology works within the existing environment but it requires both client and server support to be most effective. Do you think anyone cares? No, because of "extra work". That's why there needs to be incentives. Now you have a "penalty" for not doing that "extra work". You can file it under "it's anti-competitive" (maybe it is) but if you do the "extra work" then suddenly the anti-competitive part works for you, not against you. IMO that's why it's not anti-competitive.
Other examples: why do you think so many people complained when the iPhone shipped without a Flash player? "Extra work." Similarly when the headphone jack was removed. Change is friction and friction is extra work. But most of the time that's not anti-competitive...
HTML is already fast (see HN for an example). HTML is already universal across devices and browsers. HTML is already published and easily cached for billions of pieces of content.
AMP is a fork of HTML that only targets mobile browsers specifically linking from Google search results. It's useless on its own, but AMP is required to get higher placement on search results pages, so publishers are effectively forced to spend technical resources to output an entirely new format just to maintain that ranking.
If Google wanted faster pages then it can do what it always does and incentivize that behavior by ranking results based on loading speed. These signals are already collected and available in your Google webmaster console. There's nothing new to build, just tweak ranking calculation based on existing data. Sites would get faster overnight, and they would be faster for every single user because HTML is universal.
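To underline how little machinery that would take, here's a toy sketch of such a ranking tweak. The numbers and the formula are made up for illustration; nothing here is Google's actual scoring:

    def speed_adjusted_score(relevance: float, load_ms: float,
                             budget_ms: float = 1000.0,
                             weight: float = 0.2) -> float:
        """Discount a result's relevance the further its measured load
        time exceeds a budget (illustrative formula only)."""
        overage = max(0.0, load_ms - budget_ms) / budget_ms
        return relevance / (1.0 + weight * overage)

    print(speed_adjusted_score(0.9, 800))   # within budget: 0.9
    print(speed_adjusted_score(0.9, 4000))  # 3x over budget: 0.5625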
Do you know why they didn't do that? Because it's the ads and tracking that are slow, not the HTML. Google's DoubleClick and Google Analytics are the biggest ad-serving and tracking systems used on the web. The entire AMP project was created to circumvent their own slow systems. It creates more work for publishers while increasing data collection, by running a "free" CDN that never leaves a Google-owned domain and thereby always supports first-party cookies. It's a perfect defence against anti-tracking browsers, and it's why Chrome can now also block third-party cookies: that won't affect AMP hosted on Google domains.
Probabilistic techniques are used for anonymous users or environments like iOS Safari that are very strict.
In other words: if it behaves exactly as a page hosted on your site (just faster), why do you care?
I'm getting the impression that HN users care a whole lot about seeing the request in the nginx log they are tailing.
Google is strong-arming the entire web to switch to AMP in order to increase their control over the distribution of content, and to be in a better position for tracking users.
The fact that Microsoft and Cloudflare have joined the party does not change the fact that you're about to lose control over your own content if this is not stopped.
Please disclose your affiliation to Google either in your bio or in comments, and don't post the same comment in multiple places.
This...doesn't make sense. You lose the value of a CDN (both to you and to the consumer of your content, in this case Google and the end user) if you're rolling your own.
We were talking about CDNs because your colleague mentioned AMP CDNs, but the main point doesn't change: we cannot serve our own content from our own servers and get the same placement in search results as AMP content, even if our content loads verifiably fast and is as performant as an AMP page on the client.
I have no clue who bdeurst is. They certainly aren't a colleague of mine.
> even if our content loads verifiably fast and is as performant as an AMP page on the client.
Can you explain to me how your page load time is 0ms? My understanding is that a correctly functioning AMP-cached page will load for the user in a whopping 0ms, because it can be preloaded.
The entire design of AMP starts from a fairly straightforward premise: "How do we reduce (user-visible) page load times to 0, safely, cross-origin?" If your page's user-visible loading time is longer than 0, you're failing to keep up with AMP.
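The "0ms" figure is just the prefetch trick: fetch while the user is still reading the results page, then serve the click from local cache. A toy Python sketch of the principle (not AMP's implementation, which does this inside the browser with extra privacy constraints):

    import time
    import urllib.request

    _cache: dict[str, bytes] = {}

    def prefetch(url: str) -> None:
        # Done speculatively while the results page is still on screen.
        with urllib.request.urlopen(url) as resp:
            _cache[url] = resp.read()

    def navigate(url: str) -> float:
        """Return the user-visible wait in milliseconds."""
        start = time.perf_counter()
        body = _cache.get(url)
        if body is None:  # cache miss: pay the full network cost
            with urllib.request.urlopen(url) as resp:
                body = resp.read()
        return (time.perf_counter() - start) * 1000

    prefetch("https://example.com/")
    print(f"{navigate('https://example.com/'):.3f} ms")  # effectively 0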
prefetching isn't private cross origin:
> Along with the referral and URL-following implications already mentioned above, prefetching will generally cause the cookies of the prefetched site to be accessed.
IDK about you, but I'd generally prefer that my cookies and IP not be exposed to all of the links that happen to be in the first page of search results.
Yes, for example Signed Exchanges, which on a technical level solves all of the problems of rel=prefetch (and a number of the problems with AMP, like link pasting and copying).
> Should we allow a handful of companies to be pinged every time we load a page on the web
I'm hopelessly confused here: you're only going to "ping" one of the handful of companies if you were referred by that company. (In a world with signed exchanges) You're not going to come across an AMP-cache link organically. You'll navigate to example.com directly, without anyone except example.com (and your DNS provider) knowing. The cache provider will only know if you navigate to the cached site via the cache provider. Concretely, you don't go to the Google amp-cache unless you're navigating there directly from Google's search results. Same for Microsoft/Bing.
So if your metric is
Then yes, absolutely, because nothing changes!
Edit: To address your other question,
> should we give Google and a handful of other companies disproportionate control over how we publish and consume content, for 50ms of load time?
Alright: how is AMP materially different from <whatever other algorithmic choices rated search results before>?
You seem to be claiming that AMP is harmful to someone, but to whom? It's not harmful to competitors or to end users, and it's only harmful to developers if you make the most strained argument.
My premises here are that users actually prefer AMP results. You may not, but my understanding is that most users do. So from the perspective of an end user browsing the internet, AMP leads to an improved experience.
So it's good for users.
No one has yet been able to explain to me how it's actually harmful to a web developer, who now has an incentive to make AMP-compatible sites. Sure, you now have to work with a framework you may not like, but that's not a compelling argument when people are claiming that AMP is a threat to the sanctity of the internet.
So it's not exactly bad for web developers; it's just sort of a lateral move.
That leaves competitors to the giants. But AMP is an open standard, and DDG could, if they wished, implement an AMP cache themselves today and it would just work. And they'd, if anything, benefit from the bigger players pushing that ecosystem. There's the potential for abuse via the caches.json registry, but the AMP project is aware of this and notes that the registry could be decentralized using Subresource Integrity or similar, if such a standard was adopted.
So again: I'm confused by how exactly it's bad, beyond the "I am forced to develop in a way I don't want to if I want to appear near the top of the search page", which isn't new.
> So if your metric is
Google being pinged obviously isn't my main metric, as you can see from all my posts in our discussion. My main concern is that publishers will be forced to use specific publishing mechanisms (AMP, Signed Exchanges) to appear at the top of Google Search results. That loss of control puts publishers in a vulnerable position, and hurts innovation across the web.
> Then yes, absolutely, because nothing changes!
Everything changes. Google's influence and control won't end at the moment the user navigates away to a top result on Google Search.
And iiuc, sites are still free to revoke their certs. So this is actually probably more secure compared to something like https in that regard.
Also, everything I say on HN reflects my own opinion and not any organization, which is what my profile states. I do not hide behind an anonymous username precisely for this reason. Poisoning the well by doxxing me doesn't change how the AMP standard works for CDNs either, and only serves to derail the conversation.
Sorry, but I agree with dessant. Your profile doesn't disclose your affiliation and nor do your comments. It's absolutely relevant to the discussion, because whether you want to admit it or not (or try your best to act neutral), your day job will have some influence over your opinions on these sort of projects.
You're right that it doesn't change objective facts about the specification, but I think it's misleading to suggest that, in general, external third-party CDNs are first class citizens in the AMP ecosystem when they're not treated the same within search.
Enjoy the popups, video ads, autoplay, bloated sizes, etc.
I find the non-AMP version to be superior every single time, but I use NoScript so I don't see the popups, video ads, autoplay, etc.
I use StartPage which has the benefit of local results (but can be fully anonymous) but still uses Google search results.
DDG is a great alternative, but isn't for me.
And then there's Bing
...and at the end of the day, who is actually better out there, and that can prove that no data is leaking? Maybe running your own searx is the only option? (http://asciimoo.github.io/searx/)
Also, a couple of things that set Startpage apart from DDG: 1) We're HQ'ed in the Netherlands, ensuring all our users are protected by stringent Dutch & EU privacy laws, 2) we give you Google results without tracking, 3) with Anonymous View you can visit results in full privacy.
No, it doesn't. It's actually served from Google's URL, but it (the AMP service) shows you the original site URL (well, it shows the domain by default but that's a button that expands to the URL if you click it.)
Your address bar shows you the Google URL, but that's not misleading, either, since what the address bar has always shown is what location content is being served from, not a content identifier distinct from the mechanics of content distribution.
> they can't fix that without losing out on tracking
Nah, they could track if they worked like a classic CDN.
I suspect there's naturally a laundry list of biases along the lines of: all this work we designed and implemented needs to succeed, or boy do we look silly.
When does the mass-migration to DuckDuckGo go mainstream?
I have been having several issues with google search recently, this just seems like a good time to make the jump.
Useless results will change habits.
I left strongly worded feedback on their form.
And people wonder why phishing is a thing?
I have been trying to switch from Google to DuckDuckGo for years, but it's only in the past few months that I have been successful, and I have Google to thank for that.
So they helped make the web a far easier place to look things up for a couple of decades, and after one layout change you say they've ruined the web?
It's easy to be on the barking side.
Barking side, indeed
The problem I see is that Google does not care about us Geeks anymore. They are 100% focused on consumers now, and that sucks a lot.
Focused on exploiting consumers, that is. Consumers are the livestock in Google’s factory farm. They are actively hostile to end users now.
Then Google relented and provided a non-solution in the form of an obscure bar on top of AMP pages (in which the link to the original page is deliberately designed to not look like a link).
Signed Exchanges are a bone thrown to the standards committees after all the damage has already been done.
And the "solution" has been directly called by Mozilla harmful, they are not going to implement it. Safari shares Mozilla's concern.
Tracking what users click to as the result of a search is critical feedback information for training your models/algorithms, it's not just about "hey let's see where this user goes to fine tune ad targeting for them". And, AFAIK, every search engine out there does it(?)
Now sure, we can argue that maybe the company should provide options where you can say "you can use what I click on for search training but not for targeted advertising" (I think Google does provide a set of options that pretty much disable all web history/targeted ad collection), assuming you believe they follow through. But the company needs to pay for its services somehow so I can't blame them for tying the two types of tracking together, I still have tools as a user (private window) to avoid it if I care enough to.
URLs have always been an implementation detail and not a user feature. From the very beginning it was intended that users would follow links, not type in URLs. HTML was built on hiding URLs behind text. Then AOL keywords happened. Then search explosion happened. And short URLs. And QR codes for real-world linking. And bookmarks because yet again typing in URLs is not a major driving use case.
Typing in un-obfuscated URLs has almost never been a key feature or use-case of the web. If anything URL obfuscation is a core building block of the web and is a huge reason _why_ the web skyrocketed in popularity & usage. Don't pretend that somehow AMP obfuscating URLs will be the death of the web. The web exploded in growth despite massive, wide-spread URL obfuscation over the last 20 years. Nothing is actually changing here.
There's nothing I can do about it, but I tend to hate it.
If my name is Mike and someone powerful calls me Chucklehead, I will have to start answering to that name in order to continue doing business.
But what REALLY concerns me is if a year later, that powerful someone calls someone ELSE Chucklehead.
So when I search for something on reddit, I've already learned to use DuckDuckGo, because then I don't have to edit the URL to get rid of the AMP part, nor scroll up and down to find the real link.
Isn't that just a protection racket?
Your link is being shared from Google's search results and their application, so you might not like it but they have every right to control how it's displayed. Is it difficult to accept traffic from an AMP link? Are there technical downsides besides being called a name you don't want?
The "point" of the web was to serve humans, not machines. If we wanted to serve machines, we'd just throw binary blobs around, which would be orders of magnitude more efficient.
That said, I still have a bunch of "ancient" tech magazines that had directories of URLs for (then) popular websites, grouped by category. That's how we found things then.
People forget that there was a world before Google.
so, about that
> 4.6 Locators are human transcribable.
>
> Users can copy Internet locators from one medium to another (such as voice to paper, or paper to keyboard) without loss or corruption of information. This process is not required to be comfortable.
Accessing the URL is listed as one of the fundamental use cases for them, and for good reasons, detailed elsewhere in the same RFC.
You (often) can copy a page title into a client (Google) and expect it to find the right resource. This is usually done with articles, etc.
Even your comment provides a perfect example - you didn't link to RFC 1736, but the text "Locators are human transcribable" is unique enough that the first Google result is correct.
So you didn't have to provide a URL to lead someone to this page, just 'enough' unique text for it to be findable.
Which is kind of amazing - and maybe not what the original RFC intended.
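The phrase trick is mechanical enough to script, too. A small sketch that builds an exact-phrase query URL (ordinary q parameter, nothing exotic):

    from urllib.parse import quote_plus

    def phrase_search_url(text: str) -> str:
        """Quote the text so the engine searches for the exact phrase."""
        return "https://www.google.com/search?q=" + quote_plus(f'"{text}"')

    print(phrase_search_url("Locators are human transcribable"))
    # https://www.google.com/search?q=%22Locators+are+human+transcribable%22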
None of the results here point to the source I used.
Sure, the content is the same, but the resource isn't; if anything, this demonstrates how easy it is to mislead users by directing them to a different resource they think is the same one.
Compare with the Stack Overflow result:
Luckily, prominent sites get pushed to the top of the result queue (I had to cut the list because it was submerged in advertisements), but the attack vector is evident.
So the web was designed with Google in mind? Woot.
You can type it into Google, though...
But on another note, it will be the day when your mother (or anyone else for that matter) types in 'www.walmart.com' and actually goes to 'www.target.com' (names taken from other examples that I've seen on Twitter).
I made a new webpage for my father, and my mother cannot open it, since Google does not know about it and there are no links to it.
There is unfortunately a tax on stupidity, and in this day and age it is paid with one's personal data to multinationals.
Making people type in URLs is not a key feature of the web. Letting people interact with their various signifiers has always been.
Too many people involved in UX and product and commentary have forgotten that "don't make me think" doesn't mean "don't let me think."
The URL is a set of half a dozen affordances.
Two words: corporate greed. It's so much easier to persuade and herd people to where they can be "monetised" when they don't know how things work and can't figure out how or where to learn.
Just as I'm not going to type a long path to a file in my file browser or on the CLI, I'm not going to type the full URL character by character. But being able to see the path still provides extremely useful information.