Announcing AMP Real URL (cloudflare.com)
190 points by chdaniel on April 17, 2019 | 227 comments



This is a "solution" to a "problem" that AMP itself created. And in the process it creates additional complexity[1] and a new potential revenue stream[2] for middlemen providing a service that shouldn't be necessary in the first place.

Astounding.

And oh yeah:

> AMP Real URL is only supported in the Chrome browser at this time, but we are optimistic it will be supported more widely as its benefit to Internet users becomes clear.

Who needs web standards, right? [3]

[1] Especially egregious because it's making SSL identity validation even more complex than it was before. I'm sure this will make getting security "right" even harder.

[2] Or in this case a new barrier to competition:

> After speaking with publishers and with Internet users, we have decided not to charge for AMP Real URL. This is not because our customers haven’t been excited or willing to pay for it, AMP makes up a huge component of many site’s traffic. Our motivation is the same as for offering CDN or SSL services to millions of customers free of charge, we are here to help build a better Internet and improving AMP is a huge step in that direction.

Edit: [3] dfabulich (below) has correctly pointed out that this is in the process of being standardized: https://tools.ietf.org/html/draft-yasskin-http-origin-signed...

Despite this, Firefox has currently marked this proposal as "harmful" (https://mozilla.github.io/standards-positions/). It seems to me as though Google may be ramming this one through despite objections.


> Who needs web standards, right?

No one needs to follow web standards when one browser gets so big that it can completely dominate the technology and the corporation that runs it can force anything they want down everyone's throats! If you don't want to see things like this happening (and you shouldn't, AMP is a disgusting technology), stop using Netscape, err, I mean, Chrome.


Microsoft is in favor of it. The Chrome team generally ships stuff if at least one other major browser vendor approves.


Does Microsoft still count as a major browser vendor, given that they'll be shipping a Blink-based browser? Of course, the same thing might have been said back when Chrome was using WebKit too… and I think it was.


The major browser vendors are:

  * Google (Blink -> Chrome, Edge)
  * Apple (WebKit -> Safari)
  * Mozilla (Gecko -> Firefox)
Given that Microsoft has itself formally announced that EdgeHTML is abandonware, they have no ability to vote on future browser standards. That's not an opinion; it's the unavoidable consequence of not being the developer of a web browser engine. Personally I wish they still had a vote.


Chrome had a vote when they were on WebKit.

If your browser has a bunch of users, and if you can (and do) opt to block features in your fork, you're a major browser.

Microsoft has ripped a fair amount of stuff out of Chromium already. https://www.theverge.com/2019/4/8/18300772/microsoft-google-...


None of those Chrome features ripped out by Microsoft are web standards or web standard proposals. They are nearly all application support features or user conveniences; most of them tied in some way to the Google ecosystem.

Google maintained their own fork of WebKit with countless feature deviations from the main tree. If Microsoft builds a track record of maintaining their own fork to hold back or modify actual web standard proposals written into the Blink engine, then I would agree: they would regain eligibility for that list.


On a similar note, MS is offering (or will offer) many of those features themselves, tied to MS/Live services.

Of course, I'd rather have it this way, since MS can at least ship a browser that keeps pace with the rest of the world. Edge would only reach rough parity about twice a year, and it has often been the biggest of my headaches lately. Fortunately I no longer have to support any version of IE (only browsers with async function support).


Does the ability to do so give them a spot on the list, or do they have to flex the muscle before they can prove that it's there?


Username checks out. :-)

The list is descriptive, not prescriptive.

If they haven’t flexed their muscle, it doesn’t matter if they’re on the list or not. If they do assert that influence, they’re on the list.


I'm aware of that, but it doesn't make me feel any better about it somehow.


> Who needs web standards, right?

This new thing (signed HTTP exchanges) is an IETF draft. https://tools.ietf.org/html/draft-yasskin-http-origin-signed...

As usual, Chrome is the first browser to implement this, but they're participating in the normal standards process. (Microsoft is in favor. The Chrome team generally ships stuff if at least one other major browser vendor approves.)


Normally I'd rush to Chrome's defense here (see my post history) but I have to agree with others about Chrome kind of breaking the process here. Other browser vendors have large concerns with this spec, so it's quite surprising that it's shipping at this scale.


Doesn't surprise me in the slightest. Chrome has done this before, e.g. with QUIC.


But Firefox and Safari aren't on board. I thought everyone had agreed that browser flags were the best way to implement features that weren't actually approved/standardised yet

Turning it on when they know there's no consensus is breaking the web standards process.


I stand corrected. But I still think that AMP and Google in general are bad for an open, competitive internet.


This doesn't mean anything though; they all crap out RFCs whenever they have a new feature to push. And they all usually agree with what each other pushes, but even if they don't, the browsers will just forge ahead however they want. Standards are diplomatic cover.


They do participate in the standards process, and that's great, but that doesn't mean they don't subvert it, or shove tech down everyone's throats until it becomes a de facto standard that you have to implement if you want to stay competitive.


A draft written by a Google employee.


I'm curious about the flipside, what's the steel man argument for AMP?


About signed HTTP exchanges, the new feature being announced and used by AMP:

Say you have a site that links to other pages on your site, and you want the links to be instant. After a user loads a page on your site, you can have javascript on your page preload the content of the links on your site, so when the user clicks on one, you can instantly update the URL and show the new page. (You could even do something clever like only preload the content of links that are visible on the screen currently, or that the user is moving the mouse toward.)

The above only works on links within your own site, because:

1) There's no way your javascript can render something onto the screen with someone else's domain in the URL bar.

2) If the user's browser is pre-fetching the content directly from the linked sites, and users can post links to arbitrary sites on your page, then someone can collect the IP addresses of all of your visitors just by posting a link on your site: everyone's browser may preload it, regardless of whether the user actually clicks the link.

3) The user already has an open connection to your site, so preloading would be quickest if the data came from your site instead of from potentially dozens or more other sites.

Point #1 could be solved by browsers implementing the prefetching logic, but that doesn't solve points #2 and #3.
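Point #2 can be made concrete with a toy Python model (all hostnames here are illustrative, not from any real deployment), contrasting which hosts learn a visitor's IP under direct browser prefetch versus same-origin cached prefetch:

```python
from urllib.parse import urlparse

def hosts_contacted_direct(page_host: str, link_urls: list[str]) -> set[str]:
    """Direct prefetch: the browser fetches every linked URL itself,
    so each linked host learns the visitor's IP before any click."""
    return {page_host} | {urlparse(u).hostname for u in link_urls}

def hosts_contacted_cached(page_host: str, link_urls: list[str]) -> set[str]:
    """Same-origin cached prefetch: the page's own server already holds
    (signed) copies of the linked pages, so the browser only ever talks
    to the host it is already connected to."""
    return {page_host}

links = ["https://third-party.example/article", "https://other.example/post"]
# Direct prefetch leaks the visitor to third-party.example and other.example;
# cached prefetch contacts only yoursite.example.
print(sorted(hosts_contacted_direct("yoursite.example", links)))
print(sorted(hosts_contacted_cached("yoursite.example", links)))
```

This is only a model of who sees the request, not of the protocol itself, but it captures why the cache has to sit on the origin the user is already talking to.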

Now imagine that your server could download signed and packaged versions of all of the pages linked to on your site, and then have your javascript preload that data from your server into your users' browsers, in a way where the users' browsers can verify the signature of the data to know that it really came from the linked sites and display it if the user visits any of the links on your site. That's what signed exchanges enables.
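A rough sketch of that sign-then-verify idea in Python. This is a toy model only: real signed exchanges use certificate-based signatures and a binary format defined in the IETF draft, not the shared-secret HMAC stand-in used here, and all names and keys are illustrative:

```python
import hashlib
import hmac

# Stand-in for the publisher's signing key. The real mechanism uses a
# certificate-backed asymmetric signature, so caches cannot re-sign content.
PUBLISHER_KEY = b"amppackageexample-signing-key"  # illustrative only

def sign_exchange(url: str, body: bytes) -> dict:
    """Publisher packages a URL plus its response body and signs both."""
    payload = url.encode() + b"\x00" + body
    sig = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return {"url": url, "body": body, "signature": sig}

def verify_exchange(exchange: dict) -> bool:
    """Browser-side check that a cache relayed the exchange unmodified;
    on failure the browser falls back to fetching the real URL."""
    payload = exchange["url"].encode() + b"\x00" + exchange["body"]
    expected = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, exchange["signature"])

sxg = sign_exchange("https://amppackageexample.com/", b"<html>hello</html>")
assert verify_exchange(sxg)       # intact copy, served from any cache
sxg["body"] = b"<html>tampered</html>"
assert not verify_exchange(sxg)   # a cache modified it: browser rejects
```

The key property the sketch preserves: any server can relay the bytes, but none can alter them without invalidating the signature.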


Does the origin site lose advertising and analytics?


This is Google. If this technology weren’t compatible with their ads and analytics, they wouldn’t push it.


Not necessarily; the payload can contain javascript that will run if the content is actually loaded. That would allow most advertising and analytics scripts to continue to work.


Worth noting that Mozilla's current position on this standard doesn't really make sense: https://github.com/mozilla/standards-positions/issues/29#iss...

There may indeed be valid issues with the standard, but their current rationale for marking it as "harmful" isn't technically sound.


It's really unfortunate to see awful claims like "we are here to help build a better Internet and improving AMP is a huge step in that direction" that are blatantly dishonest and harmful to the Internet being promoted by Cloudflare in this blog post. AMP is a horrible cancer the Internet is plagued with, and we keep being told it's great even though literally nobody wants it.

Claims like "Many of the sites we have spoken to get as much as 50% of their web traffic through AMP" ignore the fact that this is only true because it's been forced down our throats without our consent.

Anyone implementing an AMP-based technology which doesn't come with a way for users to decline to participate is actively harming the open web. I like Cloudflare and a lot of what it does, but I'm really disappointed in them today.


I can definitely agree here; it's really disheartening to see this. When I saw this headline I first thought Cloudflare was using some sort of cool trick to redirect users back to the proper non-AMP page, but I was just met with disappointment.


Cloudflare hosts their own AMP Cache. It seems reasonable they'd adopt this as well.


If you use a different AMP cache than Google's, does it affect your search ranking/carousel positioning with them?

If AMP is federated, that's a lot less disturbing to me than my first impression.


Let me see if I can explain this a little better.

Anyone can cache a signed exchange from anyone else. So, for example, if you went and fetched a signed exchange from https://amppackageexample.com/ (or any other site that supports one, this is just an example), you could then serve it from your own server, more or less just like any other file (the "less" being that you need to set the right Content-Type header; otherwise it works just like serving an image or a zip file).

Then, if a user visited the URL on your site https://yoursite.com/cached-copy-of-amppackageexample.com/ then the browser would display https://amppackageexample.com/ in the URL bar, as though that URL had 301 redirected, but without the extra network fetch.

Google search does exactly this, just loading a cached copy of the Signed Exchange, and any other cache (or even any website) can do the same.

So, if you publish a signed exchange, you are allowing all caches to do this, not just a selected cache. However, since the document is signed, no cache can modify it, or the browser will reject it and fall back (typically by just performing an actual redirect to your domain).
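As a minimal sketch of the "set the right Content-Type" step: the MIME type comes from the signed-exchange draft, the `v=b3` version parameter matches the draft at the time of writing and may change, and the file paths are illustrative:

```python
import mimetypes

# MIME type from the signed-exchange draft; the version parameter ("v=b3"
# here) tracks the draft revision and is an assumption that may change.
SXG_CONTENT_TYPE = "application/signed-exchange;v=b3"

def content_type_for(path: str) -> str:
    """Pick the Content-Type for a static file, treating .sxg files like
    any other asset apart from their special MIME type."""
    if path.endswith(".sxg"):
        return SXG_CONTENT_TYPE
    guessed, _ = mimetypes.guess_type(path)
    return guessed or "application/octet-stream"

print(content_type_for("cached-copy-of-amppackageexample.com.sxg"))
# -> application/signed-exchange;v=b3
print(content_type_for("logo.png"))
# -> image/png
```

Any static file server or CDN rule that can match on extension can apply the same mapping.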

I hope that helps.


Thanks for the explanation!


Yes, thanks for clarifying.


Google can only serve data from Google's AMP cache. Bing can only serve data from Bing's AMP cache. So you can't, say, pick an AMP provider; your website gets hoovered up by each individual AMP cache and served to users coming from their respective networks.

In short, only being in Google's AMP cache gets you good rankings on Google Search.


I don't know the answer to those questions, but I'd be curious to know as well.

Bing also hosts an AMP Cache if that's of interest to you.


> If AMP is federated, that's a lot less disturbing to me than my first impression.

AMP is federated. Anybody who wants instant loading of AMP pages linked from their own pages can run their own AMP cache, as Bing and Baidu (among others) do.

> If you use a different AMP cache than Google's, does it affect your search ranking/carousel positioning with them?

Bing and Baidu running their own AMP caches doesn't affect their Google ranking. Why would it?


[I work at Cloudflare]

It's hard to build a great website, particularly one which works well on mobile. AMP as a web framework makes it easier. Nothing about that value is intrinsically linked to any particular way of rendering AMP; a good chunk of the web would be better off building on AMP independent of Google. That doesn't mean every site should: there are developers and teams who are capable and willing to build great sites, and we support their right not to have to use AMP. The need to use AMP-the-framework to get instant loading is hopefully a temporary one, partially fixed by signed exchanges.

The AMP cache is also made much less concerning and painful for users through the use of signed exchanges. By ensuring the content is exactly what the origin provided, we believe the Internet becomes a more trustworthy place, opening the door to sites being served from wherever they will be most performant.


[I know, but thanks for disclosing. :)]

The problem is, there really isn't an "AMP independent of Google", as Google's implementation is the only thing that matters, due to Search and Chrome. Supporting Google's proprietary fork of HTML is more or less supporting that entire structure, and it is inherently harmful. Whilst they have some notion of "governance" for the project, the plurality of participants are Google and they've confirmed that Google isn't really beholden to follow the project's governance with the implementations they put into their own products anyways.

For the most part, AMP is a subset of HTML with proprietary tags, but could be easily implemented in normal HTML. The only reason to implement AMP is to format your website to be compatible with Google's demands for special treatment in Search. (A benefit that, while publishers may aspire to it today, becomes increasingly useless as the entire Internet is forced over to the same proprietary format.)

Various blogs have covered samples of how loading AMP is actually worse than loading their own websites in a lot of cases, but they're forced to implement it anyways because of that Google Search incentive. Surely the correct solution for mobile developers is to teach people to use restraint in piling on extra crud, not bootstrap some extra crud that prevents them from using any other crud.

And essentially, from the responses here and in almost every previous discussion about AMP, and literally Twitter feeds made just to retweet examples of AMP hatred: Nobody wants this. Everyone who wants it is using it because they have no choice. It's forced on publishers to maintain competitiveness in search, and it's forced on users who have no way to decline.

Probably what ground my gears the most about your blog post, to be honest, is the cheerful, supportive tone for what is inherently a harmful technology implemented in a closed fashion. Compare how FastMail acknowledged that they may have to support AMP for Email (I'm not upset with them): https://fastmail.blog/2018/02/14/email-is-your-electronic-me...

If you have to do this for business reasons, tell us that, don't try and tell us AMP is good for the world, when we know it isn't.


> Surely the correct solution for mobile developers is to teach people to use restraint in piling on extra crud, not bootstrap some extra crud that prevents them from using any other crud.

Any time you have to rely on lots of disparate people, all potentially under wildly different pressures and facing diverse incentives, to make the same smart choices... you may be facing an uphill battle.

We've tried to rely on exactly what you call for. It's such an obviously, clearly, good idea. Painfully so. We've been trying it for quite a long time by now. It's difficult to call it a success.

Openness, flexibility, and interoperability are great and wondrous things. It's a tragedy, then, that so many use the tools and powers granted them by these virtues so appallingly poorly.


> It's hard to build a great website, particularly one which works well on mobile.

> The need to use AMP-the-framework to get instant loading is hopefully a temporary one partially fixed by signed exchanges.

You know what would really fix this? Developers doing their work. Deeply knowing and understanding the consequences of their work and how everything comes together in the end and leads to a slow website or not.

We don't need to put even more power in the hands of Google, because we (as in "some developers") don't know how to do our job right and therefore punish the internet as a whole.

There is no valid reason for the use of AMP. It's only there because a) developers fucked up and b) Google forces you to use it. If we would fix a), I'm not sure b) would even be a valid point anymore.


> You know what would really fix this? Developers doing their work.

That's merely replacing one problem with another, which is: how do you get developers to do their work?


You'll need people that _really_ love their work. I know that's hard to find or to achieve, but it's possible to have some intrinsic motivation to deliver better and better code each day, to grow with your tasks etc.

I think most people these days are in for the money and not for "the challenge" which is why we see so much mediocre software. On the other hand there is excellent software too, so there are people who really know what they are doing.


> You know what would really fix this? Developers doing their work. Deeply knowing and understanding the consequences of their work and how everything comes together in the end and leads to a slow website or not.

This is a wonderful idea! It's tragic, then, that nobody has thought to try this for the past several decades...


Seems like nobody has, yeah. Otherwise developers would only laugh about things like AMP and we would have great websites. One can only dream this might happen in the future.


I dunno, all the AMP sites I have visited have had a terrible user experience IMO... Also, for the longest time AMP had absolutely broken in-page search on mobile Safari.

If you believe in a web technology go through the process and champion a draft standard. It can start out as an experimental flag enabled feature or vendor prefixed feature before it is adopted as a web standard. But don't ram this shit down our throats...

Also let's talk about the obvious business incentives for data collection both Google and cloudflare have for serving up AMP pages.


> literally nobody wants it

> as 50% of their web traffic through AMP

> forced down our throats without our consent

Who, precisely, is forcing publishers to use AMP?


Google is, by giving AMP-enabled pages preferential placement in Google Search results. If your business lives and dies by web traffic, it's not optional.

Bear in mind, publishers hate AMP as much as everyone else: It forbids them to do what they want with their website. AMP is the technology nobody wanted and nobody asked for, users and publishers alike.


I think you're conflating 2 things.

Users do want AMP-like technologies. I want the browser to restrict the publishers as much as possible (while keeping their sites "useful" for some definition of the word - static markup such as text, images and some background/styling is definitely useful) - I actively do that by preventing them from auto-playing videos, using adblocking, etc.

However, Google-controlled AMP is a travesty. I don't want to view websites on Google. I want the original URLs, I want to be able to bookmark and share them, I want to be able to go to the non-mobile website, etc.

Keep AMP, banish Google.


First: this doesn't stop you from viewing websites on Google. It gives you the original URLs, but they're still being served by Google. In short, it enables Google to lie to you about where the website is coming from; it doesn't banish Google.

Secondly: we already have restricted views and formats for websites, and they're far superior to AMP. Things like RSS, which most websites support already and I use daily, and Reader View, available in both Firefox and Edge, which strips out everything but the core page content. These already exist and are superior to AMP, except in one key area: AMP is built to serve Google Ads, whereas RSS and Reader View strip them out. AMP is not a good technology for users; AMP is a technology to protect Google and give them more control over the web.


> this is enabling Google to lie to you about where the website is coming from, it doesn't banish Google.

I understand, which is why I'm pro AMP but anti Google-specific AMP.


I would argue Bing or Baidu having the same power isn't inherently better.


Short of some kind of large user survey—which would be difficult because I doubt many users actually know what AMP is—we're never going to know whether users want it.

Personally, I don't want to get a gimped version of a web page, I want the real thing. Yes some websites suck, but at least I'm getting the definitive copy. AMP versions often miss content or have weird layouts.

Sadly, I also really like Google Search—their results are much better than Bing or Duck Duck Go in my experience—so I don't have a choice in the matter.


> User do want AMP-like technologies.

Some users do. Some don't. I certainly don't want AMP-like technologies, regardless of where the cache is.

> I want the browser to restrict the publishers as much as possible

In the browser where I can control it, sure. I don't disagree at all. On a third party server where I have no control over it, no.


Users want fast loading speed, not specifically AMP.

And quite often fast loading speed can be achieved by merely removing the bloat from your pages.


> Keep AMP, banish Google

Cloudflare != Google


Well it only works in Google Chrome so...


> Google is, by giving AMP-enabled pages preferential placement in Google Search results

That claim is gonna need evidence, and I've seen no evidence so far. Please provide it if you've got it.

Google does prefer sites that are fast to sites that are slow, ceteris paribus.

So if AMP sites are faster, maybe they get upranked due to performance, but not just because they're AMP.

So if a publisher doesn't like AMP they don't have to use it, they just need to make their sites faster on mobile.


IIRC, there were reports that Google was ranking non-AMP pages lower than their identical-but-AMP-cached counterparts. And page rank to a webmaster is EVERYTHING. I don't care how devoted you are to Internet freedoms, if your boss sees your articles slipping down the page, you're in hot water.


I've seen no proof of this happening, other than when the non-AMP version had strictly worse page score due to actually having worse performance.


The ranking part is nonsense; AMP has never affected organic scores. However the "amp carousel" was real, and had SEO implications. I'm not sure if it still exists.


> AMP has never affected organic scores

Are you sure? It seems to me that AMP pages totally dominate the results. And Google explicitly says they consider performance, which means they have come up with some metrics they consider to represent performance, one of which could effectively be "is it AMP?". AMP pages do have automatic performance advantages: Google often prefetches the content before you've tapped it, and they render the content inline instead of navigating you to it.


Yes, I'm pretty sure. Google has been explicit about that fact. I'd also argue they've been somewhat disingenuous by only focusing on organics and ignoring the carousel, though.

As for their metrics on page speed, they've had objective measurements for some time. Lighthouse is their go-to solution now for measuring a website's speed.

As for AMP's ability to be pre-fetched and rendered, this benefit doesn't show up in Lighthouse, only in search. For that reason I would bet it isn't included as a ranking factor either. But of course that's the "secret sauce", so we can only go off of their word.


> Google has been explicit about that fact.

I don't think that Google's word can be trusted. They may be correct, but they may not be. Evidence either way would be welcome.


"Who, precisely, is forcing publishers to use AMP?"

Google rolls out their carousel, pushing everything else down the fold.

Then they roll out AMP.

Then only AMP results go in the carousel.

So, it's optional if you don't mind losing the traffic.


> literally nobody wants it

I can guarantee you that the majority of internet users like AMP. It's only developers and hardcore techies that don't.


How can you "guarantee" this? First and foremost, the majority of Internet users don't like AMP, they just don't know what AMP is. Websites may behave strangely to them, and they just don't understand what's causing it. We, developers and hardcore techies, have the same problem, but we happen to know that the cause is AMP.


I presume you live somewhere with a good internet connection. AMP is a life saver here, and people first and foremost care about being able to load the page at all. So yes, they like AMP even if they don't know what's causing a page to load fast.


So if this is Google's motivation, they should just promote results that can load fast regardless of whether they use AMP. They have the means to do that but they don't want to, presumably because their motivation isn't quite aligned with users.


Maybe, maybe not.

But why doesn't Google do the one thing that would render this entire argument moot? Provide a mechanism that allows users to opt-out of getting AMP pages entirely.


Mozilla's Position: "Mozilla has concerns about the shift in the web security model required for handling web-packaged information. Specifically, the ability for an origin to act on behalf of another without a client ever contacting the authoritative server is worrisome, as is the removal of a guarantee of confidentiality from the web security model (the host serving the web package has access to plain text). We recognise that the use cases satisfied by web packaging are useful, and would be likely to support an approach that enabled such use cases so long as the foregoing concerns could be addressed." https://mozilla.github.io/standards-positions/


Apple's position: "The Security Considerations describe some bad things that can happen even if the spec is properly implemented. Unsurprisingly, I think those things are bad. No time to make actual technical contributions at this time but I will consider it if this spec gets multi-vendor interest." https://twitter.com/othermaciej/status/951001352347402240

Microsoft is in favor.


If by “Apple’s position” you mean “a 15-month-old tweet by one Apple employee”.

Edit: And the spec has since gotten multi-vendor interest. From Microsoft:

> We're excited about the potential for this feature set to resolve some of the performance and privacy problems of alternative approaches, and we have been talking to publishers who are interested in utilizing these technologies to provide accelerated experiences.

https://groups.google.com/a/chromium.org/d/msg/blink-dev/gPH...


He's not a random employee. https://en.wikipedia.org/wiki/Maciej_Stachowiak "he is a leader of the development team responsible for the Safari web browser and WebKit Framework"

For now, his tweet is all the signal we have from Apple.


I'm the person who posted the tweet. Since then, some of my colleagues from the WebKit team have given more specific security feedback. Some of it has been addressed. And the Security Considerations section is less scary. But even so, I'd say we are pretty uncomfortable with this approach, for similar reasons to Mozilla. We can see some advantages to Google re-serving the whole web from their own servers and getting browsers to present it as if it comes from the origin, but it also seems like a worrisome change to the web security model.


And considering Apple's strict policies around public communication, it's pretty safe to describe it as a formal statement of the company's position.


People jumping to this conclusion is why employees are burdened with prefixing everything they say with a disclaimer that they do not speak for their employer :(


I completely agree. My point was only about the reality of communications from Apple employees, not asserting it as an ideal.


Note also the Apple employee's reply upthread.


Since Microsoft's position is now "implement Chrome with Bing as the search default", I am pretty sure most, if not all, web standards Google proposes will be enthusiastically supported by Microsoft, as they'll support them regardless.


Microsoft's claim is that they'll be more like Google and Apple working on WebKit. They disagreed a fair amount, and didn't always enable each other's features just because the other browser enabled them.

Microsoft has ripped a fair amount of stuff out of Chromium already. https://www.theverge.com/2019/4/8/18300772/microsoft-google-...


It will be interesting to see if Microsoft ever diverges from the web platform functionality exposed by Chrome. My prediction is that either they never do it, or they eventually do it and are forced to fork. Chromium is not as open to variation in the main tree as WebKit was when Google showed up.


> three-month-old tweet

15 months :)


Thanks, corrected.


Google's response: https://github.com/mozilla/standards-positions/issues/29#iss...

In all fairness I have to agree with Google on this one. Mozilla's current objections don't really make sense.


Thumbs down to anyone enabling AMP’s success. Google should be measuring page speed only, it’s none of their business what methods you used to achieve that speed.


You can improve your network request speed, but you can't bring the time taken down to literally zero, i.e. the speed if your page was already prefetched while the user was looking at search results. For privacy reasons, prefetched results had to be served from a cache, yet the Web had no way for Google to cache a page without literally rehosting it on its own domain – which in turn required restrictions on JavaScript, to avoid random people's code being able to execute in the context of www.google.com. That explains most of AMP.

On the other hand, AMP Real URL is based on Signed HTTP Exchanges, which allow one site to send a cached copy of someone else’s site, in a much more straightforward manner. In theory, Google could now drop the bulk of what’s now known as “AMP” and cache arbitrary pages that indicate their willingness to be cached. That they’re instead integrating this with AMP suggests they may not drop it, which would be unfortunate. On the other hand, since Signed HTTP Exchanges will remain a Chrome-only feature for the foreseeable future, it’s arguably a bit early to expect Google to make that kind of commitment.


If Google didn't penalize sites that don't use AMP, and instead just measured response, paint, etc times and such, I would agree with you.


> You can improve your network request speed, but you can’t bring the time taken down to literally zero, i.e. the speed if your page was already prefetched while the user was looking at search results.

Maybe I'm just old, but I don't understand why this is a worthwhile goal. Even with its bloat the internet today is much, much faster than it was 15 years ago and I'm quite happy to wait a few seconds for a page to load. In addition, doesn't pre-fetching every result just waste bandwidth on mobile?


Well, for one thing, internet speed depends on the quality of your connection, which in the case of mobile networks varies wildly.

But personally, I’m a sucker for low latency across all of tech (and gaming as well). That’s why I use Safari instead of Chrome, Terminal.app instead of iTerm, C++ instead of Rust sometimes (compile times), and basically anything else instead of Java (startup time), among other preferences. I even prefer reading ebooks on normal screens rather than e-ink displays, simply because I don’t want to wait between pressing “next page” and seeing the page show up. Not surprisingly, then, I really appreciate website snappiness in general, including links that load instantly due to prefetching. I can’t say I appreciate AMP as it exists today, because the quick load comes with a host of UX issues, but I have hope for the future.

Take that perspective how you will. I think I’m a bit of an outlier in just how much latency bothers me, but pretty much everyone consciously or subconsciously appreciates when it goes down. Probably including you: the current speed might seem “fast enough” now, but if faster loading becomes the norm and the other issues are dealt with, I bet you’d have a hard time going back.


I can't imagine they prefetch every result. Just the first one.


This is exactly right. Google's behavior with AMP often seems orthogonal to their stated goals.


This. Exactly this. If Google actually cared about performance, that is what they would have done.


Is there any proof that the score is impacted by AMP-ness, and not purely due to speed?


Despite all the hate against AMP, this will actually improve the state of the decentralised web. Disconnected from the application that birthed it, the web packaging format allows a website to sign its content and create an immutable package that any server in the world can distribute - with the signature allowing clients to trust where it came from and show the origin address instead of the cache.

Isn't this exactly what the distributed web needs? It's a massive boon to IPFS (a content hash can still show the proper origin name), a big blow to censorship (a censored website could spread its content to a thousand different servers, each served over HTTPS, and viewers would still see the original URL whichever cache they access), an enabler of consensual permanent archiving, and much more.


Questions for you:

- Who decides which HTTPS certs are valid? (Followup: Who decides which of those your browser considers valid?)

- Who operates the browsers you'd use to view this content which sees the original URL, and have any of those companies deplatformed content on behalf of government requests?

Which is to say, web packaging looks somewhat decentralized at a glance but arguably still leaves the same handful of companies entirely in charge of deciding what you view and how you view it. And it's entirely dependent on your browser's developers being ethical and trustworthy, and choices on what browser you use has just shrunk significantly in the past month alone.


> Who decides which HTTPS certs are valid?

Wait wait wait, are you telling me signed exchanges maintain the status quo on a problem they aren't intended to solve?


The packages are independently verifiable, and cacheable. So you could start your own cache with a Raspberry Pi at home if you wanted, and if whatever browsers you trust implemented this check, you'd have your own secure version of it.


> If your site has AMP Real URL enabled Cloudflare will digitally sign the content we provide to that crawler, cryptographically proving it was generated by you. That signature is all a modern browser (currently just Chrome on Android) needs to show the correct URL in the address bar when a visitor arrives to your AMP content from Google’s search results.

Is the dig (emphasized above) about what constitutes “a modern browser” really necessary? Is a modern browser now whichever one that supports something you like?


Based on the discussion from the Cloudflare announcement and here, I don't think people understand why Amp was created and is being heavily pushed by Google: global accessibility.

Many countries, including India, are just now getting widespread broadband rollout. Due to its infancy, many ISPs have data caps and may even be delivering data over 4G/LTE to homes. With AMP, Google and CF have the privilege of driving all of this (search-driven) bandwidth to datacenters within the same country, or at least the same continent, to the visitor. If all of this content was really served from the origin, latency would be considerably worse since the data will have to go through the undersea cables and the performance issues that may occur with bad routing.

You could also say the Data Saver web proxy they run is to encourage these users to browse the web without worrying as much about their phone data bill.

Google really wants everyone in these countries to be using and depending on the Internet just as much as Americans use and depend on it, otherwise they may miss out on advertising $$$ potential from a country that's ~4.5 times more populated than the United States.


That isn't a main reason. People are going to use the internet either way. Google is making a power grab by breaking web standards (embrace, extend, extinguish), and companies like Cloudflare are enabling them.

Mozilla has marked Signed HTTP Exchanges as harmful.


You're in this thread a lot, and you keep referencing "Mozilla has marked Signed HTTP Exchanges as harmful". Is this all that important? If Mozilla supports it in 6 months, as they always do with their follow-chrome-dont-lose-marketshare approach, will you also support it? In every comment you focus on the "Google is evil" part instead of on "Are HTTP Exchanges themselves bad?", "Is this an insecure protocol?", "Could it be used to impersonate a domain?".

The only reason Signed HTTP Exchanges are a thing is that Google is trying to solve a user experience problem (the URL bar). AMP and exchanges are just a different protocol and method of hosting content on a CDN. In this case, you are forced to reduce your page size and you delegate your HTML to be loaded by a third party, unlike a traditional CDN, where you would (for example) create a CNAME in your DNS.


> Should Mozilla support it in 6 months as they always do with their follow-chrome-dont-lose-marketshare approach

Have they done that with anything they considered harmful?


Encrypted Media Extensions


I'm sorry but if you believe AMP breaks web standards, then you do not understand AMP. It is built entirely from the ground up on web standards.


AMP does not work unless you include a Google hosted JS file.

From https://amp.dev/documentation/guides-and-tutorials/learn/spe...:

> AMP HTML documents MUST

> ...

> contain a <script async src="https://cdn.ampproject.org/v0.js"></script> tag inside their head tag.

That is not a web standard.


Wow that is disgusting. Why does it need to include remote code to function? How can they even pretend this is an open standard when it has a backdoor?


It's a subset of HTML, built on WebComponents. The subset is defined by a separate standards board and built on web technologies.



You're completely exaggerating. Please educate yourself by reading the full conversation leading up to the "harmful" tag before spamming that link again: https://github.com/mozilla/standards-positions/issues/29


The point there is that is it not supported by Mozilla. Google has strong-armed publishers into implementing their websites in a restricted Google format that speeds up websites just for Google Chrome and Google Chromium-based browsers. It's an unethical power grab by Google on multiple levels.


Yes, that's a standard submitted to the IETF. It's also an incredibly new feature (supported by AMP starting today).


It's not an actual standard if it's forced on everyone by a single vendor abusing its market dominance.


> I don't think people understand why Amp was created and is being heavily pushed by Google: global accessibility.

I think people understand this. The issue is that AMP is forced on people who don't want it. I have no problem with AMP being provided for people who do.

A method of opting out would resolve this.


People in India aren't as interested in your average Western news site as you (presumably from the West) are.


Fair. At this point, the focus has probably shifted from that since these ISPs and cell providers aren't as technically limited as the United States was 10-15 years ago.

I feel like AMP started out this way at the core, "How can we ensure these web pages load fast and aren't eating up these users' data plans?", but then the dreaded parts of AMP like search rank preference, Google being the one in the URL bar, etc. were afterthoughts that were put in to make AMP widely adopted and to increase their stranglehold on the Internet as a whole.


While I disagree that this is the primary reason for Google pushing AMP (if it were, carousels would be page size and speed limited, possibly taking into consideration CDN availability, not (Google-)AMP-exclusive), it is definitely a facet of the argument that's been underappreciated.


Sigh, the last thing we need is more AMP.


If I understand correctly, the biggest problem was that google was hosting the cached amp pages. If this instead sends the user to your own domain, viewing your content which just happens to be hosted elsewhere, does that solve the primary complaints? I know google was also exposing content directly on the search results, but I feel that's a separate issue.


> If I understand correctly, the biggest problem was that google was hosting the cached amp pages.

Speaking for myself, that's one large problem. The other large problem is that I dislike the AMP pages themselves.


The problem is that the AMP standard is just generally poorly-thought-out, no matter who hosts it. Instead of a subset of HTML, it's a weird mishmash of everything that requires magic incantations to make things work for the browsers of the time it was invented.

It's like NaCl/PNaCl - a good idea in theory, but created by a team that didn't do a great job at speccing something that could be long lived and satisfy other players than Google+Chrome.


This new thing substantially improves the design. What design problems do you see in the new thing?


The whole spec is still a giant mess [1] of HTML and HTML-like-but-not-quite-really, with a boatload of ceremony that does nothing but glue it into an existing ecosystem. Hosting it on someone else's site doesn't fix that.

[1] https://amp.dev/ru/documentation/guides-and-tutorials/learn/...


I honestly don't see the mess that you're referring to. Can you be a bit more specific? AFAIK, AMP is 100% valid HTML because it's an HTML subset; it is not "HTML-like-but-not-really".


That is patently not true. From mmastrac's [1]:

> Resources such as images, videos, audio files or ads must be included into an AMP HTML file through custom elements such as <amp-img>

From: https://github.com/ampproject/amphtml/blob/master/spec/amp-t...

    Amp Specific Tags

    <amp-img>
    <amp-video>
    <amp-ad>
    <amp-fit-text>
    <amp-font>
    <amp-carousel>
    <amp-anim>
    <amp-youtube>
    <amp-twitter>
    <amp-vine> <amp-instagram>
    <amp-iframe>
    <amp-pixel>
    <amp-audio>
    <amp-lightbox>
    <amp-image-lightbox>
None of that is "100% valid HTML".



Are you familiar with HTML5 and Custom Elements[1][2] (often aka WebComponents). <legit-whatever-i-want> is a valid HTML component.

It's not 2010 HTML, that's for sure, but so what? Are you complaining about AMP, or about HTML5?

[1]: Spec: https://developer.mozilla.org/en-US/docs/Web/Web_Components/...

[2]: Examples: https://www.webcomponents.org/


So what happens when an HTML-compliant browser renders a document with <amp-img>? It won't be what the author of the document intended, because the browser will skip the <amp-img> tag. The document is only interoperable if all browsers load and execute Google's Javascript that defines those tags. This kind of technically-valid but functionally incompatible markup is a perfect example of the Extend phase of the EEE[1] strategy.

HTML has always tolerated out-of-spec tags, because it was originally designed as an application of SGML (defined by an SGML DTD). Since then, new tags were regularly added (and generally tolerated by older browsers), but the practice of defining new tags in a DTD has gradually stopped happening. Web Components accomplish a similar role but replace the use of the declarative language in a DTD with a requirement to run a Turing-complete language. Making the definition of new tags undecidable is a powerful way to tightly couple browser implementations under Google's control.

[1] https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...


What?

An html compliant browser would apply the custom elements spec and render the custom element. If the user isn't using JavaScript, then you fall back, as defined in the spec.

Note that custom elements are themselves a standard, there is no "extending" here. There is just applying the web elements spec in a way some people dislike.
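For what it's worth, AMP does use this fallback path: per its docs, <amp-img> accepts a <noscript> child, so a browser without JavaScript still renders a plain <img>. A rough sketch (attribute values are placeholders):

```html
<!-- With JS: the AMP runtime upgrades <amp-img> into a managed <img>. -->
<!-- Without JS: the browser treats <amp-img> as an unknown element and
     renders the <noscript> fallback instead. -->
<amp-img src="photo.jpg" width="400" height="300" layout="responsive">
  <noscript>
    <img src="photo.jpg" width="400" height="300">
  </noscript>
</amp-img>
```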


This is an application of the W3C's Extensible Web Manifesto (https://www.w3.org/community/nextweb/2013/06/11/the-extensib...)


Isn’t it a subset by definition? It runs in html5 browsers.

Other subsets of HTML, like XHTML Basic and its siblings, failed magnificently.


Technically perhaps. But with things like <amp-img> instead of <img>, so yes, it "runs".


Web-components are HTML. Registering a new element (e.g. <amp-img>) that wraps <img> with some additional restrictions, is pretty much making a subset facade that restricts you to the subset.

It's like complaining that BibTeX, LaTeX, AMSTeX et al aren't "TeX".

In the original incarnation of SGML, the whole point of the DTD and DSSSL was to be able to define new elements, and specify how they are handled. This seems perfectly legitimate and part of the HTML standard.

There may be other problems with AMP, but creating new high-level elements that resolve to standard HTML elements doesn't seem to be one of them. Arguing over whether something is named 'amp-img' or 'img' plus a detailed validator that throws errors if you try to take <img> off the rails seems like bike-shedding over naming. If anything, I'd argue giving a new name to a function to limit its inputs is clearer and more readable than adding precondition checks alone.


Well those new elements aren't functional unless you specifically include a Google controlled/hosted piece of JavaScript in your AMP page.

If it's a standard, shouldn't there be some way for those elements to work without including someone else's JavaScript?


Isn't that like saying "This page doesn't render without including this CSS or Font or XSL Stylesheet"?

If your objection is that the ampjs has to be loaded from Google's CDN, well, the whole point of Signed Exchanges/Web packaging is to move to a world where AMP caches are federated and you can host this stuff elsewhere.

The only real objection that has merit IMHO is that Google Search should rank these things based on performance, not on amp validation alone, so if you authored your own AMP-like framework, it could be similarly ranked.


> Isn't that like saying "This page doesn't render without including this CSS or Font or XSL Stylesheet"

The important part isn't "this", it's "somebody else's". AMP is very specific not about what the JS is, but rather where it is, and that you must include it using their mandated url, not your local copy.

I'm just not comfortable with the idea that "you must include this opaque thing that we might change" be advertised as a standards-based movement.

I have no issue with Google pushing something like AMP. I just don't like the charade that it's some sort of open movement.


I agree with what you're saying, but AFAIK, AMP forced this hosting for very specific reasons in the browser protocol stack, and that Web Packaging/Signed Exchanges are designed to address. That is, forcing a single opaque resource location is a hack/polyfill for what should have been some kind of native browser support for these mechanisms in the first place.

To me, there's been a lot of false starts in the Browser platform, a lot of hacks, and 'worse is better', until we finally get something good.

Remember AppCache/Application Manifests? It was horrible. Now we have Service Workers. Or the gazillion variations of <link> prefetch/subresource/prerender/etc

I think eventually we'll end up at a good place that doesn't have a centralized requirement. And I don't (being a Google employee), see these designs as deliberate attempts to arrogate power and control, but more like expediency. AMP was a reaction to Apple News and Facebook Instant putting publishers in non-Web silos. You could argue the proper thing to do would have been for the W3C/WHATWG to design something by committee, then wait a year or two for all browsers to ship, and then get the publishers on board.

What AMP essentially did was ship a polyfill fix for something on existing web technologies to get publishers to do something they could have done on their own, but didn't, with the eventual real fix coming later.

For whatever reason, publishers seemed unwilling to fix their mobile performance using standard practices, and invited a takeover of native mobile silos. I see AMP as a kind of holding action to preserve federated mobile web publishing, until something better came along.

You have to ask yourself, what's worse, using HTML Web Components and a JS framework library, or having major publishers choosing Apple News or Facebook Instant as their primary publishing mechanism on mobile?


What's wrong with NaCl?


I believe the gp was referring to NaCl the Native Client https://developer.chrome.com/native-client

Not the crypto library https://nacl.cr.yp.to/


NaCl was an interesting tech demo, but it was a complex set of rules built on the x86 house of cards and it took a lot of work to port it between platforms (I think it was demoed on ARM but never took off).

PNaCl was a "lazy" attempt at making this cross-platform by leveraging LLVM's intermediate format which was never designed to be a "format" like that.

WebAssembly is 10x better than either of those formats and you can see the benefits of having multiple players design something to work for a long-term horizon.


The biggest problem for me is loading an external script (hosted by Google), plus CSS rules to hide the content until a timeout if that script doesn't load. It literally makes the pages load slower for me (but will never show up in their telemetry).


The biggest issue is that google is forcing websites to use its own html subset by judging sites that use it instead of by judging sites based on actual speed. There are a lot of posts on HN about sites that got no speed increase or even regressed when switching to AMP.


Curious if Google still owns the left/right swipe and back button events when a user lands on your page from a carousel.

Also don't miss the bit about "currently just Chrome on Android"


As I understand it, Google actually hosts the AMP content but the browser “lies” and says the content is coming from the original domain.

I honestly would prefer what you are suggesting, even if that meant using iframe on the original domain to host the AMP cache from Google.

1. example.com/long-url with content and meta tag specifying AMP url (2) below.

2. example.com/long-url/amp that embeds a full-height 0-margin iframe to google.com/amp/example.com/long-url and has no external styles/JS.

3. Google caches content from (1) example.com/long-url and when user clicks it, directs them to (2) example.com/long-url/amp.

Other than 1 extra HTML page, everything is from Google’s AMP cache. The browsers do not need to lie. And if Cloudflare hosts the (2) AMP page, it can be fast as the rest of their services.
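To make the proposal concrete, the shell page in (2) could be as small as this (domains taken from the example above; purely illustrative):

```html
<!-- example.com/long-url/amp: a tiny shell on the publisher's own domain
     that frames the Google AMP cache copy, so the URL bar stays honest. -->
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <style>
    html, body { margin: 0; height: 100%; }
    iframe { border: 0; width: 100%; height: 100%; }
  </style>
</head>
<body>
  <iframe src="https://google.com/amp/example.com/long-url"></iframe>
</body>
</html>
```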


I wouldn't interpret this as the browser lying any more than the fact that your wifi router delivered the AMP document to your browser and your browser didn't show your wifi router in the URL bar.

The document is digitally signed by the publisher, using the publisher's own private key on the publisher's own server. This signature is then verified by your browser on the other end and verified against a CA issued certificate. The intermediaries don't matter in terms of the content, they are just a pipe at this point that can optimize network paths and allow for prefetching of the bytes.

The specification only allows a short lifetime for signed documents (7 days maximum, configurable to be shorter by the publisher), preventing long-lived caching drift. Refreshing will reload from the origin directly.
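A minimal sketch of that freshness rule, as I read the spec (the function name, field names, and units here are illustrative, not the actual wire format):

```javascript
// Sketch of the freshness check a browser applies to a signed exchange.
// The spec caps the signature lifetime (expires - date) at 7 days; a
// too-long, not-yet-valid, or expired signature is rejected and the
// browser falls back to a normal fetch from the origin.
// All timestamps are Unix seconds.
const MAX_LIFETIME_SECONDS = 7 * 24 * 60 * 60;

function signatureIsFresh(dateSec, expiresSec, nowSec) {
  if (expiresSec - dateSec > MAX_LIFETIME_SECONDS) return false; // lifetime too long
  if (nowSec < dateSec) return false;    // signed in the future
  if (nowSec > expiresSec) return false; // expired
  return true;
}
```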


> I wouldn't interpret this as the browser lying

I do, because the domain being displayed isn't the domain that my browser has contacted. Whether or not the content is guaranteed to be unchanged isn't relevant to this.

It reduces my ability to trust my browser.


The same concerns already apply to CDNs which are pretty pervasive.


I don't think that's terribly analogous. The difference is that when you're using a CDN, the traffic is being redirected on the server side, behind the domain name resolution.

That means that when I go to a site that is using a CDN, my browser isn't lying to me about the domain I'm contacting. It is correctly reporting that information. Whatever routing happens behind that domain is a different issue.


> I wouldn't interpret this as the browser lying any more than the fact that your wifi router delivered the AMP document to your browser and your browser didn't show your wifi router in the URL bar.

The router can't read the content; HTTPS provides both authentication and encryption.


Yes, though in the use case here, the party linking to the content has already read the content anyway by crawling it. This would be true if your entire session were delivered this way, but instead it's only the first click from a page linking to the signed exchange URL. The party linking also already knows the user's IP, knows what URL the user is going to, and can know the content. There is no loss of privacy.


After they serve the page from their cache, they learn which link you clicked on. That first click is tracked similar to a /redirect?url= tracker.


I explained the “lying” part in another comment below. I understand the signed package part. The difference between Google doing this and my router/ISP/everything-in-tracert is that the content is served by Google. It originates with Google, not with the router or other intermediaries. Sure it is a signed cached package but Google’s servers are hosting and serving the content and the browser is going to google.com/amp/foo pretending to be foo.com.

This is different from using CNAME example.com to point to cloudflare.com to host your site on it because then Cloudflare is the actual host for your site. With AMP Real URL, you never know which site is actually involved anymore because the browser “lies” to you.


>but the browser “lies” and says the content is coming from the original domain.

Is this a chrome thing? I don't notice it on safari.


The linked article says only Chrome on Android supports this so far. As I understood it, other browsers do not show the original URL yet but if they start supporting AMP Real, they will start “lying” too.

I say “lie” because while the cryptographic signature gives Google/browsers authorization to display the original URL, the original domain is not involved in serving the request for each user. The original domain could be down or change the content while AMP Real URL cache will still show old data. Copy pasting the URL in a new tab will show different data - from the original domain. Same URL should not show different data between two tabs. I understand cached data can always be stale but hitting refresh in tab 1 vs 2 will show two different but consistent things. That is weird.


It's "Signed HTTP Exchanges"

"It is worth noting that SXG is already supported by the Opera web browser, still under evaluation by the Microsoft Edge team, while Mozilla Firefox considers it harmful, and the Safari team already expressed its skepticism"

https://www.bleepingcomputer.com/news/google/google-chrome-a...


It's a technically feasible thing:

> The solution makes use of Web Packaging (which incorporates some clever use of cryptography) to allow the cache (run by Google, Cloudflare or others) to keep a copy of an AMP page and serve it quickly to the end user, but to also contain cryptographic proof of where the page originally came from.

Not sure if WebKit/safari have implemented this yet.


Safari and Firefox, last I saw, did not plan to implement it, ever. https://github.com/w3c/strategy/issues/171


Excellent!


> If this instead sends the user to your own domain, viewing your content which just happens to be hosted elsewhere

But it doesn’t. It sends them elsewhere and pretends it’s your domain.


Is that different than a CDN? This seems even better than a CDN, because the client verifies that the content is signed by the original server, where a normal CDN is free to modify content that goes through it.


A data-whoring mega-corp isn't forcing anyone to use a CDN, and a CDN doesn't mandate Javascript (and CSS to punish visitors who block that JS).


I work on AMP Real URL at Cloudflare and am happy to answer any questions (live from AMPConf in Tokyo)!

If you're looking for a super in-depth technical post which includes all sorts of wonderful bits about the engineering and cryptography involved don't miss https://blog.cloudflare.com/real-urls-for-amp-cached-content...

AMP Real URL was built by engineers Avery Harnish (started when Avery was an intern!) and Gabbi Fisher, and is built on top of our Workers [1] tech.

1- https://www.cloudflare.com/products/cloudflare-workers/


You should take a hard look at what you're doing. It's breaking the web by enabling Google to embrace, extend, and extinguish. AMP and Signed HTTP Exchanges aren't agreed upon web standards, and Mozilla calls the technology harmful.

https://mozilla.github.io/standards-positions/

https://www.zdnet.com/article/former-mozilla-exec-google-has...


I dislike getting AMP pages, and when I'm served one I can usually find the URL to get to the proper page manually.

Does Real URL mean that I can no longer do this? How can I find the URL of the page I really want to see?


The URL is now where you'd expect it -- in the address bar.

Copy and paste it to a new tab and you're off to the races.


What happens if you just focus in the address bar and hit enter, or hit the reload button? Either way, the user won't actually be sure where it's loaded from.

AMP is an objectively bad thing, designed to further strengthen googles control over the web.

I’ll be encouraging my customers who currently do or may have used cloudflare to use alternative solutions from now on.

Edit: removed weird extraneous “that”


How does the browser know to grab the real page or the AMP version? I never want to load an AMP page in my browser, and out of respect for others' privacy I never want to share a link to an AMP page. How can I do this with the new proposal?


Look for /amp in the URL.


So in order to get to a non-AMP version of the page, you have to duplicate the tab? The content will be different but the URLs will be the same? How is any ordinary web user supposed to understand this?


Correct, it's essentially the same as it is now, but easier as you don't have to worry about removing the "google.com/amp" prefix.
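People already do that prefix-stripping by hand; as an illustration, here's roughly the string surgery involved (the URL shapes are assumptions based on the common google.com/amp/ and google.com/amp/s/ cache forms, where "s/" marks an HTTPS origin):

```javascript
// Unwrap a Google AMP cache URL back to the publisher's own URL.
// Handles the common prefixes; anything else is returned unchanged.
function unwrapAmpCacheUrl(url) {
  const m = url.match(/^https?:\/\/(?:www\.)?google\.[^/]+\/amp\/(s\/)?(.+)$/);
  if (!m) return url; // not an AMP cache URL
  const scheme = m[1] ? "https" : "http"; // "/amp/s/" marks an HTTPS origin
  return `${scheme}://${m[2]}`;
}
```

Under Real URL this step goes away for the user, since the address bar already shows the publisher URL.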


I assume this means I’ll be able to use a browser extension to catch AMP requests and convert those requests into traditional requests?

Not making an AMP judgement in this comment, just asking if the technical capability exists.
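Today the capability exists, because the cache URL is visible: a 2019-era (Manifest V2) WebExtension can redirect AMP cache navigations by URL pattern. A rough sketch (untested against a real browser; note it keys off the google.com/amp/ URL shape, which is exactly what Real URL removes, so some other signal would be needed there):

```javascript
// Rewrite navigations to the Google AMP cache back to the publisher URL.
// "/amp/s/" in the cache URL conventionally marks an HTTPS origin.
function toPublisherUrl(url) {
  const m = url.match(/\/amp\/(s\/)?(.+)$/);
  return m ? (m[1] ? "https://" : "http://") + m[2] : null;
}

// Guarded so the helper above is testable outside a browser.
if (typeof chrome !== "undefined" && chrome.webRequest) {
  chrome.webRequest.onBeforeRequest.addListener(
    (details) => {
      const target = toPublisherUrl(details.url);
      return target ? { redirectUrl: target } : {};
    },
    { urls: ["*://*.google.com/amp/*"], types: ["main_frame"] },
    ["blocking"]
  );
}
```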


I'd be curious how an extension would know it's an AMP page in this new situation. Is there some new extension API or response property exposed?


Will this load an uncached copy of the AMP version, or the canonical URL as specified by `<link rel=canonical>`?


That is up to the website, some elect to use separate URLs for their AMP content, others serve a different version based on the requestor, others might be AMP-native and not have another incarnation.


How is it up to the website? Can they put a URL in their signed package for the browser to display that won't load the same content if you copy it and paste it into a new location field?


OK, thanks.


That's a good question: can you still access the original page, then?


Not sure why I got downvoted, but the question I responded to has been answered. The URL structure still has AMP in it, so one can access the original page at least.


Couple implementation questions:

Whose JavaScript/cookies run in a real URL AMP page, if any?

What happens if I hit refresh, does it reload the AMP page or the real page?

Can a webpage that isn't Google use real domain AMP pages? In that case, can their javascript influence the page at all? (ie change the look, put elements over it, make http requests)


Good questions:

> Whose JavaScript/cookies run in a real URL AMP page, if any?

The document operates as the signed origin, so cookies, CORS, etc. all operate as the signed origin (the one in the URL bar). The HTTP request is made using the request URL's origin, however, so the server delivering the signed exchange has no access to the signed origin's cookies.

> What happens if I hit refresh, does it reload the AMP page or the real page?

A refresh will cause the browser to make a normal HTTPS request to the origin in the URL bar. A refresh works identically to what it has in the past, essentially.

> Can a webpage that isn't Google use real domain AMP pages?

Yes. It is a spec that browsers can support, and any site can use. There is nothing Google specific, or even AMP specific, about the specification.

> In that case, can their javascript influence the page at all? (ie change the look, put elements over it, make http requests)

No. Conceptually, you can think of a signed exchange as a 301 redirect to a new URL which has already been cached by the browser (so there is no 2nd network event). The cache was populated by the contents of the signed exchange, assuming the signature validates.


Could Cloudflare accept a request header that would load up the full page instead of the AMP one? I suspect many people would use such a feature, and it would also be a good way to indicate to Google (and now it seems, Cloudflare as well) the dislike of AMP.


While it would present a fingerprinting vector, I would likely enthusiastically use a browser extension which effectively disabled AMP.


So is this basically the server okaying a replay attack? I'm looking through the technical post but it hasn't clicked yet.


That's kind of a funny way of thinking about it, but yes? Loading an AMP page is (hopefully) an action which isn't mutating any state, so I would say it's more similar to how a CDN works. You decide you are interested in having a cache respond to a given request in a specific way, and it responds to requests which look a certain way with that response.

In this case it's significantly more secure though, as the exact request and response are signed and a third-party you trust (your browser) is deciding if that signature matches.


Analogy: Squid caching with a signature from your SSL certificate that proves it was valid as of when you signed it, so that the browser can trust the Squid cache and display the URL that’s in the signed plaintext with a domain matching that of the certificate that signed the cache blob.

Today’s browsers trust all user-configured proxies implicitly and no other proxies at all. By providing a signed copy of the GET-only AMP content, it can be safely cached (the “replay attack”) without needing to trust the cache, because it’s signed plaintext.


> okaying a replay attack

When you permit a proxy to replay your content, it's just caching. It's not an "attack." (If the proxy can replay your content without your permission, that would be an attack.)


I believe AMP required inserting a piece of Google controlled and hosted JavaScript in your content from the very beginning. So the cat was pretty much out of the bag on this already.


So if you block this Google JS, you cannot access said AMP page? Does AMP implicitly mandate users subscribe not only to said content provider but the third-party AMP host?


Yes, you have to include it.

"AMP pages must...Contain a <script async src="https://cdn.ampproject.org/v0.js "></script> tag inside their <head> tag."

https://amp.dev/documentation/guides-and-tutorials/start/cre...

Their special tags[1] won't render without it, and I suspect Google won't include it in their SERPS if it's not valid AMP.

[1] https://amp.dev/documentation/guides-and-tutorials/learn/spe...


They're certainly not doing it without the website owner's permission. It's disingenuous to call it an attack.


Yes, that's fair. I meant more in terms of ceding control...that's a prerequisite for AMP, and always has been.


Is there any notion of expiration? If I present a signed page to a crawler, what’s to stop them and their partners from serving that page on behalf of my domain for all eternity, and browsers presenting it as valid and authorized by my domain? From retractions to right-to-be-forgotten, there are a billion reasons I would want the ability to disavow cached content.



What can I do to stop the spread of this harmful technology or at least never get served amp pages?


Very constructive comment here. Perhaps you can reword it to explain why you feel this technology is harmful, so OP can help address it.


Personally, I never want to see an AMP page and its ugly design and shitty controls. How do I make this happen? Is there something I can do that will make Cloudflare simply redirect me to the real web page instead?


It sounds to me like you're talking about the grey bar at the top of pages served from the AMP cache? This change is about using Web Packaging to remove that bar.

(Disclosure: I work at Google)


I think he, along with many others, wants Google to stop pushing this garbage on the internet.


In related news:

Google signed exchanges, Instant-loading AMP pages from your own domain

https://webmasters.googleblog.com/2019/04/instant-loading-am...


Found this more lively page, apologies if cross-posting a comment is not allowed:

Semi-related, I think Web Packages and Signed Exchanges could have some usefulness outside of Google's caches. One of their spec examples was for verifiable web page archives. Another idea is that it could be used for a wifi "drop box" (drop station?) where there's no internet connection around. That isn't uncommon at some popular spots up river into the woods in the US.

The idea is that as people enter the area, they can update the drop station automatically for things like news or public posts with whatever they've cached recently.

I'm pretty sure I read about this idea before the spec was drafted but I couldn't find or remember the site, something like vehicle-transported data.


I'm rather worried about how closely Cloudflare is working with Google in pushing their proprietary stuff. I would have hoped they would be more neutral and take a similar position as Mozilla.


I'll just keep using the Redirect AMP to HTML addon[1]. AMP links annoy me to no end.

[1] https://addons.mozilla.org/firefox/addon/amp2html


Who uses AMP? I've only ever heard of it on Hacker News but never seen it when using Google or any news websites. What actions does a normal user have to take to use AMP?


Nothing. By default, Google will show you the AMP page when you tap a search result if it’s available


It's not working for me. Do I have to use Chrome on Android? I just tested on Firefox Linux, Chromium Linux, Safari iOS, and Firefox Android.


Google anything news related, you'll see the symbol next to the URL you click, and you'll notice it loads rather quickly too.


I can't see the lightning bolt icon. Commented here https://news.ycombinator.com/item?id=19682479


Look for the lightning bolt when searching for news on Google on a mobile device.


I don't see it. Here's a screenshot on Firefox Android. https://i.imgur.com/vuJavS8.png All the links go to cnn.com, usatoday.com, etc.


Try the 'All' tab, specifically the carousel.



> Importantly your site is still being served from Google’s AMP cache just as before; all of this comes without any cost to your SEO or web performance.

Glad to see they've addressed AMP's main leverage point


I wonder if this will have any impact on AMP for Email as well.

https://emailinnovations.com/the-email-is-being-amp-ed/


Will the usability still suck? Can I tap the menu bar to go to the top?


Glad cloudflare and Google are now ramming through de facto web standards without consensus from the rest of web and browser developers...


>The promise of the AMP (Accelerated Mobile Pages) project was that it would make the web, and, in particular, the mobile web, much more pleasant to surf.

That's, ostensibly, the goal, but anyone with an independent mind can tell it's just about control.

>It was particularly aimed at publishers (such as news organizations) that wanted to provide the best, fastest web experience for readers catching up on news stories and in depth articles while on the move. It later became valuable for any site which values their mobile performance including e-commerce stores, job boards, and media sites.

What a harrowing paragraph.

>As well as the AMP HTML framework, AMP also made use of caches that store copies of AMP content close to end users so that they load as quickly as possible. Although this cache make loading web pages much, much faster they introduce a problem: An AMP page served from Google’s cache has a URL starting with https://google.com/amp/. This can be incredibly confusing for end users.

This wasn't an issue before TLS everywhere was pushed, was it? The same organization that pushed for encryption, no matter how useless, Google, is now there to solve the problem of caching encrypted pages, centrally of course.

>But the problems with the AMP cache approach are deeper than just some confusion on the part of the user. By serving the page from Google’s cache there’s no way for the reader to check the authenticity of the page; when it’s served directly from, say, the BBC the user has the assurance of the domain name, a green lock indicating that the SSL certificate is valid and can even click on the lock to get details of the certificate.

There's already no foolproof way to do that. Rather than checking that it's actually BBC and BBC has verified what it's delivering, you instead ask a third party if this is the BBC and if what it's sending is true.

>That signature is all a modern browser (currently just Chrome on Android) needs to show the correct URL in the address bar when a visitor arrives to your AMP content from Google’s search results.

So, just as with ''DNS over HTTPS'' and other nonsense, this is yet another thing they want to pile on.

>Importantly your site is still being served from Google’s AMP cache just as before; all of this comes without any cost to your SEO or web performance.

Only fools and cretins care about SEO and ''web performance'' is solved by not having so much JavaScript and actually bothering to optimize images you send.

>Brand Protection: Web users have been trained that the URL in the address bar has significance. Having google.com at the top of a page of content hurts the publisher’s ability to maintain a unique presence on the Internet.

Not violating people already trained is very important.

>Easier Analytics: AMP Real URL greatly simplifies web analytics for its users by allowing all visitors, AMP or otherwise, to coexist on the same tracking domain.

Anyone using anything more than HTTP server logs, which are already too revealing, is likely a fool.

>Increased Screen Space: Historically when AMP was used room would be taken for a “grey bar” at the top of your site to show the real URL. With AMP Real URL that’s simply not necessary.

This sentence is only possible due to a dearth of independent implementations, which communicates a great deal about this nonsense. Also, it's nice to see Google beginning to kill URLs as it wanted to; lying in the UI is a good first step.

>Content Signing: By relying on cryptographic techniques, AMP Real URL ensures that the content delivered to visitors has not been manipulated protecting the sites and brands it is used on. It’s now not possible for any external party to add, remove, or modify the content of a site.

Remember when Cloudflare was spewing private information all over the Internet?

>We are also taking this opportunity to sunset the other AMP products and experiments we have built over the years like Ampersand and Firebolt. Those products were innovative but we have learned that publishers value AMP products which pair well with Google’s search results, not which live outside it. Users of those older products were informed several weeks ago that they will be gradually shut down to focus our attention on AMP Real URL.

Don't worry about this being shut down for the next big thing, though.

>Our motivation is the same as for offering CDN or SSL services to millions of customers free of charge

You mean subverting the Internet through increasing centralization and also enabling mass-spying by the perversion of the very encryption that's ostensibly so important?

Does anyone actually think Cloudflare isn't a US government operation? They have so much hardware and they get so much support from these other companies that we know are paid off by the government.


I don't understand why AMP is targeted at the "mobile web". What exactly makes browsing the web on mobile different than other platforms?


The network is much slower and the CPU is much slower, so getting very good performance requires extreme optimization.


But here [1] it says "One of AMP's biggest user benefits has been the unique ability to instantly load AMP web pages that users click on in Google Search. Near-instant loading works by requesting content ahead of time". So AMP content is implicitly prefetched. How is this any different than regular prefetching?

And as far as mobile is concerned, the trivial optimizations that are available on desktop, such as firewalling by content type via e.g. uMatrix, are not at all advertised to end users. AFAIK Chrome browser on mobile does not allow such extensions. Page load times are significantly reduced by using such browser extensions effectively. Why skip to the "extreme" that is AMP?

[1] https://webmasters.googleblog.com/2019/04/instant-loading-am...


This, and in particular, the network has much higher _latency_ over mobile. AMP is aggressive about reducing the number of round trips between browser and server.


Extreme optimization that can be achieved without AMP.

Performance of web pages is heavily dependent on the amount of assets and their size. Hacker News loads extremely quickly without AMP, and the same can be achieved for other sites. HTML/CSS (with a small amount of JS if really needed) can achieve the same thing as AMP regarding the web page size and rendering time.

CDNs are well established and can be used to serve content from a nearby server, and HTTP/3 will reduce the number of round-trips needed.
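The round-trip argument above can be made concrete with some back-of-envelope arithmetic. The RTT value and per-protocol round-trip counts below are illustrative assumptions, not measurements, and the counts are simplified (ignoring resumption, connection reuse, etc.).

```python
# Rough model: time-to-first-byte is dominated by round trips on a
# high-latency mobile link. All numbers here are illustrative assumptions.
RTT_MS = 300  # a plausible round-trip time on a poor mobile connection

scenarios = {
    # DNS lookup + TCP handshake + TLS 1.2 handshake (2 RTT) + HTTP request
    "HTTP/1.1 over TLS 1.2": 1 + 1 + 2 + 1,
    # DNS lookup + combined QUIC transport/TLS handshake + HTTP request
    "HTTP/3 (QUIC)": 1 + 1 + 1,
    # content already prefetched into the browser before the click
    "prefetched (AMP-cache style)": 0,
}

for name, round_trips in scenarios.items():
    print(f"{name}: ~{round_trips * RTT_MS} ms before first byte")
```

Under these assumptions HTTP/3 cuts the cold-fetch wait substantially, while prefetching removes it entirely, which is the effect AMP's instant loading is trading on.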


If you have a round-trip time of 1 second, it will take you 1 second to load a text file with 1 byte in it. However, an AMP page will have loaded before you clicked, so it will take only a handful of milliseconds of CPU time to swap frames and display.


Like mentioned in another comment, HTML5 supports prefetching.


The problem with prefetching the publisher URL from a search results page is that it leaks the user's query intent to an origin they have not visited, which violates the user's privacy.

By prefetching a signed exchange from the same origin as the search results page, privacy is preserved. Once the user clicks on a result, the user intent can be shared with the third party origin, no problem, and is in the AMP cache.


> By prefetching a signed exchange from the same origin as the search results page, privacy is preserved

How so? Unless I misunderstand (which is entirely possible), all that this does is change where privacy is violated from the publisher to the search engine.


If a Google search results page instructs your browser to request some bytes to preload from a Google server, that request does not reveal anything new to anyone. Google already knows it instructed your browser to preload, so knowing that your browser mechanically followed that request tells it nothing new about you or your behavior that it didn't already have another way to know.

Let's consider the alternative. Imagine you searched for [headache] and a preload request was made to mayoclinic from your browser for their headache document. Your browser when making that fetch would send to mayoclinic your ip, any stored mayoclinic cookies, and the document URL that you prefetched (not the precise query, but the approximate query is easy to guess). This is sent to mayoclinic _even if_ you never click on that document at all, which is not what you would expect privacy-wise.

Once you do click, mayoclinic can very easily log this visit even if the document bytes were preloaded from elsewhere. They can use javascript analytics or simply even load an image on their server (https://amp.dev/documentation/components/amp-pixel). And you as a user are not surprised that clicking on mayoclinic shares your interest in the document with mayoclinic.


> Let's consider the alternative

Ah, I understand now, thank you.

I guess the problem that I have is with the preload. Without that, then neither Google nor the Mayo Clinic would get that data.

So, no preload for me.


[flagged]


Ask a non technical publisher type person what they think about AMP, privately.


They usually complain profusely about how they aren't allowed to install their flavor of the month new tracking scripts. As if the last 20 weren't enough.



