And oh yeah:
> AMP Real URL is only supported in the Chrome browser at this time, but we are optimistic it will be supported more widely as its benefit to Internet users becomes clear.
Who needs web standards, right? 
 Especially egregious because it makes SSL identity validation even more complex than it was before. I'm sure this will make getting security "right" even harder.
 Or in this case a new barrier to competition:
> After speaking with publishers and with Internet users, we have decided not to charge for AMP Real URL. This is not because our customers haven’t been excited or willing to pay for it, AMP makes up a huge component of many site’s traffic. Our motivation is the same as for offering CDN or SSL services to millions of customers free of charge, we are here to help build a better Internet and improving AMP is a huge step in that direction.
 dfabulich (below) has correctly pointed out that this is in the process of being standardized: https://tools.ietf.org/html/draft-yasskin-http-origin-signed...
Despite this, Firefox has currently marked this proposal as "harmful" (https://mozilla.github.io/standards-positions/). It seems to me as though Google may be ramming this one through despite objections.
No one needs to follow web standards when one browser gets so big that it can completely dominate the technology and the corporation that runs it can force anything they want down everyone's throats! If you don't want to see things like this happening (and you shouldn't, AMP is a disgusting technology), stop using Netscape, err, I mean, Chrome.
* Google (Blink → Chrome, Edge)
* Apple (WebKit → Safari)
* Mozilla (Gecko → Firefox)
If your browser has a bunch of users, and if you can (and do) opt to block features in your fork, you're a major browser.
Microsoft has ripped a fair amount of stuff out of Chromium already.
Google maintained their own fork of WebKit with countless feature deviations from the main tree. If Microsoft builds a track record of maintaining their own fork to hold back or modify actual web standard proposals written into the Blink engine, then I would agree: they would regain eligibility for that list.
Of course, I'd rather have it this way, as MS at least gets a browser that keeps pace with the rest of the world. Edge would only reach parity about twice a year, and it has often been the biggest of my headaches lately. Fortunately I no longer have to support any version of IE (only browsers with async function support).
The list is descriptive, not prescriptive.
If they haven’t flexed their muscle, it doesn’t matter if they’re on the list or not. If they do assert that influence, they’re on the list.
This new thing (signed HTTP exchanges) is an IETF draft. https://tools.ietf.org/html/draft-yasskin-http-origin-signed...
As usual, Chrome is the first browser to implement this, but they're participating in the normal standards process. (Microsoft is in favor. The Chrome team generally ships stuff if at least one other major browser vendor approves.)
Turning it on when they know there's no consensus is breaking the web standards process.
There may indeed be valid issues with the standard, but their current rationale for marking it as "harmful" isn't technically sound.
Claims like "Many of the sites we have spoken to get as much as 50% of their web traffic through AMP" ignore the fact that this is only true because it's been forced down our throats without our consent.
Anyone implementing an AMP-based technology which doesn't come with a way for users to decline to participate is actively harming the open web. I like Cloudflare and a lot of what it does, but I'm really disappointed in them today.
If AMP is federated, that's a lot less disturbing to me than my first impression.
Anyone can cache a signed exchange from anyone else. So, for example, if you fetched a signed exchange from https://amppackageexample.com/ (or any other site that supports one; this is just an example), you could then serve it from your own server, more or less just like any other file (the "less" being that you need to set the right Content-Type header; otherwise it works just like serving an image or a zip file).
Then, if a user visited the URL on your site https://yoursite.com/cached-copy-of-amppackageexample.com/ then the browser would display https://amppackageexample.com/ in the URL bar, as though that URL had 301 redirected, but without the extra network fetch.
Google search does exactly this, just loading a cached copy of the Signed Exchange, and any other cache (or even any website) can do the same.
So, if you publish a signed exchange, you are allowing all caches to do this, not just a selected cache. However, since the document is signed, no cache can modify it; if one does, the browser will reject it and fall back to error handling (typically just issuing an actual redirect to your domain).
I hope that helps.
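To make the "just like serving any other file" point concrete, here's a minimal sketch of the header selection involved. The filename is hypothetical; `application/signed-exchange;v=b3` is the media type used by the draft revision Chrome shipped (check the current draft before relying on the version parameter):

```python
# Minimal sketch: a previously fetched signed exchange (.sxg) is served
# as a static file; only the Content-Type header needs special handling.
SXG_CONTENT_TYPE = "application/signed-exchange;v=b3"  # per the b3 draft

def response_headers(path: str) -> dict:
    """Pick response headers for a static file; .sxg files get the
    signed-exchange media type, everything else a generic fallback."""
    if path.endswith(".sxg"):
        return {"Content-Type": SXG_CONTENT_TYPE}
    return {"Content-Type": "application/octet-stream"}

headers = response_headers("cached-copy-of-amppackageexample.com.sxg")
print(headers["Content-Type"])  # application/signed-exchange;v=b3
```

Beyond that, the bytes are opaque to the serving host, which is the whole point.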
In short, only being in Google's AMP cache gets you good rankings on Google Search.
Bing also hosts an AMP Cache if that's of interest to you.
AMP is federated. Anybody who wants instant loading of AMP pages linked from their own pages can run their own AMP cache, as Bing and Baidu (among others) do.
> If you use a different AMP cache than Google's, does it affect your search ranking/carousel positioning with them?
Bing and Baidu running their own AMP caches doesn't affect their Google ranking. Why would it?
It's hard to build a great website, particularly one which works well on mobile. AMP as a web framework makes it easier. There's nothing about that value which is intrinsically linked to any particular way of rendering AMP; a good chunk of the web would be better off building on AMP independent of Google. That doesn't mean every site should: there are developers and teams who are capable and willing to build great sites, and we support their right not to have to use AMP. The need to use AMP-the-framework to get instant loading is hopefully a temporary one, partially fixed by signed exchanges.
The AMP cache is also made much less concerning and painful for users through the use of signed exchanges. By ensuring the content is what was provided by the origin we believe the Internet is becoming a more trustworthy place, opening the door to sites being able to be served from wherever they will be most performant.
The problem is, there really isn't an "AMP independent of Google", as Google's implementation is the only thing that matters, due to Search and Chrome. Supporting Google's proprietary fork of HTML is more or less supporting that entire structure, and it is inherently harmful. Whilst they have some notion of "governance" for the project, the plurality of participants are Google and they've confirmed that Google isn't really beholden to follow the project's governance with the implementations they put into their own products anyways.
For the most part, AMP is a subset of HTML with proprietary tags, but could be easily implemented in normal HTML. The only reason to implement AMP is to format your website to be compatible with Google's demands for special treatment in Search. (A benefit that, while publishers may aspire for today, becomes increasingly useless as the entire Internet is forced over to the same proprietary format.)
Various blogs have covered samples of how loading AMP is actually worse than loading their own websites in a lot of cases, but they're forced to implement it anyways because of that Google Search incentive. Surely the correct solution for mobile developers is to teach people to use restraint in piling on extra crud, not bootstrap some extra crud that prevents them from using any other crud.
And essentially, from the responses here and in almost every previous discussion about AMP, and literally Twitter feeds made just to retweet examples of AMP hatred: Nobody wants this. Everyone who wants it is using it because they have no choice. It's forced on publishers to maintain competitiveness in search, and it's forced on users who have no way to decline.
Probably what ground my gears the most about your blog, to be honest, is the cheerful, supportive tone for what is inherently a harmful technology that was implemented in a closed fashion. Compare to how FastMail acknowledged that they may have to support AMP for Email, whom I'm not upset with: https://fastmail.blog/2018/02/14/email-is-your-electronic-me...
If you have to do this for business reasons, tell us that, don't try and tell us AMP is good for the world, when we know it isn't.
Any time you have to rely on lots of disparate people, all potentially under wildly different pressures and facing diverse incentives, to make the same smart choices... you may be facing an uphill battle.
We've tried to rely on exactly what you call for. It's such an obviously, clearly, good idea. Painfully so. We've been trying it for quite a long time by now. It's difficult to call it a success.
Openness, flexibility, and interoperability are great and wondrous things. It's a tragedy, then, that so many use the tools and powers granted them by these virtues so appallingly poorly.
> The need to use AMP-the-framework to get instant loading is hopefully a temporary one partially fixed by signed exchanges.
You know what would really fix this? Developers doing their work. Deeply knowing and understanding the consequences of their work and how everything comes together in the end and leads to a slow website or not.
We don't need to put even more power in the hands of Google, because we (as in "some developers") don't know how to do our job right and therefore punish the internet as a whole.
There is no valid reason for the use of AMP. It's only there because a) developers fucked up and b) Google forces you to use it. If we would fix a), I'm not sure b) would even be a valid point anymore.
That's merely replacing one problem by another, which is: how do you get developers to do their work?
I think most people these days are in it for the money and not for "the challenge", which is why we see so much mediocre software. On the other hand, there is excellent software too, so there are people who really know what they are doing.
This is a wonderful idea! It's tragic, then, that nobody has thought to try this for the past several decades...
If you believe in a web technology go through the process and champion a draft standard. It can start out as an experimental flag enabled feature or vendor prefixed feature before it is adopted as a web standard. But don't ram this shit down our throats...
Also let's talk about the obvious business incentives for data collection both Google and cloudflare have for serving up AMP pages.
> as 50% of their web traffic through AMP
> forced down our throats without our consent
Who, precisely, is forcing publishers to use AMP?
Bear in mind, publishers hate AMP as much as everyone else: It forbids them to do what they want with their website. AMP is the technology nobody wanted and nobody asked for, users and publishers alike.
Users do want AMP-like technologies. I want the browser to restrict publishers as much as possible (while keeping their sites "useful" for some definition of the word; static markup such as text, images, and some background/styling is definitely useful). I actively do that by preventing them from auto-playing videos, using adblocking, etc.
However, Google-controlled AMP is a travesty. I don't want to view websites on Google. I want the original URLs, I want to be able to bookmark it and share it, I want to be able to go to the non-mobile website etc.
Keep AMP, banish Google.
Secondly: we already have restricted views and formats for websites, and they're far superior to AMP. Things like RSS, which most websites already support and which I use daily, and Reader View, available in both Firefox and Edge, which strips out everything but the core page content. These already exist and are superior to AMP, except in one key area: AMP is built to serve Google Ads, whereas RSS and Reader View strip them out. AMP is not a good technology for users; AMP is a technology to protect Google and give them more control over the web.
I understand, which is why I'm pro AMP but anti Google-specific AMP.
Personally, I don't want to get a gimped version of a web page, I want the real thing. Yes some websites suck, but at least I'm getting the definitive copy. AMP versions often miss content or have weird layouts.
Sadly, I also really like Google Search—their results are much better than Bing or Duck Duck Go in my experience—so I don't have a choice in the matter.
Some users do. Some don't. I certainly don't want AMP-like technologies, regardless of where the cache is.
> I want the browser to restrict the publishers as much as possible
In the browser where I can control it, sure. I don't disagree at all. On a third party server where I have no control over it, no.
And quite often fast loading speed can be achieved by merely removing the bloat from your pages.
Cloudflare != Google
That claim is gonna need evidence, and I've seen no evidence so far. Please provide it if you've got it.
Google does prefer sites that are fast to sites that are slow, ceteris paribus.
So if AMP sites are faster, maybe they get upranked due to performance, but not just because they're AMP.
So if a publisher doesn't like AMP they don't have to use it, they just need to make their sites faster on mobile.
Are you sure? It seems to me that AMP pages totally dominate the results. And Google explicitly says they consider performance, which means they have come up with some metrics they consider to represent performance, one of which could effectively be "is it AMP?". AMP pages do have automatic performance advantages: Google often prefetches the content before you've tapped it, and they render the content inline instead of navigating you to it.
As for their metrics on page speed, they've had objective measurements for some time. Lighthouse is their go-to solution now for measuring a website's speed.
As for AMP's ability to be pre-fetched and rendered, this benefit doesn't show in Lighthouse and only from search. For that reason I would bet it isn't included as a ranking factor either. But of course that's the "secret sauce", so we can only go off of their word.
I don't think that Google's word can be trusted. They may be correct, but they may not be. Evidence either way would be welcome.
Google rolls out their carousel, pushing everything else down the fold.
Then they roll out AMP.
Then only AMP results go in the carousel.
So, it's optional if you don't mind losing the traffic.
I can guarantee you that the majority of internet users like AMP. It's only developers and hardcore techies that don't.
But why doesn't Google do the one thing that would render this entire argument moot? Provide a mechanism that allows users to opt-out of getting AMP pages entirely.
Microsoft is in favor.
Edit: And the spec has since gotten multi-vendor interest. From Microsoft:
> We're excited about the potential for this feature set to resolve some of the performance and privacy problems of alternative approaches, and we have been talking to publishers who are interested in utilizing these technologies to provide accelerated experiences.
For now, his tweet is all the signal we have from Apple.
15 months :)
In all fairness I have to agree with Google on this one. Mozilla's current objections don't really make sense.
On the other hand, AMP Real URL is based on Signed HTTP Exchanges, which allow one site to send a cached copy of someone else’s site, in a much more straightforward manner. In theory, Google could now drop the bulk of what’s now known as “AMP” and cache arbitrary pages that indicate their willingness to be cached. That they’re instead integrating this with AMP suggests they may not drop it, which would be unfortunate. On the other hand, since Signed HTTP Exchanges will remain a Chrome-only feature for the foreseeable future, it’s arguably a bit early to expect Google to make that kind of commitment.
Maybe I'm just old, but I don't understand why this is a worthwhile goal. Even with its bloat the internet today is much, much faster than it was 15 years ago and I'm quite happy to wait a few seconds for a page to load. In addition, doesn't pre-fetching every result just waste bandwidth on mobile?
But personally, I’m a sucker for low latency across all of tech (and gaming as well). That’s why I use Safari instead of Chrome, Terminal.app instead of iTerm, C++ instead of Rust sometimes (compile times), and basically anything else instead of Java (startup time), among other preferences. I even prefer reading ebooks on normal screens rather than e-ink displays, simply because I don’t want to wait between pressing “next page” and seeing the page show up. Not surprisingly, then, I really appreciate website snappiness in general, including links that load instantly due to prefetching. I can’t say I appreciate AMP as it exists today, because the quick load comes with a host of UX issues, but I have hope for the future.
Take that perspective how you will. I think I’m a bit of an outlier in just how much latency bothers me, but pretty much everyone consciously or subconsciously appreciates when it goes down. Probably including you: the current speed might seem “fast enough” now, but if faster loading becomes the norm and the other issues are dealt with, I bet you’d have a hard time going back.
Isn't this exactly what the distributed web needs? It's a massive boon to IPFS (a content hash can still show the proper origin name), a big blow to censorship (a censored website could spread its content to a thousand different servers, each served over HTTPS, and viewers would still see the original URL from whichever cache they access), a path to consensual permanent archiving, and much more.
- Who decides which HTTPS certs are valid? (Followup: Who decides which of those your browser considers valid?)
- Who operates the browsers you'd use to view this content which sees the original URL, and have any of those companies deplatformed content on behalf of government requests?
Which is to say, web packaging looks somewhat decentralized at a glance but arguably still leaves the same handful of companies entirely in charge of deciding what you view and how you view it. And it's entirely dependent on your browser's developers being ethical and trustworthy, and choices on what browser you use has just shrunk significantly in the past month alone.
Wait wait wait, are you telling me signed exchanges maintain the status quo on a problem they aren't intended to solve?
Is the dig (emphasized above) about what constitutes “a modern browser” really necessary? Is a modern browser now whichever one that supports something you like?
Many countries, including India, are just now getting widespread broadband rollout. Due to its infancy, many ISPs have data caps and may even be delivering data over 4G/LTE to homes. With AMP, Google and CF have the privilege of serving all of this (search-driven) traffic from datacenters within the same country, or at least the same continent, as the visitor. If all of this content were really served from the origin, latency would be considerably worse, since the data would have to go through undersea cables and suffer whatever performance issues come with bad routing.
You could also say the Data Saver web proxy they run is to encourage these users to browse the web without worrying as much about their phone data bill.
Google really wants everyone in these countries to be using and depending on the Internet just as much as Americans use and depend on it, otherwise they may miss out on advertising $$$ potential from a country that's ~4.5 times more populated than the United States.
Mozilla has marked Signed HTTP Exchanges as harmful.
The only reason signed HTTP Exchanges are a thing is because Google is trying to solve a problem with user experience (the URL bar). AMP and exchanges are just a different protocol and method of hosting content on a CDN. In this case, you are forced to reduce your page size and you delegate your HTML to be loaded by a third-party, contrary to that of a traditional CDN where you would (for example) create a CNAME in your DNS.
What, exactly, have they shipped that was considered harmful?
> AMP HTML documents MUST
> contain a <script async src="https://cdn.ampproject.org/v0.js"></script> tag inside their head tag.
That is not a web standard.
I think people understand this. The issue is that AMP is forced on people who don't want it. I have no problem with AMP being provided for people who do.
A method of opting out would resolve this.
I feel like AMP started out this way at the core, "How can we ensure these web pages load fast and aren't eating up these users' data plans?", but then the dreaded parts of AMP like search rank preference, Google being the one in the URL bar, etc. were afterthoughts that were put in to make AMP widely adopted and to increase their stranglehold on the Internet as a whole.
Speaking for myself, that's one large problem. The other large problem is that I dislike the AMP pages themselves.
It's like NaCl/PNaCl - a good idea in theory, but created by a team that didn't do a great job at speccing something that could be long lived and satisfy other players than Google+Chrome.
> Resources such as images, videos, audio files or ads must be included into an AMP HTML file through custom elements such as <amp-img>
Amp Specific Tags
It's not 2010 HTML, that's for sure, but so what? Are you complaining about AMP, or about HTML5?
: Spec: https://developer.mozilla.org/en-US/docs/Web/Web_Components/...
: Examples: https://www.webcomponents.org/
HTML has always tolerated out-of-spec tags, because it was originally designed as an application of SGML (defined by an SGML DTD). Since then, new tags were regularly added (and generally tolerated by older browsers), but the practice of defining new tags in a DTD has gradually stopped. Web Components accomplish a similar role, but replace the declarative language of a DTD with a requirement to run a Turing-complete language. Making the definition of new tags undecidable is a powerful way to tightly couple browser implementations under Google's control.
Note that custom elements are themselves a standard, there is no "extending" here. There is just applying the web elements spec in a way some people dislike.
Other subsets of HTML failed magnificently, like XHTML Basic and its siblings.
It's like complaining that BibTeX, LaTeX, AMSTeX et al aren't "TeX".
In the original incarnation of SGML, the whole point of the DTD and DSSSL was to be able to define new elements, and specify how they are handled. This seems perfectly legitimate and part of the HTML standard.
There may be other problems with AMP, but creating new high-level elements that resolve to standard HTML elements doesn't seem to be one of them. Arguing over whether something is named 'amp-img' or 'img' plus some detailed validator that throws errors if you try to do anything off the rails with <img> seems like bike-shedding over naming. If anything, I'd argue that giving a function a new name to limit its inputs is clearer and more readable than only adding precondition checks.
If your objection is that the ampjs has to be loaded from Google's CDN, well, the whole point of Signed Exchanges/Web packaging is to move to a world where AMP caches are federated and you can host this stuff elsewhere.
The only real objection that has merit IMHO is that Google Search should rank these things based on performance, not on amp validation alone, so if you authored your own AMP-like framework, it could be similarly ranked.
The important part isn't "this", it's "somebody else's". AMP is very specific not about what the JS is, but rather where it is, and that you must include it using their mandated url, not your local copy.
I'm just not comfortable with the idea that "you must include this opaque thing that we might change" be advertised as a standards-based movement.
I have no issue with Google pushing something like AMP. I just don't like the charade that it's some sort of open movement.
To me, there's been a lot of false starts in the Browser platform, a lot of hacks, and 'worse is better', until we finally get something good.
Remember AppCache/Application Manifests? It was horrible. Now we have Service Workers. Or the gazillion variations of <link> prefetch/subresource/prerender/etc
I think eventually we'll end up at a good place that doesn't have a centralized requirement. And I don't (being a Google employee), see these designs as deliberate attempts to arrogate power and control, but more like expediency. AMP was a reaction to Apple News and Facebook Instant putting publishers in non-Web silos. You could argue the proper thing to do would have been for the W3C/WHATWG to design something by committee, then wait a year or two for all browsers to ship, and then get the publishers on board.
What AMP essentially did was ship a polyfill fix for something on existing web technologies to get publishers to do something they could have done on their own, but didn't, with the eventual real fix coming later.
For whatever reason, publishers seemed unwilling to fix their mobile performance using standard practices, and invited a takeover of native mobile silos. I see AMP as a kind of holding action to preserve federated mobile web publishing, until something better came along.
You have to ask yourself, what's worse, using HTML Web Components and a JS framework library, or having major publishers choosing Apple News or Facebook Instant as their primary publishing mechanism on mobile?
Not the crypto library https://nacl.cr.yp.to/
PNaCl was a "lazy" attempt at making this cross-platform by leveraging LLVM's intermediate format which was never designed to be a "format" like that.
WebAssembly is 10x better than either of those formats and you can see the benefits of having multiple players design something to work for a long-term horizon.
Also don't miss the bit about "currently just Chrome on Android"
I honestly would prefer what you are suggesting, even if that meant using iframe on the original domain to host the AMP cache from Google.
1. example.com/long-url with content and meta tag specifying AMP url (2) below.
2. example.com/long-url/amp that embeds a full-height 0-margin iframe to google.com/amp/example.com/long-url and has no external styles/JS.
3. Google caches content from (1) example.com/long-url and when user clicks it, directs them to (2) example.com/long-url/amp.
Other than one extra HTML page, everything is from Google’s AMP cache. The browsers do not need to lie. And if Cloudflare hosts the (2) AMP page, it can be as fast as the rest of their services.
The document is digitally signed by the publisher, using the publisher's own private key on the publisher's own server. This signature is then verified by your browser on the other end and verified against a CA issued certificate. The intermediaries don't matter in terms of the content, they are just a pipe at this point that can optimize network paths and allow for prefetching of the bytes.
The specification only allows a short lifetime for signed documents (7 days maximum, configurable to be shorter by the publisher), preventing long-lived caching drift. Refreshing will reload from the origin directly.
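The lifetime rule above can be sketched as a simple validity check. This is only an illustration of the 7-day cap; the field names are hypothetical, not the spec's wire format:

```python
from datetime import datetime, timedelta

MAX_LIFETIME = timedelta(days=7)  # spec maximum for a signature's lifetime

def signature_valid(signed_at: datetime, expires_at: datetime,
                    now: datetime) -> bool:
    """A signature is usable only if 'now' falls inside its validity
    window, and the window itself may never exceed the 7-day cap."""
    if expires_at - signed_at > MAX_LIFETIME:
        return False  # publisher requested too long a lifetime
    return signed_at <= now < expires_at

t0 = datetime(2019, 4, 16)
assert signature_valid(t0, t0 + timedelta(days=6), t0 + timedelta(days=3))
assert not signature_valid(t0, t0 + timedelta(days=8), t0 + timedelta(days=1))
```

Once the window lapses, the cached copy is useless and the browser goes back to the origin.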
I do, because the domain being displayed isn't the domain that my browser has contacted. Whether or not the content is guaranteed to be unchanged isn't relevant to this.
It reduces my ability to trust my browser.
That means that when I go to a site that is using a CDN, my browser isn't lying to me about the domain I'm contacting. It is correctly reporting that information. Whatever routing happens behind that domain is a different issue.
The router can't read the content; HTTPS provides both integrity and encryption.
This is different from using CNAME example.com to point to cloudflare.com to host your site on it because then Cloudflare is the actual host for your site. With AMP Real URL, you never know which site is actually involved anymore because the browser “lies” to you.
Is this a chrome thing? I don't notice it on safari.
I say “lie” because while the cryptographic signature gives Google/browsers authorization to display the original URL, the original domain is not involved in serving the request for each user. The original domain could be down, or could have changed the content, while the AMP Real URL cache still shows old data. Copy-pasting the URL into a new tab will show different data, fetched from the original domain. The same URL should not show different data in two tabs. I understand that cached data can always be stale, but hitting refresh in tab 1 vs tab 2 will show two different but individually consistent things. That is weird.
"It is worth noting that SXG is already supported by the Opera web browser, still under evaluation by the Microsoft Edge team, while Mozilla Firefox considers it harmful, and the Safari team already expressed its skepticism"
> The solution makes use of Web Packaging (which incorporates some clever use of cryptography) to allow the cache (run by Google, Cloudflare or others) to keep a copy of an AMP page and serve it quickly to the end user, but to also contain cryptographic proof of where the page originally came from.
Not sure if WebKit/safari have implemented this yet.
But it doesn’t. It sends them elsewhere and pretends it’s your domain.
If you're looking for a super in-depth technical post which includes all sorts of wonderful bits about the engineering and cryptography involved don't miss https://blog.cloudflare.com/real-urls-for-amp-cached-content...
AMP Real URL was built by engineers Avery Harnish (started when Avery was an intern!) and Gabbi Fisher, and is built on top of our Workers tech.
Does Real URL mean that I can no longer do this? How can I find the URL of the page I really want to see?
Copy and paste it to a new tab and you're off to the races.
AMP is an objectively bad thing, designed to further strengthen Google's control over the web.
I’ll be encouraging my customers who currently do or may have used cloudflare to use alternative solutions from now on.
Edit: removed weird extraneous “that”
Not making an AMP judgement in this comment, just asking if the technical capability exists.
What happens if I hit refresh, does it reload the AMP page or the real page?
The document operates as the signed origin, so cookies, CORS etc all operate as the signed origin (the one in the URL bar). The HTTP request is made using the request URL's origin however, so the server delivering the signed exchange has no cookie access to the signed origin's cookies.
> What happens if I hit refresh, does it reload the AMP page or the real page?
A refresh will cause the browser to make a normal HTTPS request to the origin in the URL bar. A refresh works identically to what it has in the past, essentially.
> Can a webpage that isn't Google use real domain AMP pages?
Yes. It is a spec that browsers can support, and any site can use. There is nothing Google specific, or even AMP specific, about the specification.
No. Conceptually, you can think of a signed exchange as a 301 redirect to a new URL which has already been cached by the browser (so there is no 2nd network event). The cache was populated by the contents of the signed exchange, assuming the signature validates.
In this case it's significantly more secure though, as the exact request and response are signed and a third-party you trust (your browser) is deciding if that signature matches.
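The "cached 301" mental model can be made concrete with a toy sketch. Note the hedge: the real spec signs the exact request and response with the publisher's certificate key; the HMAC and every name below are stand-ins for illustration only:

```python
import hashlib
import hmac

PUBLISHER_KEY = b"publisher-private-key"  # stands in for the real cert key

def sign_exchange(url: str, body: bytes) -> dict:
    """Publisher side: bundle URL + response body + a signature over both."""
    sig = hmac.new(PUBLISHER_KEY, url.encode() + body,
                   hashlib.sha256).hexdigest()
    return {"url": url, "body": body, "sig": sig}

def load_from_cache(exchange: dict, verify_key: bytes) -> tuple:
    """Browser side: if the signature checks out, behave like a 301 to
    exchange['url'] whose destination is already cached (no 2nd fetch).
    Otherwise, fall back to a real redirect to the origin."""
    expected = hmac.new(verify_key, exchange["url"].encode() + exchange["body"],
                        hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, exchange["sig"]):
        return ("display", exchange["url"], exchange["body"])
    return ("redirect", exchange["url"], None)

ex = sign_exchange("https://amppackageexample.com/", b"<html>hello</html>")
assert load_from_cache(ex, PUBLISHER_KEY)[0] == "display"

ex["body"] = b"<html>tampered</html>"  # any modification breaks the signature
assert load_from_cache(ex, PUBLISHER_KEY)[0] == "redirect"
```

The cache that delivered the bytes never appears in this check at all, which is why the intermediary doesn't need to be trusted.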
Today’s browsers trust all user-configured proxies implicitly and no other proxies at all. By providing a signed copy of the GET-only AMP content, it can be safely cached (the “replay attack”) without needing to trust the cache, because it’s signed plaintext.
When you permit a proxy to replay your content, it's just caching. It's not an "attack." (If the proxy can replay your content without your permission, that would be an attack.)
"AMP pages must...Contain a <script async src="https://cdn.ampproject.org/v0.js"></script> tag inside their <head> tag."
Their special tags won't render without it, and I suspect Google won't include it in their SERPs if it's not valid AMP.
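For reference, a minimal AMP page built around that required tag looks roughly like this (the canonical URL is a placeholder, and the mandatory `<style amp-boilerplate>` rules are elided):

```html
<!doctype html>
<html amp>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
  <!-- The tag the quoted requirement refers to: -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <link rel="canonical" href="https://example.com/article.html">
  <!-- The spec also mandates a <style amp-boilerplate> block here (elided). -->
</head>
<body>
  <h1>Hello, AMP</h1>
</body>
</html>
```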
(Disclosure: I work at Google)
Google signed exchanges,
Instant-loading AMP pages from your own domain
Semi-related, I think Web Packages and Signed Exchanges could have some usefulness outside of Google's caches. One of their spec examples was for verifiable web page archives.
Another idea: it could be used for a Wi-Fi "drop box" (drop station?) where there's no internet connection around. That isn't uncommon at some popular spots up river into the woods in the US.
The idea is that as people enter the area, they can update the drop station automatically for things like news or public posts with whatever they've cached recently.
I'm pretty sure I read about this idea before the spec was drafted, but I couldn't find or remember the site; something like vehicle-transported data.
Glad to see they've addressed AMP's main leverage point
That's, ostensibly, the goal, but anyone with an independent mind can tell it's just about control.
>It was particularly aimed at publishers (such as news organizations) that wanted to provide the best, fastest web experience for readers catching up on news stories and in depth articles while on the move. It later became valuable for any site which values their mobile performance including e-commerce stores, job boards, and media sites.
What a harrowing paragraph.
>As well as the AMP HTML framework, AMP also made use of caches that store copies of AMP content close to end users so that they load as quickly as possible. Although these caches make loading web pages much, much faster, they introduce a problem: An AMP page served from Google’s cache has a URL starting with https://google.com/amp/. This can be incredibly confusing for end users.
This wasn't an issue before TLS everywhere was pushed, was it? Google, the same organization that pushed for encryption no matter how useless, is now here to solve the problem of caching encrypted pages, centrally of course.
>But the problems with the AMP cache approach are deeper than just some confusion on the part of the user. By serving the page from Google’s cache there’s no way for the reader to check the authenticity of the page; when it’s served directly from, say, the BBC the user has the assurance of the domain name, a green lock indicating that the SSL certificate is valid and can even click on the lock to get details of the certificate.
There's already no foolproof way to do that. Rather than checking that it's actually the BBC and that the BBC has verified what it's delivering, you instead ask a third party whether this is the BBC and whether what it's sending is true.
>That signature is all a modern browser (currently just Chrome on Android) needs to show the correct URL in the address bar when a visitor arrives to your AMP content from Google’s search results.
So, just as with ''DNS over HTTPS'' and other nonsense, this is yet another thing they want to pile on.
>Importantly your site is still being served from Google’s AMP cache just as before; all of this comes without any cost to your SEO or web performance.
>Brand Protection: Web users have been trained that the URL in the address bar has significance. Having google.com at the top of a page of content hurts the publisher’s ability to maintain a unique presence on the Internet.
Not violating people already trained is very important.
>Easier Analytics: AMP Real URL greatly simplifies web analytics for its users by allowing all visitors, AMP or otherwise, to coexist on the same tracking domain.
Anyone using anything more than HTTP server logs, which are already too revealing, is likely a fool.
>Increased Screen Space: Historically when AMP was used room would be taken for a “grey bar” at the top of your site to show the real URL. With AMP Real URL that’s simply not necessary.
This sentence is only possible due to a dearth of independent implementations, which communicates a great deal about this nonsense. Also, it's nice to see Google beginning to kill URLs as it wanted to; lying in the UI is a good first step.
>Content Signing: By relying on cryptographic techniques, AMP Real URL ensures that the content delivered to visitors has not been manipulated, protecting the sites and brands it is used on. It’s now not possible for any external party to add, remove, or modify the content of a site.
Remember when Cloudflare was spewing private information all over the Internet?
>We are also taking this opportunity to sunset the other AMP products and experiments we have built over the years like Ampersand and Firebolt. Those products were innovative but we have learned that publishers value AMP products which pair well with Google’s search results, not which live outside it. Users of those older products were informed several weeks ago that they will be gradually shut down to focus our attention on AMP Real URL.
Don't worry about this being shut down for the next big thing, though.
>Our motivation is the same as for offering CDN or SSL services to millions of customers free of charge
You mean subverting the Internet through increasing centralization and also enabling mass-spying by the perversion of the very encryption that's ostensibly so important?
Does anyone actually think Cloudflare isn't a US government operation? They have so much hardware and they get so much support from these other companies that we know are paid off by the government.
And as far as mobile is concerned, the trivial optimizations that are available on desktop, such as firewalling by content type via e.g. uMatrix, are not at all advertised to end users. AFAIK Chrome browser on mobile does not allow such extensions. Page load times are significantly reduced by using such browser extensions effectively. Why skip to the "extreme" that is AMP?
Performance of web pages is heavily dependent on the amount of assets and their size. Hacker News loads extremely quickly without AMP, and the same can be achieved for other sites. HTML/CSS (with a small amount of JS if really needed) can achieve the same thing as AMP regarding the web page size and rendering time.
CDNs are well established and can be used to serve content from a nearby server, and HTTP/3 will reduce the number of round-trips needed.
By prefetching a signed exchange from the same origin as the search results page, privacy is preserved. Once the user clicks on a result, the user's intent can be shared with the third-party origin without issue, and the content is already sitting in the AMP cache.
How so? Unless I misunderstand (which is entirely possible), all that this does is change where privacy is violated from the publisher to the search engine.
Let's consider the alternative. Imagine you searched for [headache] and a preload request was made to mayoclinic from your browser for their headache document. Your browser, when making that fetch, would send mayoclinic your IP, any stored mayoclinic cookies, and the URL of the document you prefetched (not the precise query, but the approximate query is easy to guess). This is sent to mayoclinic _even if_ you never click on that document at all, which is not what you would expect privacy-wise.
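As a toy illustration of that asymmetry (hypothetical URLs and header lists, not real browser internals):

```python
# What each party learns when a search page prefetches a result,
# with and without a same-origin cache of signed exchanges.

def direct_prefetch() -> dict:
    # Browser prefetches straight from the publisher: the publisher
    # sees the visitor's IP, its own stored cookies, and the prefetched
    # URL, even if the user never clicks the result.
    return {
        "request_to": "mayoclinic.org",
        "publisher_sees": [
            "client IP",
            "mayoclinic.org cookies",
            "prefetched URL (reveals approximate query)",
        ],
    }

def cache_prefetch() -> dict:
    # Browser prefetches a signed exchange from the search origin's
    # cache: the publisher learns nothing until the user actually clicks.
    return {
        "request_to": "search-origin cache",
        "publisher_sees": [],
    }

assert "mayoclinic.org cookies" in direct_prefetch()["publisher_sees"]
assert cache_prefetch()["publisher_sees"] == []
```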
Ah, I understand now, thank you.
I guess the problem I have is with the preload. Without that, neither Google nor the Mayo Clinic would get that data.
So, no preload for me.