He also failed to mention a key AMP rule: all CSS must be inlined in the head, capped at 50 KB. There is no external CSS at all. That's crucial. It reverses a persistent anti-pattern in web development that calls for splitting CSS and JS across a bunch of separate files. Almost all CSS on most webpages is unused (this is still true of a lot of AMP sites -- 50 KB is way too much CSS for an article).
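For anyone who hasn't seen it, the rule looks roughly like this (a sketch, not a complete valid AMP document):

    <head>
      <!-- All author CSS goes in a single <style amp-custom> block in the head,
           capped at 50 KB; <link rel="stylesheet"> is not allowed. -->
      <style amp-custom>
        body { font-family: sans-serif; }
        article { max-width: 40em; margin: 0 auto; }
      </style>
    </head>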
I think the reality here is that early 21st-century web developers are terrible at web development. They stuff massive amounts of JS and CSS down users' throats, spread across 50 or 100 requests, and call themselves "engineers".
AMP, or something even better, needs to be the default way to build websites.
No. We don't need AMP or something else. The existing tools are more than enough. What we need are actual, trained engineers/developers on the platform.
The real problem with web performance is that any random guy with questionable or non-existent knowledge thinks he can whip up a website by mixing in any library he comes across. That is fine for a personal site or some random trivial web page. However, we glorify these people and appoint them to critical jobs. And then we wonder why the 21st-century web is horrible. Insert shocked Pikachu meme here.
Can you imagine what would happen if we started appointing surgeons by putting random guys through a one-month bootcamp consisting of YouTube video courses? We used to do something like this in the Middle Ages. That era also gave us a surgery with a 300% mortality rate.
What we need are standards for who can work in which positions and at which level, based on their training and skills. Just like any other critical industry.
A fast website costs a lot of money in development time. If a website loads fast enough, nobody wants to invest X dollars to make it somewhat faster.
Since AMP sites are prioritized on mobile, there is now a very clear business case for supporting AMP.
If your management, which is filled with engineering people, fails to understand the importance of fast websites, then I am afraid the problem is with the engineers. Whether that is incompetence or actively cutting corners is debatable.
Ironically, with the advent of HTTP/2, more requests might actually be a good thing. Separate out bundles into per-page (or even per-component!) chunks and ensure you're only sending the user the content they need, without additional overhead.
Which is one of the selling points of AMP.
Sorry Mr. Marketing VP, we simply cannot add all that crap ad network code! The AMP framework doesn't permit it, and if we stop using it we lose all our SEO!
Now, admittedly we've obviously done something horribly wrong, and the people on our team are senior enough programmers to accept that. However, if you're right out of a boot camp, or don't have a lot of confidence, it might be something you overlook. If you've got a manager breathing down your neck saying, "It looks fine to me! Why are you wasting time trying to shave a few KB?", it can be hard to say, "Wait! This could potentially cost us customers." Having the conversation of "If we make the customer wait an extra second, they may walk away" is tough in the best of cases.
Ideally it would be very difficult to make as bad a mistake as we've made, but it really isn't. I suppose it gives us justification for larger salaries for experience :-)
CSS Wizardry (Harry Roberts) wrote a great article about this topic recently - https://csswizardry.com/2018/11/css-and-network-performance/
Edit: I just realized that others posted the same concerns below...
As an example, if you had something like an item page and a detailed viewer which the user could choose to open, the item page's HTML could have its critical CSS inline and a <link rel=prefetch> tag to tell the browser to preload the viewer's CSS so it will likely be in the cache by the time the user opens the viewer.
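Roughly like this (the file names are made up):

    <head>
      <style>
        /* critical CSS for the item page, inlined */
        .item { display: flex; }
      </style>
      <!-- hint the browser to fetch the viewer's CSS during idle time so it's
           probably cached by the time the user opens the viewer -->
      <link rel="prefetch" href="/css/detail-viewer.css" as="style">
    </head>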
We work with 1000s of publishers. Decisions about websites come from business and marketing, not from the dev team. Performance is a low priority unless it is critical.
This means CSS and JS can't be cached between page loads, though, so you're requesting the same data every time you navigate between pages. It's a tradeoff against faster initial page loads. I'm surprised there isn't a better solution for this yet.
Also, with HTTP/2, splitting up your CSS and JS is actually a good idea because you can include only what you need on each page and the parts are cached between pages.
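Something like this (hypothetical bundles), where each page pulls in only what it uses and the shared parts stay cached across pages:

    <head>
      <link rel="stylesheet" href="/css/base.css">     <!-- shared; cached after the first page -->
      <link rel="stylesheet" href="/css/article.css">  <!-- only on article pages -->
      <script defer src="/js/base.js"></script>
      <script defer src="/js/comments.js"></script>    <!-- only on pages with comments -->
    </head>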
I think the basic problem is people including way too much CSS and way too much JS. Even for desktop pages, 50 KB of CSS should be enough in most cases.
For a site like Hacker News, though, where you probably view many pages per visit, downloading the same 50 KB of CSS on every page is wasteful.
With the advent of css-in-js and compilers, this is slowly but surely happening.
What AMP did was prove, at scale and with tooling, that performance can be achieved. That philosophy is dripping into other tools.
In some cases, it is the lack of tools.
But I think it's often not either of those, and it is in fact the chosen priorities of the engineers. A lack of attention to craft.
We see a similar split in GameDev. Some game developers (Carmack, Blow, Acton) sweat every millisecond and byte and cache hit. If you work on engines in AAA, that's a job requirement. Others are happy to use Unity (etc.), never optimize, and don't mind that their simple 2D platformer is bloated and wastes cycles. Many simply don't care: why spend time cutting down on bloat and waste when the game plays well enough?
I see plenty of engineers who have a "it works, ship it" mentality. To me, that's not the attitude of someone who cares about their craft.
This is from direct experience working with many major brands to improve their sites - it's not improving at all.
The most important feature of AMP is that it makes it impossible to add the kind of bloated crap that causes poor performance on something like a large media company website.
Something like a news website likely has 2-4+ analytics integrations, and a similar number of ad provider integrations.
Your staff want to use google analytics so that goes in. Your display ad provider (ads appearing in the page around articles) uses some other analytics provider so they have to go in. You want video ads so another ad provider who requires a video specific analytics provider goes in. Site rankings produced by someone or other are important to get direct ad sales so their required analytics integration goes in. Then someone in sales signs some deal to add a third party widget to your homepage so that goes in.
All of this happens over the protestations of the developers. People may literally quit over these integrations but their replacement will implement it.
Turn off your ad blocker, go to a news site, watch the network tab and despair.
AMP makes that crap near enough to impossible that the developers can convince the sales team and management that it simply cannot be done.
What actually made Google try it is that it gives them control over the advertising system. Performance was the pretext they used to protect their main source of revenue.
* It's clearly driven by a biased entity (Google)
* It doesn't actually directly prioritize page speed
* It breaks websites
You probably meant it the other way round?
I'm really torn on AMP to be honest. I'm morally and technically opposed to it, but... the experience is often dramatically better than what I would have gotten without it.
TFA is just stating that you don't need handcuffs to behave well. You can optimise your site without AMP.
Now, I don't know whether that's true, but supposing it is, the post you're responding to raises an important point. If AMP doesn't inherently limit bloat, maybe it's only a matter of time before AMP pages are all bloated crap.
Or don't, and you'd never have the problem in the first place. The problem with bloat, IMO, is that clients support it at all. Thankfully there are a lot of browser extensions for generous content blocking that can make the user experience passable.
Exactly. Nothing stops you from following them on the regular web.
That's why you can't have a fast news site without it - news site owners won't follow the rules needed to make their site fast unless they get something in return other than speed. Only AMP gives them that.
If Google based its lightning bolt (etc.) on measured page speed instead, then yes, you might not need AMP. But they don't.
But they do?
When I load this AMP page with an empty cache, Dev Tools shows that 3.3 MB of files are loaded. The page also contains lots of errors. And this page has ads, analytics, and even a video, which you mentioned in your comment.
After loading, an HTTP/2 request to cloudapi.imrworldwide.com is sent every 10 seconds, even if the page is in the background.
AMP looks more like an internal Google standard for integrating websites into search results. The spec contains a lot of restrictions like these: you MUST include a 12,000-line JS script from Google (https://cdn.ampproject.org/v0.js) on every AMP page, you MAY use only components made by Google, and even the custom JS template engine is made by Google. If you are an ad network, then you have to negotiate with Google, your competitor, to be included in the whitelist. And there are lots of whitelists: a list of sites that can provide fonts, a list of sites that can add widgets to AMP pages.
Here is one example of such conditions:
> In order to qualify for inclusion, an extended component that integrates a third-party service must generally meet the notability requirements of the English Wikipedia, and is in common use in Internet publishing. As a rough rule of thumb, it should be used or requested by 5% of the top 10,000 websites as noted on builtwith.com, or already integrated into oEmbed.
If you are a small business, you are not welcome.
Part of the third-party bloatware problem on the web is us, users who aren't willing to pay for content.
> The truth probably is that most websites would fail if users had to pay for them.
I agree with you here, but the implication of this is that if the user isn't willing to pay for it, and ads aren't enough to sustain the business, maybe they should not be in that particular business.
Or, they could try to rise to the challenge and build a brand people are willing to pay for. The Guardian in the UK recently hit a million paying digital subscribers, so it's certainly not impossible. Most sites do not need a million subscribers to make payroll.
Try a random article page like https://edition.cnn.com/2018/12/07/tech/australia-encryption...
I refreshed a few times, scrolling to the bottom of the page each time. According to Chrome dev tools even without a video playing between 6 MB and 21 MB was transferred just for me to scroll to the bottom of the page.
That's why I view other Google initiatives such as https://web.dev as laughable at best.
It's a rehash of what has been in their docs for ages: https://developers.google.com/web/fundamentals/performance/w...
Literally nothing has ever achieved that, and likely nothing ever will. It's also not AMP's job; AMP's job is to be a workaround for that systemic issue.
Oh. And ads. It's always about the ads.
They don't even attempt to hide it, really. Right there on AMP's page (emphasis mine):
--- quote ---
The project enables the creation of websites and ads that are...
What AMP Provides
Higher Performance and Engagement
Flexibility and Results
Publishers and advertisers can decide how to present their content and what technology vendors to use, all while maintaining and improving key performance indicators.
More than 1.5B AMP pages have been published to date and 100+ leading analytics, ad tech and CMS providers support the AMP format.
--- end quote ---
https://www.ampproject.org/ (desktop version is AMP-ified too).
PLEASE go browse a bunch of non-AMP media sites with adblocker off. 500kb is TINY by comparison to what is out there outside of AMP.
By the time you've loaded CNN's AMP page, you'll have loaded at least 4 MB: ~2 MB of AMP prefetching done on Google's search results page and another ~2.2 MB on the AMP page itself.
If Google adhered to its own page performance standards, as outlined here: https://developers.google.com/web/fundamentals/performance/w..., CNN's AMP page would be demoted in search results. However, since it's AMP, it gets put front and center in the search carousel.
Same goes for every single other AMP page.
It's almost like saying "you don't need jQuery to make your site faster" - to those who understand the purpose of jQuery, it's a bit of a nonsensical statement.
My point is, if you're using service workers, you're more a PWA than a standard website.
(but I also avoid using the term for exactly this reason)
The author addresses this point by quoting examples of the most popular apps where offline mode doesn't make sense, since their whole point is to facilitate online interaction (booking, chatting, dating, social interaction, etc.).
AMP buys you Google's CDN and prefetching on Search.
And PWA buys you a zero-distance cache hit on your second visit, the ability to prefetch unvisited resources, and the ability to update the cache in the background.
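A minimal sketch of those three things in a service worker (file names are made up; a real setup adds cache versioning and error handling):

    // sw.js
    const CACHE = 'pages-v1';

    self.addEventListener('install', (event) => {
      // Prefetch resources the user hasn't visited yet.
      event.waitUntil(
        caches.open(CACHE).then((cache) =>
          cache.addAll(['/', '/styles.css', '/app.js', '/next-article.html'])
        )
      );
    });

    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.open(CACHE).then(async (cache) => {
          const cached = await cache.match(event.request);
          // Refresh the cache in the background while serving the cached copy.
          const refreshed = fetch(event.request)
            .then((response) => {
              cache.put(event.request, response.clone());
              return response;
            })
            .catch(() => cached);
          // Zero-distance cache hit on a repeat visit; network otherwise.
          return cached || refreshed;
        })
      );
    });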
Not saying these aren't negligible on a 15 MB site, but they are tools for making an existing slow site appear faster, which is what everyone who isn't a dev really wants.
Good lord, friend. Why put your users through this to begin with?
I know a lot of people hate the idea of doing everything in the browser, but seriously, it's the one ubiquitous platform that we have, which even competing entities have agreed to standardise. I personally think it will be the most popular app delivery mechanism in the next few years, even on mobile devices.
You either have an abundance of high speed internet access, or benefit from internet usage that isn't metered by the amount of data consumed, or both.
Functionality or not, heavy web experiences leave out a lot of people who would otherwise be willing and capable users of %product%. They're often expected to consume excessive payloads just to be a user, when those payloads could quite trivially be reduced and still accomplish the goal of delivering content and information to the end user.
So here's my question: forget about people who hate doing everything in the browser. What about users who just don't have access to cable-or-better internet speeds, who want to use the product, but perhaps can't because they're downloading 15, 20, 30 MB websites just to fill out a form, since the browser is all they're familiar with?
Not everyone on the internet is a regular of HackerNews who groks native applications versus web apps. At what point do we start focusing on the content and asking ourselves "Do I REALLY need this animation library just to indicate 'Start here' is where they should be interacting with my webform"?
To answer that rhetorical question for you: probably, quite likely not.
Parent post (in fact, the segment you quoted) literally says that it's okay when it is useful functionality, i.e. not when it's just to fill out a form.
Sadly, those users aren't going to get to experience a decent chunk of the internet anyway, since video, GIFs, and images frequently weigh significantly more when put together.
Since when did forms become useless and what web-based video editor are you (or the original reply-er) pointing to that uses AMP?
Take Google Sheets as an example. I don't know what the total download size is for the JS (certainly not 15 MB), but it's an example of a JS-heavy website where you wouldn't want a serve -> submit form -> serve cycle for every user input.
This entire discussion started from an opinionated blog post that directly discusses AMP and its performative nature on the web.
You can't just invent a hypothetical that makes it easier to justify exceptions to the very topic of the blog post we're talking about here. Of course exceptional things like resource-heavy, full-featured web apps are going to be outliers compared to lightweight web payloads. I highly doubt those were the kinds of use cases Google had in mind when AMP was released, so why are we even considering such outliers here?
Having re-read your post, you seem to have misunderstood my point. You seem to be arguing that keeping things lightweight is good. I agree with you there! But remember, for certain applications, this is not always possible. In my hypothetical video editing app, 15MB could well be just the core logic - it's not 1MB of logic and 14MB of bells and whistles!
As I mentioned to another commenter: can you show me an example of a full-featured, web-based video editor using AMP?
I'm not sure I buy that this is the example to rest on here.
Many reasons, but the top one in my opinion is having a single code base across all devices. I think it's a worthwhile goal, personally.
I'm not sure a single code base is a particularly convincing justification for heavy web-app payloads, particularly when we're talking about simply delivering content. It doesn't explain why a web application that would benefit from AMP (which was the central conceit of the blog post we're all discussing today) needs to be 15 MB heavy.
Well-formed markup and smart content compression could accomplish the same thing.
I guess you got your wires crossed, as AMP is not something I was talking about - my focus was specifically on PWAs and the fact that it's not a problem if their payloads are large.
But yes, AMP (read: glorified CDN) wouldn't solve much if you have 15MB to download each time you visit a site (aside: you wouldn't need to do this with PWAs because of caching).
Everyone with a computer produced after 2010 has the equivalent of a late-90s supercomputer, and we were doing video editing on desktops in the 90s!
Look, I know it's weird, but there is finally a multi-implementation, cross-platform runtime that people actually want to use. I would be so excited if I could run full Photoshop in my browser on Linux.
Fundamentally, 15 MB is excessive even for that, even if it is common practice for such things to bloat to that size.
I really do believe people will build for the browser more than they will for the desktop because it's a more universal platform.
I can even imagine a future where, for the majority, the only native app you have on your machine is a browser, and everything else is some variant of a PWA. I really feel this will happen very soon.
> You can’t call Uber while being offline, and why would you open Uber app otherwise?
To see my journey history. Uber has this data. It's my data, why shouldn't it be on my device too?
> Tinder is useless offline. You can’t date empty chat screens.
I could look at my messages offline, if they were saved on the device.
> You can’t join a meetup at Meetup.com without network connection.
But I could look at my calendar and see upcoming stuff for the days ahead, see past events, my message history, and profiles for my groups.
All this stuff loading from a local cache could improve speed. But that's just a bonus.
When you visit what you expect to be a search result URL, it should take you to the server/site that was indexed. When AMP first launched, it didn't even provide a link to the original site. It took a lot of people like me grousing and complaining before Google's 17 tiers of product managers had enough meetings to even budge on that.
I honestly can’t tell you how so many sites were convinced to implement AMP on their sites. I could wave my hands and say everyone is stupid and just goes along with the latest shiny bullshit but that can’t be but a small part of the actual answer.
Supposedly Google isn’t factoring AMP pages into their search rankings yet, but given the positive signal of page speed (a good thing, generally) and them switching to a mobile first index, a natural direct side effect is that AMP pages will be ranked higher by default.
If Google were somehow automatically transforming sites you’d never hear the end of people complaining, especially content owners. Instead they convinced engineers and content creators to tie the rope on the hanging post themselves, jump off, and be happy about it.
I can’t speak too negatively of PWA. The ability to build offline web apps is actually pretty useful in some situations.
Overall I still firmly believe that a good search engine should deliver the most relevant links or content for what is being queried. Should a bad QA site get ranked higher than a high quality one just because the former delivers mobile-first AMP pages and the latter has a slightly bigger payload but has much more relevant content?
Performance today is not a priority for web developers. It's not even in the top ten priorities. Performance is just not considered unless it is so bad that a site looks like it isn't working.
We can change this at any point we like. We won't. It will continue to get worse.
Why the hell does it take an application like Photoshop 30 fucking seconds to simply launch and get to where it can accept input on my 32-core, dual-Xeon desktop at work with 192GB of RAM and a very fast SSD? Because people punt the fucking problem into the future and assume everything will just continually get faster.
Similarly, PWA is also not about performance but about adding metadata that phones and browsers can use to treat the site like an app, e.g. by giving it an icon in an app folder. You couldn't do this before; now you can. Before, people were doing silly things like packaging up websites as apps and putting them in the app store just so they could have their own cute little icon in somebody's app folder. Now they don't have to do that, and install/discovery is a bit smoother as well (easier discovery, fewer steps, fewer users lost).
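The metadata involved is small. Roughly (names and paths are made up):

    <link rel="manifest" href="/manifest.webmanifest">
    <!-- manifest.webmanifest contains something like:
      {
        "name": "Example",
        "start_url": "/",
        "display": "standalone",
        "icons": [{ "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }]
      }
      With that (plus a service worker), browsers can offer "add to home screen"
      and give the site its own icon, no app-store packaging required. -->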
Anecdotally, AMP sites always load a bit slower for me. The page will sit blank for a few seconds before finally dumping all of the content at once, as opposed to loading text immediately while it takes a moment to load the rest of the content that has a higher file-size.
Without AMP, I can start reading a page before it's done loading. With AMP, especially on a desktop, I'm often stuck staring at a white screen for 15-20 seconds before anything shows up. I often find myself trying to cut the "amp" bit out of the URL to see if I can get to the original page. It's frustrating, and it is a big part of why I'm considering dumping Google as a search engine.
That's just my experience, though - YMMV.
If you come across an article you want to share from your mobile device, and it's an AMP link, it's the AMP URL that gets shared if you post it on sites like HN, FB or Reddit. If I'm browsing those pages from my desktop, clicking that link almost never redirects me to the "original" page, but loads the AMP page in the desktop browser. Sometimes getting around that is as easy as cutting out "/amp/" from the URL, other times it's a totally different URL and I'm stuck staring at a blank page for 30 seconds before it either loads, I just give up or I Google the headline/title and try to find the original page.
Forced AMP results on mobile devices also make it difficult to get to certain pages when I want them. Take Eater's 38 lists that they put together. If I'm on my phone and want to find a restaurant from that list in a particular location (say if I'm out of the house and want restaurants near me), then the AMP result returns a page that doesn't include the map, only a list, which isn't very helpful. In order to get to the map, I either need to go to Eater.com and manually find it, or use something like Bing to search for it. I know that the purpose of AMP is to not load the map in an effort to increase speed, but in that instance the map is exactly what I want and AMP makes it harder to get to.
I'm not saying AMP doesn't have its benefits, but its inconveniences have outweighed them, in my experience.
PWAs are much easier to tune for global performance, though, if you have a tight budget. The app doesn't have to talk much to a backend server, and (down)loading can be tuned by using a CDN.
Regenerate pages after changes happen and you'll be fine. You'll still need dynamic search most likely, but even those can be pre-generated for many terms. (Your admin / content management part needs to be dynamic of course, but that's not customer-visible)
If your catalogue is so large that constant regeneration is impractical, you can generate on-demand and cache long-term a few layers above for anything not requested recently.
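The shape of what I mean, as a rough sketch (Node, hypothetical names; a real system would use a shared cache or CDN and proper invalidation):

    const http = require('http');
    const cache = new Map(); // url -> rendered HTML

    async function renderPage(url) {
      // Placeholder for the real template/data lookup.
      return `<html><body>Rendered ${url} at ${new Date().toISOString()}</body></html>`;
    }

    http.createServer(async (req, res) => {
      let html = cache.get(req.url);
      if (!html) {
        html = await renderPage(req.url); // generate on demand
        cache.set(req.url, html);         // keep it until invalidated
      }
      res.setHeader('Content-Type', 'text/html');
      res.setHeader('Cache-Control', 'public, max-age=3600'); // let a CDN layer cache it too
      res.end(html);
    }).listen(8080);

    // When a product changes, drop only the affected pages:
    // cache.delete('/products/123');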
Have you ever worked on a high scale e-commerce site? I have and what you are talking about is impractical and pretty much impossible.
Products have multiple variants, photos for each variant. Various companion products that depend on what you already selected. Pricing options that can depend on quantities or packages. And search? Spend 5 minutes on any serious e-commerce search system and there is no such thing as “common search terms.” Of course there are common searches, but on any non-trivial e-commerce system, you have potentially thousands of distinct common searches.
I was one of the original engineers for https://www.matalan.co.uk and you can’t just “regenerate” pages after changes. You can regenerate the cache for images or product descriptions, but e-commerce isn’t like a printed catalog. We put exceptional engineering into that application and to trivialize that sort of application like it was some kind of blog site kind of demonstrates a lack of experience in building something that serves millions of visitors per month — visitors that all have different paths based on what they want to buy.
I agree with the issues you're raising, but even then, you can make a lot of the site bypass dynamic rendering.
What do you think all those little "Edit" buttons all over wikipedia do?
They take you into the CMS where you can produce more static content.
Our challenge has been that we have to load a lot of images, so we spent a lot of time optimizing everything around that, from TLS 1.3 to the CDN to every part of our stack.
Try it out
edit: this also applies to the "near you" cards on the home page.
How about a browser setting where the user can decide whether to load images only when they are seen?
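Failing a browser setting, a site can do this itself today; a sketch using IntersectionObserver (the file name is made up):

    <img data-src="/images/photo.jpg" alt="Photo" width="600" height="400">
    <script>
      // Only start downloading an image once it scrolls into view.
      const observer = new IntersectionObserver((entries) => {
        for (const entry of entries) {
          if (entry.isIntersecting) {
            entry.target.src = entry.target.dataset.src;
            observer.unobserve(entry.target);
          }
        }
      });
      document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
    </script>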
> By the way, since you first opened this article my ServiceWorker has downloaded XXX Mb of useless data in background. I hope you are on WiFi :)
But the code doesn't really download anything. It stores the initial datetime when you first load the page, diffs it against the current time, and converts that to "bytes" every second.
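Right; something along these lines would produce the behaviour described (my guess at the mechanism, not the article's actual code):

    const start = Date.now();
    const FAKE_BYTES_PER_SECOND = 50 * 1024; // arbitrary rate; nothing is actually fetched

    setInterval(() => {
      const elapsedSeconds = (Date.now() - start) / 1000;
      const fakeMb = (elapsedSeconds * FAKE_BYTES_PER_SECOND) / (1024 * 1024);
      document.querySelector('#wasted').textContent = fakeMb.toFixed(1) + ' Mb';
    }, 1000);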
Google creates more interactive search results for AMP pages than non-AMP versions.
Django/Rails/$FRAMEWORK with Turbolinks.js is also great, for the same reason I like PWAs. If you aren't a fan of the JS ecosystem (totally fair), Turbolinks is a great way to get some "snappiness".
I may be wrong, this just was my understanding from reading all the Google search guides last week.
The scary part is that they don't just offer an "alternative" technology like ActiveX, Flash, Applets, Silverlight and so on. They are influencing core web standards.
It's powered by a home-grown static site generator framework, hosted on Netlify. It works fairly fast even in random corners of India. But then I found my own generator limiting and surveyed existing frameworks - I decided to go with Gatsby, a ReactJS-based SSG. To my surprise, Gatsby+Netlify worked much faster and smoother than my JS-free solution! It's currently running at https://beta.discoverdev.io
Worth noting that Gatsby generates pure html+css during build time, and page renders fine without JS enabled. Seems to be a good balance between DevX and UX. :)
I guess I can compress a lot further, given it should be a tiny (34x34?) image anyway!
An interesting case where someone who firmly says "BS" three times in every paragraph is BS too.
But regardless of the complexity of the database schema, neither one of them is really a "feature-rich application" in the way that, say, Photoshop is a feature-rich application.
At the end of the day, I think they're both just CRUD apps.
AMP links are often slower than Reader Mode, because they're blocked on slow font loading.
It uses a custom font. Custom fonts are absolutely unnecessary for a news page; they only delay the moment when text becomes visible. Standard Windows fonts are of good quality and don't need a replacement (which often renders very poorly on Windows XP).
It uses a lot of scripts. Scripts block some browsers while parsing and executing.
It has SVG images embedded into the page. If you want the page to load faster, you should move images into external files.
AMP is directly linked to Google and cannot be used without it. Please look at the requirements for AMP HTML documents: 
> AMP HTML documents MUST ... contain a <script async src="https://cdn.ampproject.org/v0.js"></script> tag inside their head tag.
So every AMP document MUST load a Google-controlled script. And by the way, it contains 12,000 lines when beautified. It contains various Google URLs like 'https://ampcid.google.com/v1/publisher:getClientId?key=' or 'https://ampcid.google.com/v1/cache:getClientId?key=AIzaSyDKt..., and references to DoubleClick and AdSense (but not other ad networks). Does every site need it?
It contains a lot of code that is not necessary for most websites. For example, an XHR interceptor for "some trusted AMP viewers" is included in the script, as is a cryptographic library for calculating SHA hashes.
You can use only JS components approved by Google; the spec says:
> Extended components
> The script URL must start with https://cdn.ampproject.org
So, for example, Google might make a custom component for Facebook but not for QQ or Mixi. Google defines which widgets can appear on an AMP page. If you are an ad network, you have to negotiate with Google to be added (you have to negotiate with your competitor). If the US government imposes sanctions on some foreign site, Google will have to comply.
It is clearly a technology made for integrating news articles into a Google page (and judging by the limitations in the spec, Google might be planning a non-webview native renderer for AMP pages). Don't believe them when they pretend it can be used independently as a standard, or that it was made for accelerating the web.
I LOLed a lot :)
Because Google effectively gave them no choice.
I'm talking about link aggregators and search engines, not publishers. They could have come up with another solution, but they decided to use AMP, which provides the same benefits to them as it provides to Google.
As my GGGGGP post said, that is the entire point of AMP. It can be validated safe to preload, which is not possible for HTML pages in general. So no, Google, Bing, Pinterest, Baidu, LinkedIn, and other sites that preload AMP pages are not free to do this for your website unless it is written in AMP.
What does this even mean? My browser already preloads websites in the background, AMP or not. So assuming I don't go through Google, you could call going to my website via the address bar "instant" as well.
> User isn't deanonymized to the publisher, publisher analytics and ads don't register page views, page can be trivially transformed to lazy load below the fold, etc.
This is all explained in the AMP documentation itself. There was even an article about it on HN not too long ago (https://medium.com/@pbakaus/why-amp-caches-exist-cd7938da245...). People like the author who criticize AMP without knowing what problem it solves are willfully ignorant.
This all works on the open web without signing an agreement with the traffic source, unlike Facebook's and Apple's proprietary instant article solutions. That's why it has been adopted by so many other search engines and link aggregators.
BTW. This "subset" is invalid HTML. Just so you know.
What part of AMP is invalid HTML? It is a competitor to Facebook instant articles and Apple News that works on the open web.
Because publishers jumped on the bandwagon and started deploying AMP pages en masse.
Just because someone else besides Google implemented AMP pages doesn't mean that their intent and purpose is something else than what I wrote.
> What part of AMP is invalid HTML?
<html ⚡> <!-- the ⚡ attribute is invalid -->

<!-- these script attributes are invalid -->
<script async custom-element="amp-carousel" src="https://cdn.ampproject.org/v0/amp-carousel-0.1.js"></script>
Your conspiracy theory might make sense if the other search engines and link aggregators didn't actively encourage their link targets to also implement AMP or if they tried to extend AMP for their own purposes and were blocked by Google. Neither is the case.
You can also read a bit about how "great" AMP is: https://news.ycombinator.com/item?id=18627692 I've already mentioned that it's not even valid HTML. There are so many more issues with it. I will not re-iterate, you can read for yourself: https://ferdychristant.com/amp-the-missing-controversy-3b424..., https://twitter.com/lukestevens/status/963905898895699968?re... (including effing AMP for email, https://techcrunch.com/2018/02/13/amp-for-email-is-a-terribl...)
Meanwhile, Google's competitors have willingly signed onto AMP without any criticism of the development of the standard. Your conspiracy theory doesn't hold water.
I showed you that your first link didn't understand this either. Your last link isn't even about the same technology. I didn't bother clicking the other because that would surely also be a waste of time.
I have patiently explained this to you over and over because I can't stand people spreading conspiracy theories. That's how the US ended up in the mess it's in.
Yup. That sums up your attitude perfectly.