I strongly recommend this for reading: https://github.com/ampproject/amphtml/issues/13597 (locked, sadly) and the original AMP4Email issue has a fair bit as well: https://github.com/ampproject/amphtml/issues/13457 (only for discussion of email).
Unfortunately, there are lots of people talking about how bad AMP is, and why, but nobody yet has suggested how we do anything to stop it.
I will be looking for a way to move my online identity away from Google to something I control more directly. I am also going to try replacing my social media activity with email. The web is already decentralized, we just need to use it.
If anyone’s interested I wrote up my investigation into potential replacement mail services: http://www.robinwhittleton.com/2018/02/18/dropping-g-suite/ . In the end I went with Runbox, will see how they perform but I’m happy so far.
I tried Runbox back in 2013 after a fair amount of research, and they looked great. After signing up, they sent a confirmation email with all my account info, including username and password in clear text. I cancelled immediately after seeing this, but I wonder if they are still doing it?
I ended up switching to Fastmail, which I still use and highly recommend.
They have a dedicated EU service now. I use them and RunBox.
I don't want to use AMP in email so I'm not going to. It's just that easy.
What are you doing and what do you suggest the rest of us do? Is voting with our wallets not a thing any longer?
While it's certainly not impossible to vote with your data, you should remember that you are the product for Google. The customers are the advertisers. Not you.
On a serious note I don't like the negativity in your comment and I don't think there is anything insightful about it. I believe the free market is based on individual actions. I will continue to choose services based on how well they serve my needs.
A familiar example is the NRA. It's not a political party, but it advances the interests of its members relentlessly. Institutions of the left include things like Planned Parenthood. Often, there are issue advocacy organizations that spring up when corporations are doing bad things. Often the strategy is to do things that impact the corporation financially, which short of legal threats is basically the only thing they understand. Anything short of that can be brushed off with rationalizations by various powerful stakeholders within the corporation.
Basically, the point I'm trying to make is that individual action is essential, but not sufficient to make a difference when it counts.
Then again, maybe AMP is their solution to this third-party interface "problem": create an incompatible "feature" that will encourage or force users to abandon standards-based tools.
Kind of like what's happened with Google Talk. God I miss the Mac's Messages app being able to interface with my work's gsuite chat.
FWIW, the iOS email client has improved a lot over the last few iOS versions. Worth checking out if you haven't looked at it for a while.
My point was that you can't go it alone and there are friends that will help you. I left a more theoretical justification in another sub-thread.
Along with DDG I find myself very well taken care of, and I can always type in Google if I need to (2 times a day?).
But... under U.S. law, email stored online for more than X days (I forget the exact count; something over 100 days) is open to examination without a warrant.
Not that I've anything particular to hide. But moving my personal correspondence in that direction?
I may spin up an email server on my own sub-net, just to make use of this tool and capture its output. But I'm strongly disinclined to put it on a publicly facing email server.
Anyway, they may well have this data. Some other data, more likely not.
And, even where they may have -- or have access to, this data, retention periods may be significantly shorter. At least, the retention periods that aren't shrouded in secrecy in e.g. a large campus in the middle of Utah.
Not email, but more and more social/communications platforms are offering to archive your data in your Google Drive account. Now, maybe with a privately held passphrase to AES encryption, some might consider that ok. But that is not what these services are offering.
For the majority of people, "Who cares?" And there is value to keeping your life free of unnecessary friction.
But, more and more of this stuff is getting subpoenaed in divorce cases, employment disputes, etc., etc.
Though, I suppose if you maintain private access to the messages, and don't share them, you are still a candidate for contempt of court.
Anyway, I'm not interested in accumulating more of my social life into a data store that, here in the U.S., has fewer constraints against third-party (here meaning, particularly, government) access.
But that's not all. If a password-protected chunk was sent back to a buddypress activity or message reply and could be decoded there, then people could set up a wordpress/buddypress install for family and a separate install for friends.
Add options to get an RSS feed of activity, or to get activity and replies by email. You could even email a reply back with a plugin, but that currently travels in the clear. It could instead just be a notice that ScreenName X posted a reply in Activity Group Y: click to read. And it's trivial to make those things private / login-required at that point.
So: solve email encryption on the device, send to the server (which I think can hold the data encrypted with PHP 7), and have it decrypted within wordpress/buddypress when friends or family log in. Then you have most of the other pieces for messaging multiple different contacts and groups. It's all there.
I am sure there are other similar projects with similar hooks where the same thing could be streamlined/integrated with on-device email, with options to beef up the privacy. Would be nice to add SMS texting to the mix for notifications and replies somehow.
Most people know how to use email, and buddypress can be laid out much like fbook, so it would be familiar to most, I think. It's all close.
Email seems fine, if it's your own server and you can keep it secure. Including, secure from some legal argument that they can knock on your door or the data center's and just have a look at whatever they want.
There was a fellow in the UK who used asymmetric keys to encrypt all his inbound email, holding it on the server in encrypted form; the paired key was not on the server but rather available only to his email client.
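The scheme described there can be illustrated with a toy sketch. Big caveat: this is textbook RSA with laughably small numbers, purely to show why a server holding only the public key can encrypt mail at rest but never read it back; a real setup would use GnuPG or similar, not hand-rolled crypto.

```python
# Toy illustration: the server holds only the PUBLIC key (n, e), so it
# can encrypt incoming mail at rest but cannot decrypt it. The private
# exponent d lives only on the mail client. NOT real crypto.
p, q = 61, 53                      # tiny primes, illustration only
n = p * q                          # public modulus (part of public key)
e = 17                             # public exponent (public key)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent; client-side secret

def server_encrypt(byte):
    # Runs on the mail server: needs only (n, e), can never decrypt.
    return pow(byte, e, n)

def client_decrypt(c):
    # Runs on the mail client, which alone holds d.
    return pow(c, d, n)

stored = [server_encrypt(b) for b in b"hi"]   # ciphertext at rest on the server
assert bytes(client_decrypt(c) for c in stored) == b"hi"
```

Even if the server is later compromised or subpoenaed, the stored ciphertext is useless without the client-held key, which is the whole appeal of the setup.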
Then you have the problem of security in transit, which email doesn't inherently offer. And most of your correspondents won't use PGP/GPG or S/MIME.
I just had another friend start using WhatsApp. Is it really secure? I don't know. At least, she'll use it.
WhatsApp offers to archive your correspondence to Google Drive. I haven't turned that on.
I've tried bringing up Signal with a few friends, but they won't give it the time of day. (Unlike the Washington, DC crowd, who now appear to be flocking to it to some minor degree.)
The tool I might use is for SMS/MMS. It used to also offer to archive WhatsApp conversations, but that's been discontinued.
It's also on Play.
So yeah, this isn't any "big security context". Just my personal stuff. But the default is to put the messages into Gmail. On the one hand, actually convenient. On the other... just, no.
I have a bootlooped Nexus 5x with a bunch of SMS/MMS I never backed up. Including, I now realize, from a friendship that's ended. I'd kind of like to have some of those. So, I'd like to be pro-active with regard to the next phone that's going to crap out on me.
P.S. In short, I think I basically agree with you. Decentralized, and under one's own control.
It's just that:
Email doesn't secure the transport, and many people won't secure their messages before transport.
If it's not your own email server, under your control including perhaps physical, law in the U.S. with respect to email designates older messages as quasi-abandoned and "up for grabs".
And I forgot to mention the many posts/comments I've been reading here, about how more and more difficult it's becoming to run your own email server, not just in terms of securing it but also because more and more email providers are shit-canning any emails that don't appear to be blessed by their counterparts.
Anyway, I'm not promoting the idea that I have some particularly good answer. Rather, just food for thought.
My suggestion is: if you don't want people to use AMP, provide an alternative Web framework with similar performance, show in a side-by-side demo that you can match that performance, and take it to the W3C or IETF, getting Mozilla to push it. No one on a crappy 3G connection really cares about the nuances of the politics here; they don't care about Web vs native, they care only that their phone seems slow as molasses.
If there is no Web solution, then you're going to see native apps with proprietary formats, in essence, non-standard RSS readers. And honestly, I really don't want to download the WashingtonPost, NYT, or Verge App to read articles less painfully.
AMP is just JS using the Web Components spec, plus some tooling; the purveyors of JS frameworks like Vue and Ember, or of various Bootstrap-like templates, could ship their own competing proposal.
The way standards used to work on the internet is rough consensus and running code. People would propose multiple specs and implement them, working groups would evaluate the various solutions, and the best proposals would be folded in. Provide some competition for AMP and maybe there will be a different outcome. Google shipped NaCl in Chrome, and asm.js defeated it. Google shipped SPDY, and it won and became HTTP/2.
Less talk and politics, more shipping code.
At best it's no better than anything else out there. Solutions to slow-loading pages are well known and have nothing to do with AMP or "other web frameworks". Even Google themselves lay out the solutions: https://developers.google.com/speed/docs/insights/rules. Nowhere does it say AMP. And if you bothered to read the article, you'd see that Google's own tools consider Google's own AMP to be bad and non-performant.
> Less talk and politics, more shipping code.
That's exactly how and why we ended up with AMP.
You could provide a mobile framework and tools for publishers that helps sites create pages that render fast by putting them on rails. AMP is that framework, other people could similarly introduce tools to help. Chrome DevTools and Google has long offered the Page Speed tools and others to audit your code for slowness, but curiously, no one seems to use them or care, which is why we ended up with millions of slow ass mobile sites shoehorned with megabytes of JS.
The problem is the cache and specifically the preloading of it. This gives AMP an unfair advantage of multiple seconds over anything else.
Still, even without the AMP cache, mobile sites were loading way too much JS, even after Google penalized them. The effect of AMP, showing that sites could load as fast as native Apple News/Facebook Instant, has finally gotten publishers to strip down their sites. You might not like the way it played out, but the end result is that end users not only get AMP-cached fast loading, they also end up downloading far less data, because the sites themselves have been pared down.
It won't fix this. The only thing it will do is let browsers show the original link, not the AMP link, and fix the UI. The problems described in the article will not go away.
> But before you have a general purpose spec that fixes something, you need a specific embodiment that does
AMP isn't that spec though. It does nothing special. And the only reason it's fast is because Google aggressively preloads it.
> but the end result is that not only do end users get AMP-cached fast loading, but they also end up download far less data,
Are they though? When for every search google preloads tens of AMP sites to make them "fast"?
No, it does more than that: it does away with the need to use iframes, which break scrolling, and it allows all sites to use preloading without violating privacy. See https://redfin.engineering/how-to-fix-googles-amp-without-sl...
"If other browsers accepted the Web Packaging standard, the web might look rather different in the future, since basically any site that links to a lot of external sites (Reddit? Twitter? Facebook?) could start linking to prerendered Web Packages, rather than the original site. Those sites would appear to just load faster. Web-Packaged pages could one day eliminate the Reddit “hug of death,” where Reddit's overenthusiastic visitors overwhelm sites hosting original content.
Despite cries that Google is trying to subvert the open web, the result could be a more open web, a web open to copying, sharing, and archiving web sites."
>Are they though? When for every search google preloads tens of AMP sites to make them "fast"?
TheVerge.com non-AMP loads 3 MB of data across 289 HTTP requests and executes 1.5 MB of JS. Going to Google.com and searching for Verge stories produces 10 carousel Verge stories, and according to Chrome DevTools, only 377 KB was loaded. Though this number seems oddly wrong, I doubt prefetching AMP stories will exceed the shitty bloat of non-AMP pages.
WashingtonPost non-AMP homepage is 6MB+
NYT non-AMP is 4MB+
WSJ non-AMP is 5.7MB
And by non-AMP, I mean "mobile web version". The desktop versions are even larger.
Can you see the problem?
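If you want to sanity-check page-weight numbers like these yourself, here's a rough sketch. It's deliberately simplified: it only collects the scripts, stylesheets, and images referenced directly in the HTML (no JS-injected resources), so a real measurement still wants a headless browser or the DevTools network panel. The function names are mine.

```python
# Sketch: extract directly referenced resources from a page's HTML.
# You'd then fetch each URL (urllib.request) and sum body sizes to get
# a floor on the page weight. Dynamically loaded resources are missed.
from html.parser import HTMLParser
from urllib.parse import urljoin

class ResourceExtractor(HTMLParser):
    def __init__(self, base):
        super().__init__()
        self.base = base
        self.urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("script", "img") and attrs.get("src"):
            self.urls.append(urljoin(self.base, attrs["src"]))
        elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.urls.append(urljoin(self.base, attrs["href"]))

def resource_urls(html, base):
    """Return absolute URLs of scripts, images, and stylesheets in html."""
    parser = ResourceExtractor(base)
    parser.feed(html)
    return parser.urls
```

Summing Content-Length over those URLs will usually land well below what DevTools reports for these news sites, which is exactly the point: the bulk of the bloat arrives via script-injected ads and trackers.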
It's prerendered (via a static site generator). In total, it loads 692 KB (I didn't do anything to optimize it; the images are quite large, etc.). It loads from a small server, and images are loaded from Twitter, meme.com, etc.
Here's an AMP page: https://www.google.se/amp/s/www.usmagazine.com/celebrity-new...
It loads a whopping 2.9 MB, and keeps loading as you scroll down. If you open it from Google's search, it opens instantly, because parts of it were already preloaded on the search page. And the page itself (including almost all images) is served by a ridiculously powerful, geographically distributed CDN.
1. How is that fair to people who actually build their pages and host them on their servers?
2. What is open about this web?
3. How will Web Packaging solve this issue if I can't afford to build a geographically-distributed CDN on par with Google's for my own cache?
It actually changes on every reload. The lowest number I've seen is 1.6 MB, but then, in a second or two, it starts loading additional stuff, going up to at least 2.2 MB.
So much for "small AMP pages". Actually, as I'm clicking around, rarely is a page below 1 MB. Even for pages that are not that different from mine: only images and text.
For some reason you think that the solution to that is "let's do a standards-incompatible aggressively preloaded slimmed down page that will live on our ultra-fast CDN/cache servers".
Can you see the problem?
Also, can you see why web packages don't solve the problem (hint to start you thinking: not everyone can run their pre-rendered pages off of Google's CDN. Even Google's own AMP isn't fast if it's not preloaded from Google's cache)?
How can it be standards incompatible if it works in existing standards compatible browsers?
> Also, can you see why web packages don't solve the problem (hint to start you thinking: not everyone can run their pre-rendered pages off of Google's CDN. Even Google's own AMP isn't fast if it's not preloaded from Google's cache)?
Did you read the Redfin article? The point isn't for you to run the CDN or do the prefetching, the point is, how do people find your site and articles? Either they find it through Google/Bing/Baidu/etc, social network sites (Twitter/Facebook), or aggregation sites (Reddit, HackerNews, etc). The point is, for large aggregation sites with a lot of traffic to roll out preloading on CDNs. So for example, Cloudflare already supports AMP-Cache, and Reddit could roll out prefetching if desired.
And you completely missed the point that, getting publishers to adopt AMP gets them to slim down their sites even if you don't use the AMP cache or preloading. Something everyone has been trying to get them to do for years, including Google, who has been trying to penalize slow sites for years (https://www.linkedin.com/pulse/20140827025406-126344576-goog...)
So hurray for you making a slimmed down page, but you're not the target audience, the huge number of other sites that have for years, bloated the Web and haven't responded to previous attempts to force them to go on a diet are the target.
You really have no idea how the web works, do you? Browsers do a best effort to display any page. Even if the HTML is totally absolutely invalid, the browser will go out of its way to display at least something.
The mere fact that something is displayed by a browser doesn't make it standards-compliant.
AMP is standards incompatible because:
- its HTML is not valid HTML 5 (just a few examples here: https://news.ycombinator.com/item?id=16467873)
- whatever extensions to HTML 5 they bring are not a part of any HTML standard, past or present. And it doesn't look like Google is interested in making them a part of any future standard.
> So hurray for you making a slimmed down page, but you're not the target audience, the huge number of other sites that have for years
That's not the point, is it? Google will still penalise my page even if it's way slimmer than a standard AMP page. And since I cannot afford to run a Google-scale CDN, it will perform worse than an AMP page.
So here's what we have in the end:
- Google (and Google alone) decides what AMP will look like. There are no discussions with the web community at large or the standards committees.
- Google (and Google alone) decides that only AMP pages end up in its own proprietary AMP cache. (Other "big aggregators" may/will also decide that only AMP pages can be in their proprietary caches)
- Even if a web developer follows all of Google's performance tips (https://developers.google.com/speed/docs/insights/rules) the page will still be penalised because it's not an AMP page (i.e.: not a page developed using whatever a big corp has decided, and running from a big corp's CDN/cache)
- Even Google's own page speed tools tell you that AMP is not fast, and yet everyone (even 100% optimised slimmed down pages) is penalised if you're not running the page from an overpowered private cache
A lot of mental gymnastics and total ignorance of how the web works goes into calling this an open, extensible web that will benefit everyone.
In what part is the article wrong?
> You could provide a mobile framework and tools for publishers that helps sites create pages that render fast by putting them on rails. AMP is that framework
Even Google's own performance measuring tools say that AMP isn't fast.
> Chrome DevTools and Google has long offered the Page Speed tools and others to audit your code for slowness, but curiously, no one seems to use them or care, which is why we ended up with millions of slow ass mobile sites shoehorned with megabytes of JS.
That is really beside the point. You bemoan that "there's no mobile framework and tools for publishers that helps sites create pages that render fast"? Oh look, there are plenty of those frameworks, and there are tools like Google's own.
And those tools say one thing:
- AMP is not fast
- Google lies about the speed of AMP by aggressively preloading AMP pages from its own overpowered CDN/cache
- It's entirely possible to create fast pages with existing technologies without AMP. Google has extensive documentation on how to do that (and obviously it never mentions AMP). However, Google will penalise those pages even if they are faster than AMP.
But by tying AMP to Google's monopoly, all other options to solve this problem became non-viable, because they don't come with Google's search ranking blessing.
Bear in mind, in discussions with the AMP tech lead, I've discovered AMP4Email isn't even being approached as a standard. Gmail is implementing it as a proprietary fork of email, and the AMP Project is just deciding whether or not they'll support it directly.
It seems to me that Google only submits to the W3C when it needs other browsers to implement something. Whenever Google's monopolies can handle it, they keep it proprietary.
Perhaps there's a perception that the W3C is premature at this stage, as things are quickly evolving and standards committees usually standardize stuff after there's been some proprietary experience.
The W3C might not even be the right venue. For example, changes to JS go through TC39, whereas changes to protocols go through the IETF. Lots of XML/SGML spec work has been done outside the W3C. I don't feel knowledgeable enough to make an informed comment.
I'm not a fan of some of the aspects of AMP, but I feel like somebody needs to try something new and stir the pot a bit.
My own provider, FastMail, has said that of course they can't promise they won't support AMP4Email, since compatibility is a major goal of theirs, and I can't fault them for that. (And I greatly appreciate the contribution their blog has made to the discussion on the topic.) I'd love to have the option to opt into behavior like this, though, at least on a personal level.
But if only a few percent of users (or more than likely, a fraction of a percent) are rejecting the mail, it won't dissuade anyone. Companies will pick up AMP4Email so they can push dynamic advertising directly in people's inboxes.
If you want to put it in a separate file (such as a local config file) then you can just
I still forget sometimes the differences between BRE, SRE, and ERE (mostly because vim defaults to one and perl to the other).
You can just as well receive the e-mail and just complain if it's AMP only, or they don't pay attention to the plaintext part, causing issues for you.
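For what it's worth, detecting that case on the receiving side is straightforward, since AMP4Email travels as an extra MIME part with content type text/x-amp-html alongside the usual text/plain and text/html alternatives. A minimal sketch (the helper names are mine):

```python
# Sketch: flag messages that carry an AMP part with no usable
# plaintext or HTML fallback -- the case worth complaining about.
from email import message_from_string

def amp_parts(msg):
    """Return which alternative parts a parsed message carries."""
    found = {"amp": False, "plain": False, "html": False}
    for part in msg.walk():
        ctype = part.get_content_type()
        if ctype == "text/x-amp-html":
            found["amp"] = True
        elif ctype == "text/plain" and part.get_payload().strip():
            found["plain"] = True
        elif ctype == "text/html":
            found["html"] = True
    return found

def is_amp_only(msg):
    found = amp_parts(msg)
    return found["amp"] and not (found["plain"] or found["html"])
```

You could wire something like this into a delivery-time filter and auto-reply to senders whose plaintext part is missing or empty, which gets the complaint to someone without bouncing the mail outright.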
Better to just complain over channels where someone is actually listening.
If someone does that, it's a good method to land on basically any spam blacklist out there.
Most larger providers, like Mailgun, will tell you if you're bouncing a lot of mails and will reduce your reputation. Same for Gmail, even.
It also helps if you report senders that repeatedly bounce that way to spam blacklists. That usually gets a lively reaction.
For instance, how much longer before Google only accepts email from the "Big Players?" (or drastically lowers the performance/acceptance of email from smaller sites?)
Now, for the web AMP I have no idea...
From the issue:
> I think that the conversation itself is important and I'm always available on the internets.
cramforce is https://twitter.com/cramforce
I don’t follow amp links. Then again, I’m satisfied not signing up for twitter, or even loading their site as an outsider to get context when people link to tweets. I also killed my fb account about six months after registering (seven years ago). I clearly accept trade offs that don’t appeal to some.
Others have mentioned switching to ddg to withdraw support from google. I will probably test the waters with that in the future. It’s not as though we are held hostage in the fashion that many are beholden to ISPs.
One longshot I could think of is reporting it to the EU as anti-competitive behavior. In the case of exclusive AMP preloading on a dominant market (search), it's not that far fetched.
Similarly, in the US, by the time Microsoft lost its antitrust case, Microsoft's dominance in a lot of ways was already cemented, and in those ways, still is. (While obviously the mobile shift has cut them out, almost every desktop PC in every business on the planet still runs Windows.)
It claims to be an open standard; we should participate (or try to participate) in the open standard.
The problems AMP solves are real. They will be solved one way or another. I'm hopeful they can be solved in an HTMLite sort of way, not a whatever-Google-does sort of way.
So let's think about how to improve AMP, change AMP, evolve AMP, until it becomes something else and something better than even its creators thought. Destroy it from the inside.
EDIT: I know AMP has been terribly approached, and it might be nice if it just disappeared. I'm not claiming otherwise. But it's not going to disappear the same way proprietary LiveScript didn't disappear. I don't want to just scream at walls; we've been trying that.
If AMP is not handed over to the W3C, I don't think we can or should call it a standard.
I don't think Mozilla was super fond of EME either, but couldn't afford to be "the web browser that can't watch Netflix".
I definitely do remain wary of these groups being forced to rubber stamp things, I do feel Google exerts way too much control even in that space, because due to their monopolies, they can just implement what they want, and sometimes the W3C has to accept it or be irrelevant.
The whole point is that AMP works on all major browsers as they exist today. Waiting for a new standard and then waiting for Apple to implement it is a non-starter.
Google only ever supports competing software as much as absolutely necessary.
On the other hand, using the web package standard as the ancestor comment suggests would currently tie you to Google.
But I will say I think your wood shed idea won't work.
(Though I am very open to hearing more about its implementation.)
Where you can oppose the adoption of AMP and work on improving the rest of the ecosystem so that AMP has fewer compelling applications.
That's going to be hard to fix in an HTMLite sort of way.
... but they've been doing exactly that for ages. Originally with link rel=prefetch, and later with ever more elaborate schemes which I think culminated with this: https://plus.google.com/+IlyaGrigorik/posts/ahSpGgohSDo
But prefetching based on giving hints to the browser has a bunch of problems. The most obvious one is the one hinted at here: you can have either something ineffective, or something that's effective but complex and not supported across all browsers.
> Not doing this is a strong hint that another agenda is at work, to say the least.
The sinister agenda of wanting things to actually work well.
Google CAN preload arbitrary websites. They fully load your website, JS included, when they index it. As for the security problem when on google.com, they could still preload the HTML, CSS, webfonts, do all the DNS/HTTPS overhead, all of which would be safe to do, save those first few seconds, and create a level playing field.
2017 IETF proposal by Google: https://tools.ietf.org/html/draft-yasskin-webpackage-use-cas...
2018 Chrome demo at AMP event: https://youtube.com/watch?&t=9m03s&v=pr5cIRruBsc
There may be overlap in goals with W3C Web Publications, which is working to converge EPUB and Web: https://w3c.github.io/wpub/
If the user has GPS consuming 3G data, or some other app they care about, and only did a quick Google search to confirm something, now the GPS or other app's data will be severely limited while the browser downloads all that crap for some 20 different sites the user will never visit.
I personally disable preloading on everything, because waiting 3s for a page to show up is a very fine tradeoff to control what my device is downloading and when.
Have the publishers not learned anything from Facebook taking their content and readers hostage? Sure, it gives you a traffic boost in the beginning, but sooner or later it will force you to play by their rules and generate content for them.
More users don't equal increased ad revenue with AMP. If ads are not a thing you are selling, you are probably losing free attention if you aren't already exploiting AMP.
Since the beginning, the AMP team has been clear that AMP is not very useful without the cache.
I do agree with all of the concerns raised, but wanted to point this out so as not to get sidetracked from the main discussion.
I don't see that at all. Even just calling it a "specification" is misleading in the worst ways. You don't need a spec for caching. Nor do you need a spec to tell you that you shouldn't include a ton of CSS and JS. Nor was it ever the plan of Google to actually see others implement this spec.
AMP was created as a response to Facebook's Instant Articles. If anyone supports the open web, then it is hard to see how things become better if the web stays stuck as more content moves into walled gardens, or the web remains unusable on mobile phones in large parts of the world.
Maybe there is no controversy, because there is no need for one.
It is the caching where the discussion lies - effectively Google is providing a CDN for AMP pages, and I agree this needs some improvements. But these should be in the realm of technical discussions rather than looking for conspiracies.
- it's not fast without Google's overpowered cache/CDN (that no one has a chance to replicate)
- it's not even valid HTML
- its entire design and development is governed exclusively by Google, with no external input, and all external input is discarded and discouraged. See top comment: https://news.ycombinator.com/item?id=16455593
The fact that it lives on GitHub doesn't make it open.
>it's not even valid HTML
Custom elements are in the WebComponents spec, which is "valid HTML".
1. Google dominates search. So other caches are basically irrelevant
2. To create a competing cache you need to make sure Google's search uses that cache and that your cache is big, fast, and powerful enough to pre-render AMP pages on Google scale
Which comes back to the original problem: AMP is not fast until someone caches and pre-renders it.
> Custom elements are in the WebComponents spec, which is "valid HTML".
Will all of you "opponents" please read the article you comment on?
Since you can't, here is the sixth paragraph:
> AMP makes up its own standards that break with what is considered valid HTML. Case in point, have a look at how the AMP project’s homepage, which itself is an AMP page, produces over a 100 validation errors
You can check it yourself.
Then why did you ask to see other caches? You're moving the goalposts.
>produces over a 100 validation errors
By a validator that doesn't understand HTML5. Try your browser - it's a far more advanced version of the validator. You'll find it understands amp pages just fine.
No goalposts have been moved.
Once again. Slowly:
---- quote -----
AMP is designed by Google, and implemented almost exclusively by Google. It is also used in the most popular search engine in a way that makes AMP feel like it's fast.
- it's not fast without Google's overpowered cache/CDN (that no one has a chance to replicate)
---- end quote ----
You cannot replicate Google's cache for the following reasons:
- Google's search is dominant.
- Google's search uses and will use Google's own AMP cache.
- Google's own AMP cache relies on Google's infrastructure.
Even if you create an AMP-compliant cache:
- Google will not use it.
- It's highly unlikely that you will be able to match its power and speed.
How is AMP open and fast again?
> By a validator that doesn't understand HTML5. Try your browser
Nope. The browser is able to render AMP pages only because browsers have historically tried to make the best out of shitty HTML.
Let's start from the top.
Do you know that this is invalid HTML5?

<html ⚡>

Do you know that this is invalid HTML5?

<script async custom-element="amp-carousel" src="">
Shall I continue?
You see, in order to know that, it's enough to just stop drinking in Google's propaganda and actually look at what the web has to offer, and at how all of this stuff works.
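To make the point concrete, here's a toy sniffer for the AMP-isms above. It is emphatically not an HTML5 validator (use the W3C Nu validator for that); it just flags the obviously non-standard bits, like the ⚡/amp attribute on <html> and the amp-* custom elements, which no HTML spec defines.

```python
# Toy check for AMP-specific markup. Real validation needs a full
# HTML5 conformance checker; this only spots the obvious AMP-isms.
from html.parser import HTMLParser

# Attribute names AMP adds that appear in no HTML standard.
AMP_MARKERS = {"⚡", "amp", "amp4email", "custom-element", "custom-template"}

class AmpSniffer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag.startswith("amp-"):
            self.findings.append(f"non-standard element <{tag}>")
        for name, _ in attrs:
            if name in AMP_MARKERS:
                self.findings.append(f"non-standard attribute '{name}' on <{tag}>")

def amp_isms(html):
    sniffer = AmpSniffer()
    sniffer.feed(html)
    return sniffer.findings
```

Feed it the AMP project's own homepage and the list of findings is not short, which matches what the article's validator run shows.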
Google does not do this out of altruism of course - their margins on web search are much higher than on mobile, and if more and more content exists only in mobile apps and walled gardens they will lose revenue.
My protest is to talk about this issue in this HN thread, and to just install the extension so that the sites start to "just work".
I'm not excited about it. I can't remember a single time a page with lazy-loaded images felt snappier. It seems very dependent on latency to the host serving the images. Most pages also don't account for content shifting when images haven't loaded yet.
A note on "harmful" as an adjective: It depends on the speaker.
Any open source project aiming to bypass the Great Firewall of China is "harmful", according to the Communist Party.
How is Google the only search engine? A different one is literally a click away. This just doesn't pass as an argument.
I can’t. 95% of my visitors will come from Google. As result, I’ll have to accept whatever demands Google makes.
So it's not only possible to do, they've literally been doing it for 5 years now.
Firefox doesn't appear to do prerendering, but it will happily prefetch as well and has since Firefox 23.
AMP's cap on file sizes and other limitations helps these relatively ancient prefetching & prerendering work better, but you don't need AMP to get preloading. Not at all.
Don't know if it's necessarily "happily" in the case of Firefox. It's hardly optional if you want to compete at all in terms of speed with browsers that are doing full prerendering. And they are aware of the implications, which is why they're not doing it fully.
The pages load fast, they are just styled to not be visible until after 8 seconds.
For example, using the site used in the article, you can block 3rd-party scripts and bypass the AMP CSS using the following cosmetic filter in uBlock:
`scientias.nl##body:style(animation: none !important;)`
When I found out about this, I tried to find the reasons for this artificial delay in the AMP documentation, but I can't find any valid reason for it.
The net result unfortunately is that most users wanting to block `ampproject.org` out of privacy concerns are going to feel the need to whitelist `ampproject.org` to "un-break" a site making use of it.
They'd need to host the actual page on Google.com. And after solving all the problems that doing this introduces, you've pretty much got AMP already.
Even if you can't package up and ship all of your traditional site to Google's CDN, you could do most of the burdensome/heavy bits. But then Google doesn't get to control your website and define the way it's allowed to look, which is what AMP is really for.
I also can't imagine the amount of shit Google would have taken if they'd started just randomly doing that kind of thing for existing web pages. Instead they introduced a totally new mechanism (i.e. AMP) where the caching was a core concept from the start.
> Even if you can't package up and ship all of your traditional site to Google's CDN, you could do most of the burdensome/heavy bits. But then Google doesn't get to control your website and define the way it's allowed to look, which is what AMP is really for.
But "heavy/burdensome bits" are exactly the things that matter the least for this use case. Ideally they would not exist at all. If they do, they should not be speculatively prefetched.
It'd also mean that these pages are now tied to Google's CDN, no matter what. Have a user click through the link from some other source than a Google search result? They'll still end up loading the resources from them. Is that really what you want?