Hacker News
Google AMP lowered our page speed, and there's no choice but to use it (unlikekinds.com)
519 points by luu on April 10, 2019 | 308 comments



I'd love for Google to explain the justification for putting _only_ AMP content in the "Top Stories" carousel. Google needs to either rename the carousel to be "Top AMP stories" or allow non-AMP stories to appear in the carousel. Isn't the whole purpose of search to prioritize the most relevant content, regardless of the type of HTML used to create it?


I've asked the AMP team exactly this and they punted and demurred on the subject and said they have some vague plans to some day make it also contain sites that are as fast as AMP.

The AMP team will tell you that they are concerned with making the web faster, and it's the _search_ team that controls the carousel. Which is all well and good, but I can tell you that the _only_ reason any publisher is doing AMP is because they want the placement in the carousel and better search results.

As far as I can tell, Google has no intention of ever letting non-AMP content, content they can't re-serve from their cache with whatever analytics and tracking they want added, be ranked highly. AMP is a tactic to let them track the most lucrative mobile web users and take control away from publishers of original content.


There is nothing about an AMP page that makes it inherently faster until it lands in and is served by a Google regional cache server. Otherwise my site is going to be just as fast. I can make just as fast web pages and serve them from a server sitting in the same state as my customers. There will be no need for Google level scale for my site as it is regional. Yet I still get put below AMP sites.

From many different angles, AMP getting preferential treatment is heavy-handed, clumsy, and manipulative, forcing technology standards on the web that no one agreed on.


AMP limits what the page can do, and allows Google to safely serve content through a cache. So Google can preload and prerender the page in an iframe. As a result clicks on the carousel are literally instant. That can't be done safely with a regular HTML page.


I can serve content in milliseconds after the network request has been made. Near enough to appear instant to my users. I have caches too.

You can't help the network request if it has to be made when the user clicks, but there is nothing stopping browsers from preloading my site, and nothing stopping Google from noting that my site could be preloaded when it crawls it. Limiting ranking influence to sites with the AMP structure is a choice Google has made, not an inherent restriction of the technologies at play.
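For example, the standard resource hints already give browsers a way to do this; a minimal sketch (the URL is a placeholder, and the hints are advisory, with behavior varying by browser):

  <!-- Advisory hints: the browser may fetch (or, for prerender, even render)
       the linked page ahead of the click. -->
  <link rel="prefetch" href="https://example.com/article.html">
  <link rel="prerender" href="https://example.com/article.html">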

The technology behind AMP is very cool, mind you; I just find making it an exclusive ranking factor to be an unnecessary pressure to use their technology when many other technologies could work just as well. Google is a major force in the way people make websites, and they are being very heavy-handed in this instance.


> nothing stopping google from noting that my site could be preloaded

There's a whole host of security problems with trying to see if an arbitrary site can be preloaded. Logging, tracking, etc, not to mention more nefarious tricks like mining Bitcoin in the background.

Maybe Google could try to detect those things, but then you just end up in a cat and mouse game. A "safe" subset of html and js that can't do those bad things is much, much easier to analyze. One such subset is AMP.


A safe subset of HTML that they can analyze could be summarized in a document, with things that conform to that subset ranked higher. That is not what AMP is.


That is exactly what AMP is attempting to be. It's a subset of HTML and CSS which guarantees that the site can be preloaded in a privacy-preserving way. There are a lot of things sites want to do that can't be done in pure HTML and CSS and allowing full use of JS would be incompatible with this goal, so that gap is filled with open source libraries.

(Disclosure: I work at Google on Ads, and I work with the AMP folks a bunch. Speaking for myself, not the company.)


I'm curious what you think AMP is then, because what you describe is, quite literally, the AMP project. (Google, as well as Cloudflare and Bing, also adds a cache on top of things to make the preloading faster, but that's not technically a part of the AMP standard.)

I'll grant you that the branding/communication here is atrocious, but as I'm reading this, what I'm seeing is "They should have implemented <description of AMP>, instead they implemented AMP, which is much worse."


TIL <amp-img> is a standard HTML tag that works without loading the AMP JS.

AMP supports a subset of standard HTML, plus a whole bunch of AMP-specific extensions. That's a significantly different thing from being just a safe subset of HTML.


https://developer.mozilla.org/en-US/docs/Web/Web_Components/...

Webcomponent custom elements are part of the HTML5 standard.

`<literally-any-element-name-i-want-here>` is a valid subset of HTML. You added the "standard" modifier, but Web Components are part of the standard, so using them is still a subset of standard HTML.
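For example, registering a custom element takes a few lines of standard JS (a generic sketch, not AMP's actual code):

  <my-note>Hello</my-note>
  <script>
    // Standard Custom Elements API: element names just need a hyphen.
    class MyNote extends HTMLElement {
      connectedCallback() {
        this.style.display = 'block';  // behavior applied when the element is attached
      }
    }
    customElements.define('my-note', MyNote);
  </script>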

If you mean "they should have limited themselves to using older HTML APIs and doing custom polyfills based on css-classes instead of element names", which is probably closer to what you actually mean, I'd ask why.

The behavior would, in general, still be the same. You'd be required to include some metadata information, you'd be restricted to some amp.js provided to you, and you'd need to use class='amp-html' instead of <amp-html>. And that would be the entire difference.


So wait, the AMP project is just a document saying limit yourself to these things and we will rank you higher in search results because these things will make your site perform better and we want better performing sites?!?

Wow, color me disabused of the notion that AMP added stuff on top of that simple requirement to allow capture of publishers by Google, I had assumed bad intent and invented fanciful scenarios of what I would do if I were an evil corporation trying to control the web like requiring people to load my script on their pages and only using my analytics library so I would have all that data for me, or maybe just create a 'standard' set of components that I control and have people use those so they have tied up money in implementing my tech and I have lock in and in order to keep their ranking in my search app they sink more and more money into doing things the way I want, increasing my capture of them.

All sorts of evil stuff I might do, but gee, it turns out all Google did was publish a document saying use this stuff that is already standardized, and leave out these parts of the standards because they can be problematic and make things perform badly, and we will rank you higher in search results. Nothing else, just rank you higher.

Now I'm really sorry I thought Google was as evil as I am likely to be when working in a group of people trying to control a market. Google is as pure as the driven - gee, I don't know what, Snow? No, Climate Change means that metaphor is soon no more, what else can be driven, hmm, oh I know, a sociopath! Google is as pure as the driven sociopath, that sounds good.


> So wait, the AMP project is just a document saying limit yourself to these things and we will rank you higher in search results because these things will make your site perform better and we want better performing sites?!?

Yes. Or at least, in practice it's not really distinguishable from this, other than technical nits that aren't practically relevant to the overall design or to what anyone complains about.

>requiring people to load my script on their pages and only using my analytics library so I would have all that data for me, or maybe just create a 'standard' set of components that I control and have people use those so they have tied up money in implementing my tech and I have lock in and in order to keep their ranking in my search app they sink more and more money into doing things the way I want, increasing my capture of them.

These are the things you have to limit yourself to. You have to limit yourself to predetermined subsets of JavaScript, because otherwise there's no way to ensure the safety/speed/etc. of the site.

You're shifting goalposts here. Now you want the set of things you're limited to using to be broader than is possible, for valid technical reasons. You're also conflating AMP and Google, which is understandable because it's confusing, but again, AMP doesn't require anything from Google. So all of your complaints about being forced to use Google's analytics library or include Google's JS aren't true. Yes, for AMP (JS) to work you need to include a cache's JS, and Google is the biggest cache, but they aren't the only one.

As for the rest of your post:

> Don't be snarky. Comments should get more civil and substantive, not less, as a topic gets more divisive.

Could you at least try to be polite?


Suppose that this is true and that there is no way to achieve this without AMP. Are we going to route the whole web through Google's server so that pages are rendered faster? This is getting silly.


I can't help but wonder what problem they're solving for. Is there a huge market of people clamoring to make webpages load faster than they already do?


You should try loading "the modern web" over a not-great but passable phone connection (i.e. 3G maximum) on a non-flagship phone. You will have an exceedingly bad time on nearly every normal website. That's what AMP claims to solve. Far be it from me to claim they succeeded or that they are doing it in the right way, but I think pretending the problem doesn't exist at all is disingenuous.


AMP is a terrible user experience though. Google Search on mobile is - pardon my language - fucked now. I hate it. I get trapped in the cache, getting to the actual source is a pain, there's some stupid bar on top, websites are showing their own bars to signify that the user is not actually on the website. And I'm a techie, I know how this works! I imagine regular users must be utterly confused.


Thanks for voicing this. I'm in the same boat; AMP is a terrible user experience. It takes me multiple clumsy steps to get to the content page I thought I was getting to when I clicked a link.

Apart from that, a lot of websites have started showing an overlay on top of the AMP pages, clicking on AMP pages often doesn't work as expected, and there's a noticeable delay between actions (scrolling/tapping).

I disliked AMP when I first started seeing it, but the way it has evolved has made the experience way worse than it started with... Probably because Google also caved in and tried to cater to customisation requests from big publishers.


I have 4G and a top end phone, and mobile web is getting worse by the day.

The first solution is not AMP. The first solution is to go back to when a web page was a "page", not a program made up of cobbled together bits of JavaScript.

Cookie preference popups, join the newsletter popups, animated or dynamic ads, and everything else stupid websites do that causes the browser's rendering engine to grind away non-stop...

The current user experience is worse than I think it ever was.

But as other posters have commented, AMP is probably intended to be a control and data tracking system first, with user experience being a second or lower priority.


This is easily solved by installing Firefox for Android and uBlock Origin. I don't even understand how people tolerate the "modern" web otherwise.


Why not fix “the modern web” then?


Google can't block ads and tracking scripts in Chrome without either blocking their own, or facing (valid) claims of anti-competitive behaviour. Neither of which would get the person who did it promoted.

And no attempt to fix the slowness of the modern web will succeed without blocking ads and tracking scripts.

This is the downside of the dominant browser being made by an ads-and-tracking company.


Because convincing all the website owners to do anything (even if it clearly benefits everyone) is an impossible task.

See HTTPS adoption.


HTTPS adoption:

https://storage.googleapis.com/cdn.thenewstack.io/media/2018...

Great example. You just proved my point that it is possible to have positive change in large communities.


I'm not sure that graph shows what you think it shows.

That graph shows that there's an obnoxiously large number of websites that don't support something that there's zero reason not to support.

It costs $0 and takes maybe half an hour to implement, browsers are screaming "not secure" to each visitor, and one in five websites still don't support it. "Make websites faster", on top of being vague, is way more difficult to implement.


1 in 5 NOT supporting it is much better than the opposite situation when I started working on the web, when 90+% of websites didn't support HTTPS.


Well, I like the percentages and the direction that the graph is heading.


You don't have to convince anyone, it's about incentives. The article and many others have shown exactly how to do this: make the metric a big part of search rankings.

If fast site performance ensures you get listed first, every site would get faster overnight.


Isn't https over the hump in most of the world?


Yes. In many areas phones are the main tool used to access the internet, and coverage is primarily still 3G. Not defending AMP, but making pages load faster is still something people want and need.


For a lot of niches usage is > 50% mobile already.


There are billions of people with capped data plans, for whom 200kb/s is the standard.


I don’t appreciate Google preloading pages that I didn’t ask for, resulting in my limited data allowance being wasted.


There are no AMP pages that weigh less than 1MB.

However, since Google aggressively preloads most of that data while you search, it gives you the illusion of AMP pages being lightweight.

Non-AMP pages? Oh, Google will happily penalize them even if they perform better.


I did a mobile Google search for "Trump" to bring up the top news section on Google, and was able to find sites (generally non TV-news sites which aren't doing video with every story) that weighed less than 1MB:

USA Today (582KB, first item in the top news section) https://www.google.com/amp/s/amp.usatoday.com/amp/3431466002

Vox (486KB) https://www.google.com/amp/s/www.vox.com/platform/amp/policy...

The New Yorker (565KB) https://www.google.com/amp/s/www.newyorker.com/news/current/...


Unless those articles are packed with raster infographics, that's pretty bloated just to send a couple of pages' worth of text.

I mean, why exactly is 500kb of cruft being served to present 5kb worth of data?


The EU experience of USA Today weighs in at 248KB, and that's without AMP.

So even if the AMP pages are small, they're still much larger than the page ought to be.


Sure. But USA Today EU is not really a fair example. You can't really expect publishers to serve absolutely no ads and tracking at all. Unless you want them to die. There are initiatives like Apple News+ and Subscribe with Google, but they are not mainstream yet. And individual subscriber options for news portals are way too cumbersome (and perceived as too expensive) for users.


I agree, but the solution is for ad companies to improve so you don't need tons of JavaScript to display an advert. Why can't they be done server side, rather than handled by client side JavaScript? A lot of web apps now employ SSR, so having the adverts use SSR should considerably speed them up


I ran all the links below with Google Chrome DevTools open, no cache. Switched to mobile view to make sure that AMP pages are loaded. Cache disabled to see the actual weight of the page.

---

USA Today AMP page: 45 requests. 755 KB transferred. 1.6 MB resources.

USA Today EU Experience: 10 requests. 209 KB transferred. 251 KB resources.

AMP version is significantly worse for essentially the same content (AMP version and EU version are nearly identical)

---

VOX AMP: 37 requests. 575 KB transferred. 1.4 MB resources.

VOX AMP not served from Google [1]: 22 requests. 403 KB transferred. 935 KB resources.

VOX non-AMP [2]: 19 requests. 627 KB transferred. 1.3 MB resources

So, AMP is worse when served from Google, and on par with the non-AMP version otherwise.

---

The New Yorker AMP: 129 requests (and they keep coming). 796 KB transferred. 1.9 MB resources

The New Yorker AMP not served from Google [3]: 70 requests (and they keep coming). 745 KB transferred. 1.5 MB resources

The New Yorker non-AMP page[4]: 245 requests (and they keep coming). 8.2 MB transferred. 13 MB resources.

So. The only example where the AMP page is significantly better than the non-AMP pages.

---

But page weight is already accounted for in Google Search's algorithms, and The New Yorker page should have been deprioritised in search. It's not; it's in the carousel, and it will redirect to the 13MB version on desktop. Meanwhile, as Vox, USA Today, and many, many others show, a regular, properly made website will not differ significantly from the AMP version.

[1] https://www.vox.com/platform/amp/policy-and-politics/2019/4/...

[2] https://www.vox.com/policy-and-politics/2019/4/10/18305175/t...

[3] https://www.newyorker.com/news/current/william-barr-goes-ful...

[4] https://www.newyorker.com/news/current/william-barr-goes-ful...


Yeah, I am sure there is some sort of KPI tracking engagement which helped to inform this behavior.


Yes.


Yes, have you seen a lot of websites these days?


As a former SRE for Google who cared for AMPHTML from birth: NOBODY IS FORCING YOU TO USE AMP. In return for free hosting of the web content, the publisher agrees to use the AMP format/subset. The whole point of the project was to stop walled gardens, like the BBC app, like the CNN app, etc., which arose because of crappy slow web pages! That was killing the searchable mobile web! AMP is to enable mobile search! The motivation is in the first page of every dang design doc for AMP at Google! You might be really surprised to learn that your supposed plan for world domination is run on a shoestring and a low priority at Google!


Wow, a "shoestring low priority" project gets top search result placement, a custom icon callout, and cannot be disabled. What do the high-pri projects get?


High-prio projects get to be a requirement for commenting on Youtube.


[flagged]


What's your response to this?

https://news.ycombinator.com/item?id=19632481?

If that's the goal, then it does not seem universally effective.


When it affects your bottom line by changing ranking factors then it is no longer as optional as you make it out to be.

Google decided to dominate the search world, and it now underpins a huge portion of online businesses. That comes with a level of responsibility to be fair to its patrons that in this particular instance I feel isn't being met.

Of course they can do whatever they want, it is their product, but we don't have to like it.


> NOBODY IS FORCING YOU TO USE AMP.

That sure doesn't seem to be true at all.


It's quite ironic that Google is getting into trouble with the EU for nonsense like excluding search engine spam from its results, when it's stuff like this that really has that classic Phone Company feel to it.


The EU works slowly so it is way too early to say if Google will get in trouble or not for AMP.


The problem is that doesn't work to produce desired behavior. If you slap them for the wrong stuff then one of two things happens.

The first is they conclude they're going to get slapped no matter what and then do whatever they want and write off the penalties as unavoidable because better behavior doesn't actually avoid them anyway.

The second is they conclude that the only way to avoid the penalties is to grease the right palms, you induce them to figure out how to do that, and then they still do whatever they want because once you force them to corrupt your institutions in order to not be treated unfairly, those institutions no longer threaten them even when they misbehave.

The only scenario that leads to behavior improvements is the one where all the penalties are assigned justly and proportionally, to behavior that could reasonably be predicted ahead of time as prohibited.


> If you slap them for the wrong stuff

None of the FAANG is being slapped by the EU for the wrong stuff. They aren't being slapped for your specific example (yet) but that doesn't mean that forcing their self-serving choices onto powerless users deserves a pass.


> None of the FAANG is being slapped by the EU for the wrong stuff.

A lot of the other stuff is things like not putting links to another search engine's results page in their search engine's results page. It's pure nonsense.


> The first is they conclude they're going to get slapped no matter what and then do whatever they want and write off the penalties as unavoidable because better behavior doesn't actually avoid them anyway.

European Commission fines are no small matter. They may be small at first, and may even be zero (just a warning), but the policy is to increase fines on non-compliance up to the point where the target complies. They can't write off penalties as unavoidable.


And if the fines are being imposed on actions that it would be hard to predict ahead of time would result in a fine, that leaves them with the second option (use money to buy influence), which is probably the worst of all because it's stable.

Once you convince a company that paying whatever it takes to buy a government is the only way to avoid random multi-billion dollar fines, you've created a long-term structural problem, because it becomes the status quo and is hard to undo. And then the government can't even punish them when they're actually being bad.


I'm not sure that AMP is available in Europe. I live in an EEA country and I've never seen an AMP page. Curious if this is the case in the rest of Europe.


With the coming ITP changes in Mobile Safari, it also helps Google maintain 1st-party cookies by never leaving their domains.


Also often defeats Safari Reader Mode, though I guess there will be a bit of an arms race here.


Safari Reader mode is too stupid to use the canonical URL. That's Apple's problem.


Web users and independent sites need more stories like this as it's impossible to understand these things from their perspective unless you have this kind of traffic.


It looks like Google just needs AMP to be able to display other sites' pages on its site and mobile app. But I don't understand why they don't just take a "Reader mode" implementation from Firefox to produce text-only pages instead of making publishers develop a second version of their sites.


Reader Mode (Simplified View in Android Chrome) drops ads, which publishers would not appreciate.


There should be a law!


There should be, but there won't be, at least not anytime soon. This is still the wild west. Google is the corrupt sheriff whose task is to keep the place relatively safe and productive, and who is politically free to enrich himself through his position, as no one else is going to pay him and no one important cares.

Give it another fifty years and the web will finish civilizing.


Does “civilizing” mean “entirely taken over by feudal lords instead of just mostly”?


The judiciary is quite literally the outcome of a feudal society (courts?), but that doesn't mean that civilized societies don't benefit from establishing rule of law.

In fact, the absence of laws just means that some random individual can and will have power over you without you having any say on the matter.


I choose to see that positively. In 50 years the Web will have civilised into something that is not run by a big company and is governed by sane rules.


> Give it another fifty years

I see you have not been following the CO2 news.


The reason I have gotten from Google employees is that the top stories carousel preloads content, and that in order to preload content without leaking a user's searches, Google needs to serve their own cached version of a page. In order to make sure that cached version of a page isn't doing anything tricky, they need to heavily limit what it can do.

It's not about making the internet faster; it's about making it easy for Google's stupid scroller.


That's a bit circular. The carousel predates AMP, so they somehow knew how to manage this in the past.


Sure they did — partnerships and whitelists. Is that what you would prefer?


I believe AMP to be a sort of trojan horse that leads to a fragmented internet. A way for Google to have some degree of a walled web garden. So yes...I'd prefer almost anything else. If you're signing a contract to be part of it, at least you know what is happening, and can read the terms.


Yes. Since I automatically ignore anything in the carousel anyway, I'd much prefer that AMP be nonexistent regardless of what that means for the carousel.


But the carousel didn't preload content before AMP.


It also didn't hijack the page header, back button, and left/right swipe actions on other people's pages.


Another thing might be that this way Google can observe users' behaviour: how much time they spent reading the article, what parts they read carefully, etc., which is only possible if the user stays on Google's domain.


Throwaway for a Google search engineer here. I can tell you we aren’t interested in that at search as a signal, but publishers who are interested can run their own analytics through our platform. I’m not aware of any that do it to that level of granular detail.


It would seem odd for Google to focus on that level of detail unless they’re producing the content. They know what you searched for, what content is on the page and if/when you came back to Google. How long you spent on a specific paragraph or image doesn’t seem like it would help Google improve search or target ads better.


It does help with improving search, though, because you now have data that shows that the words in that paragraph are somehow closely related to the search term. This is relevant data to feed into the ML models.


> it's not about making the internet faster

Are you saying preloading content _doesn't_ make the internet faster?


https://news.ycombinator.com/item?id=12542782

AMP is search engine "paid placement" reborn.

The "payment" is letting Google serve the pages and do the user tracking.


They're looking at another antitrust case in the EU.


It certainly smells of the shady shit Microsoft was doing back in the day.


No, the purpose of Google Search is to facilitate selling ads.


What do you think? :-) It's obvious, it's capitalism, isn't it?


I'm not Google, but the reason is obvious to anyone who understands AMP. Only AMP can be guaranteed safe to preload through static analysis, and a carousel lets the search results page figure out which pages to preload trivially (the results currently visible in the carousel).


> Only AMP can be guaranteed safe to preload through static analysis

This isn't how static analysis works. Yes, the halting problem says you can't determine safety of any possible input in a Turing-complete language, and yes, JS is Turing-complete. But it doesn't say you can't determine safety of some inputs. You can write an algorithm that outputs "Yes," "No," and "I don't know" just fine. Google can, if they desire, specify some rules / annotations that would help its analyzer answer "Yes" to more pages. They've chosen not to do that, and instead to only build an analyzer for AMP.

Besides, preloading pages is about fetching content, not rendering them, I think.


> specify some rules / annotations that would help its analyzer answer "Yes" to more pages

Isn't that exactly what AMP is? Whatever "rules" Google specifies will, in fact, define a DSL/subset of HTML that can efficiently answer "yes". As people demand more and more capability in this ruleset, eventually you'll end up with something like AMP; only the syntax will be slightly different, but the fact that it is a weird subset of HTML will remain.


In a way yes, but isn't one of the "rules" that it has to be cached/re-served from Google's servers as well? That is a rather problematic one, especially wrt antitrust issues...


No, those rules guarantee that the page is safe to preload. It will also be cached and preloaded by Bing, Baidu, and Cloudflare.


AMP doesn't allow arbitrary JS. It only allows a fixed set of components. That's what makes it amenable to (trivial) static analysis -- just verify that the page contains only the elements allowed by AMP.
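A toy version of that check is just a tree walk against an allowlist; a sketch only (the tag list here is made up, and the real AMP validator checks much more than tag names):

  // Toy validator: accept a document only if every element is on a fixed allowlist.
  const ALLOWED = new Set(['html', 'head', 'body', 'p', 'a', 'amp-img']);

  function onlyAllowedElements(el) {
    if (!ALLOWED.has(el.tagName.toLowerCase())) return false;   // unknown element -> reject
    return [...el.children].every(onlyAllowedElements);         // recurse over element children
  }

  // onlyAllowedElements(document.documentElement) === true  =>  "safe" per this toy rule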


> Only AMP can be guaranteed safe to preload through static analysis

No, AMP pages can't be proven safe, because with AMP you can include ads, which are a very common vector of malware (if I understand your comment).


Ads aren't preloaded in AMP pages.

Also, I don't think you understood my comment. I mean safe as in not deanonymizing the user to pages they don't visit. If you preload a non-AMP page, you have deanonymized your user to a third party publisher and the ad servers on that page.


> Ads aren't preloaded in AMP pages.

That does not make a big difference.

> Also, I don't think you understood my comment. I mean safe as in not deanonymizing the user to pages they don't visit. If you preload a non-AMP page, you have deanonymized your user to a third party publisher and the ad servers on that page.

Sorry, I still don't get it; with AMP it's exactly the same, except that the 3rd party is the Google AMP server. You can also load analytics & trackers with AMP as well. There's no privacy in the AMP design goals.


If Google allowed arbitrary pages in the carousel, and then preloaded those pages, then BBC and BuzzFeed could insert trackers that let them know what you searched for even if you never click on a result.

This is not true for AMP analytics because Google defines the JS and can defer sending analytics events until after the user has clicked on an item in the carousel.

Google can preload content, allow non-AMP pages, or protect users from data leakage, but not all three at once. They have chosen to sacrifice non-AMP content.
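For reference, AMP analytics is declarative JSON interpreted by the AMP runtime rather than arbitrary publisher JS; roughly like this (the account ID is a placeholder):

  <amp-analytics type="googleanalytics">
    <script type="application/json">
    {
      "vars": { "account": "UA-XXXXX-Y" },
      "triggers": {
        "trackPageview": { "on": "visible", "request": "pageview" }
      }
    }
    </script>
  </amp-analytics>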


> If Google allowed arbitrary pages in the carousel, and then preloaded those pages, then BBC and BuzzFeed could insert trackers that let them know what you searched for even if you never click on a result.

There's nothing you can do with the carousels that they can't do by just parsing the content or meta tags they could have defined. And when you click on the carousel, it could have redirected to their page.

And before you talk about the "speed" of preloading (I've seen this argument over and over again in AMP threads): AMP could have defined an HTTP header that would make the browser understand it's an AMP page and load the cached AMP JS (among other things, like refusing to load non-AMP content). There's no technical need for an AMP server.


Google can choose to sacrifice non-AMP content. But the header above this feature should read something like "AMP Content," not "Top Stories." The latter strongly implies that those are the most relevant results from the user's web search, but they are not. They are a subset of the most relevant results from the user's web query. A more precise and less misleading heading would be appropriate.


Cannot this problem be solved by producing a text-only version of the page using Firefox's reading mode algorithm that removes everything except the article text? This way publishers don't have to invest resources in making a second version of their site.


Good luck browsing news stories with video content or image slideshows. Also publisher analytics, advertising, and all the other basic staples of web journalism. But yes, if you completely change everything about the business model of online news, reader mode might be viable.


> Also publisher analytics, advertising, and all the other basic staples of web journalism. But yes, if you completely change everything about the business model of online news (...)

Yes, this is the solution and should be the ultimate goal to fight for. Those "staples" are precisely what's ruining the web journalism, and the Internet at large.


Let me introduce you to USA Today European Experience: https://eu.usatoday.com

It’s possible to create a non-bloated fast news website. Neither publishers nor Google are interested in that.


> Neither publishers nor Google are interested in that.

See previous comment about the way the online news business works.


> Then there’s users begging Google to allow them to use more than 50kb of CSS. Yes, most site’s CSS is bloated. But 50kb is an absurdly small, arbitrary limit.

I think for static articles 50kb should be plenty. The linked page itself uses only 35kb and a third of that is fontawesome. And 32kb of that is actually unused without javascript enabled. So the site only uses 3kb, but 50kb is absurdly small?

This site (Hacker News) gets by with 2 kb.


A given page may only need a dozen kilobytes of CSS, but the combined amount that covers any possible page might be higher. For example a news article might have a video player, audio player, image carousel, map, data tables, scroll away images, etc.

Normally a page could insert a link for a global CSS document, or have a document corresponding to each feature (video.css, audio.css). AMP doesn't allow this, you have to inline the styles into a single style block in the head of the document.
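For illustration, the AMP pattern is a single size-limited inline block, roughly like this (the class names are just placeholders):

  <head>
    <!-- All author CSS must be inlined into one <style amp-custom> block, subject to
         the size cap; per-feature external stylesheets (video.css, audio.css) are out. -->
    <style amp-custom>
      .article { max-width: 40em; margin: 0 auto; }
      .video-player { position: relative; }
    </style>
  </head>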

The only way to do this if you have a site with a long tail of features is to track which elements are rendered and then before flushing the page calculate which styles should be inlined.

AMP's approach is workable, but it is at odds with how most sites and frameworks use CSS, and it means the web framework and static resource build pipeline have to be redesigned around AMP constraints. And in the worst case, you'll discover that some content combinations happen to trip the 50k limit anyway, and silently fail.


> A given page may only need a dozen kilobytes of CSS

I think you would be challenged to find a single page that needed that much; please give an example if you have one. Most single page articles I have tested use less than 5kb. codepen.com uses like 6 kb on load of a new pen. gmail.com completely loads with 5kb and doesn't load more.

> For example a news article might have a video player, audio player, image carousel, map, data tables, scroll away images, etc.

This is definitely a concern, but 50kb is high enough that you should be able to fit custom css for everything you mentioned. I think 10x what most single pages use is pretty reasonable, and certainly not "an absurdly small, arbitrary limit".

A significant portion of time is spent calculating styles from css so keeping it small is a real bonus to load times.

> AMP's approach is workable, but is at odds with how most sites and frameworks use CSS, and means the web framework and static resource build pipeline have to be redesigned around AMP constraints.

Pretty much every site I have looked at since my first comment besides Github loads less than 50kb (much of which is unused) anyway. If you are really hitting the 50kb limit, you probably need to redesign anyway for desktop so AMP should come mostly for free.


If limiting CSS to 50k per page would prevent news articles from including "a video player, audio player, image carousel, map, data tables, scroll away images, etc." then I wish browser makers would include it as an option so I could apply it as a filter everywhere. I don't want all that stuff to begin with.


The whole of jQuery minified is 30kb. Why you would ever need 50kb for a single AMP page is beyond me.


It's mostly just long class names and things like that.

If you minified the css along with the html you could get it significantly smaller.

Also, using individual CSS properties instead of combining them with the shorthand syntax adds size; I assume any minifier would handle that as well.

Of course it's probably really hard to do, because some class names are enabled/disabled in JavaScript and the names have to stay.


To be fair, HN looks like it was designed in the mid-2000s


And yet it works.

The problem is designers and their bosses/clients. They equate design with looks. As a result, they see design as solving the problem of aesthetics. They fail to realize this is a problem which very few people care about on the web. So long as it passes the smell test of credibility, no one cares. (Excluding a handful of cases.)

As a result, we, the users, must suffer.


That's true only for (some) tech-literate people. Everyday users will quickly think something looks ugly/old/cluttered when compared to the other apps/sites they use.

I'm an outlier on this site because I generally like redesigns and all that comes with modern web stuff (white space, rounded corners, flat, light drop shadow, generally clean and option for dark mode).

I find it's easy to test. Take a redesign you didn't think was necessary and then look at it 3 years later. Design changes over time. It's just life.

HN for some reason did age decently because it's very spartan and minimal. I like it. Even mobile is fine for reading. Not so much for contributing though.

Edit: Actually, everyday users might not care much either way. UNTIL some other site with similar functionality comes along but looks much more modern. Or if the site is trying to get new users.


"I like" is precisely the mistake I am addressing.

Design is not about what one likes. It is about what helps one solve a problem.

Web designers and their bosses/clients reduce "design" to the creation of the look-and-feel of websites. For them, design is all about how something looks. This is something which is highly subjective.

The flaw in this is that it's not how users think. Users have a job-to-be-done. A good design is one that helps them accomplish that job. A better design is one that helps them accomplish that job better, faster, or easier. A bad design is one which doesn't help them or makes it worse.

Consider a monolingual English speaker using an ATM in China. You will never hear them say, "Well, I can't get my money because I don't understand Chinese. However, this ATM looks so nice I'm going to try to use it again."

It doesn't matter how great that ATM looks if the user cannot accomplish their task.

Yes, aesthetics have a place, but it's a very diminished place of importance. Aesthetics is much less important than most web designers and bosses/clients think.

It's time they get over themselves and start thinking about users.


Early to mid 2000s was the best time in web design. Everything before that was "omg, i can put animated images and color on the web, LETS PUT ALL OF THEM IN THERE!" (aka the Geocities School of Style) and after that was "OMG iPhone is kewl, lets make everything 20x sized so that people can rub their screens with ease (desktop? what desktop? that is sooo yesterday, and dying, get with the times grampah, today real programmers make web applications in their iPads while drinking latte soda cappuccinos and eating gluten free croissants at plate free coffee shops with hand demoisturized tables)".

I mean, early/mid 2000s design was still bad (especially when people learned about the gradient tool and drop shadow filter), but at least browsers were limited enough to mostly contain the damage.


I tend to agree, there is no reason to change the HN look and feel. If it were a corporate product it would have had a dozen product managers trying to make a name for themselves by redesigning it ad nauseam, just for the sake of change and advancing their own careers. Also see: Wikipedia (more or less), Craigslist, DuckDuckGo (I'll defend that one), and scant few others. News sites like newspapers and large blogs are particularly egregious imo (but I think much of that is driven by demands for revenue by selling more ads and tracking).


HN was designed in the mid 2000s and has changed very little and that’s a good thing.

Just for grins and giggles, I occasionally charge my first gen iPod Touch from 2007. Most web pages are unusable - except for HN and daringfireball.

I have no affiliation with the site below. I just saw it on Show HN a few years ago.

http://tenyearsago.io/news.ycombinator.com


Being stable is a good thing but please don't conflate that with being sterile.

The mobile UX here is non existent.


> The mobile UX here is non existent.

It doesn't have any specialist mobile UX, but it also doesn't have any need for specialized mobile UX. It has pretty nearly the best mobile discussion UX I've seen simply by not trying too hard.


I have to zoom to click things.

It has clean UX for desktop. Which thankfully translates to mobile good enough because we built mobile concepts to deal with it (zooming).

But no, that doesn't mean it has mobile UX.


HN works really well on mobile. The only issue is that the voting arrows are a bit small, but otherwise it is one of the mobile sites I use with the best UX.


Also you have to use a mobile browser that isn't fundamentally broken in its handling of <pre> tags, which some people apparently won't or can't.

  # Otherwise the ends of really long lines scroll off the side of your phone and
  # you can't read them because your browser is a piece of junk.


The only other way to handle <pre> tags is to wrap these long lines, but I do not see how that is not fundamentally broken, considering that the entire purpose of the <pre> tag is to show preformatted text.


What’s the expected behavior? I have to swipe to scroll the text.


I disagree. I browse hn almost exclusively on mobile and I think it is fantastic. Compared to other sites, many of which won’t even load when my connection is shoddy, jump around as they’re loaded, require me to turn off my ad blocker just to render properly, etc., and I’ll take hn’s mobile UX any day. Two examples of terrible “modern” mobile UX are (new) reddit and LinkedIn.


Yea, but I'm not asking for a React rewrite; I'm asking for fonts and UI elements to not be microscopic or disappear on me when I misclick on the wrong microscopic thing.


You say that as if it's a bad thing.


And that's a bad thing, why?


That's a good thing.


Does this site run on AMP?


What in AMP would make HN need 50+ KB instead of 2 KB of CSS?


> For a start, anyone contributing to AMP is required to sign a contributor license agreement (CLA) for their code to be accepted into the project

This is pretty standard. You have to sign one to contribute to Emacs.


I noticed this too, and immediately thought of the FSF. But I kept reading and thought this was different:

> Note, you don’t grant these rights to the AMP Project, you grant them to Google. Google owns the code and patents.


This is standard. Copyrights are assigned to the body that owns the project. For Emacs this is the FSF, for AMP this is Google. The only time it's something different is when a project is owned by a foundation created just for that project, which is super common in the OSS world. Have you done much OSS development?


One very important difference is that the FSF is a 501(c)(3) non-profit, and the terms of its charter and of the copyright assignment state that the software must remain free in the future. Effectively, the only thing the CLA does is make it easier for the FSF to enforce the GPL on behalf of the developers.

On the other hand, in AMP's case the CLA allows Google to start distributing the software under a proprietary license in the future, if they so desire.

One example of this difference was the time when Gitlab stopped requiring a CLA, after being prompted to do so by the Debian project: https://about.gitlab.com/2017/11/01/gitlab-switches-to-dco-l...


It does, however, contradict Google's insistence that the AMP project is independent from Google.


I have done quite a bit of open source stuff, and so far I have managed to avoid projects which require a CLA. In my experience, projects with a CLA are a minority.


Why don't projects let you sign away your copyright so your contribution becomes public domain? Is that even possible?


In many countries authors aren't allowed to put the code under the public domain. To avoid these issues it is instead preferred to provide an explicit copyright license.

https://opensource.stackexchange.com/questions/1371/


What license is AMP... GPL? BSD? MIT?


AMP's license is technically Apache 2.0, but functionally, AMP's license is irrelevant. The only AMP code that matters is the code implemented on Google's servers for serving search results, as that is what everyone must comply with to be placed well on Google. Nothing about openness or transparency or governance of the project actually matters, because the issue is how AMP is used by Google on Google servers.


"Standard" != right.


When you contribute to Ubuntu or related projects, you assign copyright to Canonical. It makes sense.


That can be done in America, not here.


A reminder to all AMP-haters like me: When using Firefox mobile, Google search results become free of AMP crap.

Also, you get the added benefit of a browser whose maker has no interest in tracking your every move.


And it has extensions like desktop browsers do, including ones that remove even more modern website bloat


You mean built-in extensions or can you add some now?


You have always been able to use every desktop extension on mobile FF, at least as long as I have been using it.


Android or iOS?


Android. iOS's Firefox is Safari (like every browser on the iPhone) with a Firefox overlay.


This has been possible for as long as I've been using it on Android. The ability to run uBlock is great.


Microsoft was sued by the DOJ for bundling IE with the OS. Google is doing a similar thing with AMP. They are walking a fine line.


You don't HAVE to use Google search. Everyone just prefers it. People prefer Bing for searching videos (less censorship) and DuckDuckGo for more organic results.

I don't think the AMP carousel is anything but a feature. People need to stop treating companies as services. Products can go away in the blink of an eye.


Why is there constant advocacy to do away with anti-trust laws on this website?


Because there is a strong “libertarian” bent in the startup community. Why? Because there’s a lot of insanely wealthy people who influence the broader community, and over-privileged upper-middle class college kids who just read Ayn Rand for the first time and now think they’re enlightened who gobble up their religious fervor for “capitalism.” And why do those wealthy influencers praise capitalism? Because it provides a narrative for why they are successful that inflates their own egos and fits into their self-engrossed world view, for them to acknowledge anything other than their own brilliance having contributed to their success is to undermine their entire self image. /rant


How it is violating anti-trust is what I want to know. You don't put your website on Google AMP, so Google ranks you lower - they've told you this upfront. If you don't like Google AMP you can choose not to support it and take the advertising/SEO hit (which is really a monetary hit).

Except no, people say they are "forced" to use Google AMP. Well duh, Google was also "forced" to make AMP due to the shitty way people design websites in general (slow, ads in awkward places leading people to install ad blockers, etc.).

AMP is inevitable in my opinion - the old style of ads is intolerable to most users now, and Google needs to keep up since ads are their core business.


Using your monopoly to force another one of your products on people violates anti-trust law.

That's how.


You never HAD to use Windows


It's not just a feature if clearly the first page on mobile search is all AMP. Anyone with half a brain realizes that if they want to get to first page on mobile, they need to use AMP. It's a requirement now, not a feature.


Google needs to be split up and shut down. Why aren't the American authorities using the antitrust laws that are already there?


Basically because the current philosophy of US Justice Department antitrust prosecutions is to ask whether the market dominance of the player harms ordinary people or causes them to incur higher costs. It is not really about actual monopoly power. Google has been pretty careful to stay consistent with this philosophy.

This wasn't always so in the US and it's not the same philosophy in EU. However, it is a valid and consistent view.


Corollary 1: You are legally allowed to kill all of your competition while not improving your monopoly product as long as you keep your product free.

Corollary 2: Human attention is considered equivalent to monetary value of exactly $0 by US government.

Corollary 3: You should never trust your government to be smart or do right thing for you.


Counterpoint: If you strongly want your government to change its approach, organize to change the government.

In the US, it's very safe to do this without violence, but even in significantly less safe places, people have successfully done this peacefully [1] [2] [3]

[1] https://en.m.wikipedia.org/wiki/2013_Delhi_Legislative_Assem...

[2] https://en.m.wikipedia.org/wiki/2011_West_Bengal_Legislative...

[3] https://en.m.wikipedia.org/wiki/October_2005_Bihar_Legislati...


Why would it need to be shut down?



[flagged]


I’m not defending Republicans, but as someone else said, American antitrust law has always been about protecting consumers and not businesses.

Unfortunately, everything I could cite about the difference comes from Ben Thompson’s (of Stratechery fame) Exponent podcast.


The current interpretation of antitrust law in the United States is fairly new (it's only been around for 50 years or so, and as I understand it, largely driven by The Antitrust Paradox by Robert Bork [yes, that Bork]). Before that, there was broader enforcement in the antitrust sphere.


Of course there is a Planet Money episode about that....

https://www.npr.org/templates/transcript/transcript.php?stor...


- Killing the web with AMP

- Killing email with gmail specific garbage

- Killing mobile phones with walled gardens

- Killing browsers by implementing their own spec and fuck everybody else

- Killing open source tools by releasing closed source extensions for VS Code

Google has done a complete and total 180 in the past 10 years. I remember when the name inspired and made you feel safe that this thing you were using was made with character and thought to your well-being. Now it's a dry-heave feeling having to touch anything Google.


I don’t think they’ve done a 180, I just think we understand (through experience) their tactics better now.


We saw/feared where it would go and now it’s gone there.

Was it there 10 years ago? I don’t think so. At least not so explicitly.


They’ve been an advertising company since 2000, so I’d argue that yes it was there, people just didn’t recognise it/gave them the benefit of the doubt.


Granted, advertising in the early 2000s was vastly different though. It was served organically, with none of the tracking or embedded-scripting nonsense that's ubiquitous today (whether from Google or other actors). The very notion of "ad-blocking" or "tracker-blocking" would have made no sense back then.


While tracker-blocking in today's sense would have made no sense, there certainly was ad-blocking and cookie-blocking before the 2000s.

AdSubtract advertised its ability to block doubleclick.com cookies in March of 2000: https://www.computerworld.com.au/article/91102/adsubtract_bl...

The article also mentions Siemens' WebWasher product which blocked cookies. Other cookie-blocking products were released in the same time period.


I was referring more to their motivation for doing things, not specific shitbag things they do now.


I remember using Privoxy back in the early 2000s because even then certain ad networks were regularly serving up malicious content and pop-ups.

At that time Google was the good guy, serving up only text ads.


- Yeah, hate that Google walled garden. Much better to go with Apple.

- Yeah, that gmail specific stuff is evil, except it isn't gmail specific at all.

- Yeah, killing browsers with Chrome. Just like IE6, except it's open source, cross-platform, evergreen, and pushes standards forward rather than trying to subvert them. The biggest problem now is that others like MS using their browser engine reduces diversity, but I guess that's their fault for making it open source. Evil!

- And yeah, they have now officially destroyed VSCode (which includes lots of closed source MS stuff) by releasing a closed source extension.


These arguments don’t invalidate OP’s critique of Google’s positions though.

https://en.m.wikipedia.org/wiki/Two_wrongs_make_a_right


Your only two options on mobile are Apple & Google and he is calling out Google (in fairly extreme terms) on a point where they are easily the better of the two.

It shows what a silly contortion he was going through to tell us to hate Google.


> - Yeah, hate that Google walled garden. Much better to go with Apple.

For a lot of people, it is.


I trust Apple not to sell my data. And I'm willing to pay the hardware markup for that.


The issue was OP's anger (to put it mildly) that Google was creating a walled garden.


[flagged]


Are you joking or serious? You stated you were a Google employee before.


What are you talking about? I never stated that, because I am not!


Okay so you’re a troll.


I don't use Facebook at all. They are just as bad as Google.


This is natural for corporations which grow to this size. It's not about good vs evil; they just weren't previously at a scale where they could do these projects, which are all based on some combination of good intentions, serving customers, and profit.


This isn't some law of physics. A corporation could decide that their ultimate goal is to improve society, the planet, employees' lives, consumers' lives, etc. above pure profit. This would of course result in less profit.


I said these projects are launched with both good intentions and profit. That Google has done plenty to help the world is indisputable.


They could. But if they commit to relinquishing control of their enterprise to outside VC investors (as practically all SV companies do; there's no such thing as a self-bootstrapped unicorn!) they won't.


If you want an example of a tech company that actually decided to prioritize public good, I'd present Kickstarter, which has specifically registered itself as a public benefit corporation. They are admittedly a lot smaller, and I'm sure there were a lot of situational details that made it possible, but they did it.


There are billion-dollar companies like Zoho and Atlassian that have been bootstrapped.


Atlassian and Zoho are B2B enterprise shops. Not unicorns, and not facing the same bad incentives as consumer-facing companies.


A unicorn is a startup that has gone on to become worth at least $1 billion. That's it. B2B is not relevant to that classification.


Publicly held corporations won't make such a choice. The shareholders demand value over everything.


Purism (the makers of Librem hardware) are a "Social Purpose Corporation": https://en.wikipedia.org/wiki/Social_purpose_corporation

Which I believe both can be publicly held, and as stated in Florida, shareholders can actually go after them for failing to create a public benefit.

Perhaps in the future, we should be hesitant to ascribe positive social traits to corporations which don't have this in-built motivation.


Corporate management defines that value.


Because IE6, Outlook-Web/Hotmail and Windows Mobile were so much better, right? I understand your POV, but this has nothing to do with Google specifically. Nothing has changed structurally since 10 years ago, the game is the same as it always was. The Internet will detect damage and route around it, as usual.


The brain damage caused by Outlook/Exchange and IE remains with us. The brokenness and complexity don't go away; they slowly grow, making it increasingly difficult for decentralized communities, and increasingly easier for large corporations, to take the reins.

In many respects the Internet doesn't route around damage so much as accumulate damage. And that's partly the result of apologists within the engineering community endlessly excusing Google (and more recently Mozilla), as well as increasingly greater numbers of engineers having no memory or even conception of running fundamental services independently. As the base layers become increasingly complex economies of scale increasingly prefer centralization, both in design and especially implementation.

There will always be competition. In many respects Cloudflare is a breath of fresh air, pushing against the stampede to AWS. Likewise, Mozilla vis-à-vis Chrome. But there are certain interests that large players will always share to the detriment of small organizations and individuals. DoH and DoT embedded in the browser intrinsically favor large organizations, and specifically Google and Cloudflare. And why do they promote DoH and DoT in the browser rather than promoting it at the system level (i.e. in Windows, systemd, etc.)? Because it's (a) less costly for them and (b) furthers their competitive commercial interests in their battle with other large organizations.

One reads endless apologia on HN about how people would rather send their DNS requests to Cloudflare and Google than to their ISP, as if those were the only realistic options, and no matter that doing so doesn't require delegating even more functionality to the browser oligopoly. They've already given up. They see their own role as choosing sides as consumers rather than actively participating as producers of these technologies, which is so utterly depressing from the perspectives of preserving an open internet and promoting free software. (For example, web developers are fine with DoH because they don't see how it affects their own server-side and client-side development. It doesn't matter to them that it would make it increasingly difficult to independently implement and maintain the software and systems--they're just consumers. QUIC? Not their concern--that's the responsibility of nginx and node.js, no matter that those projects likely never could have been founded outside large corporations if they first had to climb the steep hill to HTTP/3 and QUIC.)

How do we stop this? By keeping an eye on long-term goals. While IPv6 and DNSSEC are more complex than some alternatives, from a long-term perspective they minimize overall complexity and substantially preserve independence, such as it is. Some options are even more clear cut: push DoH and DoT into the system resolver (as OpenBSD is doing, and as projects like unbound and even, IIRC, systemd already support).


Regarding DNS forwarding... I'm not far off giving up myself. We have DNS servers acting as recursive servers across the enterprise, and there's a bikeshed every few months about how it needs to forward to Google. Multiple external consultants have come in and explained that having a 1,500-user enterprise refer to our own DNS servers will be just dog slow, because actually returning a cached result is apparently something only Google and Cloudflare can do.

I'm tired. I want to go make some changes that improve things. I expect I'll move to a forwarding configuration just so people can consider it a "win" and I can move on to something else.


> While IPv6 and DNSSEC are more complex than some alternatives

I'm not sure what you are getting at. IPv6 is pretty similar to IPv4 - and in some ways simpler - and I'm not aware of any other alternative. And I don't really see how DNSSEC (or IPv6) relates to anything else you are talking about - DoH addresses a different use case.


I intended them as general examples of people pushing against certain kinds of change in favor of simpler short-term options that favor or preserve centralization and complexity. I agree IPv6 does make many things simpler, but moving to IPv6 can be difficult and introduces some additional complexity (e.g. link-local addresses, shift away from DHCP, etc). The argument against IPv6 is that NAT is simpler if only because NAT is already the status quo, and we already have "solutions" to deal with the limits of small address spaces. How many people use Cilium with IPv6 transport instead of UDP over IPv4 over VXLAN?[1]

Local DNSSEC doesn't solve ISP confidentiality but it does prevent NXDOMAIN substitutions, which is one of the biggest immediate benefits for browser-based, default-enabled DoH/DoT. And one argument for doing DoH/DoT to centralized servers rather than locally is that it partly obviates the need for DNSSEC--it's "simpler" because it doesn't depend on orchestrating larger infrastructure changes. (I guess the logic is that without DNSSEC centralization is inevitable anyhow, and if people are convinced that ubiquitous DNSSEC will never come then doing DoH/DoT in the browser vs at the system resolver is a distinction without a difference. A poor argument but implicit nonetheless.)

[1] At work I suggested Cilium with IPv6 and IPsec for policy and LAN confidentiality of K8S clusters but everybody thought I was nuts and favored Cilium's UDP+IPv4+VXLAN mode and transparent Istio proxies for automagic mutual TLS. Why? Because it's what they know and understand and they can reasonably expect everybody else will make the same judgment. It's ultimately a much more complex, error prone, less performant, and less flexible approach but it's the path of least resistance. And in the long-term the complexity will mean proprietary managed solutions will prove that much more enticing, an irony apparently invisible to people who believe K8S will keep them independent of AWS, Azure, etc.


I don't have much to say regarding your work issue with IPv6 and a K8S cluster. However, I do have some comments regarding DNSSEC:

As I understand it, you have basically 3 options for local DNS w/ DNSSEC: 1) you can run a local recursive resolver, such as BIND; 2) you can use a non-validating stub resolver, which requires that you trust the recursive resolver you use _and_ the channel to the resolver; or 3) you can use a validating stub resolver. The problem with option 3 is that I don't believe it's possible to do DNSSEC validation for a record without fetching a bunch of parent records - so you end up running something that looks a lot like a recursive resolver with a cache, so it's really not clear to me why someone would do this over option 1.

Anyway, that basically leaves you with two options: a recursive DNS server, or you have to trust the recursive server that you do use and the channel to the server. While it's fine to run your own recursive server if you want to, there are legit reasons someone (such as myself) wouldn't want to - it's another service to manage and it may not perform as well as a shared cache in a recursive server someone else is maintaining. What that leaves me with is that I want to use a non-validating stub resolver, and I suspect that is true for most people as well.

Unfortunately, as DNS traffic is not encrypted, when I use a non-validating stub resolver anyone that can eavesdrop on my traffic can log where I'm going. Anyone that can modify traffic can send me whatever responses they want. It doesn't matter if the domain I request is protected by DNSSEC if someone in the middle sends me back a bogus response that my resolver has no way to verify.

DoH actually works alongside DNSSEC in some situations - by encrypting the channel between my stub resolver and a validating recursive resolver, it means that DNSSEC validation can't be easily stripped out by someone that controls whatever local network I'm connected to.
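
You can see both halves of that from the command line. Here's a rough sketch using Cloudflare's public DoH JSON endpoint purely as an example; the "AD" field in the JSON response is the resolver telling you it performed DNSSEC validation for the answer:

    # fetch an A record over DoH: the channel is TLS, and "AD": true in the
    # response means the upstream recursive resolver validated the answer
    curl -s -H 'accept: application/dns-json' \
      'https://cloudflare-dns.com/dns-query?name=example.com&type=A'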

Even with a local validating recursive resolver, you have little protection against NXDOMAIN substitutions and the like unless the domain you are talking to has DNSSEC enabled. However, DNSSEC is being slowly deployed and, in my opinion, it's very unclear if it will ever get to 100%. Unless it does, something like DoH still has value in protecting users from interference on the local network for non-DNSSEC domains.

My points: I don't think that a Local DNSSEC resolver running on everyone's computer is realistic; DoH actually benefits DNSSEC by securing the channel between a stub resolver and a validating recursive resolver.

Sources:

https://tools.ietf.org/html/rfc4033#page-12

https://lwn.net/Articles/665055/


1) You don't need to personally maintain your local resolver. The OS can do this for you (see the sketch below). Linux+systemd and OpenBSD already come with this out-of-the-box, and macOS and Windows could feasibly do this as well. (IIRC some macOS services already rely on an internal caching resolver, anyhow.) OpenBSD 6.5 almost shipped with DNSSEC verification enabled by default but they decided at the last moment that it still caused a few too many headaches.

2) DNSSEC isn't a panacea and there are absolutely problems with depending on it. And of course it doesn't directly address the confidentiality issue, either. My point is simply that too many technically-oriented people are willing to throw up their hands; they'll give centralized DoH/DoT to Google and Cloudflare a pass as a fait accompli but endlessly bikeshed the downsides to DNSSEC.
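
On point 1, here's a minimal sketch of what "the OS does it for you" can look like on a Linux distribution that ships systemd-resolved (paths and the drop-in file name are just illustrative):

    # turn on DNSSEC validation in the system stub resolver
    sudo mkdir -p /etc/systemd/resolved.conf.d
    printf '[Resolve]\nDNSSEC=yes\n' | sudo tee /etc/systemd/resolved.conf.d/dnssec.conf
    sudo systemctl restart systemd-resolved
    # a validated lookup should report the data as authenticated
    resolvectl query example.com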

In truth DNS confidentiality is a fundamentally difficult problem because DNS is fundamentally centralized, as would be any system reliant on recognizable identifiers for its namespacing. And while we can maybe trust Cloudflare and Google today, that can easily change, and there's no going back once we centralize even further--as another poster described, 8.8.8.8 is ubiquitous enough at this point that it has set expectations for actual resolution behavior. It's not DoH or DoT that is the problem, per se, it's the fact that it will be enabled by default and be performed by the browser directly to Google and Cloudflare.

Authentication is solvable, however, and while DNSSEC usage is more difficult to orchestrate, the result would be preferable long-term compared to making Google and Cloudflare yet another point of centralization. Even if you think DNSSEC sucks, it represents at this very moment a stark choice between a whole universe of sucky options in the future versus the preservation of our ability to pursue better options down the road, independent of its specific merits. Would you prefer DNSCurve over DNSSEC? Once they broker most DNS traffic, both Google and Cloudflare will have an even stronger incentive to promote the most baroque alternatives, because additional complexity favors the large incumbents, just as large corporations, once regulation becomes inevitable, will lobby for particularly complex regulations requiring particularly costly compliance as a barrier to competition.

And, again, it doesn't require you to necessarily do anything. You don't need to be a mechanic to understand the value in preserving the viability of being a mechanic.


The main challenge I see in what you are advocating is that it's not clear to me how to avoid having a large, centralized recursive resolver such as 8.8.8.8 or 1.1.1.1, without increasing DNS query time or having to deal with badly behaved ISP resolvers.

* We've tried having ISPs run recursive resolvers - and many ISPs demonstrated that they either ran slow servers, servers that sent back faked responses, or both.

* DNSSEC lets you validate a response, thus getting rid of faked responses. However, even if the OS makes running a local DNSSEC-validating resolver easy, I don't see how that can be made fast. In order to validate any response, it has to be validated up through the root, which requires more requests. Caching helps, but it still won't be as fast as a well implemented shared cache.

* It seems like it would be possible for a central cache to return all of the signatures up through the root in response to a request, to allow for local verification. That would solve the extra-requests issue while ensuring that all responses are genuine. So, even if an ISP wanted to send back bad responses, they couldn't. However, it's my understanding that there are real advantages to keeping DNS requests and responses below ~1,280 bytes to fit inside single transport frames - having to send all of that extra data seems like it would make it hard to do that. Since DNSSEC uses RSA keys, which are huge, that would probably be impossible, but even with EC keys, it might be tricky. Also, it doesn't really address the very real issue that some ISPs have done a poor job by running slow DNS servers.

* Any solution that doesn't involve some sort of centralized cache, is going to have a hard time keeping up with 8.8.8.8 or 1.1.1.1 performance wise.

So, while I value decentralized solutions, it just feels like it's a very difficult proposition for a decentralized solution to win when it's at a performance disadvantage.
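
For what it's worth, dig's reported query time is a crude way to eyeball that gap yourself (resolver addresses here are only examples; 127.0.0.1 assumes you're running a local resolver):

    # compare resolver latency; run it twice to see the effect of a warm cache
    for r in 8.8.8.8 1.1.1.1 127.0.0.1; do
      echo "resolver $r:"
      dig @$r example.com +noall +stats | grep 'Query time'
    done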


Hi, full disclosure I'm the founder of a company providing AMP services.

You can hate and have valid arguments against AMP, but... on this article, in particular, where is the data?

How can people comment and form an opinion based on basically nothing?

Am I missing the "before and after" links with the benchmarks?


(Author here)

You can run the test on the AMP and non-AMP versions of an identical article to see for yourself.

(Note: sometimes the numbers are off the first couple of times you run tests. If you run them multiple times, the AMP articles tend to score (sometimes significantly) lower.)

Non-AMP: https://unlikekinds.com/article/google-amp-page-speed (Results: https://imgur.com/OVpdwyh)

AMP: https://unlikekinds.com/amp/article/google-amp-page-speed (Results: https://imgur.com/I3ha7Gi)

Edit: More data here: https://news.ycombinator.com/item?id=19630846


The AMP version gives me better results: https://www.dropbox.com/s/g4jchw76sh9x49k/Screenshot%202019-...

Non AMP version:https://www.dropbox.com/s/lclqqdbdliuofjf/Screenshot%202019-...

My email is on my profile if you want to talk about this. Maybe we can help.

Edit: If you compare against the cached version, the version that your mobile users are going to hit, the results are much much better: https://www.dropbox.com/s/c2y5akqclcim7tm/Screenshot%202019-...


I had run it a few times before until I felt it wasn't a coincidence

But just now I ran it until I got bored of running it, alternating between amp/non-amp, chrome on Windows 10 (normally I'm Chromium on Fedora, but let's try mainstream)

AMP: 75 90 91 91 91 89 80 91 (avg 87.25)

Non-AMP: 95 95 96 96 95 94 96 (avg 95.29)

So I feel pretty confident about that. Also appears more consistent (and apparently faster than the amp cached one somehow - probably because it wasn't prefetched by Google)


> My email is on my profile

The 'Email' field in the profile is not visible to other users. You should add it to the 'About' field.


Updated. Thank you.


The numbers are always going to be different for different people based on your geographic location and provider, and whether amp or your CDN has closer servers.


PageSpeed and Lighthouse measure the site load and mark you down for things they think you could fix. For a head to head comparison like this, WebPageTest is often a better tool:

non-amp: https://www.webpagetest.org/result/190411_1B_7c5b8e0d2f067ad...

amp: https://www.webpagetest.org/result/190411_RF_776066ee3b0226f...

Your site does pretty similarly with non-AMP and AMP. On the median of 9 runs PLT and fully loaded time are better on AMP, but speed index and time to interactive are better on non-AMP. Digging into it more, the time difference is completely due to how long it takes your site to serve HTML, and the biggest thing you could do to speed your site up would be making the server return the HTML sooner, perhaps by adding a cache.

Testing with curl, your site takes 880ms on average to serve the non-AMP page, but 1120ms to serve the AMP page. Graph: https://i.imgur.com/BlWqSoo.png

That's not an AMP thing, that's a your-serving-stack thing.

(Disclosure: I work at Google, and used to work on mod_pagespeed)


(Edit: another thought - maybe your curl testing took longer because of all the extra inline CSS in the download, and writing it to disk if that's what you did; that inline CSS is an AMP requirement. But did your curling pull down the external CSS/JS for non-AMP?)

According to the webpagetest.org results you linked, which ran the test 9 times on each page, the mean time to first byte for the AMP page is 1005, and for non-AMP it's 989.

Which is just 16 ms.

But the time to visually complete for example on non-AMP is 1955, and on AMP it's 2166.

Which is a difference of 211 ms (which, percentage-wise, is almost 10x bigger, I think; I'm not super mathy).

This implies to me that the problem is AMP not the server.

But if your data is correct then perhaps something is up.

The thing is, I know how the site is coded. The AMP version mostly just excludes things. Although it does have to include the CSS inline (the fragment is cached) rather than just link to a stylesheet.

But here's the thing, even if that is the problem, the only reason it's like that is because that's what AMP requires.

Maybe there's changes we could make on the server, but according to this webpagetest site, the problem doesn't appear to be the server nearly as much as the rest of the render process.


For the curl testing I did:

    # time 9 fetches of the non-AMP and AMP versions and print the wall-clock times
    for prefix in article amp/article; do
      echo $(for i in {1..9} ; do
        (time curl -sS https://unlikekinds.com/$prefix/google-amp-page-speed > \
         /dev/null) 2>&1 | grep real | awk '{print $2}'
      done)
    done
Do you see the same results?

I shouldn't have focused so much on the difference between the AMP and non-AMP times, though. The main thing I wanted to point out is that if your server is taking almost a second to return html there's a lot you could do to improve pageloads right there.


I hear ya. It's rails running (with its database) on a single small VPS, so I'm pretty pleased with how it's running, particularly given the load yesterday.

And thanks for sort of fact checking, for a second I thought whoa, that could be it, all on the server side.

Don't think it is though. I might set up some performance testing on a local copy to check the difference between the two just to be sure (especially because the article ended up getting a lot of attention.)

That's some pretty badass shell scripting, I'll give it a try in a bit.


If you put varnish or something in front of rails to cache your responses you could probably cut 400-600ms off your loading time.
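
Something like this is often enough to start with (a rough sketch, assuming Rails listens on 127.0.0.1:3000 and your responses are cacheable; a real setup would want a proper VCL and cache headers):

    # put Varnish on port 80 in front of the Rails app with an in-memory cache
    varnishd -a :80 -b 127.0.0.1:3000 -s malloc,256m
    # then watch the hit counter to confirm responses are actually being cached
    varnishstat -1 -f MAIN.cache_hit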


I got the same score on both. And yes, I agree that if you hand optimize your site, you can achieve speeds as good as AMP, but the point of AMP is to standardize it and force bloat off of these pages.


See here for more data: https://news.ycombinator.com/item?id=19630846

I realise it isn't scientific but I think it's pretty clear


AMP pages are bloated. You don’t see that because most of the JS and some data is preloaded while you search.

It’s trivial to make a site that’s more performant than AMP. But then Google gives exclusive preference to AMP by hosting it in its cache, preloading assets, etc. And by penalizing your website.


It's unfortunate that you're being downvoted for asking for data. If there is in fact a configuration where the AMP page is slower, then it's either a bug or there's something very wrong with AMP. But either way there's no way for us readers to tell without seeing the actual page.


Thank you, I appreciate that.

I love technology and love to learn. When something isn't working as it's supposed to, I want to learn why.

I provided links where it shows that all these claims are not actually true.

But if there is something wrong with my methodology, I want to learn about that as well!


This is not the first article I've read about this very same thing. One was by a very well known author. Unfortunately, I can't provide the link data for any of that, but I'm sure you can Google for it; I don't think it will show up in AMP, though.


I am wondering when the EU will finally give this BS a fine.


The irony is that the web version of google news on mobile is horrendously slow. Hackernews seems to have good performance though, maybe they can take some design cues.


AMP has totally ruined the internet for desktop users. ~90% of AMP sites don't even provide a link to the full desktop version of the page (unless you want to dig through the source code of the page, but even then it is hit and miss), and AMP sites are linked to more and more on social media sites like reddit.


I think the most fundamental issue here is that organisations are rebuilding their websites with AMP to align to Google search, when Google search is not the only search product available (much as you could believe otherwise from the discussion!).

How did all the work this company did add any value for someone using DuckDuckGo, Bing, Ecosia, etc considering the site was already fast?

This type of activity is a bad outcome.


Bing also preloads AMP pages, so this work would add value for Bing users. https://blogs.bing.com/Webmaster-Blog/September-2018/Introdu...

Same with Baidu. https://9to5google.com/2017/03/07/accelerated-mobile-pages-e...


Google isn’t the web. To make the point that they’re dictating web standards is like saying that walmart makes people sick by only placing unhealthy products on their end caps. Like it or not, it is Google’s store and if you don’t like it, sell your wares in the Whole Foods down the street (duck duck go) or find a more creative channel than SEO (Instagram, podcasts, Twitter, etc).


Google isn’t the web like the banking system isn’t the only way to move money - you can do it, but it’s really really difficult and if you have business ambitions it’s almost certainly the wrong choice to avoid it.


I don’t think Ben Thompson (stratechery.com) spends too much time worrying about SEO and Google. He built a solid reputation with excellent ideas, and he did this strange model of getting people to pay money for stuff: over 3,000 people pay him $100 a year to read his content.


google.com is 100% the most powerful, influential site existing by a wide margin. Can you say that about Whole Foods or Walmart? No, they're just another shop in town.

Google IS the web.


Soon enough Google will face the fate of Ma Bell


Not so sure. We live in a very different time, and appetites for breaking up very large companies with dominant market positions in the US seemed to die with the Clinton administration’s failed attempt on Microsoft.


It's not just appetites. The laws themselves have been reinterpreted, and the AT&T breakup happened right as it was going on - it might have not happened 10 years later.

https://www.theamericanconservative.com/articles/robert-bork...


Times have changed. It’ll never happen as it creates risks of a Chinese competitor taking hold, and the US government doesn’t want that.


Be split up only to later recombine into a larger entity?


...at tremendous profit to executives and bankers. All of the resources that various investors sank into building "competing" infrastructure, consumed through strong-armed acquisitions as everyone slowly realized that the FCC was just as opposed to competition as it always has been.


Not sure, I can't get a feel for society as a whole - whether the apathy is actually gone or it's just those pockets of "I want to be angry" still kicking.


Practically, they probably weren't faster than their AMP page on mobile as a search result, because of preloading and Google's CDN serving of the cached AMP page.


That’s the sales pitch but I usually notice AMP pages because they take so long to render at all, waiting for a ton of JavaScript to load before anything displays. Mobile caches are small and unless you hit Google a lot in the same browser you see those delays frequently compared to sites which follow other Google teams’ recommendations for performance.


Can someone explain how an AMP page would possibly be slower?

Especially given Google's CDN and the fact it's likely to be preloaded in practice?

I find myself wondering if this isn't just some artifact of the Chrome tool being used to measure performance -- especially since that tool reports an overall score like "94", not an actual speed -- and the weightings it uses [1] could be different from what AMP is designed specifically to speed up. Also it would have to be measured across a wide variety of locations worldwide, etc., certainly if the author's webserver is in their own city, for example.

A lot more data, measured rigorously, would be needed to prove that AMP is actually slower in practice -- it's a bold claim.

[1] https://developers.google.com/web/tools/lighthouse/scoring


Unless things have changed, you can't think of it as a real CDN. The Google engineer assigned to our AMP launch told us to think of it as a proxy without actually calling it that. Our testing showed that to be an accurate description of how it worked in practice.


This link on the page is dead (404): https://unlikekinds.com/article/how-google-is-creating-the-o...

Edit, apparently it should be linked from here, as the title and url of the article has changed: http://unlikekinds.com/t/how-google-is-creating-the-one-page...


Thanks! Fixed :)


Thus far, this whole AMP business seems like Google's tech-tipped thrust to control information (apart from their other methods thus far).


It's definitely control, but it's almost a lesser of three evils - it's no coincidence that as a large web publisher at the time, we were approached by Google, Facebook, and Apple at almost the same time and asked to deliver content for their respective walled gardens.

I'd happily choose the web over the other two alternatives. Sure it'd be great to have an open standard achieving the same thing as AMP, but it's 2019 and we haven't seen it yet.


I just think they are going about it in a pushy yet guarded way. I'd prefer good architects over good architecture as it breeds creativity/variety


Why the hell does AMP and AMP alone get the privilege of hijacking chrome mobile's menu bar and hiding it from the user until we scroll all the way up to the top of the page and then drag it back down? It's like they've never heard of tabbed browsing before.


Anybody can do it. You better have a good reason though. https://developers.google.com/web/fundamentals/native-hardwa...


"Why the hell use Chrome?" : a corollary, no?


Good tab management, to be honest. Other mobile browsers like Firefox and the Adblock browser have issues where swiping the tab away only works in a specific direction or in a very straight line, or is annoyingly difficult when the device is in landscape mode because you have to swipe twice as far for some reason, or the UI for opening the tabs page is inconvenient.


All good points. But not sure why "unlike kinds" should actually be featured - let's be honest, you're not exactly a household name in publishing.

It's hard enough getting a major publishing brand accepted by Google as a news publisher.


I personally have a choice about AMP pages -- I ignore them. AMP blows, and if I can't get a non-AMP version of a page, that page doesn't exist for me.


Google AMP needs to die...


Why on earth do we look to this bizarre for technical capability?


People say AMP is disgusting, but I say that the alternative is even worse. Just go visit delfi.ee, postimees.ee, õhtuleht.ee and you'll see how damn bloated sites become if there's no market pressure toward leanness.


This is true, but consider an alternate history where Google cared more about web performance than pushing sites to adopt their proprietary JavaScript toolkit: search rankings would simply incorporate page performance/size measured in real browsers, all of those sites would have a strong incentive to reduce page weight, and they could load faster without needing 100KB of AMP JavaScript to load successfully before they start rendering.


I'd be glad if that alternate history were the case, but currently AMP seems to be the lesser evil. AMP doesn't make my laptop's fans spin up.


I use Firefox’s content blocking and haven’t had that happen in years except on new installs where I forgot to disable WebM.


I don't use Firefox because my hardware is super weak and hardware acceleration is a must for most things for me - FF doesn't do that.


For any other uninformed readers of this comment: .ee is the internet country code top-level domain (ccTLD) of Estonia, operated by the Estonian Internet Foundation.

(I had to look it up too)


Why on earth would you use Google's sandboxes?


Google/Alphabet needs to be broken up.


So, kill Google News. Who cares about it anyway, and if anyone does, I just don't get why. Google's platform stinks more than anything Microsoft ever did.


Google News is a very useful tool for a lot of people trying to find coverage from sources other than the select few that they might often frequent. I use it to find how international articles affect my country, my area, etc. I also use it to find articles in other languages.

Aside from that, Google News content is heavily integrated into the normal Google Search pages. It's not simply a matter of ignoring it; Google makes their Google News results some of the top results for searches for current events.


Yes, it certainly is a matter of ignoring it. Not ignoring that stuff only gives it credibility, just as we gave Microsoft products credibility.


If you think the majority of people are going to ignore what Google posts as the most relevant search result, I don’t think there’s anything more I can say to dissuade you.


From the article:

>... the position on Google’s results that can literally mean the difference between a failed business and one that makes millions of dollars

looks to me like your business is the tail, and here you are trying to wag the dog.


I think breaking Google's monopoly over search is the biggest challenge that nobody is trying to solve... Maybe nobody knows how to, maybe nobody's willing to invest in such an endeavor, but I hope people will realize soon that the cost of doing nothing is becoming much higher than the cost of trying... And yeah, I know there's DuckDuckGo, but we need at least 2 or 3 much better DuckDuckGos.


This article would be much more interesting if they showed that the AMP version of the page actually was slower than the non-AMP version. Instead, they just say that Google's Page Speed score is lower for the AMP version. But _nobody_ cares about the Page Speed score - it's just a rough proxy for user experience. The author of the article, however, doesn't do anything to look at actual user experience.


AMP is just a set of conventions and limitations that, when followed, make for a fast site. Anyone can make a fast site if they follow similar rules. Most sites don't do that because either the developers want to use something that's "nicer" to code but a lot bigger, or because the marketing department insists on loading 12 different tracking and analytics programs--when they probably only use one or two.


AMP is google flexing its control over the web.

If Google wanted to highlight fast/penalise slow sites, they could simply measure the load time of a site when they index it.

But that would only achieve their stated goals, it wouldn’t achieve their actual goals.


They could also just have mandated serving AMP with some random HTTP header, and the client would cache a whitelist of the AMP scripts if it got that header. Nothing in the AMP design requires an AMP server; it's an arbitrary limitation from Google to control the web.


No, they couldn't. That is browser and transport-dependent. There are some things even Google cannot do ....


I didn't say, "they could guarantee the time it takes to load for every user on every connection on every browser".

I said "they could simply measure the load time of a site when they index it." Their indexer, running from their servers could be their "reference point" for this content.


It feels like AMP is Google nailing its own coffin to me. It probably felt like a winning move when Bell and AT&T made people buy their own products to use telephone systems, but it led directly to their disruption by the DOJ. Even though some people at Google probably realize that, it won’t matter if they cash out beforehand, if working on a project gets them promoted, and if institutional inertia is in control.

Google is hurtling toward an antitrust case they won’t win, and it will really be all their fault.


Throwaway Google search engineer here. People here really do care about making the web faster and moving metrics, both because that is how the company is set up to reward employees but also because they believe it makes the product better. Google has been penalizing slow sites forever. It stopped moving the needle (I suspect because it isn’t marketable). Amp on the other hand really is working, metrics show a faster, smoother experience, and user studies have been positive. That’s why Google is doubling down so much with it. Not because they have goals to control content providers or wall off the web, but because it makes a dramatic difference on the whole.


So if the "only" goal of AMP is to make things faster, why isn't the carousel based purely on "your result must load in < X ms"? If a company can "force" other companies to adopt AMP, it can surely "force" them to improve load times on their own.

Also, if speed is their goal, why does the mandatory AMP 'boilerplate' include a CSS-driven 8 second delay before content is shown, that is removed if the client loads the AMP JS?

Oh right. I know the answer. It's to give the impression that blocking third-party resources (such as AMP JS) via e.g. a content blocker, won't make the site faster. Which as we know, is a load of shit.

You might have the best of intentions and donate your entire salary to homeless blind children - that doesn't mean for a second that I believe google's actual goals with AMP are anything less than exerting more control over the web for their own purposes.


> "they believe it makes the product better."

You should stop and do some serious research on why people don't like AMP. You're talking about the Web like it's a Google "product." You're hijacking the Web in a way that will eventually destroy it.

Publishers don't want their content restricted and hosted on Google's servers on Google's domain, but they are getting their arms twisted by reduced rankings if they don't implement it. They also don't fully understand the long term implications of what they are doing.


Sure, but one of those conventions and limitations is that the page is hosted by google. It's not something you can deploy on your own server. It's not as if it's just some list of recommendations that make sites faster - you're signing up to a Google service, and you're at their mercy from then on.


>It's not something you can deploy on your own server.

But you can. Often it doesn't matter because you aren't a globally distributed cache, but if you are cloudflare[1] (or apparently Microsoft[2]), you can and do.

[1]: https://amp.cloudflare.com/

[2]: https://github.com/ampproject/amphtml/blob/master/caches.jso...


(author here)

It's not just conventions and limitations.

For example, you can't use an img tag on an AMP page. That's invalid AMP. You have to use an amp-img tag, which is rendered client side with js.

Another example is with forms. It forces you to include the amp-form js.

If it were just best practice, I'd be far happier to go along with it. Kind of the point here is that we were already using best practice, because the site was super fast.


Images rendered client side WITH JAVASCRIPT???? If my employee did an AMP site for our company he'd immediately lose his job, no questions asked.


This allows AMP to skip preloading images below the fold on viewports of arbitrary size. If you fired somebody for doing something you don't understand without asking for an explanation, that's your problem.


Agreed, I prefer to open the AMP site because most of the time it simply loads faster.


I prefer to just use a content blocker to prevent the crap that slows down most sites from loading.

Fun fact, blocking the amp JS makes amp pages artificially slow because they hide everything in css by default.
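
You can see it in any AMP page's source: the mandatory amp-boilerplate style hides the body behind an 8-second animation that the AMP runtime is supposed to cancel once it loads. A rough check, using the article's AMP page as an example:

    # look for the 8s hide-the-body animation in the required boilerplate CSS
    curl -s https://unlikekinds.com/amp/article/google-amp-page-speed \
      | grep -o 'amp-start 8s[^}]*' | head -1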

AMP as a solution to slow websites is like a chainsaw as a solution to an itchy foot.


How is it working at Google? AMP breaks sites and makes it look like they're hosted elsewhere.

It makes sites take multiple minutes to display after they redirect to the original version.


Did you read the article (or even the headline)? Specifically the part about the site being faster before AMP?


Impossible. For a site loaded from a SERP, the page will be preloaded if it is AMP. If you are creating an AMP page to be accessed directly (like the author), you fundamentally don't understand the problem that AMP solves and are using it wrong. PEBKAC.


Which is in disagreement with the OP's statement:

AMP is just a set of conventions and limitations that, when followed, make for a fast site.

AMP is more than that. AMP is the CDN that allows Google.com to preload your web page, and sits as an intermediary between you and your users.


Anyone can run an AMP cache. Bing also serves AMP pages.


It is a disagreement with OP's statement. Neither OP nor article author understand the problem that AMP solves.

Also, AMP is not a CDN. A CDN is a component of an AMP preloading implementation.


I don't want Google to preload ANYTHING for me. Thanks.


If you go to the seminars held by Google, they suggest that in an ideal world your original pages would be AMP rather than having them separate.


True, that's what the article says. It's about control over users and data, not performance or UX.


It's about control of preloaded content, specifically. If you don't care about your content being easily preloaded from the SERP page, then sure, no need to use AMP. And if, as a user, you don't care about quick access to preloaded content (because e.g. you're on a low-latency connection), feel free to scroll down from the top-stories carousel.


Preloading is a waste of data on mobile. If you're on a metered connection then this costs more. Ranking sites by performance would already ensure a fast UX for users.

Also position on the SERP page is a major indicator of relevancy. Biasing results because they use AMP is unfair and inaccurate.


One point here: stop using Google Pagespeed Scores unless you're having serious speed issues that need correcting.

Google only uses this in an incredibly small way, yet I've seen SEO people at multiple companies hammering on it like it's some magical SEO juicer. It's not. Just don't be in the bottom 10% (Google uses a slice even smaller than 10%).

I like that Google wants the web to be faster, but if I could axe one product that causes a lot of angst, it's that damn pagespeed score.


"50kb is an absurdly small.... responsive queries so that your site looks great on mobile and desktop (and tablet and landscape and Android and iPhone.)"

Why would I need CSS for iPhone? I don't have an iPhone. Also I'm now browsing from my laptop, so I don't need mobile specific CSS either. Isn't there a way for the responsive site to load CSS specific to the device/display?


No. Especially not with AMP, because Google sits between you and your user so you never get to know what device they are using.

In theory on your own site you could use user agent sniffing to work out what CSS to send, but it's still not a great idea because you'll be wrong at least some of the time.
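
If you're curious whether a given site does this at all, a crude client-side probe is to fetch the same URL with different User-Agent headers and compare what comes back (purely illustrative; the URL is the article's, and most sites will return identical bytes either way):

    # compare response sizes for two spoofed User-Agent strings
    for ua in "iPhone" "Windows NT 10.0"; do
      curl -s -A "Mozilla/5.0 ($ua)" \
        https://unlikekinds.com/article/google-amp-page-speed | wc -c
    done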


User agent sniffing is how we ended up with things like "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36"


Hey, you weren't kidding. Mine is

  Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1 Safari/605.1.15


"you'll be wrong at least some of the time."

I think the adoption rate for user agent spoofers could be a related metric, approximating the value of "some of the time".

my user agent rotates randomly, or when the spoofer button is clicked


You can sort of, but you'd need your own server. Google has control over this and I'm not sure if they permit different content for each device. They probably don't.


Google today is a single gateway where businesses go to sell and users go to do anything. It's a monopoly power. Nothing will change until there are better alternatives. Bing is big but stupid. DDG is too small. Apple, Amazon, Facebook, CDNs should all create good search engines, and allow SE aggregators to live.


Until Google allows me to block whole swaths of IPs in an effort to lower the spam getting into my Inbox, I don't care much about other upgrades.

For a few months now it seems to me that Sparkpost is a favorite choice of spammers. Perhaps it's the low cost, or maybe it's because they never ever answered a single spam@sparkpost.com email I sent forwarding abusers' content. None, zero, nada! I can report someone... silence... and then 3 days later a new letter pops up from a new domain (but the same IP) at Sparkpost.

That's why I need to be able to block whole IP ranges.


> Then there’s users begging Google to allow them to use more than 50kb of CSS. Yes, most site’s CSS is bloated. But 50kb is an absurdly small, arbitrary limit. Stylesheets these days handle resets for normalising behaviour between browsers, grid systems so you can lay things out without resorting to murder-suicide, and responsive queries so that your site looks great on mobile and desktop (and tablet and landscape and Android and iPhone.) These essential components will take you a decent way to to the 50kb already.

As far as I know, the 50KB limit is per page. I can't see why a typical single page should require that much CSS where the CSS included is actively used on that page. If you're going to include e.g. the CSS for every Bootstrap component on every page you're easily going to go over the limit but the whole point of the limit is to discourage you from doing that.

The OP article should only require a modest amount of CSS I think, same for most pages. People here hate on AMP a lot, but "avoid excessive CSS" is a good guideline in my opinion.
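
A rough way to sanity-check that for any given AMP page, assuming the author CSS is inlined in a single <style amp-custom> block (which is how AMP normally requires it to ship):

    # count the bytes of inline author CSS on the article's AMP version
    curl -s https://unlikekinds.com/amp/article/google-amp-page-speed \
      | tr -d '\n' | grep -o '<style amp-custom>[^<]*' | wc -c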



