
Google Seeks to Pacify Consumers with Faster Mobile Pages - corneliusjac
http://www.bloomberg.com/news/articles/2016-02-24/google-seeks-to-pacify-consumers-with-faster-mobile-pages
======
toddmorey
There's a lot of Google negativity here (perhaps not without reason), but when
you view the AMP spec and what it tries to solve for, it feels like a smart
move to help broaden the adoption of sensible but poorly understood
optimizations:
[https://www.ampproject.org/docs/get_started/technical_overvi...](https://www.ampproject.org/docs/get_started/technical_overview.html#dont-let-extension-mechanisms-block-rendering)

Those optimizations are worth looking over even if you don't plan on adopting
AMP.
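
One of the first things those guidelines push, for example, is never letting a
third-party script block rendering, and that idea is easy to apply even outside
AMP. A rough sketch of the general pattern (the function name and script URL
below are just placeholders):

    // Hypothetical sketch, not part of AMP itself: load a third-party script
    // without letting it block parsing or rendering.
    function loadScriptAsync(src: string): void {
      const s = document.createElement("script");
      s.src = src;
      s.async = true; // fetched and executed without blocking the parser
      document.head.appendChild(s);
    }

    loadScriptAsync("https://example.com/widget.js"); // placeholder URL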

I do think that rather than requiring direct use of AMP for specialized search
placement, Google should really just measure site performance and ensure your
site meets the same benchmark, no matter how you choose to optimize.

Google may not be a perfect ally of the open web, but I like this approach
much more than what I've seen from Facebook.

~~~
pcwalton
Some of those guidelines (avoid style recalculation and use only "GPU-
accelerated" animations) are specific to Chrome (and to other browsers of
today). Style recalculation is needlessly slow today due to the lack of
parallelism and typed CSSOM. And there's no such thing as a "GPU-accelerated"
animation, either in the spec or technically: all animations can be run on the
GPU with minimal state changes. It's just that browsers found it easiest to
only optimize a small subset of cases (in the case of CSS animations, _really_
small--transform and opacity), so as to keep around their legacy, originally-
CPU-based, painting infrastructure. It's good that they did that, because it
let us get to _some_ degree of graphics acceleration quicker, but we can do so
much better.
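
To make the current state of things concrete, here's a rough sketch (using the
Web Animations API; the element id is made up) of the split browsers enforce
today:

    // Sketch of today's behavior, not how it has to be. Element id is made up.
    const el = document.getElementById("box")!;

    // Browsers typically run this on the compositor ("GPU-accelerated"):
    el.animate(
      [{ transform: "translateX(0)", opacity: 1 },
       { transform: "translateX(200px)", opacity: 0.5 }],
      { duration: 300, iterations: Infinity, direction: "alternate" }
    );

    // This one usually falls off the fast path: animating width forces layout
    // and repaint on every frame in current engines.
    el.animate(
      [{ width: "100px" }, { width: "300px" }],
      { duration: 300, iterations: Infinity, direction: "alternate" }
    );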

I'd prefer to just improve the browsers instead of putting the burden on Web
authors.

~~~
cromwellian
I think the browser specs are broken such that there will always be
pathological cases, and always cases where you're 'on the performance rails'.
End users don't care.

This is no different on any other platform. If you program games directly to
OpenGL, for example, you run into all kinds of different bottlenecks, both CPU
and GPU. Depending on the pipelines of the various GPUs, and what kinds of
hazards they have, there's stuff you have to avoid, or branching paths you
have to handle to get good performance. If you use SQL databases, not all
databases can handle all queries optimally.

I think it is asking too much of browsers to give them such a huge surface
area spec as HTML and CSS, and then ask them to optimize everything so that
there are no slow paths.

The practical reality is that there's always a subset of the specs known to be
fast and well supported, and developers just need to know what that is.

~~~
pcwalton
> I think the browser specs are broken such that there will always be
> pathological cases, and always cases where you're 'on the performance
> rails'. End users don't care.

In the case of GPU acceleration, the specs are not broken. There is no
fundamental reason why animations on arbitrary properties cannot be GPU
accelerated and some can be. Ask anyone who's worked with Scaleform whether
rapidly changing vector graphics (which is all these animations are) has to be
slow on mobile.

For style recalculation, maybe, maybe not; it's not clear to me that browsers
are worse than native app frameworks here, except that we're missing Typed
CSSOM, which is important spec work.

> This is no different on any other platform. If you program games directly to
> OpenGL, for example, you run into all kinds of different bottlenecks, both
> CPU and GPU. Depending on the pipelines of the various GPUs, and what kinds
> of hazards they have, there's stuff you have to avoid, or branching paths
> you have to handle to get good performance.

And game engine authors are fed up with it, which has led to Metal, DX12, and
Vulkan. It's not a good way to treat developers; we should try to give them
APIs in which, as much as possible, everything is fast. I think the Web APIs
can be that, if implemented optimally.

> I think it is asking too much of browsers to give them such a huge surface
> area spec as HTML and CSS, and then ask them to optimize everything so that
> there are no slow paths.

I'm not saying that we have to pretend everything in HTML and CSS can be made
_equally_ fast. My point is that, in many areas, browsers have done a bad job
of even trying to optimize broadly. Take animations, for example: the only
things browsers optimize are two properties out of hundreds, transform and
opacity. This is exceptionally problematic, given that there's no technical
reason why this has to be the case. I just gave a talk about this a few days
ago :)

~~~
sunnyps
I think you have a different idea of what GPU acceleration means here. When we
say that animating transform and opacity is fast, we mean that we can animate
them without rasterizing content over and over again, i.e. using composition
only. This is independent of how rasterization takes place. Rasterization
itself can be GPU accelerated; Chrome has started doing this on mobile [1].

Animating other CSS properties, such as font-size, requires us to rasterize
new content on every frame of the animation. That is the inherent reason why
it's slow. Again, we can use the GPU for rasterization, but we're still going
to do it every frame instead of caching the result in a texture that's then
composited over and over again.

[1]: [https://www.chromium.org/developers/design-documents/chromiu...](https://www.chromium.org/developers/design-documents/chromium-graphics/how-to-get-gpu-rasterization)
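
A rough illustration of the difference (the element id is made up):
approximating a font-size animation with a transform lets the compositor keep
reusing a cached texture instead of re-rasterizing the text each frame.

    // Re-rasterizes the text every frame: each font-size step needs new glyphs.
    const heading = document.getElementById("headline")!; // made-up id
    heading.animate(
      [{ fontSize: "16px" }, { fontSize: "24px" }],
      { duration: 250 }
    );

    // Composites a cached texture instead: only the transform changes per
    // frame (at the cost of scaled, slightly blurry text until it settles).
    heading.animate(
      [{ transform: "scale(1)" }, { transform: "scale(1.5)" }],
      { duration: 250 }
    );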

~~~
pcwalton
I know how it works. I wrote much of the layout and graphics code for Servo
and did the initial bringup of the compositor on Firefox Mobile. You shouldn't
have to rasterize new content on every frame when you animate font-size
(except for individual glyphs, but that's still quick, just a few ms, if you
parallelize it—and even with techniques like SDFs and/or GPU vector graphics
tricks like [1] you may not even need to do anything on the CPU). If you
animate, say, margin-left, you should in most cases be doing essentially no
work on the CPU, and you should be able to submit one draw call to repaint the
page since all of the resources (images, glyphs, alpha masks, SVGs) should be
cached in an atlas. It should take 1-2 ms. Even animating something like
border-top-width should require no resource changes and just one draw call,
exactly as fast as the compositor-only path.

Optimally implemented, I believe there should be no difference between
"rasterization" and "compositing". You're painting the same number of pixels,
paying the same fragment shading and ROP cost, either way. Caching fully
rendered Web content in textures, on the assumption that rasterizing that
content is expensive, isn't working very well, and I don't think it's worth it.

[1]: [http://wdobbie.com/post/gpu-text-rendering-with-vector-textu...](http://wdobbie.com/post/gpu-text-rendering-with-vector-textures/)

~~~
cromwellian
Does using a font atlas really deliver the same quality as fully hinted
TrueType fonts? Also, this demo doesn't really have any state changes in it.
Wouldn't a more realistic example be a UI that contains mainly different font
weights and sizes, to better gauge the effects of all of the pipeline state
changes?

I agree that the browser should be more like a game engine, and less like an
X11 buffered window system, in terms of just rasterizing fast enough to render
at 60fps. But it seems to me that the CSS spec contains a number of features
(rounded corners, soft shadows, etc.) that by themselves can be rendered fast
on a GPU but, cobbled together in a complex scene graph, might result in
stalls.

~~~
pcwalton
> Does using a font atlas really deliver the same quality as fully hinted
> TrueType fonts?

Hinting really isn't important on mobile or HiDPI. In fact, no version of OS X
or iOS does it. (I don't know about Android, but I wouldn't be surprised if it
doesn't on HiDPI screens.)

What's more interesting is antialiasing quality, especially subpixel AA.
That's something I think needs more investigation. But I'm cautiously
optimistic; there's been some really exciting work in GPU AA and vector
graphics lately :)

> Wouldn't a more realistic example be a UI that contains mainly different
> font weights and sizes, to better gauge the effects of all of the pipeline
> state changes?

Don't change state then. :)

State changes really shouldn't be necessary. If you set things up properly,
you should be able to render arbitrarily many glyphs with one draw call, bound
only by your texture atlas size.

> But it seems to me that the CSS spec contains a number of features (rounded
> corners, soft shadows, etc.) that by themselves can be rendered fast on a
> GPU but, cobbled together in a complex scene graph, might result in stalls.

They typically don't. The key is to render critical resources like that to a
texture atlas and to batch like resources together as you do so. (For example,
batch all box shadow pieces together into one draw call, batch all rounded
corners into another, etc.) This effectively places a small fixed upper bound
on the number of draw calls you issue. Then you can rerender the page for each
animation frame with a very small number of draw calls (frequently just one)
and a simple blitting shader. If you get your draw call/state change overhead
down, essentially nothing on the Web comes close to taxing a modern GPU; GPUs
are incredibly fast at rendering batches.
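
A very rough sketch of that batching idea (the types and names are made up for
illustration; this isn't Servo code):

    // Hypothetical sketch of the batching idea, not an actual engine's code.
    type ItemKind = "glyph" | "image" | "box-shadow" | "rounded-corner";

    interface DisplayItem {
      kind: ItemKind;
      atlasRect: [number, number, number, number]; // where the cached pixels live
      destRect: [number, number, number, number];  // where to composite them
    }

    // Group items by the cached resource they sample from, so a frame turns
    // into a handful of (ideally instanced) draw calls instead of one per item.
    function buildBatches(items: DisplayItem[]): Map<ItemKind, DisplayItem[]> {
      const batches = new Map<ItemKind, DisplayItem[]>();
      for (const item of items) {
        const batch = batches.get(item.kind) ?? [];
        batch.push(item);
        batches.set(item.kind, batch);
      }
      return batches;
    }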

My talk goes into more details, if you're curious:
[https://air.mozilla.org/bay-area-rust-meetup-february-2016/](https://air.mozilla.org/bay-area-rust-meetup-february-2016/)

~~~
cromwellian
Thanks for the link, I'll check it out.

What's the status of Servo BTW? I mean in terms of HTML/CSS spec completeness.

Edit: just saw the demo in the video, awesome job!

~~~
pcwalton
Lots of progress made, but still lots to do before we get to rendering most
Web sites. And thanks :)

------
stevenh
I've already gone to great lengths to optimize my mobile site. I am fully
confident that AMP cannot possibly make my site any faster. Every page on my
site loads via a _single_ HTTP request that transfers 10k of gzipped JS and
HTML all at once. Loading an additional script with an extra request for
another JS file is completely against my mobile design principles.

Should people like me start using AMP anyway to stay relevant in Google search
rankings?

~~~
sliverstorm
_He declined to comment about whether AMP pages would rank higher, though he
said some of the signals Google uses for search include whether a page is
mobile friendly and how rapidly it loads_

Showing only results that use AMP would be a very coarse cudgel to enforce
page speed. It is entirely within Google's ability to simply directly measure
page speed, and to my eyes that is the more sensible course of action, given
that they have the capability.

~~~
ska
You have to imagine someone there is already measuring this, if for no other
reason than to evaluate the effectiveness of their own approaches.

------
arohner
Shameless plug, but if you care about web performance, my startup,
[https://rasterize.io](https://rasterize.io), provides easy frontend
performance monitoring. It tells you how fast users load your site, broken
down by desktop vs. mobile and by geography. You can view most of the Chrome
DevTools waterfall information for every (modern) visitor.

~~~
untog
What's the page load overhead in adding this for each user?

~~~
arohner
It's an 8kB async-loaded JS script, and then it posts a small (1kB) amount of
data back to my servers. Overhead should be tiny, and it's async so it won't
block page rendering.
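
For the curious, a stripped-down sketch of what a snippet like this typically
does (the endpoint below is a placeholder, not our real collector):

    // Minimal RUM sketch, not the actual rasterize.io script.
    window.addEventListener("load", () => {
      const t = performance.timing; // Navigation Timing API
      const payload = JSON.stringify({
        dns: t.domainLookupEnd - t.domainLookupStart,
        connect: t.connectEnd - t.connectStart,
        ttfb: t.responseStart - t.requestStart,
        domComplete: t.domComplete - t.navigationStart,
      });
      // Placeholder endpoint; sendBeacon won't block unload or navigation.
      navigator.sendBeacon("https://collector.example.com/rum", payload);
    });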

~~~
billyhoffman
Looking at your source, you are using the Timing API to get RUM data. How is
your JS different from or better than just using Boomerang to get those
metrics?

------
matthewmacleod
I still think this is pointless nonsense. Anybody who is going to bother
implementing AMP will be equally capable of implementing well-optimised
standards-compliant web pages, with the bonus of not having to use this silly
restricted format.

------
techabuse
Ha, for a second I thought they were going to do something about the ad
saturation. uBlock and NoScript on Firefox for Android pacify this consumer
just fine.

------
chflags
Content alone loads faster than content plus ads.

Making one DNS request is faster than making several, including ones for
domains related to advertising.

There are the speed gains that Google can offer and then there are ones that
are under the user's control, which Google will never offer, unless they
change their business model.

Not requesting ads makes my user experience faster.

Experiment: disable JavaScript for Google searches and see what happens. Do
the ads still load?

------
shostack
I haven't seen much on the ad data side of this.

Please correct me if I'm wrong, but if a publisher switches to AMP, then all
of their ad serving data (and, by extension, impression data for all of their
visitors that Google is able to identify) essentially becomes Google's data.
Whereas before, if a publisher wasn't running DFP, AdSense, or GA, then Google
was essentially blind to their ad data.

So does this or does this not give Google that level of visibility into the ad
layer of a site? If so, that is a MAJOR strategic competitive data advantage
once lots of sites switch over, which they will be encouraged to do by the
higher mobile ranking that comes with getting their sites sped up.

~~~
ianlevesque
Okay, but what publisher isn't using DFP, AdSense, or GA? Have you seen the
media sites being targeted here? Take any one and watch the dozen or more 3rd
party domains it hits. Their data goes everywhere already, including to
Google.

~~~
shostack
I don't disagree about the ubiquity of DFP, AdSense and GA. But unpacking that
a bit further...DFP gives way richer data than can be had with AdSense or GA
if a publisher is running non-Google ads on their site. So Google has limited
visibility into anyone not using DFP. I don't have market data on the adoption
of DFP, so while I think it is a safe bet it is the major player still, I'm
not sure what % of sites do NOT use it.

And for anything not run through DFP and not on Google inventory, this would
now give them visibility into those ads, potentially bid data from the
headers, potentially audience data for anything passed in plaintext
(surprisingly common), etc. Or am I missing something in terms of the data
they would see?

------
ck2
Make up your mind, Bloomberg:

 _Google said [...] it will put websites built with its Accelerated Mobile
Pages [...] in the Top Stories section of a search results page_

VERSUS

 _AMP isn’t “a signal we use in ranking” pages, Besbris said. He declined to
comment about whether AMP pages would rank higher_

~~~
wanda
There is a difference between being number one in the standard results list
and having your page featured à la Google+/Google My Business pages.

I don't know the CTR data for these featured pages, but I think it's probably
significant.

------
arprocter
When an article about less web bloat has an auto-playing video clip...

------
sosborn
Not kicking me to a full-screen ad for their Gmail app every time I log in to
Gmail would be a nice start.

------
oconnore
You know what already makes pages load lightning fast?

Ad blocking.

~~~
jamesgeck0
One of AMP's goals seems to be ensuring that websites aren't noticeably slower
just because they have ads on them.

~~~
oconnore
Cool, if they also remove ads from AMP-served content, we will be at feature
parity!

------
dang
There are two articles about AMP on the front page right now, this one and
[https://news.ycombinator.com/item?id=11167428](https://news.ycombinator.com/item?id=11167428).
Which is better?

~~~
michaelmior
They both cover different aspects of AMP and there doesn't seem to be much
overlap, so I think both are OK.

~~~
dang
Ok, we won't merge them.

------
atdt
To use AMP you need to load JavaScript from the AMP CDN, which is operated by
Google. From
[https://github.com/ampproject/amphtml/blob/master/spec/amp-h...](https://github.com/ampproject/amphtml/blob/master/spec/amp-html-format.md#amp-runtime):

> The AMP runtime is loaded via the mandatory <script
> src="https://cdn.ampproject.org/v0.js"></script> tag in the AMP document
> <head>.

So you are handing over the security of your site and your users' privacy to
Google.

~~~
whyagaindavid
you can also host it yourself. See github downloads.

~~~
fiveoak
You could also use a subresource integrity check if you are worried about
security but still want to use Google's hosted version.
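
Roughly something like this (the hash is a placeholder, not the real digest,
and injecting the tag from script is just for illustration):

    // Sketch of SRI for the runtime script; the hash below is a placeholder.
    const s = document.createElement("script");
    s.src = "https://cdn.ampproject.org/v0.js";
    s.async = true;
    s.integrity = "sha384-PLACEHOLDER";
    s.crossOrigin = "anonymous"; // SRI on a cross-origin script needs CORS
    document.head.appendChild(s);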

