I was testing a scrolling carousel and couldn't figure out what was causing a stutter that appeared in an Android WebView, but not in the equivalent Chrome version on a Mac. It turned out that the desktop browser was doing an additional style recalc in the middle of a bunch of other animations. I ended up slotting in a document.body.getBoundingClientRect() call to force the recalc and make them behave equivalently. You might think this would cause layout thrashing, but it actually smoothed things out. Crazy but true.
Also, CSS animations can "cancel" in odd ways... if I tell an element to translateX from a to b, and then mid-animation tell it to actually go to c, most browsers will figure it out and do the right thing, some will "start over", and some will do wonky things with the speed (do they calculate the duration from the current position between a and b, or from a?).
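To make the scenario concrete, here's a minimal sketch (the class names `.to-b` and `.to-c` are illustrative). The CSS Transitions spec says a retargeted transition should continue from the current computed value, but as noted above, browsers haven't always agreed:

```css
.box {
  transform: translateX(0);       /* position a */
  transition: transform 1s ease;
}
.box.to-b { transform: translateX(100px); } /* start animating a -> b */
.box.to-c { transform: translateX(300px); } /* swapped in mid-flight: new target c */
```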
Don't get me wrong, CSS3 animations are great, but don't make the mistake of assuming they "just work", because they don't.
CSS3 animations are cool because they have the potential to be superior, but 3 out of 4 times it seems that the equivalent JS/$.animate is smoother and more consistent.
It's not the reality I want, but it's the one I've been shown.
Hopefully in a few more years we will see better and more consistent performance across all browsers, but we aren't quite there yet with CSS.
I urge web developers to consider the trade-offs when using CSS hardware acceleration. Hints like `will-change` or hacks like `translate3d()` substantially increase the client device's energy usage. This has battery-life implications for users on portable devices such as laptops, tablets, and mobile phones. As with any other feature, ask "why" before "how" when prioritizing interactive motion in your webapp.
More info: https://dev.opera.com/articles/css-will-change-property/
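One way to limit that energy cost, in the spirit of the Opera article, is to scope the hint to the interaction instead of leaving it on permanently. A sketch (selectors are illustrative):

```css
/* Promote the element only while the user is likely to trigger the animation */
.menu:hover .flyout,
.menu:focus-within .flyout {
  will-change: transform, opacity;
}

/* Avoid this: a permanent hint keeps the layer (and the GPU) alive forever */
/* .flyout { will-change: transform; } */
```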
They also made the font blurry on Chrome on Windows prior to DirectWrite (it's moot now, though personally I'll say that I hate the use of DirectAnything for simple font rendering).
*, *::before, *::after {
  transition: none !important;
  animation: none !important;
}
I used this in the past when I wanted to play 2048 as fast as possible, also resetting border-radius, text-shadow, box-shadow, text-rendering, image-rendering, and background-image. It really made quite a difference :]
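Spelled out, a reset along those lines might look like this (the exact values are guesses at what the commenter used):

```css
*, *::before, *::after {
  transition: none !important;
  animation: none !important;
  border-radius: 0 !important;
  text-shadow: none !important;
  box-shadow: none !important;
  text-rendering: optimizeSpeed !important;
  image-rendering: pixelated !important;
  background-image: none !important;
}
```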
here's what i use instead:

*, *::before, *::after {
  /* forgot if this was needed but i think using !important here broke something */
  animation: none;
  /* maybe not needed */
  animation-duration: 0s;
  /* non-zero to support "transitionend" event */
  transition-duration: 1ms;
  /* possible tiny optimization */
  transition-delay: 0s;
}
Side note: perhaps using `..duration: ..ms` along with `transition-timing-function: step-start;` could yield an additional micro-optimisation.
 killed it: https://bugs.chromium.org/p/chromium/issues/detail?id=347016
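The step-start suggestion could look like this: a sketch, assuming the goal is to skip the in-between frames while keeping `transitionend` firing:

```css
* {
  transition-duration: 1ms;               /* non-zero so transitionend still fires */
  transition-timing-function: step-start; /* jump straight to the end state */
}
```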
Another nice-to-have option would be for the browser itself to force-disable rendering of all animations.
I took this course over the weekend and didn't find it very helpful because of a lack of code walkthroughs.
 - https://www.lynda.com/Web-Development-tutorials/Advanced-SVG...
This means long, slow CSS animations absolutely kill browser performance. Independent of your opinions on animation in the browser, is there a less CPU-intensive way to do SVG animation?
To really get it running smooth and buttery, we’re going to use the GPU to render the animation.
Though translateZ() or translate3d() will still be needed as a fallback in some browsers, the will-change property is the future. It promotes the element to its own layer, so the browser doesn't have to redo layout or painting for it.
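As a sketch (the class and keyframe names are illustrative):

```css
.spinner {
  will-change: transform;    /* ask for a dedicated layer up front */
  transform: translateZ(0);  /* legacy fallback: forces layer promotion */
  animation: spin 2s linear infinite;
}

@keyframes spin {
  to { transform: rotate(360deg); }
}
```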
Edit: better formatting
There are things you can attempt depending on your browser, though, such as avoiding non-convex paths in the SVG.
If you want good performance everywhere, though, the only real option is to not use SVG at all if you need to animate it.
It's not so much that GPUs aren't built for paths as that winding rules are not suited for parallelism. The solution is to use meshes instead of paths. This is my current area of work.
I remember a whole slew of "accelerated 2D" cards in the mid-1990s that had faster Windows GDI performance than we have today for certain 2D operations such as path drawing (the regression was in Windows Vista when they moved all GDI operations to the CPU).
And doing UI layout on background threads breaks the basic design of pretty much every UI framework, web or native, which are usually single-threaded.
There is absolutely no reason why browsers have to use the native compositor for CSS. It's a bad fit, and browsers should stop doing it.
> And doing UI layout on background threads breaks the basic design of pretty much every UI framework, web or native, which are usually single-threaded.
That's why you have a render tree on a separate thread from the DOM. Look at how Servo does it (disclaimer: I work on Servo). We've proven that it works.
Dunno about the others but Qt has a threaded render loop (http://doc.qt.io/qt-5/qtquick-visualcanvas-scenegraph.html#t...) (and also has no problem handling 1080p/60fps animations on embedded hardware)
Largely written by Glenn Watson, the primary author of Servo's WebRender, no less. :)
Not entirely true. It's pretty critical to use the OS compositor for <video> for power reasons so that video playback can be offloaded entirely.
For everything else, though, agree. The OS compositor isn't magic and at the end of the day doesn't do much for non-video layers.
Here are a few:
- On some platforms, animations on native layers are applied every frame, even if the application that owns the layer is busy. This means fewer opportunities to drop frames.
- Sometimes web rendering engines are embedded in apps. Your app may want to draw web content that filters some other content that's behind it. This content may not be available to the browser engine (it may be in a separate process, for example). A native layer can apply this filter in the compositor process, where the content is available.
- Using the native compositor makes it easier to embed components (like video) that are provided by the system as native layers.
The browser can and should do the same with its CSS compositor. There's no reason why the CSS compositor should run on the main thread (and, in fact, no modern browser works this way).
> - Sometimes web rendering engines are embedded in apps. Your app may want to draw web content that filters some other content that's behind it. This content may not be available to the browser engine (it may be in a separate process, for example). A native layer can apply this filter in the compositor process, where the content is available.
For transparent regions, a browser can do this by simply exporting its entire composited tree as a single transparent layer, where it can be composited over other content. In the case of a single-layer page, this is what the browser is doing anyway.
If you're talking about CSS filters, there's no way that I know of in CSS to say "filter the stuff behind me" in the first place. You can only filter elements' contents.
> - Using the native compositor makes it easier to embed components (like video) that are provided by the system as native layers.
I grant that you have to use the native compositor to get accelerated video on some platforms. But that doesn't mean that a browser should do everything this way. In fact, no browser even tries to export all of CSS to the compositor: this is why you have the various "layerization" hacks which give rise to the sadness in this article. Reducing what needs to be layerized to just video would actually decrease complexity a lot over the status quo. (If you don't believe me, try to read FrameLayerBuilder.cpp in Gecko. It would be way simpler if video were the only thing that generated layers.)
You still have to swap buffers from your background thread and then composite the buffer instead of compositing the animating layer directly. It's a small advantage, but it is an advantage.
> If you're talking about CSS filters, there's no way that I know of in CSS to say "filter the stuff behind me" in the first place. You can only filter elements' contents.
https://developer.mozilla.org/en-US/docs/Web/CSS/backdrop-fi... (yes, it's an experimental property, but it's the one I was thinking about)
By this I assume you mean that when you have two compositors, you have an extra blit. This is mostly true (though it's not necessarily true if the OS compositor is using the GPU's scanout compositing), but it's by no means worth the enormous downsides of current layerization hacks. Right now, when you as a Web developer fall off the narrow path of stuff that the OS compositor can do, your performance craters. The current status quo is not working: only about 50% of CSS animations in the wild are performed off the main thread.
There's another enormous downside to using the OS compositor: losing all Z-buffer optimizations. Right now, browsers usually throw away 2x or more of their painting performance on pixels that are occluded. When using the OS compositor, the browser's painting engine doesn't know which pixels are occluded, because that's something only the OS compositor knows, so it has to paint the contents of every buffer just in case. But with a smart CSS compositor, the browser can early-Z-reject pixels covered up by other elements.
> yes, it's an experimental property, but it's the one I was thinking about
Ah, OK, I wasn't aware of that because it's only implemented in Safari right now. Well, using the OS compositor would make it easier to apply backdrop filters, as long as the OS compositor supports everything in the SVG filter spec (a big assumption—I suspect this is only the case on macOS and iOS!) But even with that, I think it results in less complexity to just use the OS compositor for this specific case and fall back on the browser compositor for most everything else, just as with video. CSS really does not map onto OS compositors very well.
Sure, but you're basically re-implementing big parts of the OS in the browser. Which is maybe not a bad idea, but at some point there's inception.
The status quo is not working: only about half of CSS animations in the wild run on the compositor. As browser vendors, we need to admit that the attempt to carve out a limited, fast subset of CSS has failed, and we should do the hard work to make all of CSS run fast.
Mozilla are moving Servo's WebRender over to Firefox. The feature is called Quantum Render; you can read up on it here:
To give some idea of what this means for CSS animation performance:
It will get better when things like this: https://bugs.chromium.org/p/chromium/issues/detail?id=591179... get marked fixed (i.e., use GPU rendering everywhere)
Although browsers needing to be highly defensive doesn't help, either. If a native app renders slowly you generally blame the app, whereas if a website is slow you blame the browser (regardless of who is actually at fault). This leads to browsers being conservative and defensive in their graphics stacks so that scrolling can stay smooth in the face of graphically intense rendering.
OS composition is unrelated entirely here, and really there's no need for that to change away from just dealing with pixmaps.
An independent animation is an animation that runs independently of the thread running the core UI logic. (A dependent animation runs on the UI thread.)
The elements map directly to OS compositor "visuals."
Disclosure: I work at Microsoft
That certainly used to be true on win32 (in that an animation of left/top would easily hit 60fps on a desktop computer), but maybe Microsoft's UI toolkits have regressed significantly? I rather doubt it, though, and suspect that it would still work just fine in a native Windows app even though the web equivalent grinds to a halt.
No, it doesn't. The app renders to a single surface in a single GPU render pass unless the app uses a SurfaceView, which is generally only for media uses (camera, video, games).
Multiple layers are only used when asked for explicitly (View.setLayerType) or when required for proper blending. They are generally avoided otherwise as it's generally slower to use multiple layers.
You can absolutely do the "bad" things in the linked article in a native Android app and still hit 60fps pretty trivially. The accelerated properties, like View.setTranslationX/Y, only bypass what's typically a small amount of work (and don't use a caching layer). It's an incremental improvement, not something absolutely required. Scrolling in a RecyclerView or ListView, for example, doesn't even do that. It just moves the left/top of every view and re-renders, and that's plenty fast to hit 60fps.
This was especially necessary due to many of the ripple animations being introduced.
The new part is that some animations (basically just the Ripple Animation) can happen on their own on that thread, but it doesn't use a GPU layer for it nor a different OS composition layer.
Really? As so often, there was a lot of talk about doing that beforehand, and it wasn’t discussed at all later on, so I had assumed that this had been done. Interesting that this didn’t happen.
What’d be the reason for that? Animating objects on a static background seems a prime case for GPU layers. Or was it the issue with the framebuffer sizes being too huge again?
A browser is essentially a small OS with server-side rendering, where clients/web apps send HTML/CSS/JS for their GUI and the compositor/browser engine renders into bitmaps. We probably need something simpler than HTML/CSS/JS, though, if we want it to be reasonably easy to implement a fast rendering engine.
I realize that would probably not be easy, but maybe still possible without major architectural changes?
Yes, and we as browser vendors should absolutely do that.
Though a better solution is to just make the compositor able to accelerate the full generality of CSS in the first place.
It's complicated because layout changes can cause new elements to be created. Elements can be really expensive to create.
The layout thread.
> It's complicated because layout changes can cause new elements to be created. Elements can be really expensive to create.
Huh? No, they can't. In rare cases, layout changes can cause new render objects to be created (line breaks), but that's not elements.
Dunno which thread—just an idea.
In this case each animation frame is just a set of commands sent by the CPU to the GPU: render thing A at coordinate Ax,Ay, render thing B at ...
So animations that do not involve relayout cost almost nothing on the CPU. And even with relayout ( transition: width 2s; ) it is not that bad, as relayout is usually only partial.
At least this is how it works in Sciter: https://sciter.com/sciter-and-directx/ (demo of Sciter HTML/CSS rendering integrated into DirectX scene)
But it isn't the case with the technique described in the article (browsers also render with more than 60 FPS if the monitor refresh rate is higher), so my comment was a bit offtopic, sorry.
If we're all formal all the time, it's going to be hard to welcome new people. It's important to make sarcasm obvious though (because it's difficult to express), so that newcomers who may not know that this is ridiculous can feel "in" on the joke, while learning something, and not being embarrassed.
In a 60 fps stream, you can insert one image that is way off from the rest, and it will be detectable. You have to go to around 100 fps for that "blip" to go unnoticed.
So when it comes to transitions, high frame rates actually do matter, as that makes the difference between something smooth & natural, and something that appears jagged & visually annoying.
The sensitivity of the human eye goes way beyond 30fps. Studies have shown that viewers can distinguish between modulated light and a stable field at up to 500 Hz. As a more practical example, many reviewers have noted that the new iPads' 120 Hz refresh rate has obvious, visible benefits for scrolling and animations.
But for gaming, anybody that can't tell the difference between 30 fps and 60 fps is blind. I have a 144 Hz monitor, and I can certainly tell the difference even between 60 fps and 144 fps.
The UFO Test, while contrived, will show you the difference. If you have a 60 Hz monitor, it will show things moving at 60 fps and 30 fps (and 15 and 7.5 fps, if you want). If you have a 144 Hz monitor, it will show 144 fps, 72 fps, 36 fps, etc. The difference is clear, especially if you're following the UFO with your eyes. The higher framerate is less blurry. In games, this can be huge.
These are the best CSS properties to animate with:
Position — transform: translateX(n) translateY(n) translateZ(n);
Scale — transform: scale(n);
Rotation — transform: rotate(ndeg);
Opacity — opacity: n;
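All four can be combined in a single compositor-friendly animation. A sketch (names are illustrative):

```css
@keyframes pop-in {
  from { transform: translateX(-40px) scale(0.8) rotate(-4deg); opacity: 0; }
  to   { transform: translateX(0) scale(1) rotate(0deg);        opacity: 1; }
}

.card { animation: pop-in 300ms ease-out; }
```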
I'll add in that translate3d is generally faster than the other translate options. There's some good performance info in the post and it's worth the short read.
transition: transform 300ms linear;
And I'd point out that the authors are specifically stating that animating transform+opacity (via a 'transition') are more efficient than things like left, top, bottom, etc.—because those properties affect the layout stage, which is earlier than the composite stage (where the transform+opacity properties operate), and subsequent stages have to be recalculated.
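The difference in practice: both of these move a box 200px, but only the first forces the browser back through layout on every frame (class names are illustrative):

```css
/* Layout-bound: `left` re-runs layout + paint + composite each frame */
.box-slow       { position: relative; left: 0; transition: left 300ms linear; }
.box-slow.moved { left: 200px; }

/* Compositor-only: `transform` can skip layout and paint entirely */
.box-fast       { transition: transform 300ms linear; }
.box-fast.moved { transform: translateX(200px); }
```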
They also discuss different ways of structuring DOM trees to create the same animation which have performance trade-offs.
I copied his code-pen and used his first example in Safari (just using the `left` css property) and it was still 60 FPS in Safari (per debugging tools). I wonder how much variance there is between browsers on this. (I'm on a Mac 10.12.5 using Safari 10.1.1 (12603.2.4))
This isn't pure css as it's using JS to set the property, but it should be a good indicator.
I think you need to call getComputedStyle (https://developer.mozilla.org/en-US/docs/Web/API/Window/getC...) to resolve the new style update outside of the layout/paint cycle.
But I'm not really sure, to be honest.