60 FPS Animations with CSS3 (medium.com)
224 points by ritadias on July 10, 2017 | 76 comments



Animations via CSS have been a challenge in my experience. It's easy to create a demo with a single layer of DOM elements and have it work well, but as soon as there is overlap of any sort, lots of DOM elements to manipulate, or things like images or media, it can get twitchy real fast. In addition, every browser and every platform is different in random ways.

I was doing testing on a scrolling carousel and couldn't figure out what was causing a stutter that appeared in an Android WebView, but not in the equivalent Chrome version on a Mac. It turned out that the desktop browser was doing an additional recalc in the middle of a bunch of other animations. I ended up slotting in a document.body.getBoundingClientRect() to force the recalc and make them work equivalently. You might think this would cause thrashing, but it actually smoothed things out. Crazy but true.

Also, CSS animations can "cancel" in odd ways... if I tell an element to translateX from a to b, then during the animation tell it to actually go to c, some browsers (most) will figure it out and do the right thing, some will "start over", and some will do wonky things with the speed (is the duration calculated from the current position between a and b, or from a?).
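A minimal sketch of that interruption case (the class names here are hypothetical):

    .slide { transition: transform 300ms ease; }
    .slide.to-b { transform: translateX(100px); }
    .slide.to-c { transform: translateX(250px); }

Per the CSS Transitions spec, swapping .to-b for .to-c mid-flight should retarget smoothly from the current interpolated position, but in practice some engines restart from a, and some reuse the full 300ms from wherever the element happens to be.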

Don't get me wrong, CSS3 animations are great, but don't make the mistake that they "just work", because they don't.


Similar experiences.

CSS3 animations are cool because they have the potential to be superior, but 3 out of 4 times it seems that the equivalent JS/$.animate is smoother and more consistent.

It's not the reality I want, but it's the one I've been shown.


It's very situational. I built a rich SPA with lots of animations a few years ago; sometimes I would have no issues, and other times animations would chug on one or two browsers. I haven't done animation work in the past couple of years, but I used GSAP for that product and it was great. It's very tough to navigate the animation world because fades work great in context A on Firefox but work like crap in the same context on Safari, and you might see the opposite in context B with a different animation.

Hopefully in a few more years we will see better and more consistent performance across all browsers, but we aren't quite there yet with CSS.


Meanwhile, 60fps animations using any technology other than the web are entirely ... unimpressive.


This article was useful and concise.

I urge web developers to consider the trade-offs when using CSS hardware acceleration. Properties like `will-change` or `translate3d` substantially increase the client device's energy usage. This has battery-life implications for users on portable devices such as laptops, tablets, and mobile phones. As with any other feature, ask "why" before "how" when prioritizing interactive motion in your webapp.
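A common mitigation is to scope `will-change` so the layer promotion only exists while the element can actually animate, instead of leaving it on permanently (selectors are illustrative):

    /* Promote to its own layer only while the sidebar is likely to move */
    .sidebar.is-open,
    .sidebar.is-animating {
      will-change: transform;
    }

That keeps the extra GPU memory and power cost bounded to the moments that need it.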

More info: https://dev.opera.com/articles/css-will-change-property/


> Properties like `will-change` or `translate3d` substantially increase the client device's energy usage

They also made the font blurry on Chrome on Windows prior to DirectWrite (it's moot now, though personally I'll say that I hate the use of DirectAnything for simple font rendering).


A nice feature would be to add an "energy saver" option to your web app, which adds a class to your app and a stylesheet that disables all animations.


In browsers that support user style sheets you can disable CSS3 animations pretty easily, just using

    *, *::before, *::after
    {
      transition: none !important;
      animation: none !important;
    }
(For not-so-capable browsers, using author-level pseudo-user-style sheets, the `*` must have raised specificity, like `*:not(#\0)`.)

I used this in the past when I wanted to play 2048 as fast as possible, resetting also border-radius, text-shadow, box-shadow, text-rendering, image-rendering and background-image [1]. It really made quite a difference :]

[1] https://userstyles.org/styles/103878/nofx


`transition: none` breaks pages relying on the "transitionend" JS event (edit: and, I guess, the other transition-related events I forgot about). I noticed that logging into Google wouldn't work with it (it wouldn't hide some gray overlay).

Here's what I use instead:

  /* forgot if this was needed, but I think using !important here broke something */
  transition-property: none;

  /* maybe not needed */
  transition-delay: 0s !important;

  /* non-zero to support the "transitionend" event */
  transition-duration: 0.0000001s !important;

  /* possible tiny optimization */
  transition-timing-function: linear !important;
I don't know if `animation` has events associated with it, but I haven't come across any sites breaking without them.


This is an insightful remark; I guessed it could break some presumably crappy designs blindly relying on transition state or such, but … Google? Wow. Even considering their approach to user-level style sheets [1], it is mildly surprising.

Side note: perhaps using a duration in `ms` and `transition-timing-function: step-start;` could yield some additional micro-optimisation.

[1] killed it: https://bugs.chromium.org/p/chromium/issues/detail?id=347016


There is actually a draft media query called "prefers-reduced-motion" that tells the webpage that the user prefers animation kept to a minimum. It wasn't created to save energy but to help people who feel dizzy when too much animation is going on. https://webkit.org/blog/7551/responsive-design-for-motion/#u...
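A sketch of how a page could honor that query once it ships, using the syntax from the WebKit post (the non-zero duration is there so "transitionend" still fires, per the discussion above):

    @media (prefers-reduced-motion: reduce) {
      *, *::before, *::after {
        animation: none !important;
        transition-duration: 0.001s !important;
      }
    }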

Another nice-to-have option would be for the browser itself to be able to force-disable rendering of all animations.


On a slightly unrelated note: can anyone recommend a good SVG animation course that I could take, or a set of guides that walk you through doing SVG animations / typical workflows?

I took this[1] course over the weekend and didn't find it very helpful because of a lack of code walkthroughs.

[1] - https://www.lynda.com/Web-Development-tutorials/Advanced-SVG...


On a related note, are Lynda courses still absolutely dire?


GreenSock's TweenMax is pretty good


My 2015 macbook pro will hit 150% CPU on basic SVG animation via CSS properties (like rotating an SVG element by setting `transform: rotate(90deg);`) even when using the best practices described in the article.

This means a long, slow CSS animation absolutely kills browser performance. Independent of your opinions on animation in the browser, is there a less CPU-intensive way to do SVG animation?
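One thing worth trying (results vary by browser, and this is only a sketch): animate a plain HTML wrapper around the SVG rather than the SVG element itself, so the vector content can be rasterized once and then moved as a texture:

    /* hypothetical markup: <div class="spinner"><svg>...</svg></div> */
    .spinner {
      will-change: transform;
      animation: spin 2s linear infinite;
    }
    @keyframes spin {
      from { transform: rotate(0deg); }
      to   { transform: rotate(360deg); }
    }

If the browser still re-rasterizes the SVG every frame this won't help, but on engines that cache the layer it avoids repainting the paths.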


From the article:

To really get it running smooth and buttery, we’re going to use the GPU to render the animation.

Though translateZ() or translate3d() will still be needed by some browsers as a fallback, the will-change property is the future. What this does is that it promotes the elements to another layer, so the browser doesn’t have to consider the layout render or painting.

Edit: better formatting


Either using SVG at all is the problem, or something in the SVG is. SVGs are generally not GPU rasterized since GPUs are really not built for paths. That's a whole research area in and of itself.

There are things you can attempt depending on your browser, though, such as avoiding non-convex paths in the SVG.

If you want good performance everywhere, though, the only real option is to not use SVG for anything you need to animate.


> SVGs are generally not GPU rasterized since GPUs are really not built for paths. That's a whole research area in and of itself.

It's not so much that GPUs aren't built for paths as that winding rules are not suited for parallelism. The solution is to use meshes instead of paths. This is my current area of work.


Do you mean the non-zero rule and even-odd rules? In the WP definition they don't inherently seem to have side effects or dependencies on previous computations, can you link to an elaboration?


Computing whether a pixel is filled requires computing its winding number, which depends on every path before it on the scanline. This is not a problem for a sequential algorithm, since you compute winding numbers as you go, but it is a problem if you want to fill every pixel in parallel.


Is this the generalization of the Pathfinder text renderer?


> GPUs are really not built for paths

I remember a whole slew of "accelerated 2D" cards in the mid-1990s that had faster Windows GDI performance than we have today for certain 2D operations such as path drawing (the regression was in Windows Vista when they moved all GDI operations to the CPU).


I hope browser vendors take note and make this guide obsolete.


This guide will be relevant until OS compositors fundamentally change from just dealing with bitmaps. Which is unlikely to happen anytime soon.

And doing UI layout on background threads breaks the basic design of pretty much every UI framework, web or native, which are usually single-threaded.


> This guide will be relevant until OS compositors fundamentally change from just dealing with bitmaps. Which is unlikely to happen anytime soon.

There is absolutely no reason why browsers have to use the native compositor for CSS. It's a bad fit, and browsers should stop doing it.

> And doing UI layout on background threads breaks the basic design of pretty much every UI framework, web or native, which are usually single-threaded.

That's why you have a render tree on a separate thread from the DOM. Look at how Servo does it (disclaimer: I work on Servo). We've proven that it works.


> And doing UI layout on background threads breaks the basic design of pretty much every UI framework, web or native, which are usually single-threaded.

Dunno about the others but Qt has a threaded render loop (http://doc.qt.io/qt-5/qtquick-visualcanvas-scenegraph.html#t...) (and also has no problem handling 1080p/60fps animations on embedded hardware)


> Dunno about the others but Qt has a threaded render loop (http://doc.qt.io/qt-5/qtquick-visualcanvas-scenegraph.html#t...) (and also has no problem handling 1080p/60fps animations on embedded hardware)

Largely written by Glenn Watson, the primary author of Servo's WebRender, no less. :)


By "single-threaded" I meant business logic and non-fixed layout being done on the same thread. "Rendering" (drawing to a bitmap or translating to the OS compositor's object model or direct OpenGL/DirectX/etc.) is done on a separate thread in most frameworks I'm aware of.


> There is absolutely no reason why browsers have to use the native compositor for CSS. It's a bad fit, and browsers should stop doing it.

Not entirely true. It's pretty critical to use the OS compositor for <video> for power reasons so that video playback can be offloaded entirely.

For everything else, though, agree. The OS compositor isn't magic and at the end of the day doesn't do much for non-video layers.


> There is absolutely no reason why browsers have to use the native compositor for CSS. It's a bad fit, and browsers should stop doing it.

Here are a few:

- On some platforms, animations on native layers are applied every frame, even if the application that owns the layer is busy. This means fewer opportunities to drop frames.

- Sometimes web rendering engines are embedded in apps. Your app may want to draw web content that filters some other content that's behind it. This content may not be available to the browser engine (it may be in a separate process, for example). A native layer can apply this filter in the compositor process, where the content is available.

- Using the native compositor makes it easier to embed components (like video) that are provided by the system as native layers.


> - On some platforms, animations on native layers are applied every frame, even if the application that owns the layer is busy. This means fewer opportunities to drop frames.

The browser can and should do the same with its CSS compositor. There's no reason why the CSS compositor should run on the main thread (and, in fact, no modern browser works this way).

> - Sometimes web rendering engines are embedded in apps. Your app may want to draw web content that filters some other content that's behind it. This content may not be available to the browser engine (it may be in a separate process, for example). A native layer can apply this filter in the compositor process, where the content is available.

For transparent regions, a browser can do this by simply exporting its entire composited tree as a single transparent layer, where it can be composited over other content. In the case of a single-layer page, this is what the browser is doing anyway.

If you're talking about CSS filters, there's no way that I know of in CSS to say "filter the stuff behind me" in the first place. You can only filter elements' contents.

> - Using the native compositor makes it easier to embed components (like video) that are provided by the system as native layers.

I grant that you have to use the native compositor to get accelerated video on some platforms. But that doesn't mean that a browser should do everything this way. In fact, no browser even tries to export all of CSS to the compositor: this is why you have the various "layerization" hacks which give rise to the sadness in this article. Reducing what needs to be layerized to just video would actually decrease complexity a lot over the status quo. (If you don't believe me, try to read FrameLayerBuilder.cpp in Gecko. It would be way simpler if video were the only thing that generated layers.)


> The browser can and should do the same with its CSS compositor. There's no reason why the CSS compositor should run on the main thread (and, in fact, no modern browser works this way).

You still have to swap buffers from your background thread and then composite the buffer instead of compositing the animating layer directly. It's a small advantage, but it is an advantage.

> If you're talking about CSS filters, there's no way that I know of in CSS to say "filter the stuff behind me" in the first place. You can only filter elements' contents.

https://developer.mozilla.org/en-US/docs/Web/CSS/backdrop-fi... (yes, it's an experimental property, but it's the one I was thinking about)


> You still have to swap buffers from your background thread and then composite the buffer instead of compositing the animating layer directly. It's a small advantage, but it is an advantage.

By this I assume you mean that when you have two compositors, you have an extra blit. This is mostly true (though it's not necessarily true if the OS compositor is using the GPU's scanout compositing), but it's by no means worth the enormous downsides of current layerization hacks. Right now, when you as a Web developer fall off the narrow path of stuff that the OS compositor can do, your performance craters. The current status quo is not working: only about 50% of CSS animations in the wild are performed off the main thread.

There's another enormous downside to using the OS compositor: losing all Z-buffer optimizations. Right now, browsers usually throw away 2x or more of their painting performance painting pixels that are occluded. When using the OS compositor, the browser painting engine doesn't know which pixels are occluded, because that's something only the OS compositor knows, so it has to paint the contents of every buffer just in case. But with a smart CSS compositor, the browser can early Z-reject pixels covered up by other elements.

> yes, it's an experimental property, but it's the one I was thinking about

Ah, OK, I wasn't aware of that because it's only implemented in Safari right now. Well, using the OS compositor would make it easier to apply backdrop filters, as long as the OS compositor supports everything in the SVG filter spec (a big assumption—I suspect this is only the case on macOS and iOS!) But even with that, I think it results in less complexity to just use the OS compositor for this specific case and fall back on the browser compositor for most everything else, just as with video. CSS really does not map onto OS compositors very well.


> There is absolutely no reason why browsers have to use the native compositor for CSS.

Sure, but you're basically re-implementing big parts of the OS in the browser. Which is maybe not a bad idea, but at some point there's inception.


There's a good reason to, in this case: that the OS compositor is not designed to render the full generality of CSS.

The status quo is not working: only about half of CSS animations in the wild run on the compositor. As browser vendors, we need to admit that the attempt to carve out a limited, fast subset of CSS has failed, and we should do the hard work to make all of CSS run fast.


> "Which is unlikely to happen anytime soon."

Mozilla are moving Servo's WebRender over to Firefox. The feature is called Quantum Render, you can read up about it here:

https://wiki.mozilla.org/Platform/GFX/Quantum_Render

To give some idea of what this means for CSS animation performance:

https://m.youtube.com/watch?v=u0hYIRQRiws


That's not true at all. You don't have most of these problems when dealing with native apps, for example, and the reason is that browser rendering is just extremely slow. Thus the gap between the common path and the fast path is unusually massive on browsers.

It will get better when things like this: https://bugs.chromium.org/p/chromium/issues/detail?id=591179... get marked fixed (aka, use GPU rendering everywhere)

Although browsers needing to be highly defensive doesn't help, either. If a native app renders slow you generally blame the app, whereas if a website is slow you blame the browser (regardless of who is actually at fault). This leads to browser being conservative and defensive in their graphics stacks so that scrolling can be smooth in the face of graphically intense rendering.

OS composition is unrelated entirely here, and really there's no need for that to change away from just dealing with pixmaps.


You have this problem with native Windows apps. I'm not familiar with iOS or Android.

https://blogs.msdn.microsoft.com/windowsappdev/2012/05/01/fa...

An independent animation is an animation that runs independently from the thread running the core UI logic. (A dependent animation runs on the UI thread.)

The elements map directly to OS compositor "visuals."

Disclosure: I work at Microsoft


I'm not as familiar with native Windows apps anymore, but the difference typically is that while there is a faster path on native, it's not as critical that it's taken. As in, the slow path is still generally fast enough.

That certainly used to be true on Win32 (in that an animation of left/top would easily hit 60fps on a desktop computer), but maybe Microsoft's UI toolkits have regressed significantly? I rather doubt it, though, and suspect that it would still work just fine in a native Windows app even though the web equivalent grinds to a halt.


Animating left/top on a fixed layout is indeed still fast. But modern app layouts using responsive design are still going to need to hit the UI thread to modify controls as the size changes.


The same issues also happen on Android, which is why Android has separate threads for layout and interactions, rendering, and business logic in every app, and why all complicated rendering ends up on separate GPU layers.


> and why all complicated rendering ends up on separate GPU layers.

No, it doesn't. The app renders to a single surface in a single GPU render pass unless the app uses a SurfaceView, which is generally only for media uses (camera, video, games).

Multiple layers are only used when asked for explicitly (View.setLayerType) or when required for proper blending. They are generally avoided otherwise as it's generally slower to use multiple layers.

You can absolutely do the "bad" things in the linked article in a native Android app and still hit 60fps pretty trivially. The accelerated properties, like View.setTranslationX/Y, only bypass what's typically a small amount of work (and don't use a caching layer). It's an incremental improvement, not something absolutely required. Scrolling in a RecyclerView or ListView, for example, doesn't even do that. It just moves the left/top of every view and re-renders, and that's plenty fast to hit 60fps.


This used to be true, but since Android M and N, where a lot more animations were added, a lot of animation now happens on separate GPU layers (and is rendered, if necessary, by separate threads).

This was especially necessary due to many of the ripple animations being introduced.


I think you're confusing the RenderThread with GPU layers. There's only one rendering thread per app, and it handles all rendering work done by that app. It's really no different from pre-M rendering, other than that a chunk of what used to be on the UI thread is now on a different thread. The general flow is the same.

The new part is that some animations (basically just the Ripple Animation) can happen on their own on that thread, but it doesn't use a GPU layer for it nor a different OS composition layer.


> but it doesn't use a GPU layer for it nor a different OS composition layer.

Really? As so often, there was a lot of talk about doing that beforehand, and it wasn’t discussed at all later on, so I had assumed that this had been done. Interesting that this didn’t happen.

What’d be the reason for that? Animating objects on a static background seems a prime case for GPU layers. Or was it the issue with the framebuffer sizes being too huge again?


Think about what the static background actually is. It's probably either an image (which is already just a static GL texture, no need to cache your bitmap in another bitmap), or it's something like a round rect which can actually be rendered faster than sampling from a texture (since it's a simple quad + a simple pixel shader - no texture fetches slowing things down). In such a scenario a GPU layer just ends up making things slower and uses more RAM.


> This guide will be relevant until OS compositors fundamentally change from just dealing with bitmaps. Which is unlikely to happen anytime soon.

A browser is essentially a small OS with server-side rendering, where clients/web apps send HTML/CSS/JS for their GUI and the compositor/browser engine renders into bitmaps. We probably need something simpler than HTML/CSS/JS, though, if we want it to be reasonably easy to implement a fast rendering engine.


Hmm... Wouldn't it be possible to address the problem at another level? For instance, when it's discovered that layout properties are being animated, see if it would be possible to create an equivalent effect in the compositing stage, if so, convert the layout position changes to transforms?

I realize that would probably not be easy, but maybe still possible without major architectural changes?


> For instance, when it's discovered that layout properties are being animated, see if it would be possible to create an equivalent effect in the compositing stage, if so, convert the layout position changes to transforms?

Yes, and we as browser vendors should absolutely do that.

Though a better solution is to just make the compositor able to accelerate the full generality of CSS in the first place.


Which thread are you going to do that work on?

It's complicated because layout changes can cause new elements to be created. Elements can be really expensive to create.


> Which thread are you going to do that work on?

The layout thread.

> It's complicated because layout changes can cause new elements to be created. Elements can be really expensive to create.

Huh? No, they can't. In rare cases, layout changes can cause new render objects to be created (line breaks), but that's not elements.


Not sure how Servo works, but what happens if a list viewport gets bigger and you need to display more list items?


The newly visible list items are already laid out, so we just display them.


That's the kind of thing that would cause a rejection: elements are going to be created, so this can't be converted to compositing-stage operations instead.

Dunno which thread—just an idea.


Ideally, the so-called rendering tree should reside completely on the GPU, or at least all the heavy rendering components it represents should be there.

In this case each animation frame will be just a set of commands sent by the CPU to the GPU: render thing A at coordinate Ax,Ay, render thing B at ...

So animations that do not involve relayout will cost the CPU almost nothing. And even with relayout (transition: width 2s;) it will not be that bad, as relayout is usually partial.

At least this is how it works in Sciter: https://sciter.com/sciter-and-directx/ (demo of Sciter HTML/CSS rendering integrated into DirectX scene)


Never buy a >= 120 Hz monitor. Everything else will start lagging :(


I have one. Didn't notice any lagging. What are you talking about?


I meant that after you're used to it, you'll start to notice "only" 60 FPS. Also animations which are capped at 60 FPS will interpolate badly to 144 Hz.

But it isn't the case with the technique described in the article (browsers also render with more than 60 FPS if the monitor refresh rate is higher), so my comment was a bit offtopic, sorry.


"but the human eye can only see 30fps!"


Sarcasm doesn't carry well on the internet, if that was your intention (not all of us know everything about everything). Please refrain, especially on this board.


I don't know, I recognised the sarcasm right away, since the comment used quotes to distance the speaker from what was said.

If we're all formal all the time, it's going to be hard to welcome new people. It's important to make sarcasm obvious though (because it's difficult to express), so that newcomers who may not know that this is ridiculous can feel "in" on the joke, while learning something, and not being embarrassed.


Yes, I noted the quotes, but the comment was still useless because it doesn't add anything really informative here. Also further muddying the water (of course after my comment) was your sibling comment, which says anything above 30fps is a waste of priority.


I can't actually tell if you are being sarcastic, as he actually has a point. If you're spending your time trying to make your hamburger menu work past 30 fps, then you may need to rethink priorities.


There's a difference between the frame rate at which the human eye can process images, and the frame rate at which the human eye can detect that there was a change.

In a 60 fps stream, you can insert one image that is way off from the rest, and it will be detectable. You have to go to around 100 fps for that "blip" to go unnoticed.

So when it comes to transitions, high frame rates actually do matter, as that makes the difference between something smooth & natural, and something that appears jagged & visually annoying.


> As he actually has a point.

The sensitivity of the human eye goes way beyond 30fps. Studies have shown that viewers can distinguish between modulated light and a stable field at up to 500 Hz. As a more practical example, many reviewers have noted that the new iPads' 120 Hz refresh rate has obvious, visible benefits for scrolling and animations.


For a simple web app, yeah, I'd agree that trying to get over 30 fps is pointless, especially on mobile where I'll be concerned about battery usage.

But for gaming, anybody that can't tell the difference between 30 fps and 60 fps is blind. I have a 144 hz monitor, and I can certainly tell the difference even between 60 fps and 144 fps.

The UFO Test [0], while contrived, will show you the difference. If you have a 60 hz monitor, it will show things moving at 60 fps and 30 fps (And 15 and 7.5 fps, if you want). If you have a 144 hz monitor, it will show 144 fps, 72 fps, 36 fps, etc. The difference is clear, especially if you're following the UFO with your eyes. The higher framerate is less blurry. In games, this can be huge.

[0] https://www.testufo.com/#test=framerates


It could be some kind of placebo effect, but I know the difference between 60 and 150+ fps even on 60hz screens. At least I'm sure I can feel it when gaming.


It is 100% placebo, because a 60 Hz screen can only render 60 fps. If you're getting 150 fps with V-Sync enabled, only 2 of every 5 frames are being shown to you; the other three are thrown out, never reaching the monitor. If V-Sync is disabled, then you're only seeing about 2/5ths of each frame, with a tear in the screen where the next frame was shown, causing rendered objects to shear.


It was sarcasm, and no.


TL;DR

These are the best CSS properties to animate:

Position — transform: translateX(n) translateY(n) translateZ(n);

Scale — transform: scale(n);

Rotation — transform: rotate(ndeg);

Opacity — opacity: n;

I'll add in that translate3d is generally faster than the other translate options. There's some good performance info in the post and it's worth the short read.


They also point out that substantial performance gains are given by using 'will-change,' as in the following.

  .app-menu {
    -webkit-transform: translateX(-100%);
    transform: translateX(-100%);
    transition: transform 300ms linear;
    will-change: transform;
  }

And I'd point out that the authors specifically state that animating transform+opacity (via a 'transition') is more efficient than animating things like left, top, bottom, etc., because those properties affect the layout stage, which comes before the composite stage (where transform and opacity operate), and every subsequent stage has to be recalculated.
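Roughly, the two approaches look like this (class names are illustrative; the `left` version also needs `position: relative` to have any effect):

    /* Layout-bound: every frame re-runs layout, paint, and composite */
    .box-slow { position: relative; transition: left 300ms; }
    .box-slow.open { left: 200px; }

    /* Composite-only: the promoted layer just moves */
    .box-fast { transition: transform 300ms; }
    .box-fast.open { transform: translateX(200px); }

Same visual result, very different per-frame cost.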

They also discuss different ways of structuring DOM trees to create the same animation which have performance trade-offs.


Yep, the will-change was something new to me.


This appears to be browser specific.

I copied his CodePen and used his first example in Safari (just animating the `left` CSS property) and it was still 60 FPS (per the debugging tools). I wonder how much variance there is between browsers on this. (I'm on macOS 10.12.5 using Safari 10.1.1 (12603.2.4).)


There are a lot of jsperf tests like this one https://jsperf.com/translate3d-vs-xy/4

This isn't pure CSS, as it's using JS to set the property, but it should be a good indicator.


I'm not an expert in this area, but doesn't setting these properties only take effect in the next layout/paint cycle?

I think you need to call getComputedStyle (https://developer.mozilla.org/en-US/docs/Web/API/Window/getC...) to resolve the new style update outside of the layout/paint cycle.

But I'm not really sure, to be honest.


All CSS advice around "this causes the creation of a compositing layer" is browser-specific. Chrome and Safari happen to have a particular compositing model but AFAIK there's nothing in the HTML spec that actually requires it.



