A 16.67 Millisecond Frame (koolcodez.com)
19 points by luciodale 7 hours ago | 20 comments




This article says that using `transform` is faster than using `left` and `top` because `transform` is handled on the GPU while `left` and `top` are not. This is a myth. I tried the demo page in the Firefox profiler; neither the optimized nor the unoptimized version missed frames. I tried it in the Chrome profiler; the unoptimized version missed frames, but the time was clearly labelled by the profiler as GPU time, not reflow. Neither browser did reflows (or any reflows were fast enough that no profiler samples were associated with them).

The reality is that browsers contain large piles of heuristic optimizations which defy simple explanation. You simply have to profile and experiment, every time, separately.
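
To reproduce this yourself, a throwaway harness like the one below is enough (this is my own sketch, not the article's demo; the element styling and motion path are arbitrary): animate one box with `transform` and another with `top`/`left`, open the profiler, and compare the traces.

```js
// Minimal profiling harness: one box animated via transform, one via top/left.
const make = color => {
  const el = document.createElement("div");
  el.style.cssText =
    `position:absolute;width:50px;height:50px;background:${color};top:0;left:0;`;
  document.body.appendChild(el);
  return el;
};

const a = make("steelblue"); // animated via transform
const b = make("tomato");    // animated via top/left

const tick = now => {
  const x = 100 + 100 * Math.sin(now / 500);
  const y = 100 + 100 * Math.cos(now / 500);

  a.style.transform = `translate(${x}px, ${y}px)`;

  b.style.top = `${y + 150}px`;
  b.style.left = `${x}px`;

  requestAnimationFrame(tick);
};
requestAnimationFrame(tick);
```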


Yep. At one point it probably was the case that these properties always caused reflows, but browsers have had so much investment in optimization that they likely recognize when those layout changes don't require a reflow and skip it.

Same experience here. Both consumed a lot of CPU, but they looked very similar on the perf graph, and neither missed any frames.

This article is almost certainly AI-generated.

The "demo" is kind of bologna.

1) The code that is running is not what's presented; it executes (non-transpiled) vanilla JS.* Why not just show that?

2) Removing the box shadow makes the two much closer in performance.

3) The page could just be one sentence: "Reflowing the layout of a page is slower than moving a single item." The GPU is unrelated.

---

*Code that actually is running:

```js

// Excerpt from the bundled JS the demo actually runs, reformatted for
// readability. u is the transform-based ("optimized") handler, d is the
// top/left-based ("unoptimized") one.
, u = t => {
    h && clearTimeout(h);
    l.forEach((e, s) => {
        const { top: o, left: n } = m[r[s]];
        if (t) {
            e.style.transform = "translate(0px, 0px)";
            e.style.opacity = "0.7";
            e.offsetHeight; // reading offsetHeight forces a synchronous layout
            e.style.transform = `translate(${n}px, ${o}px)`;
        } else {
            e.style.transform = `translate(${n}px, ${o}px)`;
        }
        e.style.top = "";
        e.style.left = "";
    });
    t && (h = window.setTimeout(() => {
        l.forEach(e => e.style.opacity = "1");
    }, 500));
}
, d = t => {
    y && clearTimeout(y);
    l.forEach((e, s) => {
        const { top: o, left: n } = m[r[s]];
        e.style.top = `${o}px`;
        e.style.left = `${n}px`;
        e.style.transform = "";
        t && (e.style.boxShadow = "0 14px 28px rgba(239,68,68,0.45)"); // REMOVING THIS LINE = BIG DIFFERENCE
    });
    t && (y = window.setTimeout(() => {
        l.forEach(e => {
            e.style.boxShadow = "none";
        });
    }, 500));
}
```

This is both informative and kind of amazing in that anything works at all. Talk to anyone who did graphics at the turn of the century and you'll hear about "racing the beam": you had only 16.67 ms before vertical retrace took you back to 0,0. It's why double and triple buffering were invented, and why "animation time" is skewed by "frame time": if you want to keep your animations from jumping or jittering, you really need to know how many milliseconds will elapse between the last frame and the one you're rendering, so that all your assets are drawn where they should be at that time.

There is a lot of fun programming to be had in that space.
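
A minimal sketch of that delta-time idea in browser terms (my own illustration, not from the article): advance the animation by the measured time since the last frame, so a long frame produces a bigger step instead of a visible stutter.

```js
// A moving box whose position advances by measured elapsed time rather
// than by a fixed per-frame step, so a long frame produces a bigger jump
// instead of a slowdown.
const box = document.createElement("div");
box.style.cssText = "position:absolute;top:20px;width:40px;height:40px;background:teal;";
document.body.appendChild(box);

const speed = 200; // pixels per second
let x = 0;
let last = performance.now();

function frame(now) {
  const dt = (now - last) / 1000; // seconds since the previous frame
  last = now;

  x = (x + speed * dt) % (window.innerWidth - 40); // time-based, not frame-based
  box.style.transform = `translateX(${x}px)`;

  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```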


We still race the beam, only on separate command chains. Across threads. With 100x the vertices. It’s still fun. Light, shadows, volumes, SDRs, HDRIs, PBR, so much has been thoroughly researched and standardized. We even have realistic clouds with volume.

This is why I got into programming to begin with. Fun first, visual second, technical challenges third, money fourth, company last.


How do we even see anything on a browser? How do pixels turn into shapes, color, and movement?

Every time we scroll, hover, or trigger an animation, the browser goes through a whole routine. It calculates styles, figures out layout, paints pixels, and puts everything together on screen. All of that happens in just a few milliseconds.

It’s kind of wild how much is happening behind what feels instant. And the way we write code can make that process either smooth and fluid or heavy and janky.

I wrote an article that walks through this step by step, with a small demo showing exactly how these browser processes work and how a few CSS choices can make a big difference.
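
As a rough way to see that budget in numbers (my own snippet, not from the article): watch the gap between consecutive requestAnimationFrame callbacks and flag anything that blows well past the ~16.67 ms a 60 Hz display allows.

```js
// Rough dropped-frame detector: if the gap between two consecutive
// requestAnimationFrame callbacks is much larger than the ~16.67 ms
// budget of a 60 Hz display, something blew the frame budget.
const BUDGET_MS = 1000 / 60;
let prev = performance.now();

function check(now) {
  const elapsed = now - prev;
  prev = now;
  if (elapsed > BUDGET_MS * 1.5) {
    console.warn(`Long frame: ${elapsed.toFixed(1)} ms`);
  }
  requestAnimationFrame(check);
}
requestAnimationFrame(check);
```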


A lot of these steps don't happen on scroll and hover under normal circumstances. For example, smooth scrolling on touchscreens is implemented by re-running only the compositor on each frame, reusing the existing GPU-resident bitmap of the text being scrolled. That's why non-passive onscroll callbacks make scrolling suck, especially on mobile.
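
For reference, the `passive` flag is just the third argument to `addEventListener`: it promises the browser the handler will never call `preventDefault()`, so the compositor can keep scrolling without waiting on the main thread. It matters most for touch and wheel handlers, which are the ones that can actually cancel a scroll; this snippet is illustrative, not from the article.

```js
// Marking touch/wheel handlers as passive promises the browser that
// preventDefault() will never be called, so the compositor can start
// scrolling immediately instead of waiting for the main thread.
window.addEventListener(
  "touchmove",
  e => {
    // read-only work here; preventDefault() is not allowed in a passive handler
    console.log(e.touches[0].clientY);
  },
  { passive: true }
);
```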

What's not stated is that we used to re-render the text at the bottom each time you scrolled up, and could still do it pretty fast (not quite within 16.67 milliseconds, but we could have if computers had been as fast as today's); in the meantime, we seem to have forgotten how to do that. Although we also have more pixels now, which probably changes things.

> we used to re-render the text at the bottom each time you scrolled up, and could still do it pretty fast

If you go back far enough, IBM graphics cards had a text mode where text rendering was accelerated by the card: software would write an attribute byte and a data byte, and the card used a bitmapped font to render the text. VGA text mode at least could also use hardware windowing, so scrolling could be accelerated; not every system used it, but you can set the start address and a line number where it returns to the start of the data area. That makes it easy to scroll without having to rewrite the whole screen buffer at once. (If you set the stride to something that evenly divides the screen buffer and is at least as wide as your lines, it makes things even easier.)


I think many projects use the wrong architecture when it's possible for business code to block animations.

IMO all the "user" code should run in a dedicated thread, completely decoupled from the rendering loop. This code can publish changes to a scene tree, mutating it, starting animations, and so on, but these changes are ultimately asynchronous. You want to delete an element from a webpage, but it won't be deleted at that JS line; it'll be deleted at the next frame, or maybe the one after that, if the rendering thread is a bit busy right now.
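
A minimal sketch of that idea with what the web gives us today, using a Web Worker as the "user code" thread; the message shape and applyChange() are invented for illustration.

```js
// "Business" code runs off the main thread and only publishes change
// descriptions; the main thread drains and applies them once per frame.
const workerSrc = `
  // request a deletion; nothing on screen changes at this line
  postMessage({ op: "remove", id: "item-42" });
`;
const worker = new Worker(URL.createObjectURL(new Blob([workerSrc], { type: "text/javascript" })));

// Main/render thread: queue published changes, apply them per frame.
const pending = [];
worker.onmessage = e => pending.push(e.data);

function applyChange(change) {
  if (change.op === "remove") document.getElementById(change.id)?.remove();
}

function render() {
  for (const change of pending.splice(0)) applyChange(change); // this frame's batch
  requestAnimationFrame(render);
}
requestAnimationFrame(render);
```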

Animations must stay fluid and the UI must react to user input instantly. FPS must not drop.

Browser does it wrong. Android GUI API does it wrong. World of Warcraft addons do it wrong.


Which raises the question: if all of these projects got it "wrong", what's the chance that the "right" thing isn't right at all?

All animation is inherently discrete. No matter how many threads you have, there always has to be a final rendering thread, the thing that actually prepares the calls to the rendering backend. It always has to have frames, and in every frame, at timestamp T, it wants the world state at timestamp T. So the things that work on the world state have to prepare it as it was at T, not earlier, not later. You cannot completely decouple it.

In one of the game projects I worked on, the physics thread and the game thread actually were pretty decoupled: what the game thread did was extrapolate the world state from the information provided by physics, because it knew not only the positions of the physics objects but also their velocities. Can we make every web developer set velocities on their HTML elements explicitly? Probably not.
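
A sketch of that extrapolation idea in JS (the snapshot values and the element are invented): the render loop advances the last known position by velocity times elapsed time instead of blocking on a fresh physics update.

```js
// The render side keeps the last snapshot it got from "physics" (position,
// velocity, timestamp) and extrapolates forward by the elapsed time.
const sprite = document.createElement("div");
sprite.style.cssText = "position:absolute;width:30px;height:30px;background:crimson;";
document.body.appendChild(sprite);

let lastState = { x: 0, y: 50, vx: 120, vy: 30, time: performance.now() };

// Would be called whenever the physics thread/worker publishes a snapshot.
function onPhysicsUpdate(state) {
  lastState = state;
}

function renderFrame(now) {
  const dt = (now - lastState.time) / 1000;
  const x = lastState.x + lastState.vx * dt; // extrapolated from velocity
  const y = lastState.y + lastState.vy * dt;
  sprite.style.transform = `translate(${x}px, ${y}px)`;
  requestAnimationFrame(renderFrame);
}
requestAnimationFrame(renderFrame);
```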


That's basically what React Native does/did, and it's generally good, but it turns into a nightmare when you need to synchronize interactions between the two threads. 16 ms is a long time; if your UI manipulations eat up most of it, something is wrong. Entire video games can run essentially on one thread within that time, and they do far more.

Multithreading in the browser kinda sucks though: it's too slow to share significant data between workers (threads), and without SharedArrayBuffer you eat the serialisation costs.
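
For concreteness, the two options look roughly like this (the worker script name is hypothetical): plain postMessage pays a structured-clone copy per message, while a SharedArrayBuffer is shared memory and only a handle crosses the boundary.

```js
// Two ways to get a large array to a worker ("worker.js" is hypothetical):
const worker = new Worker("worker.js");

// 1) Default postMessage: the array is structured-cloned, i.e. copied,
//    on every message.
const copied = new Float64Array(1_000_000);
worker.postMessage(copied);

// 2) SharedArrayBuffer: both sides see the same memory, no per-message
//    copy (only available on cross-origin-isolated pages).
const sab = new SharedArrayBuffer(1_000_000 * Float64Array.BYTES_PER_ELEMENT);
const shared = new Float64Array(sab);
shared[0] = 42;          // visible to the worker without copying
worker.postMessage(sab); // only the handle crosses the boundary
```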

I’ve only ever seen competitive video games get the architecture right. They have the natural incentive of needing stable high FPS.

Random GUI apps aren't incentivized enough, so garbage leaks through. I die a bit every time a random GUI app stutters while drawing 2D boxes.


Not really something anyone can change at this point, given that the entire web API presumes an execution model where everything logically happens on the main thread (and code can and does expect to observe those state changes synchronously).

If that's wild to you, wait until you see video games.

Correction: an 8.33 ms frame. Screens are 120 Hz now :)

As an early 2000’s game dev - I weep.

Scrolling? Animations? LOL.



