Hacker News | Fix Your Timestep (gafferongames.com)
131 points by replyifuagree 7 days ago | 54 comments

The mixing of unfixed and fixed timesteps is, in my opinion, one of the greatest headaches in game development. Unity, and Godot for that matter, advise you to put all physics-related scripts in a FixedUpdate method and game logic scripts inside Update.

But it is never that simple. You have to process your input in Update and forward actions to FixedUpdate yourself if physics objects are affected. And this alone leads to numerous tricky problems.

And then everything goes downhill quickly if you have physics-related objects which also affect your game logic. What to do? Process everything in FixedUpdate, always? Well, say goodbye to immediately processed input.

Additionally, don't forget that you need to interpolate or extrapolate rigidbodies, otherwise you won't see them move smoothly if the physics world updates at 60 Hz and your screen at 144 Hz. And as soon as you do that, the representation on the screen does not always match the physics world, which is really bad for fast gameplay.

And if you do something wrong, your whole physics simulation can explode.

I haven't found a nice and simple solution to all of this, which works for all cases, even though I really tried [1] and I believe many games suffer from delayed input handling, micro-stuttering or unreliable physics due to mixing fixed and unfixed timesteps.

[1] https://www.zubspace.com/blog/smooth-movement-in-unity

Not using a fixed update rate is why I have to run frame-capping software on my fast PC in pretty much every game that is older than a few years.

Do not bother: Carmack couldn't get it right, Sweeney couldn't get it right, the guys at Monolith couldn't get it right; pretty much nobody could get it right. Even if you think your code works, it won't. Floating point doesn't like operations like "thing += delta * speed", especially when delta varies from frame to frame.

Use a fixed time step for all updates (not just physics updates), use a high update rate like 60 Hz to avoid any perceptible delays, and interpolate the visible state in intermediate frames. If your game really needs immediate response (in practice this is only visible with mouse look on high-refresh-rate screens; other forms of input are not perceptible), do the visual change on the input event (e.g. rotate the camera when you move the mouse, play the sound effect when you hit fire, or whatever), but let the game logic still run at the fixed update rate.
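For reference, the loop described above is essentially the one from the linked article (which presents it in C++). A minimal Python sketch, with `integrate`, the per-frame times, and the returned tuples all being placeholders for illustration, not any engine's real API:

```python
DT = 1.0 / 60.0  # fixed simulation step

def run(frame_times, state0, integrate):
    """Advance state0 in fixed DT steps; return (prev, curr, alpha) per frame."""
    accumulator = 0.0
    prev_state = curr_state = state0
    frames = []
    for frame_dt in frame_times:          # measured wall-clock time per frame
        accumulator += frame_dt
        while accumulator >= DT:          # consume time in fixed-size chunks
            prev_state = curr_state
            curr_state = integrate(curr_state, DT)
            accumulator -= DT
        alpha = accumulator / DT          # fraction of a step left unconsumed
        # a renderer would draw prev_state blended toward curr_state by alpha
        frames.append((prev_state, curr_state, alpha))
    return frames
```

The key property: the simulation only ever sees DT, no matter how the frame times jitter, and the leftover `alpha` drives the visual interpolation.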

Consider anything else and you will have a broken game.

> The mixing of unfixed and fixed timesteps leads in my opinion to one of the greatest headaches in game development

Here's the scarier truth: bugs due to timestep handling like this are really just evidence that you're hitting the stiffness edge of the underlying differential equation. And you can't really fix that by adjusting the timestep, at least not with feasible performance.

You have to move to fancier solver algorithms if you want any hope of controlling these things in this regime. And even then the yet-scarier truth is that you'll never be able to control it even there, but at least you'll have hope of recognizing and understanding it.

Source: wrote a physics engine for FlightGear many years ago, based on a solid, simple RK4 engine, thought I understood everything perfectly and had picked all the right algorithms. And holy tuning hell, Batman... do you know how stiff landing gear is when you're just sitting motionless on the ground?

Integration algorithms, in practice, are easily the most frustratingly intractable area of numerics programming. And yet they come up in situations like this where they seem "obvious". It's a giant trap.

People like RK4 for arbitrary ODEs because it's fast and usually good enough, but it is emphatically not what you want for simulating Hamiltonian systems (the two-body problem, for example, has no bounded elliptical orbits when integrated with RK4; everything escapes eventually).

Usually what you want is some sort of simple lower-order symplectic scheme like leapfrog. If that is not feasible for efficiency reasons, a volume-preserving scheme is often sufficient (again, this produces noticeable errors in solutions of the two-body problem in the form of orbit precession, but it at least gets the boundedness of orbits right).

Yeah, I haven't touched this world for over a decade, don't read those papers, had to look up "symplectic" to be sure I was remembering the right thing... and I am still like 80% sure you're full of it. There are no easy solutions in this space. The routine physics of the real world that the game developers in the linked article are struggling with is simply not susceptible to error-free integration, period.

The fact that you're talking about orbits and 1/r potential fields tells me you haven't really grokked this. The example above wasn't a joke: you can get the two-body problem nailed down perfectly and your code will still fall over when a character stubs their toe or bounces off a wall.

Yeah, most game engine physics systems tend to use semi-implicit Euler, as it's both extremely cheap and the results are good enough. Other integrators tend to be used in specific circumstances, for example velocity Verlet for cloth sim.
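The difference is easy to demonstrate numerically. A small sketch (plain Python, textbook update rules) comparing the two integrators on an undamped spring: explicit Euler pumps energy into the system every step, while semi-implicit (symplectic) Euler keeps it bounded.

```python
def explicit_euler(x, v, dt, steps, k=1.0):
    """Spring x'' = -k x; both updates use the old state."""
    for _ in range(steps):
        x, v = x + v * dt, v - k * x * dt
    return x, v

def semi_implicit_euler(x, v, dt, steps, k=1.0):
    """Same spring, but position uses the freshly updated velocity."""
    for _ in range(steps):
        v = v - k * x * dt
        x = x + v * dt
    return x, v

def energy(x, v, k=1.0):
    return 0.5 * v * v + 0.5 * k * x * x
```

For this system explicit Euler multiplies the energy by exactly (1 + dt^2 k) per step, so it diverges no matter how small dt is; semi-implicit Euler's energy oscillates around the true value instead.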

On a related note, Blizzard added a feature to Overwatch about half a year ago that enables the processing of mouse input at 1000Hz! Or well, it doesn't process it at that rate, but it figures out where you were aiming when you fired, even if you fired right between two ticks (or frames).

The videos in their blog post [1] explain it much better than I can.

[1] https://us.forums.blizzard.com/en/overwatch/t/new-feature-hi...

Something that’s surprisingly tricky to get working well in Unity is to have the player hold an item in first-person perspective and still make that item a physics object. The camera will update at one rate and the physics objects will update at another. It’s easy to end up with an object that jitters horribly as you move it around.

Why do they recommend game logic in the non-fixed update?

A good example is moving the camera: let's say you have a camera set to follow a particular character (in some fancy way), but you only update it in FixedUpdate(). If your framerate is faster than the physics update, FixedUpdate() will not be called every frame, and your camera will get a visible stutter (it will essentially be "temporally aliased" with the physics updates). Therefore, camera movement updates should always go in Update(). This kind of reasoning applies to many "game logic" things.

There's a flip side to that though: if the stuff in the world consists of physics objects (especially your main character), and they only update in FixedUpdate(), then even if the camera moves smoothly, the objects will stutter. Hence, you need to do some kind of interpolation of their FixedUpdate() states as well.

The point is: this stuff is really, really hard to get right. There is no simple right answer.

On the other hand, if you update the camera per frame you'll end up with the shotgun bug in Deus Ex: The Fall when running with vsync turned off (see Total Biscuit's video on the game as an example of all sorts of bugs that exist because of timing issues from updating stuff per frame; note that TB didn't realize that).

If you want to follow a character in a fancy way (i.e. not locked), update the camera's target position in fixed steps and perform smooth interpolation between the last position and the current position for the frames between the updates.
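Stripped of Unity specifics, that pattern might look like this (the class and method names are invented for the sketch; in Unity this would live in FixedUpdate()/Update() on a C# MonoBehaviour):

```python
class SmoothedFollow:
    """Record target positions at the fixed rate, blend between them per frame."""

    def __init__(self, pos=0.0):
        self.prev = self.curr = pos

    def fixed_step(self, new_target):
        """Called at the fixed physics rate: shift current into previous."""
        self.prev = self.curr
        self.curr = new_target

    def render_pos(self, alpha):
        """Called every rendered frame; alpha in [0, 1] is the fraction of
        the fixed step already elapsed (accumulator / fixed_dt)."""
        return self.prev + (self.curr - self.prev) * alpha
```

Note this draws the camera one fixed step behind the newest simulated position; that latency is the price of interpolating rather than extrapolating.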

Update is bound to the framerate, and if you have a faster PC or a higher refresh rate you can perform more logic. So why limit yourself with a fixed timestep? But, well, nothing prevents you from doing game logic in FixedUpdate, and maybe it could even be beneficial. For example, you don't want your AI to run each frame.

But in my opinion Update should still be the place most of your logic resides. Even in the case where you don't want logic to run each frame, simply keep track of the time passed and run your logic at self-defined intervals from inside Update.

Update is the place you handle your input, and you can affect the next displayed frame immediately. The last part is really important, because you don't want the player to wait for the next physics step after pressing a button (but unfortunately in many games they do).

I think a large part of this problem comes from the fact that the Update/FixedUpdate logic of most game engines is very static. In many cases you can't really influence its behavior. Unity allows you to invoke the FixedUpdate cycle manually from inside Update, but that can lead to other problems.

This is completely wrong.

FixedUpdate can be called several times in one frame, or not at all, because it's designed to run at a consistent rate regardless of framerate.

Don't put game logic that involves collecting input in FixedUpdate or you will miss inputs and get all types of errant behavior, definitely don't put AI in there...


The rule of thumb: FixedUpdate is for per-unit-time effects, like X meters per second; Update is for per-frame ones.

(Really, that boils down to: FixedUpdate is for physics. If you want FixedUpdate-like behavior for something that isn't physics in Unity, use a coroutine or InvokeRepeating.)

Things in Update will happen less often on a slow computer and more often on a fast one; you don't want your physics to behave differently on a slow computer.

Aren't inputs usually on a queue? So you shouldn't miss them. And why shouldn't AI move at the same time as the physics? You don't want your AI to behave differently on a slow or a fast computer.

"Aren't inputs usually on a queue" isn't really an easy question to answer because there's so many levels of it (and I'm not saying that to be difficult)

But generally if we're talking about Unity and stuff like GetButton, no. You're polling for input at a specific point in time.

Like I said, FixedUpdate can be called multiple times a frame, or not at all, so there'll be entire frames where Update is called (and you'd have gotten input) but FixedUpdate isn't (and you miss that input).
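One common workaround, sketched in plain Python (the queue-and-drain structure is the point; the names are invented, and real Unity code would be C# polling Input.GetButtonDown in Update): buffer edge-triggered presses every frame, then drain them in the fixed step, so nothing is lost on frames where FixedUpdate doesn't run.

```python
from collections import deque

class InputBuffer:
    def __init__(self):
        self.pending = deque()

    def update(self, pressed_this_frame):
        """Per-frame: record presses the moment they are seen."""
        if pressed_this_frame:
            self.pending.append("jump")

    def fixed_update(self):
        """Fixed-rate: consume everything buffered since the last fixed step."""
        actions = list(self.pending)
        self.pending.clear()
        return actions
```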


And AI moving and actually running AI are two different things.

You want the AI's physics-based effects to be in FixedUpdate like any other physics, but you don't want your AI to be in FixedUpdate.

A lot of people don't seem to get it: FixedUpdate isn't magical. It's trying to run at a target rate, but if your game can't do its work at that rate, there's no way to "cheat" time.

So it will try to run 2 times in a row for one frame if your frames are slow enough. And if your FixedUpdate is slow enough, it won't be able to run those 2 times quickly enough, and then the next frame it needs 3 runs to "average out" to the correct number of calls per frame.

That creates a cascading effect that leaves it trying to run as many FixedUpdates as it can.

It also leads to a weird sort of "jitter" where FixedUpdate is called several times, and your AI in turn updates several times, before Update is called once. So it "sees" the exact same world view multiple times in a row and acts on it (unless all your game's logic is in FixedUpdate, which you definitely shouldn't do). Then the next frame, because the engine "caught up" on FixedUpdate calls, FixedUpdate isn't called at all while Update is, and your AI doesn't act on the new world view from Update's logic until the next frame.

Instead, if you want a repeatable task updating independently of frame rate, like AI, in Unity use a coroutine or InvokeRepeating.

I'm thinking, though, that if the AI runs less often, it might make different decisions depending on how fast the PC is. So just like doing physics outside FixedUpdate could have players jumping lower or higher depending on how fast their PC is, maybe you'll be making the AI dumber or smarter (due to it thinking more or less often).

The big problem here is that "AI" is an incredibly unconstrained problem space for games; "AI" could just be lerping towards the player, or it could be a state machine that doesn't have a concept of being manually stepped forward.

In fact, I don't think I've ever worked with a game AI that had a concept of being manually stepped forward. But more importantly, I can't think of any common game AI that benefits from "more thinking time" on a continuous basis. For example, there are games like chess where more time can result in a better solution, but you wouldn't be using FixedUpdate anyway; you'd manually start "thinking" and use a timer to end "thinking".

"Continuous" AI I've worked with are usually a state machine (or some loose definition of one) that will respond to events and change state, and while in a given state they're usually running a certain set of code on each update.

There are parts of that update code you should put in FixedUpdate, like a raycast. You're (ideally) moving the colliders being cast against during FixedUpdate, so if you do the raycast in Update, you can miss raycasts that should have hit.


But the actual decision making shouldn't be in FixedUpdate. FixedUpdate should set some flag or fire some event that is then handled in Update.

The AI won't be dumber or smarter based on FPS in any appreciable way (and if that's a concern, you could still use coroutines). The parts of it that react to physics will already be happening as fast as possible since they're in FixedUpdate; the actual transition of states will happen on the very next frame rendered after this detection.

If anything, putting everything in FixedUpdate means there are times when the user gets to provide input (the frame renders and Update is called) but the AI doesn't react, because FixedUpdate never got called.
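The "set a flag in FixedUpdate, act on it in Update" pattern described above can be sketched in plain Python (the class and the `raycast_hit_player` argument are invented for illustration; real Unity code would be C#):

```python
class EnemySensor:
    def __init__(self):
        self.player_seen = False

    def fixed_update(self, raycast_hit_player):
        """Physics-rate: just record what the (hypothetical) raycast saw."""
        if raycast_hit_player:
            self.player_seen = True

    def update(self):
        """Frame-rate decision making: react once, then clear the flag."""
        if self.player_seen:
            self.player_seen = False
            return "attack"
        return "idle"
```

Even if FixedUpdate runs several times between frames, the decision happens exactly once, on the next rendered frame.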

Honestly I kind of like the idea of (optionally) allowing the AI to use every bit of the player's silicon against them.

Physics simulation is really interesting. I had the fortune of working on a physics simulation engine in industry (although the company culture was dysfunctional, that’s another story), and I often hand-roll my physics simulation code in games that I make as a hobby, or use off the shelf engines like Unity’s or P2.

The timestep naturally has to be small enough that you can get good results with values that change quickly. The simulator I worked on professionally used adaptive step size. For these applications where accuracy is paramount, the adaptive step size gets you there with a smaller computational budget. For games, you often want a very repeatable physics simulation. For example, if the player can normally jump onto a 3.5m ledge, and the step size changes, maybe the player can now jump 3.6m, or maybe only 3.4m. This can frustrate the player (or it can be exploited in speedruns).
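To make the repeatability point concrete, here's a toy jump integrated with explicit Euler, where the apex height is a function of the step size (all numbers are made up for the illustration; a real engine's integration differs):

```python
def jump_apex(dt, v0=10.0, g=20.0):
    """Height reached by a jump with initial speed v0 under gravity g,
    integrated naively with step dt."""
    x, v = 0.0, v0
    apex = 0.0
    while v > 0.0:           # rise until vertical velocity is spent
        x += v * dt          # explicit Euler: position with old velocity...
        v -= g * dt          # ...then apply gravity
        apex = max(apex, x)
    return apex
```

The exact answer is v0^2 / (2g) = 2.5, but the coarser the step, the higher the jump; exactly the "3.5m ledge becomes 3.6m" effect described above.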

I’d also say that for most games, physical accuracy is very rarely a useful tool. That’s just my experience. 90% of the time, I just want velocity, momentum, and collisions. If you are using an existing engine like Unity, this means, among other things, that you probably don’t want to make everything in your game a Rigidbody. The physics simulator is great for things like knocking crates over and making characters ragdoll, but for your characters in-game, try doing the physics yourself in a custom character behavior instead of using Rigidbody.

You might be surprised that a ton of examples from the Unity site are done like this.

There's a billiards simulator/game[1] that uses event prediction instead of step-based physics for realistic and reproducible results. It solves the equations of motion to find the next event, animates to that point, and continues until nothing is moving. This is feasible since there are only 16 balls and 1 cue tip, but I imagine that as computational power continues to increase, the scale of such simulations will increase too. I'm excited to see what's next :)

[1] - I read a paper that specifically talked about step-based vs event-prediction for billiards simulation, but I can't find it now :/ Virtual Pool 4 might use the system for its physics, but don't quote me on that!
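A sketch of what event prediction looks like in the simplest case, two discs on a line (not taken from any particular billiards paper; just the standard quadratic for the time of first contact): instead of stepping, solve for the exact moment the discs touch and jump straight to it.

```python
import math

def collision_time(p1, v1, p2, v2, r1, r2):
    """1D positions/velocities; return time of first contact, or None."""
    dp = p2 - p1
    dv = v2 - v1
    r = r1 + r2
    # contact when |dp + dv*t| = r:  (dv^2) t^2 + 2 dp dv t + dp^2 - r^2 = 0
    a = dv * dv
    b = 2.0 * dp * dv
    c = dp * dp - r * r
    if a == 0.0:
        return None                      # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                      # the discs never come into contact
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # smaller root = first contact
    return t if t >= 0.0 else None
```

A full event-driven simulator keeps a priority queue of such predicted events, advances exactly to the earliest one, resolves it, and re-predicts.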

P.S. - Unity has a pretty handy built-in CharacterController class that lets you move around while respecting collisions; you just need to add your own gravity. Also, for more advanced character movement, Catlike Coding has a great series of tutorials: https://catlikecoding.com/unity/tutorials/movement/

That’s sort of the way things are done in industry.

One strong example is in mixed-signal electronic simulation. Your digital portion of the system describes digital outputs in terms of digital inputs and gate delays, more or less. Changing the digital inputs results in a queue of events.

So you run the analog portion of the system until the pending digital event, process the digital event, and then continue the analog simulation.

> The problem is that the behavior of your physics simulation depends on the delta time you pass in. The effect could be subtle as your game having a slightly different “feel” depending on framerate

Is THIS why the physics in GTA San Andreas felt different on each console, and WAY different on PC? I have always suspected something like this but never knew for certain.

Most likely not. To make cross platform multiplayer possible, they practically have to enforce the same physics timesteps on all platforms.

What most people perceive as a different feel is the input to display latency. On a TV with a controller that can easily be 4x higher than with a mouse on a PC screen.

In the early days, Microsoft did a famous study of how strongly input latency affects the way people use office software. If I recall correctly, 50ms of additional latency would lead to significantly less exploratory behavior.

And I think we all know that intuitively from the web. When a bloated newspaper site needs 5 seconds to load, you start to consciously consider if you really want to click that link, as opposed to just checking out everything slightly related on Wikipedia.

GTA San Andreas (2004) did not have cross-platform multiplayer, so OP's assessment may be accurate.

2004 is a long time ago. I don't remember threading being discussed anywhere when I started learning game development at a hobby and university level at that time. Probably because multiple cores weren't a thing, so threading would mostly add complexity and cost performance. Core 2 was released in 2006: https://en.wikipedia.org/wiki/Intel_Core_2

IIRC, all the places I learned from told you to throw everything into the game loop and remember to use the delta since the last frame in all calculations.

You are correct. I didn't consider that it was such an old game.

In current times with digital drivers, enormous TVs, triple-buffering, and wireless controllers, I very much agree.

Would you know what contributed to the input-to-display latency discrepancy back in 2004? I would imagine a CRT TV vs a monitor to have much more similar latency, as well as a wired gamepad versus a wired KB+M. But I'm far from a specialist in this kind of thing.

> Microsoft did a famous study of how strongly input latency affects the way how people use office software

Would be quite interesting to look at that, if anyone has a link.

Sure, duckduckgo.com for example has ;-) But in that paper [0] they talk about 500ms, not 50. The rest of the keywords sound so similar and it is the only match, so I think I got the right one.

[0] https://idl.cs.washington.edu/files/2014-Latency-InfoVis.pdf

That's not the study that I was referring to, but I cannot find the original one, either. If my memory serves me correctly, they compared PS/2 with USB and they showed white squares on a CRT screen, so I believe it was from way before 2014.

Indeed, sounds similar, though 2014 isn't quite ‘early years’ in my book, and MS doesn't seem to be involved here.

On my feeble attempts at searching, the results were instead filled with laments about high values of latency on touchscreens and modern PCs.

did you just tell him to let me duckduckgo that for you?

Possibly. In the game Halo 2, there was an exploitable physics glitch called "superbouncing" that allowed the player to jump very high.

The effectiveness of this exploit differed on PAL (25fps) and NTSC (30fps) systems, with the exploit being easier to perform on the former.

When Microsoft ported the game to the Xbox One, they raised the framerate to 60fps; this made it even harder and less effective to perform in the remastered versions of the game.

I remember this very fondly. I actually spent a long time trying to perform the superbounces as a teen; I pulled it off maybe twice. I was on NTSC and did not know there was a difference.

Yes, 3d era GTAs all have physics tied to the frame rate. The default setting on PC is to lock the frame rate to 30 or 25 fps (depends on which game) and you get problems if you disable that. This video shows some examples: https://www.youtube.com/watch?v=l0LwTzxhgyM

This is interesting and explains behavior I often encountered. I am no game developer, but I dabbled with different combinations of render and physics engines. While the times when game speed was dependent on framerate are probably over, this disentanglement can be a problem for physics simulations.

In many cases rendering is done mainly on the GPU while physics is often CPU-based, although that isn't necessarily true anymore either. But I am sure you can break quite a few games if one component is disproportionately slower or faster than the other.

I wonder if physics engines might profit from mechanisms we know from video compression, having P- and I-frames, so that the errors from interpolation like suggested as a fix in the article could be corrected. If I understand it correctly and there is need for such a correction mechanism that is.

I read this article for the first time a few months ago while creating a rudimentary sound engine. The article makes a lot of sense, but I struggled to map the concepts to sound generation. I initially tried to replace the render step from the article (rendering a single display frame) with rendering a single fragment of sound buffer (called a period in ALSA, the unit of transfer to the sound card). My first mistake was ignoring the interpolation part of the article. The output was very choppy whenever the sound card transfers were too coarse.

And then I learned that I can ask the sound card how much sound buffer it wants, instead of always transferring a fixed amount. In other words, the rate of the main loop did not need to match the rate of transfers to the sound card.

Now I've rewritten the main loop as a series of inner loops, first running a variable amount of simulation at simulation rate, then rendering a variable amount of sound buffer at output sample rate (not fragment transfer rate, like I was doing before). This seems to be working better.

I still need to think about the spiral of death. Perhaps an upper limit on simulation would be enough?

(Any advice welcome!)
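Re the spiral of death: yes, the usual fix (it's in the linked article too) is to clamp how much simulated time a single frame is allowed to owe, so one slow frame can't demand ever more catch-up steps. A sketch of just the bookkeeping (the constants here are illustrative, not tuned for audio):

```python
DT = 1.0 / 60.0              # fixed simulation step
MAX_FRAME_TIME = 0.25        # never repay more than this per frame

def steps_this_frame(accumulator, frame_time):
    """Return (number of fixed steps to run, leftover accumulator)."""
    accumulator += min(frame_time, MAX_FRAME_TIME)   # clamp the debt
    steps = int(accumulator // DT)
    accumulator -= steps * DT
    return steps, accumulator
```

The trade-off: on a very slow frame the simulation falls behind real time instead of locking up. For audio that means an audible glitch, but the process recovers.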

As someone who's been learning to develop games mostly from the fundamentals I've learned the hard way that this is really important. If you try to run your game updates at the same time delta as your last frame draw then you're going to have a really bad time debugging anything. A fixed timestep is a must.

This is true. For all simulations.

Not true for non-interactive simulations, some of which e.g. as used in Systems Biology, choose the time-step to the next event from a probability distribution. That is a huge speed-up compared to a fixed time-step simulation in which you choose whether or not an event occurs in the current time-step randomly.


I get the impression that most games are using some sort of hybrid approach. They run the game simulation at a fixed "tickrate", and then add some hacks on top to make the visuals match the physics in certain areas. For example, your mouse position and whether or not you fired your weapon can be updated between simulation iterations, to make shooting and killing smoother.

It's not simple in either direction. You have some events where responding faster is considered better (most notably detecting input) and some where you need a consistent framerate to avoid issues (e.g. simulating drag is really sensitive to framerate and will behave quite differently as a game slows down). A fixed timestep also causes the game to slow down if it can't hit the desired update rate.

FWIW neither Unity nor UE4 use a fixed timestep for their default update and tick.

My only real experience with this type of logic has been with training simulations, where you expressly don't want real time, just real-ish events that happen in an appropriate order. If you're an incident commander, then you aren't going to wait 5-10 minutes for additional units to "arrive" in a training simulation (60-90 seconds is good enough). Same for waiting for ladder companies to check a building, etc.

I find real-time games very interesting technically and conceptually, just no real interest in working on them. Understanding the technical approaches can be very enlightening for a number of different use cases.

Jonathan Blow has an interesting video on issues with framerate and how it was handled in Braid at https://www.youtube.com/watch?v=fdAOPHgW7qM . As someone working on web stuff, I always find it fascinating to see all the complexity that arises in other domains, in games in particular.

Me too! I went deep into that rabbit hole and ended up with a presentation on things webdev can learn from gamedev:


The only other optimization I would add to this is that you don't need to use the same 'consumer' value for all parts of your game loop. Collision detection you probably want to be tight, so that fast-moving objects don't warp through thin objects like walls. But enemy AI may be more tolerant of larger chunks.

The only way to reliably prevent fast-moving objects from passing through thin walls (or even thick walls) is to use continuous collision detection. Even if you step the physics engine at a ridiculously high rate (say 1000 frames per second), objects will still pass through each other if they're moving fast enough.

You can always just limit the maximum velocity of objects.

I guess most games cap velocities for most objects if not all, so I doubt that is an issue.

That's not practical. If you cap velocities, you'll also need to enforce a minimum size for all objects. If you don't do this, it will still be possible for objects to pass through each other. So you can either have really small but really slow objects, or really large but really fast objects.

What most games do is enable continuous collision detection for important fast objects only, like the player or projectiles.
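A toy demonstration of the tunneling problem and the swept fix (the wall position, thickness, and speeds are arbitrary): a fast point sampled only at step boundaries can land on either side of a thin wall without ever being "inside" it, while a swept check looks at the whole segment travelled.

```python
WALL = 5.0                            # thin wall at x = 5

def discrete_hit(x, v, dt, steps):
    """Sample positions only at step boundaries (can miss thin walls)."""
    for _ in range(steps):
        x += v * dt
        if abs(x - WALL) < 0.01:      # wall treated as 0.02 units thick
            return True
    return False

def swept_hit(x, v, dt, steps):
    """Check whether the segment [x, x + v*dt] crosses the wall."""
    for _ in range(steps):
        nx = x + v * dt
        if min(x, nx) <= WALL <= max(x, nx):
            return True
        x = nx
    return False
```

Real engines do the same thing in 3D with swept spheres or capsules, which is why it's usually reserved for bullets and other fast movers.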

Gaffer on Games was an invaluable resource when I was at University - I remember this article fondly.

If I'm not mistaken, the first arcade game to fully decouple time step and frame rate was I, Robot in 1984. No physics, but the rasterization was variable enough to require this.

I figure the major reason they don't just run DEVS under the hood is that they don't know about it?

This has a lot in common with how you design safety systems by the way ;-)...
