But it is never that simple. You have to process your input in Update and forward actions to FixedUpdate yourself whenever physics objects are affected, and this alone leads to numerous tricky problems.
And then everything goes downhill quickly if you have physics-driven objects that also affect your game logic. What to do? Always process everything in FixedUpdate? Well, say goodbye to immediately processed input.
Additionally, don't forget that you need to interpolate or extrapolate rigidbodies, otherwise you won't see them move smoothly if the physics world updates at 60 Hz and your screen at 144 Hz. And as soon as you do that, the representation on the screen no longer always matches the physics world, which is really bad for fast gameplay.
And if you do something wrong, your whole physics simulation can explode.
I haven't found a nice and simple solution to all of this that works for all cases, even though I really tried. I believe many games suffer from delayed input handling, micro-stuttering, or unreliable physics due to mixing fixed and unfixed timesteps.
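For what it's worth, the usual workaround for the input-forwarding part looks something like this in Unity: buffer the input in Update and consume it on the next physics step. A minimal sketch, assuming a Rigidbody-based jump (the class name, field names, and force value are illustrative):

```csharp
using UnityEngine;

// Sketch of the "buffer input in Update, apply it in FixedUpdate" pattern.
public class BufferedJump : MonoBehaviour
{
    Rigidbody rb;
    bool jumpQueued;

    void Awake() => rb = GetComponent<Rigidbody>();

    void Update()
    {
        // GetButtonDown is only reliable here, polled once per rendered frame.
        if (Input.GetButtonDown("Jump"))
            jumpQueued = true;
    }

    void FixedUpdate()
    {
        // Consume the buffered input on the next physics step.
        if (jumpQueued)
        {
            jumpQueued = false;
            rb.AddForce(Vector3.up * 5f, ForceMode.VelocityChange);
        }
    }
}
```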
Do not bother: Carmack couldn't get it right, Sweeney couldn't get it right, the guys at Monolith couldn't get it right; pretty much nobody could get it right. Even if you think your code works, it won't. Floating point doesn't like operations like "thing += delta * speed", especially when delta varies from frame to frame.
Use a fixed time step for all updates (not just physics updates), use a high update rate like 60 Hz to avoid any perceptible delays, and interpolate the visible state in intermediate frames. If your game really needs immediate response (in practice this is only visible with mouse look on high-refresh-rate screens; other forms of input are not perceptible), do the visual change on the input event (e.g. rotate the camera when you move the mouse, play the sound effect when you hit fire, or whatever) but let the game logic still run at the fixed update rate.
Consider anything else and you will have a broken game.
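For reference, here's the classic shape of that loop (in the spirit of Glenn Fiedler's "Fix Your Timestep!"), sketched in plain C#. State, Simulate, Render, and Interpolate are illustrative stand-ins for your game's own types and functions, not any engine's API:

```csharp
using System.Diagnostics;

static class GameLoop
{
    struct State { public double X, V; }

    // Stand-in game logic + physics advanced by exactly dt each call.
    static State Simulate(State s, double dt) { s.X += s.V * dt; return s; }

    static State Interpolate(State a, State b, double alpha) =>
        new State { X = a.X + (b.X - a.X) * alpha, V = b.V };

    static void Render(State s) { /* draw the interpolated state */ }

    public static void Run()
    {
        const double dt = 1.0 / 60.0;       // fixed simulation step
        double accumulator = 0.0;
        var clock = Stopwatch.StartNew();
        double previousTime = clock.Elapsed.TotalSeconds;
        State previous = default, current = new State { V = 1.0 };

        while (true)
        {
            double now = clock.Elapsed.TotalSeconds;
            accumulator += now - previousTime;
            previousTime = now;

            while (accumulator >= dt)       // advance logic in fixed steps
            {
                previous = current;
                current = Simulate(current, dt);
                accumulator -= dt;
            }

            // Blend the last two fixed states for smooth intermediate frames.
            Render(Interpolate(previous, current, accumulator / dt));
        }
    }
}
```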
Here's the scarier truth: timestep-handling bugs like these are really just evidence that you're hitting the stiffness edge of the underlying differential equation.
And you can't really fix that by tuning the timestep, at all. At least not with feasible performance.
You have to move to fancier solver algorithms if you want any hope of controlling these things in this regime. And even then the yet-scarier truth is that you'll never be able to control it even there, but at least you'll have hope of recognizing and understanding it.
Source: I wrote a physics engine for FlightGear many years ago, based on a solid, simple RK4 integrator, and I thought I understood everything perfectly and had picked all the right algorithms. And holy tuning hell, Batman... did you know how stiff landing gear are when you're just sitting motionless on the ground?
Integration algorithms, in practice, are easily the most frustratingly intractable area of numerics programming. And yet they come up in situations like this where they seem "obvious". It's a giant trap.
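To make the stiffness point concrete, here's a toy C# snippet with made-up but plausible numbers: a very stiff spring (think landing gear at rest) integrated with explicit Euler at a typical game timestep. For this undamped system each explicit step amplifies the state by roughly a factor of ω·dt ≈ 17, so the simulation "explodes" within a handful of steps; a stable treatment needs either a far smaller dt or a semi-implicit/implicit method:

```csharp
// Stiff spring: omega = sqrt(k/m) = 1000 rad/s, i.e. ~160 Hz natural frequency.
double k = 1_000_000.0, m = 1.0;
double x = 0.01, v = 0.0;            // 1 cm initial compression, at rest
double dt = 1.0 / 60.0;              // a typical game timestep

for (int i = 0; i < 10; i++)
{
    double a = -(k / m) * x;         // spring acceleration from current position
    x += v * dt;                     // explicit (forward) Euler update
    v += a * dt;
    System.Console.WriteLine($"step {i}: x = {x:E3}");  // |x| grows without bound
}
```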
Usually what you want is some sort of simple lower-order symplectic scheme like leapfrog (see the sketch below). If, however, that is not feasible for efficiency reasons, a volume-preserving scheme is often sufficient (this still produces noticeable errors in solutions of the two-body problem, in the form of orbit precession, but at least it gets the boundedness of orbits right).
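For the curious, leapfrog in its velocity-Verlet form is only a few lines. This sketch assumes the acceleration depends on position only, which is what makes the scheme symplectic; Accel is an illustrative stand-in force law:

```csharp
// Leapfrog (velocity-Verlet form) for one degree of freedom.
static double Accel(double x) => -x;   // e.g. a unit harmonic potential

static void Step(ref double x, ref double v, double dt)
{
    double a0 = Accel(x);
    x += v * dt + 0.5 * a0 * dt * dt;  // drift: advance position
    double a1 = Accel(x);
    v += 0.5 * (a0 + a1) * dt;         // kick: velocity from averaged acceleration
}
```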
The fact that you're talking about orbits and 1/r potential fields tells me you haven't really grokked this. The example above wasn't a joke: you can get the two-body problem nailed down perfectly and your code will still fall over when a character stubs their toe or bounces off a wall.
The videos in their blog post explain it much better than I can.
There's a flip-side to that though: if the objects in the world are physics objects (especially if one of them is your main character), and they only update in FixedUpdate(), then even if the camera moves smoothly, the objects will stutter. Hence, you need to do some kind of interpolation of them between FixedUpdate() steps as well.
The point is: this stuff is really, really hard to get right. There is no simple right answer.
If you want to follow a character in a fancy way (i.e. not locked), update the camera's target position in fixed steps and perform smooth interpolation between the last position and the current position for the frames between the updates.
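Something like this sketch, in Unity terms; followed, offset, and smoothing are illustrative names, and the exponential blend is just one reasonable, framerate-independent choice:

```csharp
using UnityEngine;

// The camera's target advances in fixed steps; the camera blends toward
// it every rendered frame.
public class SmoothFollowCamera : MonoBehaviour
{
    public Transform followed;          // e.g. the player's rigidbody object
    public Vector3 offset = new Vector3(0f, 3f, -6f);
    public float smoothing = 10f;

    Vector3 targetPosition;

    void FixedUpdate()
    {
        // Sample the physics-driven object at the fixed rate.
        targetPosition = followed.position + offset;
    }

    void LateUpdate()
    {
        // Interpolate toward the last fixed-step sample each frame.
        transform.position = Vector3.Lerp(
            transform.position, targetPosition,
            1f - Mathf.Exp(-smoothing * Time.deltaTime));
    }
}
```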
But in my opinion Update should still be the place where most of your logic resides. Even in the case where you don't want logic to run each frame, simply keep track of the time passed and run your logic at self-defined intervals from inside Update.
Update is the place where you handle your input, and you can affect the next displayed frame immediately. That last part is really important, because you don't want the player to wait for the next physics step after they press a button (but unfortunately many games do).
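A minimal sketch of that interval pattern; the interval value and RunLogic are illustrative:

```csharp
using UnityEngine;

// Run logic at a self-defined interval from inside Update.
public class IntervalLogic : MonoBehaviour
{
    public float interval = 0.25f;   // run logic four times per second
    float elapsed;

    void Update()
    {
        elapsed += Time.deltaTime;
        while (elapsed >= interval)  // catch up if a frame took long
        {
            elapsed -= interval;
            RunLogic();
        }
    }

    void RunLogic() { /* logic that doesn't need to run every frame */ }
}
```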
I think a large part of this problem comes from the fact that the Update/FixedUpdate logic of most game engines is very static. In many cases you can't really influence its behavior. Unity allows you to invoke the FixedUpdate cycle manually from inside Update, but that can lead to other problems.
FixedUpdate can be called several times in one frame or not at all, because it's designed to run at a consistent rate in game time regardless of framerate.
Don't put game logic that involves collecting input in FixedUpdate, or you will miss inputs and get all kinds of errant behavior, and definitely don't put AI in there...
The rule of thumb: FixedUpdate is for per-unit-time behavior, like moving X meters per second; Update is for per-frame behavior.
(Really that boils down to: FixedUpdate is for physics. If you want FixedUpdate-like timing for something that isn't physics in Unity, use a coroutine or InvokeRepeating.)
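Both alternatives sketched side by side; TickAI and the 0.1 s cadence are illustrative:

```csharp
using System.Collections;
using UnityEngine;

// Two non-physics ways to tick at a fixed cadence without FixedUpdate.
public class AiTicker : MonoBehaviour
{
    void Start()
    {
        // Option 1: a coroutine with its own cadence.
        StartCoroutine(TickLoop());

        // Option 2: InvokeRepeating(methodName, firstDelay, repeatRate).
        // InvokeRepeating(nameof(TickAI), 0f, 0.1f);
    }

    IEnumerator TickLoop()
    {
        while (true)
        {
            TickAI();
            yield return new WaitForSeconds(0.1f);
        }
    }

    void TickAI() { /* decision-making that shouldn't depend on framerate */ }
}
```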
Things in Update will happen less often on a slow computer and more often on a fast one; you don't want your physics to behave differently on a slow computer.
But generally if we're talking about Unity and stuff like GetButton, no. You're polling for input at a specific point in time.
Like I said, FixedUpdate can be called multiple times a frame, or not at all, so there'll be entire frames where Update is called (and you'd have gotten input) but FixedUpdate isn't (and you miss that input).
And AI moving and actually running AI are two different things.
You want the AI's physics-based effects to be in FixedUpdate like any other physics, but you don't want your AI to be in FixedUpdate.
A lot of people don't seem to get that FixedUpdate isn't magical. It's trying to run at a target rate, but if your game can't do its work at that rate, there's no way to "cheat" time.
So it will try to run twice in a row for one frame if your frames are slow enough. And if your FixedUpdate itself is slow enough, it won't manage those two runs quickly enough, and then the next frame it needs three runs to "average out" to the correct number of calls per frame.
That creates a cascading effect that leaves it trying to run as many FixedUpdates as it can.
It also leads to a weird sort of "jitter" where FixedUpdate is called several times, and your AI in turn updates several times, before Update is called once. So the AI "sees" the same exact world view multiple times in a row and acts on it (unless all your game's logic is in FixedUpdate, which you definitely shouldn't do). Then the next frame, because the engine "caught up" on FixedUpdate calls, FixedUpdate isn't called at all while Update is, and your AI doesn't act on the new world view from Update's logic until the frame after that.
Instead, if you want a task that repeats independently of framerate, like AI, use a coroutine or InvokeRepeating in Unity.
In fact, I don't think I've ever worked with a game AI that had a concept of manually being stepped forward. But more importantly, I can't think of any common game AI that benefits from "more thinking time" on a continuous basis. For example, there are games like chess where more time can result in a better solution, but you wouldn't be using FixedUpdate for that anyway; you'd manually start "thinking" and use a timer to end "thinking".
"Continuous" AI I've worked with is usually a state machine (or some loose definition of one) that responds to events and changes state, and while in a given state it usually runs a certain set of code on each update.
There are parts of that update code you should put in FixedUpdate, like a raycast. You're (ideally) moving the colliders being cast against during FixedUpdate, so if you do the raycast in Update, you can miss raycasts that should have hit.
But the actual decision-making shouldn't be in FixedUpdate. FixedUpdate should set some flag or fire some event that is then handled in Update.
The AI won't be dumber or smarter based on FPS in any appreciable way (and if that's a concern you could still use coroutines). The parts of it that react to physics will already be happening as fast as possible since they're in FixedUpdate, and the actual transition of states will happen on the very next frame rendered after the detection.
If anything, putting everything in FixedUpdate means there are times when the user gets to provide input (a frame renders and Update is called) but the AI doesn't react, because FixedUpdate never got called.
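Putting those points together, a minimal sketch of that split; sightRange, the "Player" tag, and the state transition are illustrative:

```csharp
using UnityEngine;

// Physics queries run in FixedUpdate and set a flag; the state machine
// reacts to the flag in Update.
public class EnemySensor : MonoBehaviour
{
    public float sightRange = 20f;
    bool playerSpotted;

    void FixedUpdate()
    {
        // Cast while colliders are in their physics-step positions.
        if (Physics.Raycast(transform.position, transform.forward,
                            out RaycastHit hit, sightRange))
            playerSpotted = hit.collider.CompareTag("Player");
    }

    void Update()
    {
        // Decision-making runs per frame, using the latest physics result.
        if (playerSpotted)
        {
            playerSpotted = false;
            /* transition the AI state machine, e.g. to a Chase state */
        }
    }
}
```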
The timestep naturally has to be small enough that you can get good results with values that change quickly. The simulator I worked on professionally used adaptive step size. For these applications where accuracy is paramount, the adaptive step size gets you there with a smaller computational budget. For games, you often want a very repeatable physics simulation. For example, if the player can normally jump onto a 3.5m ledge, and the step size changes, maybe the player can now jump 3.6m, or maybe only 3.4m. This can frustrate the player (or it can be exploited in speedruns).
I’d also say that for most games, physical accuracy is very rarely a useful tool. That’s just my experience. 90% of the time, I just want velocity, momentum, and collisions. If you are using an existing engine like Unity, this means, among other things, that you probably don’t want to make everything in your game a Rigidbody. The physics simulator is great for things like knocking crates over and making characters ragdoll, but for your characters in-game, try doing the physics yourself in a custom character behavior instead of using Rigidbody.
You might be surprised that a ton of examples from the Unity site are done exactly like this.
 - I read a paper that specifically talked about step-based vs. event-prediction approaches for billiards simulation, but I can't find it now :/ Virtual Pool 4 might use that system for its physics, but don't quote me on that!
P.S. - Unity has a pretty handy built-in CharacterController class that lets you move around while respecting collisions -- you just need to add your own gravity. Also, for more advanced character movement, Catlike Coding has a great series of tutorials: https://catlikecoding.com/unity/tutorials/movement/
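A minimal sketch of that setup; the speed and gravity values are illustrative:

```csharp
using UnityEngine;

// CharacterController movement with hand-rolled gravity.
[RequireComponent(typeof(CharacterController))]
public class SimpleMover : MonoBehaviour
{
    public float speed = 5f;
    public float gravity = -9.81f;

    CharacterController controller;
    float verticalVelocity;

    void Start() => controller = GetComponent<CharacterController>();

    void Update()
    {
        Vector3 move = new Vector3(Input.GetAxis("Horizontal"), 0f,
                                   Input.GetAxis("Vertical")) * speed;

        // CharacterController resolves collisions but not gravity,
        // so we integrate the vertical velocity ourselves.
        verticalVelocity = controller.isGrounded ? -1f
                         : verticalVelocity + gravity * Time.deltaTime;
        move.y = verticalVelocity;

        controller.Move(move * Time.deltaTime);
    }
}
```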
One strong example is in mixed-signal electronic simulation. The digital portion of the system describes digital outputs in terms of digital inputs and gate delays, more or less. Changing the digital inputs results in a queue of events.
So you run the analog portion of the system until the pending digital event, process the digital event, and then continue the analog simulation.
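A rough C# sketch of that loop, with all names illustrative and the actual analog and digital models stubbed out:

```csharp
using System.Collections.Generic;

// Event-driven co-simulation: integrate the analog part only up to the
// next queued digital event, process the event, repeat.
class DigitalEvent { public string Signal; public bool Value; }

static class MixedSignalSim
{
    // Pending digital events, keyed (and ordered) by event time.
    static readonly SortedList<double, DigitalEvent> events = new();

    static void SimulateAnalog(double from, double to) { /* ODE stepping */ }
    static void ProcessDigital(DigitalEvent e) { /* may enqueue new events */ }

    public static void Run(double endTime)
    {
        double t = 0.0;
        while (t < endTime)
        {
            // Next breakpoint: the earliest pending event, or the end time.
            double next = events.Count > 0 ? events.Keys[0] : endTime;
            SimulateAnalog(t, next);
            t = next;
            if (events.Count > 0)
            {
                ProcessDigital(events.Values[0]);
                events.RemoveAt(0);
            }
        }
    }
}
```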
Is THIS why the physics in GTA San Andreas felt different on each console, and WAY different on PC? I have always suspected something like this but never knew for certain.
What most people perceive as a different feel is the input to display latency. On a TV with a controller that can easily be 4x higher than with a mouse on a PC screen.
In the early days, Microsoft did a famous study of how strongly input latency affects the way people use office software. If I recall correctly, 50ms of additional latency would lead to significantly less exploratory behavior.
And I think we all know that intuitively from the web. When a bloated newspaper site needs 5 seconds to load, you start to consciously consider if you really want to click that link, as opposed to just checking out everything slightly related on Wikipedia.
IIRC, all the places I learned from said to throw everything into the game loop and remember to use the delta since the last frame in all calculations.
Would you know what contributed to the input-to-display latency discrepancy back in 2004? I would imagine a CRT TV vs. a monitor to have much more similar latency, as well as a wired gamepad versus a wired KB+M. But I'm far from a specialist in this kind of thing.
Would be quite interesting to look at that, if anyone has a link.
On my feeble attempts at searching, the results were instead filled with laments about high values of latency on touchscreens and modern PCs.
The effectiveness of this exploit differed on PAL (25fps) and NTSC (30fps) systems, with the exploit being easier to perform on the former.
When Microsoft ported the game to the Xbox One, they raised the framerate to 60fps, which made the exploit even harder and less effective to perform on the remastered versions of the game.
In many cases rendering is done mainly on the GPU while physics often runs on the CPU, although that isn't necessarily true anymore either. But I am sure you can break quite a few games if one component is disproportionately slower or faster than the other.
I wonder if physics engines might profit from mechanisms we know from video compression, such as P- and I-frames, so that the errors from the interpolation suggested as a fix in the article could be corrected. That is, if I understand it correctly and such a correction mechanism is even needed.
And then I learned that I can ask the sound card how much sound buffer it wants, instead of always transferring a fixed amount. In other words, the rate of the main loop did not need to match the rate of transfers to the sound card.
Now I've rewritten the main loop as a series of inner loops, first running a variable amount of simulation at simulation rate, then rendering a variable amount of sound buffer at output sample rate (not fragment transfer rate, like I was doing before). This seems to be working better.
I still need to think about the spiral of death. Perhaps an upper limit on simulation would be enough?
(Any advice welcome!)
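One common guard is exactly that: clamp how much real time a single frame may feed into the accumulator, so the game visibly slows down instead of locking up; Unity does something similar via Time.maximumDeltaTime. A sketch on top of the usual accumulator loop, where dt, Simulate, and the 5-step cap are illustrative:

```csharp
// Cap the real time one frame may feed into the simulation. If the game
// can't keep up, simulated time falls behind real time (the game slows
// down) instead of the step count growing without bound.
const double dt = 1.0 / 60.0;
const double maxFrameTime = 5.0 * dt;  // at most 5 catch-up steps per frame
double accumulator = 0.0;

void Simulate(double step) { /* your fixed-rate simulation step */ }

void Advance(double frameTime)
{
    if (frameTime > maxFrameTime)
        frameTime = maxFrameTime;      // drop the excess time on the floor
    accumulator += frameTime;

    while (accumulator >= dt)
    {
        Simulate(dt);
        accumulator -= dt;
    }
}
```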
FWIW, neither Unity nor UE4 uses a fixed timestep for its default update and tick.
I find real-time games very interesting technically and conceptually, just no real interest in working on them. Understanding the technical approaches can be very enlightening for a number of different use cases.
What most games do is enable continuous collision detection for important fast objects only, like the player or projectiles.
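In Unity that's a per-Rigidbody setting; a minimal sketch for a fast-moving object, while everything else keeps the cheaper discrete mode:

```csharp
using UnityEngine;

// Enable continuous collision detection only where it matters.
public class ProjectileSetup : MonoBehaviour
{
    void Awake()
    {
        var rb = GetComponent<Rigidbody>();
        rb.collisionDetectionMode = CollisionDetectionMode.ContinuousDynamic;
        rb.interpolation = RigidbodyInterpolation.Interpolate; // smooth rendering
    }
}
```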