
This, except also measured relative to human reaction-time thresholds / maximum acceptable latency for HCI.

For example, in predictive typing anything above ~100ms (I don't remember the exact number, but there's a precise measurement in the Gmail Smart Compose paper) is noticeable and very annoying to a user.

Or video chat: I'd be curious what video latency it takes before a user notices lag between audio & video.

I know in game dev, I was always taught that 60fps is fine, but then there are gamers swearing up and down when they drop below 140fps. Would be curious to see what the max allowed latency is here as well.

Another common one is microexpression latency, frequently quoted as 1/20 of a second (the maximum latency before an emotion detector starts to miss the microexpression).

I feel like these are the real goalposts in tech; especially in ML, we've got the accuracy/quality and now we need the speed. Every research paper likes to brag about how many ms it takes for one step; I'd like to see that compared against the maximum latency tolerable for practical HCI.




> I know in game dev, I was always taught that 60fps is fine, but then there are gamers swearing up and down when they drop below 140fps. Would be curious to see what the max allowed latency is here as well.

The 60 fps mark isn't really latency related. Instead, at 60 fps animation without motion blur feels smooth. At 30 fps you need motion blur or it feels like it stutters (and in 24hz cinema the camera produces natural motion blur due to shutter time).

The latency aspect is more complicated, as there's a long feedback loop: the frame is shown on the screen, is perceived by the user's eye, the user's brain computes a reaction, the brain sends a signal to the hands, the hands execute the reaction, the input device perceives this and sends the result to the computer, the next frame is computed, and finally it's sent to the screen and displayed. Each of these steps takes on the order of milliseconds (often tens of milliseconds). Faster refresh rates and more fps improve some of them, and in theory any improvement in fps and refresh rate makes the feedback loop faster and thus improves performance. There are diminishing returns because you don't control the entire pipeline (and the parts that happen inside a human are particularly slow), but going from 60 to 140 fps is a big enough improvement that it matters.
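Here's a rough back-of-the-envelope sketch (Python) of that feedback loop. Every per-stage number is a made-up illustrative value, not a measurement; the point is just that only some stages scale with refresh rate, so gains diminish but don't vanish:

    # Illustrative sketch only: stage durations are assumed, not measured.
    def feedback_loop_ms(refresh_hz):
        frame_ms = 1000.0 / refresh_hz
        stages = {
            "display scanout":          frame_ms,        # scales with refresh rate
            "human perception":         70.0,            # roughly fixed
            "decision + motor signal":  130.0,           # roughly fixed
            "input device polling":     1.0,
            "simulation + render":      frame_ms,        # scales with frame rate
            "wait for next vsync":      frame_ms / 2.0,  # average, scales with refresh
        }
        return sum(stages.values())

    for hz in (60, 144):
        print(f"{hz:>3} Hz: ~{feedback_loop_ms(hz):.0f} ms end to end")
    # The human stages dominate the total, but the machine-side portion
    # still shrinks noticeably going from 60 Hz to 144 Hz.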

Another factor is the stability of the latency: humans can compensate for quite high latencies by predicting the correct inputs. But this only works if the latency is consistent. Just imagine trying to hit anything with a bow and arrow if you don't know how fast the arrow will fly. That's why frame drops hurt so much: every dropped frame is a giant latency fluctuation.


> The 60 fps mark isn't really latency related. Instead, at 60 fps animation without motion blur feels smooth. At 30 fps you need motion blur or it feels like it stutters (and in 24hz cinema the camera produces natural motion blur due to shutter time).

After using 120/144 Hz monitors for a while, 60 fps without motion blur doesn't feel smooth at all.

The 60 fps mark really exists just because that's all LCDs could do for the longest time. It has nothing to do with human vision; it was just the long-standing technological limitation of displays.


> The 60 fps mark isn't really latency related. Instead, at 60 fps animation without motion blur feels smooth. At 30 fps you need motion blur or it feels like it stutters (and in 24hz cinema the camera produces natural motion blur due to shutter time).

Really, this is related to the speed of motion. If everything is moving slowly, 30 fps is fine; if something is moving quickly enough, even 60 fps looks choppy. Try scrolling a long page really fast on your smartphone, for example.


I remember reading about the Sega arcade game Daytona USA. Instead of rendering back to front or front to back, it rendered in order of importance, from most important (your car, the race track) to least important (background stands, trees, etc.). It included a hardware watchdog that would trigger the next frame even if the current frame was not finished rendering, to guarantee a constant 60fps always. For an immersive experience, it was more important to keep the frame rate constant than it was to render every polygon.
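A toy sketch (Python) of that general idea: draw work in priority order against a fixed frame budget, and present the frame when the budget expires even if low-priority items got skipped. Names and timings are invented for illustration, not Daytona USA's actual implementation:

    import time

    FRAME_BUDGET_S = 1.0 / 60.0  # hard 60 fps target

    def render(item):
        time.sleep(0.004)  # stand-in for drawing one batch of polygons

    def present_frame():
        pass  # flip buffers / scan out

    def render_frame(items_by_priority):
        deadline = time.perf_counter() + FRAME_BUDGET_S
        drawn = 0
        for item in items_by_priority:           # most important first
            if time.perf_counter() >= deadline:  # "watchdog" fires: stop drawing
                break
            render(item)
            drawn += 1
        present_frame()                          # always on time, possibly incomplete
        return drawn

    items = ["player car", "race track", "other cars", "stands", "trees", "skybox"]
    print("drew", render_frame(items), "of", len(items), "item groups this frame")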


From my experience, a non-continuous user action with a response under 100ms is perceived as "instant". Continuous input is different, and humans can perceive individual frames at 16ms for sure.

The audio/video sync threshold is well studied; I forget the number, but it's pretty short. This is a guess off the top of my head, but I think it's around 20ms?

Keep in mind frame drops are different from average FPS. Every frame taking 16ms will feel far smoother than most frames taking 8ms but with spikes of 80ms (which is far more common).
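A small illustration (Python) of why average FPS hides this, using the hypothetical 16ms-steady vs 8ms-with-80ms-spikes numbers from above:

    steady = [16.0] * 100             # every frame 16 ms
    spiky  = [8.0] * 95 + [80.0] * 5  # mostly 8 ms, occasional 80 ms spikes

    def avg_fps(frame_times_ms):
        return 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))

    for name, times in (("steady 16 ms", steady), ("spiky 8/80 ms", spiky)):
        print(f"{name}: avg {avg_fps(times):.0f} fps, worst frame {max(times):.0f} ms")
    # The spiky trace wins on average FPS, but its worst frames are the
    # ones you actually feel, so it plays worse than the steady one.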


>I know in game dev, I was always taught that 60fps is fine, but then there are gamers swearing up and down when they drop below 140fps.

My 144 Hz monitor is a literal game changer. I can move my field of view very quickly and the background environment stays crystal clear throughout, making it easier to spot enemies.



