
Not to mention most of these now fixed problems likely weren't even problems if you had a G-Sync/Freesync display and a compatible GPU.

As an extra note, Windows 11 ships with a feature to force the DXGI flip presentation model in older games.

"Gamers" can be like audiophiles in the way the spread half-truths and even outright misinformation. The number of people I still see thinking you should turn G-Sync off is downright silly.




> Not to mention most of these now fixed problems likely weren't even problems if you had a G-Sync/Freesync display and a compatible GPU.

I have a Freesync display and a compatible GPU, and I can feel the difference between no vsync and Freesync. There is a slight lag there; if you cannot feel it, do not assume that others cannot either. Some people even seem unable to feel a difference between having vsync on or off, or between having a compositor enabled or not (nowadays really only possible on Linux, but in the past you could also do it with WinVista/Win7 and in Win8 with hacks like the one linked), which is mind boggling to me.

Of course the input lag depends on the game and how you play it. First person games with raw input, a 1:1 mapping of the input (i.e. no filtering whatsoever) and little to no frame queuing (some engines run 1-2 frames "behind" the game state, and if you add things like triple buffering you end up with extra lag) are the easiest ones to notice it in. Third person games played with a gamepad (where you only have indirect control of the camera - you control the camera's rotation speed, not the camera directly) are much harder to notice it in, as are games where you mainly use the mouse to click where to go (especially if the game uses a hardware cursor instead of rendering the cursor itself as part of the game UI).

Also, high refresh rate monitors make it slightly easier to overlook the lag (though it is still possible to tell - mine is a 165Hz monitor after all and I can still notice it, it just doesn't annoy me as much as on a 60Hz monitor).


One of the big things to remember with VRR is that you will incur a latency penalty of a couple of frames if your game is running right at the maximum refresh rate of your display. This is why it is usually recommended to set a system wide framerate cap a few frames below the maximum supported value.

For most modern gaming monitors, this isn't a huge deal. Cap your game at 140 FPS on a 144hz display and it will feel as snappy as it probably would have uncapped. Though if you have a lower end (or ultra high resolution) display that caps out at something much lower, like 60, then I can definitely see how leaving it uncapped with VRR disabled would feel a lot snappier.

Early Freesync-branded displays could also be a bit of a crapshoot for a variety of reasons, but this has gotten much better in the last 5-6 years.


> One of the big things to remember with VRR is that you will incur a latency penalty of a couple of frames if your game is running right at the maximum refresh rate of your display. [..] For most modern gaming monitors, this isn't a huge deal. Cap your game at 140 FPS on a 144hz display and it will feel as snappy as it probably would have uncapped.

That sounds very weird to me, why would that be the case?

(Also FWIW I'm using Linux so a system-wide cap isn't exactly possible, though at least with games I can simply run them via mangohud.)


On Linux I use Mangohud or libstrangle to cap framerates. For me it is not a major annoyance though, as I only play a few games regularly on Linux. I still boot into Windows after I'm done with work and play most games there.
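
For anyone curious, the cap itself is just a one-liner with either tool. Rough examples from memory (adjust the cap value and game path to your setup, and double-check the option names against the docs):

    # MangoHud: set the fps_limit option inline via its config env var
    MANGOHUD_CONFIG=fps_limit=140 mangohud ./game

    # libstrangle: the strangle wrapper takes the cap as its first argument
    strangle 140 ./game

The same fps_limit line can also go into ~/.config/MangoHud/MangoHud.conf if you want it applied every time.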

My understanding of why this causes a problem is that the CPU starts running ahead of the GPU, and queues up frames to be rendered. So it shouldn't be a problem with games that support Nvidia Reflex or some other feature to limit pre-rendered frames.
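
For what it's worth, limiting the pre-rendered frame queue is also something a game can do itself on the D3D11/DXGI side. A rough sketch of the standard call (not any particular engine's code; error handling omitted, and it assumes you already have an ID3D11Device):

    // Keep the CPU from queuing more than one frame ahead of the GPU.
    #include <d3d11.h>
    #include <dxgi.h>

    void LimitQueuedFrames(ID3D11Device* device)
    {
        IDXGIDevice1* dxgiDevice = nullptr;
        if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice1),
                                             reinterpret_cast<void**>(&dxgiDevice))))
        {
            dxgiDevice->SetMaximumFrameLatency(1); // the default is 3 queued frames
            dxgiDevice->Release();
        }
    }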


Yeah, Freesync just gives you uneven frame times if the game isn't running above the monitor's refresh rate, and with monitors pushing above 144Hz that gets harder and harder to achieve. At some point the frame time will be so small that this unevenness is unnoticeable, but in the most common price range, adaptive sync at half the monitor rate is a better experience.


Top-level speedrunners of action games are definitely able to feel input lag changes on the order of 10-20ms.

For everyone else, you'd really want to prove it with a high-speed camera or something.


Playing rhythm games, I can easily feel ~3ms of input lag. However it's pretty easy to adapt even to 100-200ms of lag, so I'd say most people are in the audiophile placebo territory. I can hit a 5-8ms window reliably even with a ton of lag if I spend 20 mins getting used to how it feels (running a game at 10fps under wine on a 10 year old chromebook with bad wireless gear). Network lag is far more significant than any input lag due to input processing or display pipelines.


Are you mixing up jitter and lag? Even if you hit perfect 120fps synced to refresh, your input to display lag goes up to 8ms. And that's ignoring the input device itself, processing time, etc. which also stack up. 3ms is not really doable with typical modern hardware.


> Playing rhythm games, I can easily feel ~3ms of input lag.

Hrmmm. I’m not sure this sentence even makes sense. I’m not sure what the definition of 0ms input lag even is. And I say this as someone who has deeply profiled true end-to-end input/output systems. 0ms relative to what exactly?

If you spend 20 minutes getting used to something you could maybe tell the difference if 3ms of input lag was artificially injected. Maybe. But that’s an exceedingly small value and if I had a toggle switch that turned it on/off I’d bet against you being able to tell when it’s on.

It does depend on the context. In a game like Guitar Hero 3ms is going to be exceedingly difficult to distinguish imho.


Humans are incredibly sensitive to audio rhythms/lag, and we’re talking about a rhythm game. Visual processing isn’t likely to notice, but this is probably between hearing/feeling the keystroke and hearing the reaction. 3ms is entirely believable; detection is based on a frequency domain transformation of the overlaid waveforms, not tuning as such.

At the same time, sure, we easily compensate.


Yeah I don’t buy that. It’s a long pipeline and there’s almost definitely more than 3 milliseconds of jitter. I’d wager that success/failure sounds are quantized to audio buffers that are a good bit longer than 3ms.

This is a deep and interesting enough topic that you could publish a pretty good and valuable paper on what the JND is for poor, average, elite, and "professional" players.

If the line is 3ms for pro I will gladly bet the over!


Have you ever had the experience of having a malfunctioning phone system play back your own voice to you on a slight delay? I have. It's maddening to the point where I find it difficult to continue the phone conversation, and I'd bet that's on the order of 3-5 ms audio delay.


https://arxiv.org/abs/1202.6106

". In general, human speech is jammed by giving back to the speakers their own utterances at a delay of a few hundred milliseconds. "


Trust me. It's way less than a few hundred ms.


That’s completely different from being able to identify 3ms of input lag.


Oh, I’m expecting there’s 3ms of jitter. On a good day.

The total lag is almost certainly an order of magnitude above.


You can set up a double blind really easily for this. Play on a speaker, and have someone move the speaker a couple of metres. See if you really can tell whether it has been moved.


Can you tell the direction of a sound just by turning your head? Congrats, you can detect a difference of less than 1ms.


After reading some of the science on auditory perception it seems you are wrong. If you perceive a signal in the left ear and then get a louder signal on the right ear a few milliseconds later you will perceive this as one sound coming from the right. The precedence effect breaks down.

https://en.m.wikipedia.org/wiki/Precedence_effect


With simultaneously present signals...

Edit: I can also tune a guitar by plucking two strings at once, but I can't by plucking them one at a time.

This really has nothing to do with noticing the absence or presence of a 3ms latency difference in a situation where your baseline is already dozens of ms of latency in the best case.


Adapting isn't the issue though? It just feels much nicer to play on my 144Hz monitor than it does on my 60Hz monitor. This is first-world problems territory at most. It's not just gaming either. Doing anything on my 144Hz feels so much nicer.


As long as the lag is consistent, I can concur :)

Nothing is worse, though, than a rhythm game that can't keep proper rhythm itself.


Sorry, but:

3ms is how long it takes sound to travel a meter.

Do you notice the difference in audio/video sync between sitting in the front or the back of the cinema?


>Do you notice the difference in audio/video sync between sitting in the front or the back of the cinema?

Rhythm games are literally scoring your timing; a cinema offers no equivalent scored feedback loop.

No, I wouldn't notice 3ms on the lips of an actor. Drummers are known to be able to perceive 1-3ms differences in timing; again, a tactile action.

If a human is involved in the action their sense of rhythm and prediction takes over to allow for manipulation within those timescales. It's not uncommon, either.


I absolutely do not doubt that a 3ms difference in timing between two rhythmic signals is perceptible. I absolutely do doubt 3ms of latency, though.

You are claiming that someone can tell the difference between playing the game one metre or two metres from the speakers.

Put another way, on a large stage the sound from the band's monitors takes ~10ms to reach the drummer.

I don't have data, and neither does anyone else here apparently, but in the absence of evidence it seems exceedingly likely that any perceived differences due to 3ms of latency (or a 3ms latency difference?) are placebo. We know that the placebo effect is real and large, so unless someone has an experiment that accounts for it (e.g. properly blinded), my prior is that you can't tell when playing a rhythm game whether the speaker is 1m or 2m from you.


Sound travels the distance between your ears in less than 1ms. Are you able to detect the direction of a sound with your ears?


There's also a difference in volume and frequency response, due to the head being in the way and the shape of the ear, that helps you determine the direction.

I don't think we could tell direction based on the distance and less than 1ms difference alone.


I have both signals coming in at the same time and can superimpose them. Completely different situation.

Like tuning strings by hearing when you can pluck both strings at the same time, vs doing it using absolute pitch.


That's about phase, not frequency.


It sounds insane, but I agree with him, it's absolutely perceptible.


So much bad or useless optimization information out there. Unfortunately it's hard to test so everyone goes on what it feels like.


>a G-Sync/Freesync display

Going on sort of a tangent, but since it was mentioned: I hate variable refresh rate.

I suppose my eyes are just sensitive to changes in rendering speed, because I tried it and I couldn't stand the variable ghosting. For me it's worse than screen tearing, which for me is perfectly resolved with vertical syncing or just capping the frame rate to something less than my monitor's refresh rate.


That sounds like more of a problem with your display than with VRR itself.

You could probably fix it by capping your game's framerate to whatever it can maintain consistently within your monitor's VRR window. This would give you a smooth consistent experience, without screen tearing or vsync-induced lag. Since it's not working in a fixed refresh rate container, you can do more unconventional numbers too. I like to run a lot of games at 90 FPS.


You're misunderstanding me.

First off, I almost never get screen tearing. The only times I get screen tearing is if a program renders faster than my monitor can refresh, and my monitor (a Dell S2522HG) refreshes at 240Hz; screen tearing simply isn't happening. As for input lag, if there is any it's insignificant enough that I don't care.

Second of all, my problem is with the way variable refresh rate works, not with anything on the software side or with the monitor itself. Variable refresh rate is exactly what it says on the tin: the refresh rate varies.

My problem is that because the refresh rate varies, the change in refresh rate becomes very distracting. One moment my monitor is at 240Hz, the next at 135Hz, then 170Hz, then 70Hz, then 200Hz, and so on, to give an exaggerated example. Anything that's moving (e.g. my mouse cursor) leaves more ghosts the lower the refresh rate drops, and vice versa. This is very distracting because it's not consistent.

Imagine you're listening to music, and the pitch goes up and down arbitrarily. You will probably agree that it's distracting, and that's basically what I find variable refresh rate to be. It is distracting to a fault, and I would rather deal with my monitor going at full speed and limiting my software as appropriate to prevent tearing for a consistent experience that isn't distracting.


You mentioned the variable ghosting, which absolutely is a monitor problem. Either the algorithm determining how much to overdrive the pixels from frame to frame is bad, or the panel itself is too slow for overdrive to work effectively without being noticeable. There may be a setting you can turn down in the OSD to improve performance, as many gaming monitors come with this set too high out of the box.

My suggested fix should still solve your other problem, which is the wild swings in framerate. Bouncing between 200 and 100 certainly is jarring, and capping your game's framerate to make those swings smaller or nonexistent will fix it.


That sounds distracting.

I have a VRR monitor, but I limit it to 120Hz specifically for that reason. Most of the time it just sits there. The exception is while gaming, where sticking to 120 was never an option; I find the variation less of a bother than the more dramatic shifts down to 60/30 would be without VRR.


I do this and cap to 60 (my max refresh on my monitor).


You aren't alone. I despise variable refresh. I can't even stand VFR videos on top of VRR displays. But my eyes are extremely sensitive to various things that almost everyone I know either can't perceive or has to focus really hard to notice.

It can be a bit of a curse. LED headlights and taillights give me headaches. A lot of LED street lighting and business lighting hurts me. Etc.

I used to be basically unable to stand a CRT under ~85Hz. :(


> "Gamers" can be like audiophiles in the way the spread half-truths and even outright misinformation

If I had a dime for every time people were being told to manually manage their Windows page file "for performance"...


Somehow, managing the page file and turning other services off in Windows has turned into the Autoexec.bat and Config.sys of the modern age. Although tweaking both autoexec.bat and config.sys was a worthwhile thing to do back in the day.


There is, or at least was, tangible benefit if you understand how it works.

A page file is, as it says, a file located in the root of whichever partition it is configured on. By default it's only configured on the C: partition, and the size of the page file is variable, managed by Windows according to system demand. The page file is the equivalent of a swap partition, for the Linux and BSD users out there.

Manually configuring the page file has a number of benefits, if you know what you're after (and you're using HDDs):

* You can specify a size, fixed or variable, for your page files. By default Windows allocates a fairly small page file, and if your page file use exceeds that allocation Windows has to allocate more space on the fly. This takes time, especially if the page file resides on a HDD. If you configure a fixed size, or a variable size with a large minimum size, you don't need to worry about that dynamic allocation which translates to more performance. Another benefit to a fixed page file is that you can have the page file occupy one continuous area on a HDD platter, for better access times and thus better performance; a dynamic allocation more often than not leads to a fragmented page file and thus worse performance.

* You can specify which partitions you want to use for your swap space. It wasn't unusual to just dedicate a partition to a page file, effectively the same thing as a swap partition, especially if your C: partition wasn't that big. This is important for the next bullet point.

* Back when we still ran our programs off of HDDs, making a partition earlier in the drive meant locating that partition on the outer areas of the platter. The outer areas of the platter move past the head faster than the inner areas, because physics, and faster linear speed means higher transfer rates. By placing the page file on such a partition, the page file gets better throughput and thus you get better performance.

A lot of these benefits have gone away thanks to RAM becoming both cheaper and more plentiful, and the storage medium where the page file resides changing from HDDs to SSDs in general. So most people usually don't need to mess with their page files anymore.

But for people with specific needs or just a desire to really fine tune their systems, tweaking the page file by hand is still a worthwhile endeavour if you understand what you can get out of it.
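
For anyone who wants to try the fixed-size approach, it can also be scripted instead of clicking through System Properties. Roughly something like this with wmic from an elevated prompt (the 16 GB size is just an example; double-check the exact syntax on your system before relying on it):

    rem Stop Windows from managing the page file size automatically
    wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False

    rem Fixed 16 GB page file on C: (initial = maximum, so no growing on the fly)
    wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=16384,MaximumSize=16384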


The problem with this IMHO is that most users who are trying to do this do not understand a couple of things:

* The NT kernel does not overcommit; and

* As a closed-source OS, client and server editions of Windows do not use build configurations targeting embedded systems; they essentially require ample swap space. They won't run well if users attempt to forcibly minimize or disable their page file.


Every time a game comes out with performance issues caused by shader compilation or poorly optimized loading of assets, reddit threads and game forums are flooded with people claiming these nonsense fixes magically made their games perfectly smooth. It's asinine.


The thing that gets me is that for all the attention gamers pay to benchmarks when a new game or piece of hardware comes out, you never see benchmarks for this class of tweaks. If the benefit were so obvious, it should be easy to demonstrate and repeat. Over the years a huge variety of these geek equivalents of old wives' tales have accumulated, and a lot of them persist.


> "Gamers" can be like audiophiles in the way the spread half-truths and even outright misinformation.

It's about ethics in game journalism!



