
Nvidia's G-Sync: Attempting to Revolutionize Gaming via Smoothness - cobrausn
http://www.anandtech.com/show/7436/nvidias-gsync-attempting-to-revolutionize-gaming-via-smoothness
======
ComputerGuru
This looks incredible and I want nVidia to succeed because it's actually been
a long time without any game-changing (no pun intended, honest) improvements
(evolutionary or revolutionary) in the gaming and graphics market.

That said, I read the article and yet remain confused as to where exactly the
G-sync module integrates with the monitor. From what I understand, the
G-sync hardware/firmware will run on a packet level, analyzing in real time the
incoming feed of DisplayPort packets and deciding how much of what goes where
and when. Very neat.

The most important question, I believe, is what monitors can this be used
with? The text makes it clear that users will be able to mod their own ASUS
VG248QE monitors to add on the G-sync card, but that's a very, very specific
make and model. Is this technology going to require hardware manufacturers to
cooperate with nVidia, or will their cooperation simply make things
nicer/easier?

Also, some of us have (in business environments) $1k+ S-IPS 30"+ monitors —
the quality of these monitors is way above that of consumer models like the
VG248QE and others. If there is no way to generically mod monitors without
onboard DSPs, I could see that hindering adoption.

~~~
Miyamoto
> _Also, some of us have (in business environments) $1k+ S-IPS 30"+ monitors
> — the quality of these monitors is way above that of consumer models like
> the VG248QE and others. If there is no way to generically mod monitors
> without onboard DSPs, I could see that hindering adoption._

I think Nvidia is targeting hardcore gamers first and foremost. Most gamers
are not gaming at 2560x1600/1440. Some are, but most aren't.

The most popular monitors by pro gamers right now (Twitch/eSport players and
enthusiasts) are 120/144hz 1ms monitors, such as the ASUS VG248QE. Color
reproduction isn't as important to pro gamers as smoothness/framerates.

Also hardcore/pro players are dumping lots of money on the most expensive
computer rigs, often upgrading to the latest and greatest every generation.
They are a very important marketing group for Nvidia.

~~~
rdtsc
> The most popular monitors by pro gamers right now (Twitch/eSport players and
> enthusiasts) are 120/144hz 1ms monitors, such as the ASUS VG248QE.

I wonder if that latency is noticeable to them, or if this is the same market
as the audiophile market that sells gold-plated cables for 100x markup.

~~~
hackinthebochs
It's not really about the latency (5ms vs 1ms is negligible), but it's about
the pixel response to reduce/eliminate ghosting and other artifacts of an
LCD's persistent pixels. How fast the pixels can update affects the amount of
ghosting, but interestingly enough, it won't be eliminated no matter how fast
the pixels update. The real problem with ghosting turned out to be precisely
the pixel persistence. Even more interesting is that someone discovered a
hack for modern 3D monitors like the ASUS mentioned: the strobing backlight
functionality necessary for 3D completely eliminates ghosting when applied to
2D. I currently have this setup and it's exactly like using a CRT. A flat,
light, 1920x1080 CRT. It's beautiful.

~~~
rdtsc
That makes sense, thanks for explaining!

~~~
Tuna-Fish
He's actually completely wrong. Persistence is about image quality, and can be
mitigated by filtering that hardcore gamers always turn off, because it costs
them latency.

Reducing latency isn't about how noticeable it is. Latency can be completely
impossible to detect for you but still hurt you.

Input lag is the time from providing some input, such as clicking with your
mouse, to getting feedback of this event on the screen. As the clicking will
be prompted by things happening on the screen, input lag acts as a command
delay to everything that is done. The most interesting feature of latency is
that _all latency is additive_. It doesn't matter how fast or slow each part
in the system is; none of them can hide latency for another. So even if
the game simulation adds 150ms and your stupid wireless mouse adds 15ms, the
2 ms added by the screen still matters just as much.

The second mental leap is that the human controlling the system can also be
considered to be just another part in the pipeline adding latency. Consider a
twitch shooter, where two players suddenly appear on each other's screens. Who
wins depends on who first detects the other guy, successfully aims at him, and
pulls the trigger. In essence, it's a contest between the total latency
(simulation/cpu + gpu + screen + person + mouse + simulation) of one player
against the other player. Since all top tier human players have latencies
really close to one another, even minute differences, 2 ms here or there,
produce real detectable effects.

~~~
hackinthebochs
This is completely wrong. When even the fastest human reaction time is on the
order of 200ms, 5ms vs 1ms of monitor input lag has no effect on the outcome.
Also consider that 5ms is within the snapshot time that servers run on, so
+/- 5ms is effectively simultaneous to the server on average.

Pixel persistence is not about image quality and cannot be mitigated by
anything except turning off the backlight at intervals in sync with the rate
at which you're updating the image. This is how CRTs worked, and that's why
they had no ghosting effects. The 3D graphics driver hack I mentioned does
exactly that for 3D enabled LCD monitors.

~~~
wtallis
People can notice input latencies that are many times smaller than their
reaction time. 200ms of input latency is going to be noticeable and bothersome
to basically everyone for even basic web browsing tasks. Most gamers will
notice more than 2-3 frames of latency, and even smaller latencies will be
noticed in direct manipulation set-ups like touchscreens and VR goggles where
the display has to track 1:1 the user's physical movements.

~~~
hackinthebochs
I think you misunderstood my point. In terms of actual advantage, 1ms vs 5ms
is negligible, considering the fact that human reaction time is 200ms. So in
the case of shooting someone as they popped out from behind a corner, the
200ms reaction time + human variation + variation in network latency +
discrete server time will absolutely dominate the effects.

I definitely agree that small latencies can be noticed, even latencies
approaching 5ms (but not 5ms itself--I've seen monitor tests done that showed
this).

~~~
Tuna-Fish
> I think you misunderstood my point. In terms of actual advantage, 1ms vs 5ms
> is negligible, considering the fact that human reaction time is 200ms.

You did not understand the point of my post. The quantity that matters is
total latency. How long a human takes to react is _completely irrelevant_ to
what level of latency has an effect. Whether the average human reaction time
were 1ms or 1s wouldn't matter. All that matters is that your loop is shorter
than his, and your reaction time is very near his, so any advantage counts.

> the 200ms reaction time + human variation + variation in network latency +
> discrete server time will absolutely dominate the effects.

Server tick time is the same for everyone. Top level gaming tourneys are held
on LANs, where the players typically make sure that the network latency from
their machine to the server is no greater than anyone else's. However,
none of that matters to the question at hand.

Assume that total latency of the system, including the player, can be
approximated by:

Human_reaction_time + network_lag + processing_lag + display_lag

and assume all are normally distributed around some base value, except display
lag, which is fixed. Writing rand(mean, standard deviation), you have:

rand(200,20) + rand(20,5) + rand(16,2) + 15

while I have:

rand(200,20) + rand(20,5) + rand(16,2) + 5

The total latency is utterly dominated by the human processing time. Yet if we
model this statistically, and assume that lower latency wins, the one with the
faster screen wins 63% of the time. That's enough of an edge that people pay
money for it.
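
A quick Monte Carlo sketch of that model (same numbers as above; the Python is
just for illustration) reproduces the figure:

    
    
      import random
      
      def total_latency(display_lag_ms):
          # human reaction + network lag + processing lag + fixed display lag
          return (random.gauss(200, 20)
                  + random.gauss(20, 5)
                  + random.gauss(16, 2)
                  + display_lag_ms)
      
      trials = 100_000
      wins = sum(total_latency(5) < total_latency(15) for _ in range(trials))
      print(wins / trials)  # ~0.63: the faster screen wins about 63% of the time
    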

~~~
hackinthebochs
No, I understood your point; I just don't agree that it results in any
_meaningful_ advantage. What you didn't model is the fact that the server
does not process packets immediately as they are received. They are buffered
and processed in a batch during a server tick. If the two packets from
different players don't arrive on opposite sides of a tick boundary, the
server will effectively consider them simultaneous.

And remember, we're considering 1ms vs 5ms, so the difference would be 4ms in
this case. I would like to see what percentage advantage someone has in
this setup. Even 63% isn't anything significant considering skill comes down
to game knowledge rather than absolute reaction time. People will pay for
smaller/bigger numbers, sure. But that doesn't mean there is anything
practically significant about it.
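
To put a rough number on the tick argument (a toy sketch; the 64-tick rate is
just an assumption, not from any particular game):

    
    
      import random
      
      TICK_MS = 1000 / 64   # assumed 64-tick server, ~15.6 ms per tick
      GAP_MS = 4            # the 1ms-vs-5ms display difference
      
      def same_tick(first_arrival_ms):
          # both packets get batched into whichever tick they arrive in
          return first_arrival_ms // TICK_MS == (first_arrival_ms + GAP_MS) // TICK_MS
      
      trials = 100_000
      same = sum(same_tick(random.uniform(0, TICK_MS)) for _ in range(trials))
      print(same / trials)  # ~0.74: a 4ms head start usually lands in the same batch
    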

------
brymaster
Carmack had some interesting notes on this, a couple being:

- This would be perfect for games common in emulation where frame rates are
capped but vertical sync isn't used.
[https://twitter.com/id_aa_carmack/status/391303034745401344](https://twitter.com/id_aa_carmack/status/391303034745401344)

- This technology will come later to laptop and mobile devices and, sadly, he
tried to get Apple to do this years ago.
[https://twitter.com/id_aa_carmack/status/391303627278925824](https://twitter.com/id_aa_carmack/status/391303627278925824)

~~~
MBCook
> and sadly, he tried to get Apple to do this years ago.

As an Apple user, this doesn't really surprise me. Apple has never liked
games, although I'm hoping that was a Steve Jobs thing and the company will
see the light.

In some ways they are behind on displays. Windows 7 and above supports 'Deep
Color' (30-bit or more), but as far as I know Mountain Lion doesn't.

They were out first with retina, though.

It wouldn't surprise me if this doesn't come to OS X any time soon. Too bad.

~~~
KVFinn
> As an Apple user, this doesn't really surprise me. Apple has never liked
> games, although I'm hoping that was a Steve Jobs thing and the company will
> see the light.

But Apple DOES have a thing for responsiveness and smoothness, and latency and
smoothness are exactly what this tech is improving. I think you could make a
strong case without mentioning games. Not that you would have to anymore,
since games are so popular on these devices...

But an example: I find the animations in iOS 7 look great on the 5S but a bit
choppy on a regular 5. They clearly aren't maintaining 60 FPS on that device
and you can see some hitching. This tech is most beneficial at making variable
frame rates between 30 and 60 FPS smoother, so it could be a big help.

------
jamesaguilar
I used to think my next computer purchase would be whatever was required to
drive the first 4k monitor under $2k. But I have seen that I was incorrect.
Smoothness is far more important than resolution (to me), so if this
technology proves to be useful, I will probably end up purchasing a monitor
with this capability and a new computer to go with it.

I really have to applaud the engineers at Nvidia, and whoever else drove this
initiative. I thought graphics were slowing down and I wouldn't need to
upgrade for a while (it's already been two years). To come up with a product
that proves me wrong is both surprising and delightful. Great work from a
corporate perspective, great work from a gaming perspective, and great work
from an engineering perspective. Just really fantastic all around.

~~~
rschmitty
I thought 4k was just for large TVs. Can you tell a difference over a retina
display? Aren't they called retina because that's the most your eye can see?

Genuinely curious, thanks!

~~~
jholman
No. They're called "retina" because the people who name things at Apple are
professionally focused on deceiving you. What a surprise.

I've seen lots of articles about this over time, here's one I googled just
now:

[http://www.cultofmac.com/173702/why-retina-isnt-enough-
featu...](http://www.cultofmac.com/173702/why-retina-isnt-enough-feature/)

Combine this with what jamesaguilar already said, about wanting a larger
screen and also wanting high PPI, and why wouldn't you want higher res?

------
6ren
Why can't a display behave exactly as bit-mapped memory? That is, you set this
pixel in memory, and that pixel changes on the screen, at that time.

That is, like a vector display, but using the LEDs of a flat screen. There's
no electron ray scanning over the phosphors as there was in a cathode ray
tube.

--

The LEDs can switch only so fast - but does this latency prevent them from
being switched independently?

The LEDs need to stay on for long enough for the human visual system to
perceive them (and without flickering etc) - but this could be managed in
other ways.

I think the main reason, apart from inertia, is that the larger market for
displays is as TVs, where the concept of updating the whole frame (frames per
second) is even more entrenched - though, there's no reason why video couldn't
be displayed in the same way, it's just pushing out a kind of compression to
the display itself. Light from real objects is not emitted one frame at a
time.

~~~
YZF
I assume you're talking about LCD? Not too many LED displays around (though
the principle is still similar).

While there's no scanning electron beam, the electronics on these displays are
still optimized for scanning. That is, they push a whole bunch of adjacent
pixels in every clock cycle. If you try to use those for random access, your
refresh rate is going to fall dramatically. The problem isn't whether or not
each pixel can be switched independently; it's how to efficiently address them
and move data from the display controller to the display.

(editing with some more info)

If you changed the monitor's protocol to push X, Y, pixel and souped up the
on-board electronics, your pixel rate would probably fall by an order of
magnitude. So your 80Hz display is now an 8Hz display (for full frames). In
terms of how each pixel behaves they are independent (each has its own
transistor and capacitor), but the addressing is on a grid. So you can select
your row and then set a whole bunch of pixels in this row (for example)...

~~~
asharp
That doesn't seem to agree with
[http://www.fujitsu.com/downloads/MICRO/fma/pdf/LCD_Backgroun...](http://www.fujitsu.com/downloads/MICRO/fma/pdf/LCD_Backgrounder.pdf)
.

~~~
YZF
What specifically do you have a problem with?

Details will vary between different displays but my point is that due to the
pixels being on a grid sequential access is going to be faster. This is not
unlike memory.

EDIT: Here's a reference that discusses scanning in a TFT:
[http://www.electronicsforu.com//EFYLinux/efyhome/cover/March...](http://www.electronicsforu.com//EFYLinux/efyhome/cover/March2010/TFT-
LCD_Mar2010%284%29.pdf)

"The TFT-LCD panel of the AMLCD is scanned sequentially line-by-line from top
to bottom. Each line is selected by applying a pulse of +20V to gate line Gn,
which turns on the TFTs in that specific row. Rows are deselected by applying
–5V to G n-1 and G n+1, which turns off all the TFTs in the deselected rows
and then the data signal is applied from the source driver to the pixel
electrode. The voltage applied from the source driver, called ‘gray-scale
voltage,’ decides the luminance of the pixel. The storage capacitor (CS)
maintains the luminance of the pixel until the next frame signal voltage is
applied. In this way, the next line is selected to turn on all the TFTs, then
the data signal is fed from the source driver and hence scanning is done."
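
As a toy model of that line-by-line addressing (purely illustrative; the gate
and source voltages live in hardware and only appear here as comments):

    
    
      # Minimal model of the AMLCD scan described above (illustrative only)
      ROWS, COLS = 4, 6
      panel = [[0] * COLS for _ in range(ROWS)]  # storage capacitors hold these values
      
      def scan_frame(frame):
          for row in range(ROWS):          # select gate line Gn (+20V in hardware)
              for col in range(COLS):      # source driver applies the gray-scale voltage
                  panel[row][col] = frame[row][col]
              # deselect Gn (-5V); each pixel holds its value until the next frame
      
      scan_frame([[row + col for col in range(COLS)] for row in range(ROWS)])
      print(panel[1])  # [1, 2, 3, 4, 5, 6]
    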

~~~
buster
But it's most likely working that way because (analog) displays always worked
like that, and your display technology needed to be as compatible as possible
with CRTs and TFTs.

From my point of view you have not explained why it's not technologically
feasible. You're merely describing that the current display tech doesn't work
that way... of course it doesn't.

~~~
YZF
It's hard to speak about feasibility in absolute terms here. I would disagree
that the current tech is the way it is simply because of the history of CRTs
(though there's definitely _some_ influence). Displays have evolved to their
current technology by optimizing for things like manufacturability, price and
performance. Those obviously would come ahead of CRT compatibility.

The motivation for the gridded layout is clear I think? You have this grid of
transistors and you need to address them individually. Being able to drive an
entire line and then select the columns is a good and relatively cheap
solution. Now you can drive all pixels in one line concurrently if you need to
and the performance of a single pixel becomes less of a bottleneck. So the
row/col grid structure isn't a result of needing to be compatible with CRTs...
Also naturally accessing in sequence allows you to simply send the data and
clock down the line. Random access would require either multiplexing the
coordinates or widening your bus.

I would imagine it's possible to design a random access LCD. You would need
better performing individual pixels, you will almost certainly need more
layers and more conductors, and you will complicate your interfaces and
protocols. So you end up with a more complex and expensive system for
practically little benefit. In many applications (games, videos) all pixels
change every frame.

Sub-scanning a rectangular portion of the display is maybe a more reasonable
target.

------
asdfs
I'm glad to see others have been working on this. I'd been batting the idea of
Hz-less displays around in my head over the past few weeks. I was actually
going to write a blog post about the idea, as I was surprised that it seemed
to have gone unimplemented.

I don't see much info about _partial_ screen updates. I hope the eventual
standardization of this sort of thing in DisplayPort will provide for Hz-less
partial screen updates. It would be nice to be able to run a 3D application
and a movie at the same time, providing 24 or 30 updates/second for the movie,
and a dynamic rate for the 3D application. It also allows for cleaner
implementation of 3D applications that aren't running fullscreen. Generally,
partial updates seem like a more flexible, more natural evolution of the Hz-
less display idea, particularly with "panel self refresh" and "partial frame
updates" already in DisplayPort.

The obvious downside is that you've got a classic example of resource
contention, this time in the case of bandwidth. If you've got two areas that
want to be redrawn at the same time on different parts of the screen, only one
(or a portion of one) can be sent at a time. This leads to jitter or
(depending on how you decide to deal with it) some other visual problem. But
let's not make "perfect" the enemy of "good": it's still a very useful feature.

The best half-solution to the bandwidth contention problem (IMO) would be to
push all the decision making as to who gets access to the "pipe" to the OS. It
can decide which application (if any) gets a jitter-free experience (perhaps
with user preference taken into account), it can provide hints to applications
about when the pipe will be free, etc. The OS really has to manage it in order
to provide a good experience.

~~~
nitrogen
Me too. I thought about it a bunch when Abrash and Carmack were discussing VR
early this year, and while thinking about 48fps movies. Really, I've wanted
higher refresh rates ever since the switch to LCDs, and VFR displays would
make smooth, high-refresh video much easier.

Since it's unlikely I'll ever be able to get into VFR display tech
professionally, for the benefit of anyone working in the industry, here are my
thoughts on the subject from February (focused more on video and movies):

    
    
      Feb. 21, 2013
      -------------
    

INTRO:

In the modern digital age of LCD, LED, plasma, and DLP displays, there's not
much need for refresh rates to be all that high, or even constant. In the days
of CRTs it was necessary to have a constantly refreshing signal coming from
the video source in order to drive the electron beam across the screen and
refresh the image in the phosphors without visible flicker. With film, it was
much easier to run projectors and cameras at a constant rate due to their
mechanical nature (and probable lack of a standard for indicating when to
switch frame rates).

PROPOSAL:

I propose the development of a complete variable frame rate video chain, from
camera, through production, encoding and distribution, and an HDMI/DVI-like
video interface, to display devices. My primary focus of thought thus far has
been on the video interface and display devices.

APPLICATIONS:

Cinema: Recently, The Hobbit was released at 24fps in some theaters, 48fps in
others. As more directors want to experiment with 48fps cinema, why not remove
the restriction to a fixed frame rate entirely? Initially, with widespread
device support, directors and cinematographers could switch between 24fps and
48fps on a scene by scene basis. Eventually though, why not allow the frame
rate to be varied continuously on a per-frame basis? Special effect sequences
could be presented at 120fps, with either a gradual or an abrupt transition
down to 24fps or even lower as desired for emotional or psychological effect.
Variable frame rate displays would also allow mixing of NTSC, PAL, and film
content without the delay of switching video signal refresh rates or using
nasty time stretching or pulldown techniques. Video cameras and encoding
systems could be developed that automatically adjust frame rate based on the
amount of movement detected (to some extent this already exists in video
codecs).

Gaming: Video games would also benefit from variable frame rate displays. No
longer would gamers and game developers have to choose between a tear-free but
delayed game using vertical sync, and a low-latency experience at high frame
rate but with tearing. In a VFR display, the frame would be sent to the
display when the game is ready, rather than the other way around. The content
should be in control, not the device. This way a game could still remain
artifact free if the frame rate drops, without the lag induced by waiting for
the vertical blanking region before swapping buffers.

Mobile: Finally, variable frame rate devices could use much less power for
signal processing and transmission; this would be especially desirable in
battery-powered devices. A tablet running a word processor on an external
display only needs to send an update to the screen once or twice per second to
flash the cursor, saving the power that would be used for reading from video
RAM and driving the display interface.

AREAS OF STUDY:

A very incomplete list of some of the considerations that must be made when
developing a VFR technology suite follows:

The clock rate of an individual frame on the video display interface must be
determined. Should devices negotiate an optimum clock rate upon connection and
use that rate for all frames transmitted? Or should they adjust their pixel
clock rate based on the current frame rate? Using the maximum possible pixel
clock supported by the devices and the copper (or fiber or RF spectra)
connecting them would reduce transmission-induced latency but might increase
power consumption slightly.

A signaling method would need to be devised. Should the display interface
protocol be verbose, with the video source announcing frame rates or frame
times in advance to the display device? Or should the video card just start
scanning out pixels whenever it wants to, and the display just has to deal
with it?

Buffering techniques in the display would need to be considered. Should the
LCD (or other) panel be updated as the pixels come in, or should a full frame
be buffered first? How quickly could a buffered frame be shifted into the
display panel? Given the fact that gamers can adapt to and extract greater
temporal information from a higher framerate signal with tearing on a fixed
rate display, there may be some benefit to scanning out the rows of an image
as they arrive (the top row would be displayed a full 16ms sooner than it
would otherwise).

Software would need a method of informing the video card that it's done
drawing a frame. To a large extent this already exists in the form of the
buffer swap call, but in a variable refresh rate system, there would be less
need for double buffering to prevent artifacts. The application could draw to
the scanout framebuffer as it pleases, tell the video card to send a scan to
the display, then wait for the video card to notify it that the scan is done.
Double buffering would still be used in games and applications that don't want
to wait for one frame to finish scanning before drawing the next.
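
As a sketch of the control flow I have in mind (every name here is
hypothetical, not an existing API):

    
    
      # Hypothetical single-buffered VFR present loop (illustrative names only)
      def vfr_render_loop(game, gpu, framebuffer):
          while game.running:
              game.update()
              game.draw_into(framebuffer)         # draw straight into the scanout buffer
              scan = gpu.start_scan(framebuffer)  # tell the card the frame is finished
              scan.wait_until_done()              # display notifies us when it has the frame
              # no vblank to wait for: the next frame begins as soon as we're ready
    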

FURTHER THOUGHTS:

It would be interesting to go beyond VFR and create a display standard that
can update selected regions of the screen (AKA dirty rectangles). For example,
a video card could send a hypothetical "Start Scan" packet to the display
device that contains the location and size of the region being updated, then
stream raw pixel data that the display device fills into the updated region.
For that matter, the updated region needn't be rectangular.
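
Such a "Start Scan" header could be very small; for example (layout invented
here purely for illustration):

    
    
      import struct
      
      # Hypothetical "Start Scan" header: x, y, width, height as 16-bit unsigned
      # ints, followed by width*height raw 24-bit RGB pixels for the region.
      START_SCAN = struct.Struct('<4H')
      
      def start_scan_packet(x, y, w, h, pixels):
          assert len(pixels) == w * h * 3   # raw 8-bit-per-channel RGB payload
          return START_SCAN.pack(x, y, w, h) + pixels
      
      pkt = start_scan_packet(100, 50, 2, 2, bytes(12))
      print(len(pkt))  # 8-byte header + 12 bytes of pixels = 20
    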

What about variable resolution display updates as well? This seems to make
less sense in discrete pixel digital displays, but might find application when
displaying low-DPI content in a subset area of a high-DPI/"retina" display.

It seems that a VFR display chain is a lot like double or triple buffering was
back in the CRT days, but one of the buffers has been moved into the display
device itself.

------
KVFinn
Video of the panel discussion between Carmack, Sweeney, and Johan (lead on
EA's Frostbite engine):

[http://www.twitch.tv/linustech/b/471263848?t=2h27m](http://www.twitch.tv/linustech/b/471263848?t=2h27m)

Carmack answered some questions about it on twitter right after as well:

[https://twitter.com/ID_AA_Carmack/with_replies](https://twitter.com/ID_AA_Carmack/with_replies)

~~~
keyle
Thank you for the twitch link!

------
cobrausn
Submitted this a bit earlier with a link to the actual press release, but it
never made front page.

[http://www.geforce.com/whats-new/articles/introducing-
nvidia...](http://www.geforce.com/whats-new/articles/introducing-nvidia-g-
sync-revolutionary-ultra-smooth-stutter-free-gaming)

------
csense
Consider another revolutionary technology from the same company, Optimus. When
it first came out, it took a number of years and a public roasting by Linus
Torvalds [1] to convince nVidia to officially support it on Linux.

Hopefully, with Valve and increasing numbers of indies supporting Linux,
nVidia will learn from their previous mistakes and offer official Linux
support for G-Sync from the beginning.

[1]
[http://www.phoronix.com/scan.php?page=news_item&px=MTEyMTc](http://www.phoronix.com/scan.php?page=news_item&px=MTEyMTc)

~~~
devx
Optimus wouldn't have a reason to exist if, say, Nvidia made their "mid-range"
dedicated notebook GPUs clock at 200 MHz for "normal use" and then go to
600+ MHz in games or other high-end apps.

Of course, perhaps the company that led to the making of Optimus in the first
place is Intel, because they started bundling their GPUs and then charging
OEMs _more_ for the standalone CPU than for the bundle - and eventually
OEMs were like "why not just get both Intel's GPU and a higher-end Nvidia
one?"

If you ask me, I think Intel's move should've been declared anti-competitive
from the beginning. There's no way the bundle cost Intel less than the CPU,
but they priced it that way because they had a monopoly and could force OEMs
to just accept the deal "or buy the more expensive CPU if they don't like it",
which was obviously a non-option.

------
bcoates
This is absolutely brilliant. Pushing the clock from software seems like it'd
help the issue with syncing the encoded framerate on streaming video to the
display, as well. I wonder if the non-uniform refresh of a 60fps and 24fps
video on-screen at the same time would look weird.

If they're adding some cpu and a framebuffer on the display, maybe they can
start doing some compression for the cable link between GPU and display -- the
raw bitrate is proportional to resolution² ✕ color depth ✕ framerate but the
information rate doesn't go up nearly as fast as that increases. Even simple,
lossless PNG-style compression would be a huge gain on a 240hz/48bit/4k
display.
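
Back of the envelope for that last example (ignoring blanking and protocol
overhead):

    
    
      # Raw bitrate for an uncompressed 4K, 48-bit, 240Hz stream
      width, height = 3840, 2160
      bits_per_pixel = 48
      refresh_hz = 240
      
      raw_gbps = width * height * bits_per_pixel * refresh_hz / 1e9
      print(f"{raw_gbps:.1f} Gbit/s")  # ~95.6 Gbit/s before any compression
    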

~~~
Scene_Cast2
The flip side of the coin is that as soon as you're pushing more data than the
link can hold - say you encounter incompressible data, like some recursive
high-frequency Perlin noise I just generated - you would start getting stutter
and lag.

~~~
bcoates
Yeah, it couldn't be perfectly general. You could still keep the decoder
simple and just put a little more intelligence on the GPU side to handle
quantizing on the time/space/color dimensions appropriately when you try to
send a pathological image. Getting the few multiples of compression you'd need
to run 4k@48bit 240hz on current interfaces would probably be safe; trying to
put HD over a serial cable will probably go less well.

------
mistercow
This could also be nice for watching simple videos. It's subtle, but on smooth
pans, you can see the stuttering of a 24p video on a 60Hz display. If the
video player could tell the display to refresh at 47.952Hz (or any other
integer multiple of the frame rate) instead, it would clean that right up.

Combine that with morphing frame interpolation, and you could be watching
movies at just about exactly the rate your hardware can manage to push them
out.
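
You can see the uneven cadence just by counting how many refreshes each frame
is held for (a sketch using 24/60/48 for round numbers; the 23.976/47.952 case
behaves the same way):

    
    
      from math import floor
      
      def refreshes_per_frame(fps, hz, frames=8):
          # how many display refreshes each source frame stays on screen
          return [floor((i + 1) * hz / fps) - floor(i * hz / fps) for i in range(frames)]
      
      print(refreshes_per_frame(24, 60))  # [2, 3, 2, 3, ...] -> uneven hold times, judder on pans
      print(refreshes_per_frame(24, 48))  # [2, 2, 2, 2, ...] -> every frame held equally long
    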

------
wtallis
It is kind of surprising that CRTs have been dead for so long but we're still
driving our displays in pretty much the same fashion, except with reduced
blanking intervals. We still treat video connections as constant data-rate
streams, when we should be bursting frames to the monitor as soon as they
finish rendering and letting the monitor worry about when it will be able to
update which part of the screen.

~~~
devindotcom
That's how it would work with OLED, right? So we should have that in a few
years once density and cost catch up to traditional LED-backed LCDs.

~~~
wtallis
That's how it _could_ work on any of the common display technologies that
don't use a single electron beam tracing across the screen. Active-matrix
displays - be they OLED or TFT LCD - don't require that pixels be updated in
any particular order or at any specific frequency save for the minimum update
frequency that is analogous to DRAM refreshing.

The way we currently send pixel data to monitors is basically optimized to
make the monitor's electronics as simple as possible and to minimize the
bandwidth used at all times, even if the hardware is capable of communicating
at higher speeds. Just simply changing DisplayPort to always send pixel data
at the highest speed even when operating at less than the maximum resolution
supported by that link would result in a significant reduction in latency, by
no longer taking 16ms to send each frame (which almost all monitors fully
buffer in order to apply color transformations or scaling). The next step
after that would be to allow frames to start on irregular intervals, which is
apparently what NVidia's implementing. But it's still all just about how the
contents of the GPU's framebuffer are transmitted to the monitor's
framebuffer, and is in no way dependent on what kind of technology is
downstream of the monitor's framebuffer.
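
Rough numbers for the 16ms claim (a sketch that ignores blanking and encoding
overhead; 17.28 Gbit/s is roughly DisplayPort 1.2's effective data rate):

    
    
      # Time to deliver one 1080p frame: paced over a 60Hz refresh vs. burst at link speed
      frame_bits = 1920 * 1080 * 24            # ~50 Mbit per frame
      
      paced_ms = 1 / 60 * 1000                 # pixels spread across the whole refresh interval
      burst_ms = frame_bits / 17.28e9 * 1000   # burst at the link's full ~17.28 Gbit/s
      
      print(f"{paced_ms:.1f} ms vs {burst_ms:.1f} ms")  # ~16.7 ms vs ~2.9 ms
    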

------
Amadou
Sounds great. But it should be in a standard; keeping this proprietary to
Nvidia means it will only be available on a limited number of monitors, and
we'll have to cross our fingers that AMD doesn't do their own version.

Put it in the next HDMI spec (and rev the spec fast, not in another 2 years
or so).

------
hedgehog
I'm curious how similar this is to Embedded DisplayPort's panel self refresh
feature. It would be neat to be able to run the display at exactly 24 FPS for
movies, down lower when staring at code (to save power), and then dial it up
to 100+ Hz for scrolling and animations.

------
TheZenPsycho
I've never understood what is so difficult about making software that can
generate frames at a fixed rate, and I don't understand this product- I don't
understand how having a variable refresh rate would do anything other than
harm smoothness, and encourage more bad software.

Maybe someone can clarify this for me, but what is so wrong with writing
software that can just meet the frame deadline? Maybe the hardware innovation
should be hardware and drivers that help you do vertical sync more reliably?

~~~
aethr
If your renderer can draw 40fps in the highest density scenes (lots of
polygons, particles, and effects) on specific hardware, then that is the most
you'll be able to guarantee to that user without sacrificing detail. That will
be your "fixed frame rate".

However, in simpler scenes the same renderer is likely to output significantly
more frames (even up to thousands) at the same detail level.

So in this scenario, all you're doing by setting a fixed rate is throwing away
tons of frames in low density scenes. Most gamers would prefer to have the
most fps possible at any given moment, even if it means variability.

The most hardcore gamers I know use 120hz monitors, and machines that can
deliver 241fps (120 * 2 + 1) in the highest detail scenes. They then set the
engine to cap frames at 241fps which will eliminate tearing, negating the need
for this technology. However, their gaming machines cost a LOT, so this would
deliver similar results for a much wider range of hardware.

~~~
TheZenPsycho
If you're generating thousands of frames and throwing them away, you wrote
your rendering software wrong. I ask again, what is so impossible about _just_
rendering the 60fps (just a bit under the limit of the rate at which a human
is able to perceive any difference), and then not rendering any more? Instead
of rendering as fast as you can, make the different trade-off of always
meeting the deadline.

 _sigh_ why am I explaining this again? is it really hard to understand? why?

~~~
patrickyeon
Your question was answered; it has nothing to do with the case where you're
"generating thousands of frames and throwing them away". What you want is a
rendering engine that will perform at 60fps in the worst case. What engine
devs want to write is an engine that can do better (even much much better)
than 60fps in the average case, and be allowed to slip in those pathological
cases. Gamers want more frames. More frames than is noticeable. They want some
slack so that if something totally unrelated to the game ties up the machine,
the framerate drop is not noticeable. They want to be able to double it so
that they can drive a 3D display but still have the same effective framerate
per eye.

Having a consistent 40fps is much worse (for a gamer) than a variable
framerate that will dip down to 40fps for 1% (or 10%) of the play time. Having
to limit your most complex scene to what can be guaranteed to render at 60fps
is much less appealing to a developer than making sure all the likely scenes
can render at 60fps.

~~~
TheZenPsycho
> Having a consistent 40fps is much worse (for a gamer) than a variable
> framerate that will dip down to 40fps for 1%

Stuttering animations are better than smooth animations?

Stuttering is better than smooth. Gotcha.

> Your question was answered;

For someone who doesn't appear to understand what I'm asking, you have a high
degree of confidence that I've been answered.

What is so terrible about having a lower complexity budget that guarantees
60fps? What if you had 60 fps no exceptions as a constraint in your hardware
and software design, how far could you really go with some creativity? Think
about it- is having a complexity ceiling the _only possible_ way to ensure
60fps?

~~~
KVFinn
> What is so terrible about having a lower complexity budget that guarantees
> 60fps?

> Think about it- is having a complexity ceiling the only possible way to
> ensure 60fps?

I'm really not following you. Are you asking what is the benefit of this
technology when games come out every day at 60fps even now? This technology
allows them to get the same fidelity and smoothness on less powerful hardware,
with more complicated simulations, and lower latency.

------
pgz
This also looks really good for video playback (as in movies). It would avoid
stuttering from pulldown. I am quite hyped about this.

------
beloch
This holds some potential for media playback as well, since matching display
frame-rate to source material is usually necessary to avoid jerky playback
(Some people are very sensitive to any kind of micro-stutter while others
don't seem to notice). Of course, playback software will have to support it
(easy) and it may be a while before HDTVs or projectors include Nvidia's
chip, if that even happens at all! However, if it does happen, this will
effectively signal the end of adjusting frame-rate on displays. Even simply
allowing higher refresh rates than 60 Hz could clear up a lot of problems.

I don't think DisplayPort has sufficient bandwidth for 4K video at 144 Hz,
unfortunately. An upgrade to DisplayPort may be necessary. Given that HDCP has
proven to be utterly pointless, one would hope DRM gets the boot. Encrypting
and decrypting data at over 20 Gbit/s is probably a PITA when you're aiming
for low latency and low cost.

------
vilhelm_s
What exactly does the extension card do? 1920x1080 24-bit pixels is 6MB, so
why does it need 768MB of on-board memory?

~~~
kevingadd
There are games that can use up 2GB of on-board GPU memory right now, for
textures/buffers/etc. It won't be difficult to use up as much as 8GB in the near
future when panel resolutions get higher - higher panel resolution means a
demand for higher resolution textures.

Given this, I'm sure there are things they can do with that onboard memory in
the display. Maybe buffer up a half dozen video frames with timing data for
smooth high res video playback?

~~~
phire
But we aren't moving the GPU to the screen.

With its currently advertised feature set, there is no need for that much
RAM.

Assuming future-proofing support for 4k monitors with framebuffers in 16-bit
floating point format, that's still only 48MB per buffer, and there is no
reason to have more than two buffers in a screen.
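
(Back of the envelope, assuming three 16-bit channels per pixel:)

    
    
      # 4K frame with three 16-bit float channels per pixel
      print(3840 * 2160 * 3 * 2 / 2**20)  # ~47.5 MiB, i.e. the ~48MB figure above
    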

~~~
Dylan16807
Maybe they were 256 megabit chips.

------
seanalltogether
So is the idea here that the buffer is actually on the monitor now and no
longer on the GPU? Is that why there is 3x256MB of memory on the controller?
Perhaps the GPU just pushes bits across the wire as soon as it's ready now? And
the monitor is now responsible for maintaining a localized triple buffer
mechanism?

~~~
cobrausn
The idea is that the monitor will only refresh after it is given a frame to
render from the graphics card, which makes the monitor sync to the framerate
of the graphics card, not the other way around. From what I can tell, this
basically means instead of having a fixed refresh rate, the modified monitor
now has a variable refresh rate, slaved to the output of the GPU.

~~~
seanalltogether
Right, but a traditional 1920x1080 monitor only needs ~6MB of memory to store
a frame of data to flush to the screen. So why introduce 3x256MB of memory on
the controller? Unless the idea is to move away from managing frame buffers on
the GPU.

------
danmaz74
Am I missing something, or would this make a real difference only if your
monitor has a low top refresh rate? If the refresh rate is 144 Hz, like the
one they talk about in the article, I wouldn't expect waiting for vsync
to have a big impact on the fps.

~~~
KVFinn
If you're running at 144, the frame rate drops to 72 whenever the game can't
maintain it. Carmack says when this happens in VR it's like being kicked in
the head, but they can't disable vsync because tearing is even worse.

Unless you triple buffer, which adds its own problems, including more
latency. But this tech is also about the feel -- even at the 'same' frame rate
the subjective impression is of a much smoother game. For obvious reasons this
can't be captured in a video, so it's tricky to sell; they have to show it to
you in person.

Their slide showing vsync lag:

[http://images.anandtech.com/doci/7432/NVMontreal-097.jpg](http://images.anandtech.com/doci/7432/NVMontreal-097.jpg)

Anand has some of his personal impressions posted already:

[http://www.anandtech.com/show/7436/nvidias-gsync-
attempting-...](http://www.anandtech.com/show/7436/nvidias-gsync-attempting-
to-revolutionize-gaming-via-smoothness)

"I can't stress enough just how smooth the G-Sync experience was, it's a game
changer."

~~~
danmaz74
72 FPS looks very high to me, but I never tried VR. Would be curious to see
the difference there.

------
nivla
Here is a great read about V-Sync and how it tries to fix the disparities in
refresh rates:

[http://hardforum.com/showthread.php?t=928593](http://hardforum.com/showthread.php?t=928593)

Contrast it with what G-Sync is trying to achieve.

~~~
aspensmonster
Reading that, it sounds like triple buffering is already a better solution to
this problem. It sounds simpler than requiring a specially embedded DSP from
Nvidia in your monitor to talk with your Nvidia card (and _only_ with your
Nvidia card).

~~~
vinkelhake
Triple buffering does not solve the problem. If your monitor is locked at 60Hz
and your rendering can't produce more than, say, 55 frames per second then
you're going to have to display the same frame for multiple monitor refresh
cycles. The monitor will typically show every rendered frame for 2 refresh
cycles, with triple buffering you may be able to occasionally show a frame for
just one cycle.

With something like G-Sync you can instead show 55 frames per second.
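
The uneven pacing is easy to see if you count how many refreshes each frame
stays up for (an idealized sketch: frames finish every 1/55 s and each 60Hz
refresh shows the newest completed one):

    
    
      from collections import Counter
      
      REFRESH_HZ, RENDER_FPS = 60, 55
      
      # which completed frame each of the 60 refreshes in one second picks up
      shown = [(r * RENDER_FPS) // REFRESH_HZ for r in range(REFRESH_HZ)]
      held = Counter(Counter(shown).values())
      
      print(held)  # Counter({1: 50, 2: 5}): 5 frames stay up twice as long as the rest
    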

~~~
aspensmonster
My understanding from nivla's link is that triple buffering does solve the
problem. Rather than experiencing the Frequency/N fps drop when the card can't
push out enough frames, triple buffering lets the display's framerate stay at
the framerate the card is pushing, with the price being VRAM overhead.

But perhaps I'm misunderstanding the problem statement. The link talks merely
about fps drops and how triple buffering can permit the display to be more
efficient than Frequency/N fps by essentially pipelining the frames over 3
buffers to keep the monitor's framerate at the graphic card's framerate.
However, it accomplishes this by having some frames stay for 2 cycles and
others for 1. I don't know how noticeable this is. Perhaps it's a perceptible
problem, in which case being able to dynamically manipulate the display's
refresh rate is a suitable solution. Granted, at the current price range it
sounds like a cheaper solution would be to simply buy a newer model graphics
card that can maintain an FPS greater than or equal to your monitor's refresh
rate. Though of course I'm sure the goal is to drive the price down and it
appears they already have display partners lined up to integrate their
embedded DSP.

~~~
jblow
Triple-buffering doesn't help that much, and it adds an extra frame of
latency, which is really bad.

------
shurcooL
For anyone excited about this, check out
[http://www.blurbusters.com/](http://www.blurbusters.com/). It's a blog
dedicated to everything-refresh-rate.

------
jswanson
One thought on this is that this technology, or something related, would be
beneficial to Oculus. Carmack already had a comment on it:

    
    
      G-Sync won't work on any of the display panels Oculus is considering right now, but it will probably be influencing many future panels.
    

[https://twitter.com/ID_AA_Carmack/status/391299110344867841](https://twitter.com/ID_AA_Carmack/status/391299110344867841)

------
staunch
Any benefit if your game is rendering 120fps and your monitor is doing 120hz?
Even though they're the same speed they're not necessarily in sync?

~~~
wmf
Almost no game can render at exactly a target frame rate, and if so it means
the game isn't pushing the GPU to the limit. For any graphically awesome game
the frame rate will vary and G-Sync should help in that case.

~~~
staunch
Nah, there's plenty of (mostly a bit older) games that my fairly powerful
gaming PC can max out on FPS at very high rates.

------
anonymous
I have been wondering for a while what reason there is for LCD displays to
have refresh rates at all. There is nothing to refresh per se. Shouldn't the
display just tell the videocard it can handle frames as long as they're at
least some ms apart, then the videocard pushes the frames when they are ready,
whenever that may be?

It makes way more sense to me than carrying over the limitations of old (CRT)
technology.

~~~
jblow
You are correct. The whole reason things are set up this way is historical,
compatibility with CRTs.

------
ConAntonakos
With this, ShadowPlay, and GameStream, I'm very excited for the near future of
gaming. NVIDIA is really streamlining the experience for PC gamers, providing
amenities like TwitchTV streaming via hardware acceleration.

They're going to make hardware manufacturers that focus strictly on streaming
capabilities go out of business.

------
zokier
Between this and Wayland, hopefully I will never ever see a torn or
mistimed (stuttering) frame again. I can't wait.

------
devx
Will these work with other GPUs? How hard is it for competitors to replicate?
And will all displays have to support 10 different GPU makers for this
feature, if there's no standard set?

------
Demiurge
I don't think I'd like the LCD refresh rate to go anywhere below 60Hz. 30
seems really quite bad, duplicated frames or not.

------
pantalaimon
Are my eyes just that bad or is this solving a problem I didn't ever notice
exists?

------
JanezStupar
Where is my Optimus driver for Linux?

