
Nvidia announces RTX 2000 GPU series with ‘6x more performance’ and ray-tracing - albertzeyer
https://www.theverge.com/2018/8/20/17758724/nvidia-geforce-rtx-2080-specs-pricing-release-date-features
======
bitL
Hmm, a 24-42% performance increase (14/16 TFLOPS vs 11.3 TFLOPS) for a 70-90%
price increase... And prices were already inflated by crypto, which is now
collapsing. Not sure who the target market for this tech is, honestly. Still
only 11GB of RAM, even if 50% faster, making it a nonsense purchase for deep
learning enthusiasts (state-of-the-art models are already larger).

Unless somebody invents an RTX-based coin, of course; then this is the minimum
price...
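
For concreteness, here's that back-of-the-envelope math in Python (TFLOPS
figures and street prices as quoted in this thread, not official numbers):

    old_tflops = 11.3                  # GTX 1080 Ti
    for new_tflops in (14.0, 16.0):    # quoted range for the 20-series
        print("compute: +{:.0f}%".format((new_tflops / old_tflops - 1) * 100))
    # -> compute: +24% / compute: +42%
    
    street_1080 = 450.0   # ~Newegg price mentioned downthread
    fe_2080 = 799.0       # 2080 Founders Edition price
    print("price: +{:.0f}%".format((fe_2080 / street_1080 - 1) * 100))
    # -> price: +78%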

~~~
aqme28
Where are you getting a 70-90% price increase?

1080s are ~$450, will now be $799. 1070s are ~$400, will now be $599.

(I'm going off the top hits on Newegg)

~~~
carbocation
For the 1080, by your numbers, $799/$450 represents a ~78% price increase.

~~~
Retric
First, those are Founders Edition prices, which carry an extra $100 premium.
[https://news.ycombinator.com/item?id=17802871](https://news.ycombinator.com/item?id=17802871)
Also, 1080s did not start at $450 each.

The 2080's price will drop eventually; until then, nVidia has no reason to
lower prices when the cards are going to be sold out for weeks, if not months.

------
whichdan

                          2080 Ti FE     RTX 2080 Ti    GTX 1080 Ti
      Price               $1,199         $999           $699
      GPU Architecture    Turing         Turing         Pascal
      Boost Clock         1635 MHz       1545 MHz       1582 MHz
      Frame Buffer        11 GB GDDR6    11 GB GDDR6    11 GB GDDR5X
      Memory Speed        14 Gbps        14 Gbps        11 Gbps
      Memory Interface    352-bit        352-bit        352-bit
      CUDA Cores          4352           4352           3584
      TDP                 260W           250W           250W
      Giga Rays           10             10             ?
    
    
                          2080 FE        RTX 2080       GTX 1080
      Price               $799           $699           $549
      GPU Architecture    Turing         Turing         Pascal
      Boost Clock         1800 MHz       1710 MHz       1733 MHz
      Frame Buffer        8 GB GDDR6     8 GB GDDR6     8 GB GDDR5X
      Memory Speed        14 Gbps        14 Gbps        10 Gbps
      Memory Interface    256-bit        256-bit        256-bit
      CUDA Cores          2944           2944           2560
      TDP                 225W           215W           180W
      Giga Rays           8              8              ?
    
    
                          2070 FE        RTX 2070       GTX 1070 Ti    GTX 1070
      Price               $599           $499           $449           $399
      GPU Architecture    Turing         Turing         Pascal         Pascal
      Boost Clock         1710 MHz       1620 MHz       1607 MHz       1683 MHz
      Frame Buffer        8 GB GDDR6     8 GB GDDR6     8 GB GDDR5     8 GB GDDR5
      Memory Speed        14 Gbps        14 Gbps        8 Gbps         8 Gbps
      Memory Interface    256-bit        256-bit        256-bit        256-bit
      CUDA Cores          2304           2304           2432           1920
      TDP                 185W           175W           180W           150W
      Giga Rays           6              6              ?              ?

~~~
Keyframe
2080 Ti FE: 1,259 euros. All US companies follow the same simple formula:
convert $ to € at 1:1, add 25%, then add 50-60 euros for "free shipping".

~~~
bitL
It's the 2-year warranty, translation of manuals, and the need to have an
office in every major EU market due to differing legal requirements that make
it all pricier.

~~~
Zach_the_Lizard
Don't forget taxes. US prices are typically listed without tax included,
whereas I'm guessing the European ones include VAT.

~~~
Keyframe
Most probable explanation. VAT is 19% in Germany. The €1,259 price is ~$1,443
at current exchange rates, while $1,199 * 1.19 ≈ $1,427 with German VAT
(~$1,439 with Austria's 20% VAT). Seems about right.
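
In code, assuming a ~1.15 USD/EUR rate (roughly the current rate):

    usd_msrp = 1199
    print(usd_msrp * 1.19)   # with German VAT (19%)   -> 1426.81
    print(usd_msrp * 1.20)   # with Austrian VAT (20%) -> 1438.8
    print(1259 * 1.146)      # EUR list price in USD   -> ~1442.8
    # VAT accounts for nearly the entire EU/US price gap.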

------
etiam
Since the post with the product page got merged here:

[https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/](https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/)

Most of it is dramatically lit renderings of fans and vacuous marketing-speak
of course, but there's a tidbit near the bottom of the page about NVLink I
find interesting.

    
    
      "GeForce RTX™ NVLINK™ Bridge
    
      The GeForce RTX™ NVLink™ bridge connects two NVLink SLI-ready graphics cards with 50X the transfer bandwidth of previous technologies."
    

I guess NVLink is finally becoming relevant as a consumer technology then? Do
you think this will be the GPU generation when consumer motherboards capable
of pushing things to the GPU by NVLink will appear as well?

~~~
PeCaN
> Do you think this will be the GPU generation when consumer motherboards
> capable of pushing things to the GPU by NVLink will appear as well?

First you'll need CPUs that support NVLink, and that's currently limited to
the not-so-consumer-oriented POWER9.

~~~
lightsighter
NVLink works fine between GPUs even if the CPUs are x86. Just look at the
NVIDIA DGX boxes/stations.
[https://devblogs.nvidia.com/wp-content/uploads/2017/04/NVLink-topology-e1491278374674.png](https://devblogs.nvidia.com/wp-content/uploads/2017/04/NVLink-topology-e1491278374674.png)

------
TwoNineA
Title is clickbait; the 6x performance gain is for ray tracing only, which is
expected, since Pascal cards didn't have the ray-tracing muscle of the new
cards.

~~~
alkonaut
What are the approximate gains over the last gen, i.e. 1080->2080 or
1070->2070, for a non-RTX-enabled game? It would be speculation at this point,
I assume, but just going by clock frequencies, memory bandwidth, and the
number of units on the chip?

~~~
test6554
A 24-42% improvement, according to an earlier top comment...

------
bratao
I really hope the next generation of AMD Radeon has good performance and that
their efforts to port TensorFlow and other frameworks work out.

AMD needs to put NVIDIA under the same heat that Intel got from the Zen
architecture.

NVIDIA is too comfortable, with very high prices and crazy demands (you can't
use consumer cards in data centers).

~~~
ndesaulniers
What effort to port tensorflow?

~~~
y2kenny
[https://gpuopen.com/rocm-tensorflow-1-8-release/](https://gpuopen.com/rocm-tensorflow-1-8-release/)
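
A minimal sanity check one might run against that ROCm TensorFlow 1.8 build
(plain TF 1.x API; nothing here is ROCm-specific, which is rather the point):

    import tensorflow as tf
    
    print(tf.test.is_gpu_available())  # True if the Radeon card is visible
    
    with tf.device("/gpu:0"):
        a = tf.random_normal([1024, 1024])
        b = tf.random_normal([1024, 1024])
        c = tf.matmul(a, b)
    
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        sess.run(c)  # the placement log should show the matmul on the GPU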

~~~
ndesaulniers
Upstream it.

------
ocdtrekkie
The prices are obscene. The '70 tier has usually been my tier of choice, but
they're sliding that up from the $400 range to the $600 range.

I actually just scored a 1080 in the realm of $400, presumably because crypto
demand dived, and they had a lot of stock coming into the new generation
announcement. I'm glad I got it, because it's going to be a while before I
touch cards at these price points. Presumably when they filter down to a 2060
or something it'll be affordable?

~~~
wmf
I think I spent $300 for a 970 and I've been keeping an eye out for something
that has 2x the performance at the same price. I guess the replacement will be
the 2050?

~~~
voltagex_
I think I'm in the same boat as you - but I'm hoping for 1.5x performance and
a bit less heat / power usage.

I think I'll leave the "4K" gaming to the Xbox One X.

------
gambiting
The prices are absolutely insane. I was prepared to pay £599 for the 2080, but
£749 for a non-Ti card is a joke. It feels to me like Nvidia knows there's no
competition from AMD so they can ask for literally any price and people will
pay it.

~~~
css
Remember that in the US you have to factor in the upcoming 25% import tariff.

~~~
BigJ1211
Which has nothing to do with the prices in Europe.

~~~
css
Not true. Prices in non-US countries are almost always the US price plus extra
import duties, VAT, and other regulatory fees (thanks, EU!).

------
ErneX
Some clips of the Battlefield V demo real-time reflections:

[https://twitter.com/dan_mitre/status/1031599285576650757](https://twitter.com/dan_mitre/status/1031599285576650757)

[https://twitter.com/dan_mitre/status/1031599199316529158](https://twitter.com/dan_mitre/status/1031599199316529158)

[https://twitter.com/dan_mitre/status/1031598556585582595](https://twitter.com/dan_mitre/status/1031598556585582595)

~~~
Eridrus
The water reflections are better, but the non-RTX water is already pretty
good; the last link with the fire reflection is pretty baller though.

------
visionscaper
Is anything known about the number of tensor cores in the new 2080 Ti? What
about the float16 performance using the Tensor cores? I couldn't find this on
the spec page [1].

[1] [https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/](https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/)

~~~
asparagui
A couple of sites have said 384 TCs, which would be roughly 50% of the
V100/new high-end Quadro card. I would like to see a confirmed source as well.

~~~
mattnewport
Yeah, it seems at the moment there's no confirmation of the degree to which
they're actually enabled, either. It may well be that NVidia will make them
available for inference via DirectX, so games can do AI-based denoising and
anti-aliasing, but artificially limit support in frameworks for training, so
as not to eat into the Quadro/Tesla markets. I'm holding off on ordering one
until I see more info on that.

~~~
visionscaper
> It may well be that NVidia will make them available for inference via
> DirectX, so games can do AI-based denoising and anti-aliasing, but
> artificially limit support in frameworks for training, so as not to eat
> into the Quadro/Tesla markets. I'm holding off on ordering one until I see
> more info on that.

Good point; that would not be cool. I'll hold off on ordering until more is
known about this.

------
magnat
Using a DNN to generate higher-resolution images[1] in real time, at a
fraction of the performance cost, is quite brilliant. Is there any info on
whether this is Nvidia proprietary technology or a general feature of DirectX
Raytracing that will eventually be available from AMD and Intel as well?

[1] Though showing CT/medical imaging as one of its applications is a big no-
no
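
Nvidia hasn't published the network, so purely as a sketch of the general
idea: an SRCNN-style model does a cheap upscale, then a small convnet
predicts the missing high-frequency detail (TF 1.x here):

    import tensorflow as tf
    
    def sr_net(low_res, scale=2):
        # Cheap bilinear upscale first, so the convnet only adds detail.
        size = tf.shape(low_res)[1:3] * scale
        upscaled = tf.image.resize_bilinear(low_res, size)
        x = tf.layers.conv2d(upscaled, 64, 9, padding="same",
                             activation=tf.nn.relu)
        x = tf.layers.conv2d(x, 32, 5, padding="same",
                             activation=tf.nn.relu)
        residual = tf.layers.conv2d(x, 3, 5, padding="same")
        return upscaled + residual  # cheap upscale + learned detail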

~~~
brandonjm
> Is there any info on whether this is Nvidia proprietary technology or a
> general feature of DirectX Raytracing that will eventually be available
> from AMD and Intel as well?

This is a general feature of their RTX 'pipeline', not just DirectX, so I
imagine Vulkan will introduce support eventually. Whether AMD will be able to
put it into GPUs is a different story, but it appears they are doing some of
their own work in the TensorFlow area, so they may provide their own
equivalent.

> Though showing CT/medical imaging as one of its applications is a big no-no

Why is that? They weren't showing it as an application of their higher res
image DNNs, that was just an example of where they have used DNNs in general.

------
djsumdog
I feel like there is a hard limit on rendering tech right now. We saw 4K
upgrades to the PS4/Xbox, and next-gen consoles might be a way off.

I thought the gaming industry was waiting for hardware that would let you
render Marvel-movie-style graphics; things that just barely cross the uncanny
valley and give the player comic-book-film-style immersion.

The demo video they posted does look pretty impressive. Is this the generation
of video cards that will take us there, or if not, what is the next big
innovation in gaming/graphics going to be?

~~~
elihu
One of the major challenges when trying to generate realistic, believable
graphics is correctly modeling ambient light. Without that, adding higher
resolution displays or finer polygon meshes won't get you there.

The most efficient and accurate methods of modeling ambient light are
fundamentally based on ray tracing. Path tracing is probably the best known
and most popular, photon mapping is another. They are both based on random
sampling, so the more rays you can trace per frame, the more accurate the
lighting calculations are. (Inaccuracies tend to manifest as graininess.)
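
A toy illustration of that graininess, sketched in Python: a Monte Carlo
estimate's error shrinks like 1/sqrt(N), so every 4x increase in rays per
pixel only halves the noise.

    import math, random
    
    def estimate_pixel(n_rays):
        # Stand-in for integrating incoming light over the hemisphere;
        # the "true" radiance of this pixel is exactly 0.5.
        return sum(random.random() for _ in range(n_rays)) / n_rays
    
    for n in (1, 16, 256, 4096):
        trials = [estimate_pixel(n) for _ in range(1000)]
        rms = math.sqrt(sum((t - 0.5) ** 2 for t in trials) / len(trials))
        print("{:5d} rays/pixel -> RMS error {:.4f}".format(n, rms))
    # Real-time budgets of a few rays per pixel sit at the noisy end,
    # hence the heavy reliance on denoising.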

~~~
AstralStorm
With denoising in place, the inaccuracies manifest as low resolution (similar
to pixelation or blur, but not as even).

Apparently nVidia is using a variant of beam tracing, if patents are to be
believed.

~~~
elihu
That's about what I'd expect. If they're doing it sensibly, though, it should
only affect the resolution of ambient light effects, not the portion of the
rendering that comes from primary rays. So, the geometry would have sharp
edges, but the caustics might be blurry. Seems like a good tradeoff if it
drastically reduces the overall computation.

------
nabla9
Any idea how these GeForce RTXs perform in deep learning applications compared
to the Quadro RTX (price/performance ratio)?

I assume they might have decent inference speed, but how about training?

An $800 card would be nice for initial testing and development in a personal
desktop machine. Or are they somehow crippled?

~~~
asparagui
These will make a good card to experiment with; I am going to purchase one
when I can. More RAM is nice for larger models, but you can do a lot with 8GB
of RAM, especially if you are just getting going. Do not let worrying about
having the latest and greatest hardware keep you from getting started. Time is
the most important resource to have, not spec sheets.

These cards look like they will have Tensor Cores, which should give a
significant boost (e.g. ~50% of P100/V100 performance for certain operations
at ~10% of the cost) over the 1080 series.

The Quadros have higher-spec RAM (think ECC vs. non-ECC), which matters for a
workstation GPU, but the price difference goes up in a hurry. My suggestion
would be to learn on an RTX; you can rent a Quadro later if you need more
power to play with.

~~~
nabla9
> These cards look like they will have Tensor Cores,

What makes them look like that? Do those cores support FP32 precision? I'm
genuinely interested.

~~~
bitL
No; they support precisions from INT4 up to FP16. Inference can get a nice
boost.
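
As a rough sketch of what that means in framework code, assuming the tensor
cores are exposed to frameworks at all (the open question upthread): cast to
float16 so the matmuls are eligible to run on them.

    import tensorflow as tf
    
    # float16 weights/activations; tensor cores want dims divisible by 8
    x = tf.placeholder(tf.float16, [None, 1024])
    w = tf.Variable(tf.random_normal([1024, 1024], dtype=tf.float16))
    y = tf.matmul(x, w)  # may hit tensor cores; FP32 math will not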

------
outworlder
Why would we want ray tracing for 'gaming', as opposed to the rasterizers we
have now? Is this a gimmick, or are there algorithms for post-processing and
the like that could take advantage of this?

~~~
MikeLui
I believe there are a lot of “tricks” engineers have developed to accomplish
rendering goals that would be more natural with ray tracing. The added
simplicity would lower the barrier of entry for more complex effects.

------
gabcoh
Does anyone know how they are offering better ray-tracing performance? Are
there new APIs or something?

~~~
bearjaws
This is exactly what I'm wondering. With the next-gen consoles confirmed to
use AMD CPUs/GPUs, I doubt there will be much ray-tracing tech in the next
generation, which means we're looking at 3-5 years before this tech can be
mainstream.

These GPUs will be antiquated by the time ray tracing can gain any real
traction...

~~~
Rebelgecko
I believe that lots of games already do ray tracing/casting when they render
(especially for things like lighting and reflections), so there can be
benefits even before games switch to 100% ray tracing.

~~~
penagwin
For the most part it really isn't "ray tracing"; it's _close_, but almost
everybody "cheats" because of the performance gains.

Even if some games do use ray tracing for some things, the developers would
have to update their games to support the features of a currently very
expensive and uncommon card. Not to mention that, according to Steam, most
gamers have a GTX 1060 or less, which gives you an idea of where the current
"mainstream" is.

------
andybak
Anyone know if the raytracing cores on these things can be used to accelerate
Shadertoy style raymarching stuff? This technique has been threatening to
break through into the mainstream for a while now.

~~~
Impossible
This DirectX raytracing sample does a variant of shadertoy style raymarching.

[https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/Samples/Desktop/D3D12Raytracing/src/D3D12RaytracingProceduralGeometry](https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/Samples/Desktop/D3D12Raytracing/src/D3D12RaytracingProceduralGeometry)

This is basically a rewrite of iq's primitive shadertoy using DXR, with
additional, non-SDF objects like metaballs.
[https://www.shadertoy.com/view/Xds3zN](https://www.shadertoy.com/view/Xds3zN)

I'm not sure how much of a speedup you'd get from using this API, though. Most
of the complex shadertoy shaders are unoptimized and brute-force, given the
limitations of the environment, and you can render similar scenes without a
raytracing API in more efficient ways. The talk on Claybook at GDC 2018 and
Alex Evans' SIGGRAPH 2015 talk are good examples of ways to scale procedural
signed distance field rendering. They both assume dynamic scenes or
user-edited content; there are probably other tricks that could be used to
accelerate mostly static content.
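
For anyone unfamiliar with the technique being discussed, a minimal
sphere-tracing loop (the core of shadertoy-style raymarching) looks roughly
like this; Python for readability:

    import math
    
    def sdf_sphere(p, center=(0.0, 0.0, 3.0), radius=1.0):
        # Signed distance from point p to the sphere's surface.
        return math.dist(p, center) - radius
    
    def raymarch(origin, direction, sdf, max_steps=64, eps=1e-4):
        t = 0.0
        for _ in range(max_steps):
            p = tuple(o + t * d for o, d in zip(origin, direction))
            d = sdf(p)
            if d < eps:
                return t   # hit: distance along the ray
            t += d         # safe step: nothing is closer than d
        return None        # miss
    
    print(raymarch((0, 0, 0), (0, 0, 1), sdf_sphere))  # ~2.0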

~~~
andybak
Links to the two talks mentioned (I think...):

[https://www.youtube.com/watch?v=Xpf7Ua3UqOA](https://www.youtube.com/watch?v=Xpf7Ua3UqOA)

[https://www.youtube.com/watch?v=u9KNtnCZDMI](https://www.youtube.com/watch?v=u9KNtnCZDMI)

------
ksec
Are these ray-tracing extensions and hardware going to be exclusive to Nvidia,
at least for the time being? Assuming AMD didn't know about it until now and
has to implement it from scratch, it would be at least 3 years before
ray-tracing hardware is included in their GPUs.

And in terms of efficiency, how much better is this hybrid ray tracing (HRT)
compared to current shadow maps and reflections? I assume the latter need a
lot of fine-tuning but are faster, while HRT is a lot simpler but slower?

I think right now the "highest" graphics quality isn't the bottleneck or the
most important part at all (whether the majority of gamers can run at the
highest graphics quality is a different question); we sort of reached that
stage some time after Crysis. It's about reducing the time designers and
developers need to reach that fidelity. I assume HRT will help designers get
what they want quicker?

Because somewhere along the line, most game developers came to only want to
give me the best-looking graphics possible, and the fun and storyline parts
seem to have faded. Apart from Nintendo.

P.S. To those calling this expensive: the $999 RTX 2080 Ti gets you a silicon
die size of 775mm², along with very fast GDDR6 memory. Even your top-of-the-
line CPU doesn't come close to this die size, let alone include the memory.

~~~
zamadatix
For the software side: all of the games demoed were using DXR (DirectX
Raytracing), for which RTX is just a backend. DXR works on any recent AMD GPU
(albeit slowly, due to the lack of dedicated hardware), and AMD was a hardware
partner in the development of DXR.

For the hardware side: AMD has not said anything about specialized hardware
for ray tracing. They almost certainly have known about it for quite a while,
but nobody (publicly, at least) knows if they ever started on this.

------
Sol-
Is ray-tracing a very large improvement for rendering quality or do you have
diminishing returns now that "conventional" rendering techniques already
employ such good heuristics and approximations to make the scenes look good?

Just curious, since I don't know much about the state of the art in rendering
and the implications of ray tracing (other than the fact that it's, somewhat
trivially by construction, the most accurate method).

~~~
mattnewport
A lot of the clever tricks employed to get effects like reflections, ambient
occlusion, soft shadows, and GI using rasterization are 'hacks' with a lot of
edge cases and artifacts. They're not terribly robust. Ray tracing lets you
get better quality with fewer artifacts because there's less 'cheating' going
on. You can get better quality in more situations with less manual tweaking,
if you can afford the performance cost.

Artifacts around the contact points of shadows and reflections, and artifacts
in reflections and ambient occlusion of off-screen objects, are places where
ray tracing can give much better visual results.

------
pier25
At these prices I think I will keep on playing 1080p/60Hz for a couple more
years.

------
mrb
Kinda disappointed that there is no improvement—in fact a small reduction—of
the raw GFLOPS computing power, compared to Volta:
[https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_proces...](https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_20_series)

~~~
jdietrich
Titan V is a $3000 card. Getting close to that performance on a $1000 card is
a big leap forward.

~~~
mrb
Good point, but what about the comparison with the old 10-series Titan Xp
(released at $1200)? The 20-series improved peak GFLOPS perf by a meager 10%.
I still would have expected more from upgrading from 16nm to 12nm. I guess
floating point perf wasn't a priority for Nvidia in this microarchitecture
refresh...

~~~
shaklee3
Much of the silicon is now used by RT cores and tensor cores. These were not
on the Titan xp, for better or for worse.

------
old-gregg
Putting AI/ML aside, I have a question for graphics experts and/or gamers:
what is the practical [1] ceiling of the current champion, the 1080 Ti? I was
under the impression that that card was already much more than most people
needed. In other words, is the new generation only useful for 4K gaming?

[1] By "practical" I mean on 1440p@144Hz monitors.

~~~
archagon
VR, for one, is going to (eventually) have insane resolution and performance
requirements: a flat 90 to 120fps with perhaps 4K resolution _for each eye_.

Even my regular 1080 has trouble with a few current-gen games.

~~~
marvin
It doesn't have to be that bad. If they can get foveated rendering to work
(rendering only the part you're looking at in the highest resolution), it will
dramatically cut the performance requirements.

------
stcredzero
Here's how a VR enthusiast should think about this news: make an estimate of
when 90Hz 1440p VR can be supported by the average laptop. I think that would
be a good benchmark for roughly a year before widespread adoption of VR/AR by
the general public.
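
The raw pixel-throughput arithmetic behind that estimate (illustrative only,
ignoring the extra supersampling needed for lens distortion):

    w, h, eyes, hz = 2560, 1440, 2, 90
    print(w * h * eyes * hz / 1e9)   # ~0.66 Gpix/s for 1440p/90Hz VR
    print(1920 * 1080 * 60 / 1e9)    # ~0.12 Gpix/s for a 1080p/60 screen
    # ~5x the shading work of a typical laptop gaming target.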

------
archagon
I wonder how long it'll take to get these cards into a "Nano"/ITX/small form
factor? Is it even going to be possible after a certain point? IMO, it's the
ideal size for eGPU builds, but so far, you can only go as high as a 1080.

(Here's my hacky 1080 eGPU build over TB2:
[http://archagon.net/blog/2018/07/25/egpu-redux/](http://archagon.net/blog/2018/07/25/egpu-redux/)
And just to assuage any fears, there are attractive, pre-built options of the
same size for TB3! I had to go this route because TB3->TB2 adapters don't work
with my model of laptop.)

------
audunw
I wish there was more information about the acceleration structures and how
they're generated.

When I worked on hardware raytracing, my conclusion was that the actual
tracing was not the difficult part. That can be very well optimized, requires
very little silicon to do and is only really limited by memory bandwidth.

The difficult part for real-time raytracing of animated scenes is generating
efficient acceleration structures fast enough.

It makes sense that the API for this is opaque, but it would still be nice to
know what goes on behind the scenes.
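
For readers unfamiliar with the problem: the acceleration structure is
typically a bounding volume hierarchy, and a toy median-split build looks like
the sketch below. The hard part being described is doing this (or refitting
it) every frame for animated geometry.

    def build_bvh(boxes, axis=0):
        """boxes: list of (min_xyz, max_xyz) tuples -> tree of dicts."""
        if len(boxes) <= 2:
            return {"leaf": boxes}
        # Sort by centroid along the current axis, split at the median.
        boxes = sorted(boxes, key=lambda b: b[0][axis] + b[1][axis])
        mid = len(boxes) // 2
        child_axis = (axis + 1) % 3
        return {
            "bounds": (tuple(min(b[0][i] for b in boxes) for i in range(3)),
                       tuple(max(b[1][i] for b in boxes) for i in range(3))),
            "left": build_bvh(boxes[:mid], child_axis),
            "right": build_bvh(boxes[mid:], child_axis),
        }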

------
baybal2
>RTX 2080 draws up to 215 watts of power.

It actually went up over baseline 1080. 175 vs 215. And so did the 2070, it
also eats more than 1080 while using a more modern litho.

------
gigatexal
I think the RTX 2080 Ti FE might be the first card worth the $1,200. I'd get
one and probably not get another for 3 years.

------
MikeLui
Can someone point me towards the deep learning tech they reference for
improving ray tracing performance?

~~~
Svoka
They're using deep learning as a fancy anti-aliasing mechanism, to fill in
gaps the GPU didn't have time to properly ray trace, or to upscale parts of
images. From what I've seen, the ML there is mostly for AA.

~~~
the8472
You mean denoising? In real-time ray tracing you're generally short on samples
and need to interpolate between them / smooth over the shot noise.

~~~
magoghm
In their SIGGRAPH presentation last week they mentioned using AI for both
denoising and antialiasing (but they didn't provide any more details about
it).

~~~
xxs
It's likely mostly for denoising; I highly doubt it has enough power to
perform both denoising and antialiasing. The ray tracing presented would
sample sparsely and then use the model to fill the gaps.

~~~
mattnewport
You're effectively training the network for both when training for denoising,
as your ground-truth images are both denoised and anti-aliased. With ray
tracing, both problems are brute-forced by the same approach of throwing more
samples per pixel at the problem.
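
A sketch of that training setup, purely illustrative (Nvidia hasn't published
their pipeline): cheap 1-spp frames in, expensive many-spp frames as ground
truth, so one L2 loss teaches denoising and AA together.

    import tensorflow as tf
    
    noisy = tf.placeholder(tf.float32, [None, 256, 256, 3])  # 1 spp render
    clean = tf.placeholder(tf.float32, [None, 256, 256, 3])  # e.g. 4096 spp
    
    x = tf.layers.conv2d(noisy, 32, 3, padding="same", activation=tf.nn.relu)
    x = tf.layers.conv2d(x, 32, 3, padding="same", activation=tf.nn.relu)
    denoised = tf.layers.conv2d(x, 3, 3, padding="same")
    
    loss = tf.reduce_mean(tf.square(denoised - clean))  # L2 to ground truth
    train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)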

------
Lio
Could anyone comment on the likely state of Linux drivers for these new cards?

I've previously been recommended, anecdotally, to use either AMD or Intel
GPUs, even when outperformed by Nvidia, because their open-source drivers are
usually easier to deal with.

Is this still the case?

My own use case would be mainly for Blender.

~~~
gameswithgo
Do you have an ethical requirement to only use open source drivers? Because
there are official closed source nvidia drivers for linux. That is what I have
always used and they have worked well. Games work, dual monitors with weird
aspect ratios work, etc.

~~~
Lio
I don't personally, not for this anyway. It's more about community support.

If the closed-source Linux drivers are up to date and work well, that would
probably be enough for me.

------
latchkey
Nvidia has also pulled out of crypto mining.
[https://cryptocoinspy.com/nvidia-pulls-out-of-crypto-
mining-...](https://cryptocoinspy.com/nvidia-pulls-out-of-crypto-mining-
sector/)

------
jamesblonde
Stats that matter for deep learning: 616 GB/s memory bandwidth, 11GB memory.
That's about a 27% improvement over the 1080 Ti, assuming memory bandwidth is
the bottleneck. 11GB is too low - a business decision to segment the gaming
and deep learning markets.
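
Spelled out (both figures follow from bus width x memory speed):

    bw_2080ti = 352 / 8 * 14   # 352-bit bus @ 14 Gbps -> 616 GB/s
    bw_1080ti = 352 / 8 * 11   # 352-bit bus @ 11 Gbps -> 484 GB/s
    print("+{:.0f}%".format((bw_2080ti / bw_1080ti - 1) * 100))  # +27%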

------
archi42
It was about time... The old series is, well, pretty old by now, but (to me)
it never really seemed worth the upgrade from my R9 290 (used for Vive/VR
gaming).

Maybe mid-2019 will see me dropping some money on a new GPU, once prices have
settled.

~~~
mclehman
I'm right there with you. I'm still running a 290 as well, and haven't really
been able to justify an upgrade, especially given I really only play Overwatch
and Rocket League.

------
fizixer
Can anyone suggest a 4 GPU build for a non-GPU budget of USD 1500 to 2000
(i.e., not including the cost of the 4 GPUs, which will vary depending on
which GPU is chosen)?

I will add to this the cost of 4 GPUs (like GTX 1080 Ti or higher).

------
flyinfungi
Any clue how this thing will compare to a 1080 Ti?

~~~
bayesian_horse
In terms of real time raytracing the RTX will blow the GTXs out of the
water...

For anything else? Remains to be seen.

~~~
smacktoward
This is my big question too. It's pretty standard marketing: if your new
product isn't impressively better than the old one on commonly used
benchmarks, you invent a new benchmark where your new product _is_
impressively better. I wonder how much of the "look how much better these
cards are at ray-tracing!" actually reflects real-world improvements users
will see, and how much is there to create a benchmark that shows a big
improvement over existing GPUs.

~~~
bayesian_horse
I don't think it's that bad. The RTX is a whole new architecture and will have
significantly more cores. The bigger question is probably just how the
performance is compared to the price and power consumption.

Realtime raytracing wasn't feasible before.

------
dostres
While we're at it, has anyone gotten significant training perf improvements
from the Titan V? The best I've seen is 2x a 1080 Ti for 4x the price.

------
Svoka
I just realized that what we call "whales" in the dark corners of mobile
gaming, NVidia calls "enthusiasts".

------
gok
Love the idea of using ConvNets for spatial and temporal resolution
interpolation.

------
holografix
When can we expect GTX 1080 prices to drop, given this new release?!

------
trumped
Those can't be used for crypto mining; otherwise you'd find out that they
aren't worth the money.

------
saudioger
can't wait to not afford these because of bitcoin miners

------
Giorgi
But will it mine?

~~~
Retr0spectrum
I sure hope not.

~~~
xxs
It has GDDR6 - it will be quite decent, albeit overly expensive/inefficient.

------
gregf
I am usually able to keep a card for several generations before I can't get
60fps at 1080p in games anymore. Unless you're a professional gamer I don't
get the appeal of throwing that kind of money at a card.

~~~
trumped
If you have bionic eyes, you need 200fps... Some people also use video cards
for other purposes.

