Nvidia Announces GeForce RTX 2060 (anandtech.com)
106 points by zdw on Jan 7, 2019 | 86 comments



The bigger announcement here is that Nvidia is enabling support for FreeSync and certifying that things work correctly for certain FreeSync monitors. That's a big deal. Nvidia syncing previously required their proprietary "G-Sync," which is substantially more expensive and is usually only present on monitors explicitly aimed at gaming.

It's too early to tell whether this means G-Sync is dead and FreeSync is just the new standard, but it's a great day to own a FreeSync monitor.


And it looks like you'll be able to enable it for your monitor even if it's not one of the expensive certified ones.


Have to wait for benchmarks to say anything definitive, but from the specs table it looks like performance is in the ballpark of the 1070. Same TFLOPS, faster memory clock but not as much memory, TDP is 10W higher.

At $350 that sounds like a tough sell over the 1070, which is finally down to $300 after rebate. Unless you're really excited about the raytracing stuff.

I'm underwhelmed for 2.5 years of progress since the 1070 (mid 2016). Just sticking it out with my 970 until it dies, which hopefully won't happen until there's something new worth upgrading to.

Maybe AMD has something more impressive up their sleeves in the mainstream price range.


For games, benchmarks are already out. Computerbase, for example, has their review article online: https://www.computerbase.de/2019-01/nvidia-geforce-rtx-2060-.... For that workload it's at the level of a GTX 1080/RTX 2070 when overclocked, only a bit lower at stock. Pretty good, and clearly faster than the 1070.


Interesting, that does make it look compelling over the 1070.

I wish they'd stop bumping everything further up the price ladder every generation though.


Well, it sits exactly where it has to sit price-wise to counter AMD's Vega 56, which is more expensive, more power-hungry, and a bit slower. And the faster Vega 64 draws even more power and is even more expensive, so it counters that as well.

When or if AMD ever releases a competitive card outside of the mid-performance level, Nvidia will adjust prices; until then their only limit is what customers are willing to pay. It's exactly what Intel did pre-Ryzen.


To be fair, I thought the Vega cards were only more expensive because crypto-miners found out they worked better for mining than the Nvidia cards?

Unless I am mistaken, in which case, ignore this.


Makes me wonder if it's a deliberate strategy. Cryptocurrencies raised the prices of GPUs across the board due to demand, but maybe Nvidia has decided that the wider market has become accustomed to higher base prices and will still bear it, even with the added demand from miners dying off.

I imagine they are probably reluctant to cut margins when their main competitor in this market segment is similarly priced.


That's partly right. Vega was especially expensive during the last crypto bubble, but that is over and the Vega 56 is still rarely under $400.


Yes, launch MSRPs were $400 and $500 for Vega 56 and 64 respectively, and it seems like those cards are just now beginning to sell for around those prices.*

I suppose Nvidia and partners have no reason to compete with their last-gen inventory backlog at a similar price. However, it's nice to see some downward pricing pressure to the high-mid-range. I don't care all that much about gaming, but the chip famine made AMD Vega hardware (and everything else of course) a terrible value, which is sad because it seemed like nice hardware for compute. I also really like how recent AMD hardware Just Works on linux. Who'd have thunk it?

I have been buying used RX 480s/470s on eBay for $80-100, which is great if you don't mind flashing an original BIOS back onto a mining card (is the ROM I downloaded from that Albanian carding forum really the right one? Grits teeth and hits enter). It would also be great, though, to just be able to buy a card the normal way, for a fair price.

Anyway, it's hard to imagine AMD pulling the same trick with Vega that they did with the RX 590, where they release a Vega Turbo OC'd flavor-blasted Thermopile edition. It's already too hot. So maybe they will just have to finally lower the price to less than, or equal to, the price of the RTX 2060.

* https://en.wikipedia.org/wiki/List_of_AMD_graphics_processin...


Yeah. I really want to get a new AMD card (Vega 56), but I just can't justify spending $400 to get one, especially when the 1070 Ti beats it in performance and costs $50 less.


The RTX 2060 has Tensor cores on a mainstream desktop offering, which might be of some interest if you want to play with deep learning at home.

Edit: I'm also sticking with a GTX 970 right now, mainly because I game at 1080p. When I can't get a consistent 1080p@30FPS (though I prefer 1080p@60FPS) then I will consider upgrading. Right now I'm CPU bound, not GPU bound, in any title where I get sub-60FPS.


True, and I think these are also used for their Deep Learning Super-Sampling system, which is another potential difference between the 1070 and 2060. Render at lower resolution and upscale with a specially trained neural network for each game.


Is inference fast enough to not have latency issues?


I assume there's some latency since it would need a fully rendered frame to operate on, but other postprocess AA techniques would suffer the same issue. I'm not a graphics expert, but I think only real supersampling happens during the initial frame render instead of in series afterward, and that's going to tank performance much further than DLSS (in terms of individual frame time rather than latency, but much more significant overall).

The DLSS neural network is trained on renderings with 64x supersampling and tries to apply that quality to a low-resolution image. In practice I wonder if there will be situations where the neural network screws up and makes strange artifacts, but the demos look impressive.
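
For the curious, here is a minimal conceptual sketch of the general idea (DLSS itself is proprietary; the network, layer sizes, and frame dimensions below are invented for illustration, assuming PyTorch): train an ordinary super-resolution network on pairs of cheap low-res renders and expensive 64x-supersampled ground truth, then at game time run only the cheap render plus the network.

    # Conceptual sketch only -- not Nvidia's actual DLSS pipeline.
    import torch
    import torch.nn as nn

    class UpscaleNet(nn.Module):
        """Toy 2x super-resolution network (architecture is made up)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale factor
                nn.PixelShuffle(2),  # fold channels into 2x spatial resolution
            )

        def forward(self, x):
            return self.net(x)

    model = UpscaleNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Training pair: cheap low-res render vs. expensive supersampled render.
    # Random tensors stand in for real game frames here.
    low_res = torch.rand(8, 3, 270, 480)
    target = torch.rand(8, 3, 540, 960)

    opt.zero_grad()
    loss = nn.functional.l1_loss(model(low_res), target)
    loss.backward()
    opt.step()  # at inference time only model(low_res) runs, once per frame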


I'm also sticking to an Nvidia GTX 970 for now, but it would be nice to upgrade at some point. I was hopeful about AMD's offerings, but after trying the Ryzen 2400G APU in my Linux media box I am not going to touch new AMD hardware again with a ten-foot pole any time soon. Their Linux drivers are a nightmare, freezing the display all the time and requiring power cycling. Nvidia works nicely and I don't really mind the bigger price tag.


It's probably done enough damage to their reputation in your estimation for this not to matter, but please note these Linux problems are specific to the Raven Ridge APUs [1]. That's not the case with a discrete AMD GPU, whose open-source drivers are by many accounts absolutely superb these days.

It's a shame. I want a 2400G to replace my ageing A10 Trinity media box; I'll probably give it a try fairly soon, expecting the latest kernels have this solved by now...

[1] https://www.phoronix.com/forums/forum/linux-graphics-x-org-d...


Thanks for this information. I was planning a GPU passthrough setup using this: the 2400G APU for the Linux display and an Nvidia GTX 970 for passthrough. This now gives me pause.


Raven Ridge APUs in particular are terrible; every other piece of AMD hardware has dramatically better Linux support than Nvidia's GPUs.


I can't speak for Linux currently, as I stopped running it in favor of the Windows Subsystem for Linux, but I haven't had any of those problems on either of my AMD-GPU-powered machines on Windows (one running integrated Vega 8, the other an RX 580 4GB).

It definitely sounds like a driver compatibility issue. AMD cards have had... issues on Linux for a long while now. I remember running a laptop with a dedicated AMD card about 10 years ago that wouldn't work on Linux without installing a third-party driver, because their switchable graphics implementation was god-awful.


While I love Linux Mint and Antergos (for rolling), I'm a Windows-based C# dev, and WSL would be my choice as well. Other than increased hardware compatibility the big win is battery life when mobile. Not really a factor for me as I develop on a Ryzen 2700X, but Windows 10 with Edge (soon to be Chromium based) or macOS with Safari is as good as it gets for battery life. I doubt a primarily server OS like Linux will ever match Windows or macOS in this regard, though I certainly wouldn't mind seeing it happen.


Yeah, battery life is a real concern for me too when mobile on my Ryzen laptop. It doesn't have a very large battery. lol


This is exactly where I'm at, my 970 handles everything I've needed so far.

I'm interested in exploring VR dev, but am waiting until I have a new GPU, and can't really justify the upgrade yet. I've had my eye on the recent Radeon cards, but I'm not sold on the reliability yet compared to Nvidia.

I do like that AMD at least seems to be trying to play nice with the open source drivers and kernel infrastructure, but in practice Nvidia still seems to be the most straightforward solution, so... meh.


> I'm interested in exploring VR dev, but am waiting until I have a new GPU

Don't wait. Iron out the gameplay with "low" graphics. 970 is just fine for that.


Can a 970 even support VR hardware? I was under the impression it could not.


The GTX 970 used to be the minimum recommended card for VR. I've played all of my VR gaming on one. Perhaps gains in resolution on new equipment have left it behind, but it should still be fine for a first-generation HTC Vive.

I've also run VR games using Windows Mixed Reality headsets, which have a higher resolution than the first-generation HTC Vive, and that also seemed just fine. And as of several months back, the SteamVR performance test still gave my GTX 970 a pass.

YouTuber Blunty once ran his first-gen VR headset off a GTX 1050 Ti laptop and said it was just fine "in a pinch." (I wouldn't recommend it for the very VR-sickness-prone, however.)


At one point it was the minimum recommended by Oculus, and may have been for the Vive too, I can't remember. But I think what they're saying is you can conceivably work on VR/AR without having world-beating hardware (it really only becomes an issue when you want to test, I would imagine).

But maybe after getting some VR development under one's belt with a 970, that will provide the incentive to do the GPU upgrade if things progress accordingly.


There's also the option of running a pair of 970s in SLI. That was my solution a couple of years ago, when it beat the 980 Ti in both performance and price.


Yeah but then you're working on VR development with SLIed cards just as Nvidia's newer hardware all drops that capability. I've heard SLI can be a headache to support in rendering, so that might be more trouble than it's worth to develop for.


Similar for me. I had to get the 1050 Ti for VR (VRChat optimization issues) and that should suffice for some time.


The 970 also works smoothly on Ubuntu with Nouveau.


One thing of note about the 20xx cards that doesn't get talked up much is that they have an updated version of the NVENC video encoder that produces much better quality than the 10xx cards' at the same bitrate. A big deal if you're aiming to do realtime video capture or live streaming.


Wonder how NVENC's H.265 compares to x265 in terms of quality... on my 1080 it took about 50% more file size to reach the same quality (landing between x264 and x265 sizes respectively), but it was oh so much faster to encode.
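
If you want to eyeball that comparison yourself, ffmpeg exposes both encoders. A rough sketch (hevc_nvenc and libx265 are real ffmpeg encoder names; the bitrate, CRF, and filenames are arbitrary placeholders):

    # Hardware HEVC via NVENC (fast, needs an Nvidia GPU):
    ffmpeg -i input.mkv -c:v hevc_nvenc -b:v 6M nvenc.mkv
    # Software HEVC via x265 (slow, usually smaller at equal quality):
    ffmpeg -i input.mkv -c:v libx265 -crf 23 x265.mkv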


How do you take advantage of that while streaming?


They announced that OBS "just works" with NVENC.


That is the whole point of pricing it so high. They want to move the 1000-series stock, of which they are sitting on a huge pile.


> Have to wait for benchmarks to say anything definitive

ask and ye shall receive https://www.anandtech.com/show/13762/nvidia-geforce-rtx-2060...


See also https://www.anandtech.com/bench/product/2372?vs=2147

Seems like it's ~10% faster and 10% more expensive. So not a completely unreasonable deal, imho.


>I'm underwhelmed for 2.5 years of progress since the 1070 (mid 2016).

Can't blame Nvidia or AMD. GPU performance scales very well with transistor count. As long as the next node is ready and fits in the GPU makers' pricing equation, you will get more transistors and hence more performance. And this is now largely down to TSMC.

On the current schedule it may take until mid/late 2021 before a 7nm 2060-equivalent drops the price by half.


And it's only 6GB of memory :-(


Well that's only a problem if 6GB isn't enough, and for the time being it's more than plenty.


I'm hitting the limits from time to time when training relatively small models.


Usually the new xx60s are great because you get the previous xx70's performance at the lower xx60 price. This time you're getting the previous xx70's performance for the previous xx70's price... more than 2 years later. Weak!


Hey, it worked for Apple!


I think you're being sarcastic, but it actually has, according to their revenue. They've hit their targets except for the weakened China market, due to the escalating trade war and volatility there. With the economy booming thanks to the tax cuts and the increased amount of disposable income, the price increases on Apple products haven't seemed to affect them adversely.


Have you checked the stock price lately? $223 to $148 in a few months is no one's definition of good. Going from top company by market cap to getting passed by many of your tech rivals isn't good either. It implies that blindly raising prices on your products to increase profits only works for so long.


All-time highest quarter in the US, this last quarter. All-time highest.


I refuse to buy Nvidia cards until they stop blocking drivers in VMs with passed through video cards. Yes I know, you can bypass the dreaded code 43, it's more on principle.


Interesting, never heard of this until now. A KVM maintainer seems to confirm this behavior too [1], albeit 3 years ago. Some newer anecdotes are around but it isn't clear if this is still happening [2][3]. (I used to virtualize the GPU via RemoteFx in Hyper-V many years ago and didn't run into this then. But now I'll have to give it another shot!)

[1] https://www.reddit.com/r/linux/comments/2twq7q/nvidia_appare...

[2] https://www.reddit.com/r/VFIO/comments/5sh41p/any_other_reas...

[3] https://www.reddit.com/r/HyperV/comments/5u2ge2/nvidia_consu...


It's definitely still happening; however, there are workarounds that bypass the driver's checks by changing hypervisor settings. One particularly insidious "feature" is that the Quadro series of cards, which are ostensibly meant for this use case, will disable some features if a VM is detected (in my usage, DisplayPort 1.2 support for 4K@60Hz was not enabled until I enabled the workarounds).
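
For anyone hitting this: the commonly cited workaround is to hide the hypervisor from the guest. A sketch assuming a libvirt/KVM setup (these domain-XML elements are real libvirt features; the vendor_id string is an arbitrary placeholder):

    <!-- In the libvirt domain XML: spoof the Hyper-V vendor ID, hide KVM -->
    <features>
      <hyperv>
        <vendor_id state='on' value='whatever'/>
      </hyperv>
      <kvm>
        <hidden state='on'/>
      </kvm>
    </features>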


The workarounds you mentioned didn't work at all for me (and reportedly for some other people too), on my GTX 680 or 1050m. Just a warning for those who may be considering buying an Nvidia card and were given hope by exrook's comment...


I can add another anecdote in favor of this happening.

Tried it ~6 months ago with an old GTX 680 (had to update the video BIOS to get it to pass through to the VM, then got Code 43), and about a year ago on a laptop with a GTX 1050m (detected by the VM, but Code 43).

Had I known ahead of time that this would happen, I would not have bought the 680 or this laptop.


Do ATI cards allow passthrough?


Yes.


Nvidia is sticking to the strategy of releasing the same GPU with RTX bolted on under a lower number and charging the same price for it... the 2060 ($350) is very close to the 1070 ($300), the 2070 to the 1080, the 2080 ($800) to the 1080 Ti ($750). Edit: not the exact same, obviously, but very similar in terms of benchmarks.

Doesn't seem like progress to me. With RTX being underpowered on current-gen cards, there's no real benefit in being an early adopter by buying this generation of graphics cards if you count the money you spend at all. Just find a used 1080 Ti, I say.

I think Nvidia is trying to pull an Apple and move the "premium" price tier up 500 points, while curiously nobody in the press is calling them out on it.


Would you have been ok with them calling this a 2080 while keeping a $350 price tag?

Today, I see that a new 1070 Ti goes for $400+. This new RTX 2060 has the same performance for $350.

In other words: if you ignore the name of the product, the new product line gives you better performance and more features (RTX, tensor cores) than yesterday's product, for the same price.

Is your only beef with the naming of the product? Why does the name matter?


The problem, in my opinion, is that in the past they would effectively shift performance down one SKU each generation, and at the top you could get a 30% performance upgrade for the same kind of money, like the 980 Ti to the 1080 Ti for example. Now you get roughly the same performance for roughly the same money.

It doesn't matter what they call it if the performance per dollar is what it is. I compare names because names meant a set price point up through the 10XX generation.

If you've got no use for RTX and tensor cores, those cards are very similar. RTX is available in only one game, 3 months after launch, and what if I just don't want to be an early adopter? Tensor cores, yeah, if you're going to earn money or otherwise have your life improved by the additional hardware in Turing, sure, it's a no-brainer. But I feel like that's not the case for many gamers.

Disclaimer: I'm not saying I am entitled to progress. I'm pointing out that it has slowed. I feel that this is similar to the lack of competition Intel was facing. At least I'm thankful to the Nvidia gods for making the higher-performance card available at all, I guess...


> The problem, in my opinion, is that in the past they would effectively shift performance down one SKU each generation, and at the top you could get a 30% performance upgrade for the same kind of money, like the 980 Ti to the 1080 Ti for example. Now you get roughly the same performance for roughly the same money.

That's the exact problem with this generation. Before, the top end (980 Ti) became the midrange (1070) for substantially less money. Now they just pushed the names up a bracket, kept prices (roughly) the same, and justified it by inventing a new "top end".


Yep, you get me. And this saddens me doubly, because we saw the same kind of "upgrade" from $700 to $1000 in flagship phones not long ago, and everybody was just as okay with it there.

Maybe it's the tariffs, or inflation, or some other thing that totally justifies this kind of sudden jump, but somehow few press outlets acknowledge that this jump exists at all; for them it's business as usual, it seems.

My only hope is that consumers will react with falling sales. I'm sticking to my 1080 Ti for sure until the next gen.


> If you've got no use for RTX and tensor cores, those cards are very similar.

But, again, they are not. The 2060 handily outperforms the 1070 Ti for $50 to $100 less. And that's all for existing games that don't use RTX or DLSS or whatever, so feel free to ignore those additional freebies.

> I'm pointing out that it has slowed.

Yes. And I think you better get used to that. It's a consequence of the fact that we are near the end of Moore's law.

> At least I'm thankful to the Nvidia gods for making the higher-performance card available at all, I guess...

In this case, we're looking at more performance for less dollars, both compared to Nvidia's previous product and compared to AMD's current product.

Your only issue is the name and marketing positioning. I think that's a pretty shallow argument, TBH.


> The 2060 handily outperforms the 1070 Ti

That's not what I'm seeing in the benchmarks.

> I think you better get used to that.

This may be your worldview, but other people are free to think for themselves and I choose to hold it to a higher standard. This is not an attack on your beliefs, though, I'm not telling you to adopt my point of view.

We saw with AMD that it was indeed not Moore's law that led to Intel releasing the same flavor of quad-core desktop chip over and over. And I think we'll see it again, so I'm keeping the pressure on in the meantime.

> Your only issue is the name and marketing positioning.

That is not, in my opinion, correct.


> That's not what I'm seeing in the benchmarks.

Replace “outperforms” with “equal performance” and my argument doesn't change, given the lower price.

In recent years, we have seen a number of cases where Nvidia set perf/$ higher and AMD was forced to follow.

This case is no different, with AMD expected to announce lower Vega 56 prices today.

That’s the exact opposite of what happened with AMD and Intel wrt CPUs.


I really hope AMD can offer some real competition to Nvidia soon, right now NVIDIA is fleecing customers because they have no real competition. $350 for a mid-tier GPU is shameful.


Nvidia stock has lost half its value since the peak; if they're fleecing customers, they aren't doing a very good job of it.


Personally I think the price explosion had two groups of people driving it. The first group, which I am in, believes Nvidia, with its proprietary CUDA, will continue to grow its datacenter presence rapidly and get some big contracts for fielded ML hardware. The second was looking for crypto exposure. The Ethereum bubble has mostly burst, and it isn't profitable (probably hasn't been for a long time in most places) to buy Nvidia cards for crypto mining. That and the trade war with China have taken half of the stock's overheated value off. It's still up quite a bit from ~$30 in 2016, and I think that reflects its real edge in ML hardware as well as some modest, healthy growth in consumer gaming. Previously I assumed all the growth was from investors thinking like I was about their AI prospects, and missed that I had picked a winner based on the crypto craze. I think a lot of smart money realized that and got out.

Disclaimer: was very long nvidia, sold a lot in the 200s and hold a lot less these days.


It's the Apple syndrome: when you increase product prices too much, people stop buying and your stock goes down.

Also don't forget the crypto boom, it would've been impossible to keep that level up.


I don't think the peak stock price is indicative of anything. It's been all over the place with bitcoin and retail investors causing havoc and pumping up the price.


Their "mid-tier" GPU has a bigger die, and die size directly effects pricing. And the absolute most important point, they have 90+ days of GPU Stocks sitting on shelfs and Channels, that is a whole quarter worth of stocks. They have their RTX Order placed early with TSMC, so they will need to price both approximately to move them.

Just to give some context, having 90 days of stock sitting around during any normal quarter is already bad; having it sitting there before your next-gen product launch is even worse. (Hence why the stock has been hammered.)

We will see in their next report whether Christmas has helped lower that stockpile.


For $350 you can get an RTX 2060 or a slightly slower Vega 56.

So AMD is fleecing customers too?


The Vega 56 was barely competitive when it was released in 2017; it's overpriced simply because AMD can't compete (let's hope for interesting CES news though).

The fact that the clear market and product leader is releasing a new product barely competitive with an old, mediocre competitor product is troubling. The only reason they can increase the price of the *60 series by $100 this generation is that they don't have any real competition, something they're taking advantage of to the detriment of the GPU market.


See my reply somewhere else.

I simply do not understand why people are so hung up on the name and positioning of a product.

If these products were evaluated on performance, price, features, and power consumption, with the name blacked out, the currently announced GPU would be the undisputed winner in its price category.

For the consumer, that’s ultimately all that matters.


I'm not very convinced.

Nvidia has been very commendable in breaking new ground with the RTX technologies, but it seems that ray tracing at least has a huge performance impact, so I wonder how this card will fare, given its (relatively) low-end position.

Moreover, with the RTX series, Nvidia has "pulled an Intel": they raised the TDP for each model so that they look good on generic (non-RTX) benchmarks, but they actually have a negligible performance-per-watt gain compared to the previous generation.

GTX x60 cards traditionally had fixed power requirements (6-pin); this doesn't apply anymore.

My suspicion is that GTX 11x0 cards (1160, 1150) are not yet showing, because Nvidia hasn't figured out a way to show a performance per watt improvement yet.


> My suspicion is that GTX 11x0 cards (1160, 1150) are not yet showing, because Nvidia hasn't figured out a way to show a performance per watt improvement yet.

My guess is slightly different: they know that the RTX technology isn't really there yet, and models that were just faster versions of the current generation would probably kill any RTX sales.

Putting RTX on a 2060 is odd to me. The 2080 can't really run ray tracing at usable performance levels, I don't even want to know what it would look like on a 2060. There might be some more rounds of improvements coming to software, but are those enough to make it playable two tiers down?


Perf/watt on traditional raster is about 15% higher at 4K. Not massive, but not bad considering there wasn't a big node shrink either.

https://tpucdn.com/reviews/NVIDIA/GeForce_RTX_2080_Ti_Founde...


> Nvidia has been very commendable by breaking new ground with the RTX technologies, but it seems that at least ray tracing has a huge performance impact, therefore, I wonder how this card will fare, given its (relatively) low-level position.

Phrased more simply: "Is anyone going to want to use Raytracing on this card?"

I'd expect the answer to be "no."


If only NVIDIA would drop their expensive G-Sync and use FreeSync instead. That would eliminate the strongest reason right now to buy AMD (way larger selection of quality monitors).


Apparently you're a wizard, but don't let that go to your head.

https://www.pcgamer.com/nvidia-brings-g-sync-support-to-free...


How would this compare to my 980 Ti? It's been working pretty well for VR, but I've been wondering if maybe there's some value in upgrading soon.


This will be a little bit faster than the 980 Ti, as the 2060 is at the level of a GTX 1070 Ti, while the 980 Ti is a bit slower than a GTX 1070. But I'd wait a generation longer; the jump is not worth it (unless you get a great price when selling the 980 Ti).


https://www.anandtech.com/bench/product/2372?vs=2141

I wouldn't bother unless you care about power consumption


I wonder if low sales numbers for the new generation of Nvidia cards will negatively impact current and future development of the ray tracing tech.


Doubtful, since the ideas behind ray tracing are very mature. Advances in real-time ray tracing will take the form "x advancement in offline ray tracing from 30 years ago can now be done in real time."


It's still ridiculously huge.


It's not much different from any other Nvidia card: dual fans, dual slot. It's probably smaller than some of the monster Radeon cards AMD used to make. They make smaller versions of some cards, but what you save in volume you lose in cooling.


That is true. To be honest, I'm still salty about some older OEM mobos I had to upgrade, which decided to crowd the PCI-e slots with capacitors. All that did was make the slots impossible to use!


I'm still waiting for AMD's implementation of hardware real-time ray tracing. I don't trust Nvidia anymore when it comes to hardware quality, and the price is just ridiculous.



