Nvidia announces RTX 2000 GPU series with ‘6x more performance’ and ray-tracing (theverge.com)
445 points by albertzeyer 6 months ago | 376 comments



Hmm, a 24%-42% performance increase (14/16 TFLOPS vs 11.3 TFLOPS) for a 70-90% price increase... And prices were already inflated by crypto, which is now collapsing. Not sure who is the target market for this tech honestly. Still only 11GB RAM, even if 50% faster, making it a nonsense purchase for Deep Learning enthusiasts (state-of-the-art models are already larger).
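
Rough math behind those percentages, assuming the quoted 14/16 TFLOPS figures are accurate (peak numbers only, not actual game performance):

  # Back-of-the-envelope check of the 24%-42% figure above.
  gtx_1080_ti = 11.3  # TFLOPS, FP32
  rtx_2080 = 14.0     # lower bound assumed above
  rtx_2080_ti = 16.0  # upper bound assumed above

  print(round((rtx_2080 / gtx_1080_ti - 1) * 100))     # ~24
  print(round((rtx_2080_ti / gtx_1080_ti - 1) * 100))  # ~42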

Unless somebody invented RTX-based coin of course, then this is the minimal price...


>Not sure who is the target market for this tech honestly.

These are gaming GPUs being announced at a games conference to an audience of gamers and games journalists. The focus of Huang's talk was how accelerated ray tracing will improve the graphical fidelity of games.

GPU compute is a spin-off from gaming. Gamers remain the primary market. They generate the volume that allows Nvidia to sell a supercomputer on a card for under $1000. If you want a professional product, Nvidia are more than happy to sell you a Tesla or a Quadro at a professional price point.


The point was different - it is way too expensive for regular gamers, the 2080Ti likely won't be able to do 4k@60Hz (just like the 1080Ti couldn't), and 10 games featuring RTX in the near future is not a sufficient draw, especially when some of the effects now look subjectively worse with RTX ON (see those shadows from multiple lights above the dancing people in Jensen's demo). So the question remains - who is the real target audience that will actually buy those cards at these prices? Did NVidia turn into Apple and make RTX its own iPhone X?

Waiting this generation out until RTX is more widespread and tested, and going for the next 7nm generation - hopefully with AMD having a proper GPU to compete - seems like a better strategy for most gamers out there.


>it is way too expensive for regular gamers

The 80 series has always been a low-volume, high-margin halo product within Nvidia's gaming range. It's dirt cheap and high-volume compared to Quadro, but top-of-the-range for gaming. Cryptomania has revealed to Nvidia that they probably priced the 1080 too low at launch - many gamers were in fact willing to pay substantially inflated prices for The Best GPU.

If the mass market decides to wait for the RTX 1060 or 1050, that's fine with Nvidia, as they face no real competition from AMD at the moment. It's very much in Nvidia's interests to make the most of their market dominance.


The 70 series is traditionally pretty popular with gamers, though. At $600, I mean... that's at the point where just the graphics card costs more than one of the 4K consoles Microsoft or Sony is putting out. Obviously a PC game is going to look nicer, but someone has to start thinking about that comparison.


"someone has to START thinking about that comparison."

Sorry, but this comment is really kind of hilarious.

The "PC vs. Console" debate is something that almost predates the Internet and it has generated countless forum wars...

A high-end PC has always been the more powerful and more expensive gaming machine, basically since the first 3dfx cards in the late 90s. Some people are OK with that; others prefer consoles as a perfectly acceptable alternative.


That's not really what I meant. Obviously some people have drawn their lines in the sand and will never consider switching. I don't think that's everyone. I play games on both console and PC, as I imagine do many others. If the price is too unreasonable, or the PC version doesn't work right without a bunch of configuration, or whatever, I can't be bothered with it and will just go to the console version of something.


> I mean... that's at the point where just the graphics card is more than one of the 4K consoles Microsoft or Sony is putting out.

The console might output 4K but that doesn't mean the GPU inside can handle higher settings than a 1060. The $600 GPU is irrelevant to that comparison.


> The console might output 4K but that doesn't mean the GPU inside can handle higher settings than a 1060

Which is kinda irrelevant too, since console games are highly optimized for exactly one graphics card and the rest of the setup. You get all the details hardware is capable of smoothly, nothing more or less. No fiddling with tons of settings that most people have no clue about.

I don't get this often-used argument that games look better on PC than on consoles. Yes, some UI gimmicks look better and textures have higher res, but after 30 minutes of playing it really doesn't matter at all; quality of gameplay and smoothness of experience is king. Of course if one pours 5x the amount of money into PC that would be spent on console, there needs to be some inner justification. But it still doesn't make sense to me.

This is the view of a person who plays only on PC and has never had a console.


If you set it to medium you should get a smooth experience. You don't have to do anything I'd call "fiddling", and being optimized for a specific GPU is overrated (and not even true with the current consoles). Especially when you have a much better CPU.

> Of course if one pours 5x the amount of money into PC that would be spent on console, there needs to be some inner justification.

You can get a prettier and smoother experience if you do that and don't put the settings on super-ultra.

But also, if you're already going to have a computer, popping in a better-than-console GPU is cheaper than a console.


> You get all the details hardware is capable of smoothly, nothing more or less.

The "smoothly" part is far from guaranteed. cough Dying Light cough

> Of course if one pours 5x the amount of money into PC that would be spent on console, there needs to be some inner justification.

IIRC PC game sales are far juicier than their console counterparts, especially if you don't live in 1st world countries.


To the extent that's true it kind of works against your argument, doesn't it? I doubt that PC sales look better in India because Indians all have top-of-the-line Alienware rigs.


I don't think it does, actually.

The top of the line rigs are what you need to play new titles on Ultra settings, sold at $50-60+

With hardware comparable in pricing to what you'd find in a console (or using something that doesn't make much sense to mine with, like a GTX 780Ti), you can easily play a 3-5 year-old game at 1/2 to 1/4 of its original price, which might be reduced further by 50% to 90% during a Steam sale.


But it does open up the system's library of exclusive titles, which makes it seem compelling to someone considering a video card purchase who already has an older one that does OK with games.


I think the cryptomania revealed that people (young people, gamers who got into cryptocurrencies) could earn a few bucks back on their investment, some of them using mom 'n pop's electricity for that purpose. If they had to pay that back at all, it was likely at 0% interest.


You're deviating from the point. It's irrelevant to the discussion who the crypto people are and where or how they got the money.


> it is way too expensive for regular gamers

Better cancel my pre-order then...

> the 2080Ti likely won't be able to do 4k@60Hz (just like the 1080Ti couldn't)

My 1080ti and 4k monitor beg to differ.


> Better cancel my pre-order then...

I mean, you're not countering his point in any way. He didn't say nobody would buy it, but it's a simple fact that most people can't afford or justify a thousand-dollar GPU.

The vast majority of people buying Nvidia GPUs in the 10xx generation were going for 1060s and 1070s.


Yet, going by a scan of people on the train last time I caught it, heaps of people seem to find money for iPhone Xs at almost the same price point.

If you're sufficiently dedicated, even with limited funds, going for every second or third iteration of halo products can be a great strategy. That way when you get it you'll have the absolute best there is, and a couple of years down the track it'll still be great, but maybe it won't quite do Ultra anymore on the latest titles.

The 1080Ti transformed my experience of my X34 Predator, which does 3440x1440 @95Hz (it even paid for itself through a timely sale of mining profits). I certainly wouldn't mind the new card, but I'll wait for the next one after that, minimum.


Don't most people get expensive phones because of subsidies from carriers? Or at least, they pay monthly installments for these devices, through data plans (basically).

Do people really take out loans to get super expensive video cards?


> Do people really take out loans to get super expensive video cards?

Sure. Either by putting it on a credit card or by using some sort of in-store financing.


They find money, but it doesn't mean they can _afford_ it...


People seem to be missing something in this particular point of the conversation. It's not a function of absolute price. It's a function of price-per-performance. Sure, a lot of the crowd here can afford the 1080 or the Titan, but the bang-for-buck favors the lower-end cards.


And you also didn't.

'most people can't afford or justify'. Come on. People buy cars and other stuff. Someone working full-time and buying a 1k cheaper car can already afford and justify a 1k graphics card.


Video cards tend to come from the entertainment part of the budget; cars are totally different.

Cars also last way longer than a video card, and they have longer warranty periods (in the EU video cards have 2 years, cars usually 5-7).

Cars tend to be purchased on lease due to their higher cost and actual necessity, so saving 1k on a car usually doesn't translate into 1k in cash.


We are already talking about a small percentage of people who wanna buy a graphics card.

For those people, it is easily justifiable to spend less on a car, a holiday or rent and instead have a nicer gaming rig. If you spend a lot of time playing games, why not?

Modern technology is way cheaper than the previous/old status symbols.

I'm thinking about buying a car, and one simple calculation is still what else I could do with that money.

And yes, in Munich, where I live right now, there are enough people with a car who could use public transport but don't use it.

The target group of a 1k graphics card is not someone who can barely afford the car he/she needs every day and would not be able to earn anything if the car breaks down...


Really depends on the game and how much you value high refresh rates.

There's also the VR factor: as much as Nvidia/AMD like to label their products "VR ready", we are, imho, a ways off from actually being there.

At least if the goal is gonna be something like two displays at 4k@90+Hz and upwards.


what kind of FPS can you get in Doom, all features on, at 4k?

My 1070 would do it, but the fps was garbage.


I get 60 (limited by my monitor) on a 1080ti, all settings maxed out at 4K.


Nearly the same story for me on a 980 ti. I have to turn some settings down, but I get 60fps @ UHD commonly.


Try uncapping your framerate; the game will let you run higher than your monitor's limit.


What CPU do you have? My 1070 plays doom at 4k no problem.


It is probably just that we have different standards of what a good fps is.

For me, if it's not at 60fps or more ALL the time, I'm angry. I grew up playing Quake and Unreal Tournament pretty seriously.

For a more casual game like skyrim it might be fine for me.


If you played seriously then you knew enough to turn the settings lower, not higher.

The last thing you want is having your field of view obscured by colorful explosions, bloom, debris, etc. when your opponent has a crystal clear vision on you.


Yes, I am well aware of that.

But that is a completely orthogonal point to the question of whether Doom runs well at 4k with all features on. Just because I am asking that question does not mean I would play deathmatch that way. But I might indeed go through the single-player campaign that way.

Indeed, I didn't play Doom at 4k at all because, as I said, it felt like garbage at 4k, no matter what settings, on a 1070.


> it is way too expensive for regular gamers

Gamers aren't kids anymore. Amongst my friends who game, most are nearly 30 and working excellent jobs in software.


Doesn't matter whether they're "kids" anymore or not; there's a reason AMD focused hard on the $200-300 range for graphics cards - because that's where most buyers' budgets are. There are people who spend more, but even many enthusiasts are shopping in the $400-500 segment for a card to support high-refresh-rate gaming or higher resolutions like 1440p; the number of people who blow $800+ on a GPU like a 1080 Ti are few and far between in comparison.


> the number of people who blow $800+ on a GPU like a 1080 Ti are few and far between in comparison.

They always have been. Still doesn't stop those who can from buying them.


Certainly not, but the infuriating part is that historically the performance of those cards trickles down to the lower tiers at more reasonable prices as the generations go by. The GTX 1070 beat the GTX 980 Ti at an MSRP $200 lower just one generation later; meanwhile, at least going by pure TFLOPS numbers, the RTX 2080 is less powerful than the GTX 1080 Ti while costing around the same.

One would be forgiven for expecting roughly GTX 1080 Ti performance in the RTX 2070 at around $449-499 USD.


This all sounds like normal r&d and market forces. New stuff is low volume and premium prices. Once it becomes more common and more production lines are switched, the prices fall and the features get included into other models. This applies to virtually every product.

Or did you want to highlight something else I missed?


The new products are launching at the same price point that similarly specced parts from the previous generation have been selling at. "Low volume" doesn't really apply when you're talking silicon manufacturing; when you spend millions of dollars to make a mask you want to get ROI on it quickly - the GTX 1070 sold for over $200 less than the GTX 980 Ti at launch, for example.

When you sell a product that actually has less compute performance (the RTX 2080) at the same price point as the higher-end part from the last generation (GTX 1080 Ti), something has gone horribly wrong. A lot of this is likely due to the Tensor/RT units on the new GPUs taking up die space without an appropriate process shrink to make up for it, but that's all the more reason these are REALLY unappealing options for anyone outside the top-end enthusiast segment (the GTX 1070 is the most popular enthusiast card this generation, because even enthusiasts have budgets - usually the $400-500 range for GPUs).

tl;dr: The prices here make no sense - cards with similar or less performance selling at the same price point as the previous gen, just with raytracing support added on top (so at least it won't net you WORSE performance when this new functionality is utilized). I don't know who Nvidia thinks is going to buy these from a gaming perspective.


Lots of people.

The hype around "ti" is unreal. Now they're changing it to "RTX" and "ti". /shock /awe /s

To me, it's pretty clear NVIDIA is cashing out.

Have you not noticed that the market slaps the word "gaming" on commodity hardware along with a bunch of Christmas lights, and people happily pay a premium for it?

Gamers aren't the brightest bunch, and $1000 is the right price point when people are gladly dropping that on a mobile phone now.

Sure, compared to a few years ago I'd agree with you, but this market? This hype? No.


Nvidia has a history of just sitting on their performance lead - see GeForce 8800 vs GeForce 9800.

Even in the initially released marketing material from Nvidia, the 8800 GTX obviously had way better raw specs than the 9800 GTX. It took them a couple of days until they changed the material to compare on performance percentages in different games.

But the 9800 GTX was actually a slower card than the one-year-older 8800 GTX due to lower memory bandwidth and capacity. As such it was competing against one-generation-older mid-range cards like the 8800 GTS 512.


Yeah but you don't start by announcing the cheap card. You announce the expensive one and then lower the price.


Nvidia's market cap is nearly 8x that of AMD. Volume =/= profit.


NVDA also has a significantly higher market share than AMD does right now; that doesn't change the fact that $200-300 is still the most common price point for consumer GPU purchases.

Current Steam user survey results (now that the over-counting issue has been fixed) show the GTX 1060 as the single most popular GPU installed by active Steam users, with a 12.5% share; the GTX 1050 Ti and 1050 take second and third place with ~9.5% and ~6% respectively. That means about 30% of Steam users have a current-gen GPU in the $200-300 price range.

So yes, volume != profit, but consumers obviously trend towards more reasonably priced cards. Cards at the Titan price point that Nvidia is trying to sell the RTX 2080 Ti at are so uncommon that they get lumped into the 'other' category of the Steam survey. And since I highly doubt magic like doing integer operations in tandem with FP32 operations is going to bring that much of a performance improvement to the majority of gaming workloads, combined with the really weak raw numbers of the just-announced cards (fewer TFLOPS on the 2080 than the 1080 Ti selling in the same price bracket), it's obvious Nvidia is really taking the piss with their pricing. You're paying more for raytracing, that's it - and while it's certainly a cool feature, I don't really see gamers caring that much until it becomes usable at that $200-300 price point.


Where are the TFLOPS figures from? I don't think Nvidia has officially released those but I'd be happy to be proven wrong.



Thanks. Not sure where they got it from. The Anandtech article that is linked there does contain some TFLOPS numbers, but I think they derived those from the CUDA core count, so they could well not be accurate.


I guess it's simply twice the number of CUDA cores times the operating frequency, so it's accurate as such, but a lot more goes into the gaming performance of a GPU.
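
For what it's worth, that back-of-the-envelope formula (2 ops per FMA x CUDA cores x boost clock) does reproduce the commonly quoted numbers. The core counts and clocks below are taken from the spec tables further down this thread, and again, peak TFLOPS says little about actual game performance:

  # Peak FP32 throughput ~= 2 (one FMA = 2 ops) * CUDA cores * boost clock
  def peak_tflops(cuda_cores, boost_mhz):
      return 2 * cuda_cores * boost_mhz * 1e6 / 1e12

  print(peak_tflops(3584, 1582))  # GTX 1080 Ti -> ~11.3
  print(peak_tflops(2944, 1710))  # RTX 2080    -> ~10.1
  print(peak_tflops(4352, 1545))  # RTX 2080 Ti -> ~13.4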


I work in software, and I definitely can't spend that much on a GPU; a normal 2080 costs more than my monthly rent. The prices are nuts.


They're really not. Compared to many other popular hobbies, gaming's yearly cost is really low. Things like audio, photography, cars, Warhammer, travelling and winter sports each have yearly costs that make gaming seem cheap as hell.


Football (soccer) is cheap, and so is basketball; traveling can be done on a budget (backpacking, hitchhiking). Board games or card games (not the collectible or tradeable cash-grabbing variety) are also cheap.

There are many expensive hobbies, but also a ton of cheap ones.


I'm gaming at 4k 60fps, ultra everything, in modern games on a single 1080ti right now.


If you turn on extreme anti-aliasing then yeah, it might struggle, but guess what? Anti-aliasing ain't that important on 4k.


I agree with you, aliasing is obviously less noticeable at higher resolutions, simply because the pixels are smaller (or the pixel density is higher, whichever way you want to see it).

It's just common sense but if anyone needs proof, they could look at the pictures here (e.g. look at 8X MSAA 1080p vs only 2X at 4K, yet the latter has less visible aliasing): https://www.extremetech.com/gaming/180402-five-things-to-kno...


Lucky you! Head over to /r/nvidia and read which games just can't cut it on 4k@60 Ultra.


Which ones, though? Tbh I do have a 4k G-Sync monitor, so that does help when fps goes below 60. I find anything from 50-60fps to be smooth; below 50 it starts to get too choppy. It also helps with input lag to run G-Sync @ 58fps. The most recent game I run at ultra is Far Cry 5 - it's a pleasure @ 4k.


It's a graphics card running a computer program.

Did you expect it to run everything forever at an extremely high resolution with 60fps?

Why are Nvidia releasing new cards then?


Ultra is ridiculous and unnecessary. I play 4K60 on an RX 480 with "smart" settings — max textures/meshes, min shader effects, no ambient occlusion, no dynamic reflections, etc.


The only point I'd like to make is that the only reason you "can't do" 4k@60 is that devs decided to tune their effects scaling in such a way that this is the case.

This doesn't affect the argument that you're making. I just think it's actually incredibly absurd to complain as though it's the hardware's fault for not being able to keep up with 4k@60, when it's the devs who you should be looking at when you're disappointed with the performance on a given piece of hardware.


Oh yes it’s the developer’s fault for not “tuning” something the right way. Sure.

You can “tune” something all you want; you’re always going to have a lower-quality representation if you want better performance. The hardware should give developers the possibility of getting better-quality graphics at higher resolutions. We can play the original Doom at 4K and probably even 8K without much problem. But that doesn’t mean it’s because they “tuned” it better; it’s because hardware has gotten better, and hardware will always be the limiting factor for games.


I think the point is that with PCs they make less effort to eke performance out of the hardware. When you've got a console, you know exactly what you'll be optimising for and work really hard to get the most out of it. With a PC release I think devs tend to make far less effort and simply up the requirements.


Not only that, but later in the same demo you can notice the fps drop during the explosion sequence.


They are likely incentivized to jack up the price on personal purchases so the big manufacturers can have more overhead integrating it with their consoles or pre-built gaming PCs.


They demonstrated real time super resolution on top of hybrid rendering. The meaning of 4k@60Hz has changed. They can render that just fine -- it's just a question of how many of the pixels are imagined by the model and how good the model is.


With my 1080 (not 1080Ti) I play most games (just a couple of exceptions, really) at 60 FPS on a 4k monitor. And look at the 2080 with the same price tag as the 1080 - gamers will wipe it out of stock in seconds.


Spec page says 8K HDR @ 60fps.


We're talking about two different things here... You mentioned that the card is capable of outputting 8k HDR @ 60Hz, i.e. your Windows desktop can happily run at 7680x4320@60. I mentioned that running games at 4k@60Hz or 3k@144Hz smoothly might not be possible for some demanding games, and that many gamers expected that from the new generation.


You can slow down any hardware with a sufficiently inefficient program (or a sufficiently detailed scene, if we're talking about GPUs). You can easily make any video card run at 0.001 FPS if your scene is heavy enough. So it only depends on the game developers; it's unfair to blame Nvidia for that. GPU progress is astonishing, at least when compared with CPU progress.


> GPU progress is astonishing, at least when compared with CPU progress.

I have a dual Xeon rig with sixteen cores that I used to use for video transcoding. It could transcode a movie in about 12 hours.

I read that a cheap Nvidia GPU could do the job in less than 30 minutes. It seemed way too good to be true, but I figured I'd spend $120 to find out.

It turned out the hype was wrong; it didn't take 30 minutes to do what sixteen Xeon cores spent 12 hours doing, it took fifteen minutes.

Unreal.


Ehm, if your server runs a reasonably modern x264, you should get significantly lower bitrate at "transparent" quality compared to what the GPU's hardware encoder is even capable of reaching. The reason being that the hardware encoder can't use some of the features x264 implements fast enough to make them worthwhile at that sort of time investment.

Please don't measure any lossy computation by speed alone; always make sure that the required quality can even be reached, and even if it can, that the performance benefit still holds after you tune those features up far enough.


NVENC is great for streaming, or for archiving at 50-100 Mbit, but at reasonable bitrates quality goes out the window. Same goes for Intel's encoder.


It's not about blame; it's about being realistic that 4K means 4x the pixels and 8K means 16x the pixels, and while these cards represent a lot of progress they're nowhere near that level.


Exactly - high-end PC gaming has moved on from higher resolutions to higher refresh rates, ideally both.

Many enthusiast gamers consider gaming at 60 fps to be rather outdated.


Seems like gaming has long since hit an "eternal September" as producers are spewing out ever more impressive tech demos.

I have long since taken to killing every graphical effect game devs can think of throwing into a game (if they supplied a toggle for it), simply so I can tell where the enemy is without wearing sunglasses indoors in a dark room.


Sure, for games like Battlegrounds. But some games don't even have enemies!


Ah yes, visual stories 2.0...


The Witness, Stephen's Sausage Roll, Minecraft peaceful mode or creative mode.


Uhh, no. Non-fps games.


Non-fps, non-arpg, non-mmo, non-rts, non-moba, ...


It's far from a nonsense purchase for Deep Learning professionals. At https://lambdalabs.com nearly every one of our customers is calling us up interested in the 2080Ti for Deep Learning. The reality is that very few people training neural networks need more than 11GB.


Without anything beyond claims about what reality™ is, your response unfortunately just reads like an ad for your company.


"Reality", in our case, is based on conversations with a few dozen customers who use our products as well as the actively publishing researchers we know personally. The majority of sampled customers are training convnets of some fashion, the most common being Yolo/SSD and ResNet.


Most people I know are just using Google Cloud. Directly integrated with Tensorflow, and way more scalable.

I can run 10 GPUs on my model training runs and finish in an hour now, when they used to take at least 2 or 3 days, and it took absolutely no work on my end. It's been absolutely wonderful for my productivity. The price doesn't matter either, because compared to how much people are being paid at these sorts of companies, it really doesn't. The boost in efficiency is so much more important.


Does Google Cloud run anything other than Tensorflow well? Specifically, I'm wondering about PyTorch.


GCP is pretty tightly integrated with/optimized for Tensorflow. That's why scaling the number of GPUs wasn't a hassle, for example, since I was already using the TF Estimator framework.

I'm pretty sure that for the next Tensorflow 3.0 update, they're rethinking it towards a PyTorch style with more dynamic computational graphs.


Do these people you talk with know the specs?

Do they know whether the tensor cores in these processors are just good for inference, or whether they are similar to the ones in the pricier models (floating-point precision)?


Then most likely they would not be successful in their businesses; training older models, or newer ones with a batch size of 1, is not a recipe for success these days, when you more likely need 100s of GPUs running slight modifications of models in parallel to find the one that works.


Most people I know who publish in NIPS/ICLR train their models using between 1-4 GPUs under their desk or 8 GPUs in a server. I would argue that these folks make competitive models and "are successful in their business" with only 1-8 GPUs. While having access to hundreds of GPUs helps you explore model space more quickly, it's not a hard pre-requisite to success in Deep Learning.


Yup, state of the art doesn't require GPU farms, just a couple in a consumer desktop will do.


Look up AmoebaNet-D if you think you don't need a GPU farm these days. Those were the times when 1-2 GPUs were enough...


You know, there is more to deep learning research than meta-learning/architecture exploration. Sure, you can explore the hyperparameter space faster with 500 GPUs and get yet another 0.05% better test score on ImageNet (or more, I don't actually know), but there are other ways to do something meaningful in DL without such compute power.


That's a fair point and I agree. It's just sometimes difficult to beat automated exploration; as a standard company you probably don't have access to top-end researchers/practitioners, just average ones, and those might get a significant boost by trading smartness for brute force and running many models in parallel in evolutionary fashion.

When you look at, e.g., how Google's internal system constructs loss functions and how many combinations they try, one has to have an unexpected idea to beat their results, and that idea can usually be quickly incorporated into their platform, raising the bar for individual researchers. At Facebook they basically press a few buttons and select a few checkboxes, then wait until the best model is selected, which leads to frustration among researchers.


It's just an indicator that extensive improvement is possible. But it's like adding more and more shader cores: you get your FPS gains up to the point where ray tracing appears around the corner, and now you need to add a different kind of core.

Same process for research. You're supposed to find some insight into how to do one thing or another and find the direction of search; eventually there will be hardware to fully explore that direction. Then you move on to a different direction. Rinse, repeat.


We used to call this "model fishing" and regard models that came from places that did it with fear and suspicion (due to sudden failures and poor performance in production).

What has changed that people think this is a wise approach?


They don't suddenly fail any more, and performance in production is fine... This is a rather empirical science right now.

If forced to speculate, I'd say far larger datasets are part of the answer. If you can afford to hold back 30%+ of your data for (multi-tranche) verification the difference between testing and production becomes a problem for the philosophers.


Hmm, this isn't my experience. My experience is that highly complex models tend to accurately reflect the ground truth in the current dataset, but not the domain theory that generated it, so when things change (prevailing weather conditions, political alignments, interest rates, data usage) in ways that move the distribution (but not the domain theory), they do fail. The question is: are you trading poorer compression for more fidelity, or are you learning the system that generates the data?


AutoML and the automated architecture/hyperparameter search used by FAANG, producing things like AmoebaNet-D, which beat all previous state-of-the-art results.


Where are you getting a 70-90% price increase?

1080s are ~$450, will now be $799. 1070s are ~$400, will now be $599.

(I'm going off the top hits on Newegg)


Those are the "Founder's Edition" prices for the first cards from Nvidia, so it's not quite this bad either.

MSRP on cards from OEMs should start at $499 for the 2070 and $699 for the 2080 ($100 cheaper than Nvidia's).

Personally I still think it's insane. I have a GTX 970 from October 2014 and that was $369.

EDIT: Even my $369 was a bit of a premium (OC edition and shortly after launch, I don't remember but I'm guessing I bought whatever was in stock). Wikipedia claims the GTX 970 launched in September 2014 at $329.

Assuming the $329 MSRP is comparable to the $499 announcement, the __70 line is up 52% in two generations. I'm sure it's better hardware, but that's a big pile of money.

And if my 970 experience holds through today, the OEM cards that are actually available are going to be pricier than the MSRP, but maybe still lower than the Founder's Edition.

We'll see where the 2060 ends up.


Founders edition 2080 is £749, and the cheapest non-Nvidia one is £715 from Gigabyte. The difference is tiny and those prices are still insane.


We're also talking huge VRAM jumps, and RAM manufacturers have also hiked prices.


For those confused, these are about what the 10-series Founder's Editions were priced at originally. When the 10 series was released, the 9xx series was priced relative to it about the way the 10 series is now priced relative to the 20. The price isn't going up by 70-90%; it's that the previous series (10) is going down.

Does that mean the 10 series isn't worth it? Not at all. The 9 series was still an excellent series when the 10 was released, and people did pick them up.


Ehhhh, not really, but the release seems a bit different this time around. The 1080 Founders Edition and 1080 Ti FE cards both debuted at $700 USD. (The 1080 Ti was released nearly a year later.) This puts it closer to the now released 2080 FE, which is $800 USD.

The 2080 Ti FE, however, is in a league closer to the Titan X/Xp, which were at $1,200. Also they're releasing the highest-end Ti edition at the same time as the ordinary version, which is a first for Nvidia, I think? (The Titan Xp was also launched after, not concurrently, with the 10xx series...) I think the concurrent launch of the 2080 Ti with the ordinary variant means they're positioning it more like an alternative to the Xp, while the non-Ti variants are closer to the ordinary gaming cards you'd normally get. In other words, for people willing to blow their price/perf budget a bit more.

For DL workloads the 1080 Ti is very cost effective (vs the Xp), so it remains to be seen which variant will have the better bang/buck ratio for those uses. I suspect the fact these include Tensor Cores at their given price point will probably be a major selling point regardless of the exact model choice, especially among hobbyist DL users. The RTX line will almost certainly be better in terms of price/perf for local DL work, no matter the bloated margins vs older series.

They may also be keeping prices a bit inflated, besides margins, so they can keep selling older stock and pushing it out. The 9xx series, as you said, continued to sell for a while after 10xx was released. I expect the same will be true this time around, too, especially with prices being so high.


> Ehhhh, not really

Ehhh, yes really.

1080 FE: $699 USD; 2080 FE: $799 USD

That qualifies as almost the same price in my book. By and large, not a 70-90% price increase as being suggested.

Unless you want to explain how 700-800 is a 70-90% price increase?


Tensor cores are great for inferencing; for training their support is still a bit shaky, last time I checked.


Tensor cores (e.g. SIMD matrix FMAs) are extremely useful for training. There is nothing shaky about it.

I do not get why you're being so bizarrely negative about this card. Yes, there are an enormous number of applications, for both training and inference, where an 11GB ridiculously powerful card (both in OPS and in memory speed) can be enormously useful.


Yes, I agree, tensor cores are awesome - if your framework of choice doesn't have the rough edges you inevitably hit when you try to do advanced models on e.g. a V100 but which work just fine on a TPU. I think the presented Turing card is a masterpiece; it's just that, given I was hitting the 11GB limit with some semantic segmentation and multi-object detection a year ago, I am obviously disappointed that this wasn't increased, and I am forced to buy an RTX 5000 instead ($2300) or a used K80/P40. Also, as a gaming card, outside of a few RTX games I doubt it will give adequate value to gamers who expected 144Hz or faster VR and similar goodies. For raytracing, and as a fusion of RT and DL, it's truly redefining computer graphics.


Tensor cores were built for training. For inference they added the int8 instructions (dp4a), which have lower precision. Turing also has int4, and for inference this card blows Volta out of the water, since the V100 was only ~TOPS for int8. Turing is 250 TOPS for int8 and 500 TOPS for int4.


Which 10x0 FE did cost $1200? Did I miss something?


1080 FE: $699 USD; 2080 FE: $799 USD

That qualifies as about the same price in my book, and not a 70-90% jump.

As for the Ti price when 1080 series was released, there was none.


For the 1080, by your numbers, $799/$450 represents a ~78% price increase.


First, those are Founders Edition prices, which cost an extra $100: https://news.ycombinator.com/item?id=17802871 Also, 1080s did not start at $450 each.

The 2080's price will drop eventually; until then Nvidia has no reason to lower prices when the cards are going to be sold out for weeks if not months.


The price of 1080s was $600-$650 for the Founders Edition cards when they first came out. I know because I preordered at those prices. I think the price of the 2080 FE cards is fair given that they are not only faster at raster graphics, but also have the raytracing/AI capabilities.


Alright fair. 50-80%, not 70-90%.


These are also Founders Edition prices. Still the 2080ti is pretty expensive even at non-FE pricing:

RTX 2070 cards will start at $499, with RTX 2080 at $699, and the RTX 2080 Ti starting at $999.


For video professionals doing ray tracing this upgrade would likely be a must-have. This assumes the improvement (i.e. 6x or anything in that ballpark) is real and their software supports it... if so, hell, nVidia could double or triple the price of the card and it would still make sense for them. nVidia is just cashing in on a lack of any real competition at the high-end currently... hopefully we'll see AMD get back in the game at some point re: GPUs.


Video professionals should use the Turing-powered Quadros; these are for gamers.


Not true. There has never (or at least until now) been a reason for video or 3D professionals in the "arts" to use anything but GTX cards. 3D CAD and AI professionals, on the other hand, will profit from the Quadro cards. The main differences between Quadro and GTX are the amount of memory, clock speed, longevity and drivers. GTX cards are clocked faster but are relatively less reliable. GTX drivers are game optimized, Quadro drivers are CAD optimized (extremely so). GTX cards are much cheaper, but may have a shorter lifetime (but who runs a render farm on 10-year-old cards...). GTX offers very fast single-precision calculations, Quadro single and double. Almost all 3D rendering is done in single precision.


It's a spectrum. Not every professional wants and/or can afford the top end cards. Many professionals (i.e. those who make money doing a thing) have been using high-end gaming cards for work pretty much since they existed. There's also the long-running debate as to the actual value of the 'pro' line of video cards for non-mission critical purposes (enough of one that nVidia in their license prohibited the use of gaming cards in servers with the odd exception of crypto mining)


But they are the low-to-mid segment of professionals doing weddings and local business presentations with median income around $50k. The ones doing interesting work can't live without real 10-bit HDR on calibrated 4k screens for realistic printing/video projections, without proper 5k+ RAW cameras and top-end lens etc. and those are extremely expensive.


Every self-declared "professional" I've met spent most of their time unproductively fiddling with their equipment.

While a select few can push the envelope with technology alone, a bit of talent seems to easily compensate for almost any technological limitation. The "latest and greatest" is the easy route to mediocrity.

That's been true for all creative disciplines: from photography to writing to animation. There might even be a mechanism, where inferior (or at least different) tools may be a restriction that nurtures creativity, or at least guarantees results that are easily distinguished from the rest of the market.


>But they are the low-to-mid segment of professionals doing weddings and local business presentations with median income around $50k

Yeah, tell that to the Octane Render community. There are plenty of incredible starving artists using consumer GPUs in their workflow to render top-notch work.


These expensive items are actually useful equipment for video production, but it doesn't mean a video card has the same importance. Who cares about slightly longer rendering times for rarely used special effects?


Last time I checked you needed at least a Quadro for 10-bit HDR in normal Windows applications (fullscreen games worked on regular cards as well) :-(


This is the most elitist thing I've read today, and it's only 10:30am.


Isn't this just an implementation of Microsoft's DirectX Raytracing API?

If so, only video games are really going to use that API. I doubt that a software renderer (or CUDA-renderer) would leave raytracing / light calculations to a 3rd party.

There's a rumor that Disney bought a bunch of real-time raytracing equipment for their theme parks (the Star Wars section in Disney World). So high-quality realtime computation is needed for high-tech entertainment / theme parks / etc. So there's definitely a market, even if gamers don't buy in.


You're wrong. Pretty much every major raytracing company is excited about this development. Blender is the only major one that I heard nothing from.


I mean, it'd be exciting if true.

Do you have a link of one of these raytracing companies explaining how they plan to use NVidia's RTX cards for this sort of thing?

As far as I'm aware, the only API to access these RTX raytracing cores is through the DirectX12 Raytracing API. I did a quick check on Khronos, but it doesn't seem like they have anything ready for OpenGL or Vulkan.

Alternatively, maybe NVidia is opening up a new Raytracing API to access the hardware. But I didn't see any information on that either.

EDIT: It seems like NVidia does have a specialized Raytracing API that various industry partners have announced support of: https://blogs.nvidia.com/blog/2018/08/13/turing-industry-sup...


Raytracing API is coming to Vulkan too. See here:

http://on-demand.gputechconf.com/gtc/2018/presentation/s8521...


Turing and Volta can do concurrent integer and FP operations, which will give them a huge boost in traditional graphics as well.

How much of a boost will be the real question here.


I'd expect a total of around 15% on the 2080 Ti, mostly due to more shaders and an improved scheduler.


I think it will still be popular for DL. The models themselves will fit and batches can be distributed over multiple cards during training.


(DL = Deep Learning I expect. Two letter acronyms are rarely clear. Please don't use them)


OK


> Not sure who is the target market for this tech honestly.

The RTX series succeeds the GTX series, which historically targeted computer gamers.



For the money they're charging, they could have at least thrown us a bone by including HDMI 2.1 VRR, or ideally even adaptive sync.

Nvidia users still have to pay an Nvidia tax to access expensive G-Sync monitors, which are far fewer in availability and selection than FreeSync ones (and FreeSync also works with Xbox).


>70-90% price increase... And prices were already inflated by crypto

If you are going off the inflated prices, this is hardly a price increase at all. 70% is the increase from 1080 Ti release price, not from the inflated crypto prices.


I meant that the release price was subsequently inflated by crypto, which has in the meantime cooled down significantly; if crypto were still hot, then inflated 20x0 prices would be expected. This seems more like an opportunistic move, "residual crypto-wave surfing", and resembles the strategy Nvidia has been running in the pro segment for a while due to no competition.


1080ti launched in March 2017 before the big 2017 boom.


Don't give people ideas like this. It only leads to misery for everyone.


TFLOPS is correlated with performance, but the correlation is not so perfect that you can just use TFLOPS as a stand-in for performance. Performance is complex. I'd wait for benchmarks.


The "target audience" is the hype train. If you believe Nvidia has a better long term future, that's good for Nvidia. Stuff like this trickles down to the main stream eventually.


Where are you getting these TFLOPS numbers? I haven't seen them in anything other than rumors that got a lot of things wrong so I wouldn't be too certain about them.


Well, one audience is obvious from your own sentence: people who are willing to pay to have the best performance currently possible.


2x Quadro RTX 8000 96GB ~$20000.


You can always decrease a batch size to fit in smaller RAM.


Unless you are already at batch size 1 and TensorFlow greets you with a nice OOM message... Try a Wide ResNet for state-of-the-art classification and play with its widening parameter.


Use gradient checkpointing.
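
For anyone unfamiliar: gradient checkpointing trades compute for memory by recomputing intermediate activations during the backward pass instead of storing them. A minimal PyTorch sketch (the layer sizes and block count here are made up purely for illustration):

  import torch
  from torch.utils.checkpoint import checkpoint

  # Illustrative model: a deep stack of blocks that would normally keep
  # every intermediate activation alive until the backward pass.
  blocks = torch.nn.ModuleList(
      [torch.nn.Sequential(torch.nn.Linear(1024, 1024), torch.nn.ReLU())
       for _ in range(16)])

  def forward(x):
      for block in blocks:
          # Activations inside `block` are recomputed during backward
          # instead of being stored, cutting peak memory use.
          x = checkpoint(block, x)
      return x

  x = torch.randn(32, 1024, requires_grad=True)
  forward(x).sum().backward()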


...if the batch is the thing that doesn't fit in RAM, as opposed to the model.


introducing fiatlux


It's not as clever, but RayCoin would market better. Fiat Lux could be its tagline?

I hope I'm joking.


No cryptocurrency that I know of has succeeded based on the utility of its proof-of-work algorithm producing something useful. Only proof-of-work algorithms in service of the currency itself have made an impact.


I'm thinking more that a blockchain enthusiast will design a new coin that abuses the raytracing hardware into running some new crypto-token just because.

This series tentatively looks like it won't be subject to the insane price hikes and scarcity of the 10 series, because it's already expensive for adding more diverse hardware that doesn't immediately help you mine Ethereum (or whatever the GPU coin of choice is these days). But you never know...

It doesn't need to _do_ anything as long as you can convince people that they should buy RayCoins because everyone else is going to buy RayCoins and they'd better get in on the ground floor here.


If that venture shows any glimpse of success, soon to follow:

* RaCoin

* BeamCoin

* BeaconCoin

* <insert ray synonym here>Coin


NVIDIA isn't used for any cryptocurrency mining AFAIK. Its integer math performance per watt is not in the same ballpark as AMD's.


1070 was very popular for ETH last year, even beating some AMD cards in performance/watt.


Whether it's substantially better for deep learning depends on whether they chose to neuter fp16 like they did in all other consumer GPUs. The main bottleneck is not actually the size of the model per se (sizes of the models have been mostly going _down_ as of late), it's the size of the batch during training. Both batch size and model size determine the size of the main memory hog: activations that you need to keep to do training. You don't need to keep activations to do inference: you can discard them after they're fed into the next layer. So it's not entirely accurate to say that this is useless for DL. What could make it totally kickass for AI would be a proper, non-neutered implementation of fp16, similar to what you'd find in P100, V100 and (surprisingly) Jetson TX2, which would effectively double the possible batch size, which in turn leads to quicker convergence. It could also halve the model size as an added bonus. This is NVIDIA, however, so I would bet good money fp16 is still neutered in "consumer" SKUs.
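
To put a rough number on the activation-memory point (the figures below are purely illustrative, not from any particular model):

  # Activation footprint of a single conv layer's output feature map:
  # batch * channels * height * width * bytes per element.
  def activation_mb(batch, channels, h, w, bytes_per_elem):
      return batch * channels * h * w * bytes_per_elem / 2**20

  # Hypothetical 256-channel 128x128 feature map (one layer of many).
  print(activation_mb(16, 256, 128, 128, 4))  # fp32: 256 MB
  print(activation_mb(16, 256, 128, 128, 2))  # fp16: 128 MB
  print(activation_mb(32, 256, 128, 128, 2))  # fp16 fits double the batch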


Of course - for inferencing, assuming >100 TFLOPS in float16 and >400 TOPS in INT4 as demoed by Jensen, it would be awesome for already trained/pruned models. Though for training I am not so sure; even V100s have rough edges doing that with TensorFlow. And for model size - I guess it all depends on the area you are focusing on; I run out of memory all the time these days, even with batches of size 1 :(


If it's data size in GPU RAM you are concerned about, couldn't you store fp16 and cast into fp32 just in the kernel? In OpenCL, you would do this with vload_halfN and vstore_halfN to convert during load and store.

You won't get double throughput compared to fp32, but you shouldn't fall back to some terribly slow path either.
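
The general idea (half-precision storage, full-precision math), sketched in numpy rather than OpenCL just to show the memory side of the tradeoff; on the GPU the conversion would happen in the kernel's loads and stores:

  import numpy as np

  # Keep the large array in fp16, at half the memory footprint...
  data = np.random.rand(1024, 1024).astype(np.float16)   # 2 MiB

  # ...and upcast to fp32 only for the arithmetic, then store back.
  result = (data.astype(np.float32) * 2.0 + 1.0).astype(np.float16)

  print(data.nbytes // 2**20, result.nbytes // 2**20)  # 2 2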


I haven't looked into it myself, but it could be that due to all this massaging you'd lose more on throughput than you gain on memory use. It's similar to doing 8-bit quantized stuff on general-purpose CPUs: it's very hard to make it any faster than float32 due to all the futzing that has to be done before and after the (rather small) computation. In comparison, no futzing at all is needed for float32: load 4/8/16 things at a time (depending on the ISA), do your stuff, store 4/8/16 things at a time. Simple.


Right, you won't exceed the fp32 calculation rate, unless perhaps it was bandwidth-starved accessing the slowest memory tier. You are doing all fp32 operations, after all. What you can do is fit twice the number of array elements into whichever GPU memory layer you decide to store as fp16, so you potentially work on half as many blocks in a decomposed problem, or twice the practical problem size in a non-decomposed problem.

You can separately decide which layer of cache to target for your loads and stores which convert, and then use fp32 with normalized 0-1 range math in that inner tier. You only have to introduce some saturation or rounding if your math heavily depends on the fp16 representation. The load and store routines are vectorized. You load one fp32 SIMD vector from one fp16 SIMD vector of the same size (edit: I mean same number of elements) and vice versa.


The point is, you might end up with compute throughput _substantially_ worse than raw fp32. No one will take that tradeoff.


I have used fp16 buffers frequently on NVIDIA GPUs with OpenGL in generations ranging from GTX 760 (GK104) to Titan X (GM200 and GP102) as well as mobile GPUs like GT 730M (GK208M). I do this for things like ray-casting volume rendering, where the dynamic range, precision, and space tradeoffs are very important.

My custom shaders performing texture-mapping and blending are implicitly performing the same underlying half-load and half-store operations to work with these stored formats. The OpenGL shader model is that you are working in normalized fp math and the storage format is hidden, controlled independently with format flags during buffer allocation, so the format conversions are implicit during the loads and stores on the buffers. The shaders on fp16 data perform very well and this is non-sequential access patterns where individual 3 or 4-wide vectors are being loaded and stored for individual multi-channel voxels and pixels.

If I remember correctly, I only found one bad case where the OpenGL stack seemed to fail to do this well, and it was something like a 2-channel fp16 buffer where performance would fall off a cliff. Using 1, 3, or 4-channel buffers (even with padding) would perform pretty consistently with either uint8, uint16, fp16, or fp32 storage formats. It's possible they just don't have a properly tuned 2-channel texture sampling routine in their driver, and I've never had a need to explore 2-wide vector access in OpenCL.


I wasn't aware that the Jetson TX1 and TX2 supported fp16; that's interesting and good to know. Thanks for the heads up.


IIRC all recent NVIDIA GPUs support fp16; it's just that fp16 performance is severely hampered on "consumer"-grade hardware, so the memory/perf tradeoff is not viable. I mean, on the one hand I can see why that is: fp16 is not terribly useful in games. But on the other, it's probably nearly the exact same die with a few things disabled here and there to differentiate it from $7K Tesla GPUs, which as an engineer I find super tacky, much like MS deliberately disabling features in Windows Home that don't really need to be disabled.


[flagged]


Wow, you have really compelling arguments as to why crypto isn't collapsing! Thank you for your insight.


Please don't reply to a bad comment with another bad one.

https://news.ycombinator.com/newsguidelines.html


Let's see your argument for it collapsing. It's been very volatile, sure, but you'd still have gotten insane gains if you had invested just last year.

BTC has gone up 45% in the past year. 1015% in 2 years.

source: https://cryptowat.ch/

Collapsing.


The cost to settle a transaction is prohibitively high on GPUs, and so is mining. Crypto might be fine with ASICs after this downward movement, perhaps, but GPUs are quickly falling out of the loop due to their cost/performance ratio.


I never said it was collapsing.


I suppose it’s collapsing up since 2009?


  			2080 Ti FE	RTX 2080 Ti	GTX 1080 Ti
  Price			$1,199		$999		$699
  GPU Architecture	Turing		Turing		Pascal
  Boost Clock		1635 MHz	1545 MHz	1582 MHz
  Frame Buffer		11 GB GDDR6	11 GB GDDR6	11 GB GDDR5X
  Memory Speed		14 Gbps		14 Gbps		11 Gbps
  Memory Interface	352-bit		352-bit		352-bit
  CUDA Cores		4352		4352		3584
  TDP			260W		250W		250W
  Giga Rays		10		10		?


  			2080 FE		RTX 2080	GTX 1080
  Price			$799		$699		$549
  GPU Architecture	Turing		Turing		Pascal
  Boost Clock		1800 MHz	1710 MHz	1733 MHz
  Frame Buffer		8 GB GDDR6	8 GB GDDR6	8 GB GDDR5X
  Memory Speed		14 Gbps		14 Gbps		10 Gbps
  Memory Interface	256-bit		256-bit		256-bit
  CUDA Cores		2944		2944		2560
  TDP			225W		215W		180W
  Giga Rays		8		8		?


			2070 FE		RTX 2070	GTX 1070 Ti	GTX 1070
  Price			$599		$499		$449		$399
  GPU Architecture	Turing		Turing		Pascal		Pascal
  Boost Clock		1710 MHz	1620 MHz	1607 MHz	1683 MHz
  Frame Buffer		8 GB GDDR6	8 GB GDDR6	8 GB GDDR5	8 GB GDDR5
  Memory Speed		14 Gbps		14 Gbps		8 Gbps		8 Gbps
  Memory Interface	256-bit		256-bit		256-bit		256-bit
  CUDA Cores		2304		2304		2432		1920
  TDP			185W		175W		180W		150W
  Giga Rays		6		6		?		?


Those prices are Nvidia's OC Founder's Editions. The non-OC versions are actually:

RTX 2080 Ti: $999

RTX 2080: $699

RTX 2070: $499

Source: https://i.imgur.com/LrUitua.png


Thank you! Updated.


The 2080 TDP seems to be 215W rather than 285W (source: https://www.anandtech.com/show/13249/nvidia-announces-geforc...).

Are you sure the FE TDPs are different from the reference spec? I haven't seen that mentioned anywhere else.


Ack, I can't edit my post again. Good catch. I'll message a mod and see if they can update it.


OK, I just found the FE TDPs listed on the NVidia site (2080 FE: 225W).


I assume the OC versions simply have a bit toggled in the VBIOS?


Probably better binning.


No, you're free to do your own OC. But how much OC your chip will be able to handle is unknown. It's called silicon lottery.


2080 Ti FE, 1259 Euros. All US companies follow this simple formula: $ to Euro 1:1, add 25% and then add 50-60 Euros for Free Shipping.


The premium for the EU market is just about the amount of VAT. In the US there's sales tax instead, which isn't included in the MSRP.


Prices must include VAT in Europe, which seems to account for that 20% plus you’re referring to.


But that's about covered by the difference in exchange rate.

A few years ago prices were mostly 1:1 - in USD before taxes and in EUR after taxes. So the situation got worse.

I think the main reason is not that anything makes selling the cards there more expensive, but simply that they try to charge more.


Exchange rates fluctuate; it's often been ~1.05:1, which is a long way from covering the 20% VAT. Add some uncertainty, stronger consumer protection laws, and smaller markets, and the price difference is relatively small.


> the price difference is relatively small

Whereas in reality there shouldn't be any, since sending stuff across the pond costs nothing, as seen with products like bananas. At least I live in a place which has only 8% VAT, so prices look a bit more like the US (but still higher for no good reason).


Sending stuff across the pond is expensive because when importing stuff you pay VAT + import fees on top of what you already paid.


VAT I get - everybody pays it - but if a company imports stuff (especially a local branch/official distributor), they have to do the import paperwork; are there any additional fees involved?


Same or even better price then! Still, that's a lot of money for a card.


It's the 2-year warranty, the translation of manuals, and the need to have an office in all major EU markets due to different legal requirements that makes it all pricier.


You don't need an office in every EU market; that's the thing about the EU, it's one big single (or connected) market, and if you can sell your stuff in one EU country, you can sell it in all of them. (Very few restrictions apply.)


Don't forget taxes. US prices are typically listed without tax included, whereas I'm guessing the European ones include VAT.


Most probable explanation. 19% in Germany. So, $1199 in the US, ~$1443 in Europe. 1199 * 1.19 = ~$1427, or $1439 with Austrian VAT of 20%, but the price is still 1259 Euro, so that's decent. Seems about right.


All prices in the EU must be final (hence including all taxes). Yes, the ones in the US are pre sales/use tax.


Yeah; it's illegal to list prices for consumer products excluding VAT. If you're B2B you're allowed to list ex-VAT, but you are required to be very clear about what you're doing.


> need to have office in all major EU markets

So, one?


Either you're missing a smiley or you're serious. You can't be serious.


So you expect the things he mentioned are provided for free?


Which things exactly? Manual translations are SO DAMN EXPENSIVE - they have to sell, what, a dozen or so cards to cover that? Or the 7 European (EU) offices, one of which is also a dev center, versus 15 US offices (one of which is HQ/dev)? Or the stock, which ships to both from Taiwan?

In any case, it seems VAT is the explanation, and in that case the price carries no premium - it's actually favorable by comparison. Still expensive.


2-year warranty easily.


The EU does enforce a "2-year warranty", but it's not what you think. (Speaking as a German:) When you want to replace a broken product under the mandated warranty, then:

- In the first 6 months after purchase, the merchant must replace the product unless they can prove the defect was not present at purchase.

- After 6 months, the burden of proof reverses, and the customer must prove that the defect in question was already present at purchase.

In practice, whatever party has the burden of proof usually doesn't bother. So in effect, "6-month warranty" is a much more realistic description of this 2-year warranty.

(The fine print: Many vendors offer their own voluntary warranty on top of the mandated one. And I don't know if the rules are different in other EU countries.)


> In practice, whatever party has the burden of proof usually doesn't bother.

In practice, I've never had to prove anything within 2 years of purchase. Might be a difference between Germany and other EU countries, but somehow I doubt that.


This is probably how it is across the EU. Why antagonize your customers unnecessarily by forcing them to jump through hoops when your product fails in less than 2 years? That just leads to bad PR and reduced customer satisfaction and thus lower trust and sales.

They benefit from the perceived reliability of a 2-year warranty. Why buy a product that might fail after 6 months with no replacement, when a competitor is more likely to treat you fairly?


Stuff very rarely breaks in the second year. I'd bet the vast majority of warranty claims are within the first year, which is legally mandatory in every jurisdiction of note.


You're way, way overestimating the cost-per-item for everything mentioned there.


At NVidia's scale, they pretty much are. They ship on the order of 50 million desktop GPUs each year. Legal hours, real estate, or translators may seem expensive, but the number of graphics cards it would take to pay for them isn't even in the same ballpark as 50 million.

The warranties do cost something because they add significant risk/cost against each incremental unit, I will grant you that.

Regulations like these have a disproportionate effect relative to volume. NVidia would probably have most of those things even in the absence of the regulations, a couple guys in a garage would certainly not.
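
A minimal amortization sketch; the ~50 million units/year figure is from the comment above, while the fixed-cost numbers are made-up placeholders purely for illustration:

  # Amortizing fixed EU-compliance costs over annual desktop GPU volume.
  # All cost figures below are hypothetical, not real NVidia numbers.
  UNITS_PER_YEAR = 50_000_000            # order-of-magnitude shipment figure

  fixed_costs = {
      "manual translations": 2_000_000,  # assumed
      "EU offices / legal": 20_000_000,  # assumed
  }

  for item, cost in fixed_costs.items():
      print(f"{item}: ${cost / UNITS_PER_YEAR:.2f} per card")
  # Even tens of millions in fixed costs work out to well under a dollar per card.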


Have you heard about the EU single market? "Single" here is marketing speak for "one".


Your TDP figures are off

* 2080 TI FE is 260W

* 2080 TI is 250W

https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080...


Fixed, thanks!


Wow, were the founders editions really that cheap on the last gen? Guess the inflated prices from all the crypto mining really threw me off.


Yeah, they were.

NVidia's lesson from the Crypto-boom seems to be: "Some gamers are willing to pay >$1000 for their cards".

EDIT: To be fair, NVidia is still on 12nm-class lithography (TSMC's 12nm FFN, essentially a refined 16nm). So the larger dies of these new chips will naturally be more expensive to produce than the 10xx series; bigger chips cost more after all. If you fail to shrink the die, the economics demand that you raise the price instead (a rough die-cost sketch follows below).

Still, we all know that NVidia has fat margins. We also know that they overproduced the 1080 series during the Cryptoboom, and that they still want to sell all of their old cards. If they push the prices down too much, then no one will buy the old stuff.
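
A rough die-cost sketch for the die-size point above; the wafer price and defect density are assumed round numbers, the yield model is the simple Poisson approximation rather than anything NVidia publishes, and only the die areas (GP102 ~471 mm^2, TU102 ~754 mm^2) are public figures:

  import math

  # Bigger dies mean fewer candidates per wafer AND lower yield,
  # so cost per good die rises faster than the area itself.
  WAFER_COST = 7000        # USD per 300 mm wafer, assumed
  WAFER_DIAMETER = 300.0   # mm
  DEFECT_DENSITY = 0.001   # defects per mm^2, assumed

  def cost_per_good_die(area_mm2):
      r = WAFER_DIAMETER / 2
      # Standard dies-per-wafer approximation, including edge loss.
      dies = (math.pi * r**2) / area_mm2 - (math.pi * WAFER_DIAMETER) / math.sqrt(2 * area_mm2)
      yield_ = math.exp(-DEFECT_DENSITY * area_mm2)      # Poisson yield model
      return WAFER_COST / (dies * yield_)

  for name, area in [("GP102 (1080 Ti), ~471 mm^2", 471), ("TU102 (2080 Ti), ~754 mm^2", 754)]:
      print(f"{name}: ~${cost_per_good_die(area):.0f} per good die")
  # More than 2x the cost per good die for ~1.6x the area, before any margin math.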


they overproduced the 1080 series

If they did, 1080Ti wouldn't be "out of stock" on their website.


NVidia doesn't make many cards; they mostly make chips. The "Founders Edition" cards are an exception, but the mass-market products are made by EVGA, MSI, and other such companies.

The fire sales on EVGA 1080 Ti cards make it darn clear that there are too many 1080 Ti and 1080 cards out there.

https://linustechtips.com/main/topic/939592-nvidia-have-a-hu...

https://camelcamelcamel.com/EVGA-GeForce-GAMING-GDDR5X-Techn...

Second: these RTX 2080 chips have been in the rumor mill since June, maybe earlier. The fact that NVidia delayed the launch until now says enough: they have been stalling the release of the RTX series.


Firesale? EVGA 1080 Ti FTW3 prices on Amazon right now are more than I paid for that exact same card in 2017... Am I missing something?


Doesn't this mean that EVGA or MSI overproduced them, and NVIDIA sold everything they made?


With the explanation I said earlier, yes.

But there are alternative sources for what is going on. In particular:

https://www.reuters.com/article/us-nvidia-results/nvidia-for...

>> Nvidia previously had forecast sales for cryptocurrency chips for the fiscal second quarter ended July 29 of about $100 million. On Thursday it reported actual revenue of only $18 million.

So it's not entirely clear who overproduced per se, but what we DO know is that NVidia was expecting about $100 million worth of cards to be sold to cryptominers between April and July. Only $18 million worth were sold.

In any case, it is clear that there's a lot of 10xx cards laying around right now. And NVidia clearly wants to extract as much value from current stock as possible. Pricing the 20xx series very high above the 10xx series is one way to achieve what they want.


Could you cite this? I've heard the opposite: that Nvidia didn't chase after the crypto market /because/ a crash would cause problems (if they overproduced). Besides, they make plenty of money everywhere else. Furthermore, Intel has charged (edit: consumers) $1000+ for a chip before. The market will bear it, crypto or no crypto.


Intel has $10k Xeons, but that's not consumer-grade hardware.


https://linustechtips.com/main/topic/939592-nvidia-have-a-hu...

EDIT: Seems to be a citation from: https://seekingalpha.com/article/4182662-nvidia-appears-gpu-...

Alternative:

https://www.reuters.com/article/us-nvidia-results/nvidia-for...

>> Nvidia previously had forecast sales for cryptocurrency chips for the fiscal second quarter ended July 29 of about $100 million. On Thursday it reported actual revenue of only $18 million.

That suggests that NVidia has $82+ million worth of 10xx series GPUs lying around somewhere.
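
A trivial sketch of that estimate; the average selling price is a made-up assumption just to turn the dollar figure into a rough unit count:

  # Turning the quarterly revenue miss into a rough sense of unsold volume.
  FORECAST = 100_000_000   # crypto-related revenue NVidia had forecast
  ACTUAL = 18_000_000      # what was actually reported
  shortfall = FORECAST - ACTUAL                   # $82M

  ASSUMED_ASP = 400        # hypothetical average selling price per GPU, USD
  print(f"Shortfall: ${shortfall / 1e6:.0f}M, "
        f"roughly {shortfall // ASSUMED_ASP:,} GPUs at an assumed ${ASSUMED_ASP} ASP")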


That might not be a winning strategy if you have real competition though.


Rumor is that AMD's Navi is but a minor update next year. "Next Generation" is 2020 and beyond for AMD.

So unfortunately, NVidia can bet on a lack of competition for the near future. NVidia can always drop prices when Navi comes out (if it happens to be competitive). But it seems like they're betting that Navi won't be competitive, at least with this pricing structure.


It’s very strange how AMD reveals so much of their roadmap.


I dunno. We know Intel's roadmap: Ice Lake next year at 10nm (with AVX-512 instructions), Tiger Lake (10nm optimization), Sapphire Rapids (7nm) in 2021, etc. etc.

It seems like if you want people to buy your products, letting them know about them and the features they'll support (ex: AVX512) so the hype can build is a good thing.


> and the features they'll support (ex: Spectre v14)

FTFY


About 4 years ago Nvidia also used to publish a "roadmap" that showed a somewhat fake performance versus architecture plot. They stopped doing that after Volta.


There’s a huge difference between saying “we will have something in the future” (duh) and saying “we have absolutely nothing for the next year and a half.”

The latter gives your competitor the freedom to ask any price the market will accept without having to worry about a competitor undercutting this price in some near future.


When did they say the latter? It's more of just not giving out the codenames anymore.


> When did they say the latter?

At CES, AMD said that they'd only have Vega 20 coming up and it was only for the datacenter and AI. And that Navi would be for 2019.

https://www.anandtech.com/show/12233/amd-tech-day-at-ces-201...

That's like giving a blank check to your competitor, saying "Feel free to set prices any way you want, you're not going to be bothered by us."


It's a balance between area, yields and tooling - a mature process with established tooling and strong yields can offset some of the additional wafer cost that comes with a larger die area.


The table doesn't have any 10xx Founders Edition prices.


I paid $699 for my 1080TI FE card in early 2017.


Where do you get $399 for GTX 1070? Wikipedia claims $379 and I paid $374.

https://en.wikipedia.org/wiki/GeForce_10_series#GeForce_10_(...


The approximate Giga Rays figure for the 1080 Ti was given in the presentation as somewhere around 1 giga ray. :)


Since the post with the product page got merged here:

https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080...

Most of it is dramatically lit renderings of fans and vacuous marketing-speak of course, but there's a tidbit near the bottom of the page about NVLink I find interesting.

  "GeForce RTX™ NVLINK™ Bridge

  The GeForce RTX™ NVLink™ bridge connects two NVLink SLI-ready graphics cards with 50X the transfer bandwidth of previous technologies."

I guess NVLink is finally becoming relevant as a consumer technology then? Do you think this will be the GPU generation when consumer motherboards capable of pushing data to the GPU over NVLink appear as well?


> Do you think this will be the GPU generation when consumer motherboards capable of pushing things to the GPU by NVLink will appear as well?

First you'd need CPUs that support NVLink, and those are currently limited to the not-so-consumer-oriented POWER9.


NVLink works fine between GPUs even if the CPUs are x86. Just look at the NVIDIA DGX boxes/stations. https://devblogs.nvidia.com/wp-content/uploads/2017/04/NVLin...
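
A minimal PyTorch sketch of what a GPU-to-GPU transfer looks like from user code; whether the copy actually rides over NVLink or falls back to PCIe depends on the hardware and driver, and the tensor size here is arbitrary:

  import torch

  # Requires a machine with at least two CUDA GPUs.
  assert torch.cuda.device_count() >= 2, "need two GPUs for this sketch"

  # Ask the driver whether peer access between device 0 and 1 is possible.
  # With NVLink-connected GPUs (e.g. a DGX, or two cards with the RTX bridge),
  # the peer-to-peer copy below is routed over NVLink; otherwise it goes via PCIe.
  print("peer access 0<->1:", torch.cuda.can_device_access_peer(0, 1))

  x = torch.randn(4096, 4096, device="cuda:0")   # ~64 MiB of FP32 data on GPU 0
  y = x.to("cuda:1")                             # direct device-to-device copy
  torch.cuda.synchronize()
  print("copied", x.numel() * 4 / 2**20, "MiB to", y.device)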


"NVLink is a wire-based communications protocol for near-range semiconductor communications developed by Nvidia that can be used for data and control code transfers in processor systems between CPUs and GPUs and solely between GPUs"

Sounds like you're assuming operational mode A when this is going to be operational mode B.


Title is clickbait: the 6x performance gain is for ray tracing only, which is to be expected, since Pascal cards don't have the dedicated ray-tracing hardware of the new cards.


What are the approximate gains from the last gen, i.e. 1080->2080 or 1070->2070, for a non RTX enabled game? Would be speculation at this point I assume, but just based on clock freqs, mem bandwidth and the number of units on the chip?
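
A minimal sketch of the "just based on clock freqs ... and the number of units" estimate, using only the boost clocks and CUDA core counts from the table above (peak FP32 = cores x clock x 2 FMA ops per clock); it deliberately ignores memory bandwidth, IPC changes and the new RT/Tensor hardware, so treat it as a very rough lower bound:

  # Back-of-the-envelope peak FP32 throughput from the reference specs.
  cards = {
      "RTX 2070": (2304, 1620),   # CUDA cores, boost clock in MHz
      "GTX 1070": (1920, 1683),
  }

  def tflops(cores, mhz):
      # Two floating-point ops (one fused multiply-add) per core per clock.
      return cores * mhz * 2 / 1e6

  for name, (cores, mhz) in cards.items():
      print(f"{name}: ~{tflops(cores, mhz):.1f} TFLOPS FP32")
  # ~7.5 vs ~6.5 TFLOPS, i.e. ~15% on raw FP32 alone; architectural and
  # memory-bandwidth improvements would come on top of that.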


24% - 42% improvement according to an earlier Top comment...


Did you notice the '' quotes?


Why do you need major performance increases for old looking games that are already maxing out your monitor's refresh rate? Next you're gonna say programmable shaders are overkill.


Because a 1080 Ti can't do 144 FPS at 1440p or 60 FPS at 4K in all current gen titles with maxed out settings.


Could you reinforce the gamer meme a little more?


I really hope that the next generation of AMD Radeon cards performs well and that their efforts to port TensorFlow and other frameworks pay off.

AMD needs to put NVIDIA under the same heat that Intel got from the Zen architecture.

NVIDIA is too comfortable, with very high prices and crazy demands (you can't use consumer cards in data centers).


What effort to port tensorflow?



Upstream it.
