Unless somebody invents an RTX-based coin of course, then this is the minimum price...
These are gaming GPUs being announced at a games conference to an audience of gamers and games journalists. The focus of Huang's talk was how accelerated ray tracing will improve the graphical fidelity of games.
GPU compute is a spin-off from gaming. Gamers remain the primary market. They generate the volume that allows Nvidia to sell a supercomputer on a card for under $1000. If you want a professional product, Nvidia are more than happy to sell you a Tesla or a Quadro at a professional price point.
Waiting this generation out until RTX is more widespread/tested, and going for the next 7nm generation with (hopefully) AMD having a proper GPU to compete, seems like a better strategy for most gamers out there.
The 80 series has always been a low-volume, high-margin halo product within Nvidia's gaming range. It's dirt cheap and high-volume compared to Quadro, but top-of-the-range for gaming. Cryptomania has revealed to Nvidia that they probably priced the 1080 too low at launch - many gamers were in fact willing to pay substantially inflated prices for The Best GPU.
If the mass market decides to wait for the RTX 1060 or 1050, that's fine with Nvidia, as they face no real competition from AMD at the moment. It's very much in Nvidia's interests to make the most of their market dominance.
Sorry, but this comment is really kind of hilarious.
The "PC vs. Console" debate is something that almost predates the Internet and it has generated countless forum wars...
A high-end PC has always been the more powerful and more expensive gaming machine, basically since the first 3dfx cards in the late 90s. Some people are OK with that; others prefer consoles as a perfectly acceptable alternative.
The console might output 4K but that doesn't mean the GPU inside can handle higher settings than a 1060. The $600 GPU is irrelevant to that comparison.
Which is kinda irrelevant too, since console games are highly optimized for exactly one graphics card and the rest of the setup. You get all the details the hardware is capable of, smoothly, nothing more or less. No fiddling with tons of settings that most people have no clue about.
I don't get this often-used argument - my games look better on PC than on consoles. Yes, some UI gimmicks look better and textures have higher res, but after 30 minutes of playing it really doesn't matter at all; quality of gameplay and smoothness of experience are king. Of course, if one pours 5x the amount of money into a PC that would be spent on a console, there needs to be some inner justification. But it still doesn't make sense to me.
This is the view of a person who plays only on PC and has never had a console.
> Of course if one pours 5x the amount of money into PC that would be spent on console, there needs to be some inner justification.
You can get a prettier and smoother experience if you do that and don't put the settings on super-ultra.
But also, if you're already going to have a computer, popping in a better-than-console GPU is cheaper than a console.
The "smoothly" part is far from guaranteed. cough Dying Light cough
IIRC PC game sales are far juicier than their console counterparts, especially if you don't live in a first-world country.
Top-of-the-line rigs are what you need to play new titles, sold at $50-60+, on Ultra settings.
With hardware comparable in pricing to what you'd find in a console (or using something that doesn't make much sense to mine with, like a GTX 780 Ti), you can easily play a 3-5 year-old game at 1/2 to 1/4 of its original price, which might be reduced further by 50-90% during a Steam sale.
Better cancel my pre-order then...
> 2080Ti likely won't be able to do 4k@60Hz like 1080Ti couldn't either
My 1080ti and 4k monitor beg to differ.
I mean you're not countering his point in any way. He didn't say nobody would buy it, but it's a simple fact that most people can't afford or justify a thousand dollar GPU.
The vast majority of people buying Nvidia GPUs in the 10xx generation were going for 1060s and 1070s.
If you're sufficiently dedicated, even with limited funds, going for every second or third iteration of halo products can be a great strategy. That way, when you get it, you'll have the absolute best there is, and a couple of years down the track it'll still be great, though maybe it won't quite do Ultra anymore on the latest titles.
The 1080 Ti transformed my experience of my X34 Predator (it even paid for itself through timely sale of mining profits), which does 3440x1440 @ 95 Hz. I certainly wouldn't mind the new card, but I'll wait for the next one after that, minimum.
Do people really take out loans to get super expensive video cards?
Sure. Either by putting it on a credit card or by using some sort of in-store financing.
'Most people can't afford or justify'. Come on. People buy cars and other stuff. Someone working full-time and buying a $1k cheaper car can already afford and justify a $1k graphics card.
Cars also last way longer than a video card, and they have a longer warranty period (in the EU video cards get 2 years, cars usually 5-7).
Cars tend to be leased due to their higher cost and actual necessity, so saving $1k on a car usually doesn't translate into $1k in cash.
For those people, it is easily justifiable to spend less on a car, a holiday, or rent and instead have a nicer gaming rig. If you spend a lot of time playing games, why not?
Modern technology is way cheaper than the previous/old status symbols.
I'm thinking about buying a car, and one simple calculation is still: what else can I do with that money?
And yes, in Munich, where I live right now, there are enough people with a car who could use public transport and don't use it.
The target group of a 1k graphics card is not someone who can barely afford the car he/she needs every day and would not be able to earn anything if the car breaks down...
There's also the VR factor; as much as Nvidia/AMD like to label their products "VR ready", we are, imho, a long way off from actually being there.
At least if the goal is gonna be something like two displays at 4k@90+Hz and upwards.
My 1070 would do it, but the fps was garbage.
For me, if it's not at 60 fps or more ALL the time, I'm angry. Grew up playing Quake and Unreal Tournament pretty seriously.
For a more casual game like Skyrim it might be fine for me.
The last thing you want is having your field of view obscured by colorful explosions, bloom, debris, etc. when your opponent has a crystal clear vision on you.
But that is a point completely orthogonal to the question of whether Doom runs well at 4K with all features on. Just because I'm asking that question doesn't mean I would play deathmatch that way. But I might indeed go through the single-player campaign that way.
Indeed, I didn't play Doom at 4K at all because, as I said, it felt like garbage at 4K, no matter what settings, on a 1070.
Gamers aren't kids anymore. Amongst my friends who game, most are nearly 30 and working excellent jobs in software.
They always have been. Still doesn't stop those who can from buying them.
One would be forgiven for expecting roughly GTX 1080 Ti performance in the RTX 2070 at around $449-499 USD.
Or did you want to highlight something else I missed?
When you sell a product that actually has less compute performance (the RTX 2080) at the same price point as the higher-end part from the last generation (GTX 1080 Ti), something has gone horribly wrong. A lot of this is likely due to the Tensor/RTX units on the new GPUs taking up die space without an appropriate process shrink to make up for it, but that's all the more reason these are REALLY unappealing options for anyone outside the top-end enthusiast segment (the GTX 1070 is the most popular enthusiast card this generation, because even enthusiasts have budgets - usually the $400-500 range for GPUs).
tl;dr: The prices here make no sense; cards with similar or lower performance are selling at the same price points you could get from the previous gen, just with raytracing support added on top (so at least it won't net you WORSE performance when that new functionality is utilized). I don't know who Nvidia thinks is going to buy these from a gaming perspective.
The hype around "ti" is unreal. Now they're changing it to "RTX" and "ti". /shock /awe /s
To me, it's pretty clear NVIDIA is cashing out.
Have you not noticed that the market slaps the word "gaming" on commodity hardware along with a bunch of Christmas lights, and people happily pay a premium for it?
Gamers aren't the brightest bunch, and $1000 is the right price point when people are gladly dropping that on a mobile phone now.
Sure, compared to a few years ago I'd agree with you, but this market? This hype? No.
Even in the initially released marketing material from Nvidia, the 8800 GTX had obviously way better raw specs than the 9800 GTX. It took them a couple of days until they changed the material to compare on performance percentages in different games.
But the 9800 GTX was actually a slower card than the one year older 8800 GTX due to lower memory bandwidth and capacity. As such it was competing against one generation older mid-range cards like the 8800 GTS 512.
Current Steam user survey results (now that the over-counting issue has been fixed) show the GTX 1060 as the single most popular GPU installed by active Steam users, with a 12.5% market share; the GTX 1050 Ti and 1050 take second and third place with ~9.5% and ~6% respectively. That means about 30% of Steam users have a current-gen GPU in the $200-300 price range.
So yes, volume != profit, but the consumer obviously trends towards more reasonably priced cards. Cards at the Titan price point that NVidia is trying to sell the RTX 2080 Ti at are so uncommon that they get lumped into the 'other' category of the Steam survey. And since I highly doubt magic like doing integer operations in tandem with FP32 operations is going to bring much of a performance improvement to the majority of gaming workloads, especially given the really weak raw numbers of the just-announced cards (fewer TFLOPS on the 2080 than the 1080 Ti selling in the same price bracket), it's obvious NVidia is really taking the piss with their pricing. You're paying more for raytracing, that's it, and while it's certainly a cool feature, I don't really see gamers caring that much until it becomes usable at that $200-300 price point.
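For the TFLOPS claim, here's a rough back-of-the-envelope check using the boost clocks and CUDA core counts from the spec tables quoted further down in the thread (FP32 throughput ≈ 2 FLOPs per core per clock); this is only an estimate, not an official figure:

    # Rough FP32 throughput: 2 FLOPs/core/cycle x CUDA cores x boost clock (GHz),
    # using the numbers from the spec tables quoted later in this thread.
    cards = {
        "GTX 1080 Ti": (3584, 1.582),  # (CUDA cores, boost clock in GHz)
        "RTX 2080":    (2944, 1.710),
        "RTX 2080 Ti": (4352, 1.545),
    }
    for name, (cores, ghz) in cards.items():
        print(f"{name}: {2 * cores * ghz / 1000:.1f} TFLOPS")
    # -> roughly 11.3, 10.1, and 13.4 TFLOPS respectively

So on raw FP32 numbers the 2080 does indeed land below the 1080 Ti it's priced against.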
There's many expensive hobbies, but also a ton of cheap ones.
It's just common sense but if anyone needs proof, they could look at the pictures here (e.g. look at 8X MSAA 1080p vs only 2X at 4K, yet the latter has less visible aliasing): https://www.extremetech.com/gaming/180402-five-things-to-kno...
Did you expect it to run everything forever at an extremely high resolution with 60fps?
Why are Nvidia releasing new cards then?
This doesn't affect the argument that you're making. I just think it's actually incredibly absurd to complain as though it's the hardware's fault for not being able to keep up with 4k@60, when it's the devs who you should be looking at when you're disappointed with the performance on a given piece of hardware.
You can “tune” something all you want; you're always going to have a lower-quality representation if you want better performance. The hardware should give developers the possibility of getting better-quality graphics at higher resolutions. We can play the original Doom at 4K and probably even 8K without much problem. But that doesn't mean it's because they “tuned” it better; it's because hardware has gotten better, and hardware will always be the limiting factor for games.
I have a dual xeon rig with sixteen cores that I used to use for video transcoding. It could transcode a movie in about 12 hours.
I read that a cheap Nvidia GPU could do the job in less than 30 minutes. It seemed way too good to be true, but I figured I'd spend $120 to find out.
It turned out the hype was wrong; it didn't take 30 minutes to do what sixteen Xeons spent 12 hours doing, it took fifteen minutes.
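For anyone curious, that kind of GPU transcode is typically done with ffmpeg's NVENC encoder. Here's a minimal sketch, assuming an ffmpeg build with NVENC support; the filenames and settings are placeholders, since the poster's exact tool isn't stated:

    import subprocess

    # Minimal sketch: hardware-accelerated transcode via ffmpeg's NVENC encoder.
    # Assumes an ffmpeg build with NVENC enabled and an Nvidia GPU; filenames are placeholders.
    subprocess.run([
        "ffmpeg",
        "-i", "input.mkv",        # source file
        "-c:v", "h264_nvenc",     # encode video on the GPU
        "-preset", "slow",        # trade some speed back for quality
        "-c:a", "copy",           # pass audio through untouched
        "output.mp4",
    ], check=True)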
Please don't measure any lossy calculation by speed alone; always make sure that the required quality can even be reached, and even if it can, that the performance benefit still holds after you turn those quality settings up far enough.
Many enthusiasts gamers consider gaming at 60 fps to be rather outdated.
I have long since taken to killing every graphical effect game devs can think of throwing into a game (if they supplied a toggle for it), simply so I can tell where the enemy is without wearing sunglasses indoors in a dark room.
I can run 10 GPUs on my model training runs and finish in an hour now when they used to take at least 2 or 3 days, and it took absolutely no work on my end. It’s been absolutely wonderful for my productivity. The price doesn’t matter either because compared to how much the people are being paid at these sorts of companies, it really doesn’t matter. The boost in efficiency is so much more important.
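For context, here's a minimal sketch of one way to spread a training step across every visible GPU, using PyTorch's nn.DataParallel; the model, batch, and hyperparameters are stand-ins, since the parent comment doesn't describe its actual setup:

    import torch
    import torch.nn as nn

    # Minimal sketch: replicate a model across all visible GPUs and split each
    # batch among them. The model and data here are dummy placeholders.
    model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
    if torch.cuda.is_available():
        model = nn.DataParallel(model).cuda()   # one replica per GPU

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    inputs = torch.randn(1024, 512)             # dummy batch
    targets = torch.randint(0, 10, (1024,))
    if torch.cuda.is_available():
        inputs, targets = inputs.cuda(), targets.cuda()

    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)    # batch is split across GPUs automatically
    loss.backward()
    optimizer.step()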
I'm pretty sure that for the next TensorFlow 3.0 update, they're rethinking things towards a PyTorch style with more dynamic computational graphs.
Does anyone know if the tensor cores in these processors are just good for inference, or are they similar to the pricier models (floating-point precision)?
When you look at, e.g., how Google's internal system constructs loss functions and how many combinations they try, one has to have an unexpected idea to beat their results, and that idea can usually be quickly incorporated into their platform, raising the bar for individual researchers. At Facebook they basically press a few buttons and select a few checkboxes, then wait until the best model is selected, leading to frustration among researchers.
Same process for research. You're supposed to find some insight on how to do one thing or another and find a direction of search; eventually there will be hardware to fully explore that direction. Then you move on to a different direction. Rinse, repeat.
What has changed that people think this is a wise approach?
If forced to speculate, I'd say far larger datasets are part of the answer. If you can afford to hold back 30%+ of your data for (multi-tranche) verification the difference between testing and production becomes a problem for the philosophers.
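As a rough sketch of what holding back data for multi-tranche verification can look like in practice (the 30% figure and two-tranche split are taken from the comment above; the dataset here is a dummy placeholder):

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Hold back ~30% of a (dummy) dataset for verification, split into two
    # tranches so the second stays untouched until final sign-off.
    X, y = np.random.rand(10_000, 20), np.random.randint(0, 2, 10_000)

    X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.3, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(X_hold, y_hold, test_size=0.5, random_state=0)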
1080s are ~$450, will now be $799.
1070s are ~$400, will now be $599.
(I'm going off the top hits on Newegg)
MSRP on cards from OEMs should start at $499 for the 2070 and $699 for the 2080 ($100 cheaper than Nvidia's).
Personally I still think it's insane. I have a GTX 970 from October 2014 and that was $369.
EDIT: Even my $369 was a bit of a premium (OC edition and shortly after launch, I don't remember but I'm guessing I bought whatever was in stock). Wikipedia claims the GTX 970 launched in September 2014 at $329.
Assuming the $329 MSRP is comparable to the $499 announcement, the __70 line is up 52% in two generations. I'm sure it's better hardware, but that's a big pile of money.
And if my 970 experience holds through today, the OEM cards that are actually available are going to be pricier than the MSRP, but maybe still lower than the Founder's Edition.
We'll see where the 2060 ends up.
Does that mean the 10 series isn't worth it? By all means, no. The 9 series was still an excellent series when the 10 series was released, and people did pick them up.
The 2080 Ti FE, however, is in a league closer to the Titan X/Xp, which were at $1,200. Also, they're releasing the highest-end Ti edition at the same time as the ordinary version, which is a first for Nvidia, I think? (The Titan Xp was also launched after, not concurrently with, the 10xx series...) I think the concurrent launch of the 2080 Ti with the ordinary variant means they're positioning it more like an alternative to the Xp, while the non-Ti variants are closer to the ordinary gaming cards you'd normally get. In other words, for people willing to blow their price/perf budget a bit more.
For DL workloads the 1080 Ti is very cost effective (vs the Xp), so it remains to be seen which variant will have the better bang/buck ratio for those uses. I suspect the fact these include Tensor Cores at their given price point will probably be a major selling point regardless of the exact model choice, especially among hobbyist DL users. The RTX line will almost certainly be better in terms of price/perf for local DL work, no matter the bloated margins vs older series.
They may also be keeping prices a bit inflated, besides margins, so they can keep selling older stock and pushing it out. The 9xx series, as you said, continued to sell for a while after 10xx was released. I expect the same will be true this time around, too, especially with prices being so high.
Ehhh, yes really.
1080 FE $699 USD
2080 FE $799 USD
That qualifies as almost the same price in my book. By and large, not a 70-90% price increase as being suggested.
Unless you want to explain how 700-800 is a 70-90% price increase?
I do not get why you're being so bizarrely negative about this card. Yes, there are an enormous number of applications, for both training and inference, where an 11GB ridiculously powerful card (both in OPS and in memory speed) can be enormously useful.
That qualifies as about the same price in my book, and not a 70-90% jump.
As for the Ti price when 1080 series was released, there was none.
The 2080's price will drop eventually, until then nVidia has no reason to lower prices when they are going to be sold out for weeks if not months.
RTX 2070 cards will start at $499, with RTX 2080 at $699, and the RTX 2080 Ti starting at $999.
While a select few can push the envelope with technology alone, a bit of talent seems to easily compensate for almost any technological limitation. The "latest and greatest" is the easy route to mediocrity.
That's been true for all creative disciplines: from photography to writing to animation. There might even be a mechanism, where inferior (or at least different) tools may be a restriction that nurtures creativity, or at least guarantees results that are easily distinguished from the rest of the market.
Yeah, tell that to the Octane Render community. There's plenty of incredible starving artists using consumer GPUs in their workflow to render top-notch work.
If so, only video games are really going to use that API. I doubt that a software renderer (or CUDA-renderer) would leave raytracing / light calculations to a 3rd party.
There's a rumor that Disney bought a bunch of real-time raytracing equipment for their theme parks (the Star Wars section in Disney World). So high-quality realtime computation is needed for high-tech entertainment / theme parks / etc. So there's definitely a market, even if gamers don't buy in.
Do you have a link of one of these raytracing companies explaining how they plan to use NVidia's RTX cards for this sort of thing?
As far as I'm aware, the only API to access these RTX raytracing cores is the DirectX 12 Raytracing API. I did a quick check on Khronos, but it doesn't seem like they have anything ready for OpenGL or Vulkan.
Alternatively, maybe NVidia is opening up a new Raytracing API to access the hardware. But I didn't see any information on that either.
EDIT: It seems like NVidia does have a specialized Raytracing API that various industry partners have announced support of: https://blogs.nvidia.com/blog/2018/08/13/turing-industry-sup...
How much will be the real question here.
The RTX series succeeds the GTX series, which historically targeted computer gamers.
Nvidia users still have to pay an Nvidia tax to access expensive G-Sync monitors, which have far less availability and selection than FreeSync (which also works with Xbox).
If you are going off the inflated prices, this is hardly a price increase at all. 70% is the increase from the 1080 Ti release price, not from the inflated crypto prices.
I hope I'm joking.
This series tentatively looks like it won't be subject to the insane price hikes and scarcity of the 10 series, because it's already expensive and the added, more diverse hardware doesn't immediately help you mine Ethereum (or whatever the GPU coin of choice is these days). But you never know...
It doesn't need to _do_ anything as long as you can convince people that they should buy RayCoins because everyone else is going to buy RayCoins and they'd better get in on the ground floor here.
* <insert ray synonym here>Coin
You won't get double throughput compared to fp32, but you shouldn't fall back to some terribly slow path either.
You can separately decide which layer of cache to target for your loads and stores which convert, and then use fp32 with normalized 0-1 range math in that inner tier. You only have to introduce some saturation or rounding if your math heavily depends on the fp16 representation. The load and store routines are vectorized. You load one fp32 SIMD vector from one fp16 SIMD vector of the same size (edit: I mean same number of elements) and vice versa.
My custom shaders performing texture-mapping and blending are implicitly performing the same underlying half-load and half-store operations to work with these stored formats. The OpenGL shader model is that you are working in normalized fp math and the storage format is hidden, controlled independently with format flags during buffer allocation, so the format conversions are implicit during the loads and stores on the buffers. The shaders on fp16 data perform very well and this is non-sequential access patterns where individual 3 or 4-wide vectors are being loaded and stored for individual multi-channel voxels and pixels.
If I remember correctly, I only found one bad case where the OpenGL stack seemed to fail to do this well, and it was something like a 2-channel fp16 buffer where performance would fall off a cliff. Using 1, 3, or 4-channel buffers (even with padding) would perform pretty consistently with either uint8, uint16, fp16, or fp32 storage formats. It's possible they just don't have a properly tuned 2-channel texture sampling routine in their driver, and I've never had a need to explore 2-wide vector access in OpenCL.
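Here's a CPU-side sketch of the pattern being described (fp16 storage, fp32 math), using numpy purely as an analogy; in the OpenGL case the shader and driver perform the equivalent conversions implicitly:

    import numpy as np

    # Keep buffers in fp16 for storage bandwidth, widen to fp32 for the actual
    # math, then narrow again on store. Purely illustrative.
    stored = np.random.rand(1024, 4).astype(np.float16)   # fp16 "texture" of RGBA texels

    working = stored.astype(np.float32)                    # half-load: widen to fp32
    blended = np.clip(working * 0.5 + 0.25, 0.0, 1.0)      # normalized 0-1 range math

    stored_out = blended.astype(np.float16)                # half-store: narrow back to fp16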
BTC has gone up 45% in the past year.
1015% in 2 years.
                   2080 Ti FE     RTX 2080 Ti    GTX 1080 Ti
Price              $1,199         $999           $699
GPU Architecture   Turing         Turing         Pascal
Boost Clock        1635 MHz       1545 MHz       1582 MHz
Frame Buffer       11 GB GDDR6    11 GB GDDR6    11 GB GDDR5X
Memory Speed       14 Gbps        14 Gbps        11 Gbps
Memory Interface   352-bit        352-bit        352-bit
CUDA Cores         4352           4352           3584
TDP                260W           250W           250W
Giga Rays          10             10             ?

                   2080 FE        RTX 2080       GTX 1080
Price              $799           $699           $549
GPU Architecture   Turing         Turing         Pascal
Boost Clock        1800 MHz       1710 MHz       1733 MHz
Frame Buffer       8 GB GDDR6     8 GB GDDR6     8 GB GDDR5X
Memory Speed       14 Gbps        14 Gbps        10 Gbps
Memory Interface   256-bit        256-bit        256-bit
CUDA Cores         2944           2944           2560
TDP                225W           215W           180W
Giga Rays          8              8              ?

                   2070 FE        RTX 2070       GTX 1070 Ti    GTX 1070
Price              $599           $499           $449           $399
GPU Architecture   Turing         Turing         Pascal         Pascal
Boost Clock        1710 MHz       1620 MHz       1607 MHz       1683 MHz
Frame Buffer       8 GB GDDR6     8 GB GDDR6     8 GB GDDR5     8 GB GDDR5
Memory Speed       14 Gbps        14 Gbps        8 Gbps         8 Gbps
Memory Interface   256-bit        256-bit        256-bit        256-bit
CUDA Cores         2304           2304           2432           1920
TDP                175W           185W           180W           150W
Giga Rays          6              6              ?              ?
RTX 2080 Ti: $999
RTX 2080: $699
RTX 2070: $499
Are you sure the FE TDPs are different from the reference spec? I haven't seen that mentioned anywhere else.
A few years ago prices were mostly 1:1 - USD before taxes, EUR after taxes. So the situation got worse.
I think the main reason is not that anything makes selling the cards there more expensive, but simply that they try to charge more.
In reality there shouldn't be any, since shipping stuff across the pond costs next to nothing, as seen with products like bananas. At least I live in a place which has only 8% VAT, so prices look a bit more like the US (but still higher for no good reason).
In any case, it seems VAT is the explanation, and in that case the price carries no premium - it's even good by comparison. Still expensive.
- In the first 6 months after purchase, the merchant must replace the product unless they can prove the defect was not present at purchase.
- After 6 months, the burden of proof reverses, and the customer must prove that the defect in question was already present at purchase.
In practice, whatever party has the burden of proof usually doesn't bother. So in effect, "6-month warranty" is a much more realistic description of this 2-year warranty.
(The fine print: Many vendors offer their own voluntary warranty on top of the mandated one. And I don't know if the rules are different in other EU countries.)
In practice, I've never had to prove anything within 2 years of purchase. Might be a difference between Germany and other EU countries, but somehow I doubt that.
They benefit from the perceived sense of reliability of a 2-year warranty. Why buy a product if it's going to fail in 6 months and you can't get it replaced, when the competitor is more likely to treat you fairly?
The warranties do cost something because they add significant risk/cost against each incremental unit, I will grant you that.
Regulations like these have a disproportionate effect relative to volume. NVidia would probably have most of those things even in the absence of the regulations; a couple of guys in a garage certainly would not.
* 2080 TI FE is 260W
* 2080 TI is 250W
NVidia's lesson from the Crypto-boom seems to be: "Some gamers are willing to pay >$1000 for their cards".
EDIT: To be fair, NVidia is still on 14nm or 12nm class lithography (I forgot which one). So the increased die size of these new chips will naturally be more expensive than the 10xx series. Bigger chips cost more to produce after all. So if you fail to shrink the die, the economics demand that you increase the price instead.
Still, we all know that NVidia has fat margins. We also know that they overproduced the 1080 series during the Cryptoboom, and that they still want to sell all of their old cards. If they push the prices down too much, then no one will buy the old stuff.
If they did, 1080Ti wouldn't be "out of stock" on their website.
The fire-sales on EVGA 1080 Ti cards make it darn clear that there are too many 1080 Ti and 1080 cards out there.
Second: these RTX 2080 chips have been in the rumor mill since June, maybe earlier. The fact that NVidia delayed until now is proof enough. NVidia has been stalling on the release of the RTX series.
But there are alternative sources for what is going on. In particular:
>> Nvidia previously had forecast sales for cryptocurrency chips for the fiscal second quarter ended July 29 of about $100 million. On Thursday it reported actual revenue of only $18 million.
So it's not entirely clear who overproduced things per se, but what we DO know is that NVidia was expecting $100 million of cards to be sold to cryptominers between April and July. Only $18 million worth were sold.
In any case, it is clear that there are a lot of 10xx cards lying around right now. And NVidia clearly wants to extract as much value from current stock as possible. Pricing the 20xx series well above the 10xx series is one way to achieve that.
EDIT: Seems to be a citation from: https://seekingalpha.com/article/4182662-nvidia-appears-gpu-...
That suggests that NVidia has $82+ million worth of 10xx series GPUs lying around somewhere.
So unfortunately, NVidia can bet on a lack of competition for the near future. NVidia can always drop prices when Navi comes out (if it happens to be competitive). But it seems like they're betting that Navi won't be competitive, at least with this pricing structure.
It seems like if you want people to buy your products, letting them know about them and the features they'll support (ex: AVX512) so the hype can build is a good thing.
The latter gives your competitor the freedom to ask any price the market will accept without having to worry about a competitor undercutting this price in some near future.
At CES, AMD said that they'd only have Vega 20 coming up and it was only for the datacenter and AI. And that Navi would be for 2019.
That's like giving a blank check to your competitor, saying "Feel free to set prices anyway you want, you're not going to be bothered by us."
Most of it is dramatically lit renderings of fans and vacuous marketing-speak of course, but there's a tidbit near the bottom of the page about NVLink I find interesting.
"GeForce RTX™ NVLINK™ Bridge
The GeForce RTX™ NVLink™ bridge connects two NVLink SLI-ready graphics cards with 50X the transfer bandwidth of previous technologies."
First, you'll need CPUs that support NVLink, and that's currently limited to the not-so-consumer-oriented POWER9.
Sounds like you're assuming operational mode A when this is going to be operational mode B.
AMD needs to put Nvidia under the same heat that Intel got with the Zen architecture.
Nvidia is too comfortable, with very high prices and crazy demands (you can't use consumer cards in data centers).