The 20-x cards I suppose are OK, a big jump to signify a big change in architecture.
But now we have... 16-60? Why 16? Is this the successor to the 1060? And it's a "Ti", but there isn't a non-Ti 1660?
> As far as naming goes, and why 16 series instead of just using 11? Quite simply, we felt that from an overall architecture and performance perspective, TU116 is closer to the other TU10x parts than it is to prior generation GP10x. TU116 has most of the Turing architecture features, including the new dedicated cores for INT32 and FP16 operations, and it also has all of the new Turing shading features, including variable rate shading and mesh shading. And as you see, performance is closer to the GeForce RTX 2060 than it is to the GeForce GTX 1060. In fact just like you said, it performs closest to the GTX 1070, beating it in some games and losing some others. So we ultimately settled on 1660 Ti instead of 1160 Ti. ‘16’ is closer to 20, after all.
Or AngularJS 1.8 turning into a complete fucking mess.
An unknown number of regexes out there match Windows 95 and Windows 98 just by looking for 'Windows 9'.
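A minimal sketch of the kind of check people mean (the function name and strings here are illustrative, not taken from any real codebase):

```python
import re

# A naive legacy-OS check of the sort being described: one pattern
# that catches both Windows 95 and Windows 98 at once.
LEGACY_WINDOWS = re.compile(r"Windows 9")

def is_legacy_windows(os_name):
    """Return True if the OS string looks like Win95/98 to this naive check."""
    return bool(LEGACY_WINDOWS.search(os_name))

print(is_legacy_windows("Windows 95"))  # True
print(is_legacy_windows("Windows 98"))  # True
# The catch: a hypothetical "Windows 9" would match too, which is one
# popular explanation for why Microsoft jumped straight to Windows 10.
print(is_legacy_windows("Windows 9"))   # True
print(is_legacy_windows("Windows 10"))  # False
```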
I researched this recently, and this article did a great job of clarifying things for me.
If only marketing gurus were as clear and BS-free as these guys...
Cars are versioned by year/model, which again makes it pretty clear to understand minor/major updates. Sometimes significant updates are introduced in a model year, but generally the core features remain the same and it could still be considered an upgrade to that model.
Without a clear and intuitive versioning scheme it can be confusing and time consuming to make sense of a product line. And that gets frustrating if it keeps changing.
Tesla managed to break this trend massively, which poses a problem for things like insurance. The feature set on (say) the January 2014 Model S is very different from the December 2014 Model S, even though they technically share the same "year".
This may make finding and pricing spare parts difficult, or categorizing safety and performance metrics.
This is a case of Nvidia working with that sentiment rather than against it.
They were roasted by review sites previously for co-mingling architectures in the same numbering generation. So this time, they didn't.
TU106 was far too big of a chip to die-harvest low enough for a true volume x60 budget part.
Add in all the non-graphics acceleration hardware that needed to be cut to hit price and... Nvidia didn't feel this could be called an RTX 20xx part.
The 16xx is awkward, but it's the least bad choice.
Just look at BMW or Porsche. Their numbering systems are just as obtuse as GPUs'.
Since they started strapping turbochargers to everything down to 1.0l, the engine size comparison has become less important. If anything my 3.0l car is seen as a negative because of the higher fuel consumption.
The manufacturers are just trying to walk a fine line.
Between generations, the whole line moves up something like one level of performance, so an (X)70 should be compared against an (X-1)80 and so on.
It's not the simplest thing, but I think most people do the research once, the first time they buy a GPU, and then keep that mental model from then on.
No worries though, I decided to settle on an integrated Intel GPU. Good enough for two-year-old games.
So a win?
Or if there’s a grouping like GP describes, you can quickly ascertain which model is the one you want.
That's true, but how does it help you as a customer? Wouldn't you need to know what you needed? What is the benefit of buying an X110 over an X109?
I don't like "it's just a number" schemes, because they look like transparent attempts to get you to buy something for no reason. What's the difference between 3G and 4G? Well, they changed the number. Does that mean anything? How would you know?
In this case it's probably intended to invoke the GTX 660 in the minds of customers, since that was a very successful and well-loved card when it was released.
Not sure what you mean; BMW seems like the complete opposite of what we are discussing, since the way their cars are named is extremely functional. It used to be simpler because they had fewer models, but it has survived the changes fairly well.
For the majority of cars the first number (1 through 8) is the segment, from lowest to highest. The next two digits used to map directly to engine size, but because smaller engines are getting more powerful they've decoupled that, and it now just indicates the performance level. They then have prefixes for different types of car (M for performance, X for SUV, Z for convertible) and suffixes for details of the drivetrain (e for hybrid, i for gasoline, d for diesel). There are a few more nuances, but the bulk of the naming is this.
Their range isn't just well sorted in terms of naming; the interiors and features are highly consistent between cars as well, so the naming lets you get in a car and know pretty much exactly what to expect. If Nvidia were doing the same, we'd know everything about this card's positioning in the market just from the name (does it have ray tracing, what performance level, what market segment, etc.).
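The scheme described above is regular enough to decode mechanically. A toy sketch, heavily simplified (it ignores prefixes and the many real-world exceptions; the function and field names are made up for illustration):

```python
# Toy decoder for the simplified BMW scheme described above.
FUEL = {"i": "gasoline", "d": "diesel", "e": "hybrid"}

def decode_bmw(name):
    """e.g. '330i' -> segment 3, performance level 30, gasoline."""
    segment, level, fuel = name[0], name[1:3], name[3:]
    return {
        "segment": int(segment),          # 1 (lowest) through 8 (highest)
        "performance_level": int(level),  # historically engine size x10
        "fuel": FUEL.get(fuel, "unknown"),
    }

print(decode_bmw("330i"))
print(decode_bmw("520d"))
```

The point isn't the code itself, but that the name is a predictable encoding of segment, performance, and drivetrain, which is exactly what GPU names mostly fail to be.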
The numbering has been very logical for more than a decade: you have a product generation, and increasing numbers mean increased performance.
What is so nonsensical about that?
Over a 1060, the 1660 gives a +36% performance boost for a +12% price difference, and the 2060 gives +59% performance for +40% price. So if you just want the best performance-per-dollar ratio, the 1660 is better, but the 2060 is probably better overall if you don't mind the extra cost (and it also supports the new RTX enhancements for ray-tracing and such).
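Working out the ratios from the numbers above (these relative figures are the commenter's, with the 1060 as a 1.0 baseline, not official benchmarks):

```python
# Rough perf-per-dollar comparison, GTX 1060 = baseline 1.0
# for both performance and price.
cards = {
    "GTX 1060":    {"perf": 1.00, "price": 1.00},
    "GTX 1660 Ti": {"perf": 1.36, "price": 1.12},
    "RTX 2060":    {"perf": 1.59, "price": 1.40},
}

for name, c in cards.items():
    ratio = c["perf"] / c["price"]
    print(f"{name}: {ratio:.2f}x perf per dollar vs 1060")
```

This gives roughly 1.21x perf per dollar for the 1660 Ti versus 1.14x for the 2060, which is why the 1660 Ti wins on pure value even though the 2060 is the faster card.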
If ray tracing does become increasingly relevant, this batch of cards (and especially the 2060 at the bottom) is likely to become dated rather quickly. And as with any bleeding-edge technology, you're paying an early-adopter premium for something that is likely to see relatively large generation-over-generation improvements.
I think of RTX like the first Oculus Rift. You shouldn't buy it to be future-proof; it will be anything but. You buy it only because you're an enthusiast who wants to play with the latest tech early and plans to upgrade before it breaks or becomes obsolete anyway.
Obviously - this is partly because VR uptake has been slower than some expected, but I use a Rift daily and it still does its job admirably.
You might mean the DK1, which wasn't generally available to non-developers, though I don't think there were any stringent checks in place over who counted as a developer.
And no 4K? WTF? Shouldn't that be the standard by now?
Mine is "only" 1080p, but at 144 Hz it still makes my GTX 1080 sweat in just about every game I throw at it.
I have 2x 1080 Ti in SLI with a really nice watercooled and overclocked rig (64 GB of the fastest RAM I could buy, the best possible CPU and SSD). This was pretty much the best possible system you could have to play WoW a year ago, and I STILL get 21 fps in this week's set of Timewalking dungeons. I have spent hours upon hours tweaking settings and it usually runs fine, but this week was terrible.
WoW is CPU bound and is very, very poorly optimized for modern systems.
To be picky, those settings aren't from 2004.
The push for 4k gaming does a lot more for hardware manufacturers who want to sell the "next big thing" than it does for actual users.
In particular, 1080p just looks like blurry ass: the textures have no detail, and objects in the distance are harder to identify.
Even 4K still looks blurry compared to the real world... so I think we need 8K :)
I just got a 1060 that has 3x DisplayPort 1.4 outputs + 1x HDMI 2.0 output; it can drive four 4K displays at 60 Hz.
The 1060 is probably somewhat more power hungry and hot under load.
Now, whether it's worth another $279 to upgrade from the 1060? Probably not. It is very much worth considering if you have a card older than the 1000 series. It performs on par with a 1070, gets within spitting distance of the Vega 56 at 1080p/1440p in a bunch of games, and it's the most power-efficient card out there.