Nvidia GeForce GTX 1660 Ti 6GB Review (anandtech.com)
115 points by babak_ap on Feb 22, 2019 | 88 comments



I'm very confused by Nvidia's numbering scheme. It used to be generation-model, so my 1080 is generation 10, and "80" is an arbitrary model number where higher is better. It's the logical successor to the 9-80.

The 20-x cards I suppose are OK, a big jump to signify a big change in architecture.

But now we have... 16-60? Why 16? Is this the successor to the 1060? And it's a "Ti", but there isn't a non-Ti 1660?

I'm confused.


From NVIDIA [1]

> As far as naming goes, and why 16 series instead of just using 11? Quite simply, we felt that from an overall architecture and performance perspective, TU116 is closer to the other TU10x parts than it is to prior generation GP10x. TU116 has most of the Turing architecture features, including the new dedicated cores for INT32 and FP16 operations, and it also has all of the new Turing shading features, including variable rate shading and mesh shading. And as you see, performance is closer to the GeForce RTX 2060 than it is to the GeForce GTX 1060. In fact just like you said, it performs closest to the GTX 1070, beating it in some games and losing some others. So we ultimately settled on 1660 Ti instead of 1160 Ti. ‘16’ is closer to 20, after all.

[1] https://www.gamersnexus.net/hwreviews/3445-evga-gtx-1660-ti-...


So like USB2 becoming USB3, or Java 1.4 going to 5, or Thunderbird going from 3 straight to 5.


> So like USB2 becoming USB3, or Java 1.4 going to 5, or Thunderbird going from 3 straight to 5.

Or AngularJS 1.8 going to a complete fucking mess.


Haha. Or php 5.6 to 7


I was curious about this so I looked it up. Apparently there was a PHP 6, but it was scuttled and had some of its features backported to PHP 5.3.


USB 2->3 doesn't really help your point, since 2 comes directly before 3 in integer math, so it (3) is literally a logical successor to 2.


IIRC after USB2 came out, there was "USB Full Speed" (12 Mbps) and "USB High Speed" (480 Mbps), then they renamed "USB 2.2 High Speed" to USB3. USB3 is literally USB2.2 with a name change, so I feel like it fits.


Are you sure you don't mean that USB 1.2 was changed to USB 2.0? I don't recall any naming confusion between USB 2.0 (480 Mbit/s) and 3.x (5 Gbit/s and up).


Ah, that's probably what I was thinking of. Thanks for the correction!


...Netscape 4 to Netscape 6. Windows 8 to Windows 10. The i7-8086k processor.


Not having Windows 9 was for a very good technical reason, not because of marketing.

An unknown number of regex and string checks out in the wild detect Windows 95 and Windows 98 just by looking for 'Windows 9'.
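
A minimal sketch of that kind of legacy check, in Python (the version strings and the pattern are illustrative, not taken from any real codebase): a prefix match on 'Windows 9' catches 95 and 98, and would have misclassified a hypothetical "Windows 9" the same way.

    import re

    # Illustrative sketch of the commonly cited legacy pattern: a check written to
    # catch Windows 95/98 by prefix would also have matched a hypothetical "Windows 9".
    # The product strings below are examples, not from any specific codebase.
    LEGACY_WIN9X_PATTERN = re.compile(r"^Windows 9")

    for os_name in ["Windows 95", "Windows 98", "Windows 8.1", "Windows 9", "Windows 10"]:
        if LEGACY_WIN9X_PATTERN.match(os_name):
            print(f"{os_name}: treated as Windows 9x by the legacy check")
        else:
            print(f"{os_name}: not matched")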



Can't do that if you want to sell an overpriced 2050 Ti.


They probably want to reserve 20xx for RTX cards only.


It used to drive me crazy as well. Couldn't understand why my old 980 Ti card ranked better than many of the 10xx cards that came after it.

I researched this recently, and this article [1] did a great job clarifying things for me.

[1] https://steamcommunity.com/discussions/forum/11/627456486990...


> x80 - For high-end gamers, high budget card, for high/ultra settings and higher resolutions.

> x90 - These are pretty much just two x80 glued together with a custom cooler. For the insane users with more money than sense, overpriced / overkill card.

If only marketing gurus were as clear and BS-free as this guy...


So, perhaps it's me, but why do folks care about the number on the part? It's all marketing. Why does it matter? What matters is performance, right?


Software is versioned in numerical order which is pretty intuitive. At a glance it's clear which versions are minor updates, and which are major.

Cars are versioned by year/model, which again makes it pretty clear to understand minor/major updates. Sometimes significant updates are introduced in a model year, but generally the core features remain the same and it could still be considered an upgrade to that model.

Without a clear and intuitive versioning scheme it can be confusing and time consuming to make sense of a product line. And that gets frustrating if it keeps changing.


>Cars are versioned by year/model, which again makes it pretty clear to understand minor/major updates. Sometimes significant updates are introduced in a model year, but generally the core features remain the same and it could still be considered an upgrade to that model.

Tesla managed to break this trend massively, which poses a problem for things like insurance. The feature set on (say) the January 2014 Model S is very different from the December 2014 Model S, even though they technically share the same "year".


Tesla didn't break this any more than any other car manufacturer. The model year hasn't been tied to the manufacture year for a long time. See https://www.autotrader.com/car-news/why-doesnt-cars-model-ye... for instance.


I think the difference is more "model year no longer reliably indicates feature set", not "model year no longer indicates date of manufacture."

This may make finding and pricing spare parts difficult, or categorizing safety and performance metrics.


> Without a clear and intuitive versioning scheme it can be confusing and time consuming to make sense of a product line. And that gets frustrating if it keeps changing.

This is a case of Nvidia working with that sentiment rather than against it.

They were roasted by review sites previously for co-mingling architectures in the same numbering generation. So this time, they didn't.

TU106 was far too big of a chip to die-harvest low enough for a true volume x60 budget part.

Add in all the non-graphics acceleration hardware that needed to be cut to hit price and... Nvidia didn't feel this could be called an RTX 20xx part.

The 16xx is awkward, but it's the least bad choice.


> Cars are versioned by year/model

Just look at BMW or Porsche. Their numbering systems are just as obtuse as GPU naming.


BMW went that way very recently. It wasn't that long ago that their model numbers were very much explicit. A 328i was a 3-series chassis with a 2.8-liter engine. The i and d stood for fuel-injected and diesel respectively.


Well it still makes sense to some extent. i and d still mean petrol and diesel, with e for hybrid joining the ranks lately. But in general x1x (say 116d or 114i) are entry-level engines, x2x (320d) are mid-tier, and x3x (430d) and x4x (240i) are higher-end, more powerful engines.


That was fine back in the day when there was a correlation between engine size and performance.

Since they started strapping turbochargers to everything down to 1.0l, the engine size comparison has become less important. If anything my 3.0l car is seen as a negative because of the higher fuel consumption.

The manufacturers are just trying to walk a fine line.


No one does semantic versioning for hardware products (at least consistently), but maybe they should.


It's apples and oranges. Adobe doesn't release 5 versions of Photoshop every year with different performance levels.


It makes it hard to know what I'm buying if I can't put it in a mental hierarchy.


I agree that it's not totally obvious to new customers, but it's also not that hard: within a generation, the relative performance of the cards corresponds to the order of model numbers. "Ti" cards are more powerful than non-"Ti".

Between generations, the whole line moves up something like one level of performance, so an (X)70 should be compared against an (X-1)80 and so on.

It's not the simplest thing, but I think most people will do research once the first time they buy a GPU, and then you have your mental model from then on.
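
For what it's worth, here's a rough sketch of that rule of thumb in Python; the generation list and scoring are a simplification of the mental model, not measured performance.

    # Rough sketch of the "one tier per generation" rule of thumb: an (X)70 lands
    # roughly where the previous generation's (X)80 did. The generation order and
    # scoring are simplifications, not benchmarks.
    GENERATIONS = ["700", "900", "10", "16/20"]  # consecutive GeForce generations

    def rough_rank(generation: str, tier: int) -> int:
        """Higher rank ~ roughly higher expected performance; tier is the xx60/xx70/xx80 part."""
        return GENERATIONS.index(generation) * 10 + tier

    print(rough_rank("10", 70))   # 90 -> a GTX 1070...
    print(rough_rank("900", 80))  # 90 -> ...lands about where a GTX 980 did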


No, the mental model I have from the '90s and the '00s doesn't help me in picking a graphics card.

No worries though, I decided to settle on an integrated Intel GPU. Good enough for two-year-old games.


Because it’s nice to be able to have some idea of what the product is without having to dig into a bunch of tech specs.


By naming it 1660 instead of 2050, there is a lot more distance between the RTX/tensor core enabled products and those that are not.

So a win?


Because it’s easy to compare X110 > X109.

Or if there’s a grouping like GP describes, you can quickly ascertain which model is the one you want.


> Because it’s easy to compare X110 > X109.

That's true, but how does it help you as a customer? Wouldn't you need to know what you needed? What is the benefit of buying an X110 over an X109?

https://dilbert.com/strip/2002-06-11

I don't like "it's just a number" schemes, because they look like transparent attempts to get you to buy something for no reason. What's the difference between 3G and 4G? Well, they changed the number. Does that mean anything? How would you know?


Which may not be the model marketing wants you to want.


More than 10, less than 20. The article says as much.


Like line numbering in BASIC. Leaves room to insert.


You're not the only one. I know a couple of engineers working there. No one understands the naming scheme :-)


These numbers have always been marketing driven. Just like with cars: is there any sense to the way BMW models are named?

In this case it's probably intended to invoke the GTX 660 in the minds of customers, since that was a very successful and well-loved card when it was released.


>Just like with cars: is there any sense to the way BMW models are named?

Not sure what you mean; BMW seems like the complete opposite of what we are discussing: the way they are named is extremely functional. It used to be simpler because they had fewer models, but it's survived the changes fairly well.

For the majority of cars the first number (1 through 8) is the segment, from lowest to highest. The next two used to be directly the engine size but because smaller engines are getting more powerful they've disconnected that and it just means the performance level. They then have prefixes for different types of car (M for performance, X for SUV, Z for convertible), and suffixes for details of the drive train (e for hybrid, i for gasoline, d for diesel). There are a few more nuances but the bulk of the naming is this.

Their range isn't just well sorted in terms of naming; the interiors and features are highly consistent between cars as well, so the naming allows you to get in a car and know pretty much exactly what to expect. If Nvidia was doing the same, we'd know everything about this card's positioning in the market just from the name (does it have raytracing, what performance level, what market segment, etc).
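
As a rough illustration of how regular that scheme is, here's a toy Python decoder based only on the description above; real BMW names (M, X and Z models included) have more exceptions than this handles.

    import re

    # Toy decoder for BMW-style model names as described above:
    # first digit = segment, next two digits = performance level, suffix = fuel type.
    FUEL = {"i": "petrol", "d": "diesel", "e": "hybrid"}

    def decode(model: str) -> dict:
        m = re.fullmatch(r"(\d)(\d{2})([ide])", model)
        if not m:
            raise ValueError(f"doesn't fit the simple pattern: {model}")
        segment, perf, fuel = m.groups()
        return {"segment": int(segment), "performance_level": int(perf), "fuel": FUEL[fuel]}

    print(decode("320d"))  # {'segment': 3, 'performance_level': 20, 'fuel': 'diesel'}
    print(decode("116d"))  # {'segment': 1, 'performance_level': 16, 'fuel': 'diesel'}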


In the past BMW model names were correlated to the chassis type & the engine inside. So 530d would mean it's a 5-series chassis with a 3.0-liter diesel engine, and so on. These days, AFAIK, that's no longer the case and model numbers are a bit arbitrary.


Nvidia have used nonsensical model numbers for a long time. Remember the GeForce 4 MX?


Having to go back 15 years for an example is usually a good predictor of the strength (or better: weakness) of an argument.

The numbering has been very logical for more than a decade: you have a product generation, and increasing numbers mean increased performance.

What is so nonsensical about that?


What matters are the specs, not the generation/age of the card.


It’s the new 1080p card since the 2060 can handle 1440p


This is the reason I made a full AMD config ten years ago: I could understand the numbering scheme of the CPU and GPU. Higher is better, and for processors more expensive models have at least the instruction sets supported by cheaper ones. With Intel, not so much, and I had to check too many CPUs for virtualization instructions.


Isn't the Nvidia numbering scheme still "higher is better"?


The NVENC hardware encoder in Turing is actually comparable to x264 at fast / veryfast. This card will be quite interesting to Twitch streamers as it opens up the possibility to stream without CPU impact for those with a limited budget.


A lot of people are comparing RTX's NVENC to x264 medium. Favorably, I might add: https://www.youtube.com/watch?v=-fi9o2NyPaY


I think some charts with SSIM measurements at various bitrates and resolutions etc. would be way more informative than a youtube video.
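
For anyone who wants to generate those numbers themselves, something along these lines would work, assuming an ffmpeg build with both libx264 and h264_nvenc; the file names and bitrate list are placeholders.

    import subprocess

    # Sketch: encode a reference clip at several bitrates with NVENC and x264,
    # then use ffmpeg's ssim filter to score each encode against the source.
    # Assumes ffmpeg with libx264 and h264_nvenc; paths/bitrates are placeholders.
    SOURCE = "reference.mp4"
    ENCODERS = {
        "nvenc": ["-c:v", "h264_nvenc"],
        "x264_veryfast": ["-c:v", "libx264", "-preset", "veryfast"],
    }
    BITRATES = ["3000k", "4500k", "6000k"]  # typical streaming range

    for name, codec_args in ENCODERS.items():
        for bitrate in BITRATES:
            out = f"{name}_{bitrate}.mp4"
            subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *codec_args, "-b:v", bitrate, "-an", out], check=True)
            # Compare encode vs. source; the ssim filter prints its summary to stderr.
            result = subprocess.run(
                ["ffmpeg", "-i", out, "-i", SOURCE, "-lavfi", "ssim", "-f", "null", "-"],
                capture_output=True, text=True)
            ssim_lines = [l for l in result.stderr.splitlines() if "SSIM" in l]
            print(name, bitrate, ssim_lines[-1] if ssim_lines else "no SSIM output")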


Is the hardware encoder the same in every Turing-based card, or should we expect different performance when comparing ultra high end to mid range? (2080 Ti vs 1660 Ti)


It appears so far that the NVENC chip has been the same in all the RTX 20xx cards. I haven't looked into whether that is the case with the 1660 Ti, but I wouldn't be surprised if it was.


The question is: with prices returning to sanity, is it better bang for your buck to get a 1660 or a 2060? Seems like a $100 difference. But the 2060 still seems like an amazing value today.


It's a $70 difference, and the last page of the article has a chart that includes comparing a 1060 against both a 1660 and a 2060.

Over a 1060, the 1660 gives a +36% performance boost for a +12% price difference, and the 2060 gives +59% performance for +40% price. So if you just want the best performance-per-dollar ratio, the 1660 is better, but the 2060 is probably better overall if you don't mind the extra cost (and it also supports the new RTX enhancements for ray-tracing and such).
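
Putting those percentages into a quick performance-per-dollar calculation (using the deltas quoted above with the 1060 as the baseline; street prices vary, so treat these as ballpark figures):

    # Rough performance-per-dollar comparison using the deltas quoted above,
    # with the GTX 1060 as the 1.0 baseline. Ballpark figures, not exact values.
    cards = {
        "GTX 1060 (baseline)": (1.00, 1.00),
        "GTX 1660 Ti":         (1.36, 1.12),  # +36% performance, +12% price
        "RTX 2060":            (1.59, 1.40),  # +59% performance, +40% price
    }

    for name, (perf, price) in cards.items():
        print(f"{name}: {perf / price:.2f}x performance per dollar vs. the 1060")
    # -> 1660 Ti ~1.21x, 2060 ~1.14x: the 1660 Ti wins on ratio, the 2060 on absolute performance.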


It comes down to whether you think RTX/raytracing will go from being a novelty to being a Must Have within the next few years. If it doesn’t, the 1660 would be the right choice, you’ll save some money now and can always get RTX in your next card if it makes sense then. On the other hand, if it does, getting the 2060 today will spare you having to buy a new card sooner than you normally would just to get RTX support.


But arguably, the 2060 is already the wrong card for the future of ray tracing. The current batch of cards is barely able to handle the limited ray-tracing features in the few games it's available in: i.e. one-bounce global illumination to a single light source, or reflections, or ray-traced shadows.

If ray tracing does become increasingly relevant, this batch of cards (and especially the 2060 at the bottom) are likely to become dated rather quickly. Also as with any bleeding-edge technology, you're paying an early-adopter premium for something which is likely to see relatively large inter-generation improvements.


If ray tracing becomes widespread in games, my bet is you'll need a new card to run that ray tracing with a good enough visual fidelity and high performance anyway. This is just the first generation.

I think of RTX like the first Oculus Rift. You shouldn't buy it to be future-proof, it will be anything but; you buy it only because you're an enthusiast who wants to play with the latest tech early and plans to upgrade before it breaks or becomes obsolete anyway.


Being fair - the Rift hasn't been superseded and there's not really anything imminent that would make it obsolete. So someone buying it on release has had a pretty good run.

Obviously - this is partly because of VR uptake being slower than some expected, but I use a Rift daily and it still does its job admirably.


I'm talking about the first version that consumers could buy, which a colleague brought to work in 2013 I think. It had a low resolution.


The first consumer Rift is the same one currently being sold.

You might mean the DK1 - which wasn't generally available to non-developers but I don't think there were any stringent checks in place over who was a developer.


Why do they benchmark at 1080p? Can't every card under the sun run games at 60fps at such a low resolution?

And no 4K? Wtf? Shouldn't that be the standard now?


It seems to me that while 4K may be standard in other fields (e.g. media, movies), in gaming many prefer to trade resolution for higher FPS [1]. The middle ground (1440p) seems to be quite popular lately among 'normal' gamers, but I think that in competitive gaming many still prefer to stay on 1080p and achieve very high FPS. Competitive gamers also do a lot of other 'strange' things; for example, I saw people who select a 4:3 aspect ratio and stretch it over 16:9 instead of running 16:9 natively [2]

[1] https://www.digitaltrends.com/computing/steam-july-survey-re...

[2] https://medium.com/@lurppis/the-16-9-vs-4-3-aspect-ratio-arg...


1080p benchmarks are helpful for those of us on 144hz monitors at least

Mine is "only" 1080p, but at 144hz it still makes my GTX 1080 sweat in just about every game I throw at it


Definitely not every card under the sun. I have a 1060 and it can't run WoW (a game from 2004) at 60 fps at 1080p unless I lower some settings.


>Definitely not every card under the sun. I have a 1060 and it can't run WoW (a game from 2004) at 60 fps at 1080p unless I lower some settings.

I have 2x 1080 Ti in SLI with a really nice watercooled and overclocked rig (64GB of the fastest RAM I could buy, best possible CPU and SSD). This was pretty much the best possible system you could have to play WoW a year ago and I STILL get 21fps in this week's set of Timewalking dungeons. I have spent hours upon hours tweaking settings and it usually runs fine, but this week was terrible.

WoW is CPU bound and is very, very poorly optimized for modern systems.


Well, in your case it's definitely a CPU bottleneck. In my case I just gradually increased settings and when I'm turning antialiasing to highest possible setting, my FPS drops a bit below 60 (to 50+) and GPU is loaded by 100% according to task manager, so I suppose that it was a GPU bottleneck. I'm playing with mostly ultra settings but not highest antialiasing, it provides 60 FPS for most single-player scenarios and GPU is loaded at 80-90%. Of course in raids and dungeons FPS might drop below 30 and that is CPU bottleneck indeed, can't do anything here. My friend recently bought 9900K and 2080 and experiencing FPS drops in raids, it's almost funny.


I get 200 fps in the open world with everything on ultra and view distance set to max. I get 30-40 in certain areas like Org. The issue is that the timewalking dungeons from BC are especially old and there was something odd going on with them this week - almost everyone I ran with complained of a similar issue.


That sounds like a CPU bottleneck, or else very poor optimization. I cannot imagine WoW is that demanding on a GPU.


There's a CPU bottleneck, but there are also intense GPU requirements for high settings in some scenarios. At least according to Task Manager the GPU was loaded at 100%, so I suppose it was a GPU bottleneck.


It is when you're in a raid and there are dozens of spells going off all around you. They have however made some tweaks recently, but it's still not incredibly well optimized.


Sounds like particle system batching and/or overdraw issue. Probably this could be solved by more intelligent batching and culling, or by throttling particle count/resolution when the scene gets too busy.
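
Something like the sketch below illustrates the throttling part, assuming an engine that tracks live particle counts per frame; all the names and numbers here are hypothetical, not anything from WoW's actual engine.

    # Hypothetical sketch of budget-based particle throttling: when the frame is
    # already busy, scale down how many new particles each effect may spawn.
    FRAME_PARTICLE_BUDGET = 20_000

    def allowed_spawn(requested: int, live_particles: int) -> int:
        """Scale the requested spawn count by how much of the budget remains."""
        remaining = max(FRAME_PARTICLE_BUDGET - live_particles, 0)
        scale = remaining / FRAME_PARTICLE_BUDGET
        return int(requested * scale)

    print(allowed_spawn(500, live_particles=2_000))   # quiet scene: 450 spawned
    print(allowed_spawn(500, live_particles=18_000))  # busy raid fight: 50 spawned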


You've solved it!


> lowering some settings

To be picky, those settings aren't from 2004.


To be honest, I think 4K is more relevant in terms of benchmarks than it is in real-world impact. At the monitor sizes most people are using for gaming, it's barely discernible from 1440p.

The push for 4k gaming does a lot more for hardware manufacturers who want to sell the "next big thing" than it does for actual users.


I have a 4k monitor, and to me games look much better running in 4k than 1080p or 1440p..

Particularly 1080p just looks like blurry ass, the textures have no detail, objects in the distance are harder to identify--

Even 4k still looks blurry compared to the real world..so I think we need 8k :)


For FPS games the best experience is still at 1080p with 120Hz+ display IMO. Once you try it you can't go back to 60Hz, it legitimately feels choppy.


> [...] $279 for a xx60 class card, and which performs like a $379 card from two years ago.


...with 2GB less memory


1440 @ 144Hz seems to be the sweet spot (at least for now).


Looks like 1440p gaming at a reasonable budget is finally here. Now we just need some good (non-curved) Nvidia-approved adaptive sync monitors at 32" 1440p IPS.


For the price, I'm kind of skeptical that it will outperform a previous-generation GeForce 1060, which can be purchased for $259 to $299 in various places.

I just got a 1060 that has 3x DisplayPort 1.4 outputs + 1x HDMI 2.0 output; it can drive four 4K displays at 60 Hz.

The 1060 is probably somewhat more power hungry and hot under load.


I'm sorry, what is there to be skeptical of? There are plenty of reviews with hard data and measurements. This isn't one of those things where something stops being true just because you refuse to look.

https://www.guru3d.com/articles-pages/msi-geforce-gtx-1660-t...

Now, whether it's worth another $279 to upgrade from the 1060, well, probably not. It is very much worth considering if you have a card older than the 1000 series. It performs on par with a 1070, gets within spitting distance of the Vega 56 at 1080p/1440p in a bunch of games, and it's the most power efficient card out there.


Yes, what I meant is that for a person who bought a 1060 two months ago, it's not worth spending the same $279 again to upgrade.


Oh definitely. The 1060 is plenty performant for modern games (except for pathological cases like Anthem) and somebody who owns one shouldn't be in any hurry to get the 1660. The 2060 is a better candidate for that, but pricey.


Which makes it particularly sad the Galax SNPR 1060 eGPU didn't see a wider release. I have one, it's great.


"Looking at the numbers, the GeForce GTX 1660 Ti delivers around 37% more performance than the GTX 1060 6GB at 1440p, and a very similar 36% gain at 1080p"


Benchmarks start on page 5.



