AMD Radeon VII Review: An Unexpected Shot at the High End (anandtech.com)
92 points by gbrown_ 15 days ago | 81 comments



EDIT: If the chart in the Anandtech review is correct, I have to take back what I wrote below. I didn't know the AMD Radeon Instinct MI50 (6.7 TFLOPS FP64, 16 GB of memory with ECC, and also 1 TB/s of bandwidth) only costs 1,000 USD; I expected it to be priced closer to the FirePro cards. So if you don't need active cooling, this seems like an even better choice.

Good that AMD walked back their decision on FP64, which means the card has 3.5 TFLOPS.

The only real competitor in the consumer space seems to be the Nvidia Titan V, which has 6.9 TFLOPS, 12 GB of memory at 652.8 GB/s, and costs 3,000 USD.

The Radeon VII has only half the FP64 TFLOPS, but 16 GB at 1,024 GB/s, and costs only 700 USD.

I don't have a use case for double precision but I'm sure people who need it will like the card.


The only disappointment is that AMD's software stack is very lacking for compute. While Nvidia commands absurd prices for the 2080 Ti with a mere 11 GB of GDDR6, Radeons with 16 GB of HBM2 are closer to the memory spec of a V100.

OpenCL, while being a much cleaner API, suffers badly from the lack of quality libraries. The requirements aren't even that large for the killer application of DL: BLAS, a few convolution kernels, and random number generation. Darknet's (of YOLO fame) interface to CUDA, for instance, is composed of fewer than 10-15 functions.
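
To make that concrete, here's a rough sketch (hypothetical names, not any real library's API) of the kind of surface area a Darknet-style backend actually needs from a vendor:

    #include <cstddef>

    // Hypothetical interface sketch -- roughly the dozen primitives a
    // Darknet-style DL framework calls into a GPU backend for.
    struct GpuBuffer;  // opaque device allocation

    GpuBuffer* gpu_alloc(std::size_t bytes);
    void gpu_free(GpuBuffer* buf);
    void gpu_upload(GpuBuffer* dst, const float* src, std::size_t n);
    void gpu_download(float* dst, const GpuBuffer* src, std::size_t n);

    // One GEMM covers fully-connected layers and, via im2col, convolutions too.
    void gpu_sgemm(bool transA, bool transB, int m, int n, int k,
                   float alpha, const GpuBuffer* A, int lda,
                   const GpuBuffer* B, int ldb,
                   float beta, GpuBuffer* C, int ldc);

    // A few element-wise kernels plus an RNG round out the list.
    void gpu_axpy(int n, float alpha, const GpuBuffer* x, GpuBuffer* y);
    void gpu_activate(int n, GpuBuffer* x);  // e.g. (leaky) ReLU
    void gpu_im2col(const GpuBuffer* img, int c, int h, int w,
                    int ksize, int stride, int pad, GpuBuffer* cols);
    void gpu_rand_uniform(GpuBuffer* x, std::size_t n, unsigned long long seed);

Ship solid, well-tuned implementations of roughly that list and most DL frameworks could be ported.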

AMD's efforts in this regard have not been very satisfactory, IMO. In fact, third-party work such as CLBlast is of much higher code quality than AMD's own clBLAS (or was, at least). Sadly, over the past few years AMD seems to have come under the belief that building a transpiler from CUDA code to AMD's would be the magic bullet. It remains to be seen whether this will suffice, but it does seem like a quixotic solution to a straightforward problem.

Had such foundational software libraries (which, mind you, require a lot of hardware-tailored optimization) been perfected over the past few years (even while being multiple years behind NVDA), they could easily have turned the game, more so considering NVDA's shady tactic of banning GTX/RTX cards in the cloud.

I don't know what went wrong. It's not like AMD didn't have engineers working on ML applications; the direction seems misguided. AMD's engineers (unlike Nvidia's) seem to have focused on porting frameworks onto low-quality libraries rather than securing the foundation. I'm afraid the ROCm effort will go the same way.


I disagree with OpenCL being a "clean" API. CUDA's API is far simpler than OpenCL's, if only because you don't have to redeclare structures between your host code and device code.

Any "struct" or any other data you pass between CPU and GPU is automatically compiled by CLang / CUDA, and works on both. OpenCL on the other hand is... C-only (CUDA has C++), requires double-speak and has a compiler in the lol-device code.

So compiler bugs are fixed by updating the AMD device drivers on the client. It's... horrifying. That is no way to build libraries. In contrast, CUDA compiler bugs are fixed by updating the programmer's compiler.
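
To illustrate the struct point, a minimal single-source CUDA sketch (illustrative only, built with nvcc); host and device see the same definition, so nothing has to be kept in sync by hand:

    #include <cstdio>

    // One struct definition, visible to both host and device code.
    struct Particle { float x, y, z, mass; };

    __global__ void nudge(Particle* p, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) p[i].x += 1.0f / p[i].mass;
    }

    int main() {
        const int n = 1024;
        Particle* p;
        cudaMallocManaged(&p, n * sizeof(Particle));  // visible to CPU and GPU
        for (int i = 0; i < n; ++i) p[i] = {0.f, 0.f, 0.f, float(i + 1)};
        nudge<<<(n + 255) / 256, 256>>>(p, n);
        cudaDeviceSynchronize();
        printf("p[0].x = %f\n", p[0].x);
        cudaFree(p);
    }

In OpenCL 1.2 the kernel lives in a string compiled at runtime by the driver, and the struct has to be re-declared (and layout-matched) inside that string.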

> AMD's efforts in this regard have not been very satisfactory, IMO. In fact, third-party work such as CLBlast is of much higher code quality than AMD's own clBLAS (or was, at least). Sadly, over the past few years AMD seems to have come under the belief that building a transpiler from CUDA code to AMD's would be the magic bullet. It remains to be seen whether this will suffice, but it does seem like a quixotic solution to a straightforward problem.

I happen to agree with this. AMD's CUDA transpiler is built on top of AMD's HCC efforts (a C++ single-source environment). The code for HCC is far cleaner than anything I've ever seen in OpenCL, and is mostly compatible with CUDA (thanks to HIP).
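
For anyone who hasn't seen HIP code, here's a minimal sketch (built with hipcc); it's nearly symbol-for-symbol CUDA, which is what the hipify tooling relies on:

    #include <hip/hip_runtime.h>
    #include <cstdio>

    // Same single-source model as CUDA: kernel and host code in one file.
    __global__ void scale(float* v, float s, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) v[i] *= s;
    }

    int main() {
        const int n = 1 << 20;
        float* d = nullptr;
        hipMalloc((void**)&d, n * sizeof(float));   // cudaMalloc -> hipMalloc
        hipMemset(d, 0, n * sizeof(float));
        hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0,
                           d, 2.0f, n);
        hipDeviceSynchronize();
        hipFree(d);
        printf("done\n");
    }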


Could not agree with you more - tracking the AMD compute story has been painful. Talk of upstream TensorFlow/Torch support has been going on forever. Reading between the lines, OpenCL seems to no longer be the favored way to do compute, so it's unclear, if I were starting today, what the canonical forward-looking path is for writing compute on AMD cards.


> if I were starting today, what the canonical forward-looking path is for writing compute on AMD cards

HCC and HIP seem to be the canonical compute path forward, with ahead-of-time-compiled OpenCL supported as well.

There are some "#pragma omp target" tests in the GitHub repos, so AMD is clearly working on that as well. If HCC gets OpenMP offload support, it will be significantly easier for typical programmers to write GPU code.
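
The appeal is that OpenMP offload is just annotated C++; a rough sketch of the kind of loop those tests exercise (assuming a toolchain with offload enabled):

    #include <cstdio>

    int main() {
        const int n = 1 << 20;
        float* a = new float[n];
        for (int i = 0; i < n; ++i) a[i] = 1.0f;

        // With offload support this loop runs on the GPU; without it,
        // the directive silently falls back to the CPU.
        #pragma omp target teams distribute parallel for map(tofrom: a[0:n])
        for (int i = 0; i < n; ++i)
            a[i] = a[i] * 2.0f + 1.0f;

        printf("a[0] = %f\n", a[0]);
        delete[] a;
    }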

--------

If I were starting a project today, it'd be OpenCL 1.2 on the AMDGPU-PRO drivers (the old proprietary driver is more stable for sure), but with a careful eye on HCC/HIP advancements. HCC and HIP have gotten a huge amount of attention and GitHub commits in the past year, so it's clearly where AMD's main effort is going.


Vulkan is actually intended as a stack for compute, not just graphics. That's probably our best chance for a cross-platform approach to GPU compute that can work roughly equally well on NV, AMD and the upcoming Intel GPU hardware.


The compute performance was quite low for 3.5 TFLOPS; in F@H it was even beaten by a 2070... I am wondering if it's drivers/software holding AMD back again, like with deep learning (15 TFLOPS in theory, ~7.5 in practice, etc.).


NVidia's Turing cores are clearly more dynamic than AMD's.

There are also driver issues for sure. ROCm and AMDGPU-PRO (two different OpenCL implementations) have numerous performance differences. ROCm 2.1 is still absolutely awful at Blender, for example (definitely use the AMDGPU-PRO drivers for that instead).

In some cases, I bet on Amdahl's law. Nvidia has fewer shaders at a higher clock than AMD. AMD's architecture is implicitly more parallel than Nvidia's, so Amdahl's law will hurt it (AMD's 4096 shaders on the Vega 64 will use more RAM than Nvidia's 2304 CUDA cores on the 2070).
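
Back-of-envelope (the 5% serial fraction below is purely illustrative): with more but slower shaders, the same serial fraction caps you harder.

    #include <cstdio>

    // Amdahl's law: speedup on N units = 1 / (s + (1 - s) / N), s = serial fraction.
    double amdahl(double s, double n) { return 1.0 / (s + (1.0 - s) / n); }

    int main() {
        double s = 0.05;  // assumed serial fraction, illustrative only
        printf("2304 units: %.1fx\n", amdahl(s, 2304));  // ~19.8x
        printf("4096 units: %.1fx\n", amdahl(s, 4096));  // ~19.9x
        // Both saturate near 1/s = 20x, so the extra shaders buy almost nothing
        // unless the serial fraction is tiny.
    }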

In others, it seems like AMD's drivers are still unoptimized. Something weird is going on with Blender's code and ROCm for example.

--------

EDIT: I've seen some profiler data, and it seems like AMD's GPUs are rarely compute-ALU bound. They are often memory-bound. So AMD has far more compute than the graphics problems at hand need. Alternatively, AMD's algorithms may be poorly tuned: they should probably be more compute-heavy and try to use less memory bandwidth.
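
Rough roofline arithmetic for why "memory-bound" is plausible here (the peak numbers are approximate assumptions):

    #include <cstdio>

    int main() {
        // Assumed peaks, roughly in line with this card: ~13.8 TFLOPS FP32, ~1 TB/s HBM2.
        double peak_flops = 13.8e12;
        double peak_bytes = 1.0e12;

        // A kernel only becomes compute-bound once its arithmetic intensity
        // (FLOPs per byte moved) exceeds this ridge point.
        printf("compute-bound above ~%.1f FLOP/byte\n", peak_flops / peak_bytes);
        // Many graphics passes sit well below ~14 FLOP/byte, matching the
        // profiler observation that the ALUs are rarely the bottleneck.
    }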


3.5 Tflops at double precision. The applied math/scientific computation people are going to love this.


I was talking about FP64 ofc.


Think Anandtech has made a mistake with the price of the Instinct card.


For me, $700 is still high end.

Not long ago the high end was above $500, and prices have climbed since then thanks to cryptomining.

With $200-300 you could buy a modest graphics card that would last three years playing top titles. Now, in that price range, you can keep up for maybe a year and then it's completely trash.


I think an RX 580 or an Nvidia 1060 is still perfectly great for modern gaming and can be picked up for $200 or less.


Recently bought an RX 580 for 200 euro. Amazing value for the price. For 1080p gaming I still haven't found any titles that make it feel slow. GPU price-to-performance has essentially always gotten exponentially worse toward the high end, so the 1060 and RX 580 are the current sweet spot.


Yep, used 480s/580s are an incredible deal right now. And perfectly adequate even for higher resolutions if you know what you're doing (disable MSAA, SSR, etc., depending on the game) and aren't always playing the latest, most demanding, overrated AAA stuff.


Heck I'm doing fine with an RX 380. I was expecting it to be a problem by now, but I still haven't found something I can't play if I'm willing to fiddle with the settings.


I have an RX 390 and it is certainly an issue on most modern games.

If I set a game to low or medium, I can probably maintain 30-40FPS.

The idea of achieving a constant 60fps @ 1080p with this RX 390 on any graphically intensive game made in the last 3-4 years is a pipe dream though, even at low graphical settings. Monster Hunter World is an example of a title which was made for better hardware. On the other hand, Assassin's Creed Odyssey runs pretty well.

I've just gotten used to decent graphics at 30-40fps, and I can eke out a mostly constant 60fps on games like Rocket League where it matters (with only some dips).


I guess it depends on what your definition of "an issue" is. Mostly I don't measure FPS at all, and I will only adjust settings if the frame-rate is perceptibly choppy.

So for example I recently installed Far Cry 5, and since it has an FPS counter in the corner by default I know it runs between 40-50 FPS at 1080p, with nice effects like volumetric lighting. For me that's fine.


I use an 8GB 580 for VR gaming, and it is phenomenal. Every game at at least 1.5x supersampling, maintaining 90fps, even while running hardware-encoded video capture in the background.


The RX 580 8GB is good. The 4GB version lags behind a little -- VRAM spills into system RAM regularly.

I assume the 1060 3GB has a similar problem.

source: I have a 4GB and am currently saving up to upgrade.

Edit: also, the RX 580 4GB generally runs at a lower clock than its 8GB counterpart, which affects performance as well.


Which titles are using more than 4GB these days?


https://techreport.com/r.x/2019_02_06_AMD_s_Radeon_VII_graph...

Seems to be a VRAM-based calculation. There have been some YouTubers looking into the "pop-in" effect on GPUs with less VRAM. Basically, GPUs will render with a lower-resolution texture level if the full texture doesn't fit in VRAM. So the failure case is pretty graceful these days and hard to notice.

Basically, the mountain range in the background may only be a 2k texture, but may pop into a 4k texture after a bit. Because the 2k texture is a mipmap of the 4k texture, it's actually hard to notice. So you can stretch a lower-VRAM card further these days than you'd expect.

Another thing: just because VRAM is allocated doesn't necessarily mean it is being used. If you're staring at a wall in a game, most of the loaded textures are going unused. Taken to the extreme: there are people and objects "behind" a lot of geometry that you'll never see, but that are still resident in VRAM. As such, a game may allocate 9GB total but not really need the last 2 or 3 GB. So even if a game reports 9GB or 10GB of usage, you may get by with an 8GB card.
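
Back-of-envelope on the mipmap point (uncompressed RGBA8, which overstates real usage but shows the ratio):

    #include <cstdio>

    int main() {
        auto mib = [](double bytes) { return bytes / (1024.0 * 1024.0); };
        double base4k = 4096.0 * 4096.0 * 4;   // bytes for the 4k level
        double base2k = 2048.0 * 2048.0 * 4;   // bytes for the 2k level
        // A full mip chain is ~4/3 of the base level, since each level is 1/4 the one above.
        printf("4k + mips: ~%.0f MiB\n", mib(base4k * 4.0 / 3.0));  // ~85 MiB
        printf("2k + mips: ~%.0f MiB\n", mib(base2k * 4.0 / 3.0));  // ~21 MiB
    }

So dropping just the top level of a texture frees roughly three quarters of its memory, which is why the pop-in fallback stretches small VRAM so far.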

-------------

In any case, we're long past the 4GB VRAM mark. Practically speaking, games want 6GB at least and may break 8GB in the near future, IMO. 8GB seems to be sufficient for now, though (thanks to pop-in, VRAM over-allocation, etc.).


Destiny 2, for one, will use over 4GB of VRAM on high settings. The new Resident Evil easily goes over 4GB of VRAM on high settings.

Apex Legends recommends an 8GB card because on max textures it will use more than 4GB.

There are a lot of modern titles that will allocate/use more than 4GB of VRAM.


>it's completely trash

Only if you're trying to get 140fps in Battlegrounds or something. An Nvidia 1060 will play almost anything just fine unless you're a competitive twitch gamer or want to play at 4K.


not really. imo.


> Not long ago the high end was above $500

In the meantime "gamers" went from 30fps/720p to 120fps/4k.

Just like new entry-level cars are more expensive than 40 years ago, the baseline has moved much higher (safety, speed, standard equipment).


Don't forget inflation.


I'd settle for 60-70fps at 1080p on max settings.


>With $200-300 you could buy a modest graphics card that would last three years playing top titles.

As someone who has fallen out of the arms race thanks to the perfectly serviceable GTX 970 I bought just a hair over four years ago for ~$300, I am frankly appalled by the lack of value in the GPU market today. At the time, that $300-350 card was decidedly high end in that it was the best 1080p gaming card on the market, with the 980/980 Ti only making sense for those playing at higher resolutions. There still isn't anything on the market worth upgrading for (~30-50% faster in the same price bracket) four years later, and that is not a good thing for the PC gaming market in the long term.

The idea that one has to spend $300 to play games above medium settings at the resolutions found on midrange monitors is crazy to me. I got by with $100-175 cards (HD 4850, HD 5770, HD 6950) for years before making the plunge into the high end for VR, but I feel like that segment of the market is no longer viable in the same way those cards were.


In 1999 the high end was $300; that's what 3dfx and Nvidia charged for the Voodoo3 3500 and GeForce 256 DDR. $200 got you a perfectly capable mid-range card, $100 was still enough for decent 3D acceleration (Voodoo2/Nvidia Vanta, OK for contemporary Quake 2), and $30 got you a VGA-outputting piece of turd.

Today the slowest OK cards start at $300 :(


>> Today the slowest OK cards start at $300 :(

I'm sorry, but that's absolute nonsense. You can get a brand new GTX 1050 from Newegg for $129: https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&D...

And that's an absolutely "OK" card. It will play every new game at 1080p and medium settings. I'd argue that you could go sub-$100 with a GT 1030 and still play anything if you are willing to drop settings to low or the resolution to 720p.


Let's check 1999 cards in Quake 2. The top-of-the-line $300 Voodoo3/GeForce 256 delivered 80-100 fps at 1024x768. The Voodoo3 2000 was $100 in November 1999 and delivered a solid 60 fps at 1024x768, and a used Voodoo2 could be had for $30 and still ran the game at 60fps, albeit only at 800x600.

Wolfenstein 2 is a two-year-old game at this point, and runs at 35-60 fps on a GTX 1050 with a fast CPU, and more like a constant 35 at medium settings with a weak one. You have to drop down to 1600x900 for >40fps.

I guess GPU prices have dropped since the last time I checked; the RX 580/GTX 1060, both delivering playable framerates in current games, actually start around $200 now, so I'll give you that :)


Also don't forget that $300 USD in 1999 would be equivalent to ~$450 in 2019 dollars.


I look at it from a slightly different, subjective perspective.

Entry-level CPUs. In August 1998 Intel started selling the legendary (to overclockers) $149 Celeron 300A. By 1999 entry level had moved to $100 for ~300MHz Celerons/K6-2s. In 2000 entry level moved to a $69 433MHz Celeron, a $75 533MHz Cyrix III, and a $112 600MHz Duron. Top-of-the-line CPUs were around $1K at the time.

Today's entry-level CPUs are still around $60: https://www.anandtech.com/show/13660/amd-athlon-200ge-vs-int...

Same goes for motherboard prices. Even RAM stayed in the same price bracket: https://jcmit.net/memoryprice.htm. 1999 computers came standard with ~$100 of RAM (128MB of SDR); today's standard is ~$100 for 2x8GB of DDR4. Somehow everything got cheaper except usable GPUs.


I just continue to disagree with your premise that somehow "usable" GPUs have gotten more expensive. A GTX 1050 will play literally any title currently on the market at 1080p at a locked 30fps. You mention that older cards could play games at a locked 60fps, but the issue here isn't that you get less bang for your buck; it's that the goalposts have moved. 99% of console games run at 30fps, and the vast majority of gaming is on consoles, not PC. Therefore, I think it's fair to say that 30fps gaming is the norm, compared to the 60fps gameplay of the 8/16-bit systems of yesteryear.

And then on top of all of this, the majority of CPUs on the market come with a "competent" GPU, so the effective cost of a GPU for most PC users is... zero. Sure, they won't play Anthem, but a new Intel or AMD integrated GPU will easily play 3-4-year-old games very smoothly. For a lot of people that is more than enough. I'm personally sinking hundreds of hours into MMOs like WoW or ESO and they play fine on integrated GPUs. The cost of a GPU here is zero.

So I'd argue that the average cost of a GPU per PC user has gone down, not up. It's just that the top end has exploded: you can have multiple 4K displays, RTX, 32x antialiasing, nonsense like HairWorks, etc. But your regular PC player on average has a 1080p monitor and is content with playing at medium/high settings. They can absolutely do that without spending lots of money.

As a personal example: I have a living-room PC with an i5-2400 + 8GB of DDR3-1066 + a GTX 1050 Ti. That CPU is literally 8(!!!) years old. And yet this machine can play Forza Horizon 4, a brand new title, at 1080p and a locked 60fps on high settings. Granted, that's the 1050 Ti, not the 1050, but the 1050 is not that far behind: drop the settings to medium and you're playing at a locked 60fps, or keep them high and you'll play at a locked 30fps.


I have an 8GB 580 and it seems likely to last me a good while. It replaced an HD 7770 which I was fine with for something like 5 years, give or take. I think you can still spend sub-$500 and have it last ages easily. (Though I only do occasional, moderate gaming and use Linux as my day-to-day desktop.)


This is why I prefer a console for gaming, and also not having to put up with a full version of Windows. In my country, due to the crypto craze, the cost/performance of PC gaming hardware is far worse than a console's.


It seems that this has a better process (7nm vs. 12nm), more memory (and bandwidth), and more power draw than the RTX cards.

So why does it have equal/lower performance? Is it just that it has fewer transistors (13.2B vs. 18.6B)? (And if so, why doesn't the smaller process allow more transistors?) Or is there also a difference in the architecture where a better design could get more performance out of the same amount of silicon?


There are so many variables. A few that you didn't mention in your comment are the number of shaders/cores and the core frequency.

However, there's one other thing to consider for games. Essentially every big video game has a special profile in the Nvidia drivers that optimizes performance for that specific game. Gaming card reviews almost always use popular AAA games for benchmarking. It's possible, and even likely, that each was developed with Nvidia cards in mind, in coordination with Nvidia to optimize those drivers.


It depends on the game; many games (especially older ones) would say POWERED BY NVIDIA or whatnot. Usually specific series get special treatment: Hitman goes AMD but The Witcher goes Nvidia, or whatnot. To my knowledge that generally means that a) they're using HairWorks or something, and b) they likely got to work with that company for extra optimization.

Then I believe the drivers also optimize games without the developers' involvement (but I would assume this isn't as effective as working directly with the company).

But generally certain games "work better" on AMD or Nvidia based on whoever they worked with or built for; however, the majority are "better" on Nvidia for various reasons.


In the case of Nvidia it also means the game will force certain features known to not work as well on AMD, like tessellation in Crysis 2 going down to many triangles per pixel just to make sure AMD cards struggle (https://techreport.com/review/21404/crysis-2-tessellation-to...), or invisible tessellated water under the main geometry (https://techreport.com/review/21404/crysis-2-tessellation-to...). Other times Nvidia will "help" a developer drop a DirectX feature it doesn't like, like Assassin's Creed's removal of DX10.1 after joining Nvidia's "The Way It's Meant to Be Played" program.

Optimizing drivers is one thing. Sabotaging libraries, and then forcing said libraries on developers under the guise of "support"/cross-promotion (read: Nvidia paying the developer in advance), is another.


There are AAA games optimized for AMD cards too. Like the upcoming Tom Clancy's The Division 2 - there's going to be a joint talk at GDC by Massive's Technical Director and Technology Engineer from AMD about this:

https://schedule.gdconf.com/session/advanced-graphics-techni...

Also don't forget that games are usually first and foremost optimized for consoles, and both PS4 and X1 use an AMD gpu ;-)


'Yield' is the fraction of chips produced that end up usable rather than damaged/broken. With 7nm, yields are down, and larger chips (by area) always yield worse than smaller ones.

So AMD can save money by adding extra memory rather than GPU silicon, because the memory costs less than GPU silicon.

https://www.engadget.com/2019/02/07/amd-radeon-vii-review-vi...


umm.. the Radeon VII's memory makes up over half of the production cost of the card. HBM2 is not cheap.


Per square millimeter, memory costs less than GPU silicon (and the Radeon VII's memory accounts for more than half of the total square millimeters).

https://www.anandtech.com/show/13923/the-amd-radeon-vii-revi...

(Also, yield loss per unit of area added is nonlinear, so going from 331 mm² to, say, 495 mm² would probably cut yields roughly in half, even though the transistor count would be less than doubled.)
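
Rough sketch of that nonlinearity using the simple Poisson yield model (the defect density here is assumed for illustration, not a real 7nm figure):

    #include <cmath>
    #include <cstdio>

    int main() {
        // Poisson model: yield = exp(-area * defect_density)
        double d0 = 0.004;            // defects per mm^2, assumed
        double small_die = 331.0;     // mm^2, roughly the Vega 20 die
        double large_die = 495.0;     // mm^2, hypothetical bigger die

        printf("331 mm^2: %.0f%% yield\n", 100 * std::exp(-small_die * d0));  // ~27%
        printf("495 mm^2: %.0f%% yield\n", 100 * std::exp(-large_die * d0));  // ~14%
        // Yield falls faster than area grows, which is why spending the budget
        // on HBM stacks can beat spending it on more GPU area.
    }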


It seems like Nvidia's fixed-function graphics hardware is still superior to AMD's.

From a raw compute perspective, the AMD card performs as well as you'd expect; the Phoronix OpenCL benchmarks make that point. But in gaming, people have been blaming AMD's low ROP count and weaker geometry pipeline.

A lot of it is drivers, I bet. Nvidia has a lot more money and puts more effort into drivers for sure. AMD pushes Vulkan/DirectX 12 for that reason, since the raw hardware is better exposed to the programmer.


> (And if so, why doesn't the smaller process allow more transistors?)

Smaller die sizes mean higher yields, which mean bigger margins. I can't read minds, but maybe they have the design dials set with that goal. It's a 7nm version of the Vega 64, but with 4 compute units disabled. They're also pushing the clock more aggressively.

https://www.techpowerup.com/251444/amd-radeon-vii-detailed-s...

> Or is there also a difference in the architecture where a better design could get more performance out of the same amount of silicon?

One which Nvidia has and AMD doesn't? Maybe.


More transistors = larger die size = very expensive on a leading node. And the RTX cards spend many of their extra transistors on ray tracing, hence the higher transistor count didn't translate into a corresponding increase in gaming performance.


The big surprise is the disabled PCIe 4.0 support. It makes sense in isolation, but since they plan on launching Zen 2 ASAP with PCIe 4.0 as one of its flagship features, the two would have been very complementary. Doesn't the left hand of AMD talk to the right?


AMD's gotta have a reason to sell the higher-end MI50.


They could have sold a ton of Zen 2 / Radeon VII bundles. Instead they chose to sell a few dozen more MI50s.


Maybe these are binned chips? Don't make PCIe 4.0 spec? That's okay, sell them as consumer parts.


The pricing puts it outside of what I'd consider a gaming-oriented card. The same can be said about Nvidia's RTX cards, by the way.

Nice performance though, it's the highest performance "gaming" card with open drivers today.


Taken from the other thread:

Other reviews:

- (guru3d) https://www.guru3d.com/articles-pages/amd-radeon-vii-16-gb-r...

- (tomshardware) https://www.tomshardware.com/reviews/amd-radeon-vii-vega-20-...

- (techpowerup) https://www.techpowerup.com/reviews/AMD/Radeon_VII/

- (arstechnica) https://arstechnica.com/gaming/2019/02/amd-radeon-vii-a-7nm-...

- (gamersnexus) https://www.gamersnexus.net/hwreviews/3437-amd-radeon-vii-re...

- (gizmodo) https://gizmodo.com/amds-radeon-vii-is-a-solid-gaming-card-b...

- (gamespot) https://www.gamespot.com/articles/radeon-vii-review-can-amds...

- (digitaltrends) https://www.digitaltrends.com/computing/amd-radeon-vii-revie...

- (extremetech) https://www.extremetech.com/computing/285286-amd-radeon-vii-...

- (pcper) https://www.pcper.com/reviews/Graphics-Cards/AMD-Radeon-VII-...

- (techspot) https://www.techspot.com/review/1789-amd-radeon-vii/

- (tweaktown) https://www.tweaktown.com/reviews/8894/amd-radeon-vii-review...

- (engadget) https://www.engadget.com/2019/02/07/amd-radeon-vii-review-vi...

- (pcmag) https://www.pcmag.com/review/366382/amd-radeon-vii

- (techradar) https://www.techradar.com/reviews/amd-radeon-vii

- (hothardware) https://hothardware.com/reviews/amd-radeon-vii-review-and-be...

- (kitguru) https://www.kitguru.net/tech-news/zardon/no-custom-radeon-vi...

- (hardwarezone) https://www.hardwarezone.com.sg/review-amd-radeon-vii-review...

- (rockpapershotgun) https://www.rockpapershotgun.com/2019/02/07/amd-radeon-7-rev...

- (hexus) https://hexus.net/tech/reviews/graphics/126752-amd-radeon-vi...

- (bit-tech) https://bit-tech.net/reviews/tech/graphics/amd-radeon-vii-re...

- (legitreviews) https://www.legitreviews.com/amd-radeon-vii-16gb-video-card-...

- (overclock3d) https://www.overclock3d.net/reviews/gpu_displays/amd_radeon_...

- (techgage) https://techgage.com/article/amd-radeon-vii-1440p-4k-ultrawi...

- (computerbase - google translate) https://translate.google.com/translate?sl=auto&tl=en&u=https...

- (pc games hardware - google translate) https://translate.google.com/translate?sl=de&tl=en&u=http%3A...

- (harwareluxx - google translate) https://translate.google.com/translate?sl=de&tl=en&u=https%3...

- (golem - google translate) https://translate.google.com/translate?sl=de&tl=en&u=https%3...

- (hardware.info - google translate) https://translate.google.com/translate?sl=nl&tl=en&u=https%3...




Could this be the piece Apple was waiting for to release the new Mac Pro?


At this point I expect the Mac Pro to go the way of AirPower with Apple never mentioning it again and calling the iMac Pro "good enough".


To be honest, for many purposes it is good enough. I love this machine. The Vega 64 inside the iMac Pro even has 16GB instead of the standard 8GB.


I'm sorta surprised that the specs are so similar to the MI50. Seems like this might cannibalize the sales of MI50 for applications that don't use fp64 (i.e. all ML/DL uses).


Well, that's the thing. It's clearly a discount MI50. PCIe 4.0 and the coherent GPU fabric, however, are MI50-only features, so anyone buying stacks of GPUs for hardcore compute will want to stick with the MI50 for sure.


TL;DR:

- At stock, RTX 2080-level or better performance at 1440p/4K.

- Needs driver updates for overclocking and for matching 1080p performance.

- Poor thermal design on the reference card.

- Expensive and mostly sold out.

I'll wait for Navi.


Happy to see at least SOME competition. It would be nice to see AMD offer something competitive with the RTX 2080 Ti by the end of the year, which should at least drive some pricing competition. I wouldn't be surprised to see a $100-200 drop for the RTX 2080 and 2080 Ti because of this.


300W power consumption, and in gaming performance it still only caught up with Nvidia cards drawing less than 200W.


>300W vs 200W

That's not measured; that's the manufacturer spec, which is best ignored.

Nvidia's TDP is not the thermal design number you'd expect, but rather a "typical" figure, as in measured while running some arbitrary loads.

It's dodgy, but that's how Nvidia rolls.

Always refer to measurements when you want actual numbers to compare.

Manufacturer specs are best not trusted.


Discovered this the hard way when building quad-GPU servers. The 1080 Ti has a TDP of 250W per the spec, yet most "1600W" power supplies will crap out and shut down your system if you fire up four 1080 Tis simultaneously. The only brand I've tried that didn't crap out was EVGA. Supposedly the Corsair 1600W PSU is also OK, but I haven't tried it. Even in nvidia-smi (which I think averages things out somewhat and never shows the momentary peak power), a 1080 Ti will often jump way over the 250W "limit" when it's at 100% utilization.
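
If you want to watch the actual draw rather than trust the spec sheet, NVML (the library behind nvidia-smi) exposes it; a minimal sketch (link with -lnvidia-ml):

    #include <nvml.h>
    #include <cstdio>

    int main() {
        if (nvmlInit() != NVML_SUCCESS) { printf("NVML init failed\n"); return 1; }

        unsigned int count = 0;
        nvmlDeviceGetCount(&count);
        for (unsigned int i = 0; i < count; ++i) {
            nvmlDevice_t dev;
            unsigned int mw = 0;
            nvmlDeviceGetHandleByIndex(i, &dev);
            if (nvmlDeviceGetPowerUsage(dev, &mw) == NVML_SUCCESS)
                printf("GPU %u: %.1f W\n", i, mw / 1000.0);  // board draw in watts
        }
        nvmlShutdown();
    }

Even this is a sampled reading, though, so the millisecond spikes that trip a marginal PSU can still slip through.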


This is classic AMD... medium performance for medium pricing.


Still kind of shocking that $700 is "medium pricing" now.


I splurged last year on a card that could play Elite: Dangerous, DOOM and The Witcher 3 in 4k and spent about that much. Now that card is "medium"? (not that I'm complaining about the onward march of PC gaming demands and power, just that $700 was very much the highest end last year)


I'm still using an RX 380 (mid-range in 2015), and I'm still able to get quite decent performance on modern games on my 1080p projector. The most recent titles are only in the 40-50 FPS range with settings turned up, but still totally playable. I'm not sure what practical benefit there is to the high-end cards outside of bragging rights.


More winning if you play shooters competitively. But yeah, if you're playing single-player, 50fps is fine for most people.


If you drop down to 1080p you can save a ton of money and it still looks great.

4K = ~8 million pixels; 1080p = ~2 million pixels.

Bumping up to 4K is like running the game four times at once, from a pixel-shader perspective.
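
The arithmetic, for the record:

    #include <cstdio>

    int main() {
        long uhd = 3840L * 2160;  // 8,294,400 pixels
        long fhd = 1920L * 1080;  // 2,073,600 pixels
        printf("4K / 1080p = %.1fx the pixels per frame\n", (double)uhd / fhd);  // 4.0x
    }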


Yeah, I guess I have to adjust my expectations now. Fortunately, I don't yet have a compelling need to replace my GTX960.


AMD's pricing tends to tumble kinda quick.

Expect a $500 cost in 6 months and $300 in a year.

The $700 price is probably due to low yields right now.


That's very unlikely. 16 GB of HBM2 alone probably costs AMD around $300, and HBM2 prices are unlikely to fall soon (they've actually been trending up, if what I've read is to be believed).


Navi is expected by late H1, and at that point the VII won't matter.


Isn't Navi expected to be more of a mid-market Polaris replacement, rather than a Vega successor?


It was about pricing, above.

As I understand it, Navi's meant to be very aggressively priced.


> AMD's pricing tends to tumble kinda quick.

Not sure that's really been the case with the Vega line, has it?


The Vegas are currently on sale at Newegg for $100 less than the price quoted in the article. I don't know what their regular price is, though.


Yes, the RX series has had good price cuts.

Which raises the question of whether the VII is considered an RX-series card or a Vega card.



