That's funny, I just built my first new home Linux desktop in maybe 7-8 years and installed an AMD RX 480 GPU in it, because AMD's increased openness is finally allowing a first-class open source Linux driver to be written for this generation of GPU. It really seems to be just starting to pay off in the performance and stability departments. If they keep it up, there will be good reason (at least on Linux) to favor AMD video cards.
Here's to hoping the new AMD CPU does well too, it's good for the market that Intel has competition.
AMD has definitely caught my interest with the steps they've taken towards embracing open-source software recently. I run Linux full time, and the open source movement is really important to me, but I went with Intel and Nvidia when I built my new computer last year.
My last rig was all AMD, and while it was really powerful for the time it was built, it was basically a space-heater. The power supply on my current machine wouldn't even run my old computer at idle. I have a philosophical issue with the way that Nvidia and Intel handle their Linux drivers and interact with the open-source community, but after 7 years with a computer that sounded like a jet engine, I just wanted silence.
I know it seems disingenuous to compare a 7 year old computer to a modern one, but the TDP of AMD products has not improved much during that time. Plus, my computer spends most of its time near-idle, and Intel and Nvidia look better the lower the utilization. The fans in my new rig don't even spin up unless I'm doing something rather resource intensive, like playing a game.
So, despite my misgivings about Intel, there was no chance I was going to go with AMD. The Bulldozer architecture is terrible, regardless of whether you measure actual performance or performance per watt. The only reason I considered AMD was because they enable virtualization on all of their processors (or at least the ones I was considering), whereas Intel only allows virtualization on Xeon and K-series processors.
My biggest issue right now is that Nvidia takes steps to prevent their non-Quadro cards from working in VFIO passthrough. I could have just as easily gone AMD, which I understand works much better with VFIO; the main reason I went with Nvidia for graphics was because I got the 970 for a song. Plus, after years of using AMD with Linux, I don't really trust AMD drivers to work. Nvidia drivers might be closed-source, but they work as long as I wait a week to upgrade after a new driver is released.
The Zen architecture seems like it will hit all the right notes for me, as long as AMD maintains their welcoming approach to virtualization.
That's a list of all Intel CPUs with VT-x. It's really easy to narrow down the selection from there. FWIW, I see lots of Skylake i3, i5, and i7s with VT-x.
He probably means VT-d since he mentioned passthrough. VT-x is just the virtualization extensions needed for many hypervisors, while VT-d lets your VMs directly access the actual physical hardware. Like dedicating a physical GPU or NIC to a VM.
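If you want to sanity-check a Linux box from userspace, something like this works (rough sketch: the vmx flag only tells you the CPU has VT-x; for VT-d the practical signal is whether IOMMU groups actually got populated, since that also depends on the chipset, firmware, and booting with intel_iommu=on):

    import os

    def has_vtx():
        # 'vmx' is Intel's VT-x flag in the CPU flags line
        # (AMD's equivalent is 'svm')
        with open("/proc/cpuinfo") as f:
            return any("vmx" in line for line in f if line.startswith("flags"))

    def iommu_active():
        # VT-d needs CPU + chipset + firmware + kernel support; the
        # reliable userspace signal is whether IOMMU groups exist
        path = "/sys/kernel/iommu_groups"
        return os.path.isdir(path) and len(os.listdir(path)) > 0

    print("VT-x (vmx flag):", has_vtx())
    print("IOMMU active (VT-d in effect):", iommu_active())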
I'm not sure which modern CPUs and motherboards support that. I know back in the Core2 days, when VT-d was first introduced, it was a nightmare to find the right combination. Looking from the outside, Intel also seems to play a lot of marketing games with their CPU features; sometimes top-of-the-line CPUs will be missing features that lesser ones have, like selling a $6K Xeon CPU manufactured this quarter without virtualization support. (http://ark.intel.com/products/95831/Intel-Xeon-Phi-Processor...) Supposedly they want to get that feature in there at a later date, but that's not the kind of promise you want to rely on when buying computer components.
Your VT-x search returns 1459 products, while the VT-d search returns only 788. Here's a list of things that say they support VT-x but not VT-d (http://ark.intel.com/search/advanced?s=t&VTX=true&VTD=false). Pretty sure some entries on it are wrong, but Intel is also terrible about keeping old technology names on newer spec sheets. Maybe they folded VT-d into VT-x without telling the world?
FWIW, it appears that all of the i3, i5, and i7 Skylakes support both VT-x and VT-d.
I agree that this has been confusing in the past (as evidenced by the mix of Atom, Celeron, Pentium, i3, i5, i7, Xeon in your link), but I think at least in VT-* it's been greatly simplified.
I thought I was going crazy, because I thought only the newest Haswell K-series processors supported VT-d, but I found this link which indicates it is available on basically everything now:
From April 2014: Craziest thing. The i7-4770K NOW does support vt-d. It didn't used to, but went on their site today....
I checked the edit history for the Haswell architecture on Wikipedia, and it appears that VT-d was supported by all i5s and i7s except the K series, save for the 4790K and its i5 counterpart, since at least August of 2014. I'm not sure where I got the idea it was more limited. I had been speccing a computer for a couple of years at that point, so I wouldn't be surprised if I'm digging up irrelevant information.
I'm getting ready to build a new Linux desktop, and I'm thinking about an AMD CPU but an Nvidia GPU. From everything I have read, Nvidia's performance on Linux is far better than AMD's. Is that information out of date, or are you just forecasting future improvements by AMD? I do really appreciate the open source driver efforts by AMD, but I'm not sure I'm willing to sacrifice that much performance at this point.
The new RadeonSI Mesa driver is a massive improvement over the previous generation drivers. My R9 290 only gets better every 6 months when I upgrade to a new Fedora release, and it's all because AMD publishes specification documents while the majority of nouveau development is still based on reverse engineering. If you care about performance, the Volcanic Islands cards and newer are all supported by AMD's proprietary AMDGPU-PRO driver, which is still faster than the open source one, but I expect that gap to continue closing a lot faster than in the past, as the Gallium-based RadeonSI driver is a lot easier for developers to work on.
Still, comparing NVidia's current lineup to AMD's isn't an apples-to-apples comparison. If you want a 1070 or 1080 just get one; Vega is still months away, and while we see AMD stomping in Vulkan and DX12 benchmarks against similarly priced cards (RX480 vs GTX 1060), they don't have a proper answer for the higher end cards until Vega is out.
Otherwise, don't let driver support deter you, the whole design behind AMDGPU-PRO is much like NVidia's proprietary drivers - it lets AMD reuse the majority of their Windows drivers to keep parity between platforms; and, again, the open source drivers only keep getting better at a rapid pace.
I do have to say that my experience comparing the OLDER fglrx driver (for an R9 285 card (GCN 1.2)) with the Mesa 13/13.1 releases (amdgpu driver) has been that the fully open stack is MUCH MORE STABLE. It isn't perceptibly slower, but it is a lot less likely to have some weird corner case bug in random games.
nvidia raw performance is better, but requires installing a proprietary driver (like you would on windows).
amd is a bit lower, but you can use the built in open source driver and get roughly the same performance as the proprietary driver (from what I understand, the proprietary driver is only going to be there to support their direct clients, and in general they are working on getting all generic improvements into the open driver).
From what I understand, this means kernel updates mean you have to re-build the nvidia drivers (DKMS can automate the rebuild, but it's still an out-of-tree step), whereas with AMD, the new drivers are included in the new kernel.
either way, I went with an AMD rx480 this month because I dislike nvidia's closed-off business practices (especially in comparison to AMD's open practices). it runs everything I want it to run just fine, so it's not like I suddenly can't play some game because I went AMD.
AMD wasn't pushing code for better performance; they were pushing a janky abstraction layer for their devices to make porting their Windows driver easier, one that basically did what NVidia does and skipped a lot of the existing DRM infrastructure in the kernel.
If you want to write a kernel driver that's effectively a Russian train toilet nobody will stop you, and you can feel free to maintain the engineering effort on it - but it's not going to be accepted into the mainline kernel and you'll have to maintain it out of tree.
If AMD wants a shim to make it easier to port their Windows drivers over that's perfectly acceptable, but they need to work with the existing DRM infrastructure and the people that maintain it to get a kosher driver that everyone can be happy with.
Actually, it was rejected only in its current state. AMD is still iterating on that driver to get it in the kernel, and is working with the kernel team to get that in there.
That said, they were warned 6 months before the rejection that abstraction layers in the kernel would probably get rejected. The rejection wasn't news in any sense - Dave Airlie just decided to be more direct about it because the warning from half a year ago was ignored. No one knew that an abstraction layer was what AMD was building, because they did all the development in private and then just handed over the code. Had they been more open and engaging with the kernel developers, they could have saved a lot of time.
There were a few emotional responses, and then it was made clear that they weren't done yet, and were still willing to work together. So everybody calmed down and got back to work. This all happened within a few days of the initial rejection, so you really shouldn't push the "it's never going in the kernel" narrative, because much of it will probably be in there by the middle of next year.
You're thinking about the display code. That code is not that important for performance; it's about detecting and driving displays connected to your graphics cards, managing page flips and so on.
It only interacts with performance in that driving displays requires memory bandwidth and consumes power.
These last two generations of GPUs, I've gone Nvidia, after being sick and tired of fighting AMD graphics drivers, and especially fglrx.
I'm glad they're increasingly open, and it's good to see the excellent work happening in the radeon drivers et al. I've just been burned enough by them that my aversion has become quite deep seated. That, and so many things favour CUDA instead of OpenCL for acceleration, which is incredibly annoying (especially as it's possible to leverage even inbuilt Intel GFX for that in addition to the main GFX card).
I'm also using a RX 480 on my Ubuntu 16.04 machine. We only got free-sync support very recently, and there is no overclocking/fan control or GUI of any kind.
Also, as far as I understand, the open source driver is only included with Linux kernels 4.7+, while Ubuntu 16.04 is on 4.4 (16.10 is on 4.8). So if you want to use the open source driver you have to pick your distribution carefully.
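Quick way to check what you're actually running (minimal sketch, assumes the usual uname-style version string):

    import platform

    release = platform.release()  # e.g. "4.4.0-53-generic"
    major, minor = (int(x) for x in release.split(".")[:2])
    ok = (major, minor) >= (4, 7)
    print("kernel", release, "->", "OK for amdgpu" if ok else "too old")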
I loaded Ubuntu 16.10 into my build for just that purpose, though really it wouldn't be difficult to bump the 16.04 kernel up to 4.8+. There are also 16.04 and 16.10 PPAs with different levels of Mesa driver that are built and tested ahead of the main repositories (stable and nightly). If you're not interested in being that close to the bleeding edge, I understand; just know that some of the improvements at the leading edge are maybe 6-12 months out from wider distribution.
Slightly OT but why is the .10 release always the short-term release and .04 the LTS? You would intuitively think higher number => more stable and that you would avoid major breaking changes on a minor release number, but they do exactly the opposite.
To put it a little more explicitly: Ubuntu names their releases for the YY.MM of release, so 16.04 was released in the year 2016 (16) in April (04). Those are date-based releases rather than semantic versioning (semantic is x.y.z with x major, y minor, z patch).
They release an LTS version every two years in April: 12.04, 14.04, 16.04, and the next will be 18.04 in 2018. They do add a "patch" version after that, but the YY.MM releases are the major releases. 16.04 is an older version than 16.10, but it will be supported with patches for a longer period of time (major versions of individual software packages will stay the same).
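Toy illustration of the scheme, assuming the every-two-years-in-April cadence keeps holding:

    def parse_release(version):
        yy, mm = (int(x) for x in version.split("."))
        return {"year": 2000 + yy, "month": mm,
                "lts": mm == 4 and yy % 2 == 0}

    for v in ("14.04", "16.04", "16.10", "18.04"):
        print(v, parse_release(v))  # only the .04s of even years are LTS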
No reinstalling needed, you can just do a dist-upgrade to the next distro release whenever it's out. There are reasons to favor the LTS release, but I don't really see that as one of them.
I have never had a problem on a release upgrade. It's one of the reasons to like Debian based distros IMHO. I suppose that may in part be because I typically waited for a while after a new major platform release before moving to the next release.
The 10.04 to 12.04 upgrade broke audio on my xbmc (now Kodi) box. Audio is a pretty fundamental thing on such a box. I spent a day trying to fix it, until I threw in the towel and reinstalled it from scratch.
The 12.04 to 14.04 upgrade was also broken. I don't remember the details; it booted, but the dependencies went weird. I wasn't trying to fix it anymore, I just reinstalled from scratch.
I haven't upgraded it to 16.04 yet. I'll try and see what happens once Kodi stops supporting 14.04.
You should use the upgrade system provided by Ubuntu ('update-manager'), it knows more about the changes than the standard 'dist-upgrade'. You can also run it to test what changes it will make.
Second, if you are using PPAs you are much more likely to have problems. There's no way to test all the inter-relations between packages that are outside the central archive. If you have packages that have upgraded fundamental packages in Ubuntu's Main archive then it's likely to break.
I mention this because you're talking about multi-media.
In general, it's best to remove PPA software before you launch the upgrade; in fact, it's a good opportunity to remove anything from the system that you don't really need. For example, for my 16.04 upgrade I removed the C++ environment as I'm not using it.
The only third-party PPAs were team-xbmc and the nvidia driver. I think the more likely reason for breakage is that it's not a desktop installation from the live CD; it's a minimal installation from the mini CD with hand-picked packages on top. It has X11, but no desktop environment, for example.
I guess no-one (or fewer people) were testing that configuration. I wonder if you could use a VM or similar to at least test the upgrade before you do it. I do quite a bit of cleaning before an upgrade, but hitting the button to 'do it' is always a bit of a stressful moment!
AMD are in no position to fuse off functions in their CPUs - they're barely relevant even with all the functionality enabled.
If you look to the GPU market where AMD is in better shape, they do fuse off features like FP64 and ECC unless you pay for the heavily marked up FirePro versions.
Yeah. I'm definitely more loyal to AMD's GPU products than I am their CPU products. I am hoping to get a new Zen CPU, but if it flops, I won't hesitate to go Intel... I'll definitely avoid nvidia though.
The thing that stops me from getting an AMD GPU for GPU programming is that CUDA seems to be more widely used. I really want to like OpenCL, and hopefully the open standards AMD is pushing will gain more traction...
Not sure how much use it will be, but AMD's GPUOpen initiative recently released a tool[1] that translates native CUDA code to portable C++ that can be recompiled into either CUDA or OpenCL. I haven't played with it at all so I can't verify how well it works.
The difference between the kernel languages is minimal; many kernels can be translated with basic search & replace (see the sketch below).
The difference is in the scaffolding (CUDA is easier but less flexible), and pre-existing libraries on offer.
If you're going to be writing the kernels yourself then it is easy to maintain both versions, and if it is for learning it doesn't matter at all (but I prefer the standard that works on all my machines).
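To make the search & replace point concrete, here's a toy translator. Illustrative only: real kernels also need address-space qualifiers (__global etc.) added on the OpenCL side, and the host-side launch code is completely different.

    # ordered longest-first so substrings don't clobber each other
    # (Python dicts preserve insertion order)
    CUDA_TO_OPENCL = {
        "__global__ void": "__kernel void",
        "blockIdx.x * blockDim.x + threadIdx.x": "get_global_id(0)",
        "threadIdx.x": "get_local_id(0)",
        "__shared__": "__local",
        "__syncthreads()": "barrier(CLK_LOCAL_MEM_FENCE)",
    }

    def translate(cuda_src):
        for cuda_tok, ocl_tok in CUDA_TO_OPENCL.items():
            cuda_src = cuda_src.replace(cuda_tok, ocl_tok)
        return cuda_src

    saxpy = """
    __global__ void saxpy(float a, float *x, float *y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }
    """
    print(translate(saxpy))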
There are plenty of zero-cost C++ wrappers available, including the official bindings. Most people who use OpenCL from C++ tend to accumulate their own convenience wrappers over time. The rest of those languages have fairly easy interoperability with C code and therefore access to the C api, though some may also have established convenience libraries. I wouldn't suggest .NET or Haskell for most serious GPGPU work, but Python, Julia, Fortran and others are frequently used. The PyOpenCL library is particularly excellent.
If you were referring to the kernel language (a restricted subset of C99), I don't find that to be a disadvantage in practice. While CUDA supports a fairly useful subset of C++ (most notably, templates), I still find host-side metaprogramming to generate kernel code to be a much more useful approach in most cases. Of course your preferences may vary, but I haven't seen many experienced GPGPU people express dissatisfaction with this point.
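For example, with PyOpenCL the kernel source is just a Python string, so what would be a template parameter in CUDA can be baked in before compilation (minimal sketch, assumes a working OpenCL platform is installed):

    import numpy as np
    import pyopencl as cl

    KERNEL_TEMPLATE = """
    __kernel void scale(__global {dtype} *buf) {{
        int i = get_global_id(0);
        buf[i] *= {factor};
    }}
    """

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    # "template parameters" resolved in Python, not in the kernel language
    src = KERNEL_TEMPLATE.format(dtype="float", factor="2.0f")
    prog = cl.Program(ctx, src).build()

    host = np.arange(16, dtype=np.float32)
    mf = cl.mem_flags
    buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=host)
    prog.scale(queue, host.shape, None, buf)
    cl.enqueue_copy(queue, host, buf)
    print(host)  # original values, doubled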
False. The official wrappers support C++ from OpenCL 1.0 on. Maybe this isn't obvious because the C++ bindings were introduced later, but they're backwards compatible and you can set the target API level. (And of course you use 1.2 if you need to support lagging vendors such as NVIDIA)
I have no idea about the other languages but surely other bindings must exist?
Anyone know whether GPU Ocelot actually works well? In theory that should allow you to run CUDA on your AMD hardware via OpenCL.
I always wanted to try it, but on the flip side the fact that AMD was starting their own project to do the same thing doesn't fill me with confidence that GPU Ocelot is worth my time.
I haven't used it myself, because I looked at it and saw that it's been unmaintained for over a year, has no adoption and requires ancient versions of CUDA and LLVM. The concept is interesting, but at this point the project seems dead to me.
I have the same GPU card in a new desktop, waiting for OpenBSD to pick it up. Since they don't support Nvidia on principle, AMD cards are all I ever look at.
I am still loyal to AMD CPUs, but I haven't been loyal to AMD GPUs since they were ATI. I always found their driver stack to be more cumbersome in the past, and I had a rash of bad cards that left a bad taste in my mouth. Not sure if it's changed since, but I still have faith in their CPUs. The way I see it, the benefit of supporting innovation that drives Intel to improve outweighs the benefit of having an arguably faster computer for arguably the same amount of money.
Without Chevrolet we'd be driving Model Ts. Without AMD we'd be running 8080s. Support the little guy, even though they've historically not always made the right choices.
I was loyal to AMD's CPUs from 1999 - 2013. Price and not nerfing advanced features (overclocking, VM extensions) kept me coming back. However, there aren't many recent server/workstation options, so I begrudgingly switched over to Intel Xeon chips.
I bought an AMD RX 480 this year; I hadn't bought a discrete GPU in a decade. I looked at Nvidia, but saw you needed Quadro/Grid cards to use with VT-d. AMD's GPUs work with VT-d out of the box.
It'd be a damn shame to lose a company like AMD that doesn't disable features for marketing reasons. I'll happily buy another AMD CPU if the Zen line comes close to the hype.
AMD's CPU offerings have been lackluster in recent years; Intel has taken the cake in that department, so I'm not surprised people are building Intel more and more. Ryzen is going to change all that, though: AMD is shaping up to be a real competitor again. On the GPU front, the 8GB RX480 is making quite a dent in the market. It's beating out NVidia's 6GB GTX1060 in DX12 games and creeping towards the 1070's benchmarks, while costing less than $250. They aren't catering to the high-end market that the GTX1080 latches onto, but the price for performance is hard to beat. I expect Ryzen to play out the same way.
A major cost of that good price-per-performance metric with AMD cards, however, is their eye-watering power draw per unit of performance (and their poor heat dissipation solutions).
The electricity cost alone could mean that in maybe a year or so they'll cost more than an equivalent Nvidia/Intel card.
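Back-of-envelope, with made-up but plausible numbers: an extra 100 W under load for 4 hours a day is 0.1 kW x 4 h x 365 ≈ 146 kWh a year, or about $44 at $0.30/kWh. Whether that ever closes the purchase-price gap depends heavily on your usage pattern and local rates.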
Also not sure how the hotter-running AMDs affect their life, but it's possible it's considerable...
Being honest, AMD disappointed me. It was my first AMD GPU, purchased after all the pain of nVidia Optimus, and... AMD managed to outdo nVidia in how shitty they are.
* All cards and models have terrible power usage and heat.
* Drivers, both on Windows and Linux, still aren't decently stable. On Windows I had to switch drivers SEVERAL times depending on what game I wanted to play, because each game hit a different serious bug.
* AMD software (not just drivers) crashes a lot on my machine. No other software behaves like that.
* AMD tried to "pretend" they don't have a power usage problem. In the 380X's case they just set a tiny TDP limit on a beastly GPU, so it constantly throttles due to power limits (even when increasing them... I am looking now for information on how to edit the TDP beyond the card defaults). The 380X TDP is the same as the 380's, despite it having double the RAM and more transistors to power.
* AMD's RX480 "TDP cheat" instead was to add only a single power connector, pretend the card didn't have excessive power usage, and let it melt people's PCI-e slots by pulling 7.7A from slots rated for 5.5A (see the quick math after this list).
* AMD's distribution network is terrible; they make zero effort to sell around the world, while nVidia and Intel trounce them, not only in marketing, but by reaching local companies to do distribution deals. For example, I paid the same price for my 380X as for a GeForce 970 (this was one month before the RX480 launch). When nVidia launched the 1080 they called the local media and told them what price the retailers were supposed to sell the 1080 at (even if they ran out of stock), a price that was slightly cheaper than AMD's Fury cards...
* AMD and partner support just sucks. I asked the SIZE of my card, and their support instead told me to "RMA" it. I tried to explain I wanted information, not an RMA, and they refused to help... When I asked about the TDP, things got worse; they got even more insistent that I should just return the card (and eat the shipping costs to the US myself!).
* AMD's official forums have employees spouting bullshit, like claiming, in a huge thread of people complaining about the 380X, that it was the hardware on the complainers' machines that was defective - without realizing that he just accidentally painted the whole 380X product line as shitty, since if lots of people have the same issues and it is a hardware issue, then the hardware is crap. Also, making ANY negative comment about AMD on the forums gets you attacked; people claim you are an nVidia shill or worse. I even got banned from AMD chat after I asked how to circumvent a driver bug that was preventing me from setting my CRT resolution correctly, because they didn't want me talking in public about negative things.
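On the RX480 point, the quick math (taking the figures above at face value, and assuming the 5.5 A rating refers to the slot's 12 V rail): 5.5 A x 12 V = 66 W is what the slot is specced for, while 7.7 A x 12 V ≈ 92 W is what was being drawn. That's roughly 40% over spec, sustained, through motherboard traces that were never sized for it.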
Interesting. I am an nvidia owner but have read many times that AMD's software and drivers are much better now than they used to be. If that bit about the 480 is true, that's appalling. It doesn't really surprise me that the brand's product forums are defensive about the brand; I'd wager Nvidia forum users respond to negativity about nvidia the same way.
1. Draw more power from the cable (which is what the card should have done in the first place).
2. They put back on the card the behaviour the 380X has, which I am trying to get rid of: clamping down hard on the TDP and throttling heavily.
And yes, the claims of this making the card faster ARE true, and they also apply to older cards. AMD cards draw so much power and make so much heat that if you UNDERCLOCK them, they can get faster, because running stably with less throttling is sometimes a bigger benefit than the clock speed you gave up.
That said, last I checked (this was 2 months ago), the driver changes were Windows only; some guys mining bitcoin on Linux didn't notice and happily melted their riser cables.
I really don't like nVidia's behaviour as a company, pulling shady tactics left and right, but AMD frankly shocked me with their shoddy engineering. (Also see: recently the Linux kernel devs REFUSED an AMDGPU driver patch. AMD was warned back in February that the patch they were attempting was crap, and they still went ahead with it; there were even some passive-aggressive personal insults exchanged on the official Linux kernel mailing list.)
The really interesting "loyalty" aspect I've seen recently is AMD users (/r/amd, etc) buying into AMD stock, big time. It's not just loyalty on hardware purchases anymore, it's literally propping up their stock price.
Of course they seem to have been correct that AMD was way underpriced where it was before - but at some point it'll come back down too.
The lack of a competitive consumer CPUs from AMD gave the year to Intel when it came to new gaming systems. With RYZEN and Vega GPUs coming Q1 2017, AMD’s 80%+ rally in the market, and Intel’s lack of innovation, I expect a comeback for AMD in the gaming PC market. Their move back into the Professional and HPC GPU market will also be a huge financial push for AMD to get back where they want to be.
We all know AMD is losing market share to Intel and Nvidia, but it's interesting to see that there was so much loyalty to AMD GPUs among people who bought AMD CPUs. This is starting to decrease as Nvidia wins more of the GPU market. AMD CPUs were only included in 11% of PC builds in the last six months on PCPartPicker.
I believe AMD's market share in consumer PCs is about to double or more with Zen, if their demo is to be believed. Their supposed < $500 CPU is on par in performance with Intel's $1100 CPU, with a TDP of 95W instead of Intel's 140W.
If Zen ends up mopping the floor with the E3 and E5 Xeons I will be one of the first to replace gear in my homelab. The amount of money Intel wants for a second E5-2403v2 (a really weak chip in comparison to the similarly priced i5-4570 in my desktop) to go in my ThinkServer TD430 is insane. Not to mention they're still gimping the memory capacity and PCI-E support of the low-end E3 chips, even though they can cost almost as much as a low-end E5/E7 with similar clock speeds (this part is extremely annoying for my FreeNAS box, an HP ML10; I could really use support for more than 32GB of RAM).
High end I don't know, but for low end gaming (where my builds always fall) I'd never dare to buy anything but Intel/Nvidia; in bang-for-buck there is simply no comparison, especially if you live in a place where kWh are expensive.
The new architecture may change things, but for now, that's it. They might have started losing market share due to shady tactics from the competitors, but the nail in the coffin was the botched Bulldozer. That had consequences felt in every series down the road, including unrealistic power consumption per unit.
In AMD's case, I think at least some of the loyalty to them is more than fanboyism. They're the only competitor to Intel in the desktop CPU space and Nvidia in the GPU space, and they're in a much more precarious financial position than either competitor. If AMD go away, both of those markets become completely monopolized. So in a way, it is to everyone's benefit to disproportionately support AMD.
I don't know how many of you here keep up with AMD but their drivers have lately been great, on par with Nvidia at least. The Crimson edition showed a change toward quality. I still think that for the price AMD cards are competitive.
The data sample is for the past six months -- from my limited understanding, most big-ticket items have their prices raised prior to the Christmas holiday shopping frenzy so that grandiose claims of price cuts can be made with a straight face.
Perhaps most AMD GPU products have seen a bigger price increase in the past six months in preparation for this shopping frenzy?
I've always supported AMD because they were the underdogs. However, over time the cost advantage disappeared for its GPUs.
I will continue buying AMD CPUs, but I've already switched to Nvidia for GPUs. I will switch back if the next offering is something like the Maxwell cards: silent, low-power GPUs.
I still purchase AMD CPUs since the price difference versus Intel is so huge. I have, however, given up on AMD GPUs; the price and performance are usually around Nvidia's, but the experience is just always a little crappier or buggier.
I always want them to do well, since I don't want just one company running the x86 market.
That's no surprise. AMD's CPUs are now objectively inferior except for a few narrow use-cases: highly threaded workloads, VM hosts that need lots of physical cores to pin to machines, maximum iGPU performance, ECC RAM support on a budget (note that i3 also supports this), or very cheap fileservers/media PCs (especially AM1). A high-end FX-8350 is going to significantly underperform an i3-6300 while gaming - especially in minimum frametimes. The single-threaded performance of AMD's construction cores (Bulldozer/Piledriver/Steamroller/Excavator) has always been abysmal.
The FM2-based products are alright, but they are glorified laptop processors and they do not really compete well in the desktop market overall. Nice if you want a decent iGPU but most people use discrete GPUs for any serious gaming, and without the iGPU all you have is a mediocre CPU.
AMD's future in the CPU market rests heavily with Zen. Right now they essentially do not compete for the vast majority of users (power-sensitive mobile/server market, productivity users, or gaming). They run the games, the averages are sometimes decent, but the minimum frametimes suffer pretty badly and they use a lot of power. Compare the 99th-percentile frametimes and the cumulative frametime charts here (both are "badness" metric for measuring stutter) and you can see that AMD's single-threaded performance really torpedoes some games far beyond expectations. The FX-8350's 99th-percentile frametimes are significantly worse than a Pentium G2130 and it even falls behind a dual-core Clarkdale (first-gen Core i5).
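To see why the percentile metric matters, here's a synthetic illustration (numbers are made up): two runs with identical average frametimes, where only the 99th percentile exposes the stutter:

    import numpy as np

    smooth = np.full(1000, 18.4)    # perfectly even frame pacing
    stutter = np.full(1000, 15.0)   # faster on most frames...
    stutter[::50] = 185.0           # ...but a hard hitch every 50 frames

    for name, ft in (("smooth", smooth), ("stutter", stutter)):
        print(name, "mean %.1f ms" % ft.mean(),
              "99th pct %.1f ms" % np.percentile(ft, 99))

Both runs average 18.4 ms, but the stuttering run's 99th-percentile frametime is 185 ms - exactly the kind of badness an average hides.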
AMD's GPU products, on the other hand, are still reasonably competitive - although their top product only competes with a GTX 1070 and is very low on VRAM capacity during a time when VRAM consumption is increasing rapidly and will continue to do so (particularly DX12 and Vulkan games). The RX 480 is a solid card though, especially for the price, and the Fury is due for a refresh soon with the new Vega series, which will undoubtedly have more VRAM.
In particular, there's a problem with combining AMD GPUs and AMD CPUs. AMD's GPU driver stack has a reputation for being single-threaded and somewhat inefficient - so you really need good single-threaded performance with AMD GPUs more than ever before, and Intel processors are very much the preferred pairing. Again, this difference is particularly pronounced when comparing minimum frametimes rather than averages.
They have been working hard on cleaning this up with Crimson and they made another big driver refresh recently too - but AFAIK there's still a pretty significant quality-of-life improvement from using Intel processors with your AMD GPU due to minimum frametime improvements.
Conversely NVIDIA's driver stack has a reputation for being less dependent on good single-threaded CPU performance. So perversely, if you are running on an AMD CPU then you are best off getting an NVIDIA GPU.
Also, side note: AM1 is my favorite AMD CPU platform right now by far. The CPU supports ECC, most motherboards don't but the Asus AM1M-A does. It makes a nice little NAS box if you can forgive its paltry 2 onboard SATA channels and mATX footprint, and you can pick up a CPU+mobo for $45 from MicroCenter.
The surprise is that it is supposedly happening now, shrinking relative to 2015, when AMD CPUs were already just as far behind.
Maybe there was a pattern of brand-loyal, sufficiently rich gamers who made a hobby out of building a new system at the very top end of AMDs offerings whenever a new generation Radeons came out. Those would not find an excuse for an upgrade this year because the latest generation of Radeons tops out lower than previous generations.
But this explanation attempt breaks down completely when I try to back it up with the numbers from the article. The way I understand them, the average AMD/AMD system has actually become more expensive, suggesting that AMD lost more at the low end. But I have similar problems with the main conclusion of the article: what I see in those tables is that the Radeon fraction has dropped faster on Intel machines than on AMD CPUs. With this in mind I would rather conclude that brand loyalty is even more important for AMD than it used to be, it's just that with shrinking general popularity of Radeons, the CPUs get less of a helping hand.
The analysis in the article (and the PCPartPicker spreadsheet from which they source their data) excludes integrated GPUs.
This is a significant difference, because Intel has a 17.48% marketshare on graphics per the Steam Hardware Survey [1] (the original source for the spreadsheet data), while AMD has a 23.46% share -- which includes both their integrated and discrete graphics.
The average price of AMD builds has risen because the lowest-end builds on the Intel side can rely on Intel's integrated graphics. Intel integrated GPUs attain one or two percent share of all GPUs for each generation, while AMD APUs are far behind.
I used to be quite loyal to AMD (and I'm typically not loyal to any companies), but their latest chips are meh. Their GPUs perform very well in benchmarks, but are buggy and poorly supported otherwise.
I often had a lot of problems with the drivers on AMD GPUs. Sometimes it took 2 months for a bug to be fixed - two months during which my laptop overheated, I was unable to play games, and I had problems on Youtube.
My last 3 discrete PC GPUs were AMD, and as long as AMD keeps reasonable prices, Linux kernel support, and support for open standards on Linux (OpenGL, OpenCL), I'll continue buying from them.
My kids and I game on the family computer some, but nothing that requires bleeding edge technology. To me the performance of AMD GPUs is good enough, but they produce a ton more heat and require more active cooling than a comparable NVidia GPU. Even with very good fans there is no way to get a quiet PC that performs well using AMD GPUs.
It's funny it's decreasing now... because now Vulkan is released, which honestly puts AMD back in the market on power consumption and performance. Their CPUs I've ditched after I couldn't get the octa-core properly stable, but I'm leaning towards their GPUs now, as it's less than half the price for the same performance...
After an AMD driver update removed audio passthrough over HDMI on my HTPC build, I'm not going back for more. Nvidia seems to support their gear for longer, too.