AMD Returns to Full-Year Profitability, Forecasts Strong 2018 (extremetech.com)
497 points by artsandsci on Jan 31, 2018 | 238 comments

I recently bought a Ryzen. I was moving from a Mac to a PC, and I hadn't owned a PC in over 10 years. Thing is, 10 years ago my PC had 4 cores... and at the time of choosing, Ryzen was the only consumer processor (with a reasonable price tag) offering more cores.

I'm very pleased: my Ryzen is fast and stable, and I'm glad the company is reaping the rewards.

Another Ryzen owner here. I came from mobile 2nd and 3rd gen i7s.

The price-to-performance is insanely good. I'm on an X370 desktop with 2x RX 550s and an RX 560, using it as a virt/dev host. I have 16 Docker instances running on the host OS (Arch Linux).

Then 3 VMs with PCI passthrough. One is for Windows gaming/development: until about three weeks ago I was getting a solid 60 fps at 1080p on high across most games, but recently it's been down below 10 fps. The next VM is a Linux developer desktop. Lastly, an emulator box connected to a projector away from my desk.

To switch between them I just have dongles, 1x HDMI and 1x USB, behind my keyboard, and alternate the plugs. I did this because I couldn't find a good HDMI KVM that did 4K above 60 Hz.

This year has really been amazing for enthusiasts.

Why has the framerate dropped so much?

I have no idea, to be honest. I tried different driver versions in Windows, host kernels, etc. Even booting back into a native Windows OS, with no KVM in between, showed the same frame rate drops. My 3DMark score was also just about halved.

Sounds like an awesome setup :) How much load does your CPU have running so many containers and VMs? I have heard of software solutions for I/O switching, so you could get rid of the USB switching.

I run an Arch host with passthrough myself, but it was a long project setting everything up just right, especially for gaming.

Really looking forward to buying a Ryzen or Threadripper to go further and virtualize my working environment, etc.

External public link; I also just realized my monitoring isn't fully active, but you can see historical trends.


The CPU is barely breaking a sweat. I wouldn't mind a Threadripper so all the VMs could get a quad core, but right now it's plenty powerful.

The input was the biggest pain in the butt, especially on Windows... I've seen the evdev passthrough and hot swap, but didn't have the best luck with it. While hacky, assigning a USB controller to each VM has worked best.

I agree it's a long setup, and it still requires some care; it's not straight up and running like a Linux/Windows desktop. I've had Windows just mark the USB controller as inactive, and then I need to reboot the entire host - something I'm still digging into. But there is much better documentation out there now. Things like Looking Glass are also impressive.

The only KVMs I'm aware of that'll work for this are expensive and DisplayPort rather than HDMI (not a big deal, passive adapters exist).



> This year has really been amazing for enthusiasts.

I mean, graphics card prices have been pretty horrific, but I agree on the CPU side.

Right? I can’t wait for Bitcoin to crash. Let GPUs do something of real value other than mining coins.

RAM prices are through the roof as well. SSD seems to finally be stabilizing somewhat though.

Awesome, what do you use to run the VMs ?

Yea, the whole Zen lineup seems to have been huge for the company. I switched from an older Intel platform (i5-4690K) that I was using to host a bunch of virtual machines for development and play to a Threadripper. The difference is night and day, and it's an affordable way for me to move to ECC RAM, since my use case needs a lot of it. Very happy to see AMD making great strides again.

Ditto on the affordable option for ECC ram. I've also gone the Threadripper route and it is just fantastic for compile times of large systems.

Yea, I've had it about 3 weeks now, and I'm still amazed at how much power it's got. I've never had a system with so much spare CPU and RAM that I can run anything without ever impacting something else. I've actually hit the point where compile times are significantly more disk-bound than CPU-bound. I'm working on a project to take advantage of all this CPU power to help the Perl community a bit: using the 30k+ libraries and their test suites to validate that in-development versions of Perl remain properly backwards compatible (that's been a bit of a sticking point in this latest cycle of Perl 5 development).

Oh, what's your use-case for ECC ram?

The better question would be: what's the use case for non-ECC RAM? I wouldn't want unreliable hardware for any task, even one as simple as gaming. It's a shame that ECC RAM is available only to enthusiasts, and at an unnecessary premium. ECC really should be the baseline; computers are meant to be reliable.

Is there any Windows/Linux utility that shows the number of ECC errors corrected since boot?

If not, AMD should write and promote one.

Very curious how often it happens in normal home/office usage.

I used to work for a silicon company that took an embedded network switch system with ECC logic to a nuclear lab for testing, to verify/showcase the ECC functionality.

In Linux, yes, there is a service called mcelog and a utility from the edac-utils package called edac-util.

You will see correctable ECC errors on systems. How frequently honestly seems to depend on the workload and the system itself. My suspicion is that they are often caused by poor PCB layout, and ECC saves you. I spent literally weeks (nights, weekends) chasing down an issue I thought was a software bug but turned out to be a board layout issue on an embedded system. If the system had had ECC, the error would either have been corrected or we would have gotten the uncorrectable ECC error trap. Since then, every workstation/server/desktop I spec has ECC. I wish more laptops had it.
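Under the hood, edac-util reads per-memory-controller counters that the kernel exposes in sysfs, so you can also poll them yourself. Here's a minimal sketch (assuming the standard EDAC sysfs layout, which is only populated when the appropriate edac kernel module for your chipset is loaded):

```python
import glob
import os

def ecc_error_counts(edac_root="/sys/devices/system/edac/mc"):
    """Sum corrected (ce_count) and uncorrected (ue_count) ECC errors
    across all memory controllers reported by the kernel's EDAC drivers."""
    corrected = uncorrected = 0
    for mc in glob.glob(os.path.join(edac_root, "mc*")):
        for name in ("ce_count", "ue_count"):
            path = os.path.join(mc, name)
            if not os.path.exists(path):
                continue
            with open(path) as f:
                count = int(f.read().strip())
            if name == "ce_count":
                corrected += count
            else:
                uncorrected += count
    return corrected, uncorrected
```

If the EDAC module isn't loaded, the glob simply matches nothing and you get (0, 0), which is also what you'd see on a machine with no errors logged.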


I tried edac-util on 5 of our 30+ servers:

    "Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz" with 128G RAM each.

    edac-util: No errors to report.

System uptime is ~30+ days. We recently had to move those servers; I wish I had checked this command before the move. The uptime would have been 600+ days for some of them.

Even if you never personally encounter an error, knowing that single-bit correction and multi-bit detection exist will save you in terms of peace of mind and troubleshooting other issues.

Plus, if you're scrubbing your storage the last thing you want is a memory error killing your data.

The problem with bit flips is that they accumulate in high uptime systems.

If you reboot your PC at least once every week it's not going to be a problem.

In the early ECC days, you 'washed' memory to fix this. On a read, a single-bit ECC error is actually repaired by the hardware. To get the most benefit from this, you would want to read every allocated memory location periodically, 'washing it clean' so the accumulated single-bit errors wouldn't become double-bit errors (unrecoverable).

I'd put a wash routine in the background process, where it would string-move a block of memory to nowhere in a round-robin way. Not a terrible hit on the cache; we're idle when in the background task, so we're not impacting the most-used code. There was some latency impact on interrupts and the like.
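In modern terms, the wash loop would look something like this sketch (illustrative only: the original would have been a firmware/OS routine doing raw string-moves, and a Python-level read doesn't guarantee a DRAM access the way an uncached load does):

```python
import time

CACHE_LINE = 64  # bytes; one read per line is enough to trigger check-and-correct

def wash(buf, chunk=1 << 20):
    """Touch every cache line of `buf` so the hardware's on-read single-bit
    ECC correction rewrites any flipped bits before a second flip in the
    same word can turn them into an uncorrectable double-bit error."""
    sink = 0
    for start in range(0, len(buf), chunk):
        block = buf[start:start + chunk]  # the slice itself reads the memory
        # touch one byte per cache line; the load is what matters, not the value
        sink ^= sum(block[i] for i in range(0, len(block), CACHE_LINE))
        time.sleep(0)  # yield between chunks, mirroring the background-task idea
    return sink
```

Modern memory controllers do this in hardware ("patrol scrubbing"), which is why you rarely see software wash routines anymore.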

> I wouldn't want unreliable hardware for any task, even one as simple as gaming.

Would you prefer frame-perfect rendering to increased performance?

We just set up a Kubernetes cluster for building C++ and Java software. Thing is, you want to be absolutely sure that the software you ship is built correctly and doesn't have any bit-flips ending up in your final builds, so ECC is an absolute must. Threadripper supporting it allowed us to create that cluster with cheap commodity hardware and made self-hosting the build farm the clear financial winner, especially since we have tons of free rack space (it came with the building that was bought), already host servers locally (which means a lot of the support infrastructure is already in place), and have a solar surplus throughout the year in this building.

In my case, I've got a large amount of RAM (128GB) and run many virtual machines for various purposes. Along with that, I've got 24TB of drives hooked up, running ZFS, to back up those virtual machines, family photos, movies, etc. Being able to know that an error has happened and been corrected or handled appropriately (even killing the system and needing a reboot is appropriate) so that data isn't destroyed is a good thing for me.

I'm not the parent commenter but it's probably finance or CAD modeling, environments where soft bit-errors (ones that silently and unpredictably cause data corruption rather than hard-crashes that are reproducible) can lead to nightmares.

Or they could be using ZFS.

"There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem." - Matt Ahrens[0]

[0]: https://arstechnica.com/civis/viewtopic.php?f=2&t=1235679&p=...

I didn't say it was required. It's not uncommon for people to want to use ZFS specifically because they highly value data integrity. If that's the case, then it probably makes sense to use ECC.

There is not.

1. desktop or laptop? 2. does it get hot under load?

This is great. It's super important to have multiple market players producing CPUs; obviously AMD is not the same company as Intel, but we also really can't afford to have a monopoly in something as important as CPUs.

Wait a second though: AMD and Intel are practically a duopoly in the PC space; they almost fully control the x86_64 patents, and they keep patenting more instruction set extensions every year (in fact, the original x86_64 design requires SSE2, whose patents haven't expired yet). Something similar exists for PC GPUs with NVIDIA and AMD.

I'm pretty confident the situation wouldn't be that acceptable if all 3 weren't American corporations.

What do you mean by "obviously AMD is not the same company as Intel"?

I mean that they’re not directly comparable as companies - since AMD is fabless, they’re not competing on process, which I imagine is a huge chunk of Intel’s budget. But I could be wrong!

don't forget though that intel owns a sizeable chunk of amd (iirc)

I've heard that they have a patent cross license agreement, which is why AMD can use x86 and Intel can use AMD64, but I hadn't heard about the ownership.

I don't see Intel listed at http://www.nasdaq.com/symbol/amd/institutional-holdings

Where do you see they have an ownership stake? How much is it?

It was IBM who forced Intel and AMD to sign the agreement. From CNET:

"Early 1980s--IBM chooses Intel's so-called x86 chip architecture and the DOS software operating system built by Microsoft. To avoid overdependence on Intel as its sole source of chips, IBM demands that Intel finds it a second supplier.

1982--Intel and AMD sign a technology exchange agreement making AMD a second supplier. The deal gives AMD access to Intel's so-called second-generation "286" chip technology."

I don't think so. They certainly don't show up on any of the required disclosures of major AMD holders, so if they do hold AMD stock, it's less than 1%.

You do not recall correctly.

actually I do, but it turns out it was not 'sizeable' and it was a long time ago, so it may have been divested by now.

I will be adding to AMD's bottom line, since I'll be buying their CPU/GPU next time my workstation needs updating. This is fundamentally due to the grief caused by Intel's handling (or the lack of it, which is perhaps a more accurate description) of the Spectre/Meltdown crisis.

I'm simply not interested in Intel's kind of mentality when it comes to the hardware that runs all my computing. Nah.

Great news! I hope AMD releases a reasonably priced Intel NUC equivalent with their new APU. In the same compact, businesslike case (without gamers' ridiculousness), with a couple 40gbps USB-C Thunderbolt 3 ports, powered by USB-C/PD cable.

AMD would have to license Thunderbolt from Intel, so it's unlikely.

Apparently not: "Intel to make Thunderbolt 3 royalty-free in 2018"[1].

[1] http://www.zdnet.com/article/intel-to-make-thunderbolt-3-roy...

That's good news! AMD can finally have full stack.

Why is thunderbolt such a big requirement for a NUC equivalent?

I guess external GPU as well as pro-audio equipment. NUC is super small and handy (I have 3 of them) and powerful enough CPU-wise to do almost anything.

Is it possible to have an external GPU via USB-C that would be competitive to one via Thunderbolt? I understand that there may be nothing interesting as of yet.

No, USB-C doesn’t offer DMA.

Some of the new Intel processors come with AMD Radeon graphics, so...


I would love to see them release something that competes with or beats the new Kaby Lake G NUCs

I realize they've always been an underdog compared to Intel, but they still dominated the console market. When every Xbox and PS4 sold has your CPU in it, how were they not profitable?

Were they dumping all that money into Zen core R&D? With the Meltdown craziness, will this be the year we start to see AMD return to the data centre?

The console market is where the chip maker in second or third place usually tries to make a play, because console makers are basically assured sales - but they will have strange requirements.

The PS3, Xbox 360, and Nintendo Wii all used IBM chips. They tried a bunch of new architectures, but for the most part the developer ergonomics were horrible.

So for the PS4 and Xbox One they decided they needed regular computer parts. I'm sure AMD and Intel both put in bids, but AMD would have been the one extremely motivated to make sales to Microsoft and Sony. As time goes on, you can make higher and higher margins since the chips don't change (though on the other hand we are seeing console revisions within the traditional generation). But I think the margins will keep increasing.

Well, there's motivation on AMD's part, but they also have a value proposition that nobody else could have matched. They can provide both the CPU and GPU on one chip, which simplifies the console designs pretty dramatically. More compact, easier to cool, etc.

Intel graphics solutions were (and remain) sub-par, and Nvidia has no x86 solutions (and their ARM chips would have been underpowered).

I don't know if an ARM chip really would be underpowered - remember, the consoles are using the ultra-low-power Jaguar cores. Probably not a million miles away from an A57, or whatever Denver-based core Nvidia could pull out of the bag.

It definitely wasn't underpowered for the Switch. And they managed to get Skyrim and Doom running on that. I'd imagine they could do even better performance wise in a traditional console rather than kind of a portable hybrid.

The Switch is definitely underpowered compared to the other games consoles though (and certainly compared to PCs). Reviews consistently mention that it's underperforming its competitors in frame rates, graphics quality, etc. Heck, it doesn't even do 1080p in all modes of play.

A lot of that has to do with it being a handheld hybrid, as you point out, but these compromises would have been a lot harder to stomach if it had been a dedicated set-top console like its direct competitors.

The Switch has sold almost half as many units in the last 10 months as the Xbox One has in the last five years.

If the only concern is resolution and not quality or gameplay, then the Switch doesn't stack up, but I think the sales numbers point to the Switch being a smash hit with consumers so those compromises seem stomachable from my perspective.

(Anecdotally: I wish the graphics in BotW were better, at least in terms of draw distance, but then I take the Switch out of the dock and take it to work to play Mario Kart with my coworkers and it seems like a fine tradeoff to me.)

They're not debating the competitiveness of the console. The OP suggested a modern ARM core is comparable in performance to the Jaguar cores used in the AMD chips in the PS4 and Xbox One.

The Switch's ARM core is garbage even compared to modern ARM cores; it's not only an old GPU and CPU, it's also quite underclocked compared to other products with the same SoC.

That just sounds like a Wii 2.0 situation to me. The Wii sold like crazy thanks to the motion controls gimmick but due to the console being so vastly underpowered there was no real third party support and as a result, most of the Wii game releases boiled down to shovelware.

And while this time the Nintendo gimmick seems aimed more at the core gamer crowd (Take your console game anywhere!) I'm still not convinced this won't be turning out the same way like the Wii did.

Btw: If you want BotW with better graphics you can play it using a Wii U emulator, all the 4k BotW your eyes can handle.

can i transfer my saves?!?!

Well, Skyrim is from late 2011, so I'm not sure it's necessarily a good indicator that a modern game system processor is up to snuff. Doom is considerably more recent, but then again, I'd be willing to bet Carmack's legacy at Id is strong, and there's lots of people there that could (and possibly did) tweak the engine for good performance given the Switch's limitations. Actually, I think they demoed Rage on mobile, so they probably already had it highly optimized for ARM.

That said, I love the switch, and BotW has quickly become one of my all time favorite games. My only problem is resisting the urge to kick the kids off the switch so I can play. ;)

I know it was part of the appeal for some (most?), but I hated the breaking weapons. My method of playing was to use the crap weapons and hoard the nicer ones, trying not to use them. But the unfortunate side effect is that the crap weapons had poor durability, so you'd be mid-fight and they'd break.

Well, the benefit is that they do double damage (I think) when they break, so it's not as bad as it could be, but yes, they do seem to break too often. Then again, it depends on your play style. I'm a completionist and item hoarder, so I'm running around with almost an entire inventory screen full of high-end weapons and using the crappiest I have (which is still pretty good by now) to fight with. Having to drop a great thunderblade with extra durability because there's a chest with something better and I'm already full of better weapons is excruciating. I would agree they break too quickly though, as going through multiple weapons in a single battle is sort of ridiculous. Lasting maybe 2x to 3x as long might be good (or having a difficulty level that affected it).

> they managed to get Skyrim and Doom running on that

Original Doom was already running on the SNES some 25 years ago.

Getting something to run is not really that big of a hurdle, as you can always scale down resolution/graphical fidelity/frames per second. The question after that is rather: who would want to play an obviously inferior version of the game? Because that's exactly what these Switch versions of Skyrim and Doom are: inferior.

Inferior, except for the fact I can take them with me.

The value proposition is also developer ergonomics on x86 systems. I think I failed to mention that, sorry.

Those margins are usually baked into the contract and decrease over the lifetime of the console. At least that's how I've heard it told by people who were in the know.

I'd suggest those people "in the know" are probably talking out of their arse.

It's commonplace for consoles to go through a number of hardware revisions over their lifetime. These revisions are rarely made for performance reasons; instead they're made to improve reliability, security, and manufacturing cost. Setting aside security (which would be done as a reaction to a hack that can't be patched with software), the whole idea is to increase the margins for the manufacturer. Note that the brand owner may, in some cases, sell consoles at a loss, but the manufacturer is very unlikely to be working to the same business model.

Eh, I've been in the consumer electronics space a while and it's pretty common for a SoC/CPU vendor to offer a time-based tiered price as a part of a procurement contract.

As your chip goes from cutting-edge process technology ($$$) to mid-tier ($$) and commodity ($), they usually pass those savings along; otherwise they would incentivize a mid-cycle refresh, and suddenly a large revenue stream for your chip dries up as the product moves to a (potentially) different vendor.

> "as a product moves to a (potentially) different vendor."

The only other manufacturer of x64 CPUs is Intel, and the only other manufacturer of (competitive) GPUs is Nvidia. Why would Microsoft or Sony incur the high NRE cost to switch to two different suppliers mid-cycle just to shave a small amount off their unit cost?

That's not to say AMD wouldn't pass on some of the savings to Microsoft and Sony, I'm sure they would, but it's not likely to be a one-sided deal.

The same reason you fight tooth and nail for pennies on the BoM: because at scale, every cent matters. There's also the inverse, where if the jump to the next chip is small (see Xbox One/S/X) you may migrate early and EOL a product before planned, leaving a chip production line with no product.

You also always have multiple vendors bidding so if one of them offers a sliding scale you're going to use that as leverage against the other vendors.

> "The same reason you fight tooth and nail for pennies on the BoM, because at scale every cent matters."

It doesn't matter if the potential savings are sunk by the NRE costs of shifting to new vendors before the projected end of the product. You have to look at cost holistically.

Well, duh ;).

When you're talking 15M+ units, being able to drop $1-2 from the BoM is a huge motivator for things like this. When you get into millions of dollars' worth of savings, you can fund a lot of engineering time.

They also may never go through with the switch but use it as leverage for a better deal. Back when I did SoC evals we'd take products right up to the brink of production just to apply pressure. It was almost like a game of chicken between 2-3 vendors to see who blinked first and gave us a better BoM/deal on the core chip. These weren't easy bring-ups but usually were vastly different GPUs + CPU combos. It's very much a thing that happens, at least back when I was involved with stuff like that ~6 years ago.

I don't doubt it happens with other electronics manufacturing, and negotiating with multiple vendors is a good negotiating tactic, my comments are purely directed at the console business.

If you look at the average lifetime of a (successful) console, you're looking at between 5 to 10 years. Within that time, multiple revisions take place, often to replace peripheral components to reduce costs, but in the case of the longest living consoles they will often get a design refresh, including a redesign of the CPU / GPU. I can think of only one example in recent memory where a console switched manufacturers for these core components during its lifetime (that example being the GPU for the Xbox 360, which switched from AMD to IBM). In the case of the Xbox 360, IBM were already manufacturing the CPU, and the redesign combined the CPU and GPU into a single SoC. Aside from that one example, there haven't (to my knowledge) been any other examples of companies switching manufacturers for their core components mid cycle. Contrast this with the PC business, where OEMs will frequently swap between motherboard/CPU/GPU manufacturers, and you'd have to wonder what makes the dynamics in the console market different.

It sounds like we're just arguing past each other at this point.

I spent a while in the industry (and shipped some god-awful titles), so I tend to trust the people I know (and don't plan on outing them), but I can understand if you don't want to take the word of a random person on the internet.

For what it's worth, I don't doubt you know industry insiders, and I agree we're just talking past each other at this point. It's completely uncontroversial to state that console manufacturers will shop around, my main point was that there's a certain amount of momentum (both business and technical) that comes into play with the more complex parts of these consoles, which is compounded by the limited number of companies that are able to compete (a high proportion of the consoles of the most recent generations have used IBM and AMD chips). It's not the same as the embedded space where there's tons of competitors. In any case, I respect this conversation is going nowhere fast, so I'll respect you have a different point of view on this, and concede that I was unable to convince you of mine.

Yeah, it's an easier conversation in person(combined with the fact that there's generally very few secrets inside the industry). It's certainly an interesting space and appreciate the conversation.

I know you were probably talking about traditional consoles (Xbox/PS/etc.), but when you start including portables (with the Switch just smashing the 1-year mark at 15M units), the field opens up significantly. You've got at least 5-6 different GPU vendors and a whole host of companies providing SoCs. Embedded GPUs have been making huge strides, and I wouldn't be surprised to see the top end of that space start nipping at Nvidia/AMD soon. Lots of the people in that domain came from desktop (Adreno is an anagram of Radeon, for instance ;)).

Protip: When you accuse people of talking out of their arse, don't then proceed to talk out of your arse.

Please don't post uncivil comments here, even if you're right that someone else is wrong. There's nothing substantive here, which means you're just degrading the site.

If you know more, which you may well do, the way to comment here is to share some of what you know in a way the rest of us can learn from.


Protip: When you disagree with someone, try to offer more than just an ad hominem.

If you disagree with something I've said, by all means explain why.

I'm not trying to start a debate here. It's obvious from your posts that you don't have any particular knowledge of AMD's console dealings, or even console dealings in general. You only speak to generalities which are well known among even casual observers of the industry. None of what you've said specifically refutes the original claim. I know you don't have any grounds to speak with any authority on the matter, and you know it too if you're honest with yourself.

> "I'm not trying to start a debate here."


> "It's obvious from your posts that you don't have any particular knowledge of AMD's console dealings, or even console dealings in general."

I never claimed to.

> "You only speak to generalities which are well known among even casual observers of the industry. None of what you've said specifically refutes the original claim."

It's precisely because these market generalities do not match with what was proposed that I called it into question.

Developing new silicon is a risky endeavour. Even going from one process node to a smaller process node without changing the design architecturally is fraught with problems, resulting in lower yield until the manufacturing kinks are worked out. Why would a chip manufacturer agree to a deal where they're expected to take on increased risk for a lower reward? It makes no business sense. That's what I was calling into question.

> "I know you don't have any grounds to speak with any authority on the matter, and you know it too if you're honest with yourself"

I may not personally know the same people vvanders knows, but I can put myself in the shoes of a businessman running a chip manufacturer. I enjoyed debating with vvanders, and aside from leaving out the dig at the people they know in the industry, I'd do it again.

They previously (I say previously because they're doing a mild re-org right now) organized their console business under their "Enterprise, Embedded & Semi-Custom" business unit, which is broken out by itself in the article (https://www.extremetech.com/wp-content/uploads/2018/01/AMD-Q...).

You can see that in 2016, their console (+ other stuff) business was making an operating income (basically revenue minus cost of business, not including R&D) roughly the same size as the losses in their main CPU & GPU business. This is before their ~$400M in operating losses in other categories. I think R&D gets rolled into that? I'm not sure.

Opterons were the only server chip worth using for years, until Xeons got good.

> When every xbox and PS4 sold as your CPU in it, how were they not profitable?

I'm guessing razor-thin margins to get the contracts.

I think the console market is an unhealthy case, because consoles are sold at a loss by big incumbents (Sony and MS), who try to reap profits through lock-in rather than hardware competition, so AMD can barely profit on hardware there. It's starting to improve lately as the market becomes a bit more competitive and less stagnant; you can see that the rate of console hardware upgrades is increasing. Their prices will also become more realistic as a result.

The PS4 was never sold at a loss (Sony learnt from its past with the PS2 and PS3). The Xbox One, however, was: https://arstechnica.com/gaming/2013/11/report-399-playstatio...

It's a bit pedantic of me to say this, but if your cost of goods is $381 and the retail price is $399, you are losing money: there's the retailer's cut, distribution, marketing, R&D, etc. They were losing money to invest in the platform.
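To make that concrete, here's a back-of-envelope sketch of the per-unit math (the $399/$381 figures are from the linked article; the retailer cut and per-unit overhead are assumed, illustrative numbers, not reported ones):

```python
def per_unit_margin(retail, cogs, retailer_cut, other_per_unit):
    """Platform holder's margin per console: retail price minus the
    retailer's cut, the cost of goods, and per-unit overhead
    (distribution, marketing, ...). Negative means a loss per unit."""
    wholesale = retail * (1 - retailer_cut)
    return wholesale - cogs - other_per_unit

# $399 retail, $381 cost of goods, with an assumed 10% retailer cut
# and $10/unit for distribution and marketing.
margin = per_unit_margin(399.0, 381.0, 0.10, 10.0)  # comes out around -$32/unit
```

Even modest assumptions for the middlemen flip the headline $18 gross spread into a per-unit loss, which is the point being made.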

The article says the PS3 cost of goods was $800 per unit at launch, but it is worth remembering that the PS3 was an investment in two strategic initiatives for Sony. One was the PS3 standard, that they would earn future software royalties on. The other, and at the time maybe more important, was the BluRay standard.

Three... They were also investing heavily in the Cell architecture, codeveloped with IBM and Toshiba. That one just didn't pan out.

Well, it kinda did - just not in the way they expected.

The changes game developers had to make to handle Cell later allowed them to support multicore CPUs and GPGPU much more easily.

AMD probably used consoles to keep the wafer supply agreement above the minimum threshold.

Agreement with who?

GlobalFoundries (aka the former AMD fabrication arm; GloFo was spun off as AMD lost money in the late 2000s).

Despite the spin-off, GloFo and AMD remained tightly aligned business-wise, and AMD signed agreements such as a minimum silicon wafer usage commitment.

Why did they have to spin off the foundries to begin with?

They were rapidly running out of money, and running a competitive fab requires billions of investment every year. They could no longer sustain their fab operations, so they spun them off and sold them to ATIC (which is part of the Abu Dhabi investment fund, which is a project by Abu Dhabi meant to turn their oil revenues into something sustainably profitable for when the oil runs out).

In order to make the fab seem like something someone would want to buy, AMD tied their futures to it by entering into a wafer supply agreement where they would buy certain amounts of GloFo's production whether they needed it or not.

This agreement was a millstone around AMD's neck during the worst years. It means that AMD absolutely wants to sell at least a certain amount of silicon, even at negative margin if necessary.

To not go bankrupt.

Indeed. People may not know just how bad AMD's situation got.

By 2013, AMD had even sold its headquarters in a "reverse mortgage" to remain solvent: https://arstechnica.com/information-technology/2013/03/amd-s...

AMD had almost a full decade without making money.

I tried to buy a RX580 a while ago to play some games I have been meaning to play and they were sold out everywhere.

Part of me wonders if the long term effects of the mining craze will be negative for the makers of graphics cards and in return PC component makers as a whole. Many people might be turned off of pc gaming by the prohibitive prices of buying cards at 2.5x MSRP and turn to consoles instead. Or in my case I intend to wait and pick up cheap hardware when crypto tanks hard (i.e. Tether fraud collapses), or GPU mining becomes unprofitable, or altcoins switch to proof of stake.

Yes, this is a major concern for PC gaming. I have a couple of 1070s I bought for $400 each; now I could get double that used. In-freaking-sane.

just curious but why not sell them now and buy newer ones later for non-inflated prices?

How is it possible that AMD seems to be at the limit of production for their GPUs but barely beat estimated earnings this quarter?

If you're trying to be economically rational you're better off simply using the GPUs to mine while you're not using them. ROI time is around 6-9 months for most GPUs these days.

Does that include electricity costs? For which cryptocurrency?

Yes, the electricity cost is almost insignificant. Currently the most profitable coin on GPU is XMR; before that it was ETH. Profitability changes as the prices fluctuate.
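If you want to sanity-check the 6-9 month ROI figure mentioned above, the arithmetic is simple. Here's a back-of-envelope sketch; the card price, daily revenue, wattage, and electricity rate plugged in are hypothetical placeholders, not actual market figures:

```python
# Back-of-envelope mining ROI, net of electricity. All numbers used
# below are illustrative placeholders, not real market data.
def roi_days(card_price, daily_revenue, watts, usd_per_kwh):
    """Days until the card pays for itself, after power costs."""
    daily_power_cost = (watts / 1000) * 24 * usd_per_kwh
    daily_profit = daily_revenue - daily_power_cost
    if daily_profit <= 0:
        return float("inf")  # card never pays for itself
    return card_price / daily_profit

# e.g. a $400 card earning $2.50/day at 150 W and $0.10/kWh:
days = roi_days(400, 2.50, 150, 0.10)  # ~187 days, i.e. ~6 months
```

At typical home electricity rates the power cost is indeed a small fraction of revenue, which is why it barely moves the ROI.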

>just curious but why not sell them now and buy newer ones later for non-inflated prices?

At least in my case, it’s because I want to play games in the meantime between now and when mining finally dies.

AMD is in a great place at the moment - but we need to see what the smaller Zen cores coming this year can do, and what the Zen 2 release next year look like. If they have solid, significant improvements, it means that AMD is set for a bit, and can really compete and take marketshare from Intel.

Better yet, I am really hopeful about Epyc - it still doesn't seem to be shipping in huge numbers, but as someone really burned by Meltdown, it seems like perfect timing for AMD to compete.

If you don't mind me asking, in what capacity were you affected by the Intel Meltdown exploits? Was it an issue in terms of security, performance, etc?

Not OP, but I personally was pretty heavily affected - I run Robohash.org, which runs off a bunch of Digital Ocean droplets.

I needed to fire up ~ 30% additional nodes after they did their migration.

I didn't measure as carefully as I could have, but the existing nodes couldn't keep up with demand after the upgrade.

Database performance dropped badly, and we had to increase the number of database servers.

A lot of people are buying AMD/ATI GPUs to mine Ethereum; their GPUs do extremely well on this one specific cryptocurrency. With ETH moving to PoS this year, thus removing the need for GPU mining, I wonder how this will impact the price of AMD stock in the future. Lots of people use Nvidia to mine other GPU-minable altcoins.

There are significant technical obstacles to overcome with the switchover to PoS. The switchover was supposed to happen more than a year ago (i.e. given the current ETA they are more than 2 years behind schedule) and the release date is being pushed back essentially as we go.

In fact the technical problems are so severe that it's questionable if the switchover will happen at all. All current PoS coins use centralized control due to these technical issues.

NVIDIA GPUs are actually typically more efficient for mining most coins, AMD cards are only preferred because they have quicker ROI. If mining profits continue to decline, that efficiency comes more into play. And, NVIDIA may be about to introduce a new generation which further improves efficiency.

If they do, god only knows what the prices are going to be. They're already nuts on Pascal, let alone if miners are snapping them up for the efficiency. Availability is going to be shit too.

> NVIDIA GPUs are actually typically more efficient for mining most coins, AMD cards are only preferred because they have quicker ROI.

In what way are Nvidia cards more efficient? Vegas are in some cases much faster, and at worst equal to a GTX 1080.

> In what way are nvidia cards more efficient?

The usual way, i.e. "efficiency=performance/power"? Here, I did the math for you. Numbers from whattomine.com:



As you can see, apart from Vega being a very efficient Cryptonight miner, and Polaris being an acceptably efficient Ethash (Ethereum) miner, AMD cards have absolutely garbage efficiency, they use just tons of power for their performance. And they are one-trick ponies: there are only one or two decent algos for AMD cards.
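To make the comparison concrete, the calculation is just hashrate divided by wall power. A minimal sketch (the hashrates and wattages here are made-up placeholders for illustration, not the whattomine.com figures):

```python
# "Efficiency" in the sense used above: performance per watt.
# The hashrates/wattages are illustrative placeholders only.
def efficiency(hashrate_mhs, watts):
    return hashrate_mhs / watts

cards = {
    "RX 580":   (30.0, 135),  # Ethash MH/s, watts at the wall (hypothetical)
    "GTX 1070": (27.0, 110),
}
# Rank cards from most to least efficient.
ranked = sorted(cards, key=lambda c: efficiency(*cards[c]), reverse=True)
```

With numbers like these, a card with a lower hashrate can still come out ahead once power is in the denominator, which is the whole point being made here.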

The 1080 and 1080 Ti specifically have problems with a few algos due to their GDDR5X (which results in half of the memory bandwidth being wasted) but on the whole NVIDIA cards are extremely efficient miners across a variety of algorithms, and usually keep up with AMD on efficiency even on Ethash.

The 1070 Ti, in particular, is the reigning champion of efficiency. It's basically a 1080 with GDDR5 instead of 5X, which makes it the most efficient card in most algos, and its worst-case is "only" being as efficient as a 580 at Ethash.

Again: people prefer AMD cards because they ROI quicker during the booms. The half of this that efficiency obscures is power: AMD cards have higher TDPs and are capable of pumping out a lot more watts per dollar (RIP Planet Earth). But when profits are down and efficiency matters (or for those with expensive electricity), NVIDIA is better, and you also have that flexibility to move across algorithms if there is a hot new currency.

There is also a pretty substantial cargo-cult with regards to mining: people use AMD because that's what they see other people using, and they don't bother to do the math. But NVIDIA has the same efficiency advantage in compute as they do in gaming: 1.5-2x the performance-per-watt in most situations.


The 580 and Vega 56 (especially) are popular for modding. People mining with the 580 are running them at only 120w.


These numbers take that into account - mouse over the device name on whattomine.com to see what mods/settings they're using for each device, f.ex the 570/580 use the Eth bios mod and 1100/2000 with a 0.2V undervolt. Pascal cards are undervolted as well. Without those mods you are going to use even more power and/or get a much lower hashrate (particularly on Vega).

Again: AMD is way, way behind on efficiency and has been since Maxwell. There are a few algos they do well on, but on the whole they are roughly a generation to a generation-and-a-half behind NVIDIA (Vega 56 is roughly as fast and efficient as a 980 Ti). You buy them because they're cheap for the hashrate, not because they're efficient. Think cheap-and-nasty here.

The thing with AMD power measurements that you have to be real careful of is that AMD's sensor is only reading the core power, not any power spent on memory or any losses in the VRMs, and they do it very inaccurately (and don't account for efficiency/etc). So you should be taking people's chatter on Reddit with a big grain of salt - unless you are measuring the power at the wall, or using a digital PSU, you are probably getting figures that are ~40W low, and potentially have 20% variation or so from the actual figure. Anyone using GPU-Z to measure their card is doing it wrong.


A 1070, for example, will mine marginally more ETH than a 580 but will consume less power. The case you mentioned, mining Monero, is an outlier at best, as CryptoNight is used by no other major crypto, and Vegas are pretty good at it despite being power-inefficient compared to a 1080 Ti.

AMD hasn't scaled up their production to meet that demand, and neither has Nvidia; they're both wary of a crash causing demand to stop as miners sell their burned-out GPUs.

Right now that makes things hard for everyone not a miner, but if AMD ends up being the only option for gamers while Nvidia still sells to miners, they might actually gain from the mess.

> they're both wary of a crash causing demand to stop as miners sell their burned-out GPUs

Offtopic FYI: GPUs that are in use 24/7 are probably in better shape than those that encounter more thermal (on/off) cycles. There's obviously more wear and tear on the moving parts, but fans are easier to replace than mechanically stressed silicon/PCBs.

a) It is not at all certain, since both processes do damage to the board and chips, and not a single person in this thread knows which will be more severe for each small part of the board or its connections.

b) Have you actually tried replacing the monstrous coolers on modern cards? Or even buying spares? They are rare, expensive, hard to disassemble, and hard to fit back. Some cards even use glue stickers as the memory thermal interface; you can potentially tear chips away.

Regarding B I just checked iFixit for Radeon and found 2 (two!!) guides for graphics cards. One for reapplying thermal paste, and one for fixing a noisy fan with I think tape. All the other guides are for laptops specifically. If you know any other good written guides (pref not videos) available to clean and/or replace fans on graphics cards I'd like to learn about them.

Do they fail from thermal stress or electromigration?

The thing to remember about NVIDIA is that their production levels are much higher to begin with due to their overwhelming marketshare. They have a lot more inventory in the channel at any given time, which gives them more of a buffer when GPU mining heats up.

I actually think AMD's timing on increasing production is really questionable here. Barring another major runup in Bitcoin prices (altcoin profitability follows Bitcoin prices), which seems unlikely at the moment, network difficulty will continue to increase and mining profits are going to continue to decline. That means miners are going to taper off purchases, and some may even be dumping their rigs.

So AMD is essentially increasing production at the exact moment when they may already have a problem with the market being flooded. The time when they needed to increase production was 4 months ago, it's too late now.

On top of that, NVIDIA may be pushing out their new GPUs within the next few months, implying another step in efficiency which puts AMD's performance/efficiency even further behind. They are already about a generation behind (roughly competitive with Maxwell), despite being on similar nodes. I'd be prepared for NVIDIA to make a minimum of a half-gen step (30%), if not a full-gen step (60%).

AMD's timing seems exquisitely poor here. They didn't want to bet on crypto, and now it's too late. Fortune favors the bold.

Memory shortages have also played their part.

The /r/buildapc/ subreddit is depressing right now. As long as I can remember, it has always been the case that you can build a higher quality and faster computer than you can get for the same money with a pre-built.

However, there are tons of threads due to skyrocketing memory and GPU prices stating that it doesn't make sense to build right now and you can get better deals on pre-built machines.

What is more likely - people saying "oh, I can't mine now, guess I'll have to sell all my hardware at a loss", or just switching to whatever the next GPU-oriented coin will be? For me it is obvious. Until (if) the whole crypto market crashes, people will continue to mine whatever they can on GPUs, propping up its valuation.

Not surprised. Ryzen was a hit and their GPUs are great for Linux and mining.

I had no end of issues with Nvidia cards on Linux, so I had to buy a ridiculously expensive AMD GPU (thanks, crypto miners!) which has since been rock solid. I needed a monster card that can drive four 1920x2160 displays, and the liquid-cooled Fury didn't disappoint (plus it's nice and quiet).

4x 1920x2160 ... is that the same as 2x 4k screens?

My 1080TI is only reasonably happy to run 1920x1200x2 (60hz) + 3440x1440 (95hz); it won't fully idle the clocks, resulting in higher temps.

Yeah, it's really 2 4k screens in picture by picture mode making them 4 individual displays to the computer, with 4 cables. I find that gives me more usable real estate than 2 4K screens because of the way it's easier to maximize windows and place them side by side.

I think AMD GPUs are crap; at least their Windows drivers. I recently built a gaming PC (MSI RX-580 GPU) for my son, and never had so many frustrating crashes, black screens, etc. Waited a month for their 5.18 driver release, and no improvement.

What a piece of shit. Sold it on eBay in 30 seconds for $40 more than I paid for it, and bought an Nvidia 1060; works flawlessly. The guy I sold it to said the RX-580 is working fine for mining. He got a card for $300 and I got rid of a headache.

> 5.18 driver release?

How many years ago was this? The latest AMD Adrenalin drivers are at 18.1.1. Oh and on Linux the open-source AMDGPU drivers (made by AMD themselves) are now part of the kernel since not too long ago, so any recent cards will work out of the box on a distro with a recent kernel.

AMD's driver scheme is basically year.month.something

I haven't figured out the last number. But the 18.1.1 release basically means "2018 January". Similarly, their 17.7 release meant "2017 July".
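Assuming that really is the whole scheme (and guessing that the last number is just a point release), decoding a version string is trivial:

```python
import calendar

# Decode AMD's year.month(.point) driver version scheme described
# above. The meaning of the trailing number is a guess.
def decode_amd_version(version):
    parts = version.split(".")
    year = 2000 + int(parts[0])
    month = calendar.month_name[int(parts[1])]
    return f"{month} {year}"

decode_amd_version("18.1.1")  # "January 2018"
decode_amd_version("17.7")    # "July 2017"
```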

5.18 seems to predate AMD's current naming scheme. So it was probably something from a long time ago...

Well 18 whatever; that was released after the 12/17 release. Fuck AMD. Good riddance.

I think your unnecessarily caustic words are reason enough for your downvotes. That kind of language isn't really welcome in ycombinator. This isn't Reddit: there's an expectation for a higher-class of discussion.

With that being said: the 17.12 update, aka Adrenalin December 2017, on Windows, broke DirectX9 and also broke a few OpenGL games. AMD suffered a major PR hit over the winter break when a few forum moderators said things they shouldn't have said about this issue (which was eventually fixed when the executives returned to work after the New Year and performed PR damage control). So there's a lot of latent anger in the gamer community around AMD drivers right now.

With that said: it isn't too difficult to roll back to an earlier update in these cases, but AMD did release a DirectX9 fix by the next month (January 2018). The OpenGL issues seem to still be a known issue, and that's why a number of people are remaining on 17.7.

I had a good PSU and motherboard, a mostly clean OS install, and no overclock, and my AMD card ran like clockwork, with zero issues over 3 or 4 years. Literally zero crashes while running almost 24/7 and gaming regularly. Don't skimp on important PC parts or install crapware, and you'll see that both manufacturers produce pretty decent software (putting shady marketing practices aside, like the internet account requirement to update GeForce drivers, or Raptr-craptr on AMD; just from a pure tech point of view).

At least you don't need to register with a Facebook account to use all of the driver's features.

My RX 580 is working flawlessly on Windows and Linux. Perhaps your computer has a problem.

The experience can't possibly be that bad for everyone. Some other factor must have been at play with your setup.

Ryzen looks really, really good. I'm planning on upgrading to Team Red over Intel as soon as DDR4 prices come down a bit. Intel's only advantage seems to be AVX512 (and that comes with a clockspeed penalty), and a truly unified L3 cache (good for databases).

But Ryzen's "split" L3 cache seems to be great for process-level parallelism (think compile times), and seems to scale to more cores for a cheaper price. They have an Achilles heel though: 128-bit wide SIMD means much slower AVX2 code, and no support for AVX512.

But for most general-purpose code, Ryzen looks downright amazing. Even its slower AVX2 throughput is mitigated by having way more cores than the competition. AMD sort of brute-forces a solution.

AMD's GPGPU solution looks inferior to NVidia, but for my purposes, I basically just need a working compiler. I don't plan on doing any deep learning stuff (so CuDNN is not an advantage in my case), but I'd like to play with high-performance SIMD / SIMT coding. So AMD's ROCm initiative looks pretty nice. The main benefit of ROCm is AMD's attempts to mainline it. They're not YET successful at integrating ROCm into the Linux mainline, but their repeated patch-attempts gives strong promise for the future of Linux / AMD compatibility.

The effort has borne real fruit too: Linux accepted a number of AMD drivers for 4.15 mainline.

NVidia's CUDA is definitely more mainstream though. I can get AWS instances with HUGE NVidia P100s to perform heavy-duty compute. There's absolutely no comparable AMD card in any of the major cloud providers (AWS / Google / Azure). I may end up upgrading to NVidia as a GPGPU solution for CUDA instead.

OpenCL unfortunately, doesn't seem like a good solution unless I buy a Xeon Phi (which is way more expensive than consumer stuff). AMD's ROCm / HCC or CUDA are the only things that I'm optimistic for in the near future.

> I'm planning on upgrading to Team Red over Intel as soon as DDR4 prices come down a bit.

You probably want to buy that DDR4 as soon as you can - memory prices (DDR3, DDR4) have been consistently going up - not down[1]. It's insane. The price-fixing fines the memory manufacturers paid were clearly not punitive enough, we need more anti-trust action in this area.

I recently made a build on a budget, I had to snipe online specials on DDR4; waiting for the prices to come down is not a winning strategy.

1. https://pcpartpicker.com/trends/price/memory/

That trend seems to be over. China's antitrust regulator has put the DRAM cartel on notice, and they're willing to stand up state-sponsored fabs to deal with it if necessary. Coincidentally just after that, Samsung announced production is going up and prices are expected to decline over this year.




Pretty blatant market manipulation here by the DRAM cartel - and China has enough downstream manufacturing at stake here that they're willing to go to the mat over it.

This is fantastic news! I hadn't heard of this - it means my future memory upgrades will be reasonably priced :-).

> The price-fixing fines for memory manufacturers were clearly not punitive enough

This commonly touted meme seems to ignore market realities:


Data-centers buy TBs (not GB, but Terabytes) per machine. And then they buy ~20 machines per rack, and then fill an entire room full of racks.

When data centers decide it's time to upgrade from old 1.5V DDR3 machines (or 1.35V "low voltage" LPDDR3) to 1.2V DDR4 2666 MT/s, as they have been doing from late 2017 through (estimated) 1H 2018, it is only natural to expect DDR4 prices to rise.

The power-savings from switching from 1.35V "low-voltage" DDR3 to 1.2V DDR4 are big enough that there are lots of upgrades incoming from the biggest purchasers of RAM in the world.
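As a rough sanity check on the voltage argument: to first order, CMOS dynamic power scales with the square of supply voltage, so the voltage drop alone accounts for a sizable cut. (This is a simplification that ignores frequency, I/O termination, and everything else DDR4 changes.)

```python
# First-order dynamic power scaling, P ~ C * V^2 * f, holding C and f
# fixed. A simplification; real DRAM power depends on much more.
def relative_power(v_new, v_old):
    return (v_new / v_old) ** 2

lv_ddr3_to_ddr4 = relative_power(1.2, 1.35)  # ~0.79 -> ~21% savings
ddr3_to_ddr4    = relative_power(1.2, 1.50)  # 0.64  -> 36% savings
```

Multiplied across terabytes of RAM per machine and rooms full of racks, even the smaller figure adds up fast.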

> waiting for the prices to come down is not a winning strategy.

2H 2018 seems to be the current market predictions. This also works in line with waiting for Zen+ (aka Ryzen 2). If DDR4 prices are still high at that point, I'll reconsider then.

I plan on recycling my old R9 290x until GPU prices come down... but waiting till August 2018 may be long enough for GPU prices to normalize as well.

> When data centers decide it's time to upgrade from old 1.5V DDR3 machines (or 1.35V "low voltage" LPDDR3) to 1.2V DDR4...

So, which data-centers are upgrading from DDR2->DDR3 then? I ask because DDR3 prices are also trending upwards.

DDR3 prices trend upwards because fewer and fewer companies make it anymore.

LPDDR3 has one slight advantage: an advanced sleep state which uses less than 1/10th the power than normal sleep. Laptops, tablets, and cell phones use this sleep state regularly to remain in "suspended mode" for extended periods of time.

DDR4 cannot do this. As such, LPDDR3 still has some demand in the mobile marketplace, despite fewer fab-labs making it. As such, I'd expect DDR3 prices to go up.

LPDDR4 has the advanced sleep state, but is unsupported by Intel / AMD processors.

But most general computing is single threaded, while high core counts work well for code that often benefits from AVX512. High core counts are also good for webservers, but here AMD is once again going to have an underpowered consumer chip.

I wish though they'd produce power efficient Vega for gaming sooner. Looks like it's being pushed off to 2019.

Sapphire Nitro version requires 3 (yes, 3!) 8-pin power connectors. That's why I'm staying with RX 480 for the time being. It works very well with Mesa on Linux.

That's up to Sapphire, why they did it (probably for overclocking).

The reference design (including the LC version) and some other designs, like Asus Strix, require only 2 8-pin power connectors.

But they still are quite power hungry at least according to the TDP specs. So it gives an impression that Vega isn't really where it's supposed to be efficiency wise.

And I prefer to avoid reference design which is usually too noisy.

Many people are undervolting them, which brings power draw down considerably and leads to only a miniscule difference in performance.

Let's not pretend like this is unique to AMD though: NVIDIA products can be undervolted too. And if you are willing to drop 10-15% performance, you can cut power down by 40-50% on Pascal, just like Vega.

The same voltage-vs-frequency curve applies to both products.

So why are they so overvolted by default?

Unpopular truth: because that's where they need to be to ensure stability and yields.

There is a wide variation in the stability of hardware across various different tasks. An overclock/undervolt that is stable in one task is not necessarily stable in others, as anyone who's overclocked can attest. f.ex Just Cause 3 needed several hundred MHz less than I could get in TF2 or Witcher 3.

The voltage is set where it needs to be to ensure that there's no instability in any program, on any sample in a batch. People look at one task and one sample and assume that their OC must be stable on everything, on every card in the batch. In reality it's not, or not to the degree that the manufacturer requires.

Yes, you can get extra performance on any given sample by eating up your safety margin and pushing closer to the limit of that specific sample's frequency/voltage curve. But as the economists say: if there were free money laying on the sidewalk, AMD would have picked it up already. They're not stupid, they ship the voltages they need.

> They're not stupid, they ship the voltages they need.

That's my point then. Vega just needs too much by default. Hopefully next iteration will be less power hungry.

It's a design decision, which boils down to AMD loves to "overclock" by default to squeeze out as much performance as possible. I could easily imagine the mirror-universe version of this conversation had they done otherwise "You can gain 5% perf by 'overvolting' AMD GPUS. Why don't they do this by default?"

Whatever they make will be bought up by miners. Wait for the crash of cryptocurrency markets D:

For those that haven't seen it, Intel is releasing a new processor package that has an integrated AMD GPU on it; I am eager to see how the sales for that go. It seems to imply that Intel has given up on its own iGPU aspirations, which makes no sense given how much they've doubled down on GPU tech in recent years. But it may just be Intel playing the long game to compete with Nvidia, which is killing it in the GPU department.


Intel's plan seems to be doing this temporarily until their own high performance GPUs are ready.

Don't they say this every year? That their integrated GPU will be better, but it's still crap.

Well, to be fair, Intel HD graphics is a lot better than GMA. With Crystal Well, it's actually pretty close to something like a GTX 750 and outperforms even the new Vega APUs.

But this time they're actually making a play for the discrete graphics market. They've hired Raja Koduri and everything. Not the first time they've done that either (see: Larrabee) but they do look to be making a serious attempt.

If it genuinely outperformed the Vega APUs, why would Intel put a Vega core on a new processor line...

They're not in the same performance class. A Crystalwell Iris Pro 580 (and the 2500U) are slightly slower than a GTX 750 non-Ti. The new Intel APU is going to be ballpark RX 570 performance - or something like a 1060 3 GB on the NVIDIA side. About 4x the performance - which is absolutely necessary for the sorts of applications Hades Canyon is aimed at, like VR.




Having a discrete graphics die, and especially having access to HBM2, makes a huge difference in performance. There isn't much you can do with 30 GB/s of bandwidth to share between CPU and GPU. Having Crystal Well L4 cache is a huge boost but it's still a halfassed fix compared to having proper VRAM available.

Of course, it's also a vastly more expensive part as well. Just the CPU+GPU package is more expensive than some of the 2500U laptops.

Presumably Intel is aiming for something more like Vega M GH with Jupiter Sound/Arctic Sound - it makes little sense to design a low-end discrete part with no room for future performance growth.

1/3 of the sales increase from crypto, probably not sustainable.

I don't think it has to be sustainable. They just need more time for EPYC to really take off and for Ryzen to continue gaining market share. I think their strongest areas are going to be CPUs and APUs, and things like Kaby Lake G. They shouldn't give up on graphics but it's going to take longer for them to be super competitive there.

If it gives them the capital to start seriously competing in deep learning, then it doesn't matter.

If anything, (having assumed they also know the crypto position might not be sustainable) it's better that it's not. Hopefully that means enforced innovation from them for future products, bank the cash now and invest in newer tech for the future.

Why are Nvidia's margins so high in comparison? Given the technology and patent portfolio, I'm surprised that someone hasn't tried to take them over.

I am curious about this as well and would like the context if anyone has it.

As far as I could find, the interesting thing about Nvidia is that while they only make GPUs, they've learned how to cross-market them across domains (data centers, AI, gaming, automotive) and to charge premiums according to each niche. So while the products themselves have subtle differentiations, you more or less get access to each vertical from the same R&D for each chip generation. This may be too much of a generalization, but it looks as though Nvidia has figured out how to create wildly different products out of the same GPUs and multiplies its revenue accordingly.

Are AMD's margins actually any worse in the graphics card market? Or is it just that AMD is also playing in the (lower-margin) CPU market, so their overall margin is worse?

Nobody here can tell you that officially, at least without breaking an NDA, but the answer is very obviously "yes".

Comparing apples to apples, AMD sells their Vega 64 flagship at a MSRP of $499, NVIDIA sells a product with an equivalent die size at $1200. AMD sells their cutdown at $399, NVIDIA sells theirs at $799. And that's before you figure that NVIDIA is using commodity GDDR5/5X while AMD is using expensive HBM2 on consumer products - NVIDIA charges between $3k and $15k for their HBM2 products. So half the MSRP, with a more expensive BOM.

AMD's margins on Vega are trainwreck bad. Some experts actually think they are losing money on every unit sold at MSRP, hence the "$100 free games bundle" on launch, and the de-facto price increases above MSRP during the fall. They're banking heavily on HBM2 costs coming down, and probably also on NVIDIA not being aggressive with the launch of gaming Volta (aka Ampere). Apart from Vega FE, they really don't see any of the extra revenue from the inflated prices during the mining boom either. That's all going to AIB partners and retailers. All AMD gets out of it is volume, and up until now they've been reluctant to increase production.

In contrast Ryzen is actually dirt cheap to manufacture due to its MCM layout. Their margins there are probably better than Intel, even with prices significantly below the launch prices.

There is no way to know their margins for GPU because in their financial reports, AMD combines "CPU and Graphics" into one bucket.

Can you elaborate further why you think their technology and patent portfolio can be taken over by "someone"?

I thought Intel might have wanted to purchase them years ago when their own homegrown graphics stack was so terrible instead of spending years and millions of dollars to get it up to merely adequate for moderate use.

Today I don't know who would buy them. Intel seems to be ok with being mediocre because they're the choice for people who don't give a shit anyway.

There was an outside chance a car manufacturer might have wanted to buy them up as a strategic move against their competitors if nVidia's autonomous car GPU thing panned out, but I think they were too late to the ball for that one. It's hard to see the value proposition for most companies.

>Intel seems to be ok with being mediocre

I have a lot of complaints about Intel and their mediocrity, but the Intel HD graphics have gotten good in the last 10 years. The Intel GMA was awful and could barely run Quake 3. Intel HD will power through quite a lot. Not up to snuff with a real gamer's card, sure, but good enough that a casual gamer won't have a problem with it. And you get it for free, so bonus there.

I play relatively recent games on my Macbook and it's a lot better than you might think.

It's fine until you actually start pushing it hard. Intel is far less aggressive with driver fixes and the cards will absolutely collapse at higher detail levels on big screens.

No argument that they have improved (the many years and millions of dollars I mentioned were spent), so much so that the low end discrete GPU market is completely dead. But there is still a lot of space between the best Intel card and a mid range nVidia or AMD card like a Geforce 1070.

>It's fine until you actually start pushing it hard.

That's exactly what you don't do with an integrated card.

I can play Cities: Skylines and Guild Wars 2 on my 2015 13" Macbook Pro. I think that's amazing. I could barely get 5FPS on the last generation Intel GMA with the original Guild Wars all the way turned down.

China would jump on this in a second but the US government would shoot this down.

Sure hope AMD can stay the course this time. Athlon 64 was tremendous and forced a huge opening in the x86 market, but AMD squandered the opportunity in the following years. Competing with Intel long term means keeping up on all fronts.

i'm optimistic AMD GPUs will be usable for general purpose computing soon. ROCm seems to be coming along nicely, and they have various deep learning frameworks functioning to various degrees.

in particular, I think there's buy-in from the framework maintainers, they're not going to go out of their way to port but they also aren't averse to merging in code written by AMD engineers.

i don't think people in research have any particular loyalty to NVIDIA, and everybody's macbook pro now has an AMD GPU, so there are also personal incentives to get this stuff working properly.

I was about to buy a Ryzen workstation, but ended up buying a used HP Z600 (2x E5620, 48GB ECC RAM, 2TB HD) because it was the best bang for the buck; just the Ryzen 1800X + motherboard would cost me what I paid for the Z600 (I live in Brazil, prices here are crazy).

That said, I'm really rooting for AMD here. It's very nice to have this kind of competition and customers will benefit a lot from this.

Been a loyal AMD user ever since my first laptop which had an Athlon64. They've always given me great performance at a fraction of the cost of Intel.

Not for laptops they haven't. Also never understood brand loyalty over so many generations.

The key part of GPs suggestion again...

"great performance at a fraction of the cost of Intel"

Only two points need to be true for that statement to hold up:

* Performance levels that GP is happy with.

* Cheaper than Intel.

Which point(s) do you disagree with?

Were there laptops with Athlon64s back then? Maybe the huge ones with desktop parts.

However, at that time, on the desktop, AMD was 80-90% of the performance for 50-60% of the price. Fine with me, I was using one.

Total debt at 1.4 billion dollars looks huge. So while revenues look great I wonder if their cash flow has improved as well?

Might be relative. I'm no financial expert, but intel has something like 26B in debt, compared to 226B in market cap: https://ycharts.com/companies/INTC

It doesn't seem so out of line for AMD to have 1.4B in debt to 13B market cap.
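To put rough numbers on "not so out of line", here's a quick back-of-the-envelope comparison using the approximate figures quoted above (USD billions; not exact balance-sheet data):

```python
# Debt-to-market-cap ratios from the rough figures in this thread.
companies = {
    "INTC": {"debt": 26.0, "market_cap": 226.0},
    "AMD":  {"debt": 1.4,  "market_cap": 13.0},
}

for ticker, c in companies.items():
    ratio = c["debt"] / c["market_cap"]
    print(f"{ticker}: debt / market cap = {ratio:.1%}")
```

Both come out around 11%, so on this (admittedly crude) metric AMD is leveraged about the same as Intel.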

The question is really whether they are generating enough cash. Intel, for example, has 11.8 billion in free cash flow:


whereas AMD seems to still be losing cash, currently at a loss of 217 million:


Totally rooting for AMD to be the Avis to Intel's Hertz. The early Opteron/x64 days were great, when I actually had a credible alternative to the status quo. Go AMD! Competition benefits everyone, even Intel.

I recently doubled my position in AMD prior to earnings, mostly from discussions on HN. Here’s hoping they will have continued success in the long term.

Is it worth picking up some AMD stock?

Probably not today, but soon yes


They weren't as impacted by the recent vulnerabilities, and Intel is likely going to lose quite a bit of marketshare (Intel currently has ~95%+ of the server market). If AMD can break into the server market with even a 10% showing, then they're golden.

(Image and prediction(s) from: https://projectpiglet.com)

Edit: words

> "(Image and prediction(s) from: https://projectpiglet.com)"

From the site...

"Experts' Opinion Score, is a score representing how experts feel about a given stock, product, or topic."

With all due respect, I don't think that's a very sensible metric to base investment decisions on.

I disagree, though I do think we may have a misunderstanding about experts. The system identifies experts as people who say things such as "I work at AMD and things are going well" or "Google probably won't have a good quarter, source: I work at Google".

There are other methods, but it typically picks out people who work at a given place, hold a large amount of an asset, or have some expertise in the field (say, CPU design). As they are literally the best sources of info, there is a high correlation with movements in price within roughly 45 days, i.e. it's good at predicting quarterly results.

> "I disagree, though I do think we may have a misunderstanding about experts."

I have no problem with "experts" in general, but I do think we have a misunderstanding with regards to experts in the field in question.

What incentives are in play for the experts behind that website to be honest? Let's conduct a role play. Consider that I'm the expert in question, and I decide I want to short the stock of a company (i.e. bet that the stock price will fall rather than rise). In this situation, what would be the best advice I could give on this website to make sure my bet works out? I would mark down my "confidence score" (which is effectively what the website publishes) in order to maximise my chances of making money. The financial health of the company is secondary: it will have some influence on whether the bet I placed succeeds, but it's not the only factor at play.

Benjamin Graham (Warren Buffett's mentor) summed up the most obvious path for a successful investor to take by stating "The intelligent investor is a realist who sells to optimists and buys from pessimists." Taking this a step further, without knowing the financial position of the "expert" you're getting advice from, how do you know if the "expert" is in the market to buy, to sell, or is neutral? I would suggest to you that you don't know that, and with that in mind any advice you get from an anonymous investor is advice to be taken with a large dose of salt.

Just in case you think I'm just describing a hypothetical situation, there have been high profile cases where investment firms were caught betting against the advice they gave their clients. For example:



Typically, I just run the numbers and it usually works out. I see your point though, any suggestions?

> "Typically, I just run the numbers and it usually works out. I see your point though, any suggestions?"

I'm not an expert, but I believe Buffett's advice of 'invest in what you know' is sound:



In other words, it pays to do your own research into the financial health of a company, to determine whether it is currently undervalued or overvalued. It should be noted that this approach works best over the long term, as you may have to weather some short-term market irrationality.

Ehhh, for day traders it can be. The market seems to react heavily to opinions and non-factual stuff, thus the saying "buy on rumors, sell on news".

Source: my opinion... it's all pretty hand-wavy to me.

Do you realize that your chart indicates that it would have been a very bad idea to buy AMD stock in late December? How can you make recommendations off such bullshit charts with a straight face? Maybe cherry-pick a better example to sell people on your product next time.

I don't really understand your statement. The chart did say it would be a bad idea to buy right now (and over the past month), and in my comment I said don't buy now, but soon, probably yes.

The charts I didn't share are the uptrends in promoter scores compared to Intel, i.e. more people are promoting AMD over Intel. This will lead (in time) to the system predicting a buy, as soon as the price drops (it's been steadily going up).

Maybe you're missing the context that the stock is up 30% since December? Your graph indicates that it was a bad idea to buy in December, therefore it's pretty safe to say it's worthless as a basis for investment decisions.

Does that make sense?

"Intel is likley going to use quite a bit of marketshare"

How would they use their marketshare?


Can you explain that chart re: "no" vs "yes"?

I bought AMD about 9-12 months ago, and sold it last week for what I paid. I was just happy it got back to my purchase price. I was expecting a bump with their recent return to relevance; the market yawned, apparently. I am also slowly getting out of all my positions in individual companies and into indexes.

That's a good thing if you're into value investing for long-term returns. If a strong or strengthening company is selling for less than it's worth, then it's basically on sale. Such irrationalities are bad for short-term trading though unless you find a way to exploit such tendencies consistently.

Is a P/E over 300 good value?

I have no idea, and I have no idea if AMD is actually a good value at the moment. I'm still learning about accounting and investing in my free time. I'm mostly talking about the concept of stock price stickiness despite market changes in general.

It was worth it before Ryzen premiered. Now all the current and future profits are already incorporated into the market price. Currently, you stand a better chance of making money if you short AMD stock. The old "buy the rumor, sell the fact" rule.

Wow. 34% gross margin in a competitive hardware business. AMD is doing OK.

Intel and Nvidia have gross margins of 58% or higher. Other semiconductor companies like Broadcom are around 50%.

34% is better than the 30% of last year, but it's still really bad.
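For reference, gross margin is just (revenue - cost of goods sold) / revenue. A quick sketch with hypothetical round numbers (not actual AMD or Intel financials):

```python
# Gross margin = (revenue - cost of goods sold) / revenue.
# The inputs below are made-up round numbers for illustration.
def gross_margin(revenue: float, cogs: float) -> float:
    return (revenue - cogs) / revenue

# A 34%-margin business keeps $34 of every $100 of revenue after
# production costs; a 58%-margin business keeps $58.
print(f"{gross_margin(100.0, 66.0):.0%}")
print(f"{gross_margin(100.0, 42.0):.0%}")
```

The gap matters because that retained slice is what funds R&D; at 34% AMD has far less to reinvest per revenue dollar than its ~58%-margin competitors.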

Their P/E is now over 340. I'm not sure if that's new territory for a stock that's not a penny stock? Or is it because they just became earnings-positive?

Probably the latter. PER is only really useful for a firm with stable and positive net income (think of things like industrials, where growth capex isn't such a major factor). For a firm fluttering around zero EPS you would use a more forward-looking measure of expected profit. The problem with PER is that stock price is forward looking while EPS is backward looking.
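The "fluttering around zero EPS" point can be made concrete: with the share price held fixed, a shrinking EPS mechanically sends the P/E ratio toward infinity. A quick sketch (the price and EPS values here are made up for illustration, not AMD's actual figures):

```python
# P/E = share price / earnings per share. As EPS approaches zero
# from above, the ratio blows up even at a modest share price.
price = 13.75  # hypothetical share price

for eps in (2.00, 0.50, 0.04):
    print(f"EPS {eps:.2f} -> P/E {price / eps:.1f}")
```

So a triple-digit P/E right after crossing into profitability says more about how recently EPS left zero than about the market's growth expectations.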


Not new territory at all. P/E is not a good indicator of future earnings, because it artificially penalizes capital investments. Cost of revenue is a better metric.

It is due to crypto miners. Once crypto coins go down, the market will be flooded with used graphics cards; hence sales will decrease.

Would you buy a used GPU if you knew it was used for mining? I'd feel like it had been pushed too hard and could be ready to burn out. Makes me wonder if the secondhand market for video cards will tank for a while after mining stops.

I would in a heartbeat.

The mining meta has changed. In 2013 it was about squeezing max perf out of cards at any cost. Now the focus is on efficiency. Most miners will undervolt and underclock cards.

In this scenario, cards used for mining see less heat cycle stress than the cold/hot workloads a typical gaming PC sees. The primary stress is on the fans. However, even those are rated for 5-10 years MTBF under constant use.

Fans and coolers are easy to replace as well.

Cheaper GPUs for me, then. I don't see a difference between buying used parts from someone who browsed the internet versus played games. CPUs/GPUs don't just 'wear out' on a normal timescale before they become obsolete. We've got companies that have been running the same mainframes for 50 years.

I'm not sure that's an accurate statement... anything that's used hard will incur extra wear and tear. If you're running a GPU at 100% 24/7 for a year, the heat stress alone will reduce its lifespan. The reason the Xbox 360 had the RRoD problems isn't that it was bad from the factory; it's that it didn't dissipate enough heat, so over time the components wore out quicker than expected.

I might buy the idea that miners don't stress their cards like the other person said, but I'm not convinced by your statement that running an electronic component hard for long periods of time won't increase the risk of failure. Especially in something as high tech and compact as a GPU.

It is in no one's interest except the miners' to promote used mining GPUs as viable used parts. If you simply continue the narrative that used mining GPUs are bad, the prices will be pushed down as people refuse to buy them.

If you are looking to buy those used GPUs for cheap you should avoid any public language that tries to downplay the impact of 24/7 mining.

Maybe instead of trying to influence the market, he's trying to have an honest discussion about the viability of GPUs used for mining?

That is best done behind closed doors.

Are there ThinkPads with AMD?

PS: how does one delete old HN comments? Seems like a major privacy issue to me.

A series are basically X/T series (exact same chassis) with AMD. The Ryzen versions aren't out yet, but everyone is expecting them "soon"…

> PS: how does one delete old HN comments? Seems like a major privacy issue to me.

There's no option in the interface; the best option right now is asking nicely via email.

Of course, you can just wait until May and then use the rights the GDPR grants you to force them to delete your comments.

Assuming the poster has rights under the GDPR, and HN will adjust to follow it. It does seem likely they will (have to).

The GDPR applies to (basically) any business on the planet that stores data of EU citizens. So it's quite easy to make that assumption.

Yes, but the GDPR doesn't apply if the poster is, e.g., a US citizen, as ycombinator isn't based in the EU.

It will/would apply to my posts - or my relationship with ycombinator, for example.

And then it would depend on if/how YC would comply: by refusing access from the EU, or by aiming for compliance.

How is social media commentary willingly handed over for public exposure a form of personal data? If GDPR is allowed to have such wide applicability there's going to be a lot of shakedown scams from bad actors.

> there's going to be a lot of shakedown scams from bad actors.

I'm not sure how that would work.

Any compliant service is likely to allow self-service (eg: a button to delete a comment; a link to list out all data; an edit function to correct wrong data).

If you're storing personal information and don't comply with the law, you risk a fine. Just as you risk a fine for mismanaging health data, or risk prosecution for storing data that is illegal, like child pornography.

You might also want to look at GDPR Chapter 3, Article 12, paragraph 5:

"Information provided under Articles 13 and 14 and any communication and any actions taken under Articles 15 to 22 and 34 shall be provided free of charge. Where requests from a data subject are manifestly unfounded or excessive, in particular because of their repetitive character, the controller may either:

charge a reasonable fee taking into account the administrative costs of providing the information or communication or taking the action requested; or

refuse to act on the request.

The controller shall bear the burden of demonstrating the manifestly unfounded or excessive character of the request."


Basically, for any data you give a company, you have to be able to later on have them remove it, or at least remove the association with you. For stuff like personal messages or posts, just anonymizing won't even be enough to comply.

This is grossly oversimplified to the point of almost being wrong, but is the general idea of what applies here.

Wouldn't it only matter if the company is within EU jurisdiction?

Not really. For example, if the company has assets within EU jurisdiction, those can be seized to pay the fine; and if the company has assets outside the EU but held with EU companies (e.g. banks), those companies can be forced to help seize the assets.

There is countless precedent of the US using these tactics to enforce IP law, and EU countries using them to enforce consumer laws.

It is expected that the EU would do exactly this.

For example, if YCombinator refuses to adhere to the law, but holds shares in an EU startup (and they do in several), then those shares could be seized and auctioned off to enforce the law.

Strong forecasts with Spectre still around seems very optimistic.

Can AMD or Intel release another generation of chips with Spectre vulnerabilities?

Do we think they're going to have a solution to Spectre within a year?

Can x86 survive without out-of-order execution? Can any architecture perform at modern levels without it?

Is x86 even relevant on the server if you use Linux/BSD and can recompile your deployables?

Without Spectre I'd be very bullish on AMD. With Spectre, I'm bearish for the entire sector.
