Hacker News
AMD Reports Q2 2020 Earnings (anandtech.com)
309 points by galeos 16 days ago | 205 comments



Interesting to compare Q2 2020 results between Intel & AMD:

Intel Revenue: 19.7B, Net Income: 5.1B, Market Cap: 209.42B, P/E: 9.06

AMD Revenue: 1.93B, Net Income: 157M, Market Cap: 79.2B, P/E: 133.82

But by the sentiment of all media reports INTC is in sharp decline & AMD is killing it, yet even in their most recent results Intel revenue/earnings still dwarfs AMD's.


1. It is possible to be bigger and still be in decline, that just indicates you had an even bigger lead before.

2. Everybody loves a good "underdog takes over from the big bad empire in decline" story, both the press and the commenting public. The stories about big companies absolutely crushing their competitors in quality are just not as interesting.

3. AMD can't scale up as rapidly as a SaaS company because fabs take a while to build. I'd wager Intel has (at least) five to ten years to come up with a good processor design before they get overtaken by AMD in sheer volume.

4. There are no doubt Chinese competitors with even lower sales and even higher P/E and growth rate than AMD. It'll be interesting to see how that plays out.


The problem with "they have 5 to 10 years to come up with something" is that they had 5 to 10 years in the lead and they came up empty-handed.


Considering fabs take so long to build and get processes functioning well, it's likely they have another 5-10 years to blow their lead entirely. Rome didn't fall in a day.


On the other hand, the Colosseum did not work until version 1.1.


> _Rome_ didn't fall in a day

Nice.


I agree with your points but the game has changed.

It’s not AMD vs Intel

It’s AMD vs Intel vs Apple vs Amazon (and Nvidia might be next).


And Nuvia, Qualcomm, Huawei and whoever else decides to get TSMC to make some killer Arm-based SoCs. Seems like the playing field is about to get a lot bigger.


Well it won't be Huawei


Google also makes TPUs, which compete with GPUs and Nvidia. In addition to making ARM SoCs, it is one of the key stakeholders behind PowerPC now.


No, you cannot buy TPUs. You can only buy their usage hours from GCP.


I wonder whether the Apple silicon will also feel like "buying usage" or if we'll have low-level enough access to feel like we own it.

I don't think having physical possession of the thing is always the best criteria for whether you really own it.


https://cloud.google.com/edge-tpu/ has a "Buy" button that leads to https://coral.ai/products/ . They appear to be available for purchase?

edit: reading more on the edge-tpu page, they definitely appear to be of limited use compared with the ones available to google's hosted plans.


The Edge TPU is not the TPU in GCP.


And your point is what exactly?


>1. It is possible to be bigger and still be in decline, that just indicates you had an even bigger lead before.

Except Intel is still growing its revenue. 43%+ YoY in Q2 for their Datacenter.


The stock price is in part determined by discounted future cash flow; AMD's is pointing upwards while Intel's seems neutral or declining. Intel has had a long string of failures and bad acquisitions recently, which points to bad management. AMD has all the potential to attack Nvidia’s deep-learning moat with in-house talent. Maybe this is all wishful thinking though, but my AMD calls jumped 46% within a day.


This might be wild speculation, but I just now realized that AMD, Nvidia, and, of course, TSMC are all Taiwanese-led. Did Taiwan have some kind of incubator or some kind of special emphasis on semiconductors, or is this just pure coincidence? I wouldn't be so surprised if Taiwan weren't such a small country (relative to its bigger neighbors). Just wondering.


The government of Taiwan realized that their semiconductor industry might be the one thing that the US would be willing to go to war with China over to protect. So the government has invested heavily in ensuring they maintain critical mass in semiconductor expertise, typically by investing heavily in university education and research in related engineering fields. Computer/electrical/materials engineering is pretty much the default major everyone gets funneled into without capacity limits since at least the 70s.

For example, my parents and all their friends weren't well off enough to afford private university, but majors at public university were limited by the government based on projected need with slots offered to only the highest test scores. They couldn't test high enough to secure a public university slot for accounting, art, architecture, education, medicine, and trades such as automotive repair or plumbing. So they all ended up with computer science and engineering degrees. Fully paid for by the government.

They basically treat the semiconductor industry like how the US treats its defense industry.


I have long held the opinion that education should be part of national defense.

I think this is a good example of what happens when you do that.


There's a fairly commonly held view that improving education access means giving people more ways out of poverty and therefore fewer people choosing the armed forces as their career.

It seems intuitive to me that as war technology progresses, number of humans becomes less of a factor in military strength, but I know so little about this area that I couldn't guess how greatly alternative educational opportunities impact military applications, nor at what point in the past/future the scale might tip between wanting policies that push more into armed forces vs. no longer being so important (and I assume it would be different for different countries, too).

But I do believe good, free education should be a key part of any country, regardless of whether it helps national defense or not. If that drives up the cost of recruiting people to the armed forces then fine; I'm no fan of them in general, but if people are going to potentially risk their lives in wars then it shouldn't be because their choices were limited to that vs a life of poverty.


Interesting how they invert the qualifications for the disciplines versus US universities. It makes a lot of sense for a centrally-planned education system to force higher admission requirements for disciplines with less rigor and lower job demand. I have a suspicion that the US college system would implode for lack of willing customers, and greatly reduce rigor to pablum. It’s basically a babysitting service here. It also makes it clear that the pursuit of some measures of prestige reduces the effectiveness of a system.


I vaguely recall a John C. Dvorak column (yes, I am too lazy to look it up) from the 1990s (maybe?) concerning a trip to Taiwan where he wondered if they were doing an experiment to see what would happen if everyone got a degree in electrical engineering.


Took me a minute, but it showed up in Google books:

https://books.google.com/books?id=C6VFJIbxX7MC&lpg=PA71&ots=...


I started going to the supercomputing/HPC conferences here in the USA over the last few years and it was shocking how few American companies are there with impressive hardware engineering. Maybe the Japanese/S.Korean/Taiwanese companies showing up are the Dells/IBMs of those countries and I'm just not familiar with the names, but walking around the conference always gives me this feeling that the USA is lagging behind or sitting this one out. The majority of companies I see are the usual hardware conf goers: IBM/HP and then a bunch of Gov/NSA/DOD groups on the USA end, with a few universities. On the APAC end, tons of what feel like small hardware companies doing cool bleeding-edge NVMe-oF/ARM/FPGA stuff, TONS of universities with really cool-looking projects, etc.


But there is no sitting this one out. Either you play and win or you have already lost by not playing.

The only chance to sit it out would be a giant technological leap as in quantum computing, I just don't see that leap.


Look up Morris Chang. The man's a visionary and the single reason why Taiwan still exists as a sovereign nation.


They're not crippled by shortsighted management.


I really don't see anyone de-throning Nvidia from the deep learning market.

I'd like to see it, but I'm not sure anyone can catch up now.


I think it's highly unlikely that any traditional chip company dethrones Nvidia in DL, at least in a reasonably soonish time horizon. As others have said, CUDA is just too far ahead in terms of development and adoption.

However, I think NVIDIA is still vulnerable—but against AWS/GCP/Azure, not Intel/AMD.

My opinion is that deep learning is moving to the cloud. That's a bigger conversation with a lot of nuances, but if you take that basic assumption, then the development of ASICs like TPU/Inferentia become a big threat to Nvidia.

If the biggest buyers of chips in deep learning are the clouds, and the clouds are increasingly developing their own chips for deep learning, Nvidia is in a tough spot. They'll always have a place among labs that use their own machine, and of course, Nvidia's business is bigger than machine learning, but in general I think the clouds are a real threat.


It is relatively trivial to hook any new accelerator you develop into the popular deep learning frameworks. In the case of AMD there already exists a mature compiler framework for their GPUs, and their cards are mostly on par with Nvidia's. Most deep-learning researchers don't write custom CUDA kernels, but simply stitch high-level operations together in Python. So as soon as AMD delivers a performance/power advantage there will be almost no friction to deploying an AMD-only cluster.

One of Nvidia's actual moats is their system-building competency, which AMD lacks. Since their acquisition of Mellanox, they can sell you a box / a whole server-room configuration together with the network equipment.
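The "stitch high-level operations together" point is what makes switching vendors low-friction. A toy sketch of that dispatch pattern, assuming nothing about any real framework's internals (the backend names and the `register` helper here are made up for illustration): user-level code calls a named op, and a registry routes it to whichever vendor backend is installed.

```python
# Toy illustration of op-to-backend dispatch in a DL framework.
# Backend names ("cuda_like", "rocm_like") are hypothetical.

_KERNELS = {}

def register(op, backend):
    """Decorator that records fn as the kernel for (op, backend)."""
    def deco(fn):
        _KERNELS[(op, backend)] = fn
        return fn
    return deco

@register("matmul", "cuda_like")
@register("matmul", "rocm_like")
def _matmul(a, b):
    # One reference implementation registered for both backends here;
    # a real framework would plug in each vendor's tuned kernel.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def run(op, backend, *args):
    return _KERNELS[(op, backend)](*args)

# User-level model code never mentions the vendor:
a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
assert run("matmul", "cuda_like", a, b) == run("matmul", "rocm_like", a, b)
```

In a real framework each backend registers its own tuned kernels (cuDNN, MIOpen, and so on); the user-facing model code is unchanged when the backend swaps.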


Yes, but there is one assumption in this hypothesis which I think is not accurate.

The cost of Hardware Development is the main cost contribution.

Which I think is not true with regards to both GPU and GPGPU computing. The major cost for GPUs is drivers, and for GPGPU it is CUDA, i.e. it is software.

Unlike ARM, where AWS/GCP/Azure can make their own chips and benefit from the software ecosystem already in place for ARM, there is no such thing on GPUs. Drivers and CUDA are the biggest moat around Nvidia's GPUs. And unless developers figure out a way to drive the cost of DL software and drivers down, there is no incentive to switch away from Nvidia's ecosystem.

That is why I am interested to see how Intel tackles this area, and whether history will repeat itself as in the Voodoo, Rage 3D and S3 ViRGE era.


Not happening. Nvidia has a stranglehold on deep learning because of CUDA and cuDNN. I don't see any AMD alternatives taking over either of these. So I wouldn't bet too much on AMD taking over the deep-learning chip market.


Who’s writing bare CUDA though? For most tasks a framework like Tensorflow or PyTorch is good enough.

If AMD could provide a backend for the most popular frameworks then they could skip over the CUDA patent issue completely.

The real problem is that it seems like AMD’s not investing substantially in software teams to make it happen.


In the deep learning world every major framework works on top of cuDNN, which works on top of CUDA: PyTorch, TensorFlow, you name it.

https://github.com/pytorch/pytorch/issues/10657

That is the state of Pytorch support for AMD GPUs.


> Who’s writing bare CUDA though?

I do. Not everything you can do with a CUDA card is deep learning. In fact that's just one of many applications.


Lots of people do. We write cuda all the time.


This. The software lead is just incredible; almost everything uses CUDA.

There has been some progress, but PyTorch still isn't fully functional with ROCm yet and that feels like a good litmus test.

https://github.com/pytorch/pytorch/issues/10657


The Apple ecosystem, with its AMD graphics cards and a future Apple GPU, seems to be in the fight. Or at least it maintains certain software that is not totally CUDA all the way down. And AMD also dominates gaming, powering both gaming platforms.

Really do not want just one player. And hope the high-level players have more competition.

Still interested in the Taiwan part, purely from an economic point of view. How secure are we on that front, if all eggs are in one basket? HK has fallen; Taiwan or the South China Sea is in play. That will affect the supply chain.


I think the stranglehold is about to burst.

Intel is launching a GPU/deep-learning accelerator, Huawei is thinking about launching a GPU. PyTorch and TensorFlow work well enough on AMD GPUs. There are also custom deep-learning ASICs from Google. There is simply too much competition at this point for CUDA to continue to be the standard.


Is there any chance that some of the upcoming open-source cross-platform standards like WebGPU could have an effect on this, if tooling around them was built to support writing more GPGPU-focused code?


DLSS is black magic


Have you heard of the billion-dollar unicorn from England called Graphcore?

https://m.youtube.com/watch?v=_zvU0uwIafQ

https://www.graphcore.ai/products/mk2/ipu-machine-ipu-pod


Just saying, deep learning is still mostly a marketing fad...


Last quarter, Nvidia’s datacenter segment exceeded $1B in revenue for the first time, and it’s close to overtaking the gaming segment as largest business segment.

Marketing fad or not, it’s not a bad business to be in.


I say the total opposite.

It is barely starting. In 10 or 20 years it will be huge.


I am not seeing AMD putting out any serious contenders for CUDA though.


They don’t really need to.

Nvidia will have to validate and launch all of its PCIe 4.0 products on Epyc/Ryzen processors, so it’s not like AMD won’t benefit from the deep-learning hype.


Isn't HIP+ROCm a serious "competitor" to CUDA? You can convert your code automatically from CUDA to HIP. At least that's what the advertising says; I haven't used it myself. Plus, PyTorch and TensorFlow have AMD support, I thought.


It may convert your own code, but it can’t convert the CUDA libraries.

It also doesn’t help that AMD has stopped supporting ROCm for the current consumer GPUs.


Which CUDA libraries are you referring to? NVIDIA libraries like cuBLAS? There are ROCm libraries for a subset of those, but it's definitely a work-in-progress.


They might not have to, as others have pointed out. Intel, on the other hand, is fighting the multi-front battle against a lot of competitors. It’s great for consumers, but Intel might have to decide to focus too.


Wait until Nvidia buys ARM from SoftBank and steals CPU market share from both Intel & AMD in the high-end, high-profit HPC/DL sector.

https://www.bloomberg.com/news/articles/2020-07-22/softbank-...


Why do they have to wait to buy ARM to do that?


Presumably the patent minefields of the CPU industry.


Intel is 10 times larger than AMD.

AMD can't quickly ramp up the manufacturing volume it gets from TSMC. TSMC builds capacity to match roughly what it is contracted for. I can only assume that any extra capacity they planned sells for a very good price.

Neither AMD nor TSMC can fully exploit Intel's troubles because they can't foresee what happens in Intel's manufacturing. Intel is selling chips like hotcakes.

TL;DR Intel has high manufacturing volume and the ability to make money with a less competitive product.


That is how Intel is hanging on...

The problem for Intel is that AMD is putting out a clearly better value proposition in the high-margin server space, with single-socket systems beating Intel dual-socket systems, and other advantages like more PCIe lanes and lower power consumption.

That will drive Intel prices, and margins, down. That means a lower Intel stock price as probable future earnings shrink.


In another submitted story regarding the rumor that Intel might rely on TSMC for some of its future products, one comment indicated that [1] "Intel’s fab capacity is several times TSMC’s".

I haven't fact-checked that, but given Intel's market share, it sounds plausible.

Intel may be behind on process and microarchitecture, but as long as they can ship that volume, and with a far better gross margin than AMD (Intel 53%, AMD 44%), I wouldn't count them done just yet.

[1] https://news.ycombinator.com/item?id=23976605


The irony is that the more of TSMC's capacity Intel contracts, the less AMD has access to.


Paying TSMC to fab chips is going to cost more than producing chips in their own fabs, reducing Intel's profit margin the more they do so.

TSMC also knows full well that Intel will switch back to their own fabs the instant it's practical for them to do so, making them a less reliable long term customer than Apple, AMD, etc. and so won't be inclined to give them priority or much of a break on pricing.


Between Intel, Apple and Nvidia, AMD might catch the short end of the stick there.


AMD and TSMC have a really excellent relationship. Nvidia, on the other hand, does not: it's rumoured most of their next-gen GPUs are going to be made on Samsung 8nm because TSMC bid high against Nvidia and wouldn't budge. Apple might be the only company with a better relationship with TSMC than AMD. Apple and AMD together should be able to keep the pressure from Intel low.


Given that AMD is fabless these days, you would have to compare Intel's gross margin to AMD and TSMC's combined to have an oranges to oranges comparison. In that light AMD is doing pretty well.

The other factor is going to be that TSMC has 5nm in risk production already. If they bring 5nm fabs online before Intel has a real answer to their 7nm process, AMD could be buying capacity from the 7nm and 5nm fabs at the same time.


Where does the meme come from about AMD being ahead on microarchitecture? Head-to-head single-threaded benchmarks are mixed results for Rome vs Skylake and its descendants, which seems to indicate that Intel had a microarchitecture comparable to AMD's, but years ago.


I wonder if this is merely an advantage of Intel's larger process size. It seems a threshold has been crossed at 14nm or 22nm where the smaller sizes can't be driven at higher clocks because of voltage and heat dissipation issues. So AMD is forced to trade clocks for cores, whereas Intel's larger (and very mature) process sizes can sustain higher frequencies.

For the most part it's a worthwhile trade for AMD because you get much greater overall compute power with only marginally diminished frequencies, but it's still something for Intel to hang its hat on in benchmarks.


The easiest way to interpret this is that the market cap reflects what investors expect market share to be in about 2-3 years; that is, Intel would have about 70% share compared to something like 90% today.

P/E or net income is not really relevant for AMD as they are in a very high growth phase (and because their gross margins are fine). The most important thing here is that Intel is failing to execute on its upcoming fab process, which will unequivocally make them less competitive from 2021 to 2023 (at the very least). The uncertainty about whether Intel can execute from here on out is also impacting their share price, since they have a long history of failing to execute, on 10nm and now on 7nm.


I think the problem with this is that they will just start using TSMC's process until they become competitive again. Let's face it, they have the money to pay TSMC more than AMD does, I'd guess.

How volume-limited is TSMC's process?


This is a very interesting question. With everyone fabbing at TSMC (amd, nvidia, apple, google?, amazon?, aspiring startups) I could see a bidding war for volume guarantees if there is a volume problem going forward. Some of these players have deeper pockets and margins than others.


[This is my take on it (largely informed by stratechery) and probably overconfident]

Intel runs their own fab and for whatever reason they've messed this up repeatedly (unclear what the reason is, but it's at least partly a management/strategic failure). Their focus on old designs that are currently profitable instead of the future was a short term benefit and a long term mistake.

AMD uses TSMC (Taiwan Semiconductor Manufacturing Company) to fab their chips. Apple does the same.

Intel's profits from older technology and server sales have made them slow to recognize the severity of their situation. First they missed mobile, now they've had years of delays with their own manufacturing process, and now they're going to feel pressure from ARM on the desktop and probably on the server.

No US based fab for modern chips is a concern for national security (particularly given Chinese interest in eventually taking over Taiwan).

AMD spun off their fabs a while ago (Global Foundries) and uses TSMC for modern chip manufacturing. This gives them lower overhead now, but I'm not sure it's a much stronger position overall. There's a benefit to owning your own fab if you can pull it off.

I think Intel is at serious risk long term. They need someone who recognizes the existential crises they're in and can save them. Their current results are a lagging indicator.


What is hindering Intel from getting an ARM licence and outpacing Apple? Is this unthinkable?


It is pretty much unthinkable from a pride perspective.

Processor IP is Intel's bread and butter.

I think if they do it it's the beginning of irrelevance, really.

What might be more interesting is if Intel became champion of Risc-V...that would be like Microsoft and Linux.


In theory nothing (I think), in practice they've failed to execute on really important strategic moves for the last decade.


"yet even in their most recent results Intel revenue/earnings still dwarfs AMD's."

Paradoxically this is likely WHY they are getting punished so much by the market. Their current revenue/margins are too juicy to give up, despite the fact that it's causing them massive pain on the technological front.


"in their most recent results Intel revenue/earnings still dwarfs AMD's."

Very simple explanation: stocks reflect future (expected) performance, not present performance. So the stock market is really just telling you they expect Intel to continue failing.


Yes, but as a gross oversimplification, the P/E shows how investors really believe in AMD and the media/market sentiment is in line.

Gross oversimplification because it's one ratio that can be interpreted in so so many ways. But, this is one way to look at it.


> But by the sentiment of all media reports INTC is in sharp decline & AMD is killing it, yet even in their most recent results Intel revenue/earnings still dwarfs AMD's.

That is because AMD only has GPUs and CPUs while Intel has a lot more side businesses (VDSL modem chips, Thunderbolt, FPGAs, ...).

Additionally, the CPU side of Intel doesn't look very promising for the future (architectural issues like the whole side-channel attack saga, technical issues in their lithography process); they will have to invest a lot of money to get this under control. AMD, meanwhile, has a wildly positive outlook and is only limited by the capacity of TSMC's fabs: whatever they produce gets ripped out of their hands by customers.

From a financial point of view, Intel also has the problem that AMD was drastically undercutting prices for competitive products... part of the Intel stock price was the ridiculous amount of money they could squeeze out of customers for their top-notch processors for years. AMD all but flattened that as Intel was forced to cut their prices by a bunch.


Some of those exploits also affect AMD CPUs.


Also, that was a record Q2 in revenue for INTC. As much as I love AMD and that they are now competitive, plus or minus, in most CPU metrics, Intel held its own with a lot of 14nm++ inventory that is only now shifting to 10nm.

The reality is, there is no way TSMC would be able to manufacture all of Intel's CPUs even if both wanted it. What I can see happening is that Intel licenses some IP from TSMC.


Right. It's all about perceived future growth.

AMD has a ton of room to grow. Intel does too, but not in the processor business where it's had a near-monopoly in some spaces. Intel also has challenges in other areas, and ARM is looking strong too.

It also doesn't help Intel that the long-running data leakage issues hurt it a lot more than AMD.


Investors are about growth.

AMD is in a better position to seriously increase the E by utilizing its P.

Intel has failed to use its P to seriously increase its E for a few years, and just experienced a string of setbacks with long-term consequences. (This is why buying AMD a couple weeks ago was an obvious good move.)


Don't consider P/E as the sole index of a company's valuation.

Stock markets assign a high P/E to anything they consider "growth". TSLA and CMG are all considered "growth". For some reason the present market is geared more towards "growth" than "value".


To me, this difference in market cap and revenue demonstrates more that AMD has a lot of market to gain while INTC stands to lose ground.

As an investor it would seem that AMD has more potential upside. Add to that, they are producing better technology at the moment...


>But by the sentiment of all media reports INTC is in sharp decline & AMD is killing it, yet even in their most recent results Intel revenue/earnings still dwarfs AMD's.

Decline means decreasing. Which is true. But they are decreasing from a big number.


Some believe it is better to be poor but getting richer, than to be rich but getting poorer.


What is funny: Intel's revenue growth is 20%, AMD's is 26%. And Intel has an NSG division that is growing much faster than AMD, with almost the same global revenue and more profit.

AMD stock is much more of a meme stock than TSLA.


Intel has no leadership and their revenue is expected to go down because of that


Their P/E ratio has been bouncing off a floor of ~9-10 since after the financial crisis, so there's nothing unusual about that. Prior to the FC you have to go back to 1994.


Those P/E numbers clearly show that Intel is in decline and AMD is killing it. Why do you think otherwise?


It's important to separate a leading indicator from a trailing one.


So, do you see this as Intel being undervalued or AMD overvalued?


The issue I see is that the prices imply Intel is going to fall to 70% of the market, and AMD is going to take 90%. Obviously both things can't be true.


Now project that five years into the future?


Also, what if the cash sink of Intel's fabs continues to be a drag on cash on hand, unable to produce any returns on those huge new fab-process investments?

3-5 years from now, still 14nm CPUs only, without any volume production on 10nm or 7nm? How much can those 14nm, 300-watt CPUs be sold for, and who will buy them at that time?

Actually, from Intel's process-development history over the past 3-5 years, it is not hard to see what is likely to happen.


So in the early/mid 2000s would you have rather owned Facebook or Myspace?

You can be a market leader and still be a failing company.


Which of the two companies who just posted record revenues and growth is "failing"?


If you think Intel isn't a building on fire you either don't work in tech or haven't spent more than 10 seconds looking at what has happened to them over the last 5 years.


In the last 5 years Intel revenue has grown from $55B to $72B. Sure AMD's TSMC-manufactured chips have a price/performance lead today, but 50 years of history show us that's only part of the story between AMD and Intel.


Missed process shrinks and had multiple major security issues. Went from having all of the major OEMs and cloud providers on lockdown to them universally seeking other options: whether AMD, in-house ARM, or both.

Past revenue has never and will never be an indicator of future success.


I'm a huge AMD fan but I fear that its current valuation has an immense amount of success already priced in.

They are doing absolutely spectacular work, but there's still much to do, and there are significant risks.

They have been making progress on the GPU side, but as long as they don't provide a CUDA-like ecosystem and experience, I don't see them challenging NVIDIA soon in the accelerator market.

I'm pretty confident that they will continue to outpace Intel on the CPU side, but with Amazon's Graviton2 and the recent TOP500 success of Fugaku (pure ARM, no accelerators), there is still a tremendous amount of competition ahead.


If you think a P/E ratio of 25 is reasonable after the growth spurt slows down, AMD's earnings must grow

10% per year next 20 years, or

20% per year next 10 years, or

30% per year next 8 years, or

50% per year next 5 years

to get P/E 25 with current price.

I don't think the price is unreasonably high but I don't think ROI will be very high if you buy AMD today.
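These break-even rates can be sanity-checked against the P/E figure quoted at the top of the thread (AMD at 133.82; the target of 25 is this comment's assumption). With the share price held fixed, earnings must grow by the ratio of the two multiples:

```python
# How much must AMD's earnings grow for today's price to imply P/E 25?
current_pe = 133.82   # AMD trailing P/E quoted upthread
target_pe = 25.0

# Price fixed, so earnings must grow by the ratio of the multiples.
required_multiple = current_pe / target_pe   # ~5.35x

# Implied compound annual growth rate over each horizon:
for years in (20, 10, 8, 5):
    cagr = required_multiple ** (1 / years) - 1
    print(f"{years:2d} years: {cagr:.1%} per year")
```

The round numbers above (10%, 20%, 30%, 50%) sit slightly above these exact break-even CAGRs, so they already include a little margin.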


Looking at earnings growth as a steady process can be misleading when profit margins vary. Intel has a 26% net margin, and AMD is just over 8%, so if AMD gets Intel's pricing power their earnings could be 3X as high before any change in revenue. For a look at the pathological case, realize that a company that breaks even one year and makes a trivially small profit the next has experienced infinite earnings growth.

That being said, I agree it looks like AMD stock is priced for something spectacular to happen, which makes me more excited about their chips than their stock.
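The margin arithmetic here can be checked directly against the Q2 figures quoted at the top of the thread (rounded; the 26% Intel net margin is this comment's number):

```python
# AMD Q2 2020 figures quoted upthread.
amd_revenue = 1.93e9      # dollars
amd_net_income = 157e6

amd_margin = amd_net_income / amd_revenue        # ~8.1%
intel_margin = 0.26                              # Intel's net margin, per the comment

# Earnings at Intel-like margins, with revenue unchanged:
hypothetical_income = amd_revenue * intel_margin
multiple = hypothetical_income / amd_net_income  # ~3.2x
```

So matching Intel's pricing power alone would roughly triple AMD's earnings, before any revenue growth.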


One of the most interesting aspects of investing is that the success of a company and its stock are only loosely correlated. It's entirely possible for an investor to do very poorly after investing in a company that grows a lot if the price they paid was too high (as you mention, AMD might be in such a situation right now). Conversely it's entirely possible to make excellent returns on the stock of a company that is not growing (or even shrinking) but pays out large dividends.


A market can stay irrational longer than you can stay solvent.

If you want to sit on the sidelines and watch valuations soar to unreasonable levels and not try to claim a piece of that, fine, but don’t cry when you see how much you missed out. AMD could be the next NVDA.


Congratulations to those who have enjoyed AMD's stock price rising 30x over the past few years. If you think it's going to rise by another 30x, prepare to be disappointed. Another decade or more of massive growth is already priced in.


> but as long as they don't provide a CUDA-like ecosystem and experience

Let's ROC!!


The real threat to Intel comes from ARM, not AMD. AMD is also threatened by ARM.

If you compare with Intel's numbers, then AMD is a dwarf. And the growth figures loosely follow Intel's:

https://www.intc.com/investor-relations/investor-education-a...


ARM is an architecture, and the competitors there are pretty much all fabless, just like AMD. Even if there were a market shift to that architecture, they could just do this again:

https://www.amd.com/en/amd-opteron-a1100


I agree that ARM is a threat to AMD as well, but AMD has two things going for it:

1. Stronger x86 design: AMD's recent CPU releases have shown they are inching ahead of Intel on x86 design, and are able to achieve significantly better performance per dollar. At the same time, AMD is already well into shifting a big chunk of its manufacturing to TSMC's 7nm. Intel has only just started this process.

2. A strong GPU business: Yes, they are second to Nvidia, but given the design skills they are showing on the CPU side, I expect that gap will narrow very quickly. Both Sony and Microsoft have chosen AMD for CPU and GPU in the PS5 and Xbox Series X, with support for full 4k ray-tracing. Given how long this generation of consoles will be on the market for (likely 5-10 years at least), it is a strong forward indicator of roadmap strength.

tl;dr: I expect AMD will weather* the ARM storm better than Intel.

* Originally a typo as "whether". Thanks for the correction!


I wouldn't interpret Sony's and Microsoft's decision for AMD graphics as anything other than Nvidia being dickheads.

Keep in mind that Apple is also exclusively building Macs with AMD graphics cards. They don't even support Nvidia cards as eGPU anymore. The rumour is that Nvidia is not willing to do any customised designs and someone at Apple is very upset with Nvidia.


Nvidia simply isn't interested in consoles because of how thin the margins are:

https://www.extremetech.com/gaming/150892-nvidia-gave-amd-ps...


Unless it is their own one.

https://www.nvidia.com/en-us/shield/


Does everyone here forget that the Switch is a custom Nvidia Tegra processor?


Is that Tegra not off-the-shelf? I didn't think it was custom, due to said reluctance to do custom designs.


I may have been misled, but here's my source:

> "Nintendo Switch is powered by the performance of the custom Tegra processor."

https://blogs.nvidia.com/blog/2016/10/20/nintendo-switch/


Same Tegra that was in Tesla MCU. Off the shelf.


No, but I rather pointed out the one over which Nvidia has full control, and which it uses to sell Nvidia-optimized games.


They were apparently interested because Nvidia is in the Nintendo Switch.


> Keep in mind that Apple is also exclusively building Macs with AMD graphics cards.

Any reason to believe that will continue to be true when Apple move to their own ARM chips? No technical reason they couldn't keep using AMD GPUs, but Apple seems to be leaning pretty hard into getting as vertically integrated as possible.


> I wouldn't interpret Sony's and Microsoft's decision for AMD graphics as anything other than Nvidia being dickheads.

I think that's a large component, but on top of pricing and the like, I'd add that Nvidia is being a dickhead about openness in their hardware/software stack. Documentation of that stack is important for AAA game optimization over the lifetime of the console.

Additionally, there are important aspects of AMD's GPU architecture that are advantageous for teams squeezing the most performance out of a fixed platform. Specifically, as far as I am aware, AMD's compute is much more flexible at context switching while the graphics pipeline is active, which at least used to be a problem for Nvidia's architecture.


Don't forget that almost every desktop and laptop CPU that Intel ships has a GPU. That probably gives Intel a larger installed base of GPUs than AMD and Nvidia. Intel is also entering the discrete GPU market for the first time in over 20 years.


How is better performance per dollar better for AMD? Intel has much better margins; if AMD went with those margins they wouldn't sell shit.


I think that's the point. AMD sells compute power at a lower price point than Intel. If AMD can continue to lower the retail price of CPUs, they will chip away further at Intel's market share.

Margin means nothing if people don't buy your product.
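A toy version of that margin-vs-volume argument, with invented numbers (not AMD's or Intel's actual figures):

```python
# Invented numbers only: a thinner per-unit margin can still produce more
# total profit if it wins enough extra volume.
def total_profit(units, price, unit_cost):
    return units * (price - unit_cost)

high_margin = total_profit(units=1_000_000, price=400, unit_cost=150)  # fat margin
low_margin = total_profit(units=3_000_000, price=300, unit_cost=180)   # thin margin

print(high_margin, low_margin)  # 250000000 360000000
```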


Exactly this. AMD is willing to forgo margin to make up for it in volume. Being ahead of Intel in outsourcing manufacturing will help them keep up with demand.


> Being ahead of Intel in outsourcing manufacturing will help them keep up with demand.

I don't think that follows. You realize the world is constrained on leading-node fab capacity? And that by going fabless, AMD now has no guaranteed capacity?


*weather (sorry for nitpicking)


I think this has more to do with the irrationality of the customers than any legitimate technical reason. From a technical perspective, AMD is crushing Intel and will be for the foreseeable future.

Maybe there are enough suckers to keep Intel afloat. I couldn't say.


While most of the focus here is understandably on the CPU side, there seem to be some interesting shifts taking place on the GPU side.

AMD currently has a process lead over Nvidia, and this is rumoured to continue for a little while longer: apparently the first consumer Ampere chips are being fabbed on Samsung's inferior 8nm process due to lack of capacity at TSMC for the next few months.

Nvidia has clearly had an architecture advantage, although RDNA2 may close this gap, depending on how Ampere performs.

While Nvidia has had a much stronger showing in the GPGPU space, with CUDA helping it be the clear current winner, this also appears to have driven architecture decisions at Nvidia with the focus on tensor cores.

In gaming, Nvidia has put a lot of work into utilising these tensor cores for Deep Learning Super Sampling (DLSS). The idea being that you render at a lower resolution and then use deep learning to upscale in real-time to higher resolutions. DLSS 2.0 made some leaps in quality and DLSS 3.0 is on the horizon. It will be interesting to see:

a) How well can they get this working?

b) Is AMD working on its own version of this?

c) If so, how well is the RDNA architecture suited to this approach?
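The raw arithmetic behind the render-low/upscale-high idea is simple; as an illustration (1440p as the internal resolution is just a common example here, since the actual render resolution varies by game and quality mode):

```python
# Pixel-count savings from rendering at 1440p and upscaling to 4K.
# The internal resolution varies by game/quality mode; this is illustrative.
def pixels(width, height):
    return width * height

native_4k = pixels(3840, 2160)  # 8,294,400 pixels per frame
internal = pixels(2560, 1440)   # 3,686,400 pixels per frame

print(f"{internal / native_4k:.0%} of the shading work")  # 44% of the shading work
```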

Will be interesting to watch how this plays out!


Let's put it this way: Octane Render has dropped Vulkan support and replaced it with the CUDA-based OptiX 7.

https://home.otoy.com/render/octane-render/

So how is AMD going to get relevant in the Hollywood and TV studios that are the big buyers of OctaneRender?


I wasn't aware that OctaneRender was actually a large portion of the Hollywood and TV studio segment.


> apparently the first consumer Ampere chips are being fabbed on Samsung's inferior 8nm process due to lack of capacity at TSMC for the next few months)

I just wanted to clarify to anyone else that was initially confused, that the parent is referring to Nvidia's next-generation GPU architecture, not the ARM CPU developer.


What's your source? Ampere is shipping. AMD has no fab advantage since they have no 7nm enterprise cards. Their entire enterprise line has been somewhat of a joke to date.


You are correct, the enterprise Ampere A100 is on the TSMC 7nm process. I should have been clearer that I was referring to the consumer Ampere GeForce cards due later this year.

The 8nm rumours have been widely reported [1] but at this point are just that: rumours.

[1] https://www.tweaktown.com/news/73592/nvidias-ampere-geforce-...


Random comment about AMD, but damn, their CPU line naming is really confusing: between Zen, Zen+, Zen 2, Threadripper, Ryzen 3/5/7/9... Ryzen actually spans three different architectures? Then there's Ryzen 7 2000, 3000, and now 4000. But for the laptop CPUs the architectures are actually different: Zen 2 isn't used in the Ryzen 3000 mobile CPUs. Then you can look at Best Buy and see a laptop listed as using a 3rd-gen Ryzen, and I'm not sure what that is actually referring to. I'm not sure how this compares to their Epyc line either. I still need to read up on that...


How is this any different from Intel's Core i7? Core i7 is a line of architectures dating from 2008. The i7-950 is a quad-core Nehalem; the i7-2600K was a quad-core Sandy Bridge.

Then Ivy Bridge, then Haswell. Crystalwell (laptop-only L4 cache version). Broadwell. Skylake. Ice-lake. Skylake-X. Sapphire Rapids. Etc. etc.

All under the "Core i7" name, despite being a ton of different microarchitectures.

---------

The "innovation" was realizing that customers want a long-running name tied to a price point. The Intel i7 is the $300 processor, be it from 2008 or 2020. Customers otherwise don't really care about the specific hardware details (AVX, BMI instructions, 256-bit or 128-bit load/store mechanisms, AVX-512, etc.).

For the technical people who DO care about those details, Intel (and AMD) release manuals on the details. We know it's more important to read the number that comes after the name: in "Ryzen 9 3950X", the "3950" is way more important from an architectural perspective than the "Ryzen 9" part.

The "Ryzen 9" or "Core i7" part is just simplified marketing, for the people who are more concerned with price points than technical details.


Zen was the first core design, Zen+ was an enhancement on it, Zen 2 the newest generation. This is analogous to intel chip generations.

Ryzen 3, 5, 7 and 9 are like your Core i3, i5, i7 and i9 - market differentiators.

I agree that when you start looking at the actual model numbers, they're all over the place. Zen 2 laptop products are 4000 series, but Zen 2 desktop products are 3000. I think this was a mistake, personally.
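A toy decoder for the desktop naming scheme described above, just to make the generation offset concrete. It covers only the generations discussed in this thread, and AMD has broken the pattern with some re-released SKUs, so treat it as an illustration rather than a reliable lookup:

```python
import re

# Desktop series number -> core architecture, per the scheme discussed above.
DESKTOP_ARCH = {1000: "Zen", 2000: "Zen+", 3000: "Zen 2"}

def arch(model: str) -> str:
    """Toy lookup, e.g. "Ryzen 7 3700X" -> "Zen 2"."""
    number = int(re.search(r"\d{4}", model).group())
    series = (number // 1000) * 1000
    if model.rstrip().upper().endswith("G"):
        series -= 1000  # desktop "G" APUs use the previous generation's cores
    return DESKTOP_ARCH[series]

print(arch("Ryzen 7 3700X"), "|", arch("Ryzen 5 3400G"))  # Zen 2 | Zen+
```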


This is standard for CPU manufacturers. I haven't had a clue what the hell has been going on since they killed off the Pentium 3.


Don't forget gems like the Ryzen 5 3400G, which is a Zen 1 (not Zen2) APU.


The 3400G is Zen+. It's the 2400G which is Zen 1.


Yeah. Other "favorites" with unclear directions about where the products are heading include Microsoft's Xbox naming: Xbox, Xbox 360, Xbox One, Xbox One S, Xbox Series X (compare against PlayStation 1-5), and Google's chat/video-call product lines: Duo, Hangouts, Meet, etc.


To be honest, Nintendo is just as bad at naming their consoles, but for some reason gets a lot less hate (outside of the Wii U):

Nintendo Entertainment System, Super Nintendo Entertainment System, Nintendo 64, GameCube, Wii, Wii U, Nintendo Switch, Nintendo Switch Lite

or the handheld ones

Game Boy, Game Boy Pocket, Game Boy Light, Game Boy Color, Game Boy Advance, Game Boy Advance SP, Game Boy Micro, Nintendo DS, Nintendo DS Lite, Nintendo DSi, Nintendo DSi XL


Don't forget about the crown jewel of the Nintendo name collection, the "New" Nintendo 3DS / "New" Nintendo 3DS XL.

Which is a whole heck of a lot worse than the Wii U, in my opinion.


you missed the "Xbox One X"


Zen, Zen+ and Zen 2 are the architectures. Threadripper and Ryzen are product lines. I find that quite straightforward.

The really confusing part is that the Ryzen 4000 APUs will be Zen 2 architecture, but the desktop CPUs without integrated graphics and the mobile Ryzen 4000 series are Zen 3 architecture.


While unfortunate, this has been the case since Ryzen launched. Ryzen 2000G(E)/U series were Zen-based like Ryzen 1000(X), Ryzen 3000G(E)/H/U series were Zen+-based like Ryzen 2000(X), Ryzen 4000G(E)/H(S)/U is Zen2-based like Ryzen 3000(X(T)).

I suspect they do this because the APUs typically launch half a year after the GPU-less variants.

What's really confusing and unfortunate is that there are some Ryzen 1000 series variants (Ryzen 3 1200, Ryzen 5 1600) that were re-released well over a year after their initial launch and which are actually Zen+ based.


Actually, the product is called AMD Ryzen™ Threadripper™ by AMD (see https://www.amd.com/en/products/ryzen-threadripper); in general that is reduced to Threadripper for sanity :)

I found a nice walkthrough of Ryzen products at this page: https://specby.com/ryzen-explained-r3-r5-r7-threadripper-epy...

And the CPU and GPU roadmap is worth a look as well: https://www.digitaltrends.com/computing/amd-ryzen-radeon-roa...


Ryzen 4000 mobile parts were released early this spring with Zen 2 cores in a monolithic design. The model numbers end in U or H.

Ryzen 4000 APUs were just announced and also use Zen 2 cores in a monolithic design. These have model numbers that end in G.

Zen 3 based desktop parts are expected late this year. If they follow past naming, they will also be Ryzen 4000 with model numbers sporting an optional X at the end, or no letter suffix.


And zen3 is planned for release later this year...


zen3 has already launched with mobile processors


Those are Ryzen 4000 but Zen 2 (yes, really).


Good for them. I daily drive an AMD Hackintosh (3900X) and the price to performance ratio of this chip is excellent.

More broadly, consumers are the real winners of this Zen-powered competition of the last few years. Intel first dropped prices aggressively, and now, with them shaking up the tech org, it seems likely the two companies will have to fight one another for consumer dollars for years to come.


What are the things that don't work? Does power management work? Suspend/resume? Audio? Thunderbolt? Multiple monitors?


Sleep is pretty tough to get working, but some folks have managed recently; WiFi works with a compatible card; FaceTime/iMessage work flawlessly.

Lots more info at amd-osx.com


Haven't looked into Hackintoshes for a while. Surprised to read that AMD chips are compatible now. Nice.


Tempting... Does it crash at all? How is upgrading for security updates?


No crashes, security updates work. AMD Hackintoshes are solid.


They are "solid" in the context of the hackintosh community (and non-Intel hackintoshes are considered less solid and more risky even within that community). I would dare to say they are not solid by the standards and expectations of anyone else, and I'm saying that as a hackintosh user running one of the most recommended "golden builds", with components carefully selected to be as close to a real Mac as possible.

And it is still full of issues, intermittent and persistent; every OS update is a stress, every Clover/driver update is a stress and a risk, and so on. Yet, for a hackintosh, it is solid.

I wouldn't recommend it to anyone and I regret spending money on it ;)


Is your workflow OSX-specific? I switched to an MBP and OSX two years ago for iOS development, but OSX has been getting slower and slower with each update. And I'm running a 15" i9 with 32GB RAM and a Vega 20; I see noticeable UI lag on my 5K monitor with native lightweight apps (e.g. resizing Telegram/WhatsApp). Chrome is getting slower and Firefox is no champ either.

I recently booted into Win 10 via Boot Camp for some game and was shocked at how much smoother the experience was. I need to do some benchmarking, but just running VS Code and Docker felt noticeably faster on Win 10 (same machine), and Macs have terrible Windows drivers.


I've spent the last 3 months slowly trying to move to WSL/WSL2 on weekends, and the experience has been really bad IMO.

Right now I'm in some state where I somehow deleted my Ubuntu WSL VM, and nothing I do will get it to reinstall so that I can use WSL again. I'm so sick of dealing with this OS. It actually reminds me of trying to get my hackintosh to work and wasting an entire weekend testing different .kexts before I could even get to doing the actual work I wanted to do (code).

With that said, Catalina/Mojave have been insanely buggy and I'm dying for a middle ground between OSX and Windows that isn't Linux. I wish Cocoa were open-sourced.

But at least on my 16" MBP I can open it, maybe have sound not work, have docker/WindowServer/kernel_task consume all of my memory for no reason, and have to restart it every few days; but I can usually just open it and code, and not worry about breaking ancillary stuff that takes me a day or three to fix.


I think I could say my workflow isn't Mac OS specific, but for my own use, I think it is. I require 1st class Unix userland tools (which WSL isn't), fast native terminal (which nothing on Windows and most on Linux aren't) and due to me not being 20 anymore, an OS that "Just Works" (which Linux isn't for sure and Windows most often isn't either) that doesn't actively spy on me (which Windows does) and runs on a well made hardware (which Mac OS does only on Apple machines). I've talked about that at lengths, feel free to check my comments, for me Mac OS is the only viable OS right now.


You should give OpenCore a try. Everyone seems to have a better experience with that now. I was surprised at how easy it was compared to Clover. Don't be the first to update to a new 10.x.y release, but even the latest 10.15.x releases became compatible pretty quickly.

My system is very stable (“solid”). My usecase is web development and occasional Xcode, so ymmv.


Good luck with Apple Silicon.


Apple will still support Intel for the next five years, if not longer. I'm not sure if this is a problem for today.


Obviously it is not a problem for today, nor during the transition phase.

Afterwards those red Fiats with Ferrari stickers won't do anymore.


Hackintosh is actually more like a Ferrari with a FIAT sticker, performance-wise.


Looking to buy a Ryzen laptop (G14) in the near future, those chips are amazing performers and have great battery life.

Well deserved record quarter.


I really want the G14 as a durable performant work machine but it has no webcam :(


I like the lack of webcam in my G14 - I see it as a safety feature. Also, having a separate device for that means I can upgrade it independently.

What I don't like though is the lack of PageUp, PageDown, Insert, Home and End keys - this took some time getting used to.

Still, performance and battery life more than make up for all that. And the screen is also decent.


I wouldn't compromise my day-to-day work experience just because every other aspect of the G14 is perfect. It is an incredible machine (especially for the money), but if it doesn't get the job done it's ultimately worthless.

USB webcams or cellphones are a pain to deal with, especially if you just want to grab 1 device and run to a meeting room. "Oops i forgot my webcam brb". Cellphones are problematic because this means you now have to run some sort of hybrid of meeting software between PC and phone. This can increase cognitive load and distract from the actual purpose of the meeting.


IMO it doesn't really matter anymore: considering most of us have a phone with a front-facing camera of much better quality than most/all laptop cameras, it can easily substitute for a laptop's camera.


Is there some easy-to-use software for using your phone as a webcam for your laptop (Android -> Linux/Windows/Mac)? And do you use a stand for the smartphone?

Otherwise you need to connect to each conference with multiple devices: choose which microphone to use, share a presentation on one device but the camera on the phone, and so on. Doable, but annoying.


There's this: https://play.google.com/store/apps/details?id=com.dev47apps....

It's a little temperamental but works without too many issues.


Almost any smartphone is a better webcam than almost every webcam money can buy.


Why do you need a webcam? I WFH and all my meetings are audio-only; optionally someone shares their screen to show a demo or a presentation. We don't show our faces. The majority of my colleagues have duct tape over their laptop's webcam. So for me, the missing cam on the G14 is a feature.


Get an external webcam?


Yeah, you can tell they designed that machine way before the pandemic, when WFH wasn't the norm and they had switched to a no-webcam-on-gaming-laptops mantra.

ASUS engineer: "laptop webcams have shitty quality and gamers don't use them anyway, let's just not include one and save ourselves the BOM cost; applause from bean-counters"

Covid-19 WFH: "I'm gonna end this man's whole career"

I'm sure their hindsight is now 20/20 though.


MacBook Pros got the right balance between usability, features and power a long time ago; other companies should just copy and modernize that. Not having a webcam on a laptop is unacceptable (though having a physical switch on it is a great privacy feature).


I disagree, other companies should definitely stay away from copying the modern Macs.

I like having choices regarding OS, hardware configuration, ports, keyboards, displays, upgradeability, repairability, etc.

If you want a Mac copy then the Mac will be the best anyway.


I don't like the post Steve Jobs direction that the Macbook Pro took, so it's a no-go for me. I had a company macbook pro in 2008 and I loved it (except the OS and the keyboard layout). It had great sound (I had more expensive laptops with worse speakers since then). Also the display was perfect for me (especially outside in sunshine...). Maybe it's just my memory, but I don't feel that the current laptop offerings are that much better.


Hard pass. People who don't buy the MacBook Pro do so because they don't like what's on offer. There's no point copying it. For example, the latest laptops from Dell/ASUS/etc. have 120Hz screens, sometimes even touch screens. If they were to just copy the Mac, we would never get these amazing features.


It was a shitty deal to not have a camera even before the pandemic. It seems they thought only gamers would want to buy it.


A couple of years ago I remember seeing a video where a Google engineer said they were working on CUDA-to-AMD compilers and a push to standardize CUDA. What happened to that? Or am I misremembering something?


This? https://research.google/pubs/pub45226/

AMD already has a CUDA-to-ROCm transpiler, but their libraries are so lacking that many things cannot be converted.
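For context, the transpiler referred to is AMD's hipify, which does source-to-source translation from CUDA to HIP. At its core that is largely API renaming, as in this toy sketch (the real tool also handles kernel launch syntax, headers, and far more APIs; the rename table here is a tiny hand-picked subset):

```python
# Toy sketch of the renaming a CUDA->HIP transpiler performs. The real
# hipify tool is much more thorough; this subset is only illustrative.
RENAMES = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaFree": "hipFree",
}

def toy_hipify(cuda_src: str) -> str:
    # Replace longer names first so "cudaMemcpyHostToDevice" isn't
    # partially rewritten by the shorter "cudaMemcpy" rule.
    for cuda_name, hip_name in sorted(RENAMES.items(), key=lambda kv: -len(kv[0])):
        cuda_src = cuda_src.replace(cuda_name, hip_name)
    return cuda_src

print(toy_hipify("cudaMemcpy(dst, src, n, cudaMemcpyHostToDevice);"))
# hipMemcpy(dst, src, n, hipMemcpyHostToDevice);
```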


The real question facing us now, I believe, is how is AMD going to compete with ARM-based silicon?

I believe that ARM is on the path to dominance due to performance per watt. Does AMD have a path to continue to win at that game?


I think it ultimately boils down to who has the best CPU architecture if you are talking exclusively about performance-per-watt. Right now, it feels like AMD/ARM are going to be very competitive with each other in the mobile segment, but only on paper. They mostly stay in their own separate market arenas. Apple may disrupt this soon.

The bigger picture is that x86 is a platform that most of the business world runs on top of right now. ARM is certainly pushing into that arena, but AMD is keeping the x86 offering very attractive.

I am of the camp that there is nothing intrinsically wrong with x86, and especially not its recent implementations. It is an old & dirty ISA, but it gets the job done. Every scenario on earth has been thrown at it and it has adapted to suit. Decades of iteration and testing with billions of participants.

All AMD needs to do is continue cranking out 100W+ TDP parts that tear through workloads. The current style of ARM devices cannot keep up with power budgets like that. I believe they would have to completely redesign their architecture if they wanted to move from 5-15W up to something like the toasty 225W TDP of the 7742.


Side note: is there some reason why the columns in the tables are ordered most-recent, least-recent, second-most-recent?


It is not least-recent, it is YoY.

Current quarter, year-ago quarter, previous quarter.


Man, I really wish I'd bought AMD stock when Ryzen was announced.


If Intel has another delay, they're dead. They'll be like Boeing (though their mistakes won't directly kill people). It'll mean that the engineering culture is dead and the best minds have already left the company. Everyone at Intel who predicted the wrong schedule instead of a realistic one should be fired. They have destroyed a national champion through their short-term greed.


Agreed, the Opteron completely destroys the Pentium 4, there's no coming back from that!


Their best minds have definitely not left the company. Probably, just like with Boeing, they're just watching the clock, waiting for their retirement and for the quarterly bonus to come in.

Let's not pretend Intel is dead, they just had a record quarter and my friends working there still got sizable bonuses.


What's the best-case scenario for them, though? The mobile market is already permanently blown for them. Best case, AMD's chips are heavily defective next gen and Intel's price/performance blows AMD out of the water with 7nm... which AMD has already shipped.

My pet theory is that the Trump funds to keep American microchip manufacturing afloat have made Intel complacent. Maybe they're just dunces though.


Best case for them is TSMC continues to be heavily capacity constrained for the next few years, handicapping AMD's ability to pick up significant market share while Intel resolves their fab issues. Intel may also have to heavily lean into backporting new designs that have been sitting on the shelf waiting for new nodes. 14nm is finally moving past Skylake-based architectures with Rocket Lake, and there should be opportunities to backport 7nm designs to 10nm as well. Not ideal, but if they can remain roughly competitive with AMD and just out-ship them, that could get them through the next few years without losing much market share.

A US government injection of cash into Intel's fab business seems like it could get bipartisan support if Taiwan/China continue to lead the market, but Intel's problems don't appear to be cashflow-related.


Companies have growth even after they're dead. When the growth slows, by then it's too late.


According to your logic, AMD was also dead a few years ago.


AMD is the exception, not the norm. There are very few companies that can pull off that turnaround. And AMD was lucky that Intel wasn't able to continue its progress. Had Intel still been ahead of AMD, AMD's revenue wouldn't be growing.


Intel has about 10x the revenue and 10x the employees of AMD. AMD is doing well lately, but if times get tough Intel can survive for a very long time just on inertia, just like IBM and HP are surviving. AMD probably can't.

Intel also has plenty of time to get their mojo back if they still have the drive to succeed. A lot of very smart people work there. They just need leadership that can execute. In a lot of ways Intel was a victim of its own success, having a virtual monopoly on good CPUs until Ryzen came out. Leadership got lazy. Leadership needs to fix that. It's not fair to say that the engineering culture there is dead.


AMD has been around since 1969 and has survived rough times just fine.


Yeah, but they need those Intel patents.


As Intel needs theirs!


Without AMD's access to Intel patents we would all be using Itaniums by now, and I doubt AMD can manage to ever drive Intel to the ground.


Without Intel's access to AMD's 64-bit patents, Intel wouldn't have anything better than a 32-bit Pentium 4.

Intel and AMD have each other in a MAD (mutually assured destruction) patent hold. If either pulls either patent portfolio from each other, they both die dramatic deaths.

Intel owns 32-bit x86 patents... while AMD owns the 64-bit patents. Modern x64 chips cannot function unless both parts are together.


AMD64 would never have happened without Intel's licenses, so Intel would never have needed to worry about access to something that legally could never have existed.

So how is that AVX instruction support going on AMD's chips?


I'd assume the RAX register is more important, and in more widespread use, than the YMM0 register.


For all we know Itanium was never meant to trickle down.


Servers were where the money was to reboot a CPU lineage.



