Intel: Revenue: 19.7B, Net Income: 5.1B, Market Cap: 209.42B, P/E: 9.06
AMD: Revenue: 1.93B, Net Income: 157M, Market Cap: 79.2B, P/E: 133.82
Going by the sentiment of media reports, INTC is in sharp decline and AMD is killing it. Yet even in their most recent results, Intel's revenue and earnings still dwarf AMD's.
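As a sanity check on those figures (a sketch; the revenue/income lines above look like single-quarter numbers, while P/E is computed against trailing-twelve-month earnings), you can back out the implied TTM net income from the quoted market caps and P/E ratios:

```python
# P/E = market cap / trailing-twelve-month (TTM) net income, not the
# single-quarter figures quoted above. Working backwards from the quoted
# ratios shows the implied TTM earnings gap between the two companies.

def implied_ttm_income(market_cap_b: float, pe: float) -> float:
    """TTM net income (in $B) implied by a market cap (in $B) and P/E."""
    return market_cap_b / pe

intel = implied_ttm_income(209.42, 9.06)   # roughly 23.1B
amd = implied_ttm_income(79.2, 133.82)     # roughly 0.59B

print(f"Intel implied TTM net income: ${intel:.1f}B")
print(f"AMD implied TTM net income:   ${amd:.2f}B")
print(f"Implied earnings ratio: ~{intel / amd:.0f}x")
```

So on trailing earnings, the gap is roughly 39x, which is the "dwarfs" the comment above is pointing at.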
2. Everybody loves a good "underdog takes over from the big bad empire in decline" story, both the press and the commenting public. The stories about big companies absolutely crushing their competitors in quality are just not as interesting.
3. AMD can't scale up as rapidly as a SaaS company because fabs take a while to build. I'd wager Intel has (at least) five to ten years to come up with a good processor design before they get overtaken by AMD in sheer volume.
4. There are no doubt Chinese competitors with even lower sales and even higher P/E and growth rate than AMD. It'll be interesting to see how that plays out.
It’s not AMD vs Intel
It’s AMD vs Intel vs Apple vs Amazon (and Nvidia might be next).
I don't think having physical possession of the thing is always the best criteria for whether you really own it.
edit: reading more on the Edge TPU page, they definitely appear to be of limited use compared with the ones available through Google's hosted plans.
Except Intel is still growing its revenue: 43%+ YoY in Q2 for their datacenter business.
For example, my parents and all their friends weren't well off enough to afford private university, but majors at public university were limited by the government based on projected need with slots offered to only the highest test scores. They couldn't test high enough to secure a public university slot for accounting, art, architecture, education, medicine, and trades such as automotive repair or plumbing. So they all ended up with computer science and engineering degrees. Fully paid for by the government.
They basically treat the semiconductor industry like how the US treats its defense industry.
I think this is a good example of what happens when you do that.
It seems intuitive to me that as war technology progresses, the number of humans becomes less of a factor in military strength. But I know so little about this area that I couldn't guess how much alternative educational opportunities impact military applications, nor at what point in the past/future the scale might tip between wanting policies that push more people into the armed forces vs. that no longer mattering much (and I assume it would be different for different countries, too).
But I do believe good, free education should be a key part of any country, regardless of whether it helps national defense or not. If that drives up the cost of recruiting people to the armed forces, then fine. I'm no fan of them in general, but if people are going to risk their lives in wars, it shouldn't be because their choices were limited to that vs. a life of poverty.
The only chance to sit it out would be a giant technological leap as in quantum computing, I just don't see that leap.
I'd like to see it, but I'm not sure anyone can catch up now.
However, I think NVIDIA is still vulnerable—but against AWS/GCP/Azure, not Intel/AMD.
My opinion is that deep learning is moving to the cloud. That's a bigger conversation with a lot of nuances, but if you take that basic assumption, then the development of ASICs like TPU/Inferentia becomes a big threat to Nvidia.
If the biggest buyers of chips in deep learning are the clouds, and the clouds are increasingly developing their own chips for deep learning, Nvidia is in a tough spot. They'll always have a place among labs that use their own machine, and of course, Nvidia's business is bigger than machine learning, but in general I think the clouds are a real threat.
One of Nvidia's actual moats is their system-building competency, which AMD lacks. Since their acquisition of Mellanox, they can sell you a box, or a whole server-room configuration complete with network equipment.
> The cost of hardware development is the main cost contribution.

Which I think is not true with regard to both GPU and GPGPU computing. The major cost for GPUs is drivers, and for GPGPU it's CUDA. i.e., it is software.

Unlike ARM, where AWS/GCP/Azure can make their own chips and benefit from the software ecosystem already in place for ARM, there is no such thing on the GPU side. Drivers and CUDA are the biggest moat around Nvidia's GPUs. And unless developers figure out a way to drive the cost of drivers and DL software down, there is no incentive to switch away from Nvidia's ecosystem.

That is why I am interested to see how Intel tackles this area, and whether history will repeat itself as in the Voodoo, Rage 3D, and S3 ViRGE era.
If AMD could provide a backend for the most popular frameworks then they could skip over the CUDA patent issue completely.
The real problem is that it seems like AMD’s not investing substantially in software teams to make it happen.
That is the state of PyTorch support for AMD GPUs.
I do. Not everything you can do with a CUDA card is deep learning. In fact that's just one of many applications.
There has been some progress, but PyTorch still isn't fully functional with ROCm yet and that feels like a good litmus test.
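One way to run that litmus test yourself, as a minimal sketch: ROCm builds of PyTorch expose `torch.version.hip` (a version string on ROCm builds, `None` on CUDA/CPU builds), and ROCm devices are surfaced through the regular `torch.cuda` namespace. The `backend_label` helper here is hypothetical, just for illustration:

```python
def backend_label(hip_version):
    """Classify a PyTorch build from torch.version.hip:
    a version string on ROCm builds, None otherwise."""
    return "ROCm" if hip_version else "CUDA or CPU"

# Usage (requires PyTorch; guarded so the sketch stays self-contained):
try:
    import torch
    print(f"build: {backend_label(torch.version.hip)}")
    # On ROCm builds, torch.cuda.is_available() also reports the AMD GPU,
    # since ROCm reuses the torch.cuda namespace rather than adding a new one.
    print(f"GPU visible: {torch.cuda.is_available()}")
except ImportError:
    print("PyTorch not installed")
```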
I really don't want just one player, and I hope the high-level players get more competition.

I'm still interested in the Taiwan part, purely from an economic point of view. How secure are we on that front if all the eggs are in one basket? Hong Kong has fallen; Taiwan and the South China Sea are in play. That will affect the supply chain.
Intel is launching a GPU/deep learning accelerator, and Huawei is thinking about launching a GPU. PyTorch and TensorFlow work well enough on AMD GPUs. There are also custom deep learning ASICs from Google. There is simply too much competition at this point for CUDA to continue to be the standard.
Marketing fad or not, it’s not a bad business to be in.
It is barely starting. In 10 or 20 years it will be huge.
Nvidia will have to validate and launch all of its PCIe 4.0 products on Epyc/Ryzen processors, so it's not like AMD won't benefit from the deep learning hype.
It also doesn’t help that AMD has stopped supporting ROCm for the current consumer GPUs.
AMD can't ramp up the manufacturing volume it gets from TSMC quickly. TSMC builds capacity to match roughly what it is contracted for. I can only assume that any extra capacity they planned sells for a very good price.
Neither AMD nor TSMC can fully exploit Intel troubles because they can't foresee what happens in Intel manufacturing. Intel is selling chips like hotcakes.
TL;DR Intel has high manufacturing volume and the ability to make money with less competitive product.
The problem for Intel is AMD is putting out a clearly better value proposition in the high-margin server space, with single socket systems beating Intel dual socket systems, and other advantages like more PCI lanes and lower power consumption.
That will drive Intel prices, and margins, down. That means a lower Intel stock price as probable future earnings shrink.
I haven't fact-checked that, but given Intel's market share, it sounds plausible.
Intel may be behind on process and microarchitecture, but as long as they can ship that volume, and with a far better gross margin than AMD (Intel 53%, AMD 44%), I wouldn't count them out just yet.
TSMC also knows full well that Intel will switch back to their own fabs the instant it's practical for them to do so, making them a less reliable long term customer than Apple, AMD, etc. and so won't be inclined to give them priority or much of a break on pricing.
The other factor is going to be that TSMC has 5nm in risk production already. If they bring 5nm fabs online before Intel has a real answer to their 7nm process, AMD could be buying capacity from the 7nm and 5nm fabs at the same time.
For the most part it's a worthwhile trade for AMD because you get much greater overall compute power with only marginally diminished frequencies, but it's still something for Intel to hang its hat on in benchmarks.
PE or net income is not really relevant for AMD as they are in very high growth phase (and because their gross margins are fine). The most important thing here is that Intel is failing to execute on the upcoming fab process which will unequivocally make them less competitive from 2021 - 2023 (at the very least). The uncertainty of Intel being able to execute here on out is also impacting their share price since they have had a long history of failing to execute on 10nm and now on 7nm.
How volume-limited is TSMC's process?
Intel runs their own fab and for whatever reason they've messed this up repeatedly (unclear what the reason is, but it's at least partly a management/strategic failure). Their focus on old designs that are currently profitable instead of the future was a short term benefit and a long term mistake.
AMD uses TSMC (Taiwan Semiconductor Manufacturing Company) to fab their chips. Apple does the same.
Intel's profits from older technology and server sales have made them slow to recognize the severity of their situation. First they missed mobile, now they've had years of delays with their own manufacturing process, and now they're going to feel pressure from ARM on the desktop and probably on the server.
No US based fab for modern chips is a concern for national security (particularly given Chinese interest in eventually taking over Taiwan).
AMD spun off their fabs a while ago (Global Foundries) and uses TSMC for modern chip manufacturing. This gives them lower overhead now, but I'm not sure it's a much stronger position overall. There's a benefit to owning your own fab if you can pull it off.
I think Intel is at serious risk long term. They need someone who recognizes the existential crises they're in and can save them. Their current results are a lagging indicator.
Processor IP is Intel's bread and butter.
I think if they do it it's the beginning of irrelevance, really.
What might be more interesting is if Intel became a champion of RISC-V... that would be like Microsoft and Linux.
Paradoxically this is likely WHY they are getting punished so much by the market. Their current revenue/margins are too juicy to give up, despite the fact that it's causing them massive pain on the technological front.
Very simple explanation: stocks reflect future (expected) performance, not present performance. So the stock market is really just telling you they expect Intel to continue failing.
Gross oversimplification, because it's one ratio that can be interpreted in so many ways. But this is one way to look at it.
That is because AMD only has GPUs and CPUs while Intel has a lot more side business (VDSL modem chips, Thunderbolt, FPGAs, ...).
Additionally, the CPU side of Intel doesn't look very promising for the future (architectural issues like the whole side-channel attack saga, technical issues in their lithography process); they will have to invest a lot of money to get this under control. Meanwhile, AMD has a wildly positive outlook and is only limited by the capacity of TSMC's fabs: whatever they produce gets ripped out of their hands by customers.
From a financial point of view, Intel also has the problem that AMD was drastically undercutting prices for competitive products... part of the Intel stock price was the ridiculous amount of money they could squeeze out of customers for their top-notch processors for years. AMD all but flattened that as Intel was forced to cut their prices by a bunch.
The reality is, there is no way TSMC would be able to manufacture all of Intel's CPUs even if both wanted it. What I can see happening is that Intel licenses some IP from TSMC.
AMD has a ton of room to grow. Intel does too, but not in the processor business where it's had a near-monopoly in some spaces. Intel also has challenges in other areas, and ARM is looking strong too.
It also doesn't help Intel that the long-running data leakage issues hurt it a lot more than AMD.
AMD is in a better position to seriously increase the E by utilizing its P.
Intel has failed to use its P to seriously increase its E for a few years, and just experienced a string of setbacks with long-term consequences. (This is why buying AMD a couple weeks ago was an obvious good move.)
Stock markets assign a high P/E to anything they consider "growth". TSLA and CMG are both considered "growth".

For some reason the present market is geared more towards "growth" than "value".
As an investor it would seem that AMD has more potential upside. Add to that they are currently producing better technology at the moment...
Decline means decreasing. Which is true. But they are decreasing from a big number.
AMD stock is much more of a meme stock than TSLA.
3-5 years from now, will they still be on 14nm only, without any volume production on 10nm or 7nm? How much can those 300-watt 14nm CPUs be sold for at that point, and who will buy them?

Actually, from Intel's process development history over the past 3-5 years, it is not hard to see what is likely to happen.
You can be a market leader and still be a failing company.

Past revenue has never been, and will never be, an indicator of future success.
They are doing absolutely spectacular work, but there's still much to do, and there are significant risks.
They have been making progress on the GPU side, but as long as they don't provide a CUDA-like ecosystem and experience, I don't see them challenging NVIDIA soon in the accelerator market.
I'm pretty confident that they will continue to outpace Intel on the CPU side, but with Amazon's Graviton2 and the recent TOP500 success of Fugaku (pure ARM, no accelerators), there is still a tremendous amount of competition ahead.
To get to a P/E of 25 at the current price, earnings would need to grow:

10% per year for the next 20 years, or
20% per year for the next 10 years, or
30% per year for the next 8 years, or
50% per year for the next 5 years.
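For anyone checking the arithmetic: at a constant share price, compressing a P/E of ~134 down to 25 requires earnings to grow by a factor of about 134/25 ≈ 5.4. A quick sketch of the compounding (the year counts listed above are rounded up a bit):

```python
import math

current_pe, target_pe = 133.82, 25.0
required_growth = current_pe / target_pe  # earnings must grow ~5.35x

# Years of compounding needed at each annual earnings-growth rate:
# n = log(required_growth) / log(1 + rate)
for rate in (0.10, 0.20, 0.30, 0.50):
    years = math.log(required_growth) / math.log(1 + rate)
    print(f"{rate:.0%}/year -> {years:.1f} years")
```

This prints roughly 17.6, 9.2, 6.4, and 4.1 years, so the 20/10/8/5-year scenarios are slightly conservative: even a bit less growth than stated would get there.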
I don't think the price is unreasonably high but I don't think ROI will be very high if you buy AMD today.
That being said, I agree it looks like AMD stock is priced for something spectacular to happen, which makes me more excited about their chips than their stock.
If you want to sit on the sidelines and watch valuations soar to unreasonable levels and not try to claim a piece of that fine, but don’t cry when you see how much you missed out. AMD could be the next NVDA.
If you compare with Intel's numbers, then AMD is a dwarf, and the growth figures loosely follow Intel's:
1. Stronger x86 design: AMD's recent CPU releases have shown they are inching ahead of Intel on x86 design, and are able to achieve significantly better performance per dollar. At the same time, AMD is already well into shifting a big chunk of their manufacturing to TSMC's 7nm process. Intel has only just started this process.
2. A strong GPU business: Yes, they are second to Nvidia, but given the design skills they are showing on the CPU side, I expect that gap will narrow very quickly. Both Sony and Microsoft have chosen AMD for CPU and GPU in the PS5 and Xbox Series X, with support for full 4k ray-tracing. Given how long this generation of consoles will be on the market for (likely 5-10 years at least), it is a strong forward indicator of roadmap strength.
tl;dr: I expect AMD will weather* the ARM storm better than Intel.
* Originally a typo as "whether". Thanks for the correction!
Keep in mind that Apple is also exclusively building Macs with AMD graphics cards. They don't even support Nvidia cards as eGPU anymore. The rumour is that Nvidia is not willing to do any customised designs and someone at Apple is very upset with Nvidia.
> "Nintendo Switch is powered by the performance of the custom Tegra processor."
Any reason to believe that will continue to be true when Apple moves to their own ARM chips? There's no technical reason they couldn't keep using AMD GPUs, but Apple seems to be leaning pretty hard into getting as vertically integrated as possible.
I think that's a large component, but I'd add, on top of pricing and the like, that Nvidia is being a dickhead about openness towards their hardware/software stack, and documentation of that stack is considered important for AAA game optimization over the lifetime of the console.

Additionally, there are important aspects of AMD's GPU architecture that are advantageous for teams squeezing the most performance out of a fixed platform. Specifically, as far as I am aware, AMD's compute is much more flexible at context switching while the graphics pipeline is active, which at least used to be a problem for Nvidia's architecture.
Margin means nothing if people don't buy your product.
I don't think that follows. You realize the world is constrained on leading-node fab capacity? And that by going fabless, AMD now has no guaranteed capacity?
Maybe there are enough suckers to keep Intel afloat. I couldn't say.
AMD currently has a process lead over Nvidia (and this is rumoured to continue for a little while longer; apparently the first consumer Ampere chips are being fabbed on Samsung's inferior 8nm process due to lack of capacity at TSMC for the next few months).
Nvidia has clearly had an architecture advantage, although RDNA2 may close this gap, depending on how Ampere performs.
While Nvidia has had a much stronger showing in the GPGPU space, with CUDA helping it be the clear current winner, this also appears to have driven architecture decisions at Nvidia with the focus on tensor cores.
In gaming, Nvidia has put a lot of work into utilising these tensor cores for Deep Learning Super Sampling (DLSS). The idea being that you render at a lower resolution and then use deep learning to upscale in real-time to higher resolutions. DLSS 2.0 made some leaps in quality and DLSS 3.0 is on the horizon. It will be interesting to see:
a) How well they can get this working
b) Is AMD working on its own version of this?
c) If so, how well will the RDNA architecture be suited to this approach?
Will be interesting to watch how this plays out!
So how is AMD going to become relevant in the Hollywood and TV studios that are the big buyers of OctaneRender?
I just wanted to clarify to anyone else that was initially confused, that the parent is referring to Nvidia's next-generation GPU architecture, not the ARM CPU developer.
The 8nm rumors have been widely reported but at this point are just that, rumours.
Then Ivy Bridge, then Haswell, Crystal Well (the laptop-only L4-cache version), Broadwell, Skylake, Skylake-X, Ice Lake, Sapphire Rapids, etc.

All under the "Core i7" name, despite being a ton of different designs.
The "innovation" was realizing that customers want a long-running name based on price. The Intel i7 is the $300 processor, be it from 2008 or from 2020. Customers otherwise don't really care about the specific hardware details (AVX, BMI instructions, 256-bit or 128-bit Load/store mechanisms. AVX512, etc. etc.)
For the technical people who DO care about those details, Intel (and AMD) release manuals. We know it's more important to read the number that comes after the name: in "Ryzen 9 3950X", the "3950" is far more important from an architectural perspective than the "Ryzen 9" part.
The "Ryzen 9" or "Core i7" part is just simplified marketing, for the people who are more concerned with price points than technical details.
Ryzen 3, 5, 7 and 9 are like your Core i3, i5, i7 and i9 - market differentiators.
I agree that when you start looking at the actual model numbers, they're all over the place. Zen 2 laptop products are 4000 series, but Zen 2 desktop products are 3000. I think this was a mistake, personally.
Nintendo Entertainment System, Super Nintendo Entertainment System, Nintendo 64, GameCube, Wii, Wii U, Nintendo Switch, Nintendo Switch Lite
or the handheld ones
Game Boy, Game Boy Pocket, Game Boy Light, Game Boy Color, Game Boy Advance, Game Boy Advance SP, Game Boy Micro, Nintendo DS, Nintendo DS Lite, Nintendo DSi, Nintendo DSi XL
Which is a whole heck of a lot worse than the Wii U, in my opinion.
The really confusing part is that the Ryzen 4000 APUs and the mobile Ryzen 4000 series are Zen 2 architecture, but the desktop Ryzen 4000 CPUs without integrated graphics are Zen 3 architecture.

I suspect they do this because the APUs typically launch half a year after the GPU-less variants.
What's really confusing and unfortunate is that there are some Ryzen 1000 series variants (Ryzen 3 1200, Ryzen 5 1600) that were re-released well over a year after their initial launch and which are actually Zen+ based.
I found a nice walkthrough of Ryzen products at this page:
And the CPU and GPU roadmap is worth a look as well:
Ryzen 4000 APUs were just announced and also use Zen 2 cores in a monolithic design. These have model numbers that end in G.
Zen 3 based desktop parts are expected late this year. If they follow past naming, they will also be Ryzen 4000 with model numbers sporting an optional X at the end, or no letter suffix.
More broadly, consumers are real winners with this zen-powered competition of the last few years. Intel first dropped prices aggressively and now with them shaking up the tech org it seems likely the two companies will have to fight one another for consumer dollars for years to come.
lots more info at amd-osx.com
And it is still full of issues, intermittent and persistent; every OS update is a stress, every Clover/driver update is a stress and a risk, and so on. Yet, for a hackintosh, it is solid.
I wouldn't recommend it to anyone and I regret spending money on it ;)
I recently booted into Windows 10 via Boot Camp for a game and was shocked at how much smoother the experience was. I need to do some benchmarking, but just running VS Code and Docker felt noticeably faster on Windows 10, on the same machine, and Macs have terrible Windows drivers.
Right now I'm in some state where I somehow deleted my Ubuntu WSL VM and nothing I do will get it to reinstall so that I can use WSL again. I'm so sick of dealing with this OS. It actually reminds me of trying to get my hackintosh to work and wasting an entire weekend testing different .kexts before I could even get to the actual work I wanted to do (code).
With that said, Catalina/Mojave have been insanely buggy and I'm dying for a middle ground between macOS and Windows that isn't Linux. I wish Cocoa were open-sourced.
But at least on my 16" MBP I can open it; maybe sound won't work, or Docker/WindowServer/kernel_task will consume all of my memory for no reason and I'll have to restart every few days, but I can usually just open it and code without worrying about breaking ancillary stuff that takes a day or three to fix.
My system is very stable (“solid”). My usecase is web development and occasional Xcode, so ymmv.
Afterwards those red Fiats with Ferrari stickers won't do anymore.
Well deserved record quarter.
What I don't like though is the lack of PageUp, PageDown, Insert, Home and End keys - this took some time getting used to.
Still, performance and battery life more than make up for all that. And the screen is also decent.
USB webcams or cellphones are a pain to deal with, especially if you just want to grab one device and run to a meeting room. "Oops, I forgot my webcam, brb." Cellphones are problematic because you now have to run some hybrid of meeting software between PC and phone. This can increase cognitive load and distract from the actual purpose of the meeting.
Otherwise you need to connect to each conference with multiple devices, choose which microphone to use, share a presentation on one device, but the camera on the phone, ... . Doable, but annoying.
It's a little temperamental but works without too many issues.
ASUS engineer: "laptop webcams have shitty quality and gamers don't use them anyway, let's just not include one and save ourselves the BOM cost; applause from bean-counters"
Covid-19 WFH: "I'm gonna end this man's whole career"
I'm sure their hindsight is now 20/20 though.
I like having choices regarding OS, hardware configuration, ports, keyboards, displays, upgradeability, repairability, etc.
If you want a Mac copy then the Mac will be the best anyway.
AMD already has a CUDA-to-ROCm transpiler, but their libraries are so lacking that many things cannot be converted.
I believe that ARM is on the path to dominance due to performance per watt. Does AMD have a path to continue to win at that game?
The bigger picture is that x86 is a platform that most of the business world runs on top of right now. ARM is certainly pushing into that arena, but AMD is keeping the x86 offering very attractive.
I am of the camp that there is nothing intrinsically wrong with x86, and especially not its recent implementations. It is an old & dirty ISA, but it gets the job done. Every scenario on earth has been thrown at it and it has adapted to suit. Decades of iteration and testing with billions of participants.
All AMD needs to do is continue cranking out 100W+ TDP parts that tear through workloads. The current style of ARM devices cannot keep up with power budgets like that. I believe they would have to completely redesign their architecture if they wanted to move from 5-15W up to something like the toasty 225W TDP of the 7742.
Current Quarter, YoY Quarter, last Quarter.
Let's not pretend Intel is dead, they just had a record quarter and my friends working there still got sizable bonuses.
My pet theory is that the Trump administration's funds to keep American microchip manufacturing afloat have made Intel complacent. Maybe they're just dunces, though.
A US government injection of cash into Intel's fab business seems like it could get bipartisan support if Taiwan/China continue to lead the market, but Intel's problems don't appear to be cash-flow related.
Intel also has plenty of time to get their mojo back if they still have the drive to succeed. A lot of very smart people work there. They just need leadership that can execute. In a lot of ways Intel was a victim of its own success, having a virtual monopoly on good CPUs until Ryzen came out. Leadership got lazy. Leadership needs to fix that. It's not fair to say that the engineering culture there is dead.
Intel and AMD have each other in a MAD (mutually assured destruction) patent hold. If either pulls either patent portfolio from each other, they both die dramatic deaths.
Intel owns 32-bit x86 patents... while AMD owns the 64-bit patents. Modern x64 chips cannot function unless both parts are together.
So how is support for those AVX instructions going on AMD chips?