When the Intel 80386-33 came out, we thought it was the pinnacle of CPUs, running our Novell servers! We now had a justification to switch from ARCNET to token ring. Our servers could push things way faster!
Then, in mid-1991, the AMD 80386-40 CPU came out. Mind completely blown! We ordered some (I think) Twinhead motherboards. They were so fast we could only use Hercules mono cards in them; all other video cards were fried. 16Mb token ring was out, so some of my clients moved to it with the fantastic CPU.
I have seen some closet servers running Novell NetWare 3.14 (?) with that AMD CPU in the late '90s. There was a QIC tape & tape drive in the machine that was never changed for maybe a decade? The machine never went down (or got properly backed up).
> While the AM386 CPU was essentially ready to be released prior to 1991, Intel kept it tied up in court.[2] Intel learned of the Am386 when both companies hired employees with the same name who coincidentally stayed at the same hotel, which accidentally forwarded a package for AMD to Intel's employee.[3]
NW 3.12 was the final version I think. I recall patching a couple for Y2K. NetWare would crash a lot (abend) until you'd fixed all the issues, and then it would run forever, unless it didn't.
I once had a bloke writing a patch for eDirectory in real time in his basement whilst running our data on his home lab gear, on a weekend. I'm in the UK and he was in Utah. He'd upload an effort and I'd ftp it down, put it in place, reboot the cluster and test. Two iterations and job done. That was quite impressive support for a customer with roughly 5,000 users.
For me the CPU wasn't that important, per se. NWFS ate RAM: when the volumes were mounted, the system generated all sorts of funky caches which meant that you could apply and use trustee assignments (ACLs) really fast. The RAID controller and the discs were the important thing for file serving and ideally you had wires, switches and NICs to dole the data out at a reasonable rate.
Don't look too closely at the collision avoidance mechanism in 10base-T1S, standardized in 2020. Sure looks like a virtual token ring passing mechanism if you squint...
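(The mechanism in question is PLCA, Physical Layer Collision Avoidance, from IEEE 802.3cg. A toy sketch of the idea in C, purely conceptual rather than a real MAC, with made-up names:)

    /* Conceptual PLCA sketch, NOT a real IEEE 802.3cg MAC: a cycle of
     * "transmit opportunities" rotates through node IDs, so the medium
     * is shared without collisions, much like a circulating token. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NODE_COUNT 8

    static bool has_frame_pending(int node) {
        return node % 3 == 0;  /* stand-in for a real TX queue check */
    }

    static void plca_cycle(void) {
        puts("node 0: BEACON (opens the cycle)");
        for (int id = 0; id < NODE_COUNT; id++) {
            if (has_frame_pending(id))
                printf("node %d: sends a frame in its opportunity\n", id);
            else
                printf("node %d: stays silent, opportunity moves on\n", id);
        }
    }

    int main(void) {
        plca_cycle();  /* in hardware this repeats forever */
        return 0;
    }

The "token" never exists on the wire, only in each node's counters; hence the squinting.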
In 1996 we set up a rack (department store surplus) of Cyrix 586s (running on 486 socket C motherboards) at 75 MHz with 16 MB of RAM; it could serve 100 concurrent users with CGI scripts and image maps, doing web serving and VoIP, with over 1 million requests a month on a single T1 line.
Good luck doing that on a load balanced rack of 96 core AMD servers today.
Damn, first Intel missed out on Mobile, then it fumbled AI, and now it's being seriously challenged on its home turf. Pat has his work cut out for him.
They killed StrongARM because they believed the x86 Atom design could compete. Turns out that it couldn't and most of the phones with it weren't that great.
Intel should be focused on an x86+RISC-V hybrid chip design where they can control an upcoming ecosystem while also offering a migration path for businesses that will pay the bills for decades to come.
I'd argue that the Atom core itself could compete: it hit pretty much the same perf/watt targets as its performance-competitive ARM equivalents.
But having worked with Intel on some of those SoCs, it's everything else that fell down. They were late, they were the "disfavored" teams in the eyes of execs, they were the engineers' last priority, they had stupid hw bugs that Intel refused to fix and respin; it was everything you could do to set up a project to fail.
This was the main thing, as by that point, all native code was being compiled to Arm and not x86. Using x86 meant that some apps, libraries, etc just didn't work.
Medfield was faster than A9 and Qualcomm Krait in performance, but not so much in power (see Motorola Razr i vs M where the dual-core ARM version got basically the same battery life as the single-core x86 version).
Shortly after though, ARM launched A15 and the game was over. A15 was faster per clock while using less power too. Intel's future Atom generations never even came close after that.
Exactly. Most people still don't get it. What killed Atom on phones wasn't x86. It was partly software and mostly hardware and cost. It simply wasn't cost-competitive, especially when Intel was used to a high-margin business.
Nepotism? Like execs from different divisions fighting each other?
To me it seems they just want to keep their lock-in monopoly because they own x86. Very rational albeit stupid, but of course the people who made those decisions are long gone from the company; many are probably retired with their short-term-focused bonuses.
The opinion that x86 would always be king is nepotism/ego. It was obvious nearly 2 decades ago where compute was headed with cloud and mobile becoming the dominant areas. Neither of which x86 was well positioned for.
There was a story here a few days ago about the exact opposite: that Intel lost out to AMD on x86-64 because they were betting on Itanic to take over the 64-bit market.
> Intel should be focused on an x86+RISC-V hybrid chip design where they can control an upcoming ecosystem while also offering a migration path for businesses that will pay the bills for decades to come.
First I've heard of this. Is this actually a possibility?
RP2350 is using a hybrid of Arm and RISC-V already. Also, it's not really hard to use RISC-V not as the main computing core but as a controller in the SoC. Because the area of a RISC-V core is so small, it's pretty common to put a dozen (16 to be specific) into a chip.
Maybe I'm just spitting out random BS, but if I understood Keller correctly when he spoke about Zen, (for it) it's not really a problem to change the frontend ISA, as a large chunk of the work is on the backend anyway. If that's the case in general with modern processors, it would be cool to see a hybrid that can be switched from x86_64 to RISC-V and, to add even more avant-garde to it, associate a core or few of FPGA on the same die. Intel, get on it!
They failed because contract chip manufacturing was a huge issue back then. And they bet on slightly the wrong implementation as well. The fundamental ideas weren't broken.
Yeah, I didn't ascribe any particular reason for it. I was disappointed to hear the news; it came quietly after a long period of silence, which came after a much longer period of hype.
> and, to add even more avangarde to it, associate a core or few of FPGA on the same die
The use cases for FPGAs in consumer devices are ... close to zero unless you're talking about implementing copy protection since reverse engineering FPGA bitstreams is pretty much impossible if you're not the NSA, MI6 or Mossad with infinite brains to throw at the problem (and more likely than not, insider knowledge from the vendors).
From what I gather the one time I got to speak with chip engineers is that real estate is still at a premium. Not necessarily the total size of the chip, but certain things need to be packed close together to meet the timing requirements. I think that means that you'd be paying a serious penalty to have two parallel sets of decoders for different ISAs.
Qualcomm made a 216-page proposal for their Znew[0] "extension".
It was basically "completely change RISC-V to do what Arm is doing". The only reason for this was that it would allow a super-fast transition from ARM to RISC-V. It was rejected HARD by all the other members.
Qualcomm is still making large investments into RISC-V. I saw an article estimating that the real reason for the Qualcomm v Arm lawsuit is that Qualcomm's old royalties were 2.5-3% while the new royalties would be 4-5.5%. We're talking about billions of dollars and that's plenty of incentive for Qualcomm to switch ISAs. Why should they pay billions for the privilege of designing their own CPUs?
Yeah, Otellini disclosed that Jobs asked them for a CPU for the iPhone and that he turned the request down because Jobs was adamant on a certain price and Otellini just couldn't see it.
Even if it was hard to foresee the success of the iPhone, he surely had the Core Duo in his hands when this happened, even if it hadn't launched yet, so the company had just found its footing again and should've attempted this moonshot: if the volume is low, the losses are low; if the volume is high, then economies of scale make it a win. This is not hindsight 20/20; this was true even if no one could've foreseen just how high the volume would be.
Not to mention that ARM keeps closing in on their ISA moat via Apple, Ampere, Graviton and so on. Their last bastion is the fact that Microsoft keeps botching Windows for ARM every time they try to make it happen.
Not really Microsoft; rather the Windows developer ecosystem. Backwards compatibility is the name of the game in PC land, and as such there is very little value in taking on additional development costs to support ARM alongside x86 for so little additional software sales.
Apple doesn't matter beyond its 10% market share; they don't target servers any more.
Ampere is a step away from being fully owned by Oracle; I bet most of the HN ARM-cheering crowd is blissfully unaware of it.
Intel has come back recently with a new series of "Lunar Lake" CPUs for laptops. They are actually very good. For now, Intel has regained the crown for Windows laptops.
Maybe Pat has lit the much needed fire under them.
> Future Intel generations of chips, including Panther Lake and Nova Lake, won’t have baked-on memory. “It’s not a good way to run the business, so it really is for us a one-off with Lunar Lake,” said Gelsinger on Intel’s Q3 2024 earnings call, as spotted by VideoCardz.[0]
> “It’s not a good way to run the business, so it really is for us a one-off with Lunar Lake,”
When you prioritize yourself (the "way to run the business") over delivering what customers want, you're finished. Some companies can get that wrong for a long time, but Intel has a competitor giving the customers much more of what they want. I want a great chip and honestly don't know, care, or give a fuck what's best for Intel.
I thought the OEMs liked the idea of being able to demand high profit margins on RAM upgrades at checkout, which is especially easy to justify when the RAM is on-package with the CPU. That way no one can claim the OEM was the one choosing to be anti-consumer by soldering the RAM to the motherboard, and they can just blame Intel.
OEMs like it when it's them buying the cheap RAM chips and getting the juicy profits from huge mark-ups, not so much when they have to split the pie with Intel. As long as Intel cannot offer integrated RAM at price equivalent to external RAM chips, their customers (OEMs) are not interested.
Intel would definitely try to directly profit from stratified pricing rather than letting the OEM keep that extra margin (competition from AMD permitting).
X Elite is faster, but not enough to offset the software incompatibility or dealing with the GPU absolutely sucking.
Unfortunately for Intel, X Elite was a bad CPU that has been fixed with Snapdragon 8 Elite's update. The core uses a tiny fraction of the power of X Elite (way less than the N3 node shrink alone would offer). The core also got a bigger frontend and a few other changes which seem to have improved IPC.
Qualcomm said they are leading in performance per area and I believe it is true. Lunar Lake's P-core is over 2x as large (2.2 mm² vs 4.5 mm²) and Zen 5 is nearly 2x as large too at 4.2 mm² (even Zen 5c is massively bigger at 3.1 mm²).
X Elite 2 will either be launching with 8 Elite's core or an even better variant and it'll be launching quite a while before Panther Lake.
LNL is a great paper launch, but I have yet to see a reasonably priced LNL laptop. Nowadays I can find 16GB Airs and X Elite laptops for 700-900 bucks, and once you get into $1400 territory, just pay a bit more for M4 MBPs, which are far superior machines.
And also, they compete in the same price bracket as Zen 5, which is more performant with not that much worse battery life.
No idea when/what/how/etc that'll translate to actual production.
---
Doing a bit more poking around the net, it looks like "first half 2025" is when actual production is pencilled in for TSMC Arizona. Hopefully that works out.
No disagreement here; the link I provided was specifically for TSMC Washington.
I'm not saying that TSMC is never going to build anything in the US, but rather that the current Lunar / Arrow Lake chips on the market are not being fabbed in the US because that capacity is simply not online yet.
2025H1 seems much more promising for TSMC Arizona compared to the mess that is Samsung's Taylor, TX plant (also nominally under construction).
Yeah, but can they run any modern OS well? The last N Intel laptops and desktops I've used were incapable of stably running Windows, macOS, or Linux. (As in, the Windows and Apple ones couldn't run their preloaded operating systems well, and loading Linux didn't fix it.)
Very strange. Enough bad things can be said about Intel CPUs, but I have never had any doubts about their stability. Except for that one recent generation that could age to death in a couple of months (I didn't have any of these).
AMD is IME more finicky with RAM, chipset / UEFI / builtin peripheral controller quality and so on. Not prohibitively so, but it's more work to get an AMD build to run great.
No trouble with any AMD or Intel Thinkpad T models, Lenovo has taken care of that.
Is it? I presume that a large chunk of AMD's $3.5B is MI3XX chips, and very little of Intel's $3.5B is AI, so doesn't that mean that Xeon likely still substantially outsells EPYC?
Not necessarily. In the past 5 years, the x86 monopoly in the server world has been broken. ARM chips like Graviton are a substantial fraction (20%?) of the server CPU market.
By whom though? I don't see how any company directly competing with Intel (or even orthogonal, e.g. Nvidia and ARM) could be allowed to buy Intel (they'd need approval in the US/EU and presumably a few other places) unless it's actually on the brink of bankruptcy?
That's just a thing that needs to be renegotiated - highly doubt these two are getting into a patent war given the state of x86. Unless Intel gets acquired by a litigious company to go after AMD :shrug:
I don't agree that this is surprising. To be "dominant" in this space means more than raw performance or value. One must also dominate the details. It has taken AMD a long time to iron out a large number of these details, including drivers, firmware, chipsets and other matters, to reach real parity with Intel.
The good news is that AMD has, finally, mostly achieved that, and in some ways they are now superior. But that has taken time: far longer than it took AMD to beat Intel at benchmarks.
One thing to remember is that the enterprise space is very conservative: AMD needed to have server-grade CPUs for all of the security and management features on the market long enough for the vendors to certify them, promise support periods, etc. and they need to get the enterprise software vendors to commit as well.
The public clouds help a lot here by trivializing testing and locking in enough volume to get all of the basic stuff supported, and I think that's why AMD has been more successful now than they were in the Opteron era.
Server companies have long-term agreements in place; waiting for those to expire before moving to AMD is not unexpected. This was the final outcome expected by many.
Intel did an amazing job of holding on to what they had: from enterprise sales connections, which AMD had very few of from 2017 to 2020, to bundling other items (essentially a discount without lowering the price), and finally some heavy discounts.
On the other hand AMD has been very conservative with their EPYC sales and forecast.
Servers are used for a long time and then Dell/HP/Lenovo/Supermicro has to deliver them and then customers have to buy them. This is a space with very long lead times. Not surprising.
The first 2 gens of Epyc didn't sell that much compared to Intel because companies didn't want to make huge bets on AMD until there was more confidence that they would stick around near the top for a while. Also, server upgrade cycles are lengthening (probably more like 5-7 years now) since CPUs aren't gaining per-core performance as quickly.
Complicated. Performance per watt was better for Intel, which matters way more when you're running a large fleet. Doesn't matter so much for workstations or gamers, where all that matters is performance. Also, certification, enterprise management story, etc was not there.
Maybe recent EPYC has caught up? I haven't been following too closely since it hasn't mattered to me. But both companies' numbers were suggesting AMD would pass Intel by.
Not surprising at all though, anyone who's been following roadmaps knew it was only a matter of time. AMD is /hungry/.
You're thinking strictly about core performance per watt. Intel has been offering a number of accelerators and other features that make perf/watt look a lot better when you can take advantage of them.
AMD is still going to win a lot of the time, but Intel is better than it seems.
That is true, but the accelerators are disabled in all cheap SKUs and they are enabled only in very expensive Xeons.
For most users it is like the accelerators do not exist, even if they increase the area and the cost of all Intel Xeon CPUs.
This market segmentation policy is exactly as stupid as the removal of AVX-512 from the Intel consumer CPUs.
All users hate market segmentation, and it is an important reason for preferring AMD CPUs, which are differentiated only on quantitative features, like number of cores, clock frequency, or cache size, and not on qualitative features, like the Intel CPUs, for which you must deploy different program variants depending on the cost of the CPU, which may or may not provide the features required for running the program.
The Intel marketing has always hoped that by showing nice features available only in expensive SKUs they would trick the customers into spending more for the top models. However, any wise customer has preferred to buy from the competition instead of choosing between cheap crippled SKUs and complete but too expensive SKUs.
I think Intel made a strategic mistake in recent years by segmenting its ISA variants. E.g., the many flavors of AVX-512.
Developers can barely be bothered to recompile their code for different ISA variants, let alone optimize it for each one.
So often we just build for 1-2 of the most common, baseline versions of an ISA.
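(One mitigation, sketched below under the assumption of GCC or Clang on x86-64: function multi-versioning, where the compiler builds several clones of one function and a resolver picks the best one at load time. The function name is made up.)

    /* Build-time multi-versioning sketch (GCC/Clang, x86-64). The
     * compiler emits one clone of saxpy per listed target plus a
     * resolver that picks the best clone for the running CPU. */
    #include <stddef.h>
    #include <stdio.h>

    __attribute__((target_clones("default", "sse4.2", "avx2")))
    static void saxpy(float a, const float *x, float *y, size_t n) {
        for (size_t i = 0; i < n; i++)
            y[i] += a * x[i];
    }

    int main(void) {
        float x[4] = {1, 2, 3, 4}, y[4] = {0};
        saxpy(2.0f, x, y, 4);
        printf("%f\n", y[3]);  /* prints 8.0 */
        return 0;
    }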
Probably doesn't help that (IIRC) ELF executables for the x86-64 System V ABI have no way to indicate precisely which ISA variants they support. So it's not easy at program-loading time to notice if you're going to have a problem with unsupported instructions.
(It's also a good argument for using open source software: you can compile it for your specific hardware target if you want to.)
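(And even without loader support, code can probe the CPU at startup instead of crashing with SIGILL later. A minimal sketch, assuming GCC/Clang builtins:)

    /* Run-time ISA probing sketch (GCC/Clang builtins): check the CPU
     * before dispatching into an AVX-512 path. */
    #include <stdio.h>

    int main(void) {
        __builtin_cpu_init();  /* populate feature data; call this first */
        if (__builtin_cpu_supports("avx512f"))
            puts("AVX-512F present: safe to take the wide path");
        else
            puts("AVX-512F absent: stick to the baseline build");
        return 0;
    }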
Wise customers buy the thing that runs their workload with the lowest TCO, and for big customers on some specific workloads, Intel has the best TCO.
Market segmentation sucks, but people buying 10,000+ servers do not do it based on which vendor gives them better vibes. People seem to generally be buying a mix of vendors based on what they are good at.
Intel can offer a low TCO only for the big customers you mention, who buy 10,000+ servers and have the leverage to negotiate big discounts from Intel, buying the CPUs at prices several times lower than their list prices.
On the other hand, for any small business or individual user, who has no choice but to buy at the list prices or more, the TCO for the Intel server CPUs has become unacceptably bad. Before 2017, up through the Broadwell Xeons, the TCO for the Intel server CPUs could be very good, even when bought at retail for a single server. Starting with the Skylake Server Xeons, however, the price for the non-crippled Xeon SKUs increased so much that they have no longer been a good choice, except for the very big customers who buy them much cheaper than the official prices.
The fact that Intel must discount their server CPUs so much for the big customers likely explains a good part of their huge financial losses during the last quarters.
Intel does a lot of work developing SDKs to take advantage of its extra CPU features, and works with the open source community to integrate them so they are actually used.
Their acceleration primitives work with many TLS implementations, nginx, and SSH, amongst many others.
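(For the curious, the hookup on the TLS side is typically just OpenSSL's ENGINE API. A rough sketch, assuming Intel's QAT_Engine is installed and registered under the engine id "qatengine"; that id and the surrounding config are assumptions about your setup:)

    /* Hedged sketch: route OpenSSL crypto through a QAT offload engine
     * if one is available, otherwise stay on software crypto. */
    #include <openssl/engine.h>
    #include <stdio.h>

    int main(void) {
        ENGINE_load_builtin_engines();
        ENGINE *e = ENGINE_by_id("qatengine");  /* id assumed; see above */
        if (e == NULL || !ENGINE_init(e)) {
            fprintf(stderr, "QAT engine unavailable; using software crypto\n");
            return 1;
        }
        ENGINE_set_default(e, ENGINE_METHOD_ALL);  /* offload RSA/EC etc. */
        /* ... do TLS work via OpenSSL as usual ... */
        ENGINE_finish(e);
        ENGINE_free(e);
        return 0;
    }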
But those accelerators are also available for AMD platforms - even if how they're provided is a bit different (often on add-in cards instead of a CPU "tile").
And things like the MI300A mean that isn't really a requirement now either.
QAT is an integrated offering by Intel, but there are competing products delivered as add-in cards for most of the things it does, and they have more market presence than QAT. As such, QAT provides much less advantage to Intel than Intel marketing makes it seem. Because yes, Xeon (including QAT) is better than bare Epyc, but Epyc + third-party accelerator beats it handily, especially in cost; the appearance of QAT seems to have spooked the vendors and the prices came down a lot.
I've only used a couple QAT accelerators and I don't know that field much... What relatively-easy-to-use and not-super-expensive accelerators are available around?
Performance per Watt was lost by Intel with the introduction of the original Epyc in 2017. AMD overtook in outright performance with Zen 2 in 2019 and hasn't looked back.
Idk, go look at the Xeon versus AMD equivalent benchmarks. They've been converging, although AMD's datacenter offerings were always a little behind their consumer ones.
This is one of those things where there's a lot of money on the line, and people are willing to do the math.
The fact that it took this long should tell you everything you need to know about the reality of the situation.
AMD has had the power efficiency crown in data center since Rome, released in 2019. And their data center CPUs became the best in the world years before they beat Intel in client.
The people who care deeply about power efficiency could and did do the math, and went with AMD. It is notable that they sell much better to the hyperscalers than they sell to small and medium businesses.
> idk go look at the xeon versus amd equivalent benchmarks.
They all show AMD with a strong lead in power efficiency for the past 5 years.
I know what the benchmarks are like; I wish that you would go and update your knowledge. If we take cloud as a comparison, it's cheaper to use AMD. Think they're doing some math?
I think for _most_ people it comes down to this: how much can I cram into the platform? More lanes means more high-speed storage, special-purpose processing, and networking interfaces.
VMware users are starting to say that Epyc is too powerful for one server because they don't want to lose too much capacity due to a single server failure. Tangentially related, network switch ASICs also have too much capacity for a single rack.
Interpretation notes: this is the first time in the era during which said companies have broken out "datacenter" as a reporting category. The last time AMD was clearly on top in terms of product quality, they reported 2006 revenue of $5.3 billion for microprocessors while Intel reported $9.2 billion in the same category. In those years the companies incompletely or inconsistently reported separate sales for "server" or "enterprise".
Sorry, you can have a cheap-ish FPGA that came out 10 years ago, or a new FPGA that costs more than your car and requires a $3000 software license to even program. Those are the only options allowed.
The COP is AMD/Xilinx. I have no idea what the Agilex 3 and 5 costs are; I'm not an Altera user. I will note, though, having used Lattice, Microchip, and (admittedly at the start of Titanium) Efinix, that none of the tools come close to Vivado/Vitis. I'm on Lattice at the moment and I've lost countless hours to the tools not working or working poorly on Linux relative to Xilinx. Hobbyist me doesn't care; I'll sink the hours in. Employee me does care, though.
Please, don't talk about how well AMD is doing! You'll only make the stock price slide another 10%, as night follows day... [irrational market grumbling intensifies]
The market can hardly be called irrational on this. AMD's market value pretty much already priced in that they would take over Intel's place in the datacenter, their valuation is more than double Intel's with a PE of 125, despite them being fabless and ARM gaining ground in the server space. That's why you are seeing big swings in prices, because anything short of "we are bankrupting Intel and fighting Nvidia in the AI accelerator space" is seen as a loss.
That's not how it works. You need to pump money into fabs to get them working, and Intel doesn't have money. If AMD had fabs burning through their money, they would also have a much lower valuation.
The market is completely irrational on AMD. Their 52-week high is ~$225 and 52-week low is ~$90. $225 was hit when AMD was guiding ~$3.5B in datacenter GPU revenue. Now they're guiding to end the year at $5B+ datacenter GPU revenue, but the stock is ~$140?
I think it's because of how early Nvidia announced Blackwell (it isn't any meaningful volume yet), and the market thinks AMD needs to compete with GB200 while they're actually competing with H200 this quarter. And for whatever reason the market thinks that AMD will get zero AI growth next year? I don't know how to explain the stock price.
Anyway, they hit record quarterly revenue this Q3 and are guiding to beat this record by ~1B next quarter. Price might move a lot based on how AMD guides for Q1 2025.
Being fabless does have an impact because it caps AMD's margins and makes x86 their only moat. They can only extract value if they remain competitive on price. Sure that does not impact Nvidia, but they get to have fat margins because they have virtually no competition.
> The market is completely irrational on AMD. Their 52-week high is ~$225 and 52-week low is ~$90.
That's volatility, not irrationality. As I wrote, AMD's valuation is built on the basis that they will keep executing in the DC space, Intel will keep shitting the bed, and their MI series will eventually be competitive with Nvidia. These facts make investors skittish, and any news about AMD causes the stock to move.
> the market thinks AMD needs to compete with GB200 while they're actually competing with H200 this quarter. And for whatever reason the market thinks that AMD will get zero AI growth next year?
The only hyperscaler that picked up MI300X is Azure, and they GA'ed it 2 weeks ago; both GCP and AWS are holding off. The uncertainty on when (if) it will catch on is a factor, but the growing competition from those same hyperscalers building their own chips means that the opportunity window could be closing.
It's ok to be bullish on AMD the same way that I am bearish on it, but I would maintain that the swings have nothing to do with irrationality.
I am sure AMD has been delivering more value for even longer. I bet the currently deployed AMD exaflops are significantly higher than Intel's. It was a huge consideration for me when shopping between the two: as much as 50% more compute per dollar.
If Nvidia releases a good server CPU, they can eat into both Intel and AMD profits. Maybe it's not as lucrative as selling GPUs but having a good portion of the market may pay bigger dividends in the future.
If I were AMD CEO I would make the top priority to have a software stack on par with CUDA so that AMD GPUs have a chance in the data centers.
Except for some very, VERY specific use cases (many of which are now irrelevant due to Optane's death), it is professionally negligent to recommend Intel in the datacenter.
So AMD is first in datacenters and grows AI-related chips quarter over quarter. And with the PS5 Pro launching, hopefully their custom graphics chip sales will grow again. Looks like a solid buy to me at the moment.