Intel Problems (stratechery.com)
569 points by camillovisini 47 days ago | 426 comments



I think one day we’re going to wake up and discover that AWS mostly runs on Graviton (ARM) and not x86. And on that day Intel’s troubles will go from future to present.

My standing theory is that the M1 will accelerate it. Obviously all the fully managed AWS services (Dynamo, Kinesis, S3, etc.) can change over silently, but the issue is EC2. I have a MBP, as do all of my engineers. Within a few years all of these machines will age out and be replaced with M1-powered machines. At that point the idea of developing on ARM and deploying on x86 will be unpleasant, especially since Graviton 2 is already cheaper per compute unit than x86 is for some workloads; imagine what Graviton 3 & 4 will offer.


> I have a MBP, as do all of my engineers. Within a few years all of these machines will age out and be replaced with M1-powered machines. At that point the idea of developing on ARM and deploying on x86 will be unpleasant

Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops? Agreed that developing on ARM and deploying on x86 is unpleasant, but so too is developing on macOS and deploying on Linux. Apple’s GNU userland is pretty ancient, and while the BSD parts are at least updated, they are also very austere. Given that friction is already there, is it likelier that folks will try to alleviate it with macOS in the cloud or GNU/Linux locally?

Mac OS X was a godsend in 2001: it put a great Unix underneath a fine UI atop good hardware. It dragged an awful lot of folks three-quarters of the way to a free system. But frankly I believe Apple have lost ground UI-wise over the intervening decades, while free alternatives have gained it (they are still not at parity, granted). Meanwhile, the negatives of using a proprietary OS are worse, not better.


> Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops?

Has Linux desktop share been increasing lately? I'm not sure why a newer Mac with better CPU options is going to result in increasing Linux share. If anything, it's likely to be neutral or favor the Mac with its newer/faster CPU.

> But frankly I believe Apple have lost ground UI-wise over the intervening decades, while free alternatives have gained it (they are still not at parity, granted).

Maybe? I'm not as sold on Linux gaining a ton of ground here. I'm also not sold on the idea that the Mac as a whole is worse off interface wise than it was 10 years ago. While there are some issues, there are also places where it's significantly improved as well. Particularly if you have an iPhone and use Apple's other services.


As much as I would like it to happen, I think it's unlikely Linux will be taking any market share away from Macs. That said, I could imagine it happening a couple ways. The first being an increasingly iPhonified and restricted Mac OS that some devs get fed up with.

The second would be Apple pushing all MacBooks to M1 too soon, breaking certain tools and workflows.

While I think both of those scenarios could easily happen, most devs will probably decide to just put up with the extra trouble rather than switch to Linux.


> The second would be Apple pushing all MacBooks to M1 too soon, breaking certain tools and workflows.

I don't think this is an issue. Apple has been fairly straightforward about timelines, and people who want to stick with Intel have plenty of time and options to get their stuff in a row in advance. More important, Apple fixed most of the major irritations with the MacBook Pro. If they hadn't launched the 16" MBP last year, people would have been stuck with a fairly old/compromised design.

I suspect Apple is going to maintain MacOS right where it is now. People have been worried about "iOSification" for nearly a decade now and while there have been some elements, the core functionality is fundamentally the same.


Big Sur has me more worried about iOS-ification than ever before. The UI is a train wreck. It looks designed for touch and I have no idea why. I guess Sidecar?

They changed a ton about the UI in Big Sur, none of it for the better as far as I can tell. They took away even more choice, I can't have a "classic" desktop experience.

My biggest frustration is that they were willing to make such drastic changes to the desktop UI at all. I have to re-learn a ton of keyboard navigation now. And there doesn't seem to be a coherent design to the workflows.

Such a drastic change seems to be an admission that they thought the previous design language was wrong but they seem to have replaced it with... no vision at all?

I am hopeful for the next generation of M1 MacBook Pros and whatever the next MacOS is. Hopefully they get their design philosophy straight and stick with it.


I quite like the Big Sur UI and definitely don't consider it a train wreck. I've been a Mac user since the PowerBook G4, and the M1 MacBook Air with Big Sur is the best "desktop" computer I've ever owned. I have it connected to a beautiful 32" display, it's fast, silent and I find the UI very usable

I see a strong vision in Big Sur, one that pulls macOS visually into line with iOS, but there are a lot of rough edges right now. Especially with some Catalyst apps bringing iOS paradigms onto Mac. Even Apple's Catalyst apps (News in particular) are just gross, can't even change the font size with a keyboard shortcut


Visually it is a dumpster fire. Window borders are inconsistently sized, the close/minimize/full screen buttons aren’t consistently placed.

There’s an enormous amount of wasted space. You need that 32” monitor. My 16” RMBP now has as much usable space as my 13”.

Keyboard navigation is bizarre. Try this:

1) Open Mail.

2) Cmd W.

3) Cmd 1 (or is it Cmd 0?!, hint: it’s whatever Messages isn’t!)

4) Without using your mouse select a message in your inbox.

5) Without using your mouse navigate to a folder under your inbox.

6) Without using your mouse navigate to an inbox in another account.

All of this is possible in Catalina and earlier with zero confusion (selections are highlighted clearly) and can be done “backwards” using shift. In Big Sur some of it is actually impossible and you have to just guess where you start.

When native apps aren’t even consistent in their behavior and appearance that is a trainwreck.

You may be able to read the tea leaves and see a grand vision here but I have to use a half-baked desktop environment until Not-Jony Ive is satisfied they have reminded me they are not Jony Ive. To them I say, trust me, I noticed.


I don't have a problem with using the smaller MacBook Air 13" screen for development (I upgraded from a 15" MBP)

I'm not a huge Mail user so I'm not up-to-speed on keyboard shortcuts. But I was able to navigate with the keyboard easily — I hit tab to focus on the correct list then use the up/down arrows to select the inbox or message (your 4/5/6). After hitting Cmd+W both Cmd+0 or Cmd+1 bring the Mail window back for me. And Cmd+Shift+D still sends mail which is the main one I use

I am a huge Xcode user and primarily use the keyboard for code navigation, and that is as good as ever on the 13" MacBook Air in Big Sur. Also use Sketch a lot, and that has been just great too

I guess we have a very different perception of Big Sur but mine is generally favourable, and I don't see the wasted space that you see. There have been a few weekends where I have done all my work on the 13" MBA, which is only just now possible due to battery life, and the experience has been really, really nice


Yes but those shortcuts are inconsistent. Cmd 1 does not bring back Messages for example. I have to press tab several times in Mail to figure out where I am in Big Sur where in Catalina the selection is always highlighted. You can’t use the keyboard to change mailboxes in Big Sur as far as I can tell. There’s no clear “language” to the shortcuts.

You’d have more usable space with Catalina on that 13” screen. It was a noticeable loss of space upgrading from Catalina to Big Sur on a 16” RMBP. I used to have space for three partially overlaid windows on my 16” screen. Now I am lucky to get two. Usable space on a 16” Big Sur MacBook is similar to a 13” Catalina MacBook. I have both. My workflows changed. There is no benefit to me as a user.

Take a look at this visual comparison: https://www.andrewdenty.com/blog/2020/07/01/a-visual-compari...

Look at the “traffic light” buttons. Note how much thicker the top bar is in Big Sur. It’s 50% taller! That’s a lost row of text.


Messages and Mail both use Cmd+0 to bring the message viewer to the front. That's what the shortcut is listed as in the Window menu. Same with Calendar if you close the main calendar window

Messages does not have great keyboard navigation — I can't tab around like I can in Mail. I am putting this down to the fact it is a Catalyst app and they are a bit sloppy with consistency (not necessarily a Big Sur thing, as these were on Catalina)


You’re right about Messages vs Mail.

But in Catalina Cmd-1 selects the inbox and full sidebar navigation including between accounts is possible with arrow keys. Nested lists can expand and collapse with left and right.

In Big Sur Cmd-1 just reopens the window with no selection. In addition you cannot navigate the full sidebar with arrow keys.

Combine this with the lack of visual indication of your selection and keyboard navigation becomes a struggle.

I have not found a UI improvement in Big Sur.


> Such a drastic change seems to be an admission that they thought the previous design language was wrong but they seem to have replaced it with... no vision at all?

Sure, but Jony Ive just officially left. He was head of all design (when he should have been head of hardware design only). It’s natural that there would be greater-than-usual changes as someone new took over.

Big Sur has some pretty terrible changes. There’s nothing surprising that changes occurred. The only surprise is how bad they are.


Yeah exactly. I understand what is going on here. I’m just frustrated I have to suffer through it for the vanity of Not-Jony-Ive.


The frustrating thing for me is the whole idea of rebooting things visually when it's not based on productivity or additional features. So much of this is just a visual reboot.

Overall though when I hear about the "iOSification" I worry foremost about locking down the OS which doesn't seem to be a big issue.

My personal computer is on Big Sur, but I've kept my work laptop on Catalina so a lot of this doesn't hit me work wise. It doesn't seem too horrible to me when working on private projects but that's a small percentage of my time.


> My biggest frustration is that they were willing to make such drastic changes to the desktop UI at all.

I feel you there; one of the reasons I didn't like Windows is big, seemingly random UI changes. I don't think Big Sur is Windows NT -> Windows Vista crazy, it just feels like a big change for MacOS, which has been relatively stable for a few years.

Seems like Apple tends to go big when they make sweeping UI changes, then in the following releases they dial things back or work out the glitches. It's frustrating, for certain.


> Has Linux desktop share been increasing lately?

At least I feel I see a lot more Linux now, not just in the company I work for but also elsewhere.

The speed advantage over Windows is so huge that it is painful to go back once you've seen it.

MS Office isn't seen as a requirement anymore, nobody thinks it is funny if you use GSuite, and besides, the last time I used an office file for work was months ago: everything exists in Slack, Teams, Confluence or Jira, and these are about equally bad on all platforms.

The same is true (except the awful part) for C# development: it is probably even better on Linux.

People could switch to Mac and I guess many will. For others, like me, Mac just doesn't work, and for us Linux is an almost obvious choice.


You have to be clear that you're talking about developers here.

For the rest of the company, MS Office may very well be essential. In particular, Excel is king for many business functions.


At the company I work for (400 people in Norway, offices in Sweden, Denmark and Romania as well) I don't know anyone who has a MS Office license.

There is a procedure to get one, but so far I don't know anyone who used it. Quite the contrary: I know one of our sales guys who used to install and run Linux on his laptop.

Yes, I work at an above-average technical company, but we do have HR, sales, etc., and they aren't engineers (at least not most of them).

(email in my profile ;-)


> At least I feel I see a lot more Linux now, not just in the company I work for but also elsewhere.

I see it far more now than I did ~5-10 years ago when it was my daily driver. I'm just not sure if it's reached a baseline of support and flatlined, or if it's growing consistently now.

Fully agree with almost all of your points, and if MacOS did go off the deep end in terms of functionality, I'd be back on Linux. It's why I'm a big fan of what Marcan is doing and follow it closely.

If Linux support had been as good back when I switched as it is now, I'd likely have never switched to the Mac.

But now I just stick around for the hardware.


> Has Linux desktop share been increasing lately?

I run on Mac for desktop, Linux for projects. I don’t know that more people have been switching than before, but I thought the author of this[0] piece illustrated well that it’s easier than ever. That said, the author states that they opt not to have a phone. That’s much less lock-in than most of the market.

[0]https://orta.io/on/migrating/from/apple


Fully agree, seems like a good chunk of developer focused software is cross platform at this point.


I develop on GNU/Linux begrudgingly. It has all of my tools, but I have a never-ending stream of issues with WiFi, display, audio, etc. As far as I'm concerned, GNU/Linux is something that's meant to be used headless and ssh'd into.


What distro do you use? I switched from Mac to Pop, and it’s great.

I had already decided that 2021 would be my year of the Linux desktop, but Apple forced my hand a bit early. My 2019 Mac’s WiFi went. Had to lose my primary dev machine for over a week as it shipped out to have a bunch of hardware replaced.

So, I built a PC with parts that have good Linux support. I think that’s the key. I imagine System76 machines would run smoothly. It’s definitely not as smooth a UX as Mac, but I like being in an open ecosystem on a machine I can repair. And it’s had a number of perks, such as Docker running efficiently for once.

Edit: I can now trivially repair my computer if it breaks. The entire rig cost about 1/4 what my MacBook cost, and it is much faster.


For what it's worth, I've been thinking about going to a Mac M1 laptop for email/HR plus a small-form-factor Ubuntu box for development (e.g. Intel NUC, Asus PN-50) that I can ssh into and run headless locally.

With a hybrid schedule of 1-2 days in the office, and the fact that the NUC form factor has gotten smaller and higher-performance, I could make a case to actually stick it in my laptop bag for those 1-2 days and do email/surfing only on my train commute. I was never really that productive working on the train anyway.


Try a Raspberry Pi. It just works.


Except for when the wifi fails or your SD card that you boot off of gets corrupted or you try to do something that requires a bit more power than 1.2GHz and 1GB RAM can handle.


Raspberry Pi 4 has up to 8GB of RAM and a 1.5GHz CPU, but your point still stands. Even with those specs it won't provide a fully smooth desktop experience.


> Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops?

And I personally hope that by then, GNU/Linux will have an M1-like processor available to happily run on. The possibilities demonstrated by this chip (performance+silence+battery) are so compelling that it's inevitable we'll see them in non-Apple designs.

Also, as usually happens with Apple hardware advancements, the Linux experience will gradually get better on M1 MacBooks as well.


I think we can look to mobile to see how feasible this might be: consistently over the past decade, iPhones have matched or exceeded Android performance with noticeably smaller capacity batteries. A-series chips and Qualcomm chips are both ARM. Apple's tight integration comes with a cost when it comes to flexibility, and, you can argue, developer experience, but it's clearly not just the silicon itself that leads to the performance we're seeing in the M1 Macs.


I think there are serious concerns about Qualcomm's commitment to competitive performance instead of just being a patent troll. I think if AWS Graviton is followed by Microsoft[0] and Google[1] also having their own custom ARM chips it will force Qualcomm to either innovate or die. And will make the ARM landscape quite competitive. M1 has shown what's possible. MS and Google (and Amazon) certainly have the $$ to match what Apple is doing.

[0] https://www.datacenterdynamics.com/en/news/microsoft-reporte...

[1] https://www.theverge.com/2020/4/14/21221062/google-processor...


That's why they acquired Nuvia.


I wonder to what extent that's a consequence of Apple embracing reference counting (Swift/Objective-C with ARC) while Google is stuck on GC (Java)?

I'm a huge fan of OCaml, Java and Python (RC but with cyclic garbage collection), and RC very likely incurs more developer headache and more bugs, but at the end of the day, that's just a question of upfront investment, and in the long run it seems to pay off - it's pretty hard for me to deny that pretty much all GC software is slow (or single-threaded).
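To make that Python parenthetical concrete, here's a minimal sketch (Node and make_cycle are just toy names for illustration) of what "RC but with cyclic garbage collection" means in CPython: plain reference counting frees most objects immediately, but a reference cycle keeps the counts above zero, so the separate cycle collector in the gc module has to clean it up.

    import gc
    import weakref

    class Node:
        pass

    def make_cycle():
        a, b = Node(), Node()
        a.partner, b.partner = b, a    # a and b now reference each other
        return weakref.ref(a)          # weak ref lets us observe collection

    probe = make_cycle()               # both locals went out of scope on return
    print(probe() is not None)         # True: refcounting alone can't free the cycle
    gc.collect()                       # the cyclic collector finds and frees it
    print(probe() is None)             # True: the cycle is gone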


Java can be slow for many complex reasons, not just GC. Oracle are trying to address some of this with major proposals such as stack-allocated value types, sealed classes, vector intrinsics etc, but these are potentially years away and will likely never arrive for Android. However, a lot of Android's slowness is not due to Java but rather just bad/legacy architectural decisions. iOS is simply better engineered than Android and I say this as an Android user.


Not to mention it took Android about a decade longer than the iPhone to finally get their animations silky smooth. I don't know if the occasional hung frames were the result of GC, but I suspect it.


Approximately zero MacBooks will be replaced by Linux laptops in the next couple years. There is no new story in the Linux desktop world to make a Linux Laptop more appealing. That people already selected to develop on MacOS and deploy to Linux tells you all you need to know there.

MacPorts and Homebrew exist. Both support M1 more or less and support is improving.

Big Sur is a Big Disaster but hopefully this is just the MacOS version of iOS 13 and the next MacOS next year goes back to being mostly functional. I have more faith in that than a serviceable Linux desktop environment.


> Big Sur is a Big Disaster

I may be naive but that seems like hyperbole. I updated to Big Sur shortly after it was released. It was a little bumpy here and there, but not more than any other update. I'd even argue it has been as smooth or smoother than any of my Linux major upgrades in the past 20 years.


I haven't updated but my impression has been that this is a "tock" release, as in "yeah we shipped some big UI changes but we'll refine them, very little of substance changed in the underlying systems". And that's great, and I look forward to being a medium-early adopter before the worst of the UI gets a big refresh next year.


Yeah that’s a great description.

I upgraded my personal laptop and it’s so bad I’m holding off on my work laptop until I am forced to upgrade.


It is mostly functional but visually a mess. “Better than Linux upgrades” is a pretty low bar.


This is why you run your environment in Docker on both Linux and macOS, so you don't have these screwy deployment issues caused by macOS vs Linux differences.


Developing in macOS against Docker is beyond painful. 10-100x build times is not a reasonable cost for platform compat.


With some configuration you can map host folders into the container with Docker. You can just build on MacOS and restart the app server in the container. I was using NodeJS though, so I wasn't dealing with libc and syscall incompatibilities and such.


Docker on macOS is a second-class citizen, because it runs in a VM. The networking is much more complicated because of this, which causes endless amounts of hard-to-debug problems, and performance is terrible.


You can easily bring macOS up to Linux level GNU with brew.

I agree generally though. I see macOS as an important Unix OS for the next decade.


"Linux" is more than coreutils. The Mac kernel is no where close to Linux in capability and Apple hates 3rd party drivers to boot. You'll end up running a half-baked Linux VM anyway so all macOS gets you is a SSH client with a nice desktop environment, which you can find anywhere really.


> all macOS gets you is a SSH client with a nice desktop environment

Also proprietary software. Unfortunately, many people still need Adobe.

I personally like Krita, Shotcut, and Darktable better than any of the Adobe products I used to use, but it's a real issue.

E: Add "many people"


The micro-kernel design on macOS has benefits over Linux's monolithic kernel.

You also get POSIX compliance.


macOS doesn't have a microkernel, but it does have userland drivers and it's pretty good at being macOS/iOS. Linux's oom-killer doesn't work nearly as well as jetsam.


macOS has a hybrid kernel with a decent portion being microkernel components, I thought?


Mach started as a microkernel, but when they jammed Mach and BSD together they put them in the same process, so it's not really separated anymore.

Recently there are some hypervisor-like things for security, and more things have been moving to userland instead of being kexts. I'd still say it's less of a microkernel than Linux since it doesn't have FUSE.


> bring macOS up to Linux level GNU with brew.

Ugh, not even.

Simple and straightforward shell scripts that work on decade-old Linux distributions don't work on Big Sur due to ancient macOS BSD tooling.

Just look at stackoverflow or similar sites: every answer has a comment "but this doesn't work on my macOS box."

(I've had to add countless if-posix-but-mac workarounds for PhotoStructure).


> Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops?

Sadly, fewer of my coworkers use Linux now than they did 10 years ago.


> GNU/Linux laptops

Could we do a roll call of experiences so I know which ones work and which ones don't? Here are mine.

    Dell Precision M6800: Avoid.
        Supported Ubuntu: so ancient that Firefox
        and Chrome wouldn't install without source-building
        dependencies.
        Ubuntu 18.04: installed but resulted in the
        display backlight flickering on/off at 30Hz.

    Dell Precision 7200:
        Supported Ubuntu: didn't even bother.
        Ubuntu 18.04: installer silently chokes on the NVMe
        drive.
        Ubuntu 20.04: just works.


Historically, Thinkpads have had excellent support. My T430S is great (although definitely aging out), and apparently the new X1 Carbons still work well. Also, both Dell and Lenovo have models that come with Linux if desired, so those are probably good ones to look at.


I'll have to look into modern ThinkPads. I had a bad experience ~10 years ago, but it wouldn't be fair to bring that forward.

> both Dell and Lenovo have models that come with Linux

Like the Dell Precision M6800 above? Yeah. Mixed bag.


Have used Linux on ThinkPads since the 90s.

Rules of thumb:
- Older ThinkPads off business lease are great
- Stick to Intel integrated graphics
- Max out the RAM and upgrade storage (NVMe)


Most companies wouldn't end up trying to shove a disk into a computer though, they would buy from a vendor with support and never have compatibility issues. I have owned 3 System76 computers for this reason...


> they would buy from a vendor with support

Like the Dell Precision 6800 above? The one where the latest supported linux was so decrepit that it wouldn't install Firefox and Chrome without manually building newer versions of some of the dependencies?

"System76 is better at this than Dell" is valid feedback, but System76 doesn't have the enterprise recognition to be a choice you can't be fired for.

Maybe ThinkPads hit the sweet spot. I'll have to look at their newer offerings.


> Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops?

Some definitely will. Enough of them to be significant? Probably not. Even the most Vim- and CLI-oriented devs I know still prefer a familiar GUI for normal day-to-day work. Are they all going Ubuntu? Or Elementary? I mean, I welcome any migration that doesn't fracture the universe. But I don't think it's likely.


There is literally no chance of that. IT would find this an intolerable burden for them to manage, and I doubt the devs would like it either. Most of them seem pretty enthused to get their hands on an M1.

I’ve known colleagues that tried to run Linux professionally using well reviewed Linux laptops, and their experience has been universally awful. Like “I never managed to get the wifi to work, ever” bad. The idea of gambling every developer on that is a non-starter even at my level, let alone across the org.


FWIW, I have been running Linux as my desktop since 1999, and on a laptop since the mid-2000s. It is doable, and I no longer have the problems which used to be common. Once upon a time Ethernet, WiFi & graphics were screwy, but not for a long time now.


Building server software on Graviton ARM creates a vendor lock-in to Amazon, with very high costs of switching elsewhere. Despite using the A64 ISA and ARM’s cores, they are Amazon’s proprietary chips no one else has access to. Migrating elsewhere is gonna be very expensive.

I wouldn’t be surprised if they subsidize their Graviton offering, taking profits elsewhere. This might make it seem like a good deal for customers, but I don’t think it is, at least not in the long run.

This doesn’t mean Graviton is useless. For services running Amazon’s code as opposed to customers’ code (like these PaaS things billed per transaction) the lock-in is already in place; custom processors aren’t gonna make it any worse.


I'm not necessarily disagreeing with you, but... maybe elaborating in a contrary manner?

Graviton ARM is certainly vendor lock-in to Amazon. But a Graviton ARM is just a bog-standard Neoverse N1 core. Which means the core is going to show similar characteristics as the Ampere Altra (also a bog-standard Neoverse N1 core).

There's more to a chip than its core. But... from a performance-portability and ISA perspective... you'd expect performance-portability between Graviton ARM and Ampere Altra.

Now Ampere Altra is like 2x80 cores, while Graviton ARM is... a bunch of different configurations. So it's still not perfect compatibility. But a single-threaded program probably couldn't tell the difference between the two platforms.

I'd expect that migrating between Graviton and Ampere Altra is going to be easier than Intel Skylake -> AMD Zen.


> you'd expect performance-portability between Graviton ARM and Ampere Altra

I agree, that’s what I would expect too. Still, are there many public clouds built on these Ampere Altras? Maybe we’ll have them widespread soon, but until then I wouldn’t want to build stuff that only runs on Amazon or my own servers, with only a few on the market and not yet globally available at retail.

Also, AFAIK on ARM the parts where CPUs integrate with the rest of the hardware are custom. The things that matter for servers, disk and network I/O, differ across ARM chips of the same ISA. The Linux kernel abstracts it away, i.e. stuff is likely to work, but I’m not so sure about performance portability.


> Also, AFAIK on ARM the parts where CPUs integrate with the rest of the hardware are custom. The things that matter for servers, disk and network I/O, differ across ARM chips of the same ISA. The Linux kernel abstracts it away, i.e. stuff is likely to work, but I’m not so sure about performance portability.

Indeed. But Intel Xeon + Intel Ethernet integrates tightly and drops the Ethernet data directly into L3 cache (bypassing DRAM entirely).

As such, I/O performance portability between x86 servers (in particular: Intel Xeon vs AMD EPYC) suffers from similar I/O issues. Even if you have AMD EPYC + Intel Ethernet, you lose the direct-to-L3 DMA, and will have slightly weaker performance characteristics compared to Intel Xeon + Intel Ethernet.

Or Intel Xeon + Optane optimizations, which also do not exist on AMD EPYC + Optane. So these I/O performance differences between platforms are already the status quo, and should be expected if you're migrating between platforms. A degree of testing and tuning is always needed when changing platforms.

--------

>Still, are there many public clouds built on these Ampere Altras? Maybe we’ll have them widespread soon, but until then I wouldn’t want to build stuff that only runs on Amazon or my own servers, with only a few on the market and not yet globally available at retail.

A fair point. Still, since Neoverse N1 is a premade core available to purchase from ARM, many different companies have the ability to buy it for themselves.

Current rumors look like Microsoft/Oracle are just planning to use Ampere Altra. But like all other standard ARM cores, any company can buy the N1 design and make their own chip.


> > Also, AFAIK on ARM the parts where CPUs integrate with the rest of the hardware are custom. The things that matter for servers, disk and network I/O, differ across ARM chips of the same ISA. The Linux kernel abstracts it away, i.e. stuff is likely to work, but I’m not so sure about performance portability.

> Indeed. But Intel Xeon + Intel Ethernet integrates tightly and drops the Ethernet data directly into L3 cache (bypassing DRAM entirely).

This will be less of a problem on ARM servers as direct access to the LLC from a hardware master is a standard feature of ARM's "Dynamic Shared Unit" or DSU, which is the shared part of a cluster providing the LLC and coherency support. Connect a hardware function to the DSU ACP (accelerator coherency port) and the hardware can control, for all write accesses, whether to "stash" data into the LLC or even the L2 or L1 of a specific core. The hardware can also control allocate on miss vs not. So any high performance IP can benefit from it.

And if I understand correctly, the DSU is required with modern ARM cores. As most vendors (besides Apple) tend to use ARM's cores now, you have this in the package.

More details here in the DSU tech manual: https://developer.arm.com/documentation/100453/0002/function...


> I'd expect that migrating between Graviton and Ampere Altra is going to be easier than Intel Skylake -> AMD Zen.

Could you explain what the migration problems are between Skylake and Zen, beyond AVX-512?


64-bit Ubuntu looks the same on Graviton as on a Raspberry Pi. You can take a binary you've compiled on the RPi, scp it to the Graviton instance and it will just run. That works the other way round too, which is great for speedy Pi software builds without having to set up a cross-compile environment.


Yep, just AArch64. You can probably use QEMU too.

Cross compilation is no big deal these days. Embedded devs cross compile to ARM all day every day.

The tooling will be there when it needs to be.


My Java and .NET applications don't care most of the time what hardware they are running on, and many of the other managed languages I use also do not, even when AOT-compiled to native code.

That is the beauty of having properly defined numeric types and a memory model, instead of the approach of C and its derivatives: whatever the CPU gives you, with whatever memory model.


I think OP was talking about managed services, like Lambda, ECS and Beanstalk internal control, or EC2's internal management systems, that is, systems that are transparent to the user.

AWS could very well run their platform systems entirely on Graviton. After all, serverless and cloud are in essence someone else's server. AWS might as well run all their PaaS software on in-house architecture.


While there is vendor lock-in with those services, it also has nothing to do with what CPU you are running. At that layer, the CPU is completely abstracted away.


Maybe I wasn't clear enough. I am talking about code that runs behind the scenes. Management processes, schedulers, server allocation procedures, everything that runs on the aws side of things, transparent for the client.


Maybe I'm missing something, but don't the vast majority of applications simply not care about what architecture they run on?

The main difference for us was lower bills.


> Maybe I'm missing something, but don't the vast majority of applications simply not care about what architecture they run on?

There can be issues with moving to AArch64, for instance your Python code may depend on Python 'wheels' which in turn depend on C libraries that don't play nice with AArch64. I once encountered an issue like this, although I've now forgotten the details.

If your software is pure Java I'd say the odds are pretty good that things will 'just work', but you'd still want to do testing.
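For illustration, here's a rough sketch of the usual coping pattern (the "native_jsonlib" module name is made up, standing in for any wheel that ships a C extension): report what you're running on, and guard the native import with a pure-Python fallback so the code still works on an arch the wheel doesn't cover.

    import platform
    import sys

    # e.g. "Linux aarch64" on Graviton, "Darwin arm64" on an M1 Mac
    print(platform.system(), platform.machine(), "Python", sys.version.split()[0])

    try:
        import native_jsonlib as json_impl   # hypothetical wheel with a C extension
    except ImportError:
        import json as json_impl             # stdlib fallback, works on any arch

    print(json_impl.__name__, json_impl.loads('{"ok": true}'))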


Sure, but you're talking about short term problems. RPi, Graviton, Apple Silicon, etc... are making AArch64 a required mainstream target.


That's true. AArch64 is already perfectly usable, and what issues there are will be ironed out in good time.


Even if the applications don't care, there's still the (Docker) container, which cares very much, and which seems to be the vehicle of choice to package and deliver many cloud-based applications today. Being able to actually run the exact same containers on your dev machine which are going to be running on the servers later is definitely a big plus.


Docker has had multiarch support for a while and most of the containers I’ve looked at support both architectures. That’s not to say this won’t be a concern, but it’s at the level of “check a box in CI” to solve, and between Apple and Amazon there’ll be quite a few users doing that.


Our experience as well. We run a stack that comprises Python, Javascript via Node, Common Lisp and Ruby/Rails. It's been completely transparent to the application code itself.


> they are Amazon’s proprietary chips no one else has access to.

Any ARM licensee (IP or architecture) has access to them. They're just Neoverse N1 cores and can be synthesized on Samsung or TSMC processes.


Really, you could make the argument for any AWS service, and for using a cloud service provider generally. You get into the cloud, use their glue (Lambda, Kinesis, SQS, etc.) and suddenly migrating services somewhere else is a multi-year project.

Do you think that vendor lock-in has stopped people in the past (or will in the future)? Those kinds of considerations are long-term, and many companies think short-term.


Heck, Amazon themselves got locked-in to Oracle for the first 25 years of Amazon's existence. Vendor lock-in for your IT stack doesn't prevent you from becoming a successful business.


True, true (and heh, it was me who pushed for Oracle, oops)

But ... the difference is that Oracle wasn't a platform in the sense that (e.g.) AWS is. Oracle as a corporation could vanish, but as long as you can keep running a compatible OS on compatible hardware, you can keep using Oracle.

If AWS pulls the plug on you, either as an overall customer or ends a particular API/service, what do you do then?


You post a nasty note on Parler!

Oh wait.


Why would it be lock-in? If you can compile for ARM, you can compile for x86.


Memory model, execution units, SIMD instructions...


The vast majority of code running is in Python, JS, the JVM, PHP, Ruby, etc. Far removed from these concerns.


Some of those languages (especially Python & PHP) utilise C-based modules or packaged external binaries, both of which have to be available for / compatible with ARM.

When you run pip or composer on amd64 they often pull these down and you don't notice, but if you try on ARM you quickly discover that some packages don't support it. Sometimes there is a slower fallback option, but often there is none.


Those are pretty minor issues that will be fixed as ARM servers get more popular.


The real question is, can you compile for ARM and move the binary around as easily as you can for x86?

I'm reasonably sure that you can take a binary compiled with GCC on a P4 back in the day and run it on the latest Zen 3 CPU.


As far as I can tell, yes. Docker images compiled for arm64 work fine on the Macs with M1 chips without rebuilding. And as another commenter said, you can compile a binary on a Raspberry Pi 4 and move it to an EC2 Graviton instance and it just works.


It will probably be a similar situation to x86, with various vendors implementing various instructions in some processors that won't be supported by all. I guess the difference is that there may be many more variants than on x86, but performance-critical code can always use runtime dispatch mechanisms to adapt.


It's true that there are extensions to x86, but 99.99% of the software out there (what you'd commonly install on Windows or find in Linux distribution repos) doesn't use those instructions, or maybe just detects the features and then uses them.

I don't recall encountering an "Intel-locked" or "AMD-locked" application in more than 20 years of using x86. OK, maybe ICC, but that one kind of makes sense :-)


Encountering SIGILLs is not super uncommon on heterogeneous academic computer clusters (since -march=native).

But yeah, typically binaries built for redistribution use a reasonably crusty minimum architecture. Reminds me of this discussion for Fedora: https://lists.fedoraproject.org/archives/list/devel@lists.fe...


Audio software usually runs better on Intel than on AMD.


That doesn't mean compilers will emit such instructions; maybe hand-written assembler will become less portable if such code makes use of extensions... but that should be obvious to the authors, and they should probably have a fallback path.


> can you compile for ARM and move the binary around as easily as you can for x86?

Yes.


As I understand it, ARM's new willingness to allow custom op-codes is dependent upon the customer preventing fragmentation of the ARM instruction set.

In theory, your software could run faster, or slower, depending upon Amazon's use of their extensions within their C library, or associated libraries in their software stack.

Maybe the wildest thing that I've heard is Fujitsu not implementing either 32-bit or Thumb on their new supercomputer. Is that a special case?

"But why doesn’t Apple document this and let us use these instructions directly? As mentioned earlier, this is something ARM Ltd. would like to avoid. If custom instructions are widely used it could fragment the ARM ecosystem."

https://medium.com/swlh/apples-m1-secret-coprocessor-6599492...


> Maybe the wildest thing that I've heard is Fujitsu not implementing either 32-bit or Thumb on their new supercomputer. Is that a special case?

What's wild about this? Apple dropped support for 32b (arm and thumb) years ago with A11. Supporting it makes even less sense in an HPC design than it does in a phone CPU.


It's interesting that if you step back and look at what Amazon has been most willing to just blow up and destroy, it is the idea of intellectual property of any kind. It comes out clearly in their business practices. This muscle memory may make it hard for ARM to have a long term stable relationship with a company like ARM.


What do you mean?

Also, I think there's a typo in your last phrase.


> Building server software on Graviton ARM creates a vendor lock-in to Amazon

Amazon already has lock-in. Lambda, SQS, etc. They've already won.

You might be able to steer your org away from this, but Amazon's gravity is strong.


This is kind of what should happen, right? I'm not an expert, but my understanding is that one of the takeaways from the M1's success has been the weaknesses of x86 and CISC in general. It seems as if there is a performance ceiling for x86 due to things like memory ordering requirements and the complexity of legacy instructions, which just don't exist for other instruction sets.

My impression is that we have been living under the cruft of x86 because of inertia and mostly historical reasons, and it's mostly a good thing if we move away from it.


M1's success shows how efficient and advanced the TSMC 5 nm node is. Apple's ability to deliver it with decent software integration also deserves some credit. But I wouldn't interpret it as the death knell for x86.


> weaknesses of x86 and CISC in general

"RISC" and "CISC" distinctions are murky, but modern ARM is really a CISC design these days. ARM is not at all in a "an instruction only does one simple thing, period" mode of operation anymore. It's grown instructions like "FJCVTZS", "AESE", and "SHA256H"

If anything CISC has overwhelmingly and clearly won the debate. RISC is dead & buried, at least in any high-performance product segment (TBD how RISC-V ends up faring here).

It's largely "just" the lack of variable length instructions that helps the M1 fly (M1 under Rosetta 2 runs with the same x86 memory model, after all, and is still quite fast).


Most RISCs would fail the "instruction only does one thing" test. ISTR there were instructions substantially more complex than FJCVTZS in the PowerPC ISA.

I think it's time for a Mashey CISC vs RISC repost:

https://www.yarchive.net/comp/risc_definition.html


RISC vs CISC isn't really about instructions doing "one simple thing period."

It's about increased orthogonality between ALU and memory operations, making it simpler and more predictable in an out-of-order superscalar design to decode instructions, properly track data dependencies, issue them to independent execution units, and to stitch the results back into something that complies with the memory model before committing to memory.

Having a few crazy-ass instructions which either offload to a specialized co-processor or get implemented as specialized microcode for compatibility once you realize that the co-processor is more trouble than it's worth doesn't affect this very much.

What ARM lacks is the huge variety of different instruction formats and addressing modes that Intel has, which substantially affect the size and complexity of the instruction decoder, and I'm willing to bet that creates a significant bottleneck on how large of a dispatch and reorder system they can have.

For a long time, Intel was able to make up this difference with process dominance, clever speculative execution tricks, and throwing a lot of silicon and energy at it which you can do on the server side where power and space are abundant.

But Intel is clearly losing the process dominance edge. Intel ceded the mobile race a long time ago. Power is becoming more important in the data center, which are struggling to keep up with providing reliable power and cooling to increasingly power-hungry machines. And Intel's speculative execution smarts came back to bite them in the big market they were winning in, the cloud, when it turned out that they could cause information leaks between multiple tenants, leading to them needing to disable a lot of them and lose some of their architectural performance edge.

And meanwhile, software has been catching up with the newer multi-threaded world. 10-15 years ago, dominance on single threaded workloads still paid off considerably, because workloads that could take advantage of multiple cores with fine-grained parallelism were fairly rare. But systems and applications have been catching up; the C11/C++11 memory model make it significantly more feasible to write portable lock-free concurrent code. Go, Rust, and Swift bring safer and easier parallelism for application authors, and I'm sure the .net and Java runtimes have seen improvements as well.

These increasingly parallel workloads are likely another reason that the more complex front-ends needed for Intel's instruction set, as well as their stricter memory ordering, are becoming increasingly problematic; it's becoming increasingly hard to fit more cores and threads into the same area, thermal, and power envelopes. Sure, they can do it on big power hungry server processors, but they've been missing out on all of the growth in mobile and embedded processors, which are now starting to scale up into laptops, desktops, and server workloads.

I should also say that I don't think this is the end of the road for Intel and x86. They have clearly had a number of setbacks of the last few years, but they've managed to survive and thrive through a number of issues before, and they have a lot of capital and market share. They have squeezed more life out of the x86 instruction set than I thought possible, and I wouldn't be shocked if they managed to keep doing that; they realized that their Itanium investment was a bust and were able to pivot to x86-64 and dominate there. They are facing a lot of challenges right now, and there's more opportunity than ever for other entrants to upset them, but they also have enough resources and talent that if they focus, they can probably come back and dominate for another few decades. It may be rough for a few years as they try to turn a very large boat, but I think it's possible.


> I'm willing to bet that creates a significant bottleneck on how large of a dispatch and reorder system they can have

My understanding is the reorder buffer of the M1 is particularly large:

"A +-630 deep ROB is an immensely huge out-of-order window for Apple’s new core, as it vastly outclasses any other design in the industry. Intel’s Sunny Cove and Willow Cove cores are the second-most “deep” OOO designs out there with a 352 ROB structure, while AMD’s newest Zen3 core makes due with 256 entries, and recent Arm designs such as the Cortex-X1 feature a 224 structure."

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


> These increasingly parallel workloads are likely another reason that the more complex front-ends needed for Intel's instruction set, as well as their stricter memory ordering, are becoming increasingly problematic; it's becoming increasingly hard to fit more cores and threads into the same area, thermal, and power envelopes. Sure, they can do it on big power hungry server processors, but they've been missing out on all of the growth in mobile and embedded processors, which are now starting to scale up into laptops, desktops, and server workloads.

Except ARM CPUs aren't any more parallel in comparable power envelopes than x86 CPUs are, and x86 doesn't seem to have any issue hitting large CPU core counts, either. Most consumer software doesn't scale worth a damn, though. Particularly ~every web app which can't scale past 2 cores if it can even scale past 1.


Parallelism isn't a good idea when scaling down, nor, often, is concurrency. Going faster is still a good idea on phones (running the CPU at a higher speed uses less battery because it can turn off sooner), but counting background services there is typically less than one core free; there is overhead to threading and async, and your program will go faster if you take most of it out.


Isn't most of M1's performance success due to being a SoC / increasing component locality/bandwidth? I think ARM vs x86 performance on its own isn't a disadvantage. Instead the disadvantages are a bigger competitive landscape (due to licensing and simplicity), growing performance parity, and SoCs arguably being contrary to x86 producers' business models.


ARM instructions are also much easier to decode than x86 instructions, which allowed the M1 designers to have more instruction decoders, and this, IIRC, is one of the important contributors to the M1's high performance.


Umm, Intel laptop chips are SoCs with on-chip graphics, PCIe 4, WiFi, USB4, and Thunderbolt 4 controllers, direct connectivity to many audio codec channels, plus some other functionality for DSP and encryption.


There isn't any performance ceiling issue. The Intel ISA operates at a very slight penalty in terms of achievable performance per watt, but nothing in an absolute sense.

I would argue it isn't time for Intel to switch until we see a little more of the future, as process nodes may shrink at a slower rate. Will we have hundreds of cores? Field-programmable cores? More fixed-function hardware on chip, or less? How will high-bandwidth, high-latency GDDR-style memory mix with lower-latency, lower-bandwidth DDR memory? Will there be on-die memory like HBM for CPUs?



On the flip side that post illustrates just how things can go wrong, too: Windows RT was a flop.


Precisely for the reasons he gave though. It wasn't a unified experience. RT had lackluster support, no compatibility and a stripped down experience.

They're trying to fix it with Windows on ARM now, but that's what people were asking for back then.


It is more a stance to show Microsoft is ready to put Windows on ARM CPUs if x86 loses the market.


I can see this happening for things that run in entirely managed environments but I don't think AWS can make the switch fully until that exact hardware is on people's benches. Doing microbenchmarking is quite awkward on the cloud, whereas anyone with a Linux laptop from the last 20 years can access PMCs for their hardware


Very little user code generates binaries that can _tell_ they are running on non-x86 hardware. Rust is ARM-memory-model safe, and existing C/C++ code that targets the x86 memory model is slowly getting ported over, but unless you are writing multithreaded C++ code that cuts corners it isn't an issue.

Running on the JVM, Ruby, Python, Go, Dlang, Swift, Julia or Rust and you won't notice a difference. It will be sooner than you think.


It's not the memory model I'm thinking of but the cache design, ROB size etc.

Obviously this is fairly niche but the friction to making something fast is hugely easier locally.


The vast majority of developers never profile their code. I think this is much less of an issue than anyone on HN would rank it. Only when the platform itself provides traces do they take it into consideration. And even then, I think most perf optimization falls into the category of "don't do the obviously slow thing, or the accidentally n^2 thing."
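To make the "accidentally n^2 thing" concrete, a toy sketch (dedupe_slow and dedupe_fast are made-up names, and the timings will vary by machine): the same loop with a list membership test versus a set membership test.

    import time

    def dedupe_slow(items):
        seen = []
        for x in items:
            if x not in seen:      # O(n) scan of a list, done n times -> O(n^2)
                seen.append(x)
        return seen

    def dedupe_fast(items):
        seen, out = set(), []
        for x in items:
            if x not in seen:      # O(1) average-case hash lookup
                seen.add(x)
                out.append(x)
        return out

    items = list(range(5_000)) * 2
    for fn in (dedupe_slow, dedupe_fast):
        t0 = time.perf_counter()
        fn(items)
        print(fn.__name__, f"{time.perf_counter() - t0:.3f}s")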

I partially agree with you though: as the penetration of ARM goes deeper into the programmer ecosystem, any mental roadblocks about deploying to ARM will disappear. It is a mindset issue, not a technical one.

In the 80s and 90s there were lots of alternative architectures and it wasn't a big deal, granted the software stacks were much, much smaller and closer to the metal. Now they are huge, but more abstract and farther away from machine issues.


"The vast majority of developers never profile their code."

Protip: New on the job and want to establish a reputation quickly? Find the most common path and fire a profiler at it as early as you can. The odds that there's some trivial win that will accelerate the code by huge amounts is fairly decent.

Another bit of evidence that developers rarely profile their code is that my mental model of how expensive some server process will be to run and most other developers' mental models tend to differ by at least an order of magnitude. I've had multiple conversations about the services I provide, with people asking me what my hardware is, expecting it to be run on some monster boxes or something, when I tell them it's really just two t3.mediums, which mostly do nothing, and I only have two for redundancy. And it's not like I go profile-crazy... I really just do some spot checks on hot-path code. By no means am I doing anything amazing. It's just... as you write more code, the odds that you accidentally write something that performs stupidly badly go up steadily, even if you're trying not to.
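For anyone who wants the flavor of that kind of spot check, a minimal sketch using cProfile from the standard library (handle_request and parse are made-up stand-ins for a real hot path):

    import cProfile
    import pstats

    def parse(payload):
        return [int(x) for x in payload.split(",")]

    def handle_request(payload):          # stand-in for the most common code path
        values = parse(payload)
        return sum(v * v for v in values)

    payload = ",".join(str(i) for i in range(1_000))

    profiler = cProfile.Profile()
    profiler.enable()
    for _ in range(2_000):
        handle_request(payload)
    profiler.disable()

    # top 5 functions by cumulative time; the surprise is usually in here
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)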


> Find the most common path and fire a profiler at it as early as you can. The odds that there's some trivial win that will accelerate the code by huge amounts is fairly decent.

I've found that a profiler isn't even needed to find significant wins in most codebases. Simple inspection of the code and removal of obviously slow or inefficient code paths can often lead to huge performance gains.


I mean I love finding those "obvious" improvements too but how do you know you've succeeded without profiling it? ;)


Every piece of code I’ve looked at in my current job is filled with transformations back and forth between representations.

It’s so painful to behold.

Binary formats converted to JSON blobs, each bigger than my first hard drive (!), and then back again, often multiple times in the same process.


This isn't really about you or me but the libraries that work behind the spaghetti people fling into the cloud.


Yes, and just like Intel & AMD spent a lot of effort/funding on building performance libraries and compilers, we should expect Amazon and Apple to invest in similar efforts.

Apple will definitely give all the necessary tools as part of Xcode for iOS/MacOS software optimisation.

AWS is going to be more interesting – this is a great opportunity for them to provide distributed profiling/tracing tools (as a hosted service, obviously) for Linux that run across a fleet of Graviton instances and help you do fleet-wide profile guided optimizations.

We should also see a lot of private companies building high-performance services on AWS contribute to highly optimized open-source libraries being ported to Graviton.


So far I found the getting-started repo for Graviton with a few pointers: https://github.com/aws/aws-graviton-getting-started


What kind of pointers were you expecting?

I found it to have quite a lot of useful pointers. Specifically, these two docs give a lot of useful information:

https://static.docs.arm.com/swog309707/a/Arm_Neoverse_N1_Sof...

https://static.docs.arm.com/ddi0487/ea/DDI0487E_a_armv8_arm....

And the repo itself contains a number of examples (like ffmpeg) that have been optimized based on these manuals.


Given a well-designed chip which achieves competitive performance across most benchmarks, most code will run sufficiently well for most use cases regardless of the nuances of specific cache design and sizes.

There is certainly an exception to this for chips with radically different designs and layouts, as well as for folks writing very low-level, performance-sensitive code which can benefit from platform-specific optimization (graphics comes to mind).

However, even in the latter case, I'd imagine the platform-specific and fallback platform-agnostic code will be within 10-50% of each other's performance. Meaning a particularly well-designed chip could make the platform-agnostic code cheaper on either a raw performance basis or a cost/performance basis.


If you use a VM language like Java, Ruby, etc, that work is largely abstracted.


True, though the work/fixes sometimes take a while to flow down. One example: https://bugs.openjdk.java.net/browse/JDK-8255351


I honestly don’t know why you put Go or the JVM in this list. It isn’t enough that the language, used properly, has sane semantics in multithreaded code; it’s that generations of improper multithreaded code have appeared to work because the x86 memory semantics have covered up unexpressed dependencies that should have been considered incorrect.


I would think the number of developers that have “that exact hardware” on their bench is extremely small (does AWS even tell you what cpu you get?)

For what fraction of products deployed to the cloud have the developers ever been seen doing _any_ microbenchmarking?


Professional laptops don’t last that long, and a lot of developers are given MBPs for their work. I personally expect that I’ll get an M1 laptop from my employer within the next 2 years. At that point the pressure to migrate from x86 to ARM will start to increase.


You miss my point - if I am seriously optimizing something I need to be on the same chip, not the same ISA.

Graviton2 is a Neoverse core from Arm and it's totally separate from M1.

Besides, Apple doesn't let you play with PMCs easily, and I'm assuming they won't be publishing any event tables any time soon, so unless they get reverse-engineered you'll have to do it through Xcode.


Yes, the m1 isn’t a graviton 2. But then again the mobile i7 in my current MBP isn’t the same as the Xeon processors my code runs on in production. This isn’t about serious optimization, but rather the ability for a developer to reasonably estimate how well their code will work in prod (e.g. “will it deadlock”). The closer your laptop gets to prod, the narrower the error bars get, but they’ll never go to zero.

And keep in mind this is about reducing the incentive to switch to a chip that’s cheaper per compute unit in the cloud. If Graviton 2 was more expensive or just equal in price to x86, I doubt that M1 laptops alone would be enough to incentivize a switch.


That's true, but the Xeon cores are much easier to compare and correlate because of the aforementioned access to well-defined and supported performance counters, as opposed to Apple's holier-than-thou approach to developers outside the castle.


We have MBPs on our desks, but our cloud is CentOS Xeon machines. The problems I run into are not about squeezing every last ms of performance, since it's vastly cheaper to just add more instances. The problems I care about are that some script I wrote suddenly doesn't work in production because of BSDisms, or Python incompatibilities, or old packages in brew, etc. It would be nice if Apple waved a magic wand and replaced its BSD subsystem with CentOS* but I won't be holding my breath :)

* Yes, I know CentOS is done; substitute as needed.


I just wish my employer would let me work on a Linux PC rather than an MBP; then I wouldn't have this mismatch between my machine and the server...


I think this is a slightly different point from the other responses, but that's not true: if I am seriously optimizing something, I need ssh access to the same chip.

I don't run my production profiles on my laptop - why would I expect an i5 or i7 in a thermally limited MBP to tell me how my 64-core server performs?

It's convenient for debugging to have the same instruction set (for some people, who run locally), but for profiling it doesn't matter at all.


I profile in valgrind :/


This is typical Hacker News. Yes, some people "seriously optimize" but the vast majority of software written is not heavily optimized nor is it written at companies with good engineering culture.

Most code is worked on until it'll pass QA and then thrown over the wall. For that majority of people, an M1 is definitely close enough to a Graviton.


> typical hacker news

Let me have my fun!


Instruments exposes a fair number of counters, though; what's wrong with using it?


I actually recommend just using 'spindump' and reading the output in a text editor. If you just want to look through a call stack, adding pretty much any UI just confuses things.


I am currently working on a native UI to visualize spindumps :(


Well try not to get the user lost in opening and closing all those call stack outline views, I'd rather just scroll in BBEdit ;)


It’s outline views, but I’ll see if I can keep an option to scroll through text too. (Personally, a major reason why I made this was I didn’t want to scroll through text like Activity Monitor does…)


I don't think it takes "exact" hardware. It takes ARM64, which M1 delivers. I already have a test M1 machine with Linux running in a Parallels (tech preview) VM and it works great.


While I generally agree with this sentiment, a lot of people don't realize how much the enterprise supply chain / product chain differs from the consumer equivalent. Huge customers that buy Intel chips at datacenter scale are pandered to and treated like royalty by both Intel and AMD. Companies are courted in the earliest stages of cutting-edge technical and product development and given rates so low (granted, for huge volume) that most consumers would not even believe them. The fact that companies like Serve The Home exist proves this - for those who don't know, the real business model of Serve The Home is to give enterprise clients the ability to play around with a whole data center of leading-edge tech; Serve The Home is simply a marketing "edge API" of sorts for the operation. Sure, it might look like Intel isn't "competitive", but many of the Intel vs. AMD flame wars in the server space over unreleased tech had their bidding wars settled years ago for that very tech.

One thing to also consider: the reason Amazon hugely prioritizes its "services" over bare-metal deployment is likely that it can execute those services on cheap ARM hardware. Bare-metal boxes and VMs give the impression that customers' software will perform in an x86-esque manner. For Amazon, the cost of the underlying compute per core is almost irrelevant, since they've already solved the problem of meshing their hardware together with blazing-fast network links - in this way, the ball is heavily in Arm's court for the future of Amazon data centers, although banking and government clients will likely not move away from x86 any time soon.


I commented [1] on something similar a few days ago,

>Cloud (Intel) isn’t really challenged yet....

AWS is estimated to be ~50% of the hyperscalers.

Hyperscalers are estimated to be 50% of the server and cloud business.

Hyperscalers are expanding at a rate faster than the rest of the market.

That expansion trend is not projected to slow down anytime soon.

AWS intends to have all of its own workloads and SaaS products running on Graviton / ARM (while still providing x86 services to those who need it).

Google and Microsoft are already gearing up their own ARM offerings, partly confirmed by Marvell's exit from the ARM server business.

>The problem is single core Arm performance outside of Apple chips isn’t there.

Cloud computing charges per vCPU. On all current x86 instances, that is one hyper-thread; on AWS Graviton, a vCPU is an actual CPU core. There are plenty of workloads where this matters, and large customers like Twitter and Pinterest have tested and shown an AWS Graviton 2 vCPU performing better than an x86 one, all while being 30% cheaper. At the end of the day, it is work done per dollar that matters in cloud computing, and right now in lots of applications Graviton 2 is winning, in some cases by a large margin.
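(To put purely illustrative numbers on that: if a Graviton 2 vCPU does the same work as an x86 hyper-thread vCPU but costs 30% less, the cost per unit of work is 0.7x; if it also does, say, 1.2x the work per vCPU — a made-up figure — it drops to 0.7 / 1.2 ≈ 0.58x, i.e. roughly 40% cheaper per unit of work.)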

If AWS sells 50% of its services on ARM in 5 years' time, that is 25% of the cloud business alone. Since it offers a huge competitive advantage, Google and Microsoft will have no choice but to join the race. And then there will be enough market force for Qualcomm, or maybe Marvell, to fab a commodity ARM server part for the rest of the market.

Which is why I was extremely worried about Intel. (Half of) the lucrative server market is basically gone (and I haven't factored in AMD yet). 5 years in tech hardware is basically 1-2 cycles, and there is nothing on Intel's roadmap showing they have a chance to compete apart from marketing and sales tactics. Those still go a long way, if I'm honest, but they aren't sustainable in the long term; they're more of a delaying tactic. Add to that a CEO who, despite trying very hard, had no experience in the market and product business. Luckily that is about to change.

Evaluating an ARM switch takes time, software preparation takes time, and, more importantly, getting wafers from TSMC takes time, as demand from all markets is exceeding expectations. But all of this is already in motion, and if this is the kind of response you get from Graviton 2, imagine Graviton 3.

[1] https://news.ycombinator.com/item?id=25808856


>Which is why I was extremely worried about Intel. (Half of) The lucrative Server market is basically gone.

Right. I suspect that in time we'll look back and realize that it was already too late for Intel to right the ship, despite ARM having a tiny share of PC and server sales.

Their PC business is in grave danger as well. Within a few years, we're going to see ARM-powered Windows PCs that are competitive with Intel's offerings in several metrics, but most critically, in power efficiency.

These ARM PCs will have tiny market share (<5%) for the first few years, because the manufacturing capacity to supplant Intel simply does not exist. But despite their small marketshare, these ARM PCs will have a devastating impact on Intel's future.

Assuming these ARM PCs can emulate x86 with sufficient performance (as Apple does with Rosetta), consumers and OEMs will realize that ARM PCs work just as well as x86 Intel PCs. At that point, the x86 "moat" will have been broken, and we'll see ARM PCs grow in market share in lockstep with the improvements in ARM manufacturing capacity (TSMC, etc...).

Intel is in a downward spiral, and I've seen no indication that they know how to solve it. Their best "plan" appears to be to just hope that their manufacturing issues get sorted out quickly enough that they can right the ship. But given their track record, nobody would bet on that happening. Intel better pray that Windows x86 emulation is garbage.

Intel does not have the luxury of time to sort out their issues. They need more competitive products to fend off ARM, today. Within a year or two, ARM will have a tiny but critical foothold in the PC and server market that will crack open the x86 moat, and invite ever increasing competition from ARM.


I think the irony few would have predicted is that Apple switching to Intel started all of this.

The effort to switch over from PPC, and the payoff of doing so, was still a recent memory when the iPhone came out, so they partly pivoted again to ARM. Then smartphones ate the laptop and desktop business, increasing the overall base of non-x86 competence in the world. If Apple had not walked through that door, someone else would have, but Apple has a customer relationship that gives them some liberties that others don't necessarily enjoy.


As long as Intel is willing to accept that margins will never be as good as they once were, I think there are lots of things they could still do.

The previous two CEOs chose profit margin, and hopefully we have enough evidence today that that was the wrong choice for the company's long-term survival.

It is very rare for a CEO to do anything radical; that is something I have learned from observing the difference between a founder and a CEO. But Patrick Gelsinger is the closest thing to that.


I guess I don't understand why the M1 makes developing on Graviton easier. It doesn't make Android or Windows ARM dev any easier.

I guess the idea is to run a Linux flavor that supports both the M1 and Graviton on the Macs and hope any native work is compatible?


It's not hope; ARM64 is compatible with ARM64 by definition. The same binaries can be used in development and production.

Windows ARM development (in a VM) should be much faster on an M1 Mac than on an x86 computer since no emulation is needed.


>It's not hope; ARM64 is compatible with ARM64 by definition

Linux, macOS, and Windows ARM64 binaries are not cross-compatible by definition, thus my question. Is everyone excited to run a Graviton-supported Linux distro on these M1s, or is there something else?

I would also be surprised if every M1 graphics feature was fully supported on these Amazon chips.

If we're talking about cross-compatibility, then we can't use any binaries compiled for M1-specific features either... So no, it's not compatible by definition.


Dev in a Linux VM/container on your M1 MacBook, then deploy to a Graviton instance.
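As a minimal sketch of what that workflow looks like for a compiled language (assuming Go; the file name is made up): the linux/arm64 binary you build in the VM, or cross-compile from macOS, is the same one you ship to the Graviton instance.

    // hello.go - trivial service entry point.
    //
    // Build natively inside an arm64 Linux VM on the M1, or cross-compile
    // from any machine with:
    //   GOOS=linux GOARCH=arm64 go build -o hello hello.go
    // The resulting binary runs unmodified on a Graviton instance.
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        fmt.Printf("running on %s/%s\n", runtime.GOOS, runtime.GOARCH)
    }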


Aren't most of us already programming against a virtual machine, such as Node, .NET or the JVM? I think the CPU architecture hardly matters today.


Many people do code against some sort of VM, but there are still people writing code in C/C++/Rust/Go/&c that gets compiled to machine code and run directly.

Also, even if you're running against a VM, your VM is running on an ISA, so performance differences between them are still relevant to your code's performance.


C, C++, Rust, and Go compile to an abstract machine instead. It is quite hard these days to get them to do something different between x86, ARM, and Power, except by relying on memory-model features not guaranteed on the latter two; and on the M1 the memory model apes x86's. Given a compatible memory model (which, NB, ARM has not had until the M1), compiling for the target is trivial.

The x86 memory model makes it increasingly hard to scale performance to more cores. That has not held AMD up much, mainly because people don't scale out things that don't perform well when scaled, and use a GPU when that does better. In principle it has to break at some point, but that has been said for a long time. It is indefinitely hard to port code developed on x86 to a more relaxed memory model, so the overwhelming majority of such code will never be ported.


Note that M1 only uses TSO for Rosetta; ARM code runs with the ARM weak memory model.


> It is indefinitely hard to port code developed on x86 to a more relaxed memory model, so the overwhelming majority of such codes will never be ported.

Most code should just work, maybe with some tsan testing. There are other ways to test for nondeterminism too, e.g. sleeping the different threads randomly.

It helps if you have a real end-to-end test suite; for some reason all the developers I've met lately think unit tests are the only kind of tests.
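For Go code, the built-in race detector plays the tsan role. A deliberately racy sketch (hypothetical test file): plain 'go test' will usually pass, while 'go test -race' reports the race on any architecture, which is the kind of testing that catches these bugs before an ARM migration does.

    // counter_test.go (hypothetical): two goroutines increment a shared
    // counter without synchronization. Run with `go test -race` and the
    // race detector flags it regardless of the host architecture.
    package counter

    import (
        "sync"
        "testing"
    )

    func TestRacyCounter(t *testing.T) {
        var wg sync.WaitGroup
        counter := 0
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    counter++ // unsynchronized read-modify-write: data race
                }
            }()
        }
        wg.Wait()
        t.Logf("counter = %d (may be < 2000 due to lost updates)", counter)
    }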


> which, NB, ARM has not had until M1

This isn't true at all: other ARM cores have gone all the way to implement full sequential consistency. Plus, the ARM ISA itself is "good enough" to do efficient x86 memory model emulation as part of an extension to Armv8.3-A.


Having worked some on maintaining a stack on both Intel and ARM, it matters less than it did, but it's not a no-op. E.g. Node packages with native modules are often not available prebuilt for ARM, and then the build fails due to ... <after 2 days debugging C++ compilation errors, you might know>.


If it can emulate x86, is there really a motivation for developers to switch to ARM? (I don't have an M1 and don't really know what it's like to compile stuff and deploy it to "the cloud.")


Emulation is no way to estimate performance.


Sure, but as a counterexample, Docker performance on Mac has historically been abysmal [0][1], yet everyone on Mac I know still develops using it. We ignore the performance hit on dev machines, knowing it won't affect prod (Linux servers).

I don't see why this pattern would fail to hold, but am open to new perspectives.

[0] https://dev.to/ericnograles/why-is-docker-on-macos-so-much-w...

[1] https://www.reddit.com/r/docker/comments/bh8rpf/docker_perfo...


The VM that it uses takes advantage of hardware-accelerated virtualization for running amd64 VMs on amd64 CPUs. You don't get hardware-accelerated virtualization for amd64 VMs on any ARM CPU I know of...


Abysmal Docker performance on non-Linux is mainly because the filesystem isn't native, not because of CPU virtualization.


How much does the arch matter if you're targeting AWS? Aren't the differences between local service instances and instances running in the cloud a much bigger problem for development?


Yeah, and I assume we are going to see Graviton/Amazon Linux-based notebooks any day now.


Honestly, if Amazon spun this right and the machines came pre-set-up for development and distribution and had all the right little specs (13- and 16-inch sizes, HiDPI matte displays, long battery life, a solid keyboard, a MacBook-like trackpad), they could really hammer the backend dev market. Bonus points if they came with some sort of crazy assistance logic, like each machine getting a pre-set-up AWS Windows server for streaming Windows x86 apps.


> could really hammer the backend dev market

That's worth, what, a few thousand unit sales?


If they could get it to $600-800, with an option for Windows and a decent trackpad/keyboard, you could sell them to students just as well. Shoot, if the DE for Amazon Linux were user-friendly enough they wouldn't even need Windows, since half of schools are on GSuite these days.


The point wouldn't be to sell laptops.


Exactly.


Like a Bloomberg machine for devops.


Can't take Graviton seriously until I can run my binaries via Lambda on it.


At that point, if it would be trouble for Intel, it would be a death sentence for AMD...

Intel has fabs; yes, that's maybe what's holding them back at the moment, but it's also a big factor in what maintains their value.

If x86 dies and neither Intel nor AMD pivots in time, Intel can become a fab company. They already offer these services (nowhere near the scale of, say, TSMC), but they have a massive portfolio of fabs, their fabs are located in the West, and they have a massive IP portfolio covering everything from IC design to manufacturing.


> Intel can become a fab company

Not unless they catch up with TSMC in process technology.

Otherwise, they become an uncompetitive foundry.


The point is Intel can't compete as a fab or as a design house.

It's doubtful if Intel would have been able to design an equivalent to the M1, even with access to TSMC's 5nm process and an ARM license.

Which suggests there's no point in throwing money at Intel because the management culture ("management debt") itself is no longer competitive.

It would take a genius CEO to fix this, and it's not obvious that CEO exists anywhere in the industry.


I don't know how you can predict the future like this. Yes, Intel greedily chose not to participate in the phone SoC market and is paying the price.

But their choice not to invest in EUV early doesn't mean that they will never catch up. They still have plenty of cash, and presumably if they woke up and decided to, they wouldn't be any worse off than Samsung. And definitely better off than SMIC.

Similarly, plenty of smart microarchitecture people work at Intel; freeing them to create a design competitive with Zen 3 or the M1 is entirely possible. Given that AMD is still on 7nm and is just a couple of percent off the M1, it seems that, if nothing else, Intel could be there too.

But as you point out, Intel's failings are 100% bad management at this point. It's hard to believe they can't hire or unleash what's needed to move forward. At the moment they seem very "IBM" in their moves, but one has to believe that a good CEO with a strong engineering background can cut the management bullcrap and get back to basics. Unlike IBM, they fundamentally have just a single product to worry about.


AMD looked just as bad not so long ago.


Plus even though Intel has been super fat for 3 decades or so, everyone has predicted their death for at least another 3 decades (during their switch from memory to CPUs and then afterwards when RISCs were going to take over the world).

So they do have a bit of history of overcoming these predictions. We'll just have to see if they've become too rusty to turn the ship around.


AMD looked far worse... if Intel is “dying” with yearly revenue of ~$70B or so, AMD should’ve been bankrupt 10 times already.

Intel is managing to compete in per-core performance while being essentially 2-3 nodes behind, and is generating several times the revenue of their competitor.

Zen is awesome and we need more competition, but Intel isn't nearly as far behind as it was during the P4 days; its revenue is nearing an all-time high and its business is more diversified than it has ever been. Even if you exclude the datacenter and client computing groups, it is still bringing in more revenue than AMD.


>Not unless they catch up with TSMC in process technology

1. Intel doesn't have to catch up. Intel's 14nm is more than enough for a lot of fabless companies; not every chip needs a cutting-edge node.

2. Splitting the Intel foundry out into a pure-play business would allow Intel to build up an ecosystem like TSMC's.

3. Intel's 10nm is much denser than TSMC's 7nm, so Intel is not too far behind; they just need to solve the yield problem. Splitting up Intel's design and foundry groups would allow each to be more agile and not handcuffed to the other.

In fact, Intel Design should license out x86 the way ARM licenses its designs. Why not take the best business model from each of the current leaders? Intel Design takes ARM's business model, and Intel Foundry takes TSMC's.


The ARM business model isn't that profitable. Intel's market cap right now is about $240 billion, 6 times the amount Nvidia is paying for ARM.


>Intel's market cap right now is about 240 billion, 6 times the amount Nvidia is paying for ARM

So what? Yahoo was a giant in its heyday. BlackBerry was the king of phones. No empire stays on top forever.

Apple and Amazon have created their own CPUs. ARM is killing it in the mobile space.

Intel is the king right now, but with more and more of its customers designing their own CPUs, how long before Intel falls?


ARM Ltd. is earning relatively very little from this, and there seems to be little reason why that would change in the future. This is why it can't really survive as an independent company.

If you compare net income instead of market cap, Intel is ahead by 70 times (instead of 6) and is relatively undervalued compared to other tech companies.


You don't have to be a bleeding-edge foundry; there are tons of components that cannot be manufactured on bleeding-edge nodes, nor need to be.

Intel can't compete right now on the bleeding-edge node, but they outcompete TSMC on essentially every other factor when it comes to manufacturing.


How hard would it be for AMD to make an ARM64 chip based partly on the IP of the Zen architecture? Seems like AMD could equal or beat M1 if they wanted.


>> Seems like AMD could equal or beat M1 if they wanted.

Some time around 5(?) years ago AMD was planning to have an ARM option: you'd get essentially an ARM core in an AMD chip with all the surrounding circuitry. They hyped it so much I wondered if they might go further than just that.

Further? Maybe a core that could run either ISA, or a mix of both core types. I dunno, but they dumped that (or shelved it) to focus on Zen, which saved them. No doubt the idea and capability still exist within the company. I'd like to see them do a RISC-V chip compatible with existing boards.


"Seattle". tried it, couldn't meet perf targets, canned it.


They already have that: https://www.amd.com/en/amd-opteron-a1100

Didn't sell very well.


AMD makes great designs; switching to ARM/RISC-V would make them lose value but not kill them.


And Intel doesn’t?


AMD also has a GPU division.


Intel makes more money selling their Wi-Fi chipsets than AMD makes selling GPUs; heck, even including consoles...


Got a source for that? Sounds hard to believe.


Computing and Graphics, which includes the Radeon Technologies Group, had revenue of $1.67B in AMD's last quarter; industry estimates are that $1.2-1.3B of that was from CPU sales.

Intel's Internet of Things Group alone had revenue of $680M last quarter, and it has hit $1B in quarterly IOTG revenue previously.

https://www.statista.com/statistics/1096381/intel-internet-o...


The thing about all of these articles analyzing Intel's problems is that nobody really knows the details of Intel's "problems" because it comes down to just one "problem" that we have no insight into: node size. What failures happened in Intel's engineering/engineering management of its fabs that led to it getting stuck at 14 nm? Only the people in charge of Intel's fabs know exactly what went wrong, and to my knowledge they're not talking. If Intel had kept chugging along and got down to 10 nm years ago when they first said they would, and then 7 nm by now, it wouldn't have any of these other problems. And we don't know exactly why that didn't happen.


Intel's problem was that they were slow getting their 10nm design online. That's no longer the case. Intel's new problem is much bigger than that at this point.

Until fairly recently, Intel had a clear competitive advantage: Their near monopoly on server and desktop CPUs. Recent events have illustrated that the industry is ready to move away from Intel entirely. Apple's M1 is certainly the most conspicuous example, but Microsoft is pushing that way (a bit slower), Amazon is already pushing their own server architecture and this is only going to accelerate.

Even if Intel can get their 7nm processes on line this year, Apple is gone, Amazon is gone, and more will follow. If Qualcomm is able to bring their new CPUs online from their recent acquisition, that's going to add another high performance desktop/ server ready CPU to the market.

Intel has done well so far because they can charge a pretty big premium as the premier x86 vendor. The days when x86 commands a price premium are quickly coming to an end. Even if Intel fixes their process, their ability to charge a premium for chips is fading fast.


We actually have a lot of insight, in that Intel still doesn't have a good grasp on the problem. Their 10nm was supposed to enter volume production in mid-2018, and it still hasn't truly entered volume production today. Additionally, Intel announced in July 2020 that their 7nm is delayed by at least a year, which suggests they still haven't figured out their node-delay problem.


> We actually have a lot of insight, in that Intel still doesn't have a good grasp on the problem. Their 10nm was supposed to enter volume production in mid-2018, and it still hasn't truly entered volume production today. Additionally, Intel announced in July 2020 that their 7nm is delayed by at least a year, which suggests they still haven't figured out their node-delay problem.

Knowing something happened is not the same as knowing "why" it happened. That's the point of my comment. We don't know why they were not able to achieve volume production on 10 nm earlier.


I'll also add that it's fascinating that both 10 nm and 7 nm are having issues.

My understanding (and please correct me if I'm wrong), is that the development of manufacturing capabilities for any given node is an independent process. It's like building two houses: the construction of the second house isn't dependent on the construction of the first. Likewise, the development of 7 nm isn't dependent on the perfection of 10 nm.

This perhaps suggests that there is a deep institutional problem at Intel, impacting multiple manufacturing processes. That is something more significant than a big manufacturing problem holding up the development of one node.


I think that's not quite right. While it's true that for each node they build different manufacturing lines, generating the required know-how is an iterative/evolutionary process in the same way that process node technology usually builds on the proven tech of the previous node.


SemiAccurate has written a lot about the reasons; for me the essence was complacency, unrealistic goals, and the lack of a plan B in case the schedule slipped.


I think it's just a difficult problem. Intel is trying to do 10 nm without EUV. TSMC never solved that problem because they switched to EUV at that node size.


Why do they not want to use EUV?


Wild speculation: the "newness" budget for 10nm was already used up by other innovations, or they earmarked all EUV resources for 7nm or 5nm. EUV steppers don't exactly grow on trees.


A key issue is volume. Intel is doing many times less volume than the mobile chipmakers, so Intel can't spend as much to solve the problem.

It's a bad strategic position to be in, and I agree with Ben's suggestions as one of the only ways out of it.


The point of my comment is that Intel doesn't know either and that's a bigger problem.


Wasn’t the issue that the whole industry did a joint venture, but Intel decided to go it alone?

I worked at a site (in an unrelated industry) where there was a lot of collaborative semiconductor stuff going on, and the only logo “missing” was Intel's.


Didn't Samsung also go it alone, or am I mistaken?


Samsung is the opposite of Intel: gaining market share as mobile takes over amid the collapse of Intel's former moat. They have more money to solve their problems.


I think it's pretty clear from the article what happened. They didn't have the capital (stemming from a lack of foresight and incentives) to invest in these fabs, relative to their competition.

If you look at this from an engineering standpoint, I think you'll miss the forest for the trees. From a business and strategy standpoint, this was a classic case of disruption. The dominant player, Intel, was making tons of money on x86 and missed the mobile opportunity. TSMC and Samsung seized on the opportunity to manufacture these chips when Intel wouldn't. As a result, they had more money to build and invest in research for better fabs, funded by the many customers buying mobile chips. Intel, being the only customer of its fabs, would only have money to improve its fabs if it sold more x86 chips (which were stagnating). By this time, it was too late.


I found the geopolitical portion to be the most important aspect here. China has shown a willingness to flex its muscles to enforce its values beyond its borders. China is smart and plays a long game. We don't want to wake up one day, find they've flexed their muscles on their regional neighbors the way they did with their rare-earths strong-arming from 2010-2014, and not have fab capabilities to fall back on in the West.

(For that matter, I'm astounded that after 2014 the status quo returned on rare earths with very little state-level strategy or subsidy to address the risk there.)


Ben missed an important part of the geopolitical difference between TSMC and Intel: Taiwan is much more invested in TSMC's success than America is in Intel's.

Taiwan's share of the semiconductor industry is 66%, and TSMC is the leader of that industry. Semiconductors help keep Taiwan safe from China's encroachment because they buy it protection from allies like the US and Europe, whose economies rely heavily on them.

To Taiwan, semiconductor leadership is an existential question. To America, semiconductors are just business.

This means Taiwan is also likely to do more politically to keep TSMC competitive, much like Korea with Samsung.


Neither Taiwan nor TSMC can produce the key tool that makes this all work: the photolithography machine itself.

Only ASML currently has that technology.

And it turns out the photolithography machine isn't really a plug-and-play device. It's very fussy. It breaks often. And it requires an army of engineers (as cheap as possible) to man the machines and produce the required yield in order to make the whole operation profitable.

This is the Achilles' heel of the whole operation.

I suspect that China is researching and producing its own photolithography machines, independent of American or Western technology. And when they crack it, they will recapture the entire Chinese market for themselves, and TSMC will become irrelevant to any strategic or tactical plans of theirs.


> Semiconductors helps keep Taiwan from China's encroachment because it buys them protection from allies like the US and Europe, whose economies heavily rely on them.

Are there any signed agreements that would enforce this? If China one day suddenly decides to take Taiwan, would the US or Europe step in with military forces?


The closest I've found is this: https://en.wikipedia.org/wiki/Taiwan_Relations_Act

Not guaranteed "mutual defense" of any sort, but the US at least has committed itself to helping Taiwan protect itself with military aid. The section on "Military provisions" is probably most helpful.


China's GDP is projected to surpass the US GDP in 2026 [1]. After that it won't be long until Chinese defense spending surpasses that of the US. And after that, it won't be long until the US and its allies realize it will be healthier for them to mind their own business when China takes over Taiwan.

[1] https://fortune.com/2021/01/18/chinas-2020-gdp-world-no-1-ec...


There are no official agreements, since neither the US nor any major European country recognizes Taiwan/ROC, but the US has declared multiple times that it would defend Taiwan (see the ‘Taiwan Relations Act’ and the ‘Six Assurances’).


Not an agreement, but the US stance towards the defense of Taiwan (ROC) was recently declassified early: https://www.realcleardefense.com/articles/2021/01/15/declass...


https://en.wikipedia.org/wiki/Taiwan_Relations_Act

> The Taiwan Relations Act does not guarantee the USA will intervene militarily if the PRC attacks or invades Taiwan


It would not be wise to commit to intervening in all circumstances. Similarly, the NATO treaties also do not specify in detail how the allies have to react in case of an attack.


>I'm astounded

Our political system and over-financialized economy seem to suffer from the same hyper-short-term focus that many corporations chasing quarterly returns run into: no long-term planning or focus, and perpetual "election season" thrashing one way or another while nothing is followed through on.

Plus, in 2, 4, or 8 years many of the leaders are gone and making money in lobbying or corporate positions. No possibly short-term-painful but long-term-beneficial policy gets enacted, etc.

And many still uphold our "values" and our system as the ideal, and question anyone who would look toward the Chinese model as providing something to learn from. So I anticipate this trend will continue.


It appears the Republicans are all-in on the anti-China bandwagon. Now you just have to convince the Democrats.

I don't think this will be hard. Anyone with a brain looking at the situation realizes we're setting ourselves up for a bleak future by continuing the present course.

The globalists can focus on elevating our international partners to distribute manufacturing: Vietnam, Mexico, Africa.

The nationalists can focus on domestic jobs programs and factories. Eventually it will become clear that we're going to staff them up with immigrant workers and provide a path to citizenship. We need a larger population of workers anyway.


My impression was that Republicans were only half-hearted about China as that issue made its way through President Trump's administration. The general tone I sensed was that things like tariffs were tolerated in support of the party's leader, not embraced for their own sake. And the backtracking on sanctions against specific Chinese firms indicated there was little or no significant GOP support pushing President Trump to follow through. The requirement that TikTok sell off its US operations was watered down into a nice lucrative contract for Oracle, though all of that is in limbo and the whole issue has lost steam, its fate possibly resting in the courts, or with a new administration that will be dealing with many larger issues.

The molehill-turned-mountain issue of Hunter Biden's association with a Chinese private equity fund will raise lots of loud rhetoric, but more for partisan in-fighting than for action against China.

Meanwhile the US, the West, and corporations will pay lip service to decrying human rights violations and labor conditions. China will accept this as the need to save face, while any stronger action will be avoided to prevent China from flexing its economic muscles against the corporations or countries that rely on its exports. No company wants to be the next hotel chain forced to temporarily take down its website and issue an embarrassing apology. No country wants to be the next Japan, cut off from rare earth exports.

Just look at Hong Kong: Sure the US has receded from such issues in the last 4 years, but it's not like any other country did anything more than express their displeasure in various diplomatically acceptable ways.


Hong Kong was a lost cause to begin with. With China having full sovereignty over Hong Kong and the Sino-British Joint Declaration being useless (not enforceable in practice, and not even violated, at least on paper), the West could do little more about Hong Kong than about Xinjiang or Tibet.


Trump was all in on the anti-China bandwagon. The traditional Republicans were just tolerating Trump long enough to get their agenda passed - conservative judges and tax cuts. Republicans traditionally have been about free trade.


> [...] and not have fab capabilities to fall back on in the West.

I'm not too concerned:

- There are still a number of foundries in western countries that produce chips which are good enough for "military equipment".

- Companies like TSMC are reliant on imports of specialized chemicals and tools mostly from Japan/USA/Europe.

- Any move from China against Taiwan would likely be followed by significant emigration/"brain drain".


National security doesn't just extend to direct military applications. Pretty much every industry and piece of critical infrastructure comes into play here. It won't matter if western fabs can produce something "good enough" if every piece of technological infrastructure from the past 5 years was built with something better.

As for moves against Taiwan, China hasn't given up that prize. Brain drain would be moot if China simply prevented emigration. I view Hong Kong right now as China testing the waters for future actions of that sort.

Happily, though, I also view TSMC's pending build of a fab in Arizona as exactly the sort of geographic diversification of industrial and human resources that's necessary. We just need more of it.


>As for moves again at Taiwan, China hasn't given up that prize.

The CCP hasn't given up since the KMT high-tailed it to Taiwan. For more than 40 years, America has cozied up to the Chinese government and done business with China.

America told the Taiwanese government not to "make trouble", but we all know China is the one making all the trouble, with military threats and aircraft flown toward Taiwan day in and day out.

Taiwan has built up impressive defenses, from buying weapons (from the US) to developing its own. Yes, China can take Taiwan, that's 100% certain, but at what price?

That's what the Taiwanese are betting on: China will think twice about invading.


I bet TSMC has a number of bombs planted around the most critical machines, much like Switzerland has bombs planted around most critical tunnels and bridges.

Trying to grab Taiwan with force alone, even if formally successful, would mean losing its crown jewels forever.


The bombs were removed some years ago in Switzerland, as the risk of them going off was deemed greater than the risk of a sudden invasion.

Just a nitpick; your point absolutely stands.


TSMC is not really that important. It's currently only useful for the cutting edge of CPUs, especially for mobile phones, which get a battery boost from using a more efficient processor.

Military hardware uses CPU technology that's 10+ years old, which the Chinese are capable of fabricating themselves on the mainland. The stuff needs to be rugged and, likely, radiation-hardened.

And besides, isn’t it easier for Switzerland to just launch missiles at the bridges, instead of actively maintaining explosive devices on each bridge?


Maybe TSMC is not that important for making advanced weapons. It is still important for billion-dollar markets like cell phones, cloud servers, desktop and laptop computers, etc. For instance, Apple, which happens to somehow rake in money selling computing devices, is very dependent on TSMC, with no viable replacement available now.


>we all know China is the one who make all the troubles with military threat and flying aircraft over Taiwan, day in and day out.

As they are fully allowed to, as Taiwan is their own territory. You and I might disagree, but out of the 195 countries on Earth, the only ones that recognize Taiwan as a country are these few:

Guatemala, Haiti, Honduras, Paraguay, Nicaragua, Belize, Saint Lucia, Saint Vincent And The Grenadines, Marshall Islands, Saint Kitts And Nevis, Palau, Tuvalu, Nauru, Vatican City.

Compare that to the 139 countries that recognise the State of Palestine (where Israel can still do as it damn well pleases!), and it is quite easy to see that while some might pretend to care about Taiwan (like the US), all they really care about in the end is the money to be made from trading with/in the PRC.

>China will think twice about invading.

The PRC doesn't invade; it isn't the US. It gains influence in much more subtle and intelligent ways than bombing people, but if it wanted to, it could do to Taiwan what Israel does to Palestine and no one would do anything but talk, talk, talk.


>As they are fully allowed to, as Taiwan is their own territory.

Taiwan is a fully functional country. It doesn't pay taxes to China, its military is not under China's control, a lot of countries don't even have visa requirements for Taiwanese citizens, and the Taiwanese passport is better than the Chinese one.

>It isn't the US. It gains influence in much more subtle and intelligent ways than bombing people

LOL. China so "intelligently" gains influence(?). The Taiwanese voted for the DPP's Tsai and gave her the largest vote total in the history of Taiwan. The Taiwanese have time and time again voted against Chinese "influence".


> out of the 195 countries on earth the only ones who recognize Taiwan is a country are these few:

There would be a lot more if China weren't so threatening about it.

The moment there is an opening or weakness, much more of the world will jump on board.


Taiwan's defenses can hold up against minor probing from China, nothing truly sustained.

The true deterrent to China isn't any treaty agreement to protect Taiwan, which doesn't exist. It's the realpolitik of 30,000 US troops in Taiwan.

Any significant & sustained attack against Taiwan would harm US troops. They'd be little more than a speedbump for China if China acted quickly & in force, but that speedbump would require a significant-- and perhaps disproportionate-- response from the US.


> Taiwan's defenses can hold up against minor probing from China, nothing truly sustained.

It's the same for every small country bordering a potentially hostile, much larger neighbor. Nobody expects the small country to be able to withstand a full-scale invasion. The point is to make that invasion expensive enough not to be worth it.


You're not even wrong. With Taiwan being an island, any prospective invasion is going to be bloody. Successful marine invasions are surprisingly rare in history; usually they succeed only because the defender could not set up a defense, or the attacker committed overwhelming resources. However, Taiwan's strategic position (it guards access to the mainland from the ocean) and the ideological accomplishment of having tied up this loose end might make it worth it. And in the event the Taiwanese economy and its importance decline, it could become harder for the US to justify defending it.


The issue isn't just military equipment though. When your entire economy is reliant on electronic chips, it's untenable for all of those chips to come from a geopolitical opponent. That gives them a lot of influence over business and politics without having to impact military equipment.


Yeah, for some reason I assumed that military equipment mostly used, like, low-performance but reliable stuff: in-order processors, real-time operating systems, EM hardening. Probably made by some company like Texas Instruments, who will happily keep selling you the same chip for 30 years.


Well, I doubt the US military is too concerned with chasing the transistor density per cm^2, but there are cutting edge areas of military tech in the form of weapons guidance, fighter jet avionics & weapons systems, etc. that may require more advanced capabilities in components-- I don't know.

(As an aside, when we sell things like fighter jets to other countries, they do not get the same avionics & weapons/targeting systems that we use)


That’s a good comparison... CPUs are increasingly a commodity.

