Hacker News
ARM Mac: Why I'm Worried About Virtualization (bmalehorn.com)
156 points by bmalehorn 84 days ago | 305 comments



I'm actually not worried, for a few reasons:

- I already do cross-arch development day-in and day out between x86 and ARM, and have only run into hard blockers on a library or tool a handful of times. The solve was generally pretty straightforward to either use an ARM-compatible alternative, or to cross-compile it myself.

- We've done this many, many times before and it's not that bad. I know I'm not the only one here old enough to remember the days of having heterogeneous fleets across PPC, SPARC, and x86. Or even more recently: different x86 extensions from different chipset manufacturers.


But back in those very heterogeneous days (don’t forget to throw in Alpha and MIPS!), computers running more exotic processors like SPARC were workstation-class and their manufacturers were responsive to requests from the SE community. So while I share your lack of concern about heterogeneity, I am concerned that Apple won’t be making their ARM platform the best for developer workloads (at least, non-iOS, non-macOS workloads). Remember that besides freeing them from Intel’s slower release schedule, Apple’s other main incentive for adopting ARM is to ensure their computers have a great performance-per-watt ratio on a low power budget. While I know their latest A-series chips are very, very competitive with (sorry, I mean: mopping the floor with) Intel’s current mainstream chips, Apple’s expertise is still with low-power mobile devices. I’m not convinced Apple will be switching away from Xeon chips in the Mac Pro or i7 (i9?) chips in the high-end MBPs - but most SEs I know using MBPs have the 13-inch models, which are moving to ARM right away.

In short: I feel Apple’s consumer-oriented direction is starting to be at odds with what they need to do in order to remain a compelling general development platform.

Remember that macOS became a favourite for web-application development only around 12-13 years ago (prior to that it was seen as an OS for creative types) - because Apple was selling nice hardware with an equally nice Unix-family OS and a compelling desktop experience. Take a look at typical Linux desktop distros from around the same time: visual eyesores, and incompatible with most laptops thanks to OEM driver issues. Apple wasn’t specifically targeting software developers at all - they were even showing ominous signs of disinterest by discontinuing their X Window server and going back on their promise of establishing Java as a pillar of the OS.

With the move to ARM on laptops I think Apple will just lock-down the bootloader and won’t look back.

What’s funny now is that Windows 10’s WSL, Windows Terminal, Docker support, etc. are suddenly making Microsoft look good as an OS vendor for writing code for non-Microsoft platforms. And at least with a Windows laptop - even an ARM Windows laptop - you can tinker with the bootloader and fire up Slackware if you really want to.


Yep, I think we're of one mind here. I'm not worried about the ARM architecture shift, but the whole direction of the platform makes me expect I won't be using a Macbook as my primary dev machine much longer.

Interestingly enough - for personal hacks (mostly cross-compiling Golang to ARM, natch) I'm actually using WSL lately, and it's definitely good enough. Not perfect, but nothing much is.


> Not perfect, but nothing much is.

Has anyone ported Plan9 to ARM yet?


Sure!

http://9front.org/releases/

- .386.iso - 386 PC

- .amd64.iso - amd64 PC

- .pi.img - arm - Raspberry Pi 1, 2 and 3

- .pi3.img - arm64 - Raspberry Pi 3 and 4


From Wikipedia:

https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs

> Platforms: x86 / Vx32, x86-64, MIPS, DEC Alpha, SPARC, PowerPC, ARM


As a lifelong Mac user, my next machine is going to be a Windows box. It's not so much hatred for Apple that's driving me. It just seems that Microsoft has a huge financial incentive now to play nicely with Linux due to their cloud business being pretty successful, and Apple with their iOS hegemony does not. The product decisions each company is making seem to reflect that. Also, $4k buys a lot more Windows box than it does Apple.


Re: Azure.

Microsoft is lucky, more than anything else. (Azure is a very nice platform now, but until about 5 years ago it was very feature-anaemic in comparison.)

AWS is king, so anyone with any reason not to use Amazon will automatically use the next-biggest/next-best cloud provider - and that’s Microsoft. Which is odd: someone waking up from a 10-year coma would presume it would be Oracle or IBM, or a major webhosting or VPS vendor. Oracle was slow to get in, and IBM cheated by just buying SoftLayer and then spectacularly cocking it up: https://techcrunch.com/2020/06/09/ibm-cloud-suffers-prolonge...


I'm actually happy about this! With the recent decline of pretty much every UNIX vendor/platform and the now-deep proliferation of Intel/Linux, it's beginning to feel like MS/IE6 in the early 2000s all over again. We need diversity and competition for both OSes and processors (just as we did with browsers back then).

In fact, let's bring back Ultrix, OSF/1, DG/UX, Solaris! (... we can skip HP-UX and SCO because they're truly awful). Note that OpenVMS has apparently already made its x86_64 comeback!


Actually I liked HP Vaults, long before there was any talk about containers and similar on UNIX.


Tons of x86 code accesses misaligned addresses.


Well, that's only because it's efficient to do so on x86. Code recompiled for ARM isn't going to do that.


It’s not that simple if that code is using explicit struct layouts or x86/x64 intrinsics.

Forgive my ignorance though - but what ISA extensions are in Apple’s ARM chips for SIMD? Intel poured a lot of effort into SSE and AVX - does Apple have an answer there?


> It’s not that simple if that code is using explicit struct layouts or x86/x64 intrinsics.

Performance-sensitive code that relies on alignment guarantees and other platform details will not work and will need to be updated, yes. IIRC Apple's chips do NEON for SIMD; not sure if they support SVE yet. (But I figure they will have to once it becomes a required part of the ARM standard…)


I actually wouldn't be surprised if Apple were to put SVE in their first desktop-class ARM chip. They were the first to move to 64 bits, after all; SVE has been optional in the standard for quite a while, and they wouldn't be the first implementer this time.


Apple wasn't the first with a 64-bit ARM chip - AppliedMicro released theirs in 2011: https://www.businesswire.com/news/home/20111027006673/en/App... - the Apple A7 wasn't released for another 2 years after that.


The chip was announced in 2011, but when was it available in a server? I assume near the time of the iPhone 5s (2013/9).

https://www.servethehome.com/hp-announces-retail-availabilit...


It is very likely that ARM-based Macs will lack a performant hypervisor upon release. We will have to see how VMware responds. I'd bet it will inspire new products and innovation, and the desktop space will move towards a less x86-64-centric world. In the end it is a short-term problem. Someone will respond and provide a performant hypervisor that can run on an ARM host and virtualize both x86-64 and ARM guests.

It's true it will cause some pain in the first year or two, but even as a heavy VMWare Fusion user I am really looking forward to the benefits of a vertically integrated laptop.


Apple has Hypervisor.framework which has been updated for ARM Mac[1].

Xhyve and HyperKit (used by Docker for Mac) use Hypervisor.framework exclusively. The last time I tried Hypervisor.framework on x86-64, the CPU performance was quite fine (matching that of VMware/VirtualBox), but I/O was pretty abysmal. Emulating x86-64 on ARM is probably going to be the role of something similar to QEMU.

[1]: https://developer.apple.com/documentation/hypervisor/apple_s...


Parallels already announced they’re going to have an option. TBD on what it looks like.

https://www.parallels.com/blogs/apple-silicon-wwdc/


They showed it running Debian in the keynote, but it was ARM Debian. It’s gonna be a while, if ever, till we get x86 support on Apple Silicon.


Cool! Does docker no longer use virtual box on macOS?


Based on this article, it now uses Hypervisor.framework: https://www.quora.com/What-hypervisor-does-Docker-use

> In Linux, Docker uses kernel features. It does not use any hypervisor.

> In Windows, Docker Desktop uses Hyper-V.

> In Mac, Docker Desktop uses Apple Hypervisor.framework.


Depends on whether you install Docker for Mac (uses HyperKit) or Docker Toolbox (uses VirtualBox)


If I understand correctly, hypervisors don't emulate hardware, that's what emulators (like QEMU) do. That would mean that physically the most performant option to run x86 code on an ARM CPU is dynamic translation (like QEMU-TCG or the new Rosetta JIT support).


Broadly speaking yes. Generally, hypervisors mediate access to shared hardware whereas emulators implement simulated hardware in software.

The very first hypervisors worked using dynamic binary translation. They would run a "guest" operating system by executing a stream of native instructions directly on the host CPU. This stream was dynamically translated to remove, and trap in software, any privileged operations so the hypervisor could handle them. Modern hypervisors take advantage of hardware features that allow you to trap on privileged operations more efficiently. ARM started adding some of these features in 2013 [1]. In contrast, Intel first started adding these features to the Pentium 4 in 2005 [2]. When such hardware features were first released, they actually were not faster than the software translation. These days the hardware-based options are faster. There is even hardware support for running nested hypervisors. So the first question we need to ask is how hypervisors built on ARM's hardware features stack up against Intel's. I have no doubt that parity, at a minimum, will be reached; I just don't know what the current state of play is. As indicated in my original comment, if I had to bet, on release we won't quite have the performance or feature set you are used to with a product like VMware Fusion.

The second question we need to ask is whether there is a way to efficiently emulate x86-64 processors on ARM hosts. Even better if you can do this while taking advantage of the supporting infrastructure hypervisors already have in terms of emulated devices and other features. QEMU just gets you the CPU and a short list of devices. The full experience of a seamlessly virtualized guest requires a lot more than that. But at the core you are right that it is going to require QEMU-TCG, Rosetta 2, or some similar technology, because the silicon just is not there to execute x86-64.

Exciting stuff! We'll see where it all lands.

[1] https://lwn.net/Articles/557132/

[2] https://en.wikipedia.org/wiki/X86_virtualization#Intel-VT-x


Rosetta 2 ideally performs a static binary translation; it only falls back to emulation when that doesn't work. So it's a bit different from TCG :)


With less than 10% of market share, do you really think it will change the whole thing? Unless Microsoft pushes for ARM too, I don't see any changes soon.


We'll have to see. As another person already pointed out, VMWare has experimented with ESXi on ARM and they claim their customers could realize significant cost savings by migrating to ARM [1]. So if they've already done a good amount of engineering work on it, we may well see VMWare Fusion on ARM that can efficiently run ARM guests. They plan on releasing a tech preview in July [2].

Whether you can stick an emulated x86-64 CPU in there is another matter. It's a much bigger engineering lift and unless Apple puts some resources into it it's not clear to me a virtualization company by themselves would want to incur the cost. I hope there is enough demand for it and that someone will provide it. For me personally the only reason I run VMWare Fusion is to access x86-only Windows applications for which there is no replacement.

[1] https://blogs.vmware.com/vsphere/2019/10/esxi-on-arm-at-the-....

[2] https://twitter.com/VMwareFusion/status/1275466832002945024


Microsoft has been dabbling with ARM for a long time now.

It will all come down to whether this move gives Apple a significant performance and/or battery life advantage. If Apple pulls it off it will force Microsoft and other vendors to respond.


They announced there will be no Boot Camp; I hope there will be a way to install other OSes on their new machines as well.

This is critical, especially once they make their older devices EOL and there are no more OS updates.


VMware and Parallels have both said they will have support. I don’t know what that means, yet.

I would not personally buy a Mac product for the next 2-3 years.


Microsoft just announced their own OpenJDK variant for Windows ARM.


If Microsoft ports their whole suite of business applications to ARM, there might be hope, but without that it's hard.


Office for ARM will be supported on Windows 10X; it already exists for tablets and phones, and Apple also demoed the beta for ARM Macs at the WWDC keynote.


also SharePoint, Projects, Dynamics?


SharePoint, Dynamics run on whatever .NET runs, and that includes ARM CPUs.

Project is part of Office.

In any case I thought we were talking about consumer devices here.


VMWare did have a hypervisor for Raspberry Pi.


Yes, VMWare ESXi runs on Arm just fine. :-)


Can you run an x86 guest under it?


No. Virtualization (dividing a host into different logical hosts but executing unmodified CPU instructions, like VMware, VirtualBox, ...) and emulation (translating instructions, like Rosetta) are two different beasts.


That's pretty much what I assumed from what I know of VMware. It's going to be a big issue for future Macs: there are entire segments of developers who may have to abandon Macs if we can't run VMs of x86 operating systems.


I'd say that I'm excited for ARM. That doesn't mean the transition will be seamless or easy.

I know that a big complaint about the move is "great, now I'm doing ARM locally and deploying to x86". I think this is a legitimate concern, for now, but I also strongly believe it is inevitable that, within the next decade, deploying to x86 in the Cloud will be as "weird" as ARM would be today. The benefits are way too numerous.

Well, more accurately, I think it'll be a "I'm on Fargate, oh wow, Fargate runs on ARM, I had no idea" kind of thing. Ok, the article outlines why you may need some idea, but come on; we're talking about one line where I'm downloading the x86 version of a dependency instead of an ARM version. That's an easy fix.
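That one-line fix usually amounts to keying the download off the machine architecture. A minimal sketch - the URL pattern and tool name here are made-up placeholders, not a real release:

```shell
# map `uname -m` output to the artifact suffix release pages typically use;
# the download URL below is a hypothetical example
arch=$(uname -m)
case "$arch" in
  x86_64)        suffix=amd64 ;;
  aarch64|arm64) suffix=arm64 ;;
  *) echo "unsupported architecture: $arch" >&2; exit 1 ;;
esac
echo "fetching https://example.com/releases/tool-linux-${suffix}.tar.gz"
```

The same switch works in a Dockerfile via the `TARGETARCH` build argument, which buildx sets for you.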

I don't know what this means for open accessibility of hardware. Right now, I could go buy and run locally the Intel Xeon chip powering my app in the cloud; when things move to ARM, it absolutely will be "AWS Graviton" (not sold outside AWS) or "Azure ARM Whatever" (not sold outside Azure). This sucks for accessibility - but, actually, does it? ARM enables the cloud providers to do this; they could never design their own x86 chips. As long as we're all standardized on the same ISA, and the chips generally have the same characteristics, I'm looking forward to a very bright future where vendors are now also competing against one another in the silicon. And I may not be able to buy an AWS Graviton, but I'm sure (well, hopeful) that one day I'll be able to build an ARM desktop that isn't a Raspberry Pi. AWS will have their chips, Qualcomm has theirs, Apple has theirs, Microsoft and Google have some, and they're all competing against one another.

Ok, maybe this is a pipe dream. But, I'm definitely in the short Intel camp, at least on the long-term.


This touches on an interesting question which I think underlies some of the concerns here today: who will build ARM chips comparable to, say, an i7, that I can go out and buy and plug into my machine at home?

No one does now, and it's not obvious who would as we speak today. But if the demand is there, then even with lots of obstacles to overcome they can and will.


Not sure about the socket used (it might be soldered down), but aarch64-based workstations are already available for the general public, e.g. [1].

[1] https://www.anandtech.com/show/15737/arm-development-for-the...


This seems like a weird benchmark; reading from /dev/urandom and gzipping random data does not seem like something most folks will want to do. It even appears that /dev/urandom speed differs greatly across architectures [0], and there are issues with /dev/random being fundamentally slow due to the entropy pool [1] (but I guess this is why the author uses /dev/urandom).

It would be better to measure something more related to what Docker users will actually do, like the build time of a common container, and/or the latency of HTTP requests to native/emulated containers running the same workload.

One reason to feel positive about the virtualization issues is that Rosetta 2 provides x86->ARM translation for JITs, which an ARM-based QEMU could perhaps integrate into its own binary translation [2].

[0] https://ianix.com/pub/comparing-dev-random-speed-linux-bsd.h...

[1] https://superuser.com/questions/359599/why-is-my-dev-random-...

[2] https://developer.apple.com/videos/play/wwdc2020/10686/


Author here.

I'm glad somebody said something! Yes, the gzip perf test is pretty silly, but it illustrates a significant difference. /dev/urandom throughput on this setup was about 100 MB/s, so it wasn't a bottleneck for this test - the bottleneck was gzip.
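One way to sanity-check that claim yourself is to time the two stages separately; a rough sketch:

```shell
# stage 1: raw /dev/urandom read throughput, no compression involved -
# dd prints the MB/s figure to stderr when it finishes
dd if=/dev/urandom of=/dev/null bs=4k count=10k

# stage 2: the combined pipeline from the article; if its throughput is much
# lower than stage 1, gzip (not urandom) is the bottleneck
dd if=/dev/urandom bs=4k count=10k | gzip > /dev/null
```

If stage 1 reports ~100 MB/s and the pipeline far less, the compressor dominates, as described above.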

Feel free to come up with a performance test yourself! I personally want to know what an HTTP test would look like. You can run an ARM image by running:

    docker run -it arm64v8/ubuntu
Unfortunately, Rosetta 2 is not going to help here. Rosetta 2 translates x86 -> ARM, but only for Mac binaries. It does not translate Linux binaries, and cannot reach inside a Docker image.


Was your emulation done with the qemu user-space emulator[1] (the syscall translation layer) or the qemu system emulator[2] (the VM)? If it was qemu-system, you might get better numbers with qemu-user-static, which does binary translation similar to Rosetta 2 rather than being a full-system emulator with all its overhead.

You can probably use qemu-user-static to translate x86-64-only binaries in a Linux container on an ARM machine, too, but I have never tried.

[1]: https://www.qemu.org/docs/master/user/main.html

[2]: https://www.qemu.org/docs/master/system/index.html


I ran this on a Linux laptop - it looks like it's running qemu-user-static:

    root        9934  103  0.0 125444  6664 pts/0    Rl+  12:25   0:12 /usr/bin/qemu-aarch64-static /usr/bin/gzip
So it might be that Docker already runs a native x86_64 Linux, then uses qemu-static binary translation.


That's strange; in my experience it shouldn't have a 6x slowdown. It might be due to several factors, but here's your test, running on my system without Docker:

Ryzen 3900X (host machine)

    $ dd if=/dev/urandom bs=4k count=10k | gzip >/dev/null
    10240+0 records in
    10240+0 records out
    41943040 bytes (42 MB, 40 MiB) copied, 1.02284 s, 41.0 MB/s
qemu-aarch64-static

    $ dd if=/dev/urandom bs=4k count=10k | proot -R /tmp/aarch64-alpine -q qemu-aarch64-static sh -c 'gzip >/dev/null'
    10240+0 records in
    10240+0 records out
    41943040 bytes (42 MB, 40 MiB) copied, 3.33964 s, 12.6 MB/s


From the article:

> Emulators can run a different architecture between the host and the guest, but simulate the guest operating system at about 5x-10x slowdown.

I think this is a misleading statement because it implies that there is a constant performance overhead associated with CPU emulation. In reality, the performance relies heavily on the workload, more so with JIT-ed emulators.

Regarding this specific benchmark, I think there are two main factors contributing to the poor performance. The first factor is that the benchmark completes in a short period of time. With JITs, performance tends to improve for long running processes because JITs can cache translation results allowing you to amortize the translation overhead. Another factor is that your benchmark is especially heavy on I/O, meaning that it spends a lot of time translating syscalls instead of running native instructions.

I'd also like to add that CPU emulators sans syscall translation should work for any binaries, even those targeted for Linux. It would require a copy of the Linux kernel, but Docker won't work without it anyways.


So I'm not familiar with how Darwin does things, but on most FOSS unixes it's easy to use qemu to run one arch on another, either full-system or just user-mode emulation (which, when wired up correctly, lets you seamlessly execute e.g. ARM binaries on an x86 system). I would expect it to be easy enough to either set up user-mode translation, or just swap Docker's backing hypervisor with an x86 VM. Or, worst case, just run qemu-system-x86_64 on your ARM Mac, run Linux inside that VM, and run Docker on that Linux; SSH in and it should be mostly transparent.


One benchmark would be to track down a python/JS/etc-based "hello world" demo container. Base one version on Intel and the other on ARM, and measure each version's container build time and request latency after it is set up.

If changing the base image is all that's needed and both Dockerfiles otherwise assume ubuntu, this should not take too long.
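For the two-variant approach, the Dockerfiles can be identical except for the first line. A sketch - the image tags follow Docker Hub's official multi-arch naming, but `app.py` and the package choice are placeholders:

```dockerfile
# amd64 variant; swap the FROM line to arm64v8/ubuntu:20.04 for the ARM build -
# everything below stays byte-for-byte identical
FROM amd64/ubuntu:20.04
RUN apt-get update \
    && apt-get install -y --no-install-recommends python3 \
    && rm -rf /var/lib/apt/lists/*
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```

With `docker buildx build --platform linux/amd64,linux/arm64 .` you can skip the duplication entirely, since the plain `ubuntu:20.04` tag is itself a multi-arch manifest.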


It didn’t take months, the time I did it (running Docker on a Pinebook, which was not a great experience). It took a couple of hours to flip some base images away from Alpine, as Debian already has a load of ARM packages built.


That assumes every library you need supports the aarch64 architecture - which is sort of true for popular libraries, but not for all of them.


> It took a couple of hours to flip some base images away from Alpine, as Debian already has a load of ARM packages built.

Why did you have to switch from Alpine to Debian? Alpine supports ARM quite happily, and it looks like they're shipping Docker images for ARM (and other architectures, too).


Not op, but alpine package manager leaves a lot to be desired especially compared to ubuntu. Also much easier to set locale. Since minimal ubuntu & debian exist, I think the question should be: "Why would you use alpine?" especially considering potentially slower performance:

https://pythonspeed.com/articles/alpine-docker-python/


> Not op, but alpine package manager leaves a lot to be desired especially compared to ubuntu.

How so? If anything, apk is way nicer than apt in a container build script (or anything automated); with apt you have to use -y and maybe force the noninteractive frontend, where `apk add foo` just works, correctly, automatically, with no effort required.

> Also much easier to set locale.

> considering potentially slower performance:

It's slower at installing python packages from pypi since it can't use cached versions. That's not the same thing as "it's slow".

> Since minimal ubuntu & debian exist, I think the question should be: "Why would you use alpine?"

Because minimal ubuntu is still ~3 times the size of alpine, alpine is much smaller and simpler, alpine defaults to staying small (even if you remember to --no-install-recommends, deb packages are bigger and less modular), and I don't have to remember how to force apt to run in "no really install without asking questions" mode.


Having built several Alpine- and Debian-based images: Alpine has always been very nice on the happy path, but much more hassle to dig out of a hole when something broke due to software misbehaving on non-Alpine assumptions.

Debian in Docker, in comparison, offers fewer surprises, but you have to consistently do the right incantations.

Regarding missing binary wheels on ARM: with more ARM laptops in the wild, those should eventually become more common.


This assumes that your Docker workload can run on an ARM system without lots of hacking, and also that you trust the ARM-compiled version you're running locally to function identically to the x86-compiled version running on your server.


No, it doesn't. If I'd made the statement "all you need to do is…" then it would have involved some assumptions. What I said was "I did this and all it took was…" no assumptions, just experience.


It's not the worst assumption on HN by far


If only people writing applications could find some way of testing that their apps function properly.


So many wrong assumptions ...

1. If emulating aarch64 (arm64) on x86_64 is 6x slower (on your system, btw - it's not a universal constant), it doesn't mean emulating x86_64 on aarch64 will be 6x slower. It'd probably be worse, or at least that's my gut feeling.

2. Generic container images like the Ubuntu mentioned usually have aarch64 (arm64) support, so running the x86_64 image makes no sense for the presented use-case.

3. You won't be able to use most software because they don't release ARM binaries ... and the example uses `wget` && `tar xf`, with no binary signature check. As someone who has been porting stuff from x86_64 to aarch64 for a couple of years, I admit I've seen this pattern frequently. The most obvious solution is to build from sources, which would have been better off on x86_64 too, instead of fetching a prebuilt (and unverified) binary from the internet. Maybe there are some CPU flags the compiler could notice and apply optimizations which are not included in the prebuilt binary.
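For the unverified-download pattern in point 3, the minimal mitigation is checking the artifact against a published checksum before unpacking. A sketch using a dummy stand-in file, since the real tarball and its upstream checksum file are hypothetical here:

```shell
# a dummy file stands in for the downloaded release tarball
printf 'pretend this is a release tarball' > tool.tar.gz
# upstream would normally publish this .sha256 file alongside the download
sha256sum tool.tar.gz > tool.tar.gz.sha256
# -c fails loudly on mismatch, so the extract step never runs on a bad artifact
sha256sum -c tool.tar.gz.sha256 && echo "ok to run: tar xf tool.tar.gz"
```

It doesn't replace real signature verification (gpg against the project's key), but it at least catches corrupted or swapped downloads.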

I'm not an Apple fan and I'm certainly not a fan of cross-architecture development either. I do agree with the general idea behind the article, however I find it a bit hand wavy.


> Generic container images like the Ubuntu mentioned usually have aarch64 (arm64) support, so running the x86_64 image makes no sense for the presented use-case.

I think the argument here is you can't build your own docker images that you use in production and run them on your mac without emulation (unless your production workload also runs on ARM).


That's a fair point. Emulation implies other limitations too - code compiled on your machine might leverage only the CPU features emulated, which would lead to sub-optimal binaries, not to mention much slower builds.


If you don’t have an environment between your laptop and prod you got more things wrong than this ARM migration.


Author here.

> 1. If emulating aarch64 (arm64) on x86_64 is 6x slower (on your system, btw - it's not a universal constant), it doesn't mean emulating x86_64 on aarch64 will be 6x slower. It'd probably be worse, or at least that's my gut feeling.

Yup, performance benchmarks are inherently flawed and nobody knows anything right now without the hardware. However if ARM -> x86 emulation is anything like x86 -> ARM emulation, I would expect a really big performance loss.

> 2. Generic container images like the Ubuntu mentioned usually have aarch64 (arm64) support, so running the x86_64 image makes no sense for the presented use-case.

Ah actually I address this in the article, and even run an arm64 image. The short version is, it would be a lot of work to convert your whole backend infrastructure to ARM just because you got a new laptop.

> 3. You won't be able to use most software because they don't release ARM binaries ... and the example uses `wget` && `tar xf`, with no binary signature check. As someone who has been porting stuff from x86_64 to aarch64 for a couple of years, I admit I've seen this pattern frequently. The most obvious solution is to build from sources, which would have been better off on x86_64 too, instead of fetching a prebuilt (and unverified) binary from the internet. Maybe there are some CPU flags the compiler could notice and apply optimizations which are not included in the prebuilt binary.

Yes, if only everything were built from source! I'm not saying there's no solution, just that the solution would be a lot of work. If the library is obscure enough and the errors are strange enough, it might be so much work as to be impossible to the busy web developer.

My goal was to write a kind of hand-wavy article to get people talking about this problem.


I agree on the performance loss. Just for kicks, I ran the same commands on some real aarch64 (32 cores, 3.0GHz, ARMv8.? - can't remember and already logged off the machine, but I can double check tomorrow). Without further context, numbers:

  someuser@some-aarch64-machine:~$ docker run arm64v8/ubuntu bash -c 'dd if=/dev/urandom bs=4k count=10k | gzip > /dev/null'
  10240+0 records in
  10240+0 records out
  41943040 bytes (42 MB, 40 MiB) copied, 2.18298 s, 19.2 MB/s
  someuser@some-aarch64-machine:~$ docker run amd64/ubuntu bash -c 'dd if=/dev/urandom bs=4k count=10k | gzip > /dev/null'
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
  warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
  10240+0 records in
  10240+0 records out
  41943040 bytes (42 MB, 40 MiB) copied, 6.72324 s, 6.2 MB/s


Awesome, thanks for testing this out!

A 3x slowdown is not as bad as 6x, but it's still quite a bit. I also saw a slowdown of ~4x when I tried this experiment on a native Linux x86_64 machine running an ARM image - perhaps the Mac -> Linux virtualization layer slowed it down further.

5x may have been a bit alarmist, but regardless we should brace ourselves for a big performance hit on x86_64 virtualization.


I'm surprised it's only a 3x slowdown. But the single-thread performance of native execution (without emulation) is worse on aarch64, which was expected. Imo, a better benchmark would take into account the multithread performance with/without emulation.


Yes, agreed. And the examples given are not fair. There are a lot of optimizations one can do in Docker, especially when dealing with I/O workloads (the dd example in the article). Cloud providers have been doing this for a long, long time already. Why the author did not mention those is to be seen...


Are there any excited embedded developers in the crowd? I have done a little embedded work and cross compiling has always been a huge pain in the ass to setup. I know some people have even gone as far as purchasing expensive niche workstations with ARM CPUs specifically to avoid this problem. I feel like having a mainstream ARM platform like the MBP will make compiling software for ARM-based single board computers a breeze.


You'll still have a completely separate toolchain. First, a lot of (most?) embedded development is not done on ARM Cortex-A. ARM Cortex-M is probably the most popular embedded platform in industry, and what it shares with Cortex-A is the brand "ARM"; otherwise it is a separate architecture and instruction set.

Even if you are talking about doing ARM Cortex-A series, you aren't going to be using the same libraries on the embedded device that you use on a Mac. You'd most likely be using either Linux (ala Raspberry Pi) or an RTOS; either way you have a different compiler and stdlib to use.


My expensive niche workstation = raspberry pi, pick your ARMv flavor.

Most tools are adopting Linux remote build + remote debug, wherein you ssh in and hook into the compiler and debugger all from the comfort of CLion/VS2019/VSCode.

If they don't have remote build, there is often building locally, with a copy of the root filesystem, using a cross-compiler, then remote deploy + debug. The most annoying part of this process is fixing all the symlinks not supported on NTFS.

Expensiver niche workstation = $500 dev kit directly representative of your target, but with everything exposed.

The interesting thing is that now we need ARM -> x86 remote build or cross-compilation tools, of which I know none.
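One hedge here: Clang is a cross-compiler by default, so producing x86_64 objects from an ARM host should already work. A rough sketch, assuming Clang is installed (linking a full binary would additionally need an x86_64 sysroot, which isn't shown):

```shell
# Skip if clang isn't installed.
command -v clang >/dev/null 2>&1 || exit 0

# --target selects the output architecture. Compiling a header-free
# file needs no x86_64 sysroot; linking would.
printf 'int main(void) { return 0; }\n' > main.c
clang --target=x86_64-linux-gnu -c main.c -o main_x86_64.o
```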


There isn't a single embedded target that uses MacOS's libc so you will still have to set up a specialized environment.


I'd much rather have a more powerful x86 workstation for the same money than an ARM laptop. Never really had problems with cross-compile. And without support for running Linux natively, it doesn't get me much for even for the parts of testing that don't need the specific target (well, VMs maybe).


To be clear, we don't know if the ARM MacBook will be able to run Linux natively. We only know that Apple won't continue support for Boot Camp and therefore Windows anymore. Linux was never supported.


There won't be native Linux. https://news.ycombinator.com/item?id=23640746 (Craig Federighi confirms Apple Silicon Macs will not support booting other OS)

EDIT: fixed link


I don't understand your point. Native Linux isn't restricted to x86_64.

EDIT: Your new link doesn't tell a different story. From the comments:

> It is still possible to disable secure boot using csrutil. Apple has never officially supported booting Linux on a Mac.

> https://twitter.com/never_released/status/127585087215369011...


I think they didn't mean to link to that specific comment, but the overall submission.


I have to say I don't understand your point either.

Okay, so we can disable secure boot. What is the next step? How does one get to running Linux natively from there?


Reminds me of the argument made in https://www.realworldtech.com/forum/?threadid=183440&curpost....

I cross-compile Linux kernels daily. I think Clang makes this simpler, but the missing C runtime for cross-compiling userspace executables still leaves much to be desired.

I think Zig is doing interesting things here. Clang should just straight up adopt this, IMO. https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...
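For the curious, the zig cc approach from that post looks roughly like this (assuming Zig is installed): it bundles libc headers for many targets, so even cross-target linking tends to work with a single flag.

```shell
# Skip if zig isn't installed.
command -v zig >/dev/null 2>&1 || exit 0

printf '#include <stdio.h>\nint main(void) { puts("hi"); return 0; }\n' > hi.c
# One target triple, no separate sysroot or toolchain setup.
zig cc -target aarch64-linux-gnu hi.c -o hi-arm64
```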


I dunno if I count, I have a RPi 3, original Droid (oh yeah, it's still working), an LG L9 and a HTC One M8. And I'm building kernels for them (mostly because I'm overclocking the shit out of everything and I have no choice but to build custom kernels).

32 bit on the phones, a real pain in the ass to cross compile, but it's a fun learning experience (I'm just a noob to any programming). I'd love to get paid for this tbh :D


I solved that problem (mostly!) by using Go! Occasionally I do have to go dig out something like an arm-brcm-linux-gnueabi-gcc to cross compile C, but mostly I use Go.
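The whole trick in Go is two environment variables, at least for pure-Go code (cgo reintroduces the C cross-toolchain problem). A sketch, assuming a Go toolchain is installed, with a throwaway hello.go:

```shell
# Skip if the Go toolchain isn't installed.
command -v go >/dev/null 2>&1 || exit 0

printf 'package main\n\nimport "fmt"\n\nfunc main() { fmt.Println("hi") }\n' > hello.go
# Same source, two targets, no extra toolchain needed.
GOOS=linux GOARCH=arm64 go build -o hello-linux-arm64 hello.go
GOOS=linux GOARCH=amd64 go build -o hello-linux-amd64 hello.go
```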


My 2017 Macbook pro still has quite a lot of life in it, but it seems unlikely I will replace it with another Mac in a few years. Before the Mac I had a Lenovo X1 Carbon running Linux and it was great; even then it was a better development environment in some ways (docker has better filesystem performance, pacman is much better than homebrew). I do use some audio processing applications and my kids play a few games that do not run at all on Linux. I may try WSL again instead of going straight back to Linux, but it's hard to imagine Mac will be the best OS for me.


I recently had to get a new laptop and went with windows for the first time in a decade. Macs are still great to develop on for now, but looking at the trajectory apple has taken, the growing pains ARM will likely bring, and also the trajectory of Windows, it seemed like Windows would be the safer choice over the next few years.


If the worst of every possible thing happens and you avoid the most obvious solutions and one is very, very slow, then yes, you're right to worry.

Or, you could use already-extant Debian ARM releases and spend minutes rather than months switching over.


I'd like to advocate for remote development environments. Most of my day is spent typing into a tmux session on a cloud-hosted box. (I picked up a Magic Keyboard for my 11" iPad Pro, and thanks to Blink it's a great glass terminal. It's not going to work if you're debugging let's say a React app, but I've been very happy on it the last several days churning out Golang.)

Running stuff on your laptop makes it run slow, get hot, and burn battery. I've considered getting a small x86 or ARM media appliance as a (physically local) remote server for when I can't count on an Internet connection. A media PC costs how much? The big holdup has been the tyranny of choice I'm confronted with. (Suggestions are welcome!)

I think very few people would be surprised if the coming of ARM Macs will, along with AWS's ARM moves (and Microsoft's), drive acceptance and adoption of ARM-based server computing. The mechanism won't be anything formal, just the vague pressure that comes from people wanting their programs and libraries to compile locally.


I have a small server at home running proxmox. I have a couple containers (lxd) running for personal dev projects. I agree that if you can do it like this, it's nice. I can be pretty much anywhere and open up a terminal, vpn in, and pick up where I left off thanks to tmux.


    I would expect about a 5x slowdown running Docker images.
    
    Docker on a Mac utilizes a hypervisor. Hypervisors rely on running the same architecture on the host as the guest, and are about 1x - 2x as slow as running natively.
    
    Since you're running ARM Mac, these hypervisors can only run ARM Linux. They can't run x86_64 Linux.
    
    What will happen instead? These tools will fall back on emulators.
Most of the software I run in Docker already supports ARM. I'd imagine that a lot of (most of?) us that use Docker do, too.


It'll be annoying maintaining multiple docker images. Kind of defeats the purpose.


The loss of Boot Camp is huge. One of the reasons I develop on a Mac is so that I can use a single machine for all development (including macOS, iOS, and Windows development). Most of the time, developing for Windows on Parallels works fine, but there are some cases where it's necessary to boot directly into Windows to test or debug adequately. I hope Apple is able to reach an agreement with Microsoft or at least continues shipping Intel-based Macs until then.


Apple will ship Intel-based laptops for at least a couple more years (at the very least, the models that will already be out at the switch), and will support them for a lot longer; so just buy the best Intel-based one following the switch, and it will last you 4-5 more years.

But also: Getting a cloud Windows workstation or an el-cheapo-$500-under-the-desk-when-you-really-need-it Windows machine is probably worth it if you're doing professional work. It would quickly cost much less than the time you lose when rebooting to the other OS, from my experience.


Apple will singlehandedly make 2021 "the year of Linux on the desktop".


Every time Microsoft or Apple majorly screws something up, people say this. It still hasn't happened yet.

However, I think Apple has been a far greater threat to Linux adoption than Microsoft. Why? Because it gives techies the *nix environment they want, with the software and hardware support no one will give them on Linux.

There is real value in proprietary commercial end-user application software. Most companies who make such software couldn't care less about supporting Linux. So if you want to use Linux, you have to use F/OSS alternatives and continue to try convincing everyone that somehow they're better than the commercial options... even when the rest of the world has agreed that they're really not.

The whole incentive structure around F/OSS development really doesn't work for software where the profit motive is in the product itself... Not some nebulous "support contract" that you don't actually need. (Which is a far bigger issue for end-user applications.)


> Because it gives techies the *nix environment they want, with the software and hardware support no one will give them on Linux.

The UNIX experience on the Mac is pretty shitty. Ancient versions of all the tooling. Command-line utils have that weird BSD well-water flavor. No package management. Funny Docker quirks.

The hardware used to be pretty nice, but honestly I'm still having trouble forgiving them for getting rid of the physical ESC key and turning volume control into a two-step routine on the TouchBar.

Honestly if I'm doing server-side development, I much prefer using my ThinkPad (Ubuntu) over my MacBook. About the only thing I miss is the far superior touchpad on the Macbook. That's it.


> Ancient versions of all the tooling. Command-line utils have that weird BSD well-water flavor. No package management.

Can be easily fixed by installing homebrew. Also you claim it is a shitty Unix experience while complaining about BSD flavoured tools.

> Funny Docker quirks.

How is that Unix related? BSD has similar issues.

Maybe you should have written the GNU/Linux experience is pretty shitty on macOS but no one claimed otherwise.


>> Because it gives techies the *nix environment they want, with the software and hardware support no one will give them on Linux.

> The UNIX experience on the Mac is pretty shitty. Ancient versions of all the tooling. Command-line utils have that weird BSD well-water flavor. No package management. Funny Docker quirks.

It doesn't have to be the best *nix environment. Hell, it doesn't even have to be a good one. It just has to be "good-enough". For this, they still have an advantage over Windows. And compared with Linux, they still have the advantage that by-and-large, things "just work". I have never personally been able to say that about a Linux desktop I've had. There is always one more thing to tweak, one more knob to turn, etc...

I'm with you on the ESC key and touch bar though... although thankfully they brought the ESC key back.


Exactly what advantage does macOS shitty Unix environment have over Windows 10 w/ WSL2?


Everything WSL2 does has already been doable with Windows for at least 10 years. It’s a VM with some file system sharing. You might as well just ask what advantage MacOS has over Windows. WSL hasn’t changed anything really.


It's not running in a virtual machine.


Hardly an advantage with all the other negatives.


They _had_ the advantage over Windows. WSL2 is so well-integrated as of the latest major release that I think this advantage has now flipped.


Isn’t WSL2 basically a virtual machine with better integration? Is it still super slow when accessing /mnt?

I can also install something like multipass on macOS if I want a good integrated virtual machine.


> by-and-large, things "just work".

Last week I had to drop to vi and edit nfs.conf on a friend's Mac to solve very slow transfer rates. "Just works" within a very narrow definition of primitive use cases.


I was with you until they released the newest MBP. Now my beloved ESC is back. I already do all my development in a Vagrant instance so I've never been bothered by the tooling. In all other regards I prefer MacOS as a desktop environment to the currently available Linux choices.


> I'm still having trouble forgiving them for getting rid of the physical ESC key and turning volume control into a two-step routine on the TouchBar

To be fair it's always been a one-step routine on the touchbar (touch and drag the icon) and they brought back the escape key.


Touch and drag makes it easy to jump the volume up/down by variable amounts, but it really sucks compared to a single key press for "just a little louder" or "just a little quieter." There is something satisfying in the discrete steps of volume notches.

On the other hand, my speakers have a physical volume dial that provides feedback via friction on movement so I like that better than touchbar or physical up/down buttons.


So there are actually two types of touch and drag you can do. One is holding until the slider appears and then dragging; the other is flicking the icons left and right, which causes single-step increments.

Discoverability certainly sucks for that second one.


UNIX experience on Mac is as UNIX as it gets, given that it is certified as a proper UNIX.

Linux is its own thing and trying to mix UNIX with Linux is always going to lead to disappointment.


> The UNIX experience on the Mac is pretty shitty. Ancient versions of all the tooling. Command-line utils have that weird BSD well-water flavor. No package management. Funny Docker quirks.

Very, very true. And Homebrew is actively becoming worse now. A few years back, Homebrew was great - now using it feels like using some weird underground software stack that exists only because Apple hasn't come around to nixing it yet.


Tap the volume icon and swipe left or right without letting go to do it one fell swoop.


> The UNIX experience on the Mac is pretty shitty. Ancient versions of all the tooling. Command-line utils have that weird BSD well-water flavor.

Well, OS X is Unix, but GNU is not Unix.


Correction: macOS is UNIX®


> Ancient versions of all the tooling. Command-line utils have that weird BSD well-water flavor. No package management.

That's what MacPorts is for ;)


Package management == Homebrew. No issues there.

Not a fan of the touchbar of course.

Otherwise everything else I need is usable on a Mac.


> However, I think Apple has been a far greater threat to Linux adoption than Microsoft. Why? Because it gives techies the *nix environment they want, with the software and hardware support no one will give them on Linux.

With WSL, you basically get an actual Linux userland (with WSL2, I think you get an actual Linux kernel too), not just a Unix that's like Linux but different enough to be annoying. But I'm not sure that will be enough to convince people to move to Windows.


If Microsoft Office comes on Linux, I am switching. And yes, I have used LibreOffice, it simply isn't that good.


If Microsoft had been more serious regarding its POSIX personality, Linux would never have taken off.

Most devs only want some kind of CLI and POSIX like capabilities.


> Most devs only want some kind of CLI and POSIX like capabilities.

which is why WSL is so great...


Indeed, and WSL instead of pure POSIX, because that allows them to tap into the ecosystem that thinks Linux === UNIX, without having to recompile anything.

An approach already taken by other UNIX clones with their Linux compatibility syscalls layer.


You forgot X11 and Motif. Without that Windows would be useless back in the day.


Not really, because there were plenty of Win32 X Servers.

Back in the day I was using Hummingbird.


You forgot about porting and compiling software which depended on X11 and Motif. Linux won because of that.


What for? That is what something like Hummingbird took care of.

I used to admin UNIX and develop for it from Windows NT/2000 workstations.

Also, the FOSS version of Motif only appeared when Motif was no longer that relevant and most enterprise shops were migrating to CORBA and the Web as integration points.


No, I mean when Motif + Slackware were, for a brief time, an alternative to SGI machines. Those were supported by big multimedia companies.


To give a different perspective: I find Apple's move to ARM the most exciting thing to happen in desktop computers in many years. I'm typing this from a PC running Win10. Current plan is: as soon as ARM desktop Macs become available (and assuming they don't screw it up in some weird way), I'd like to switch.


If it's ARM that excites you, Surface Pro X (Windows) and Pinebook (Linux) exist today. If it's macOS, you could switch now. What about the combo of macOS + ARM do you find compelling?


Windows app developers couldn't care less about ARM Windows. MacOS app developers will have to if they want to stay relevant.


Because of the network effects of software development. The number of people that use those devices you list is basically a rounding error, so they're ignored by most software development.

Apple has a monopoly on their hardware, and they will likely sell a significant number of devices. This will lead to a lot more development for ARM that never would have happened otherwise.

That, in turn, may tilt the balance in favor of ARM for a lot of other use cases outside of OSX, as other tools, applications, and hardware vendors better support ARM.


So, and I say this as someone who's stoked to see any non-x86 system going mainstream... why do you want to switch? What benefit do you see to switching to an ARM laptop? Or is it just "this is a good thing (in general) so I want to get on board with it"?


I'm also excited about a non-x86 architecture on the desktop again. Monoculture is bad, and I find x86 to be especially ugly...

As for switching... I'm increasingly unhappy with Microsoft's complete disregard of user privacy. Apple isn't perfect with that either, but IMO much better at least. For my use cases, Win and Mac are the only credible options due to software availability. So to get away from Windows.. there's not much choice these days.


Why do you feel like proprietary software monocultures are better than commoditized hardware monocultures? OS vendors selling 100% custom silicon is not the path to diversity and freedom of choice.


Having an x86 dev machine is useful because it matches most production environments pretty closely. This might be changing somewhat with AWS Graviton, but it's not the default yet.

What makes ARM so exciting? Maybe battery use will be better? Maybe it will be slightly faster? Maybe? There's also been a lot of tuning done for laptop workloads on x86, it's definitely a maybe. I expect the only noticeable changes for most users to be somewhat better battery life, some apps not working, and occasionally having to know which package to download.


I expect Lenovo will move a few extra units because of this decision.


When was the year of the macOS desktop?


It's not about stealing users from macOS. It's about stealing developers. Hell, Apple is at the mercy of Microsoft and Adobe right now. I'd bet they had to line their pockets very well, so that they don't get any funny ideas.

But Apple can't just pay up every cross-platform software developer. Smaller developers will have to re-evaluate whether macOS remains a viable target platform for them. Which can translate to a dev-gain for Linux. The catch is that Linux is in a much better position to translate an influx of developers to an influx of new users: Linux runs on what you have, while macOS requires you to buy Apple hardware.

And let's not forget about macOS as a gaming platform. Linux has made a huge leap forward with Steam Proton. On macOS there's still a ton of games not supporting x86_64 (Catalina), and situation won't get better by transition to ARM.


Hell, Apple is at the mercy of Microsoft and Adobe right now. I'd bet they had to line their pockets very well, so that they don't get any funny ideas.

I’m sure that's not the case. It's much more in Adobe’s interest to be ARM-ready on day 1; there are plenty of Photoshop alternatives in the Mac App Store—notice they demoed Affinity Photo running on the ARM Mac, and Affinity is much better at using Apple's native APIs and technologies than Adobe ever was. And it's just a one-time cost of $49.99 vs. renting Photoshop from Adobe. Users of Apple devices continue to be a large segment of their customer base.

Adobe has already migrated all of their core applications to a new codebase that should be relatively easy to bring to ARM Macs. Photoshop and several other of their apps already run on iPadOS, so it won't be that big a deal to move them over to Big Sur.


> ... while macOS requires you buy Apple hardware ... macOS as a gaming platform ...

So nothing is changing. ARM Mac isn’t going to change the Linux desktop/laptop story.


That may be true but it’s not accounting for the iPhone and iPad developers out there that will be able to easily target macOS after the switch.


macOS users actually pay those smaller developers; it would be foolish for them to expect anything from Linux users. It's hard, but it's the reality.

Hence why plenty just target Android, although some of their apps could easily target GNU/Linux as well (especially the ones that are mostly NDK glue + whatever framework).


Pretty much when the choice was between Vista and Tiger


1996


2012 ?


2001-2002 with OSX.


I thought that was WSL2 on Windows 10?


This is ridiculous. Your code will just have to support multiple architectures, which is very easy with modern languages and tooling.


I think you missed the point. What about all the dependencies of your code that are only compiled for x86_64? The article isn't talking about native apps on the laptop, it's about apps that run on a server but that you are developing locally.

You can't run your x86 docker image on your ARM mac without emulation. You can't run your x86 Windows VM without emulation etc.

Of course there are solutions like using a remote server or a VM in the cloud, but if you're buying a decent machine then you would normally expect to be able to run these things locally.


Do you have any examples of this? The last time I tried an AWS ARM server, it was literally no modification other than changing the server type — Linux has run on ARM for many years and Apple is far from the first company to use the platform.

For example, back in 2017 Cloudflare was basically looking at this as a question of which hardware ran most cost-effectively rather than having engineering heroics first: https://blog.cloudflare.com/arm-takes-wing/


Mono and .NET Core run terribly on ARM. I have several containerized C# apps that I need to run on x86_64 hosts, because on ARM they'll just crash randomly.


I want to say Oracle database clients as an example.

My company definitely had problems getting database drivers to work, generally speaking, on both 32-bit and 64-bit. Have a look at postgres, oracle, cassandra, redis, sybase, to name a few; I am not sure which one was worse, and it wasn't me doing the work. But I've seen the errors from some of the C and C++ dependencies that needed to compile, and it was horrendous.


Fair enough — I've used Oracle's products enough to know that software distribution and packaging is not a priority there.


I've been using postgres, cassandra, redis and mysql/mariadb on aarch64. Only ran into issues with MySQL, which we root-caused to some weird atomic locks not working as expected on the first generation of ARMv8(.0) a couple of years back.


That’s a good moment to make your C/C++ code more robust and cross-platform, like the languages they are


The problem wasn't in our code, it was in the database code that was either from open source or from a vendor.

If you want a sample, try to install the cassandra client library in python. It will pull in and compile all sorts of shit. That's supposed to be python and easily cross-platform.


Yes, so time for them to clean up their mess ;)


The comments here flow something like this: 1. complain that switching to ARM is a mess; 2. no it's not a mess, it's easy; 3. no, see, it's a mess; 4. fine, clean up the mess.

Yes, we should clean up the mess! It doesn't mean it's not a mess. And I think the mess is actually still understated.


I think it will come down to PRs accepted. So it's your mess in the end, as it depends on the company's strategy.


AWS has put out some very impressive ARM instances running on the Graviton2 processors. The reviews I read show better performance per dollar. So maybe the solution is to further embrace ARM and run your code on ARM servers?


This will always be a problem unless one emulates the x86_64 architecture, which again is the other problem with Docker. I assume the main libraries (i.e. dependencies) are already ported or will be, so theoretically recompilation should work for many, though not all, applications. Other applications will need to either change dependencies, or port dependencies to enable full performance.


Why not re-compile the dependencies so they run locally too? Am I missing some constraint?


I mean the post is technically right. You're probably better off skipping the first generation of ARM Macs until software support happens, unless you're someone who wants to work on that software support.


Hardly. You’re investing a great deal of effort in building a parallel set of images that probably will never see production.

The one use case when this might be viable is targeting AWS Graviton2. Does anybody know if you can run an emulated Graviton2 on ARM Mac?


Parallels on ARM macOS presents a virtual Snapdragon 835 SoC to the operating systems it runs.


There are very few packages compiled for ARM on PyPI, and there are more than a few packages on PyPI that are a pain to build from source.


Yeah, ARM is coming and docker will move to support it. We already have this: https://docs.docker.com/buildx/working-with-buildx/

Things like the pinebook pro (and hopefully more linux ARM devices) will keep pushing this further.
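For anyone who hasn't tried it, a buildx multi-arch build looks roughly like this (image name and registry are placeholders; assumes a recent Docker with buildx bundled, and the commands are guarded so they no-op without it):

```shell
# Skip the sketch entirely if docker isn't installed.
command -v docker >/dev/null 2>&1 || exit 0

# Create (or reuse) a builder that can target multiple platforms.
docker buildx create --use --name multiarch 2>/dev/null || true

# One command produces a multi-arch manifest for both architectures
# and pushes it; a plain `docker pull` then picks the right one.
docker buildx build --platform linux/amd64,linux/arm64 \
    -t registry.example.com/myapp:latest --push . || true
```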


Docker came to the ARM-based raspberry pi in 2016: https://www.raspberrypi.org/blog/docker-comes-to-raspberry-p...


I know of several in-production embedded devices running docker, and have worked on a few... It's already here and works great.


I remember the days when having to switch between x86, ppc, sparc, etc was a thing (not to mention the many flavors of UN*X) and we survived. In fact I think it was more fun back before the x86/Linux server domination. Architecture diversity is good.


I'm more convinced dropping dual boot and supporting virtualization is the right move.

Only the host OS is going to have the right drivers for the trackpad, wi-fi, GPU, power management, etc. etc. Through virtualization, the guest OS doesn't have to worry about constantly evolving hardware models.

Virtualized OS performance is already very good, and USB passthrough has existed for a while. Snapshots are a godsend.

What won't work are things like CUDA for eGPUs over Thunderbolt 3, and you'll have to share disk and RAM with the host OS.

But for most use cases it's probably the right choice. (This doesn't address the author's concern about moving away from x86.)


> Why can't you update the Docker image to also support ARM? You theoretically could switch your backend to run ARM Linux. However, this would take months - renting out ARM instances, re-building all repositories, and a tense switch over.

I don't see why this would be so hard. If anything, I expect to see a massive upswing in things like AWS Graviton2 uptake, and a lot of common Docker images being built with ARM versions out of the box. It might be about a year or so, but eventually we'll be able to just go ARM-native the whole way.

What Apple needs to do is make a first-class, WSL-tier implementation of Docker for Mac for ARM.


I'm kinda thankful to Apple for biting the bullet on this one. For whatever reason people will move mountains for Apple where other companies' products would just languish and die. The second order effects of ARM being something that's "safe" for people to use should be great!


The Honeycomb.io folks reported 40% more capacity per dollar on Graviton 2 over x86. That alone should motivate people to start looking into ARM backends.


>>> expect ... a lot of common Docker images being built with ARM versions out of the box.

This has no chance of happening. The common cloud CI systems do not support ARM at all (travis, circle CI and co). There is only a minority of developers with macbook and the rest is not going to spend $2000 to buy one just to build some docker images.


Travis has ARM in beta:

https://docs.travis-ci.com/user/multi-cpu-architectures

GitHub lists it as a feature now:

https://github.com/features/actions

I'd be very surprised if this didn't become more common given the high levels of interest people are showing towards ARM server offerings in the cloud space.


They will. Trust me. And you can buy a cluster of RPis to build your images for $2000


It's ludicrous to assume that anybody has $2000 to spend on fantasy hardware. That's a month of disposable income outside of the SV bubble.

Better question might be, how many of the most common open source projects are managed by volunteers in their spare time? These will not build for ARM unless there is a free tool doing it automatically for them. Currently GitHub + travis/circle can do it for x64 on every push to master.


You don't need a cluster of them, one $50 machine is enough for home use. Travis et. al. will be buying those 96-core Marvell ARM machines and plowing through builds.


Do they run their own metal? I thought they were on AWS.


Well, if they are, that makes things even easier -- Amazon has offered ARM instances since 2018 [1].

[1]: https://aws.amazon.com/blogs/aws/new-ec2-instances-a1-powere...


I think they have both. They offer POWER and LinuxONE as options and those aren't available in your average cloud provider. They probably have a sweet deal with IBM Cloud.

I'd love to have a POWER and an IBM LinuxONE in my shed, but that's not going to happen.

Maybe the POWER, but the Z most likely not.


> There is only a minority of developers with macbook

I travel the world meeting developers from multiple communities. I very rarely see one without a MacBook.


I don't think this is actually the big problem some people make it out to be. You can cross compile ARM binaries on x86. You can even run ARM binaries on x86 (with qemu.) Any CI system can easily call scripts to do this.
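A quick sanity check of the qemu route (this sketch assumes qemu-user and a Go toolchain are installed, and skips itself otherwise): build an aarch64 binary on an x86 box, then run it in place under user-mode emulation.

```shell
# Skip the demo if either tool is missing.
command -v qemu-aarch64 >/dev/null 2>&1 || exit 0
command -v go >/dev/null 2>&1 || exit 0

# Build a static aarch64 binary (pure Go, so no foreign libc needed)...
printf 'package main\n\nfunc main() { println("hi from arm64") }\n' > q.go
GOOS=linux GOARCH=arm64 go build -o q-arm64 q.go

# ...and run it on the x86 host via user-mode emulation.
qemu-aarch64 ./q-arm64
```

With binfmt_misc registration, the `qemu-aarch64` prefix isn't even needed; the kernel dispatches foreign binaries to qemu transparently, which is what lets CI systems run ARM containers on x86 hosts.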


Didn't they show off a native arm docker image running in the keynote?


I think it was running in Parallels


For my most recent project[1], I wanted to see if Amazon’s Graviton instances would be a good choice for my docker deployments (I was deploying MongoDB, an Express server, and several instances of the Janus WebRTC server). I was developing in Pop OS on an x86_64 desktop (since we’re gonna have to start specifying now) and found the toolchain around building ARM64 images to be pretty simple once I got it set up.

I benchmarked some `t2a.nano`s against some `a1.medium`s and found that the `nano`s were sufficient for my needs, so I went with them (they are cheaper than `a1.medium`s, even if the `a1`s have a better price-to-performance ratio).

I didn’t find it too difficult to rebuild any of these projects for cross-architecture usage. Even Janus, which has a TON of C/C++ dependencies (some of which have to be compiled from a particular version of the source) easily built for ARM with no change in the Dockerfile.

So I kind of feel like OP is exaggerating the effort required to migrate servers to ARM. Sure it might be a hassle when you have tons of microservices, but you can move them incrementally, and most things recompile with no changes. And regardless of what architecture your dev machine is, you’ll want to be able to compile for and work with both architectures if you want to get the most out of the infrastructure on offer in 2020.

[1] Shameless plug: https://chrisuehlinger.com/blog/2020/06/16/unshattering-the-...


Cool, thanks for sharing. It's these kind of experiences that I was hoping to gather from making this post.

Did you notice at the end that you did NOT end up choosing ARM? You ended up going with x86_64 because that's what made more sense for your backend. That's part of my point - developers should choose their backend architecture based on the performance and pricing of their backend, not their development laptop. And if that decision is "we should keep using x86", then there will be a big performance hit in development.


Back in the UNIX glory days, I was responsible for keeping a software stack running across Windows NT (later 2000), Aix, HP-UX, Solaris, each with its own CPU architecture.

This is just another CPU story, no big deal.


As a Linux tech, I welcome this Apple move honestly.

Having a proper competitor for x86/x64 is a good thing.

The fact that docker is slower on ARM (at the moment!) is mostly due to a lack of interest in optimization.

With Apple starting to produce ARM Macs, and maybe more ARM servers in the wild, docker (and other platforms/frameworks) will start to get more performant on ARM as well.


Do we know that Boot Camp isn’t supported by Big Sur? Or is it just that one can’t run an x86 OS on an ARM architecture? In other words - will I still be able to dual-boot into something like ARM-flavored Linux?


  one can’t run an x86 OS on an ARM architecture
This is the limitation. There is an ARM version of Windows, but the comments from Microsoft don't sound terribly promising:

  “Microsoft only licenses Windows 10 on ARM to OEMs. We have nothing further to share at this time.” [1]
And Apple has more firmly stated that this won't be an option:

  “We’re not direct booting an alternate operating system,” says Craig Federighi, Apple’s senior vice president of software engineering. “Purely virtualization is the route. These hypervisors can be very efficient, so the need to direct boot shouldn’t really be the concern.” [1]
1: https://www.theverge.com/2020/6/24/21302213/apple-silicon-ma...


And the iPhone didn’t permit native third-party apps when launched. Not because they weren’t ready to announce it yet, but because at launch time they figured web apps would be enough.

I have no idea what plans they might have but I would be surprised if you couldn’t install some ARM Linux distros on their laptops sometime next year.


Boot Camp support will remain for Intel machines. Current understanding is that ARM machines will have no Boot Camp capability at all.


If you stay back on x86 virtualization will be slow, but if you jump to ARM this is great!

> However, this would take months

(infomercial arms)

I'm sure it won't make sense for everyone, but I'm just as sure it will make sense for many.


On the flip side, I guess ARM Macs will now allow the use of hypervisors for Android emulators, instead of full software emulation.

... thus making Android development better on Macs?


Android Emulator developer here. In addition to what the other comments said about Android emulation with hypervisors already existing for x86, we're also looking into the Hypervisor.framework API for Apple silicon.

It won't be a trivial task (hoping for pre-existing code to port over maybe?) but we have the other pieces like using Hypervisor.framework for x86 already, and being able to cross compile the other code for arm64, so that would be the only major task left.

On the subject of better GPU support, it depends on what it's actually like using the drivers, but from previous experience with the GPUs and drivers shipped with macOS, there shouldn't be any special kind of trouble at least. We may have to use Metal if Apple also gets rid of OpenGL support on those new machines, but there are also existing translators from GLES and Vulkan to Metal. The graphics hardware itself is actually the least of our worries due to how consistent the hardware is likely to be---we'd have to deal with a much smaller set of hw/driver quirks versus other host OS platforms.


Hello,

OpenGL is deprecated but still supported on Apple Silicon, even for arm64 apps.


Nice, that's good news.


Android emulators on desktop generally run amd64 images of Android using existing virtualisation hardware and software.

At best, you can say that you can run ARM-only games at native speed now, but as a developer you won't really notice much difference (assuming the processors aren't slower than Intel's).


Android already had x86 builds and that was the preferred way to run the emulator, so it's most likely a neutral change (once they get hardware virtualization for ARM working). The GPU aspect of things might improve, maybe?


> ec2 only offers 6 general-purpose ARM instance sizes

m6g, c6g, r6g each support 6 sizes for a total of 24


Disclosure: I work at AWS building cloud infrastructure

C6g, M6g, and R6g (powered by AWS Graviton2) each support 8 sizes, along with bare metal. A1 instances (powered by AWS Graviton) have 5 sizes, along with bare metal.

That's a total of 33 distinct instance sizes.


Thanks, I didn't know about c6g, m6g, r6g. I've updated the post to remove this mention - I only counted a1 instances.

That still leaves storage optimized and GPU optimized instances missing. I'm guessing storage should be easy enough to add, but what about GPU?

From my novice experience with GPUs, they need finicky drivers that must be ported by the GPU manufacturers, so I figure it might take a while to get competitive ARM GPU instances.


Configurations of these instances with NVMe local storage are coming.

NVIDIA is supporting Arm for CUDA development, see https://nvidianews.nvidia.com/news/nvidia-brings-cuda-to-arm... and https://blogs.nvidia.com/blog/2019/11/18/ngc-containers-arm/


This seemed like the weakest argument in the article; as arm becomes more popular it will get more support.


If you hadn't sold yourself out of the free market, you would be able to choose what architecture machine you bought.


A free market wouldn't have saved you because the market has every incentive to gravitate to a single architecture because vendors and customers want the best software compatibility. The more popular an architecture (or really any platform) gets the more software that's written for it until it starves out competitors because they can't run the software their customers want.


> Docker on a Mac utilizes a hypervisor. Hypervisors rely on running the same architecture on the host as the guest, and are about 1x - 2x as slow as running natively.

That doesn't sound right to me. Perhaps on IO-bound tasks, if you are using emulated devices. On CPU-bound tasks you should see near-native performance.


There's another unintended consequence of this virtualization - docker already has very high CPU usage on my macbook, anywhere from 50-100%. Because of that it is always hot and toasty. This has already caused my screen to start deteriorating (https://www.ifixit.com/Answers/View/567125/Horizontal+line+o...) and the battery has degraded considerably too, even when I'm not coding on it and docker is shut down. This means a significant hit to the longevity of such devices, as they are not meant to be pushed so hard 40 hours a week. With ARM macs I see it getting even worse.


On the bright side, it looks like low-latency streaming is good enough that as long as you have internet connection, Boot Camp is not really necessary. This works well for me: https://shadow.tech/usen/


A lot of the developer-centric discussion has focused on how Docker would work (hint: it does), but VirtualBox is still pretty common in the sysadmin world and other industry circles, and there seems to be no way it will ever work. It will be interesting to see how that turns out.


Author here. That's a major point of the article - "are we screwed?" I'm not an expert on virtualization but I wanted to see some discussion on this topic, because it feels like we might be screwed and nobody is talking about it. Anyway, I was happy to see Docker worked, at least on a basic level.


Cool. Thanks for writing it. It summarizes and collects a lot of issues we were all grumbling about here and there. The main hurdles for Docker are organizational, not technical. However, the other issues you bring up are going to be more technical (same as you, though, I'm not a hypervisor expert), and/or we're going to be at the mercy of big vendors like Apple, Oracle, and Microsoft. Those are much harder problems to overcome.


Any company with a load of binaries built without any effort towards supporting cross platform builds that uses Docker is gonna have a bad day with this. They buy a bunch of new MacBooks and then find they can't use them until they spend a few weeks porting everything.


I suppose that depends on what they’re written in. Some languages (e.g. Go) make this far simpler than others.
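To make the contrast concrete (a sketch only; the output name and package path are made up): Go's standard toolchain cross-compiles by flipping two environment variables, while a typical C/C++ project needs a separate cross-toolchain installed and wired into its build system:

```shell
# Go: cross-compile from any machine to linux/arm64 out of the box.
GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o svc-arm64 ./cmd/svc

# C/C++: install an aarch64 cross-compiler and point the build at it
# (Debian/Ubuntu package name shown; autotools-style project assumed).
sudo apt-get install gcc-aarch64-linux-gnu
./configure --host=aarch64-linux-gnu && make
```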


My guess is that Apple will end up copying Microsoft and providing a WSL style Linux kernel "shim" into Darwin (pretty easy as it's already UNIX) and use Rosetta2 to translate any x86_64 containers to aarch64. No need for a hypervisor.


I wonder if Apple will still allow a dual boot with a native (arm64 in this case) Linux


There's no official support for ARM-based Macs: https://news.ycombinator.com/item?id=23640746


Well, given the roadblocks they put up for Linux on their recent x86-64 MacBooks, it's very unlikely.


Short answer: no.

Slightly longer (but not much) answer: https://www.youtube.com/watch?v=Hg9F1Qjv3iU&feature=youtu.be...


Once again -- Apple will do what the entire industry without Apple couldn't do. In this case, force a migration to ARM-based servers, so that prod is running on the same architecture as the developer's machine.

Apple is finally killing x86.


Hooray, we have successfully fixed the mistake with an open platform and will now be relegated to incompatible hardware without any competition.

At last, the future will surely be bright!


How was x86 any more "open" of a platform? If anything, x86 is a far more "closed" platform, as there are only two remaining manufacturers of x86 parts, and there is no licensing process to join them. Meanwhile, there are hundreds of ARM licensees, and the process for becoming a licensee is all documented online [1].

[1]: https://www.arm.com/why-arm/how-licensing-works


How? It is absolutely inconceivable how much more open it is.

Also, the CPU is but a minor part of the puzzle. But still, that is twice as many manufacturers as Apple (good luck exchanging that Apple ARM chip for any other brand).

Please let me know how open you think the next apple ARM platform is when you try to boot any OS not written by apple.

Please compare that with a computer built from AMD/Intel with a motherboard out of dozens of manufacturers etc. Any ATX power supply etc. Pretty much any PCI-E graphics card etc.


You are confusing the x86 CPU architecture (which is closed) with the PC platform (which is relatively open).

> Also, the CPU is but a minor part of the puzzle. But still that is still twice as many as apple (good luck exchanging that apple-arm with any other brand).

Even on x86, interchangeable CPUs are the exception, not the rule. Intel and AMD CPUs haven't even used the same socket since the 1990s, and even within those manufacturers, socket incompatibilities are common.

Software interchangeability is more of an operating systems issue than an architectural one. With appropriate software shims, though, there is no reason to suspect that (for example) Linux ARM software could be run on an Apple ARM CPU. In fact, it's quite likely that tools like the Android emulator will do exactly that.

> Please compare that with a computer built from AMD/Intel with a motherboard out of dozens of manufacturers etc. Any ATX power supply etc. Pretty much any PCI-E graphics card etc.

Server-class ARM hardware generally does use similar parts as x86 servers, including power supplies and PCIe peripherals.


I am not. But also, how many x86 processors are sold outside of the PC platform? They have a symbiotic relationship. The foundation of everything we have today.

> Software interchangeability is more of an operating systems issue than an architectural one.

Not if the architecture is designed around keeping others out. But just not telling anyone how to do it is enough in 99% of cases. Some hacker might post a buggy proof-of-concept for an obsolete device that no one will run.

> Server-class ARM hardware generally does use similar parts as x86 servers, including power supplies and PCIe peripherals.

Wanna place a bet on what apple is going to do?


It's more open in terms of what software you can run. And if you cared about hardware being open, RISC-V is where you'd have to go. ARM certainly isn't open hardware.


x86 might be a platform with few vendors. ARM isn't even a platform. Most SoCs are meant to run a single OS and that's about it. Not exactly what I'd call a platform because platforms let their users build on top of them. That includes running whatever OS you want to run on that processor.

What you might be worried about is a duopoly which has nothing to do with whether something is a software platform or not. For example Microsoft has a monopoly on Windows. That doesn't stop Windows from being a platform for which you can write arbitrary software. Apple has a monopoly on iOS but it's not possible for users to write their own software, they have to join a developer program that can always exclude them. This is what one would call a closed platform. ARM is closer to the iOS model than to Windows.


Precisely. And the fact that you can run ARMv8 on both a Raspberry Pi and the Fujitsu Fugaku, which is now No. 1 on the Top500, says something about what is possible as a result.


Actually the original mistake was IBM's to make.


Yes, that was my point, we are still "suffering" from it.


There isn't really any advantage to ARM compared to modern x86. Closed ARM is IMO worse than x86. Now, if someone made the x86 license more open that would be cool, but so far it's all downsides at least for me.


> Closed ARM?

Is there any suggestion that the architecture that Apple is using is any different to what is being used by lots and lots of other licensees? If not then it's much more open than x86.

If you mean that you can't buy an ARM CPU today to plug into your own motherboard then understood but that's probably now a matter of time. At least making such a CPU is possible - no-one is going to make x86 more open.


x86 is already relatively open: there are more than four companies developing x86 chips, though for now those outside of Intel and AMD are mostly focused on very low power parts.

There is no indication that I'll ever be able to buy an ARM CPU with the feature set I choose, plug it into a motherboard, and attach peripherals that follow a standard, open interface with good performance. The only companies that make ARM CPUs that could at all be useful in such an open platform don't make CPUs fast enough. I don't think Apple or Amazon will ever sell me a socketable CPU with high-speed PCI support and XMP memory support.


ARM has 250 licensees for application processors with 8 licensees (including AMD and Nvidia) who can develop their own architectures around the 64 bit ARM instruction set.

You can buy socketed ARM CPUs today that are likely to be more than fast enough - from Marvell for example.

You're 100 % right that there isn't an ARM ecosystem at the moment in the way that there is a PC ecosystem with all its flexibility.

But that's not a feature of the ARM architecture or what ARM as a company does - it's because x86 has historically dominated the desktop.


There are only about two ARM licensees that can even come close to desktop-tier performance. The instruction set doesn't matter if only one company can make a fast enough CPU, using a massive number of proprietary changes to the original architecture.

When we'll have three ARM companies making socketed CPUs with standardized I/O between them that are faster or very close to AMD or Intel, then it will be good.


Preferably AMD just makes an ARM based chip. They have done so in the past.


If it takes a dictator to force a platform upon everyone is it really that great of a platform?


But Linux can be run on ARM natively. Moreover, most packages are also compiled for ARM, so apt-get install will work just the same. I'm sure they will be able to target Apple's specific ARM chips when they come out.
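For what it's worth, on an arm64 Debian/Ubuntu install the package workflow really is the same set of commands; only the reported architecture changes (package names here are just examples):

```shell
# Reports arm64 on an ARM machine, amd64 on x86 - nothing else differs.
dpkg --print-architecture

# Identical invocations on either architecture; apt fetches arm64 binaries
# from the arm64 package archives automatically.
sudo apt-get update
sudo apt-get install nginx redis-server
```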


Apple released a list of open source projects that they have ported to ARM; they intend to upstream patches to each of these projects:

  - Bgfx
  - Blender
  - Boost
  - Skia
  - Zlib-Ng
  - Chromium
  - cmake
  - Electron
  - FFmpeg
  - Halide
  - Swift Shader
  - Homebrew
  - MacPorts
  - Mono
  - nginx
  - map
  - Node
  - OpenCV
  - OpenEXR
  - OpenJDK
  - SSE2Neon
  - Pixar USD
  - Qt
  - Python 3
  - Redis
  - Cineform CFHD
  - NumPy
  - Go
  - V8


Great to see Homebrew and MacPorts on that list. That's a huge signal that Apple plans on supporting developer / tinkerer use cases.


Apple engineers have to install packages too :)


I feel like the ARM transition is why macOS hasn't progressed much (IMHO) in the past 2 or 3 years. It was a common assumption around here that they were diverting all their top resources to iOS/iPadOS development.

The reality seems to be that their top macOS developers have been busy laying groundwork for the ARM transition. There's so much to be done.


Most of those were already ported into ARM Linux/BSD several years ago.


I'm curious to see the Google projects that have patches from Apple, since all of those run on Android which is for all intents and purposes ARM.

I'm guessing new ARMv8 ISA features, PAC/BTI/MTE?


Yes, ARM Macs will make full use of ARM security and hardware mitigations against typical C exploits; there are a couple of WWDC talks about it.


Interesting - would you mind sharing a link?


Has anyone seen those patches start to land? Just curious what the timelines on those are.


It’s interesting that Electron is on that list.


It's also interesting that this won't fix old Electron applications. The idea of easy cross platform development via Electron is a myth, because most developers won't support your platform even if all that support requires is checking a box. When you consider that this is the primary justification for using Electron over other stacks, it just makes your blood boil. All the downsides with none of the benefits.


Crucial piece of tech for many products like VSCode, Slack, Discord, etc.


Given how much Microsoft's React Native team bashes Electron with their performance bar charts (300x more bloat than RN), I look forward to them replacing Electron with RN in VSCode as soon as it is mature across Linux, macOS and Windows.


Makes sense; port Electron and you get a bunch of apps for free.

