- I already do cross-arch development day-in and day out between x86 and ARM, and have only run into hard blockers on a library or tool a handful of times. The solve was generally pretty straightforward to either use an ARM-compatible alternative, or to cross-compile it myself.
- We've done this many, many times before and it's not that bad. I know I'm not the only one old enough here to remember the days of having heterogeneous fleets across PPC, SPARC, and x86. Or even more recently - different extensions for x86 with different chipset manufacturers.
In short: I feel Apple’s consumer-oriented direction is starting to be at odds with what they need to do in order to remain a compelling general development platform.
Remember that macOS became a favourite for web-application development only around 12-13 years ago (prior to that it was seen as an OS for creative types) - because Apple was selling nice hardware with an equally nice Unix-family OS and a compelling desktop experience. Take a look at typical Linux desktop distros from around the same time: visual eyesores, and incompatible with most laptops thanks to OEM driver issues. Apple wasn’t specifically targeting software developers at all - they were even showing ominous signs of disinterest by discontinuing their X Window server and going back on their promise of establishing Java as a pillar of the OS.
With the move to ARM on laptops I think Apple will just lock-down the bootloader and won’t look back.
What’s funny now is that Windows 10’s WSL, Windows Terminal, Docker support, etc. are suddenly making Microsoft look good as an OS for writing code for non-Microsoft platforms. And at least with a Windows laptop - even ARM Windows laptops - you can tinker with the bootloader and fire up Slackware if you really wanted to.
Interestingly enough - for personal hacks (mostly cross-compiling Golang to ARM, natch) I'm actually using WSL lately, and it's definitely good enough. Not perfect, but nothing much is.
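For Go this is mostly a matter of setting two environment variables before building; a minimal sketch (the output path and package name are hypothetical examples, and the actual build line is commented out since it needs a Go project to point at):

```shell
# Cross-compile a Go program for 64-bit ARM Linux from any host.
export GOOS=linux
export GOARCH=arm64
# go build -o bin/app-linux-arm64 ./cmd/app   # would emit an aarch64 ELF binary
echo "would build for $GOOS/$GOARCH"
```

Because the Go toolchain ships its own cross-compilation support, no separate cross-compiler or sysroot is needed, which is a big part of why this workflow is so painless.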
Has anyone ported Plan9 to ARM yet?
.386.iso 386 pc
.amd64.iso amd64 pc
.pi.img arm raspberry pi 1, 2 and 3
.pi3.img arm64 raspberry pi 3 and 4
> Platforms: x86 / Vx32, x86-64, MIPS, DEC Alpha, SPARC, PowerPC, ARM
Microsoft is lucky, more than anything else (Azure is a very nice platform now, but it was very feature-anaemic in comparison until about 5 years ago)
AWS is king, so anyone with any reason to not use Amazon will automatically use the next-biggest/next-best Cloud provider - and that’s Microsoft. Which is odd: someone waking up from a 10-year coma would presume it would be Oracle or IBM, or from a major vendor of webhosting or VPS. Oracle was slow to get in, and IBM cheated by just buying Softlayer and then spectacularly cocking it up: https://techcrunch.com/2020/06/09/ibm-cloud-suffers-prolonge...
In fact, let's bring back Ultrix, OSF/1, DG/UX, and Solaris! (... we can skip HP-UX and SCO because they're truly awful). Note that OpenVMS has already apparently made its x86_64 comeback!
Forgive my ignorance - but what ISA extensions are in Apple’s ARM chips for SIMD? Intel poured a lot of effort into SSE and AVX - does Apple have an answer there?
Performance sensitive code that relies on alignment guarantees and other platform details will not work and need to be updated, yes. IIRC Apple's chips do NEON for SIMD, not sure if they support SVE yet. (But I figure they will have to once it becomes a required part of the ARM standard…)
It's true it will cause some pain in the first year or two, but even as a heavy VMWare Fusion user I am really looking forward to the benefits of a vertically integrated laptop.
Xhyve and HyperKit (used by Docker for Mac) use Hypervisor.framework exclusively. The last time I tried Hypervisor.framework on x86-64, CPU performance was quite fine (it matched VMware/VirtualBox), but I/O was pretty abysmal. Emulating x86-64 on ARM is probably going to be the role of something similar to QEMU.
> In Linux, Docker uses kernel features. It does not use any hypervisor.
> In Windows, Docker desktop uses Hyper-V.
> In Mac, Docker desktop uses Apple Hypervisor.framework.
The very first hypervisors worked using dynamic binary translation. They would run a "guest" operating system by executing a stream of native instructions directly on the host CPU; this stream was dynamically translated to remove privileged operations and trap them in software so the hypervisor could handle them. Modern hypervisors take advantage of hardware features that let you trap on privileged operations more efficiently. ARM started adding some of these features in 2013; Intel, in contrast, first added them to the Pentium 4 in 2005. When such hardware features were first released, they actually were not faster than software translation. These days the hardware-based options are faster, and there is even hardware support for running nested hypervisors. So the first question we need to ask is how hypervisors built on ARM's hardware features stack up against Intel's. I have no doubt that parity at a minimum will be reached; I just don't know what the current state of play is. As indicated in my original comment, if I had to bet, at release we won't quite have the performance or feature set you're used to with a product like VMWare Fusion.
The second question we need to ask is whether there is a way to efficiently emulate x86-64 processors on ARM hosts - even better if you can do so while taking advantage of the supporting infrastructure hypervisors already have in terms of emulated devices and other features. QEMU just gets you the CPU and a short list of devices. The full experience of a seamlessly virtualized guest requires a lot more than that. But at the core you are right that it is going to require QEMU-TCG, Rosetta 2, or some similar technology, because the silicon just is not there to execute x86-64.
Exciting stuff! We'll see where it all lands.
Whether you can stick an emulated x86-64 CPU in there is another matter. It's a much bigger engineering lift and unless Apple puts some resources into it it's not clear to me a virtualization company by themselves would want to incur the cost. I hope there is enough demand for it and that someone will provide it. For me personally the only reason I run VMWare Fusion is to access x86-only Windows applications for which there is no replacement.
It will all come down to whether this move gives Apple a significant performance and/or battery life advantage. If Apple pulls it off it will force Microsoft and other vendors to respond.
This is critical especially when they make their older devices "EOL" and there are no more OS updates.
I would not personally buy a Mac product for the next 2-3 years.
Project is part of Office.
In any case I thought we were talking about consumer devices here.
I know that a big complaint about the move is "great, now I'm doing ARM locally and deploying to x86". I think this is a legitimate concern, for now, but I also strongly believe it is inevitable that, within the next decade, deploying to x86 in the Cloud will be as "weird" as ARM would be today. The benefits are way too numerous.
Well, more accurately, I think it'll be a "I'm on Fargate, oh wow, Fargate runs on ARM, I had no idea" kind of thing. Ok, the article outlines why you may need some idea, but come on; we're talking about one line where I'm downloading the x86 version of a dependency instead of an ARM version. That's an easy fix.
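The usual shape of that one-line fix is to key the download on the machine's architecture instead of hard-coding the amd64 artifact; a sketch (the artifact names are hypothetical):

```shell
# Pick the right release artifact for the current architecture instead of
# hard-coding the x86 one. "sometool" and the filenames are made-up examples.
ARCH="$(uname -m)"
case "$ARCH" in
  x86_64)        PKG="sometool-linux-amd64.tar.gz" ;;
  aarch64|arm64) PKG="sometool-linux-arm64.tar.gz" ;;
  *) echo "unsupported arch: $ARCH" >&2; exit 1 ;;
esac
echo "$PKG"
```

On an Intel machine this selects the amd64 tarball; on an ARM machine, the arm64 one - the same script works in both worlds.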
I don't know what this means for open accessibility of hardware. Right now, I could go buy and run locally the Intel Xeon chip powering my app in the cloud; when things move to ARM, it absolutely will be "AWS Graviton" (not sold outside AWS) or "Azure ARM Whatever" (not sold outside Azure). This sucks for accessibility, but, actually, does it? ARM enables the cloud providers to do this; they could never design their own x86 chips. As long as we're all standardized on the same ISA, and the chips generally have the same characteristics, I'm looking forward to a very bright future where vendors are now also competing against one another in the silicon. And I may not be able to buy an AWS Graviton, but I'm sure (well, hopeful) that one day I'll be able to build an ARM desktop that isn't a Raspberry Pi. AWS will have their chips, Qualcomm has theirs, Apple has theirs, Microsoft and Google have some, and they're all competing against one another.
Ok, maybe this is a pipe dream. But I'm definitely in the short-Intel camp, at least in the long term.
No one does now, and it's not obvious today who would. But if the demand is there, then even with lots of obstacles to overcome, of course they can and will.
It would be better to measure something more closely related to what Docker users will actually do, like the build time of a common container, and/or the latency of HTTP requests to native vs. emulated containers running on the same machine.
One reason to feel positive about the virtualization issues is that Rosetta 2 provides x86->ARM translation for JITs, which an ARM-based QEMU could perhaps integrate into its own binary translation.
I'm glad somebody said something! Yes, the gzip perf test is pretty silly, but it illustrates a significant difference. /dev/urandom throughput on this setup was about 100 MB/s, so it wasn't a bottleneck for this test - the bottleneck was gzip.
Feel free to come up with a performance test yourself! I personally want to know what an HTTP test would look like. You can run an ARM image by running:
docker run -it arm64v8/ubuntu
You can probably use qemu-user-static to translate x86-64-only binaries in a Linux container on an ARM machine, too, but I have never tried.
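On Linux the moving part behind this is binfmt_misc: once a qemu-user-static handler is registered, the kernel transparently hands foreign-arch binaries to the emulator. A quick read-only check of whether that's set up on a given host (safe to run anywhere; it just reports what it finds):

```shell
# Count any registered qemu-* binfmt handlers. On hosts without binfmt_misc
# (e.g. macOS) this reports "absent" instead.
BINFMT_DIR=/proc/sys/fs/binfmt_misc
if [ -d "$BINFMT_DIR" ]; then
  QEMU_HANDLERS=$(ls "$BINFMT_DIR" | grep -c '^qemu-' || true)
else
  QEMU_HANDLERS=absent
fi
echo "qemu binfmt handlers: $QEMU_HANDLERS"
```

If the count is zero, a common (but privileged) way to register the handlers is via the multiarch/qemu-user-static image; I'll leave that as an exercise since it modifies kernel state.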
root 9934 103 0.0 125444 6664 pts/0 Rl+ 12:25 0:12 /usr/bin/qemu-aarch64-static /usr/bin/gzip
Ryzen 3900X (host machine)
$ dd if=/dev/urandom bs=4k count=10k | gzip >/dev/null
10240+0 records in
10240+0 records out
41943040 bytes (42 MB, 40 MiB) copied, 1.02284 s, 41.0 MB/s
$ dd if=/dev/urandom bs=4k count=10k | proot -R /tmp/aarch64-alpine -q qemu-aarch64-static sh -c 'gzip >/dev/null'
10240+0 records in
10240+0 records out
41943040 bytes (42 MB, 40 MiB) copied, 3.33964 s, 12.6 MB/s
> Emulators can run a different architecture between the host and the guest, but simulate the guest operating system at about 5x-10x slowdown.
I think this is a misleading statement because it implies that there is a constant performance overhead associated with CPU emulation. In reality, the performance relies heavily on the workload, more so with JIT-ed emulators.
Regarding this specific benchmark, I think there are two main factors contributing to the poor performance. The first factor is that the benchmark completes in a short period of time. With JITs, performance tends to improve for long running processes because JITs can cache translation results allowing you to amortize the translation overhead. Another factor is that your benchmark is especially heavy on I/O, meaning that it spends a lot of time translating syscalls instead of running native instructions.
I'd also like to add that CPU emulators sans syscall translation should work for any binaries, even those targeted for Linux. It would require a copy of the Linux kernel, but Docker won't work without it anyways.
If changing the base image is all that's needed and both Dockerfiles otherwise assume ubuntu, this should not take too long.
Why did you have to switch from Alpine to Debian? Alpine supports ARM quite happily, and it looks like they're shipping Docker images for ARM (and other architectures, too).
How so? If anything, apk is way nicer than apt in a container build script (or anything automated); with apt you have to use -y and maybe force the noninteractive frontend, where `apk add foo` just works, correctly, automatically, with no effort required.
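For comparison, here's the same non-interactive "install curl" step in the two ecosystems (illustrative Dockerfile fragments, not run here):

```dockerfile
# Alpine: non-interactive by default, no package cache left behind
RUN apk add --no-cache curl

# Debian/Ubuntu: needs explicit flags to behave the same way
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*
```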
> Also much easier to set locale.
> considering potentially slower performance:
It's slower at installing Python packages from PyPI since it can't use prebuilt binary wheels (musl vs. glibc). That's not the same thing as "it's slow".
> Since minimal ubuntu & debian exist, I think the question should be: "Why would you use alpine?"
Because minimal ubuntu is still ~3 times the size of alpine, alpine is much smaller and simpler, alpine defaults to staying small (even if you remember to --no-install-recommends, deb packages are bigger and less modular), and I don't have to remember how to force apt to run in "no really install without asking questions" mode.
Debian in Docker, in comparison, offers fewer surprises, but you have to consistently do the right incantations.
Regarding missing binary wheels on ARM: with more ARM laptops in the wild those would eventually become more common.
1. If emulating aarch64 (arm64) on x86_64 is 6x slower (on your system, btw, it's not a universal constant), it doesn't mean emulating x86_64 on aarch64 will be 6x slower. It'd probably be worse, or at least that's my gut feeling.
2. Generic container images like the Ubuntu mentioned usually have aarch64 (arm64) support, so running the x86_64 image makes no sense for the presented use-case.
3. You won't be able to use most software because they don't release ARM binaries ... and the example uses `wget` && `tar xf`, with no binary signature check. As someone who has been porting stuff from x86_64 to aarch64 for a couple of years, I admit I've seen this pattern frequently. The most obvious solution is to build from sources, which would have been better off on x86_64 too, instead of fetching a prebuilt (and unverified) binary from the internet. Maybe there are some CPU flags the compiler could notice and apply optimizations which are not included in the prebuilt binary.
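Whatever the architecture, the wget-and-tar pattern at least deserves an integrity check before extraction. A self-contained sketch, with a throwaway file standing in for the downloaded tarball (with a real release you'd compare against the checksum published by the project, not one you generated yourself):

```shell
# Create a stand-in for a downloaded release tarball, record its checksum,
# and verify it before "extracting".
printf 'pretend tarball bytes' > /tmp/tool-arm64.tar.gz
sha256sum /tmp/tool-arm64.tar.gz > /tmp/tool-arm64.tar.gz.sha256

if sha256sum -c /tmp/tool-arm64.tar.gz.sha256 >/dev/null 2>&1; then
  echo "checksum ok, safe to tar xf"
else
  echo "checksum mismatch, refusing to extract" >&2
fi
```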
I'm not an Apple fan and I'm certainly not a fan of cross-architecture development either. I do agree with the general idea behind the article, however I find it a bit hand wavy.
I think the argument here is you can't build your own docker images that you use in production and run them on your mac without emulation (unless your production workload also runs on ARM).
> 1. If emulating aarch64 (arm64) on x86_64 is 6x slower (on your system, btw, it's not a universal constant), it doesn't mean emulating x86_64 on aarch64 will be 6x slower. It'd probably be worse, or at least that's my gut feeling.
Yup, performance benchmarks are inherently flawed and nobody knows anything right now without the hardware. However if ARM -> x86 emulation is anything like x86 -> ARM emulation, I would expect a really big performance loss.
> 2. Generic container images like the Ubuntu mentioned usually have aarch64 (arm64) support, so running the x86_64 image makes no sense for the presented use-case.
Ah actually I address this in the article, and even run an arm64 image. The short version is, it would be a lot of work to convert your whole backend infrastructure to ARM just because you got a new laptop.
> 3. You won't be able to use most software because they don't release ARM binaries ... and the example uses `wget` && `tar xf`, with no binary signature check. As someone who has been porting stuff from x86_64 to aarch64 for a couple of years, I admit I've seen this pattern frequently. The most obvious solution is to build from sources, which would have been better off on x86_64 too, instead of fetching a prebuilt (and unverified) binary from the internet. Maybe there are some CPU flags the compiler could notice and apply optimizations which are not included in the prebuilt binary.
Yes, if only everything were built from source! I'm not saying there's no solution, just that the solution would be a lot of work. If the library is obscure enough and the errors are strange enough, it might be so much work as to be impossible to the busy web developer.
My goal was to write a kind of hand-wavy article to get people talking about this problem.
someuser@some-aarch64-machine:~$ docker run arm64v8/ubuntu bash -c 'dd if=/dev/urandom bs=4k count=10k | gzip > /dev/null'
10240+0 records in
10240+0 records out
41943040 bytes (42 MB, 40 MiB) copied, 2.18298 s, 19.2 MB/s
someuser@some-aarch64-machine:~$ docker run amd64/ubuntu bash -c 'dd if=/dev/urandom bs=4k count=10k | gzip > /dev/null'
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
10240+0 records in
10240+0 records out
41943040 bytes (42 MB, 40 MiB) copied, 6.72324 s, 6.2 MB/s
A 3x slowdown is not as bad as 6x, but it's still quite a bit. I also saw a slowdown of ~4x when I tried this experiment emulating ARM on a native x86_64 Linux box - perhaps the Mac -> Linux virtualization slowed it down further.
5x may have been a bit alarmist, but regardless we should brace ourselves for a big performance hit on x86_64 virtualization.
Even if you are talking about doing ARM Cortex-A series, you aren't going to be using the same libraries on the embedded device that you use on a Mac. You'd most likely be using either Linux (a la Raspberry Pi) or an RTOS; either way you have a different compiler and stdlib to use.
Most tools are adopting Linux remote build + remote debug, wherein you ssh in and hook into the compiler and debugger all from the comfort of CLion/VS2019/VSCode.
If they don't have remote build, there is often building locally, with a copy of the root filesystem, using a cross-compiler, then remote deploy + debug. The most annoying part of this process is fixing all the symlinks not supported on NTFS.
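The loop usually looks something like this (the hostname, paths, and debug port are all hypothetical placeholders; shown as comments since it needs a real remote box):

```shell
# Sync sources to the build host, build there, then attach a remote debugger.
# "devbox" and the paths below stand in for your own setup.
# rsync -a --exclude .git ./ devbox:~/proj/
# ssh devbox 'cd ~/proj && make -j"$(nproc)"'
# ssh devbox 'cd ~/proj && gdbserver :9999 ./app'   # attach from the IDE's remote debugger
```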
Expensive niche workstation = $500 dev kit directly representative of your target, but with everything exposed.
The interesting thing is now we need ARM -> x86 remote build or cross-compilation tools, of which I know of none.
EDIT: fixed link
EDIT: Your new link doesn't tell a different story. From the comments:
> It is still possible to disable secure boot using csrutil. Apple has never officially supported booting Linux on a Mac.
Okay, so we can disable secure boot. What is the next step? How does one get to running Linux natively from there?
I cross-compile Linux kernels daily. I think Clang makes this simpler, but the missing C runtime for cross-compiling userspace executables still leaves much to be desired.
I think Zig is doing interesting things here. Clang should just straight up adopt this, IMO.
32 bit on the phones, a real pain in the ass to cross compile, but it's a fun learning experience (I'm just a noob to any programming). I'd love to get paid for this tbh :D
Or, you could use already-extant Debian ARM releases and spend minutes rather than months switching over.
Running stuff on your laptop makes it run slow, get hot, and burn battery. I've considered getting a small x86 or ARM media appliance as a (physically local) remote server for when I can't count on an Internet connection. A media PC costs how much? The big holdup has been the tyranny of choice I'm confronted with. (Suggestions are welcome!)
I think very few people would be surprised if the coming of ARM Macs will, along with AWS's ARM moves (and Microsoft's), drive acceptance and adoption of ARM-based server computing. The mechanism won't be anything formal, just the vague pressure that comes from people wanting their programs and libraries to compile locally.
I would expect about a 5x slowdown running Docker images.
Docker on a Mac utilizes a hypervisor. Hypervisors rely on running the same architecture on the host as the guest, and are only about 1x-2x as slow as running natively.
Since you're running ARM Mac, these hypervisors can only run ARM Linux. They can't run x86_64 Linux.
What will happen instead? These tools will fall back on emulators.
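You can already see the shape of this fallback today with Docker's `--platform` flag. The docker invocations below assume a Docker install with qemu/binfmt configured, so they're shown as comments; the final line just prints the host architecture for reference:

```shell
# On an ARM host, the first command runs natively under the hypervisor;
# the second silently drops into qemu emulation.
# docker run --rm --platform linux/arm64 ubuntu uname -m   # aarch64, native
# docker run --rm --platform linux/amd64 ubuntu uname -m   # x86_64, emulated
uname -m    # the host's own architecture, for comparison
```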
But also: Getting a cloud Windows station or an el-cheapo-$500-under-the-desk-when-you-really-need-it Windows machine is probably worth it if you're doing professional work. It would quickly cost much less than the time you lose when rebooting to the other OS, from my experience.
However, I think Apple has been a far greater threat to Linux adoption than Microsoft. Why? Because it gives techies the *nix environment they want, with the software and hardware support no one will give them on Linux.
There is real value in proprietary commercial end-user application software. Most companies who make such software couldn't care less about supporting Linux. So if you want to use Linux, you have to use F/OSS alternatives and continue to try convincing everyone that somehow they're better than the commercial options... even when the rest of the world has agreed that they're really not.
The whole incentive structure around F/OSS development really doesn't work for software where the profit motive is in the product itself... Not some nebulous "support contract" that you don't actually need. (Which is a far bigger issue for end-user applications.)
The UNIX experience on the Mac is pretty shitty. Ancient versions of all the tooling. Command-line utils have that weird BSD well-water flavor. No package management. Funny Docker quirks.
The hardware used to be pretty nice, but honestly I'm still having trouble forgiving them for getting rid of the physical ESC key and turning volume control into a two-step routine on the TouchBar.
Honestly if I'm doing server-side development, I much prefer using my ThinkPad (Ubuntu) over my MacBook. About the only thing I miss is the far superior touchpad on the Macbook. That's it.
Can be easily fixed by installing homebrew. Also you claim it is a shitty Unix experience while complaining about BSD flavoured tools.
> Funny Docker quirks.
How is that Unix related? BSD has similar issues.
Maybe you should have written the GNU/Linux experience is pretty shitty on macOS but no one claimed otherwise.
> The UNIX experience on the Mac is pretty shitty. Ancient versions of all the tooling. Command-line utils have that weird BSD well-water flavor. No package management. Funny Docker quirks.
It doesn't have to be the best *nix environment. Hell, it doesn't even have to be a good one. It just has to be "good enough". For this, they still have an advantage over Windows. And compared with Linux, they still have the advantage that, by and large, things "just work". I have never personally been able to say that about a Linux desktop I've had. There is always one more thing to tweak, one more knob to turn, etc...
I'm with you on the ESC key and touch bar though... thankfully they fixed the missing ESC key.
I can also install something like multipass on macOS if I want a good integrated virtual machine.
Last week I had to drop to vi and edit nfs.conf on a friend's Mac to solve very slow transfer rates. "Just works" within a very narrow definition of primitive use cases.
To be fair it's always been a one-step routine on the touchbar (touch and drag the icon) and they brought back the escape key.
On the other hand, my speakers have a physical volume dial that provides feedback via friction on movement so I like that better than touchbar or physical up/down buttons.
Discoverability certainly sucks for that second one.
Linux is its own thing and trying to mix UNIX with Linux is always going to lead to disappointment.
Very, very true. And Homebrew is actively becoming worse now. A few years back, Homebrew was great - now using it feels like using some weird underground software stack that exists only because Apple hasn't come around to nixing it yet.
Well, OS X is Unix, but GNU is not Unix.
That's what MacPorts is for ;)
Not a fan of the touchbar of course.
Otherwise everything else I need is usable on a Mac.
With WSL, you basically get an actual Linux userland (with WSL2, I think you get an actual Linux kernel too), not just a Unix that's like Linux but different enough to be annoying. But I'm not sure that will be enough to convince people to move to Windows.
Most devs only want some kind of CLI and POSIX like capabilities.
which is why WSL is so great...
An approach already taken by other UNIX clones with their Linux compatibility syscalls layer.
Back in the day I was using Hummingbird.
I used to admin UNIX and develop for it from Windows NT/2000 workstations.
Also the FOSS version of Motif only appeared when Motif wasn't that much relevant and most enterprise shops were migrating to CORBA and Web as integration points.
Apple has a monopoly on their hardware, and they will likely sell a significant number of devices. This will lead to a lot more development for ARM that never would have happened otherwise.
That, in turn, may tilt the balance in favor of ARM for a lot of other use cases outside of OSX once other tools, applications, and hardware vendors better support ARM.
As for switching... I'm increasingly unhappy with Microsoft's complete disregard of user privacy. Apple isn't perfect with that either, but IMO much better at least. For my use cases, Win and Mac are the only credible options due to software availability. So to get away from Windows.. there's not much choice these days.
What makes ARM so exciting? Maybe battery use will be better? Maybe it will be slightly faster? Maybe? There's also been a lot of tuning done for laptop workloads on x86, it's definitely a maybe. I expect the only noticeable changes for most users to be somewhat better battery life, some apps not working, and occasionally having to know which package to download.
But Apple can't just pay up every cross-platform software developer. Smaller developers will have to re-evaluate whether macOS remains a viable target platform for them, which can translate to a dev gain for Linux. The kicker is that Linux is in a much better position to translate an influx of developers into an influx of new users: Linux runs on what you have, while macOS requires you to buy Apple hardware.
And let's not forget about macOS as a gaming platform. Linux has made a huge leap forward with Steam Proton. On macOS there's still a ton of games not supporting x86_64 (Catalina), and situation won't get better by transition to ARM.
I’m sure that's not the case. It's much more in Adobe’s interest to be ARM-ready on day 1; there are plenty of Photoshop alternatives in the Mac App Store. Notice they demoed Affinity Photo running on the ARM Mac, and Affinity is much better at using Apple's native APIs and technologies than Adobe ever was. And it's just a one-time cost of $49.99 vs. renting Photoshop from Adobe. Users of Apple devices continue to be a large segment of their customer base.
Adobe has already migrated all of their core applications to a new codebase that should be relatively easy to bring to ARM Macs. Photoshop and several of their other apps already run on iPadOS, so it won't be that big a deal to move them over to Big Sur.
So nothing is changing. ARM Mac isn’t going to change the Linux desktop/laptop story.
Hence why plenty just target Android, although some of their apps could easily target GNU/Linux as well (especially the ones that are mostly NDK glue + whatever framework).
You can't run your x86 docker image on your ARM mac without emulation. You can't run your x86 Windows VM without emulation etc.
Of course there are solutions like using a remote server or a VM in the cloud, but if you're buying a decent machine then you would normally expect to be able to run these things locally.
For example, back in 2017 Cloudflare was basically looking at this as a question of which hardware ran most cost-effectively rather than having engineering heroics first:
My company definitely had problems getting database drivers to work, generally speaking, on both 32-bit and 64-bit. Have a look at Postgres, Oracle, Cassandra, Redis, Sybase, to name a few; I am not sure which one was worst - it wasn't me doing the work. But I've seen some of the C and C++ dependencies that needed to compile, with the errors that happened, and it was horrendous.
If you want a sample, try to install the Cassandra client library in Python. It will pull in and compile all sorts of shit. That's supposed to be Python and easily cross-platform.
Yes, we should clean up the mess! It doesn't mean it's not a mess. And I think the mess is actually still understated.
The one use case when this might be viable is targeting AWS Graviton2. Does anybody know if you can run an emulated Graviton2 on ARM Mac?
Things like the pinebook pro (and hopefully more linux ARM devices) will keep pushing this further.
Only the host OS is going to have the right drivers for the trackpad, wi-fi, GPU, power management, etc. etc. Through virtualization, the guest OS doesn't have to worry about constantly evolving hardware models.
Virtualized OS performance is already very good, and USB passthrough has existed for a while. Snapshots are a godsend.
What won't work are things like CUDA for eGPUs over Thunderbolt 3, and you'll have to share disk and RAM with the host OS.
But for most use cases it's probably the right choice. (This doesn't address the author's concern about moving away from x86.)
I don't see why this would be so hard. If anything, I expect to see a massive upswing in things like AWS Graviton2 uptake, and a lot of common Docker images being built with ARM versions out of the box. It might be about a year or so, but eventually we'll be able to just go ARM-native the whole way.
What Apple needs to do is make a first-class, WSL-tier implementation of Docker for Mac for ARM.
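Publishing those multi-arch images is already reasonably ergonomic with docker buildx; a sketch (the image name is hypothetical, and the commands need a Docker install with buildx, so they're shown as comments):

```shell
# Build and push one image manifest covering both architectures.
# docker buildx create --use
# docker buildx build --platform linux/amd64,linux/arm64 \
#   -t example.com/myteam/app:latest --push .
```

Once the manifest is published, `docker pull` on either architecture resolves to the matching variant automatically.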
This has no chance of happening. The common cloud CI systems do not support ARM at all (Travis, CircleCI and co). Only a minority of developers have MacBooks, and the rest are not going to spend $2000 to buy one just to build some Docker images.
GitHub lists it as a feature now:
I'd be very surprised if this didn't become more common given the high levels of interest people are showing towards ARM server offerings in the cloud space.
A better question might be: how many of the most common open source projects are managed by volunteers in their spare time? These will not build for ARM unless there is a free tool doing it automatically for them. Currently GitHub + Travis/Circle can do this for x64 on every push to master.
I'd love to have a POWER and an IBM LinuxONE in my shed, but that's not going to happen.
Maybe the POWER, but the Z most likely not.
I travel the world meeting developers from multiple communities. I very rarely see one without a MacBook.
I benchmarked some `t3a.nano`s against some `a1.medium`s and found that the `nano`s were sufficient for my needs, so I went with them (they are cheaper than `a1.medium`s, even if the `a1`s have a better price-to-performance ratio).
I didn’t find it too difficult to rebuild any of these projects for cross-architecture usage. Even Janus, which has a TON of C/C++ dependencies (some of which have to be compiled from a particular version of the source) easily built for ARM with no change in the Dockerfile.
So I kind of feel like OP is exaggerating the effort required to migrate servers to ARM. Sure it might be a hassle when you have tons of microservices, but you can move them incrementally, and most things recompile with no changes. And regardless of what architecture your dev machine is, you’ll want to be able to compile for and work with both architectures if you want to get the most out of the infrastructure on offer in 2020.
 Shameless plug: https://chrisuehlinger.com/blog/2020/06/16/unshattering-the-...
Did you notice at the end that you did NOT end up choosing ARM? You ended up going with x86_64 because that's what made more sense for your backend. That's part of my point - developers should choose their backend architecture based on the performance and pricing of their backend, not their development laptop. And if that decision is "we should keep using x86", then there will be a big performance hit in development.
This is just another CPU story, no big deal.
Having a proper competitor for x86/x64 is a good thing.
The fact Docker is slower on ARM (at the moment!) is mostly due to the lack of interest in optimizations.
With Apple starting to produce MacARM machines, and maybe more ARM servers in the wild, docker (and other platforms/frameworks) will start to get more performant on ARM as well.
one can’t run an x86 OS on an ARM architecture
“Microsoft only licenses Windows 10 on ARM to OEMs. We have nothing further to share at this time.” 
“We’re not direct booting an alternate operating system,” says Craig Federighi, Apple’s senior vice president of software engineering. “Purely virtualization is the route. These hypervisors can be very efficient, so the need to direct boot shouldn’t really be the concern.” 
I have no idea what plans they might have but I would be surprised if you couldn’t install some ARM Linux distros on their laptops sometime next year.
> However, this would take months
I'm sure it won't make sense for everyone, but I'm just as sure it will make sense for many.
... thus making Android development better on Macs?
It won't be a trivial task (hoping for pre-existing code to port over, maybe?) but we have the other pieces, like using Hypervisor.framework for x86 already, and being able to cross-compile the other code for arm64, so that would be the only major task left.
On the subject of better GPU support, it depends on what it's actually like using the drivers, but from previous experience with the GPUs and drivers shipped with macOS, there shouldn't be any special kind of trouble at least. We may have to use Metal if Apple also gets rid of OpenGL support on those new machines, but there are also existing translators from GLES and Vulkan to Metal. The graphics hardware itself is actually the least of our worries, given how consistent the hardware is likely to be: we'd have to deal with a much smaller set of hardware/driver quirks versus other host OS platforms.
OpenGL is deprecated but still supported on Apple Silicon, even for arm64 apps.
At best, you can say that you can run ARM-only games at native speed now, but as a developer you won't really notice much difference (assuming the processors aren't slower than Intel's).
m6g, c6g, r6g each support 8 sizes for a total of 24
C6g, M6g, and R6g (powered by AWS Graviton2) each support 8 sizes, along with bare metal.
A1 instances (powered by AWS Graviton) have 5 sizes, along with bare metal.
That's a total of 33 distinct instance sizes.
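As a quick sanity check, the count above works out as follows (family and size figures taken from the comment, not independently verified):

```python
# Tally the Graviton instance sizes quoted above.
graviton2_families = ["c6g", "m6g", "r6g"]
graviton2_sizes = 8 + 1   # 8 virtualized sizes plus one bare-metal option
a1_sizes = 5 + 1          # 5 virtualized sizes plus one bare-metal option

total = len(graviton2_families) * graviton2_sizes + a1_sizes
print(total)  # -> 33
```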
That still leaves storage optimized and GPU optimized instances missing. I'm guessing storage should be easy enough to add, but what about GPU?
From my novice experience with GPUs, they need finicky drivers that must be ported by the GPU manufacturers, so I figure it might take a while to get competitive ARM GPU instances.
NVIDIA is supporting Arm for CUDA development, see https://nvidianews.nvidia.com/news/nvidia-brings-cuda-to-arm... and https://blogs.nvidia.com/blog/2019/11/18/ngc-containers-arm/
That doesn't sound right to me. Perhaps on I/O-bound tasks, if you are using emulated devices. On CPU-bound tasks you should see near-native performance.
Slightly longer (but not much) answer: https://www.youtube.com/watch?v=Hg9F1Qjv3iU&feature=youtu.be...
Apple is finally killing x86.
At last, the future will surely be bright!
Also, the CPU is but a minor part of the puzzle. Still, that is twice as many as Apple (good luck exchanging that Apple ARM chip for any other brand).
Please let me know how open you think the next apple ARM platform is when you try to boot any OS not written by apple.
Please compare that with a computer built from AMD/Intel with a motherboard out of dozens of manufacturers etc. Any ATX power supply etc. Pretty much any PCI-E graphics card etc.
> Also, the CPU is but a minor part of the puzzle. Still, that is twice as many as Apple (good luck exchanging that Apple ARM chip for any other brand).
Even on x86, interchangeable CPUs are the exception, not the rule. Intel and AMD CPUs haven't even used the same socket since the 1990s, and even within those manufacturers, socket incompatibilities are common.
Software interchangeability is more of an operating systems issue than an architectural one. With appropriate software shims, though, there is no reason to suspect that (for example) Linux ARM software could be run on an Apple ARM CPU. In fact, it's quite likely that tools like the Android emulator will do exactly that.
> Please compare that with a computer built from AMD/Intel with a motherboard out of dozens of manufacturers etc. Any ATX power supply etc. Pretty much any PCI-E graphics card etc.
Server-class ARM hardware generally does use similar parts as x86 servers, including power supplies and PCIe peripherals.
> Software interchangeability is more of an operating systems issue than an architectural one.
Not if the architecture is designed around keeping others out. But just not telling anyone how to do it is enough in 99% of cases. Some hacker might post a buggy proof-of-concept for an obsolete device that no one will run.
> Server-class ARM hardware generally does use similar parts as x86 servers, including power supplies and PCIe peripherals.
Wanna place a bet on what apple is going to do?
What you might be worried about is a duopoly which has nothing to do with whether something is a software platform or not. For example Microsoft has a monopoly on Windows. That doesn't stop Windows from being a platform for which you can write arbitrary software. Apple has a monopoly on iOS but it's not possible for users to write their own software, they have to join a developer program that can always exclude them. This is what one would call a closed platform. ARM is closer to the iOS model than to Windows.
Is there any suggestion that the architecture that Apple is using is any different to what is being used by lots and lots of other licensees? If not then it's much more open than x86.
If you mean that you can't buy an ARM CPU today to plug into your own motherboard then understood, but that's probably just a matter of time. At least making such a CPU is possible; no one is going to make x86 more open.
There is no indication that I'll ever be able to buy an ARM CPU, plug it into a motherboard with the feature set I choose, and attach peripherals that follow a standard and open interface with good performance. The only companies that make ARM CPUs that could at all be useful in such an open platform don't make CPUs fast enough. I don't think Apple or Amazon will ever sell me a socketable CPU with high-speed PCIe support and XMP memory support.
You can buy socketed ARM CPUs today that are likely to be more than fast enough - from Marvell for example.
You're 100% right that there isn't an ARM ecosystem at the moment in the way that there is a PC ecosystem with all its flexibility.
But that's not a feature of the ARM architecture or what ARM as a company does - it's because x86 has historically dominated the desktop.
When we have three ARM companies making socketed CPUs with standardized I/O between them that are faster than, or very close to, AMD or Intel, then it will be good.
- SwiftShader
- Pixar USD
- Python 3
- CineForm CFHD
The reality seems to be that their top macOS developers have been busy laying groundwork for the ARM transition. There's so much to be done.
I'm guessing new ARMv8 ISA features, PAC/BTI/MTE?