ARM Macs and Virtualization: It's going to be great (ml-illustrated.com)



"It's going to be great!"

"It's the end of the world!"

"This will make my life so much easier!"

"I'll never buy another Mac again!"

I've stopped clicking on rando blogs linked on HN. It's all bait at this point.


I don't know if it's a summer thing, but same here.

That said, I've clicked on every Apple-related link and made a point of mostly skimming through each of them. There was usually something valuable (by my standard), but yeah, a lot of it I really don't want clogging up my brain.


I have all of those feelings myself actually :) Overall I'm excited to see what happens.


You can't escape catastrophizing by staying on the news.yc domain, because there will be frothing 500-comment threads any time some UI gets a design update.


And you don't even have to leave hacker news for comments like that. It's all very tiring.


I worked for several large companies where the on-machine (not containerized) development was very clean.

That hygiene required a large dev tools engineering team to support it. At the smaller companies, dockerizing individual projects for development was... a regrettable but easy shortcut around having to wrangle with local setup.

I wonder if this author has never experienced, or has already forgotten, fighting with venv/pipenv/conda?


If this slows or stops the use of Docker in local development I'm all for it. Docker for development is a plague on this earth. It burns battery, slows builds, makes fan noise, eats huge amounts of bandwidth/HDD space, and makes debugging a nightmare. All for what? It's marginally better at setting up a self-contained dev environment. Oh wait, Docker can't do that; you also need Docker Compose.


Don't forget the prospect of Electron Apps on ARM Macs.

iOS and iPadOS never had a chance to run Electron apps, since the OS would kill them for eating up RAM. 16GB of RAM will still not be enough for them. Maybe they'll run just fine, but they'll hardly be usable or deliver "desktop performance".

This makes me appreciate iOS/iPadOS app compatibility on ARM Macs as a way to escape Electron for apps that don't need it.


Electron is already compatible with ARM, as Microsoft added support in 2019 to allow targeting ARM64 devices like the Surface Pro X.


If 16GB of memory isn't enough to run a text editor or glorified IRC client, then the problem is probably the horribly bloated software.


They showed Docker for Mac built for ARM during the keynote, and mentioned that they have been working with Docker for when macOS 11 ships, so I doubt that's going to be the case.


Dev environments always end up with drift. That’s just the reality.

Docker is here to stay because it simplifies too many painful things, and now it has industry momentum behind it. Even our mostly non-software-engineer data scientists are shipping their own containers to prod with ease.

Besides, there's nothing to say you can't use both if that's your preference.


What's your preference instead of Docker? Just run everything on the same host OS? Nix? More conventional VMs? Servers with specific configurations?


When possible I want to run my apps on OSX. It’s not always possible, but usually works.

The scripting environment managers have made big improvements in the past few years. Nvm/virtualenv/rvm give me 90% of the Docker value prop with none of the performance impact or debug hassle (a rough sketch of that per-project isolation is at the end of this comment).

Haskell/clojure/java all seem platform agnostic out of the box.

C/C++ is probably still a hassle with libs being platform specific.
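Since Python comes up in this thread a lot: a minimal sketch of the kind of per-project isolation those tools give you, using only the standard library (the .venv and requirements.txt names are just common conventions, not the commenter's actual setup):

    import subprocess
    import sys
    import venv
    from pathlib import Path

    def bootstrap(project: Path) -> Path:
        """Create a per-project virtual environment and install its pinned deps into it."""
        env_dir = project / ".venv"
        venv.create(env_dir, with_pip=True)
        pip = env_dir / ("Scripts" if sys.platform == "win32" else "bin") / "pip"
        requirements = project / "requirements.txt"
        if requirements.exists():
            subprocess.run([str(pip), "install", "-r", str(requirements)], check=True)
        return env_dir

    if __name__ == "__main__":
        print("isolated env created at", bootstrap(Path(".")))

Everything lands under the project directory, so "uninstalling" is deleting one folder: no containers, no VM.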


The question is, why does Docker have that overhead? Virtualization should not be very resource-intensive nowadays; everything is optimized at the CPU level.


It's not as simple as "everything is optimized at the CPU level," actually. Virtualization has very different performance impacts on different workloads.


Virtualization has pretty high overhead.

But on the Mac you have to pay the virtualization overhead to run Linux in a VM, then compound the docker overhead on top of that.

On a MacBook Pro this is problematic because it eats battery and contributes to thermal problems.


Forgot to mention that disk performance of Docker on the Mac is a big deal when building software.

This article shows overhead for a disk heavy workload: https://vivait.co.uk/labs/docker-for-mac-performance-using-n...

He gets a 7-second load time natively, but a 56-second load in Docker with the default volume mount.
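For a rough sense of where that gap comes from, here's a minimal Python sketch of the kind of small-file-heavy workload that hits the bind-mount path hard; run it once on the host and once inside a container with the same directory bind-mounted and compare (the file count and size are arbitrary, not numbers from the linked article):

    import os
    import tempfile
    import time

    def small_file_churn(root: str, n_files: int = 2000, size: int = 4096) -> float:
        """Write and read back n_files small files under root; return elapsed seconds."""
        payload = os.urandom(size)
        start = time.perf_counter()
        for i in range(n_files):
            path = os.path.join(root, "f%d.bin" % i)
            with open(path, "wb") as f:
                f.write(payload)
            with open(path, "rb") as f:
                f.read()
        return time.perf_counter() - start

    if __name__ == "__main__":
        # Point dir="." at the bind-mounted folder when running inside the container.
        with tempfile.TemporaryDirectory(dir=".") as tmp:
            print("%.2fs for 2000 small files" % small_file_churn(tmp))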


Docker on Mac does indeed suck! But that's because you've got to run Linux to run Docker. Docker for dev on a Linux box? Great stuff!


Docker for Windows is pretty sweet; WSL2 is even sweeter.

Microsoft pretty much killed Linux on desktop for anything other than ideological reasons at this point.


Well, Windows forces you to use VS Code Remote or pay a huge disk-performance penalty in WSL 2; that's no small compromise.


>makes fan noise

I couldn't help but chuckle at including this particular gripe in a list of otherwise very impactful downsides of docker. I mean, I hate it when my laptop starts sounding like a jet engine too, but...


Completely agree. In case it's helpful, we're working on a cloud dev environment for Docker that addresses the resource issues by running containers in the cloud instead of locally: http://kelda.io/blimp

Of course, if you can avoid docker for local development, that's definitely easiest in many cases.


*extreme nerd voice: "just use Nix"


The Nix package manager solves the 'single docker container to be my build environment' thing well. nix-shell is significantly nicer to use than docker.

How is it for the docker-compose 'I need x, y, z running' use case? It's not something I need that much, there are various shell.nix hacks I've seen to (e.g.) get Postgres running for Elixir development. They work, but it's not as rich an approach as the rest of Nix.


Totally, that's a fair question. I've only been using it by building x, y, z separately, and it does reduce my build times significantly. I also just love being able to fetch binaries with absolutely no strings attached. But in terms of orchestration, Docker has a far easier interface with Compose etc. As far as I know, Nix doesn't really have this sort of solution built in, but I guess you could write a derivation where you orchestrate the different binaries yourself, starting them with some sort of task manager. So far I just build the binaries with Nix and orchestrate them myself on the server.


There must be some stuff, because NixOS exists. Whether it can work in a more standalone way I don't know.


Yeah, the thing is that most of us don't have the patience to figure out how Nix works, write derivations, and so on.

Docker and Compose look pretty straightforward in comparison, at least for local development.


Nix may not be straightforward to learn, but Docker isn't straightforward to use. Container orchestration is a pain to deal with, for instance.


Most of the problems you listed are because you want to use a laptop to work for some reason. Ewww. Why anyone would want to optimize for working in trains, planes, automobiles, hotel rooms and meetings - I'll never understand.

I've never had any of these problems with my very inexpensive tower computer, which I upgraded to 32GB of RAM and terabytes of disk space, and which was probably a third of the cost of your laptop.


> Why anyone would want to optimize for working in trains, planes, automobiles, hotel rooms and meetings - I'll never understand.

It shouldn't take much understanding - many people work almost exclusively in those environments. Consultants, customer engineers, 'digital nomads'.

Then there's the group of us who would rather the (frankly epic) advances in hardware went towards actual visible software performance improvements instead of more layers of waste.


Better yet, why not have the best of both worlds?

I know most devs are hesitant to try anything that's "always online", but remote development apps have really gotten better recently. VSCode with the Remote extension makes doing development on a VPS a breeze, and unless you have a really bad internet connection, you probably won't even notice it's remote. And then you also get the benefits of the VPS having a faster internet connection (faster package downloads, etc), most likely better specs than your laptop for only $10-15/mo, more resiliency, and better security. It also completely solves the whole "my laptop is ARM but my servers are x86" problem that keeps getting talked about.


I use VS Code remote development via SSH to a $5 a month digital ocean server, and it’s so incredibly smooth and easy. You would never know it wasn’t local. I’ve been using Windows for decades so I’m very productive running windows on the desktop, but I prefer to use Linux on the server. VS Code remote over SSH is the perfect combination for me. For someone who loves macOS the situation should be exactly the same, regardless of whether they are on ARM or x86. If you haven’t tried it yet, I highly recommend giving it a try!


> Most of the problems you listed are because you want to use a laptop to work for some reason. Ewww. Why anyone would want to optimize for working in trains, planes, automobiles, hotel rooms and meetings - I'll never understand.

Because a lot of development requires only a couple cores, maybe 8GB of RAM, and a few free GB for the database or dataset (at most, probably a few hundred free MB). If you aren't simulating large networks, doing serious number crunching (HPC-styled or ML), or into graphics heavy development, a laptop is more than enough for most work.

I want to say most, but that's an assumption on my part: a lot of computing is about encoding business rules that could be done by passing paper around an office or larger complex, and making it digital (though obviously not with the speed and reliability that's often wanted or needed). Message passing, filtering, connecting to databases, verifying data integrity, connecting multiple DBs and auto-populating them, etc. None of that requires a powerful computer to develop.

Even the embedded work I've done is really just business rules for safety critical systems: If this reaches some temp, send a signal to the pilot, pilot can optionally release halon. If pilot sends "release halon", then release the halon. Nothing about that or the target platform (16MHz processor with memory measured in KB, maybe double-digit MB) required a powerful computer for development considering that the earlier versions were written on, maybe, 386s.

Just because you do things that require lots of RAM, disk space, and fast cores, don't assume everyone else needs that. I used to do a lot of graphics work, and the laptop barely did the job. A full tower was what I needed (especially as I wanted to use CUDA and/or OpenCL and move a lot of the associated numeric work to the GPU). If I were doing machine learning I'd be in that same boat. But I'm not, and most of us aren't.


>Why anyone would want to optimize for working in trains, planes, automobiles, hotel rooms and meetings - I'll never understand.

Because being stationary isn't that great for creativity and problem solving.

Additionally, there are whole careers being made on the ability to work while on the move.


Easy: when one does consulting for Fortune 500 companies across the globe, it is expected that we code wherever we are while traveling to the next customer meeting on site.

And even when not traveling, we are expected to move across the building and join other teams for collaborative work.


That’s actually a good point. I’m not really sure how I’ve come to expect a laptop for development. At home I use a PC and it’s much faster in almost every way, despite being 4 years older than my laptop.


Can't take my workstation hiking or traveling.


I believe it’s going to be great for ANE users, but let’s just say that this author’s opinions on how local Docker shouldn’t be part of the “core development cycle” don’t match my environment. This is going to mean investment in Linux laptops, for all their problems, for us.


Yeah that part was just "you're holding it wrong". I'm surprised that the option of running ARM containers on an ARM Linux isn't even mentioned? This would surely be the way forward?


Not necessarily.

What if your laptop is arm but your production environment is x86 ?

You might think "well, I'll have ARM containers on my laptop and x86 containers in production," but then dev and prod diverge: you're running different software that might have different behaviors.

One of the core ideas of Docker was that you could ideally have the same artifact in development, testing, staging, QA, and prod environments.

There's little way around the core problem: macOS is badly suited for running Docker containers. The most performant way to run Docker containers is to run them on native GNU/Linux (hence: GNU/Linux laptops).


Can you articulate any divergence you expect that wouldn’t otherwise be present running your container on another environment (prod)? I think “divergence” is a non-issue here if you’re testing locally, but the final build and push comes from another x86 CI host.


> What if your laptop is arm but your production environment is x86 ?

That's what CI is for?

> it means that dev and prod diverge

It just means you're using a different architecture for your local development. Your DEV & TEST envs and your automated CI/CD runs can still use x86. Sure, it's a bit inconvenient, but for most types of development it won't matter much, as the stack doesn't compile to native anyway. I don't care if Java, JS, or whatever runs on x86 or ARM.


bingo.

I use Docker to run ALL dependencies for ALL the stuff I develop as a way to really cleanly keep stuff organized and avoid having to figure out how to totally purge random installs of Postgres etc. from my system.

I also run a couple things 24/7 in docker: e.g. pihole

A huge slowdown or hugely increased CPU usage (and it's not good as-is) for running stuff in Docker is a non-starter for me. If that's how it works out, that would be the nudge I need to seriously investigate a Linux laptop.


If organization is your goal, I'd recommend Nix over Docker. You can separate packages without having to maintain tons of separate containers.


"ah, you've got a problem? BUT: what if you get yourself into this completely different set of problems?"

Jokes aside: the problem being discussed here is not organization, it's running Docker containers (with Linux binaries inside) on macOS, on x86-64 today and ARM tomorrow.


The parent commenter quite clearly talks about using Docker to "keep stuff organized," so no, I wasn't talking about a "completely different set of problems." I'd also like to add that the parent commenter hasn't mentioned the need to run x86 binaries. You obviously don't need to run x86 pi-hole on an arm device.


Except macOS isn't Linux, so that is the problem right there.


> this author’s opinions on how local Docker shouldn’t be part of the “core development cycle” don’t match my environment

Yeah, I think that's why making any blanket statement about this stuff doesn't work. Personally I went through a phase of Docker-ising everything I did because it made so much sense but really it just got in my way. My development work mostly consists of Node and Rust, both have self-contained package systems that mean encapsulating everything in Docker is unnecessary. But that's not the case for everyone!


I find Docker very complementary to Node, not because I'm putting my code in a container but for all the databases and other software. When I'm running tests I automatically spin up a Redis or a Postgres or whatever database(s) are required. This lets tests run in parallel, projects co-exist etc, without any cross-contamination or shared/mixing of dependencies.
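The commenter's stack is Node, but the pattern is language-agnostic; here's a minimal Python/pytest sketch of it that starts a throwaway Postgres per test session (the image tag, host port, and password are arbitrary, and it assumes the docker CLI and psycopg2 are installed):

    import subprocess
    import time

    import psycopg2
    import pytest

    @pytest.fixture(scope="session")
    def postgres_url():
        """Start a disposable Postgres container for the test session, then remove it."""
        container_id = subprocess.run(
            ["docker", "run", "--rm", "-d",
             "-e", "POSTGRES_PASSWORD=test",
             "-p", "55432:5432",  # arbitrary host port so it won't clash with a local install
             "postgres:12"],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        url = "postgresql://postgres:test@localhost:55432/postgres"
        try:
            for _ in range(30):  # wait until the server accepts connections
                try:
                    psycopg2.connect(url).close()
                    break
                except psycopg2.OperationalError:
                    time.sleep(1)
            yield url
        finally:
            subprocess.run(["docker", "stop", container_id], check=False, capture_output=True)

Tests just take postgres_url as an argument; nothing is installed on the host, and the data disappears with the container.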


I have yet to bother with Docker, and unless a customer project requires it, I will keep staying away from it.

Still using VM images, and I bet that in 5 more years everyone and their dog will be doing multi-scale lambda functions on cloud runtimes with complete disregard for the underlying OS.


There seem to be quite a few gushing Apple pieces recently.

"enable developers to write code on their ARM Macs, test across devices locally, and deploy to iOS devices and server-side"

I believe a step is missing, "Apple approve/sign the code".


For Mac apps, there is no human approval process, just an automated malware scan/code signing workflow.

Frankly, I find this less troublesome than Windows Defender refusing to run apps from indie developers who don't have a good enough "reputation score".


>It's going to be great! What about Docker being slow? The great news is you shouldn't use Docker! :D

What nonsense is this?


The author seems to be an iOS developer with Core ML experience...

No question that iOS dev will be better with these ARM Macs. As for development on the Mac for other platforms, it's hard to say, but my bet is a world of pain for the next few years.


This post hints at something I've thought might be Apple's answer to "how will it make economic sense to make workstation-grade CPUs for the higher end of the Mac line, given the number of such units sold?"

The potential answer is to go AWS. Build up a cloud platform using the same CPUs, move all Apple cloud infrastructure onto it, rent the rest of the capacity out to the public.

With this, plus the sales from high-end Macs/MacBooks, the R&D overhead for high end CPUs might make sense.


Do you think they have the kind of culture and experience inhouse to pull something like that off? I'm skeptical that it's in Apple's wheelhouse but it'd be interesting to see.


I honestly don't know, it just made sense as a possible solution given the limited insight I have.

Before AWS, I didn't expect Amazon to become the power that it is with regards to cloud either.

Perhaps Apple can provide some unique attributes of cloud infrastructure that are specifically useful to people who otherwise operate within their ecosystem, without feeling like they need to compete with Amazon/Google/MS in every aspect.


Amazon is already investing heavily in the graviton line of server-grade ARM64 CPUs, so there’s no need for any partnership.


For software dev I think this is the long term trend: run everything including the editor in the cloud, have just a cheap dumb terminal with you.

Audio and video, on the other hand, are very latency-sensitive. Physics may prevent bringing those things to the cloud.


> Funnily enough, I’d say this is the biggest rationale for Apple going ARM for Macs, which is to make the development environment exactly the same as their deployment targets, namely iOS, iPadOS, Apple Watch, Apple TV, and soon, MacOS.

No, it's not. This is a negligible advantage for iOS developers (I am one and don't care about this at all). Improving the developer experience a bit does not bring enough value to Apple to make such a risky and expensive move.


> ...the problem of x86 emulation can be dealt with if Apple offers ARM-based cloud servers.

Amazon has a very good ARM64 cloud offering with their M6g instances. Pure ARM64 server-side Linux stacks are not yet mainstream but the ARM64 Macs will help.


Apple dropping Intel changes the way I feel about Graviton. I thought it was sort of a ploy from Amazon to maybe get better deals from Intel and AMD and to give them a fixed cost way to run their own stuff.

It makes me think that Intel's issues are substantially bigger than some process problems, and that there may be a generational architecture issue to be sorted out. These are two big, high-profile customers that have lost faith.


Amazon’s James Hamilton has been bullish on server-side ARM for a long time. [1]

[1] https://perspectives.mvdirona.com/2015/10/arm-server-market/


Does anybody have any tips on minimizing CPU resource use when running Linux containers on a Mac? I'm currently running Docker Desktop Community 2.3.0.2. Thanks in advance for any help.


Eh: run Linux on your Mac. You'll skip the whole Linux VM overhead and NFS I/O overhead.

There's not much way around it.


Has there been an announcement from Apple to say they will virtualize Docker to run x86 images or is it just speculation?

I would have thought running arm64 Docker images would be the best way forward.


Docker will not run x86 images. https://youtu.be/Hg9F1Qjv3iU?t=3742


You'll need to maintain multiple sets of images and you'll need to do the work to grab them appropriately in your tool chain.

As far as I'm aware, there are no universal Docker images.


> The concern is that under x86 emulation instead of hypervisor, the performance hit could be significant.

You are not going to be able to run an x86 hypervisor/VM on Apple Silicon Macs. This is very clearly laid out by Apple: https://developer.apple.com/documentation/apple_silicon/abou...


That is about Rosetta, not Apple Silicon.


> That is about Rosetta, not Apple Silicon.

It's about Rosetta... on Apple Silicon: "Rosetta is a translation process that allows users to run apps that contain x86_64 instructions on Apple silicon."

The original post pre-supposes that you're going to be able to run x86 VMs on Apple Silicon Macs. You can't: "Rosetta doesn’t translate the following executables [...] Virtual Machine apps that virtualize x86_64 computer platforms"


Rosetta has the specific purpose of running MacOS applications that were compiled for x86 on Apple Silicon. It is not a general x86 emulator and therefore you can't run an arbitrary x86 VM on top of it.

There will be other applications created--probably not by Apple--that will provide general-purpose x86 emulation on Apple Silicon.


The original post is likely referring to Docker containers running qemu-user, not Rosetta running x86 VMs.
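As a hedged illustration of that route with today's tooling: a reasonably recent Docker lets you request a foreign-architecture image explicitly, and on a mismatched host the container only runs if qemu-user emulation is registered (whether Docker Desktop on ARM Macs will ship that setup is exactly the open question). A minimal Python sketch:

    import subprocess

    def run_foreign_arch(image: str, platform: str = "linux/amd64") -> str:
        """Run a container built for another CPU architecture and report what it sees.
        On a mismatched host this only works if qemu-user emulation is registered."""
        result = subprocess.run(
            ["docker", "run", "--rm", "--platform", platform, image, "uname", "-m"],
            check=True, capture_output=True, text=True,
        )
        return result.stdout.strip()

    if __name__ == "__main__":
        # Prints "x86_64" when the amd64 image runs (natively or under emulation);
        # the call simply fails if no emulation is available.
        print(run_foreign_arch("alpine:3.12"))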


Look, I get it, Docker has its downsides and it wasn’t meant for local development.

But it just works (most of the time). I can worry about getting shit done instead of messing around with web servers, interpreters, and package managers (it's fun the first couple of times, but then I want to get to the code).

The problem is that Docker on Mac sucks, because it’s basically a VM running Linux, and it eats up most of my 8 gigs of RAM (yeah I know, 2020).

Maybe, what Apple really needs is something like WSL2.


WSL2 basically is a VM running Linux :) There's actually Windows-native docker which runs Windows containers. But it's not very useful, because there are very few images compiled for Windows.

Docker is useful because of those thousands of companies publishing "official" images. You can't expect them to publish those images for every possible platform. They target Linux x64. Maybe they'll target Linux ARM, maybe not.


Well, you're right, but the hypervisor under the hood is better.


This announcement aligns well with Graviton. I can see people building and deploying ARM64 from start to finish, and running it in Amazon’s economical graviton fleet.


I am not an Apple developer. I do a lot with Docker and Linux system administration. I develop and maintain applications that run in Docker or on Linux directly. I also dual-boot Linux and Windows on my home PC.

I'm intrigued by macOS on ARM. Intrigued enough to buy an ARM Mac when they come out.

I'm not all-in on this by any means. It's more accurate to say I'm cautiously optimistic.


This one feels a lot less substantive than the original article it is intended to rebut.


It would be crazy to see Apple entering the cloud market with ARM-based servers.


Does ARM offer any advantage in the server space? Is there really demand for OS X cloud compute?


Spoiler: no it's not.



