"It's the end of the world!"
"This will make my life so much easier!"
"I'll never buy another Mac again!"
I've stopped clicking on rando blogs linked on HN. It's all bait at this point.
That said, I've clicked on every Apple-related link and made a point of mostly skimming through each of them. There was usually something valuable (by my standard), but yeah, a lot of it I really don't want clogging up my brain.
That hygiene required a large dev tools engineering team to support it. For smaller companies, dockerizing individual projects for development was... a regrettable but easy shortcut to not having to wrangle with local setup.
I wonder if this author has never experienced, or has already forgotten, fighting with venv/pipenv/conda?
iOS and iPadOS never had the chance to run Electron apps, given they would be killed by the OS as they ate up RAM sideways. 16GB of RAM will still not be enough for it. Maybe they'll run just fine, but they'll hardly be usable or deliver "desktop performance".
This makes me appreciate the iOS/iPad app compatibility on ARM Macs as a way to escape some apps that don't need to use Electron.
Docker is here to stay because it simplifies too many painful things, and now it has industry momentum behind it. Even our mostly non-software-engineer data scientists are shipping their own containers to prod with ease.
Besides, there's nothing to say you can't use both if that's your preference.
The scripting environment managers have made big improvements in the past few years. Nvm/virtualenv/rvm give me 90% of the docker value prop with none of the performance impact or debug hassle.
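For the scripting-language case, the isolation these managers provide is something you can get from the standard library alone; here's a minimal sketch using Python's stdlib venv module (the project path and requirements file are hypothetical):

    # Per-project isolation with Python's stdlib venv module, the same idea
    # nvm/virtualenv/rvm give you for their ecosystems. Paths are hypothetical.
    import subprocess
    import venv
    from pathlib import Path

    project = Path("my-project")      # hypothetical project directory
    env_dir = project / ".venv"

    # Create an isolated interpreter + site-packages for this project only.
    venv.EnvBuilder(with_pip=True).create(env_dir)

    # Install the project's pinned dependencies into that environment;
    # nothing leaks into the system Python. (POSIX layout assumed.)
    pip = env_dir / "bin" / "pip"
    subprocess.run([str(pip), "install", "-r", str(project / "requirements.txt")], check=True)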
Haskell/clojure/java all seem platform agnostic out of the box.
C/C++ is probably still a hassle with libs being platform specific.
But on the Mac you have to pay the virtualization overhead to run Linux in a VM, then compound the docker overhead on top of that.
On a MacBook Pro this is problematic because it eats battery and contributes to thermal problems.
This article shows overhead for a disk heavy workload: https://vivait.co.uk/labs/docker-for-mac-performance-using-n...
He gets a 7-second load time natively, but a 56-second load under Docker, with inconsistent performance from the host drive mount.
Microsoft pretty much killed Linux on desktop for anything other than ideological reasons at this point.
I couldn't help but chuckle at including this particular gripe in a list of otherwise very impactful downsides of docker. I mean, I hate it when my laptop starts sounding like a jet engine too, but...
Of course, if you can avoid docker for local development, that's definitely easiest in many cases.
How is it for the docker-compose "I need x, y, z running" use case? It's not something I need that often; there are various shell.nix hacks I've seen to (e.g.) get Postgres running for Elixir development. They work, but it's not as rich an approach as the rest of Nix.
Docker and Compose look pretty straightforward in comparison, at least for local development.
I've never had any of these problems with my very inexpensive tower computer, which I upgraded to 32GB of RAM and terabytes of disk space, and which was probably a third of the cost of your laptop.
It shouldn't take much understanding - many people work almost exclusively in those environments. Consultants, customer engineers, 'digital nomads'.
Then there's the group of us who would rather the (frankly epic) advances in hardware went towards actual visible software performance improvements instead of more layers of waste.
I know most devs are hesitant to try anything that's "always online", but remote development apps have really gotten better recently. VSCode with the Remote extension makes doing development on a VPS a breeze, and unless you have a really bad internet connection, you probably won't even notice it's remote. And then you also get the benefits of the VPS having a faster internet connection (faster package downloads, etc), most likely better specs than your laptop for only $10-15/mo, more resiliency, and better security. It also completely solves the whole "my laptop is ARM but my servers are x86" problem that keeps getting talked about.
Because a lot of development requires only a couple of cores, maybe 8GB of RAM, and a few GB free for the database or dataset (at most; probably only a few hundred MB). If you aren't simulating large networks, doing serious number crunching (HPC-style or ML), or into graphics-heavy development, a laptop is more than enough for most work.
I want to say most, but that's an assumption on my part: a lot of computing is about encoding business rules that could be done by passing paper around an office or larger complex, and making them digital (though obviously not with the speed and reliability that's often wanted or needed). Message passing, filtering, connecting to databases, verifying data integrity, connecting multiple DBs and auto-populating them, etc. None of that requires a powerful computer to develop.
Even the embedded work I've done is really just business rules for safety-critical systems: if this reaches some temperature, send a signal to the pilot; the pilot can optionally release halon. If the pilot sends "release halon", then release the halon. Nothing about that or the target platform (a 16MHz processor with memory measured in KB, maybe double-digit MB) required a powerful computer for development, considering that the earlier versions were written on, maybe, 386s.
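To make the point concrete, the whole rule fits in a few lines. This is a purely illustrative sketch; the threshold, signal names, and callbacks are made up, not from any real system:

    # Illustrative sketch of the business rule described above. The threshold,
    # signal names, and I/O callbacks are all hypothetical.
    OVERHEAT_THRESHOLD_C = 180.0

    def on_temperature_reading(temp_c, send_to_pilot):
        # If the sensor reaches the threshold, notify the pilot; the pilot
        # may then choose to release halon.
        if temp_c >= OVERHEAT_THRESHOLD_C:
            send_to_pilot("OVERHEAT_WARNING")

    def on_pilot_command(command, release_halon):
        # If the pilot sends the release command, release the halon.
        if command == "RELEASE_HALON":
            release_halon()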
Just because you do things that require lots of RAM, disk space, and fast cores, don't assume everyone else needs that. I used to do a lot of graphics work, and a laptop barely did the job. A full tower was what I needed (especially as I wanted to use CUDA and/or OpenCL and move a lot of the associated numeric work to the GPU). If I were doing machine learning I'd be in that same boat. But I'm not, and most of us aren't.
Because being stationary isn't that great for creativity and problem solving.
Additionally, there are whole careers being made on the ability to work while on the move.
And even when not traveling, we are expected to move across the building and join other teams for collaborative work.
What if your laptop is ARM but your production environment is x86?
You might think "well, I'll have ARM containers on my laptop and x86 containers in production", but then it means that dev and prod diverge: you're running different software that might have different behaviors (see the sketch below for one way to at least flag the mismatch).
One of the core ideas of Docker was that you could ideally have the same artifact in development, testing, staging, QA, and prod environments.
There's little way around the core problem: macOS is badly suited to running Docker containers. The most performant way to run Docker containers is natively on GNU/Linux (hence: GNU/Linux laptops).
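One way to at least surface the divergence is a startup check that compares the architecture the code is actually running on against the one production targets; here's a minimal sketch, where EXPECTED_ARCH is a hypothetical environment variable you would set in your deployment config:

    # Minimal architecture-parity guard for service startup.
    # EXPECTED_ARCH is a hypothetical env var naming the production architecture.
    import os
    import platform
    import sys

    expected = os.environ.get("EXPECTED_ARCH", "x86_64")
    actual = platform.machine()   # e.g. "x86_64" on Intel, "arm64"/"aarch64" on ARM

    if actual != expected:
        print(f"warning: running on {actual} but production targets {expected}; "
              "behavior may diverge", file=sys.stderr)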
That's what CI is for?
> it means that dev and prod diverge
It just means you're using a different architecture for your local development. Your DEV & TEST envs and your automated CI/CD runs can still use x86. Sure, it's a bit inconvenient, but for most types of development it won't matter much, as the stack doesn't compile to native anyway. I don't care if Java, JS, or whatever runs on x86 or ARM.
I use Docker to run ALL dependencies for ALL the stuff I develop, as a way to keep things really cleanly organized and to avoid having to figure out how to totally purge random installs of Postgres etc. from my system (see the sketch at the end of this comment).
I also run a couple things 24/7 in docker: e.g. pihole
A huge slowdown or hugely increased CPU usage (and it's not good as-is) for running stuff in Docker is a non-starter for me. If that's how it works out, that would be the nudge I need to seriously investigate a Linux laptop.
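As mentioned above, the workflow is just containers for every dependency; here's a minimal sketch of the idea using the Docker SDK for Python, with a placeholder image tag, password, and port:

    # Run a throwaway Postgres for local development via the Docker SDK for
    # Python, so nothing is ever installed on the host. Values are placeholders.
    import docker

    client = docker.from_env()

    db = client.containers.run(
        "postgres:13",                                # placeholder image tag
        name="dev-postgres",
        detach=True,
        environment={"POSTGRES_PASSWORD": "devpass"},
        ports={"5432/tcp": 5432},
    )

    # ... develop against localhost:5432 ...

    db.stop()      # stopping and removing leaves the host exactly as it was
    db.remove()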
Jokes aside: the problem being discussed here is not organization, it's running Docker containers (with Linux binaries inside) on macOS, on x86-64 today and ARM tomorrow.
Yeah, I think that's why making any blanket statement about this stuff doesn't work. Personally I went through a phase of Docker-ising everything I did because it made so much sense but really it just got in my way. My development work mostly consists of Node and Rust, both have self-contained package systems that mean encapsulating everything in Docker is unnecessary. But that's not the case for everyone!
I'm still using VM images, and I bet in 5 more years everyone and their dog will be doing multi-scale lambda functions on cloud runtimes with complete disregard for the underlying OS.
"enable developers to write code on their ARM Macs, test across devices locally, and deploy to iOS devices and server-side"
I believe a step is missing, "Apple approve/sign the code".
Frankly, I find this less troublesome than Windows Defender refusing to run apps from indie developers who don't have a good enough "reputation score".
What nonsense is this?
No question that iOS dev will be better with these ARM Macs. As for development on the Mac for other platforms, it's hard to say, but my bet is a world of pain for the next few years.
The potential answer is to go the AWS route: build up a cloud platform using the same CPUs, move all of Apple's cloud infrastructure onto it, and rent the rest of the capacity out to the public.
With this, plus the sales from high-end Macs/MacBooks, the R&D overhead for high end CPUs might make sense.
Before AWS, I didn't expect Amazon to become the power that it is with regards to cloud either.
Perhaps Apple can provide some unique attributes of cloud infrastructure that's specifically useful to people who otherwise operate within their ecosystem, without feeling like they need to compete with Amazon/Google/MS in every aspect.
Audio and video, on the other hand, are very latency-sensitive. Physics may prevent bringing those things to the cloud.
No it's not. This is a negligible advantage for iOS developers (I am one and don't care about this at all). Improving the developer experience a bit does not bring enough value to Apple to make such a risky and expensive move.
Amazon has a very good ARM64 cloud offering with their M6g instances. Pure ARM64 server-side Linux stacks are not yet mainstream but the ARM64 Macs will help.
It makes me think that Intel's issues are substantially bigger than some process problems, and that maybe there's an architectural, generational issue to be sorted out. These are two big, high-profile customers that have lost faith.
There's not much way around it.
I would have thought running arm64 Docker images would be the best way forward.
As far as I'm aware, there are no universal Docker images.
You are not going to be able to run an x86 hypervisor/VM on Apple Silicon Macs. This is very clearly laid out by Apple: https://developer.apple.com/documentation/apple_silicon/abou...
It's about Rosetta... on Apple Silicon: "Rosetta is a translation process that allows users to run apps that contain x86_64 instructions on Apple silicon."
The original post pre-supposes that you're going to be able to run x86 VMs on Apple Silicon Macs. You can't: "Rosetta doesn’t translate the following executables [...] Virtual Machine apps that virtualize x86_64 computer platforms"
There will be other applications, probably not created by Apple, that will provide general-purpose x86 emulation on Apple Silicon.
But it just works (most of the time). I can worry about getting shit done, instead of messing around with web servers, interpreters, and package managers (it's fun the first couple of times, but then I want to get to the code).
The problem is that Docker on Mac sucks, because it’s basically a VM running Linux, and it eats up most of my 8 gigs of RAM (yeah I know, 2020).
Maybe, what Apple really needs is something like WSL2.
Docker is useful because of those thousands of companies publishing "official" images. You can't expect them to publish those images for every possible platform. They target Linux x64. Maybe they'll target Linux ARM, maybe not.
I'm intrigued by macOS on ARM. Intrigued enough to buy an ARM Mac when they come out.
I'm not all-in on this by any means. It's more accurate to say I'm cautiously optimistic.