I wonder if the Apple Silicon decision is going to have a large impact on which CPUs are used in the datacenter. I (and I suspect many others) would just default to x86_64, because that's what I develop on and compile to. I suspect many developers on Apple computers will start to look at the differences and choose whatever meets their requirements while minimizing costs. Should this trend continue, it could even affect the whole x86_64 ecosystem, as the datacenter CPU market has a large effect on the high-end consumer CPU market.
That moment is now here. Sure, it's only a subset of the market, but Mac devices are a significant percentage of developer computers (28%), way higher than in the general population (17%).
You don’t necessarily need a majority to influence the population. Nassim Taleb has described several examples of “intolerant minorities” that influence the behavior of the whole. I wouldn’t be surprised if a similar thing happens with ARM.
Well, I do hope that in this case the “intolerant minority” will drive cross-compilation and platform independence of code. It would be sad - and in the long term unsuccessful - if it just Balkanized the development landscape.
Yeah, it's quite fortunate that just as everyone is dropping 32-bit x86 support, aarch64 comes along and prevents the ecosystem from hardcoding x86_64 too much. It keeps the computing landscape from assuming too much about the architecture.
False. It is not always reasonable to adapt; for example, when adapting to the world would worsen it. Changing "reasonable" to "passive" and "unreasonable" to "active" resolves the inaccuracy in your statement, and more correctly names the risk inherent in passively adapting.
My point being that Apple's adoption of ARM is irrelevant for servers; Apple products aren't even available in most countries below tier 2 in quantities that developers would care about.
They will either be using Windows in some form (even if pirated), now with Linux VMs as a standard feature, or some Linux variant.
>My point being that Apple's adoption of ARM is irrelevant for servers; Apple products aren't even available in most countries below tier 2 in quantities that developers would care about.
(a) It's not those countries that drive trends (if they did, we wouldn't have heavy frameworks and 10MB of SPA bloat as the norm in web development),
(b) ARM was already trending for use in servers (heck, even supercomputers),
(c) and Apple's transition means tons of devs in the West (and elsewhere; e.g. I've worked outside of Europe and the US, and everybody had a Mac at several different companies) will end up with an ARM chip and start influencing server trends,
(d) At first many of those devs will go to ARM for personal server needs, the way people use Digital Ocean, Linode, and other VPS providers for tons of personal/small-business products today,
(e) As for Windows, it won't stay out of the ARM game long (it already has a leg in). And even your developing-world devs will easily find cheapo ARM computers for Linux/Windows.
I happen to care for fellow developers, and outside our Fortune 500 customers' orgs, I haven't seen anyone being so lucky; and even then, Apple hardware tends to stay at MacBook Air level, unless they are doing iOS development.
I have seen dev teams where iMacs are timeshared across the team.
>Lucky you to work for such wealthy organisations.
Are we talking about enterprises here, with, say, millions in revenue and a few million in profit, or small shops that can't afford webservers and maybe use some shared hosting?
>I happen to care for fellow developers, and outside our Fortune 500 customers' orgs, I haven't seen anyone being so lucky
Well, outside of the virtue signalling (and/or the implication that I don't care for fellow developers), this is neither here nor there.
It's not developing-world SMEs that set industry trends, nor have they historically done so.
And we were talking about whether there will be a trend towards ARM in datacenters (helped by the fact that devs will now get more familiar with ARM, through the M1 and similar ARM moves from MS).
It's not about whether Macs (or ARM) will be the architecture of choice for cash-strapped companies in Freedonia.
(Plus, IT demand and the ability to work/compete across the world make devs an exception relative to their country's average wages. A local dev wage might be 1/2 or 1/5th of what a dev gets in Silicon Valley, but it's not 1/10 or 1/20 - as the national average across professions can often be. And such salaries still let them afford nice tools: Macs, high-end PCs, or similar.)
(Also, Fortune 500? I've never worked in one such company, and the ones I did work for are more like Fortune 1,000,000, but Macs are still extremely popular with devs there.)
>OP was the one asserting how M1 is relevant for the data center.
And I think his assertion is on point, in the sense that the widespread availability of desktop ARM machines in developers' hands will help drive ARM adoption in the datacenter (and that's just the M1/M... line, not to mention further ARM adoption by MS that's also possible).
For what it's worth, Linus Torvalds made a similar point regarding the influence of developer architecture choices on server architecture choices (especially regarding ARM).
I think the objection "but developing world / poorer companies don't use Macs/M1" is valid but not really relevant; it kinda moves the goalposts. It's not those companies that set industry trends (and surely not in the US).
>As for the rest I won't bother to reply.
Well, I've replied to the arguments, with what I think is the case (and what I've seen).
Not sure if you were bothered that I took offense to the "I happen to care for fellow developers" line. But it sounded like an insult that wasn't called for, and was not relevant to the discussion.
Given the rest of their comment, I thought "care for" meant "support". That is, system administration and developer support, rather than "care about"...
Ah! I thought it was about the parent "caring for" fellow less compensated developers (who can't afford an M1), vs me not caring and dismissing their importance in my argument.
A Mac mini might be $700/€800, but 1) that's for an inadequate configuration; you need €1260 for something usable, and 2) the cost of integrating the first Mac into a Windows-optimized infrastructure is way above that. If you want things like Kerberos SSO (i.e. Active Directory), you need an Apple-blessed MDM system to provision it. That one isn't included in the price of the Mac mini.
So at most companies, if they have a Mac, it's one off to the side, so they can avoid 2).
(How do I know? I'm the only guy using a Mac in our org.)
The bit that I've been impressed by is the lifespan. Machines used every day for 8-13 years and still OK. The amount of pain they cause as various bits of software begin to fail and workarounds become necessary obviously grows, and the security side of things isn't great either. But the machines end up rather cost-effective when spread over that many years.
Not sure about minis, but that was true for MacBooks ten years ago, when you could easily upgrade memory, storage, and batteries, and you could install a recent and fully working Linux distro when Apple stopped supporting you. That unfortunately hasn't been true for a while. My mid-2010 MBP is still perfectly usable; I don't expect my M1 Air to last more than a couple of years, maybe half a dozen.
It's a chicken and egg problem. Generally though, I expect ARM to be cheaper in the long run, as there are more companies that can build ARM chips. Even if the users of those clouds won't see the difference because of profit margins, it's in the interest of Azure, AWS, etc. to push users towards the cheaper hardware. Already, the fact that they use ARM at all improves their position at the negotiating table with Intel and AMD quite a lot.
Did you work on 88K with DG/UX? I loved that OS. All my 88K work was done on a dumb Wyse 60 terminal logged into one, and I’d busily UUCP my files between the various machines depending on which ones I was working on.
But things are a bit different now, it’s much more convenient to work on a laptop, so most people do that. Maybe the pendulum will swing the other way and we’ll start timesharing again, with laptops as terminals instead of standalone machines, but even if that’s possible in theory, there are few providers of ARM in the cloud at the moment. It’s a chicken and egg problem and I think Linus is right, it’s going to take pressure from devs, most of whom IMO don’t care enough to make a difference.
The tipping point will be when deploying to x86 becomes annoying (for some measure of annoying). Right now, even though I'd personally like to deploy to ARM (or at least give it a red hot go), my cloud provider (Vultr) doesn't support it yet, and I don't personally plan to go ARM on my desktop until next year. But I'm a CPU enthusiast. Most people wouldn't bother to go to the extra effort.
I worked with 88K DG/UX. Back in those days our company built a desktop environment for UNIX workstations (X Windows didn't come with one as standard). We had over 20 different versions of UNIX systems, including a Sony workstation.
88K DG/UX didn't last very long before they switched to x86, which we also had. Under my desk I had Motorola's 88110 system (a stackable with a SCSI floppy disk!) The thing that stood out most about DG was just how nice all their staff were. I mean really nice.
Even in those days it became very apparent that whatever workstations developers worked on ended up best supported in the products, even for non-GUI software. Slowly but surely the workstation vendors retreated to server-only support, while Windows spread from workstations to servers even though it was considered inferior. Linux also spread for the same reasons.
Yeah, I worked on AViiONs and CLARiiONs in healthcare. I think we had the first DG RAID 5 array delivered in Western Australia; there's a photo of me and my boss standing proudly on either side of it as it came off the back of the truck. All 2.5 GB of it!
The DG gear was awesome, the OS was great and even the support was terrific. Such a shame they died.
I never considered the rise of Windows as a development platform as being the driver for that failure but it makes sense. I used to work on X terminals and then Linux workstations, and with DG being strongly into GNU tools I didn’t see the Windows side of things at all until much later in my career.
But this really supports the idea that if devs move to ARM, so will the cloud.
It was the UNIX used at the university compute center when I arrived there, and what we got to use for the first couple of years, alongside Solaris in our own department.
We used IBM X Windows terminals, and green-phosphor text ones, to connect to them.
By the time I graduated, they had been replaced by Red Hat Linux.
Does it really matter these days? 99% of development is done in platform-agnostic languages where CPU architecture doesn't matter anyway. Even endianness is mostly hidden away.
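To illustrate the endianness point (a minimal Go sketch of my own, not anything from this thread): as long as you serialize with an explicit byte order, the host architecture never shows up in the output.

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    func main() {
        // With an explicit byte order, the encoded bytes are identical on
        // x86_64 and aarch64 alike; host endianness never leaks out.
        buf := make([]byte, 4)
        binary.LittleEndian.PutUint32(buf, 0xDEADBEEF)
        fmt.Printf("% x\n", buf) // ef be ad de, on any architecture
    }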
That's assuming all current Mac developers decide to switch to ARM Macs. I (a Mac developer of 13 years) have switched away to Linux. I'm sure there are others like me.
The desktop experience on macOS has gotten worse with every release while the desktop experience on Linux just keeps getting better and better. Fedora, openSUSE Leap, Ubuntu all offer a great experience.
Are Macs still shipping with severely outdated CLI tools? 10 years ago, Mac users had only just gained the ability to resize a window by any edge or corner... How's that really good window management system working out for ya?
It's shocking that the Mac UX is so good that 75% of the developers out there would rather not use one... Meanwhile, developers are the group that uses Macs more than the general population does, so both groups obviously disagree that Macs have a good UX at all, lul.
Windows is still shipping without CLI tools, and Linux distros also mostly don't ship with build-essential, headers, -dev packages, or every language toolchain. Every system requires setup, and macOS shipping with older GPL2 packages puts it squarely between the two alternatives in terms of setup and configuration cost.
As for the WM differences, it's just a different way of thinking about it, or a preference. None of them really bother me, personally.
Every system requires setup, but only Linux systems will remain exactly as you have set them up year after year, if you wish. During the last 2 years, out of all the Linux, Mac and Windows systems in my house - the Macs have been the only systems broken in some major way by an OS update.
Having to remove some dangerously old GNU utils and constantly having to contend with Apple over your rights puts macOS squarely behind Linux and only just ahead of Windows (for work) in my book. If you haven't used Linux in years, try a rolling Arch-based distro like Manjaro and you'll experience how little setup you actually need to get your job done. Heck, you don't even have to use the CLI to install every single piece of software you'll ever need. It literally takes minutes to set up. Windows and Mac simply can't compete in this area.
WM differences don't necessarily annoy me, but I don't tolerate people saying shit like "Mac UX is better than anything, period" because that is demonstrably not true for many, many, many values of "better". I have all three at home and at work, and I use each one for what it's best at: Windows for gaming and entertainment, Linux for any sort of development, Macs for Apple/Mac/iOS stuff. They each have some obvious strengths, but anyone arguing that Macs are uniformly better for developers is clearly inexperienced IMO.
I think it depends on what we’re developing on our Macs. I’m writing Elixir, Ruby, JavaScript, and a little C at work. My work Mac is a 2018 MacBook Pro and my personal Mac is a 2015 MacBook Pro. Both are reasonably fast to get stuff done, and honestly I’m waiting for everything on Homebrew to be converted over before I make the switch.
Now, cross-compiling does suck. I have a few Raspberry Pis in a Kubernetes cluster at home and am looking forward to the native compilation that this new release provides.
Not the OP, but I've been on an M1 Air for a couple of weeks. It's startlingly good. It's been a lot of work to recompile our various dependencies for an ARM64 VM, but overall the machine is hands down the best I've ever used.
Went through the same process except with an M1 Macbook Pro. My coworkers volunteered me to see if we could get our dev stack building and running on ARM with hopes they could eventually dump their older x86 Macbook Pros, which run loud and hot.
It was a bit of a process but once we got it working, developing on the M1 has been great. Captured notes on the process here in case it's helpful for anyone later:
https://authzed.com/blog/onboarding-with-an-m1/
More than anything: starting on a new team, sorting out the provisioning process, meaningfully contributing to the team, and doing a write-up for their blog. I'd wager whoever hired you is very pleased.
I like the native POSIX terminal. Much of the work I used WSL for works natively on Mac anyway.
The battery life is amazing, because I like to work away from an outlet occasionally when my mind needs it. This is the first laptop where I’ve felt comfortable leaving the house (for a full day of work) without a charger.
No fan noise, no heat, no throttling under typical workloads, and no worrying about blocking the intake vents like I did on my thinkpad.
I was concerned about the architecture support at first, but Rosetta 2 has been 100% seamless, with the exception of some Steam games I tested out, which I don't care about anyway since this is 100% a development machine.
I don't do super intensive workloads, but the M1 has been way faster for some of the Node.js work I've been doing. Some of my Node.js workloads are seeing a 10x performance increase over my i5 ThinkPad.
And of course, you have to love the Apple trackpad. I tried a few different Bluetooth mice, but I ended up just using the trackpad instead because the precision feels better than even an MX Master.
Why does it matter what CPU one is running on their desktop? In my laptop I have a consumer Intel CPU and in my desktop an AMD one, but my servers run on Xeon and ARM.
Not even for a second did I think: oh, I have an Intel desktop CPU, so I need to avoid ARM.
As long as the project compiles for a different architecture and you have good test coverage, the only things that matter are performance and cost.
Just one datapoint, but we've had to cross the ARM bridge in the last couple of weeks after getting a few M1s. It's taken a lot of work getting all our dependencies compiling, but now that we're there it's definitely compelling to start looking seriously at Graviton2 machines for production.
So it matters in that not everyone currently has tooling working for ARM, but Apple silicon is going to push that along real fast.
Cross-compiling isn't fun. If my next startup is powered by a high-performance C++ core that's built on x86, you bet I'm going to run it on x86 instead of trying to cross-compile it for ARM; for what benefit, by the way?
Likewise, if I run ARM locally, I'll deploy on ARM.
Of course this does not apply to interpreted languages or any of the modern languages (Go, Rust, Zig) that make cross-compiling _somewhat_ easier.
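For what "easier" looks like in practice, here's a minimal Go sketch of my own (assuming a cgo-free project): the target is selected with nothing but environment variables at build time, and the same source yields a binary for either architecture.

    // Trivial cgo-free Go program; cross-compiling it needs no extra
    // toolchain, just environment variables when building:
    //
    //   GOOS=linux GOARCH=arm64 go build -o app-arm64 .
    //   GOOS=linux GOARCH=amd64 go build -o app-amd64 .
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // runtime.GOOS/GOARCH report what the binary was compiled for.
        fmt.Printf("built for %s/%s\n", runtime.GOOS, runtime.GOARCH)
    }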
To be honest, I think the desktop question is misleading. x86 to most is a known quantity; there are very few specific reasons to choose ARM instead of x86.
But if desktop ARM becomes commonplace, it's a good indicator that ARM is finally competitive for "serious" workloads, and people might invest in it more.
As things stand right now, and IMO, it's getting there, but Apple having the M1 is not enough for me, as a CTO, to look into ARM servers, except as a curiosity.
I like my development environment to be as close as possible to production. I guess Docker for M1 will run ARM versions of containers? Then I can run the same containers on the server.
Totally. We've spent the last 2 weeks getting all our dependencies recompiled for arm64, and now that we're there I'm pretty keen to see what our bang for buck looks like on Graviton2 machines (we've been using 64-core c6g.metal machines for compiling).
There's just one final remaining vendor dependency that we're running via qemu on x86 that's holding me back at this stage.
I suspect you're right. I needed a cheap offsite host for running some monitoring a couple of years back, and deployed a small ARM host (via Scaleway, I think).
Running docker containers on it was OK, and cross-compiling my golang monitoring stuff was easy enough.
But over time I just wanted to use my existing pipelines and deploy x86 stuff. So in the end it was a novelty, and while cheaper than an amd64 host it was eventually replaced so that I had a consistent set of deployment targets.
I think it will. I am personally targeting arm64/Graviton2 on my side projects because the cost/performance improvement is noticeable. I think this will only snowball for adoption in work environments.
I've used the preview release, and my experience with ARM containers has been good - even seems a little more stable than the Intel one.
But I only did an x86 Linux container as a test, and it wasn't a great experience. It seemed visibly slower and crashed once.
So whether or not this suits your needs, I feel, depends on to what degree you need the "contract" that your local environment is the same as prod, because you're probably using x86 Linux/BSD boxes.
Yep. Given that the fastest Windows-on-ARM device is an M1 MacBook Pro running Windows in Parallels, there's currently no competition among local ARM platforms.
If Amazon's and Nvidia's ARM server chips really pick up steam, the M1 MacBooks would be the premier local device.
Reminder: unlike the docker command-line tool and the dockerd server container management tool, the Docker Desktop products are a) not free software, b) not even source-available, and c) send a lot of sensitive data about your system back to Docker Inc. without consent.
I didn't realize this for a while myself, as the docker CLI tools are all free software. The "Desktop" variants are not, and they embed spyware.
Amusingly if you go to the docker desktop download page while using Linux (in my case Fedora) you get "download for: Unknown OS" and if you click on it, it redirects you to "https://one/" which of course fails. Not exactly confidence inspiring.
Thank you for highlighting this issue. I certainly think it is something users should at least be aware of, and Docker doesn't really do a good job of explaining the distinction.
Depends on how much of a replacement one is expecting (I guess it's like all commercial-versus-open-source software in that way), but docker-machine and VirtualBox are a low-drama option. There used to be a working setup that used xhyve (on top of Hypervisor.framework), but I had woes that I don't recall the details of right now, and even when it did work, the networking was much harder to reason about than with VirtualBox.
The other advantage of docker-machine is that you can also ask for a cloud instance to run dockerd; that's actually how gitlab-runner does it.
Replacing Docker for Mac's Kubernetes support is done with either kind or minikube, depending on one's desire to be on a modern version versus "yeah, yeah, just run the VM and make kubectl work".
None of those setups come with any GUI/menubar widget support or similar hand-holding, which is why I started this the way I did.
I use minikube for local k8s dev, but you could also use it as basically a local Linux VM for running docker. It is (slightly) less ergonomic than Docker for Mac, but not by much. For the most part I use the minikube VM for everything docker-related, since it's annoying to run both a VM for Docker Desktop and one for minikube.
The whole point of docker was to have reproducible builds: what runs on the dev machine also runs on the server. But M1 Macs change that in a major way. There might be bugs in, or features missing from, the ARM versions of popular software, so what works on a dev machine might not work on the server, and vice versa.
I disagree on the importance you’re putting on what I think are edge cases for most development. When I’m building a binary, the biggest issue I tend to run into is the “machine” I’m building it on. Docker allows that “machine” to have reproducibly equivalent dependencies. Same version of Node, .NET, Java, etc. I don’t see how the kernel really matters that much here. I’ve never once cared what Linux kernel I’m on, but I can’t count the number of times I’ve had to figure which exact version of Cordova I need with Ionic or whatever mess of dependencies I need.
Maybe, kinda, but really you'd want reproducible cross-compilation too, so a reproducible build should produce the same binary regardless of which machine the build runs on.
"Identical binaries" is not the same thing as "reproducible builds". It's maybe one aspect of one type of CI/CD, but I would say not even the majority.
> A build is reproducible if given the same source code, build environment and build instructions, any party can recreate bit-by-bit identical copies of all specified artifacts.
I get what you're saying, since it's a different architecture entirely, but it's not as if you run EXACTLY the same hardware as your server (e.g. ECC RAM, number of physical processors, things like AVX-512, etc.).
For that use case, you should continue to run x86 images. Unless there is a bug in underlying virtualization frameworks, you’re extremely unlikely to ever see that kind of issue.
I don’t think MacOS and iOS are similar enough for this to be an obvious jump. I think the iOS kernel is also prohibitively locked down to do anything like custom virtualization.
I’m just dreaming out loud here but I would love for the iPad to allow for a developer workflow similar to Fedora Silverblue. Perhaps not the layered OS images but the containerized sandboxes (toolbox) and the App Store instead of Flathub.
For me - the various things I use Docker for now: firing up containers for ESPHome and related projects. Testing things out is great, and doing it more readily sounds neat. It can be done in a fairly simple fashion via a GUI (e.g. the Synology implementation). I've just finished messing about with that Grafana speed-test project that was featured here last week, and doing it on iOS could be good.
The purpose of Docker is to containerize specific applications, not to run general-purpose virtual machines. There are Windows Server base containers for Docker, but the point of them is to do things like deploy a server (that just happens to be on Windows Server rather than Linux) as a Docker image.
I went and checked the issue when I saw they'd made a formal release, but it doesn't appear to be resolved. So maybe keep it in mind if anyone has issues trying to use docker now :)
Edit: I see Docker has closed the issue and encouraged people to simply use ARM containers instead. Unfortunately that's not an option for cross-compilation when you're using x86 servers. Something for people to keep in mind when deciding what laptop to buy next, I guess.
I don't know what language you're using on Lambda that requires x86 compilation, but if you search the web for "cross-compiling" you'll find relevant results. For example, here's someone using Graviton ARM to compile Rust for Lambda x86: https://burgers.io/cross-compile-rust-from-arm-to-x86-64
You don't actually need to run the same architecture you compile for; you just need a toolchain and environment that match your target, plus a compiler for your machine's architecture that knows how to emit code for your target architecture.
It sounds more complicated than it is, really. How you go about it might vary by language, but as Docker for M1 takes off, you'll see more and more guides for doing "hard" things like compiling to x86 - see the sketch below.
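To make the "toolchain that matches your target" point concrete, here's a hedged Go sketch of my own (not anything from the linked post): pure-Go code cross-compiles with environment variables alone, but once cgo is involved you also need a C cross-compiler for the target. The compiler name below is an assumption; it varies by distro and toolchain vendor.

    // Cross-compiling *with* cgo, building from arm64 for linux/amd64.
    // The build command might look like this (x86_64-linux-gnu-gcc is an
    // assumption; the cross-gcc name varies by distro):
    //
    //   CGO_ENABLED=1 CC=x86_64-linux-gnu-gcc GOOS=linux GOARCH=amd64 go build .
    package main

    /*
    #include <stdint.h>
    static uint32_t add(uint32_t a, uint32_t b) { return a + b; }
    */
    import "C"

    import "fmt"

    func main() {
        // The C side above is compiled by the cross C compiler for the
        // target architecture, not by the Go toolchain itself.
        fmt.Println(C.add(2, 3))
    }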
Thanks for the pointer! I tried cross-compiling without docker and hit that exact same problem with the Ring dependency. This blog post seems to have a potential solution; I'll give it a try. I do recall trying a million things, however.
Anyone have suggestions for good tutorials or courses on getting up to speed with Docker? I've procrastinated on learning, in part because the Mac app wasn't ready when I initially looked at it years ago.
You mean simply having Rosetta installed on your machine noticeably slows down everything else, even if you're not using an app under Rosetta? This sounds like a bug.
Probably. But the difference in performance for IntelliJ was night and day, and given the inability to uninstall Rosetta and A/B test, I don't really want to play with it again.