For those talking about wanting to wait till docker support is 5x5, there’s another workaround you can go with.
Buy the M1 and switch to a fully remote development experience. Spin up a DO instance, wire up VSCode Remote, and run docker on that machine.
While it does require an internet connection, I’ve found remote development to be really fantastic as an option on a personal “underpowered” laptop. Especially if you are running a cluster of services or large databases, I find it’s far more productive to offload the running of my code to a VM in the cloud.
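For anyone wanting the concrete wiring, here's a minimal sketch of pointing a local docker CLI at a remote daemon over SSH (the address is a placeholder; assumes Docker 18.09+ on both ends and key-based SSH auth already set up):

```shell
# Placeholder address; substitute your own droplet/VM.
REMOTE=dev@203.0.113.10

# A named Docker context that tunnels the API over SSH (Docker 19.03+):
docker context create do-box --docker "host=ssh://$REMOTE"
docker context use do-box
docker ps   # now lists containers on the remote daemon

# Equivalent ad-hoc form, no context needed:
# DOCKER_HOST=ssh://$REMOTE docker ps
```

VSCode Remote-SSH then gives you the editor side of the same arrangement: the editor UI runs locally, everything else runs on the droplet.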
I get that remote development is a thing some people prefer, and if you have an underpowered machine it can be great to just use it as a thin-client and get all the power and performance you need without running down your battery or spending a fortune on local hardware.
BUT... you're recommending this as a workaround for limitations in expensive "Pro" local development machines, as part of an encouragement to go ahead and buy them.
Why not follow your own advice and buy an old refurbed MacBook? Or any 2nd-hand machine of any brand. Or a low-end Chromebook.
To clarify: I'm not saying remote dev is bad, or you shouldn't recommend it. It's just really really weird to do so in this specific context.
It's bizarre to want to run Docker on a Mac anyway. Docker has never run natively on a Mac and never will. The way it works on Intel-based Macs is through the xhyve/hyperkit hypervisor, i.e. it virtualises Linux. That's the only way it can work. It's also why you can limit how many CPU cores and how much RAM Docker can use: you're just passing those limits to the underlying virtual machine through the Docker settings app. And it's a memory hog. It will eat RAM up to the amount you specify rather than dynamically allocating and deallocating per the requirements of each running container like on Linux.
It shocks me that so many people want to run Docker on a platform it was never built for. The suggestion above is not bizarre; it's a normal use case: run Docker on Linux like it's meant to be run.
Docker for Mac has had performance issues for years, especially related to file I/O. The common suggestion is to use NFS to get around those, which is just ridiculous when you think about it. [0]
> It shocks me that so many people want to run Docker on a platform it was never built for.
surely you can imagine a situation in which a developer prefers to develop on a mac but wants to deploy to a linux server though? that's the use case here.
Seems like the sensible solution is to have a Linux VM and push everything into the VM to run. Which is what Docker on Mac apparently does, but with less control than doing it yourself.
This is the exact workflow that VSCode w/ remote is supposed to enable: you just connect to the VM and everything works. VM could be somewhere else. Or it could be on the local machine. Doesn't matter.
Why would I want that control when Docker on Mac does everything exactly the way I would do it if I set it up myself?
I don't want to maintain my own special-snowflake VM just to run docker containers on it. I want a dumb cattle VM running Container OS or an equivalent, whose only job is to run containers, ephemerally, connected to my host. The less it can do other than that, the better. In fact, best if it gets blown away on every restart.
Any statefulness is to be avoided, because a stateful Docker host might mislead me that my images will work in prod, when they're really dependent on something in my dev VM. Caches, databases, message queues? In development, they're ephemeral. Blow those containers away, please. I don't want any persistent volumes, host mounts, any of that. This is development, not deployment. Any data is just there to verify that the code is doing the right thing. Why would any such data need to live longer than the containers producing+consuming it?
(In the rare case that I do want a persistent database, I run it on the host. It's already a special snowflake; may as well treat it like one. As a bonus, this forces me to ensure my applications can target external resources using connection-string env-vars — which is almost-always relevant in production, where the DBMS is going to be some cluster external to the compute environment.)
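As a tiny sketch of that pattern (the variable name `DATABASE_URL` is just a common convention, not anything mandated by Docker):

```shell
# Fall back to a local dev database when no connection string is injected;
# in production the orchestrator injects the real cluster address instead.
DATABASE_URL="${DATABASE_URL:-postgres://localhost:5432/devdb}"
echo "connecting to $DATABASE_URL"
```

The same containers then run unchanged in dev and prod; only the injected value differs.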
> VM could be somewhere else. Or it could be on the local machine. Doesn't matter.
I don't know about docker itself, but I'm using Docker on Mac for Kubernetes "application" development. Targeting my commands at my local Docker-on-Mac k8s cluster vs a remote one is just a `kubectl config use-context` away. (Or you can do the same from the Docker on Mac tray menu.)
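For reference, the switch itself is `use-context` (context names vary; `docker-desktop` is what Docker Desktop's bundled cluster registers as, and the remote name below is a placeholder):

```shell
# List every cluster/context kubectl knows about:
kubectl config get-contexts

# Point kubectl at the local Docker Desktop cluster:
kubectl config use-context docker-desktop

# ...or at some remote cluster (name is a placeholder):
kubectl config use-context my-remote-cluster
```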
It's not the same flow. What if I want to use docker-compose to spin up some services locally and test them? How does a linux VM help me compared to just running docker-compose up and having the details of needing a VM abstracted away?
> It shocks me that so many people want to run Docker on a platform it was never built for.
Yet, it works, and well, your shock notwithstanding.
If your goal is a SINGLE portable, virtualized platform for your developers, which does not require a persistent internet/SSH connection, and which is "fast enough" for developers to be effective, Docker on Mac is a pretty swell solution.
And docker on Linux doesn’t support all the features that docker on Mac does. Specifically kubernetes.
You can dismiss this but if your dev shop is doing dev work that lands in a k8s cluster, it sure is nice to just have devs install docker desktop for Mac and they have a fully functioning k8s cluster that just works.
My whole company uses macs except me. I had to spend a bunch of time figuring out how to get a dev environment working on Linux. And it wasn’t easy as none of the out of the box k8s solutions support how we developed our build system. If I wasn’t a k8s admin from early versions of k8s I would have just given up.
It sounds like your company built its dev tooling specifically around Docker for Mac. It's not surprising that, as the sole Linux user, you had a painful time replicating your company's setup.
But that doesn't mean that Linux doesn't support Kubernetes well. Linux is the primary target for k8s! It runs more efficiently and has better support than Kubernetes on Mac. Minikube has worked as a dev cluster on Linux long before Docker for Mac shipped a k8s dev cluster.
(I, too, have been involved with Kubernetes projects since the early days, and have built k8s tooling at fairly large companies.)
> And docker on Linux doesn’t support all the features that docker on Mac does. Specifically kubernetes.
This is nonsense. We invested multiple man-months this year to create a local k8s dev environment for our company. Our devs use macOS and Linux. My colleague and I evaluated various solutions for running k8s locally, all of which worked fine on Linux out of the box, while the process of setting them up for macOS was riddled with issues (mostly around performance).
I am talking about out of the box features. Docker for Mac has kubernetes built in. Check a box and you have a k8s cluster. On Linux you need minikube, kind, or in my case, I built a custom k3s solution.
> BUT... you're recommending this as a workaround for limitations in expensive "Pro" local development machines, as part of an encouragement to go ahead and buy them.
Yeah, I came to say the same thing - if you're going to do remote dev anyway, why still bother buying a new Apple device?? Anything else will do fine too!
My desktop is an AMD Threadripper 2990WX running FreeBSD. My laptop is a mid-2014 MBP. I can build a kernel about 8x faster on the AMD than in a VM on the Mac. So when I'm away from my desk, I use the MBP to ssh to the AMD and do all my dev there.
My biggest problem with my current MBP is that the battery life is down to 1-2 hours. Less if I have a video call.
I could maybe just use a chromebook or windows laptop, but the Mac "just works" for all kinds of corporate stuff and is the path of least resistance.
You've had it since 2014, so the battery is tired; you get less and less usage out of a battery over time.
That's why professional laptops let you change the battery. As professionals, we use our laptops a lot, so the battery needs to be exchangeable without buying a whole new laptop, letting you restore the battery life you had initially.
Nearly any professional laptop (except the "modern" Apple ones) lets you change the battery without resorting to 3rd-party repairs. But I highly recommend you either have your battery changed in a repair shop or get a laptop meant for professionals; ThinkPads are pretty good in that area. And if you do a lot of remote dev, it doesn't really matter which one, as long as it has a good WiFi card so remote latency/jitter stays as low as possible.
I basically use the same setup (an old MBP paired with a beefy Linux workstation for remote development), and the only thing keeping me from migrating to a ThinkPad running Linux is that I'm still doing some iOS and Mac development. But so far remote development with VSCode is working great with very low friction. It feels like I'm working locally even when I'm outside my home network.
If you want the macOS user interface, just buy an Intel Mac — new, used, or refurbished; they are still on sale, they still work (and they support all the tools you want perfectly).
Yes and no. For starters, there's reliability and durability. There might even be resale value. Just because you don't want to go all in (i.e., MBP) - or can't - doesn't mean you want to own a piece of junk.
I'm not disagreeing with you per se. Simply pointing out there are other considerations when selecting hardware.
I wouldn't be willing to do my work on a used laptop from eBay. I want something that has a warranty so I can get a replacement if something goes wrong, ideally on the same day.
I don't think my manager would take it well if I told him that my used eBay Thinkpad is broken and I need to find a replacement before I can work again.
If you're diligent with your backups, I guarantee you can buy another eBay/Craigslist Thinkpad and get it working before you get a "Genius Bar" appointment.
That's IF your local Apple store is even taking appointments; none on Oahu are. Plus there's apparently a significant parts delay no matter where you get the repair done.
It took Apple 32 days to get my MBP 16" repaired, returned under Crapplecare.
Ah, fair caveat. I suppose the pandemic has changed things. I haven't bought myself a new computer in a few years, but last time I did, Craigslist was full to the brim with sellers.
Yeah this makes no sense. I replaced my work computer with a cheap fanless NUC and moved to fully remote development, but the whole point of that is being able to work on whatever cheap hardware I can get my hands on. Buying one of the most expensive laptops on the market and then not using any of its hardware is nonsensical.
Well, is it one of the most expensive? The MBA baseline can be had for 1000, the one I’d pick with 16GB for 1200. I challenge you to find me anything with such high build quality, screen and inputs, not to speak of a chip that is measurably the fastest and most efficient for any task you still want to do locally. I’ve been away from Macs for a couple of years, but these M1s are IMO a game changer.
> I’ve been away from Macs for a couple of years, but these M1s are IMO a game changer.
The interesting thing here is, is it really a game changer in practice for something like web development and every day computer usage?
I'm not here to start a mac vs windows war but I have a 6 year old i5 3.2ghz CPU (4 cores, no HT) with 16gb of memory and a first gen SSD.
I use this workstation for full time development / ops work on Windows with WSL 2 / Docker, etc..
Everything is still pretty damn fast and it feels no different than the day I put together the machine.
Opening Chrome from hotkey to being able to type takes a second. Opening a terminal feels instant. Working with Vim and 50+ plugins has no type delay. Disk I/O feels good. I can keep multiple VMs running, run multiple Dockerized large web apps in various web frameworks, open 20+ tabs in a browser and a bunch of other stuff and it doesn't break a sweat. Sometimes I forget that I have image editors and other stuff open in virtual workspaces too.
I also do a lot of screencast recording / editing. The only part of my workflow that feels sluggish at times is rendering videos, but that ends up being a non-issue because if I need to export 75 videos for a course I queue them up before I go to sleep and it finishes before I wake up the next morning. For 10-20 minute videos here and there I just render them before doing something where I go AFK (showering, eating, going outside, etc.).
I guess what I'm getting at here is, I'm not sure how a faster CPU will really help that much in my day to day besides crushing benchmarks.
Are folks running non-M1 MBPs experiencing slow downs in their day to day where they feel compelled to get one with an M1? Where do you see and feel the performance wins in practice?
Not sure about current Intel MBPs, but I've been forced onto ThinkPads for the last 2.5 years, currently an X1 Yoga. IMO this whole experience, coupled with Windows 10, is really crappy. The touchpad driver has issues with sleep mode, with no manufacturer fix on the horizon for multiple years now. So my organization's solution was to disable the lower power states; now even in standby it drains the battery in 6 hours or so. The screen is still 1080p, and even if it were higher res, Windows still has scaling issues in places. Settings are all over the place - since some update, the only place to make sense of your language/keyboard list is PowerShell. People give Apple shit for their QA, but I think most of them haven't used Wintel in the last 10 years.
> BUT... you're recommending this as a workaround for limitations in expensive "Pro" local development machines, as part of an encouragement to go ahead and buy them.
I agree with that in principle, but in practice Docker development on Intel MacOS already performs so poorly that it’s effectively broken anyway. I’ve actually been looking forward to Apple Silicon in the hopes that Docker starts over again and gets things working without the constant CPU pegging and 5-10x performance penalty.
I’ve been using Docker on macOS for the past 5 years and it’s been quite pleasant. I had a few times when CPU was spiking, and that was fixed with subsequent patches - definitely over a year ago. I don’t think “constant CPU pegging” is a typical docker-on-Mac experience?
I don't use Docker on Mac, but I'm curious how much of the performance penalty is attributable to VirtualBox. I use a VirtualBox-based Vagrant environment, and the performance is awful. It mostly comes down to terrible disk performance due to how VBox shares directories from inside the VM to the host filesystem. Apparently it can be fixed entirely by switching to local NFS, but I haven't had any success getting that working.
I explicitly bought the top of the range i7 based 2020 13” MacBook Pro for this reason; while I guessed that the Apple Silicon MBPs were going to be even faster, I rely entirely on Docker and other fiddly dev tools for my job, and I figured the edges wouldn’t be smoothed out for at least 12 months or so.
Is it a bizarre suggestion? The laptop seems pretty wonderful by all accounts. Low power draw, amazing battery life, great OS (I’m still a fan), great mouse, I like the keyboards (blasphemy I know)...
So, if I like all those things, but I can’t build a docker image locally (for now), why not just do something that will potentially improve my development experience in addition to allowing me to use this machine?
I mean, folks should do what they want. But, I’d say calling this recommendation bizarre to be a bit over the top :)
It’s certainly weird to spend $1-2k on a high end laptop and then have to do workarounds to run one of the more important pieces of software in the current technical stack.
I personally would wait until Docker works, if that was a problem for me.
I usually draw the line for local vs. remote at spinning up multiple services. In particular, if I'm using something like minikube to do end-to-end testing (I develop tooling on top of Kubernetes), remote development is far superior because I can offload a bunch of the heavy lifting away from my development machine. It's pretty easy to peg one's CPUs using Docker & K8s, resulting in CPU throttling and just a slower experience locally.
If I were in IT, I'd not be buying M1 laptops right now for the company. However, for a personal development workhorse, I'd jump on it. I wouldn't miss local docker.
An often-repeated thing is that you should buy tools for what they can do right now, not what they might be able to do in the future. There is nothing wrong with waiting a bit more.
One of the great things about docker is/was that one can develop locally and the same thing would run when deployed. Having to run it on a distant machine kind of ruins this.
> One of the great things about docker is/was that one can develop locally and the same thing would run when deployed. Having to run it on a distant machine kind of ruins this.
That's great, if you're doing something that will fit in a laptop. But I think the OP's point was that for people using large datasets, complex workflows, or multiple projects, his method makes sense and saves money on buying new hardware.
For example, the web site I'm going to putz with today is many GB larger than the unformatted capacity of my laptop's SSD.
Well OP said "Buy the M1 and switch to a fully remote development experience."
One could argue that even the "cheapest M1 + paying your remote Docker/Compute friendly machine" is not really the cheapest option.
At least do not convince yourself "the new M1 chip is faster and will improve my developer experience regarding compute"
>One could argue that even the "cheapest M1 + paying your remote Docker/Compute friendly machine" is not really the cheapest option.
You don't buy a Mac because it's the "cheapest option". You buy it because you value several extra conveniences (displays, trackpads, speakers, construction, weight, baterry life, even keyboard - before and after they've messed theirs up for 3 years). Sure, it's not better in all of those than a comparable in price PC laptop, but it's usually better in most, and unaproachable in others. The SSDs it comes with are speedy as hell also (compared to the options Dell or Lenovo has on comparably priced models).
The CPU/GPU you can find in a PC too - at least until the M1, which has the best performance/power ratio of any current PC CPU. Plus there's always macOS, which gives you a no-fuss desktop OS that runs all your proprietary apps and a native UNIX (as opposed to WSL/WSL2). Plus an ecosystem, and drivers for your external peripherals (as opposed to the hit-and-miss Linux experience).
But it's not about saving money, or "I could do the same work while spending less".
Which isn't going to happen with apple silicon because now you'll have a fleet of developers developing on arm and a bunch of servers running x86/amd64. This means you lose the benefit of the same artifact being used in all environments.
Obviously there are arm servers but they're not in widespread use.
I came across this issue. Dev Containers by VS Code solved it for me, allowing me to use development tools and packages by remoting inside the container itself.
In general, I think one of the major benefits of building out a remote dev environment would be freeing up local resources from things like linting and formatting etc.
I'm really hesitant to invest in VSC Remote because I feel like it's a slippery slope to a world where we can't compile software on our own computers anymore
> I feel like it's a slippery slope to a world where we can't compile software on our own computers anymore
To me it feels more like going back to the way things used to be. Reminds me of when I used to do my work by pushing buttons on a Wang terminal in one state, and the PR1MEOS machine that I was actually working on was several hundred miles away.
Maybe as our work becomes more complex, it makes sense to return to thin clients.
I think the concept of thin clients is where we should be heading, especially now that internet connections are so much better.
The kicker is who owns the super computer you're dialing into, and how much of it you actually own. "Back in the day" it probably made sense for a company to maintain their own servers.
But now it's too profitable not to use Amazon in most cases. It's nice not needing a server room, but what suffers is ownership of your infra and data, accepting the outages, and rolling with the crazy decisions Amazon might make in the future that will affect your systems.
But is it necessary complexity? The average laptop is orders of magnitude more powerful than servers were back then. Which workloads besides maybe ML training actually need that much more power than you can get easily locally?
VSCode has completely solved the complexity problem at the interface layer.
It is quite literally like you are running it locally (because you are), but the commands are executed against a remote backend.
Extensions are managed per host and just work naturally... it even tells you what extensions you don't have installed on the host (but do have on the local instance).
Yeah but what I mean is that is a lot of extra complexity, compared with just building code on your own workstation.
I would rather use my workstation's CPU & memory resources to actually compile the code rather than masking the complexity of moving that compilation process onto hardware which I do not own.
Yep. On one hand, dev containers could be really awesome. Imagine being able to spin up an old dev container to fix a bug in an old project. Ex: I added a tiny feature to a 5+ year old Java app last year and it took me longer to get the dev/build environment fixed up than it did to add the feature. A 5 year old dev container would have been awesome at the time.
On the other hand, I'm worried it's going to be used to (attempt to) "take away" our ability to compile locally rather than giving us an amazing, immutable, local dev environment.
I've played with remote docker a little before, and 2 things were stopping me:
- Is there a way to forward local ports to the remote docker instance? So I can access the app on "localhost:3000" as usual? (OAuth setups make the app URL a pain to change)
- Is there a way to "mount" local volumes on the docker instance? (Reaction Commerce mounts the code folder as a volume. Although that project has other issues, that's the main thing that kept me from using it.)
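On the first question: a plain SSH tunnel to the docker host covers it (host is a placeholder):

```shell
# Forward local port 3000 to port 3000 on the remote docker host, so the
# app stays reachable at localhost:3000 and OAuth redirect URLs don't change.
# -N: no remote shell, just keep the tunnel open.
ssh -N -L 3000:localhost:3000 dev@203.0.113.10
```

The second one is harder: bind mounts are resolved on the daemon's filesystem, so a truly local folder can't be mounted into a remote daemon directly. The usual workarounds are syncing the code to the remote host (rsync, mutagen, etc.) or keeping the working copy there in the first place.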
I got the MacBook Air (still waiting!) to replace my Mac mini, but for this, and to be able to run Windows instead of paying forever for a remote host, I'd buy one of these babies:
Wouldn’t a DO instance that makes an M1 machine comparably underpowered actually be really expensive though? It might only be the case for Apple’s ecosystem apps, but iOS devs seem to be seeing M1 laptops build apps faster than an iMac Pro for example.
I would love to use this method, but I am just too attached to an actual proper IDE like my JetBrains editors. VSCode just doesn't cut it when you compare it to any of the existing JetBrains ones.
If anyone has some good tips for this, I would greatly appreciate it.
Wouldn't running linting and code formatting tools on every keystroke (as I prefer to) be intolerable over the network, with even minimal latency? Perhaps I am misunderstanding what actions are local, and which are remote.
VSCode actually runs that stuff over on the remote host. There’s no lag when typing because it’s pushed to the remote host async, so your editor doesn’t lag like if you were typing something in over SSH. It’s quite seamless and you forget you’re remote.
I didn’t downvote but IMO “the way to get this working on your expensive laptop is to buy another entirely separate computer and have an always on internet connection” isn’t really an answer (and isn’t at all specific to the M1).
I read it more like: "The way to get this working for the time being is to skip the upgrade for now and use an old banger laptop you have lying around until the ecosystem is ready for the way you work."
OT: but I wouldn't be surprised to see Docker Desktop for Mac running on M1 in 3-6 months.
Been without a personal laptop for close to a year now, mostly using my work laptop or personal desktop... and have been considering the M1, but likely won't pull the trigger without better Docker support. I've found WSL2+Docker support to be excellent and hope to see similar levels of integration with OSX before too long.
Using the CLI to connect to a VM isn't so bad; I did this a lot before Docker Desktop was even a thing, though I generally just kept an SSH shell into the VM open full time. With VS Code's remote extension, it's roughly the same either way.
Docker with full support for running x86 docker images from Docker Hub via Rosetta would be what's needed for me. I'm not sure that's going to be feasible, but not being able to run vanilla x86 docker images means I can't build most server-side projects I've worked on in recent years, because they all fire up things like elasticsearch, redis, postgresql, etc. as part of their builds so tests can run against them. All of those are basically either vendor-provided docker images or things on Dockerhub in x86 form. Manually creating ARM variants of these things and customizing the builds to use them is not going to be practical for a lot of that stuff. I'd have an easier time switching to a Linux or Windows based laptop than to an ARM Mac right now.
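For what it's worth, there is a (slow) middle ground: qemu user-mode emulation can run amd64 images on an arm64 daemon. A sketch, assuming a Linux docker host and using the `tonistiigi/binfmt` helper image (this is emulation, not Rosetta, and not something Docker for Mac shipped at the time):

```shell
# Register qemu binfmt handlers for amd64 binaries on the docker host...
docker run --privileged --rm tonistiigi/binfmt --install amd64

# ...then pull and run an amd64-only image under emulation (image is an example).
docker run --rm --platform linux/amd64 redis:6 redis-server --version
```

Fine for a stray dependency image; painful for anything CPU-heavy like elasticsearch.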
Realistically, most people that have docker in their life are not going to bother with ARM Macs for quite some time. In any case, the 16GB limit is also not great if you are using docker, some IDE, and other tools; I'd be looking for something closer to 64GB if I were buying currently. Maybe the second generation will be a bit more attractive, and maybe by then the software ecosystem will have matured a bit. Right now basically everything I use on a daily basis is somewhat problematic.
so it doesn't seem a huge leap to imagine the reverse being possible before too long
(I kind of hoped it already was, but I guess the existence of blog posts like this and the one from Docker here https://www.docker.com/blog/apple-silicon-m1-chips-and-docke... indicates there's work to be done on the Docker for Mac app itself to get it working nicely)
Even if you could run those, would you run your integration tests on your ARM Mac knowing the deployment will be on x86? I mean, if it passes for you it might still fail for the other architecture.
I would. Generally would also have a cluster spin up/down to run via the ci/cd platform.
Almost everything I've written for the last 6 years is with tools that are mostly cross platform.
The biggest issues I've had in that time is the prerequisites for rabbitmq in Windows, and building apps using sqlite on embedded.
I think there may be a couple of hurdles in the next year, but Apple going to M1 will be the catalyst for much more fleshed-out ARM support in the docker ecosystem. The RPi has done a lot of this already.
macOS already has this and is in fact a proper Unix to begin with. Docker is supported just fine on macOS on Intel (and soon the new chip), and it runs basically Linux inside a VM, just like WSL2 does.
I've never been able to run Docker on my MacBook Pro without the CPU's going crazy and the energy usage skyrocketing. I opened a ticket about it, but there have been other related tickets that have been closed without any real resolution: https://github.com/docker/for-mac/issues/4323
It unfortunately prevents me from using Docker for local development. I'm hoping things change with M1.
Similarly, I installed docker-machine and docker (via Homebrew) on my M1 mac via Rosetta 2, and connect to a remote docker host, similar in configuration to this blog post (which really has a misleading title).
That works - but it pegs processor usage on the M1 mac...
yeah osxfs, their solution for syncing mounted file paths, has a massive performance hit. the fewer files you need to mount in your composes the better. http://docker-sync.io/ helps, but docker for mac is still slow.
I notice that it consistently uses about 10% of a CPU while idle. I searched for this issue on github and see people complaining about it as far back as 2017, so I know it's probably not going to be addressed anytime soon.
I would hope that most people's production images are being built & pushed to an image repository in CI, not from somebody's laptop.
A lot of the official base images on Docker Hub are dual architecture now (both x86-64 and arm64), so if devs on M1 can build arm64 images locally from those, develop on those, and then have CI build the production x86 image, things should be more or less fine.
Of course, I'm sure a lot of people rely on 3rd party base images that are still just x86-64. So I expect there's going to be a pretty painful transition period where Docker works on M1 Macs but a lot of popular images don't.
Or maybe Docker's official response will be a Docker for Mac release that still runs x86 in a qemu VM or something, and arm64 Docker on M1 Macs will remain an oddity for a while. But I hope they lean into improving tooling for dual-architecture support, because I think we're increasingly moving into a dual architecture world, and not just because of the M1: as ARM becomes more popular in production, I'm sure more people are going to also find themselves in the opposite situation with x86 workstations & arm64 production servers.
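Some of that dual-architecture tooling already exists in `docker buildx`; a sketch of building and pushing a multi-arch manifest from a single Dockerfile (registry and tag are placeholders):

```shell
# Create and select a buildx builder that can target multiple platforms.
docker buildx create --use

# Build both architectures and push one multi-arch manifest; clients then
# pull whichever variant matches their own platform automatically.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push .
```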
> I would hope that most people's production images are being built & pushed to an image repository in CI, not from somebody's laptop.
That's not really the point. Cross-compilation is no problem these days, especially for targets like x86-64. The point is rather that devs will be running the code locally in a completely different environment to production which defeats the purpose of using Docker for development.
I keep hearing this mantra of "same environment". But is it really ever the same? One of my previous workplaces insisted on everyone having bloody OpenShift locally, so your dev env was supposed to be closer to prod. Guess what: it still wasn't, by a long stretch, yet it ate half of a usable laptop's RAM and was slow as hell.
Most people quietly switched to docker-compose templates shared through private repos within a month for quick local runs and then tested end-to-end in dev cluster when it was mature enough.
What I'm saying here is that an architecture mismatch is really just another variable, and the purpose of docker locally nowadays is not to replicate prod. That is unachievable, and the sooner one accepts it the better. Still, it's the best way so far to keep DLL hell at bay.
It defeats _one_ of the purposes of using Docker for development.
I expect that the official M1-supporting Docker Desktop will eventually, given a dual-arch image, allow you to choose which architecture to use. (One would run natively, the other under emulation)
There are situations where I'd use this feature, but 90% of the time I'd be fine using arm images locally and deploying x64.
(I could also imagine this working the other way -- an x86 dev machine and an arm deployment target -- which I think to some degree is supported by Docker today)
At least on AWS, it's slightly cheaper to deploy ARM servers. Look for this problem to also get solved on the server side by more ARM servers being put into production.
Isn't the point that the environment is very close (and the M1 may be a lot faster than a comparable machine) and that the CI will test the final code on x86 before deployment?
It's not ever the same; that's what QA/staging is for. Paired with CI and good integration tests, it still works. Simplicity in a local dev env is king.
AWS Graviton instances apparently offer better price/performance compared to Intel EC2 instances. So there probably is a slow move to ARM for some developers.
Having a well defined "fixed" environment (i.e. container), which you can also use to "tryout" some stuff without needing any installations on your system makes this already worthwhile.
Adding a sentence to explain how ARM "saves the planet" would perhaps be more effective than a hash tag. I presume you are referring to lower typical power consumption for ARM servers.
Off topic, but I think we are at the point of no return. Plastic pollution is at its highest, with production and use of plastic set to increase significantly by 2050. The main generators of greenhouse gases (US, China) have yet to reduce their reliance on fossil fuels. We are constantly over-fishing, over-grazing, deforesting and plundering this earth of its precious resources with reckless abandon to feed our decadence.
World War III won’t be a fight over “values” but rather a constant fight over who controls water, energy, and clean air.
Or at least for the last 20-30 years. Sure, theoretically there was a long-term way to salvage it, but it was always unrealistic given the large changes in economics and society it requires.
But there is a difference between a bad and an even worse situation.
So just because we can't reach a perfect outcome doesn't mean we shouldn't try to reach a better outcome.
Because of this I really don't like arguments like "we are at the point of no return", as it's often used to suggest that any improvement is pointless. (Though I think you do not mean it that way.)
I mean, the way we are heading will likely not directly lead to the extinction of humans, but it might make life unbearable for most of humanity, and it might also prevent us from reaching the tech necessary to survive other extinction events.
But this also means that "just making things a bit better" can in the long term make a major difference, even if it won't prevent a climate catastrophe.
World War II wasn't much of a fight over "values" either, that was largely a propaganda device. Certain powers simply wanted more stuff than they already had, and other powers disagreed. Britain declared war when Germany took a Polish port, not when Hitler got in power or when he started rounding up Jews.
Britain declared war when Hitler took Gdańsk, sure... but after Hitler took Czechia and Austria (and funded a coup in Spain). So it's fair to say Britain's declaration of war was initially mostly about stopping an expansionist power... but I don't see how that's the same as propaganda.
Going back _would_ save the planet, but it's not the only way.
There are routes forward, but they require global coordination and creativity. It's hard to imagine because there's no historical precedent, but I think it's worth trying for, and I'm confident it's at least _possible_ we can succeed in transitioning to full sustainability.
Containers' purpose is not to be CPU-agnostic. If you want that, use Java, .NET, whatever, and build a container image that pulls the required runtime depending on the host.
Because Docker is basically a user interface for Linux's built-in API that is unique to Linux. All the heavy lifting of process isolation is done by Linux itself, not Docker.
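You can poke at that kernel API directly without Docker; for example, `unshare` from util-linux creates new namespaces (requires root on a Linux host):

```shell
# New PID namespace: the shell becomes PID 1 and ps only sees
# processes inside the namespace, not the rest of the system.
$ sudo unshare --pid --fork --mount-proc sh -c 'ps ax'
```

Docker layers image management and an API on top, but the isolation itself is these namespaces plus cgroups, which is exactly why macOS and Windows need a Linux VM.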
Adding the same interface to other operating systems would require extensive changes to their kernel. In the closed-source macOS that is not feasible. Apple doesn't give that level of access to anyone.
The process in the OP—starting a VM in Virtualbox, and having Docker use the VM—is basically a manual version of what docker-machine does automatically.
Even though it's all a VM under the hood, docker-machine makes it all feel a bit more streamlined/native. Also, the version of Linux it uses—Boot2Docker—will likely start up more quickly, and probably use slightly fewer resources as well.
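For reference, the docker-machine flow boils down to a couple of commands (the machine name `default` is arbitrary):

```shell
$ docker-machine create --driver virtualbox default   # boots a Boot2Docker VM
$ eval "$(docker-machine env default)"                # points DOCKER_HOST at the VM
$ docker run hello-world                              # runs inside the VM transparently
```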
I'm using docker-machine so I can run Docker on OS X 10.9, which predates Docker Desktop (and Hyperkit).
I'd expect it to work via Rosetta. Alternately, it's a Go project—I'm not sure what the status of Go is on Apple Silicon, but once that's figured out it should be a relatively easy recompile.
I’m fine with my iPhone and iPad, but my MacBook Pro is a PITA. I own a MBP 13” 2018 with 8GB of RAM. For starters:
1. Windows 10 via Bootcamp boots faster than macOS — on an Apple machine.
2. What’s wrong with a tiling manager? Why do I need a third-party app to organise windows?
3. I saw in another HN post that an Apple engineer was patting themselves on the back for the memory management. Then why the hell does macOS use more RAM than Windows at login?
4. I can’t use Docker for Mac. The same dev environment with Docker and WSL2 on Windows 10 via Bootcamp works fine. I mean, it still sucks because of the limited RAM, but it’s fine. On macOS it’s way slower, and the fans go crazy.
The hardware — except for the keyboard and the Touch Bar — is fine. The touchpad is great, the screen is fantastic. But Apple needs to work on the operating system; just don’t touch the shortcuts. Those are fine.
A bonus to using command-line docker versus using "Docker Desktop": the former is free and open source software and doesn't include any spyware.
The latter is closed source, proprietary, and silently uploads a ton of information about your local system to Docker Inc if it should crash (which is easy to trigger accidentally), including pcap logs of your local network, and a list of all running programs on your system, among other massively intrusive things.
Docker Desktop is a privacy nightmare. Always opt for the command line open source docker, and either set DOCKER_HOST="ssh://remote.example.com" in your environment to do remote builds, or run a local VM like this article suggests.
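Concretely, the remote-engine setup is a one-liner; the hostname here is a placeholder:

```shell
$ export DOCKER_HOST="ssh://user@remote.example.com"
$ docker ps        # now talks to the remote daemon over SSH
$ docker build .   # the build context is uploaded and the build runs remotely
```

Everything else in the CLI works unchanged, since the client just speaks the engine API over the SSH tunnel.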
What sort of source are you looking for, other than the direct report of someone who has analyzed the binary (me)? It's not open source, so I can't link you to the part in the code that pcaps your network.
I don't understand why someone would want to run docker on Mac OSX (or Windows) in a VM. Is there a way to set up the VM so it doesn't lock memory and processors?
I'm pretty sure that's the only way to run it on Mac OSX. Sure, you can have fancy tools to hide the VM and make it less visible (Docker Desktop), but it's still there.
I have been running docker on a 10 year old desktop, running ubuntu, which I ssh into. The code lives on the desktop which I edit over ssh. The desktop can still handle running a db and server, and I don't tie up resources on my laptop.
I keep looking for a better solution but this works the best.
Windows containers are Windows-specific, and the base image is 1+ GB. I think the best solution is to run Linux, because that is what containers were created for and what they are most performant on.
So while Linux containers are all over the place in 2020, Linux had zero influence on the technology per se, and like Windows it is just catching up with what the mainframes and UNIX old-timers already offered over the last couple of decades.
They aren't all over the place; in fact, Linux containers are far more advanced than whatever you're referring to. Containers on Linux have also existed for a long time (LXC, cgroups, and namespaces all predate Docker).
The containerization solutions you're referring to didn't gain widespread adoption like Docker and Kubernetes because they weren't as polished and lacked features and functionality that is available today. Windows containers have a big image requirement, and most server-side software is already optimized for Linux so they aren't in high demand. Just take a look at Docker Hub and tell me how many Windows containers you find compared to Linux, especially the official images of the most common software.
Free beer has great power, which is why the other solutions didn't get as widespread as Linux.
As for the number of Docker images with Windows, that is just a side effect of Windows containers being relatively recent and most public cloud deployments being anyway based on Linux distributions.