Enterprise likes it, because it gives them more visibility and control over what's going on in the dev box. That gives more angles to control against e.g. exfiltration threats. It also makes dev environments much more homogeneous: you worry less about client machines. In principle you could have a fleet of stateless Chromebooks whose main function is ssh and http, all connecting to identical virtualized desktops that are easily provisioned and deprovisioned.
I agree that enterprise likes it. Developers hate it, though. We are a picky lot when it comes to our tools.
Virtualization means images means standardization means everyone is using the IDE and tools that IT and "Enterprise" decides.
It has its pros and cons. But one of those cons is that it can make for an intolerable work environment. At least for me. I've gotten to the point where I now ask about the dev environment during the interview process. If they mandate Mac or Visual Studio I'm not interested (not trying to start a flame war, those are my personal preferences). I know that certain organizations need their control for auditing, security and remote wipe reasons, and that is understandable. I'll just go work elsewhere. It's fine.
You don't speak for all developers, at least not me. In particular, having a working dev environment that accurately represents prod just handed to you, instead of trying to rig up something locally, is actually kinda nice. It matters greatly what you're working on. If you're building a local C++ application, a devbox is going to be wildly inappropriate, but for a SaaS company where prod is an untangleable mess of databases and queues and microservices, it's infinitely more efficient to give me a box already spun up that's a good enough facsimile of prod. Bonus points for seeding the DB with test data. In IT, if a user comes to you with a broken mouse, do you try and fix it, or do you just give them a new one and get them back to work asap, and take the mouse in the back and fix it later? Same principle. If the dev box isn't working, it's far easier to spin up a new one rather than debugging on someone's laptop.
Dev boxes don't have to mean standardizing on One True Editor (although everyone knows that's vim), as long as you can get to the source, typically via ssh, and edit it, whatever you want to use is just fine.
If only there were tools to spin up repeatable environments based on a given config file locally, using either VMs or containers, reducing the setup process to a single command.
I didn't mean one specific thing. There are numerous tools in this space that approach the problem from different angles using different technologies.
My point, really, was that if your local dev environment requires you to jump through a bunch of hoops of manual setup to get it working, you're doing it wrong.
I think this hits the nail on the head. Plenty of companies have software stacks so large / complex to configure that there's no reasonable way to run the entire system on a single laptop.
As much as I dislike the experience of remote dev environments in general, there are certainly times where it may be worth the pain.
Sometimes things are just complex. I know we want to spot the epicycles, simplify, and reduce the system, and sometimes that's even possible. (Don't get me started on being all in on all of AWS.) But sometimes it isn't and, well, that's where we are as an industry.
I haven't worked at many places like this, but the last time I did they had services that managed and operated on petabytes worth of data. The full suite of front end applications, rest services, ingestion services, maintenance, queues, billing, communication, scheduling and other systems is large enough that I probably only saw the source code for maybe 1/3 at most, let alone modified.
I do like local development better, every time it is feasible, but there's not much point in doing so if it can't reasonably reproduce a production system.
This is really not true - a poor implementation shouldn't dictate the direction of the "remote development" space. Sure, IT and security have their requirements, but the primary requirement is to make developers happy and provide them with reproducible dev environments.
I used to work at Uber and what we ended up doing with devpod (https://www.uber.com/blog/devpod-improving-developer-product...) was to enable the popular IDEs to connect to these remote environments - all the dotfiles etc etc were persisted so it literally felt like the local IDE, just way faster. Admittedly, it costs a bunch of money to build internally, but there's a path to having people be happy with dev environments.
(we collected data on what IDEs to prioritize based on surveys)
Why use a survey and not just ask the endpoints directly? Presumably the laptops are managed and are running something like Santa on them. Would remove bias to get the data this way.
yea - we had that too (good for understanding how laptop tooling worked, and what areas were starting to show latencies and therefore, needed to be worked on)
When I discover a company prefers Windows desktops, Microsoft 365, and Sharepoint, it's often a red flag for me. I've worked with guys who didn't know how to use git, so they copy all their code into Sharepoint. Then there are others who didn't know how to merge branches, so they copy from one folder to another using the windows desktop. It's even worse when the people doing this are supposedly "senior." In a Linux or Mac environment, I never encounter these sorts of WTFs
Yep. I ran into the “don’t overwrite these files” issue on a branch at a new company. It took months to get a sane branching/merging process in place. The “lead” loved copying crap with his windows desktop, did not even script his half baked processes. You should’ve seen what the actual code looked like.
I like it. I used gitpod.io before. Super nice to just be able to switch between projects in an easy way. Stopped when they changed pricing though from unlimited time to credits.
As someone who migrated our devs from local VMs to cloud VMs, I like it because despite the promise that every VM is the same, every dev laptop is not. There's so many ways for Vagrant and VirtualBox to screw up. Sure, the cloud VM is a little slower than local, but at least it fundamentally _works_, unlike VirtualBox where on any given day you might be suffering from any combination of 1) VirtualBox kexts not working on the latest version of MacOS, 2) file sharing silently breaking yet again with no logs, 3) docker containers inside the VM losing network connectivity for the third time this week because the bridge decided it didn't want to do anything, or 4) having VirtualBox installed just completely breaking MacOS and causing a crash during boot even after completely wiping, reloading, and trying to install VirtualBox again. Those are all real issues I spent weeks tearing my hair out over across dozens of devs. Now that we've moved to EC2-based VMs the only problems I have to deal with are minikube problems (which I had to deal with under VirtualBox as well) and when devs forgot they shut down their VM before going on vacation. The devs sometimes don't like that the default EBS volume is a little small, but every one that used the local VM knows it's a small price to pay
Even HashiCorp know vbox is scraping the bottom of the barrel for hypervisors:
> if you are using Vagrant for any real work, VMware providers are recommended since they're well supported and generally more stable and performant than VirtualBox.
If you're using vagrant to manage the VM and either use widely-supported or self built base boxes, each developer can use whichever works best on their platform.
So you might have a Windows dev using Hyper-V, another using VMware Workstation, a Mac user with Parallels, a Linux user with LXC and another with KVM.
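In day-to-day terms it's just the provider flag on the same checked-in Vagrantfile. A rough sketch (note that the Parallels and libvirt providers come from community plugins, so your mileage may vary):

```
# same Vagrantfile in the repo; each developer picks the provider
# that works on their host
vagrant up --provider=virtualbox        # the default provider
vagrant up --provider=vmware_desktop    # VMware Workstation/Fusion
vagrant up --provider=hyperv            # Windows host
vagrant up --provider=parallels         # macOS host (vagrant-parallels plugin)
vagrant up --provider=libvirt           # Linux host with KVM (vagrant-libvirt plugin)
```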
This is exactly what I've been doing. I pay for a VPS and just use VS Code to do all of my work. I tend to switch between various computers and even various laptops, and it doesn't matter which one I'm on; I just ssh into my dev station.
The advantage of docker based developer workflows is that it makes things repeatable. If you’ve ever tried to support a team installing their own dependencies you’ll understand the pain!
If everyone is working in Docker in VS Code it's not a huge jump to have them develop in remote VMs. Now your devs can get set up instantly and can use thin and light MacBook Airs instead of heavy MacBook Pros.
> Someone, anyone, tell me which hypervisor doesn't enforce a memory limit on VMs?
They don’t mean the memory usage is allowed to grow indefinitely.
They mean, if your laptop has 32 GB RAM, and if your OS and apps consume 20 GB, it’s using 20 GB RAM.
If a VM has 32 GB of vRAM allocated, and the OS is aggressively caching, that VM will consume 32 GB of RAM on the physical host, even if only 20 GB of vRAM is being used within that guest VM.
Of course, every good hypervisor will have a way to prevent this from happening, so the article does seem disingenuous on this point.
If you work from the same computer in the same place everyday there are no advantages to having a remote box... If you work 50:50 from office & home you start getting annoyed by either having a different environment at home & work or by having to drag your laptop around with you... If you're a digital nomad wanting to work while travelling through "cheaper" parts of the world it's less worrying to carry around a $500 laptop than a $4000 laptop...
it was an either/or statement... if you don't have a remote devbox, your choice is either having to deal with the fact that your home work environment is different from your work environment at the office and you have to set up lots of things twice and encounter some weird issues even if you try to keep them in sync (more problematic for less experienced devs in fast moving companies), or your choice is using a laptop and carrying it with you even if you do have a more powerful PC at home, have the option of having a more powerful desktop at work and would otherwise prefer desktops to laptops...
Automated reproducible VMs are pretty straight forward with vagrant, and "setup the same thing from a defined set of instructions" is basically the definition of docker.
But if you're not happy with that scenario - I don't know maybe your work will be drastically affected if you have two different but identical VMs - tb3/4 external ssds are perfect for storing VMs on, so that you can move between machines at the drop of a hat. I've been doing this for about 5 years as a safety valve to my desktop having a fault: I can just unplug the drive and plug it into the laptop or a new machine and the VMs are all there exactly as I left them.
Doing a reproducible build with Vagrant and Docker is possible but very far from straightforward, as you have to make extra sure that you use the exact same source box/image and install every piece of software in the exact same version and deal with updates manually (and while most of the time updates don't break stuff anymore, it still happens at least once a year) + you still have to deal with synchronizing your IDE settings, secrets & credentials, which you don't want baked into a Docker image, ... again yes it's possible, but not as straightforward as working in the exact same environment...
the VM on an external SSD is a better solution, but then it's still something you have to carry with you even though it's more compact than a laptop...
Devbox will give you the same project environment (packages, env-vars) on your work and home laptop. It leverages nix, and uses your native file-system avoiding the overhead and complexity of using Docker.
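For anyone who hasn't tried it, a minimal sketch of the workflow (package names and versions here are just examples):

```
devbox init                      # creates devbox.json in the project root
devbox add nodejs@18 postgresql  # pins packages via nix
devbox shell                     # drops you into a shell with exactly those packages
```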
As someone who has actually traveled through "cheaper" parts of the world, I couldn't imagine working without a local environment. Cheaper parts of the world often imply worse internet connections, spotty wifi, and so on. Requiring a stable internet connection for everything would have been a productivity killer.
> Someone, anyone, tell me which hypervisor doesn't enforce a memory limit on VMs?
Not to mention the popular virtualization platforms like kvm, vmware, and hyper-v all support ballooning[1][2] where you can define a base amount of memory and a max, and when the guest garbage collects its heap within the vm the memory is reclaimed by the hypervisor.
This also allows overcommitting to better utilize resources that would otherwise sit mostly idle. And trust me, every cloud provider overcommits their hypervisors; I used to work for a major one, but anyone can check the CPU steal time in top and see how thrashed their host is.
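A rough sketch of both points on a KVM/libvirt host; "devbox1" is a hypothetical domain name, and exact flags can vary by libvirt version:

```
# ballooning: shrink a running guest's memory target and see what the guest reports
virsh setmaxmem devbox1 16G --config   # hard ceiling, applies on next boot
virsh setmem devbox1 8G --live         # ask the balloon driver to release memory now
virsh dommemstat devbox1               # actual vs. ballooned memory from the guest driver

# from inside any guest: persistent non-zero steal time ("st" in top's CPU line,
# the 8th value after "cpu" in /proc/stat) means the host is overcommitted
top -bn1 | grep '%Cpu'
awk '/^cpu /{print "steal jiffies:", $9}' /proc/stat
```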
The author doesn't mention it, but I wonder if he tried or considered Nix/NixOS's reproducible developer environments and ruled them out for any reason. I couldn't tell from the article if there's something unique to his requirements that disqualifies them.
Nix solves a different problem than Hocus. Nix lets you define a development environment, Hocus gives you a way to run it on a remote server. Right now we use Dockerfiles to let users define the packages they need in their dev env, but we would like to support Nix in the future too. Interestingly, you can use custom BuildKit syntax https://docs.docker.com/build/dockerfile/frontend/ to build Nix environments with Docker https://github.com/reproducible-containers/buildkit-nix, and that's probably what we will end up supporting.
I think Nix is relevant here, because being able to run software across different machines reproducibly is one of its major selling points. I particularly like that it doesn't rely on virtualization or containerization to do that. It's up to the user to decide how to isolate the runtime environment from the host or whether they even should. Alternatively, tools building upon Nix can make that decision for them. Either way, it allows for a more flexible approach when you have to weigh the pros and cons of different isolation strategies. Development environments defined by Nix tend to compose well too, as a result of this design.
I am curious how much 100GB dev environment support, efficiently binpacking dev environments, and rat-holing on memory efficiency actually matter for the first X enterprise customers for Hocus.
Memory and CPU cores are cheap. Getting the UX, resource allocation/sizing/instance type, and general pattern for multi-tenant dev boxes seems more fruitful.
Would love to see any data supporting this prioritization, and what the answer is for Workstation / HEDT-type environments given the industry's current focus on AI.
TFA goes right into the core complexity and misses it.
"Dev environment" depends on what kind of work you are doing. What a web developer needs is worlds apart from what a driver developer needs in terms of capability, resourcing, isolation, and rebuilds. Trying to offer a single environment for anyone who builds anything using a computer, is analogous to a single environment for anyone who uses a screwdriver. An electrician is using the same tool, sure, but they're doing fundamentally different work from a kitchen installer or a plumber and have vastly different needs. The single environment with all the capabilities that any computer developer needs is called a "computer."
IMO this is the genius of VSCode remotes. They built a core capability of running the bulk of the IDE "somewhere else" with just the display layer on your local machine. Then they can allow the developer to choose if container(s), VM(s), and local or remote are right for their case, without having to learn new tooling. They can even let you choose if "owning a computer" is required, or if a web browser is sufficient.
I keep trying to use Jetbrains Gateway. It spins up the IDE on a remote machine and renders it locally.
I honestly can't find a use for it. My personal machines are ancient quad core setups, but I have a beefy server at my disposal.
But the benefit of faster build times isn't worth the hassle. Between the suboptimal performance of the remote interface, syncing files back to my local machine, and all of the little idiosyncrasies of setting up the remote environment, it's still faster to develop locally. I just go grab a glass of water when I start my compile.
This might make more sense if you're working on some C++ monstrosity with millions of lines of code that takes hours to build on a mainframe, but if you're in the region of minutes per build, there's just no benefit.
Or I guess if all you have is a thin client, this might be better than nothing. But then again, you can get an IDE and compiler for most languages on an android tablet. If you have a machine with a processor, it can compile your code.
I prefer having a powerful development machine and install and use development environments natively. The only times I am prepared to use a (local or remote) VM is when the development environment cannot be installed on the OS of my development machine, or when I absolutely do not want to install those tools natively (because they might impact my main OS).
I am often involved with multiple projects at the same time, but if those projects are set up well (and the toolset is decent), that does not cause any issues. In practice that means that developers always have to take into account that the project itself includes references to the specific versions of the tools that the project needs. And that those tools are -as-much-as-possible- installed in the context of those projects instead of in the context of the whole machine.
A concrete example in node world is to use "nvm" and to always install tools locally in the context of a project instead of globally. In dotnet world, use global.json to specify the version of the tools you want to work with, install dotnet tools locally in the context of a project instead of globally.
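Concretely, both ecosystems let you pin the toolchain with a file committed to the repo (versions below are just examples):

```
# node: pin the runtime per project
echo "18.19.0" > .nvmrc
nvm use                      # reads .nvmrc and switches to that version

# dotnet: pin the SDK per project
cat > global.json <<'EOF'
{ "sdk": { "version": "8.0.100" } }
EOF
dotnet --version             # resolved through global.json from this directory down
```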
I have worked on projects where this is a problem. For example, projects that depend on hardcoded paths. Or projects that depend on hardcoded connection strings for a database. Or projects that do not use something like nvm to specify the version of node/npm they are compatible with.
Each of those things is an issue with the setup of the project and is something that you will bump against when you try to setup a CI/CD pipeline for the project. Although, I agree that it can be useful to use docker containers with a correct version of the tools to create builds in a build pipeline.
The reason that I prefer to develop natively is that it is faster, snappier and more enjoyable. I also believe that it is easier to diagnose issues and investigate performance. Aside from that, renting a virtual machine with the same performance as my development machine, would be a lot more expensive, and I would still have to deal with the lag. I am aware of recent changes that allow you to for example run a JetBrains IDE locally (similar to what VSCode already had for a long time?) that is connected with a remote headless version of the IDE. I do not have any experience with that kind of setup yet, but I assume that would remove most of the lag.
Consider exploring LXC for a more mature alternative to Docker. It's not confined to the OCI ecosystem and offers a higher degree of isolation for development environments.
I don't know if I'd use the word "mature" (Docker is quite mature after all), but as a long-time Docker user I did jump into LXC/LXD and can confidently say its "system container" approach is better for dev environments.
I used to use WSL 2 or a VM, but on a Linux host LXD is a really nice workflow. I can create a fully isolated Linux instance with all my dev tooling on it (optionally script/automate the install of all my tools), and with nested containers enabled I can even run _Docker in that LXD container_ so that when I type `docker ps` on that instance, and `docker ps` on my host, they each have their own set of containers running. For example, I have an ElasticSearch and Redis instance running in the dev box but Syncthing on the host.
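Roughly what that looks like (the instance name and packages are illustrative; security.nesting is the setting that allows Docker to run inside the LXD instance):

```
lxc launch ubuntu:22.04 devbox -c security.nesting=true
lxc exec devbox -- apt-get update
lxc exec devbox -- apt-get install -y docker.io
lxc exec devbox -- docker ps   # containers living inside the dev instance
docker ps                      # a completely separate set on the host
```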
Then I can use snapshots to back up the dev environment, or blow it away completely once it gathers too much cruft all without affecting my host. It's really great.
Pair it with VS Code remote SSH and you have a very feature rich setup with little effort.
Docker packages applications, and a container should really only have one app inside it. LXC is similar to a VM in that it presents itself as a stand-alone machine, but it uses the host kernel; it's unlike a VM in that it's not as isolated, since it shares the host kernel.
So with LXC I agree, managing it is like being a sysadmin as that’s what it’s designed to be.
I use Docker for things that are stateless, maybe throwaway, or just to test an app quickly. I use LXC for things I want to run multiple services inside, more stateful setups; typically where people would plumb a bunch of Docker images together, I'll use LXC. The advantage in Proxmox is I can tell Proxmox to back up my LXC nightly, as it's treated similarly to a VM.
For making LXC feel less like needing to be a sysadmin, you can use Nix to build your LXC images and import them into Proxmox. Your LXC container becomes declarative and not too dissimilar to using a Dockerfile - it's a far more powerful Dockerfile. What I've done is create a bare minimal NixOS LXC with some basic config and use that as a template, then edit '/etc/nixos/configuration.nix' inside the LXC on first boot. However, as it's just NixOS, you can build and push the config remotely, use NixOps, etc.
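If it helps anyone, the rough shape of that pipeline; I'm assuming the nixos-generators tool here, and the VMID, storage names and template filename are placeholders, so check them against your own Proxmox setup:

```
# build an LXC template tarball from a NixOS configuration
nixos-generate -f proxmox-lxc -c ./configuration.nix

# on the Proxmox host, create a container from the uploaded template
pct create 123 local:vztmpl/nixos-image.tar.xz \
  --hostname nix-dev --rootfs local-lvm:8 --unprivileged 1
```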
It's a really good workflow using NixOS with LXC, however it took me a while to get it working, as the docs are a bit thin and there are old and new versions of them, with the new version skipping things mentioned in the old that you still need to do, e.g. changing the tty to /dev/console to get a shell inside the Proxmox console.
I have never really looked into LXC. How strong are the security guarantees? Presumably less isolated than a real VM, but with significantly better performance?
I have started to run more and more software inside a VM for better security isolation, but the loss of performance is pretty discouraging. For things that are probably fine, I might be willing to trade some theoretical security benefits.
There is an LXC provider for vagrant, which gets you the one-file concept, with the benefits that not everyone on the project has to use LXC, they just need a provider that works on their host for the specified base box.
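For reference, the provider comes from a community plugin, so the whole thing is roughly:

```
vagrant plugin install vagrant-lxc
vagrant up --provider=lxc
```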
I know this is an article specific to hocus so I don't mean to take away from that but the title is much more open ended and plural so I'd like to ask this -
What has been your favorite local dev environment?
I personally docker-compose everything w/ .env overrides and I don't really feel like I've ever needed anything more than this.
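A minimal sketch of that pattern, with illustrative variable names; Compose interpolates ${...} in the YAML from a .env file in the project root:

```
# per-developer overrides, kept out of version control
cat > .env <<'EOF'
API_PORT=9090
TAG=feature-x
EOF
# docker-compose.yml can then use image: myapp:${TAG:-dev} and
# ports: "${API_PORT:-8080}:8080" with sane defaults
docker compose config   # print the resolved configuration
docker compose up -d
```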
Docker Compose with .env overrides is very close to an ideal local dev environment; the only downside is that if you don't use Linux you can hit some I/O bottlenecks (it got better with VirtioFS in Docker Desktop, but it's still not native speed).
I'm still waiting for a properly usable remote work environment that trumps local in performance and convenience... I feel like the VS Code approach with a server-client IDE architecture and the client running as a web application is going in the right direction but not quite there yet... Especially if you can leverage lots of RAM and GPU power in your workflow and often work from different locations, don't want to carry your laptop around but still want to use the same power, environment & configuration on any machine you work from...
How do you handle tooling? Install it in the parent or in the container? I quite like the idea of VS Code dev containers since it does a lot of the bootstrapping work to make the container be more like a full system, and I wish this “remote environment” idea was more widely supported amongst editors.
Hmm. By tooling are you referring to things like jq and other "uncommon" cli/tui apps I use to work against a code base locally? I think that's what you mean.
One of the best practices when using docker-compose to develop is actually to mount your code base as a volume. This lets you modify the code locally without requiring a complete rebuild of the container each time code is modified.
You might have to nohup a service or run it not as PID 1 (init container or supervisord) so it can restart without killing the container, but that depends on the work being done.
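A minimal sketch of the bind-mount approach (the image and command are illustrative):

```
# mount the working tree into the container so local edits are picked up
# without rebuilding the image
docker run --rm -it -v "$PWD:/app" -w /app node:18 npm run dev
```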
Mostly tools that you are going to run during the development process that aren't going to be run in CI/at "build time". So sure, possibly jq, but especially language servers and editor integrations. Do you run the editor in the container? Or do you not use these tools?
Devcontainers are indeed the answer if you’re using VSC. I use the docker-outside-docker feature (there’s a few flavors of these) so my app can be defined with docker-compose. The experience is nice once you’ve taken the time to customize your image, settings, and extensions.
> Many cloud providers, like AWS inside VM-based EC2 instances, won't let you run a VM since they don't support nested virtualization
Does anyone know if there's a technical limitation to nested virt on EC2 or if it's just so people use .metal? I know Azure recently (1-2 years?) started offering it.
This blog doesn't convince me why I should care about Hocus over solutions like VSCode devcontainer + Vagrant. That is more what it is competing with, not raw Docker/VM.
Oh yes, in this post I was not trying to. Hocus gives you a web interface that lets you spin up a dev env with a single click of a button. We also implemented a git-integrated CI system that prebuilds your dev env on new commits. It’s basically a self-hosted Gitpod or GitHub Codespaces.
yea, great for small projects but no good when you're trying to expand into enterprise capabilities -- I have to get one tool for dev, have that config diverge from CI, and then from staging
then i have to hire a large devops team to manage it all -- super inefficient
Good god, no. Or, at least, let us never make this a sort of standard way of developing that you're going to push on to your engineers.
Containers have their place, they can be great, but I've tried the whole "containerize everything" approach, and it's no less time consuming than not using containers, and now you have another Thing You Have To Learn (TM).
This isn't to say that virtualization is a bad thing for a development environment. It can be a very good thing, actually, but you don't need containers. A development environment, in my opinion, should be well organized, easy for a new developer to pick up, but also scrappy and adaptable to new and interesting situations. Containers, at least in their current conception, are antithetical to this. They're best kept to deployments or running things in a scenario where you know for sure you're not going to tinker with anything. I don't think I've worked on a software project where, at some point or another, I didn't need to reach under the hood in some unconventional way to investigate an issue or simulate some circumstance. Yes, you can do those things with containers, but now you have to deal with that complexity, which can get in the way by merely being confusing and indirect.
Having given up on using containers for development, when I want to use virtualization, I just spin up minimalist instances of Debian under Qemu (using the UTM GUI). This way, I can run a Thing (TM) in isolation, easily open up a shell, or SSH into it from the host shell, or start a GUI from within it if I need to, or install however many services I want within it and open up ports to the host, or have shared folders, etc. etc. etc.
But what if I need multiple of a Thing (TM) running at the same time? In that case, I just clone the VM. No Dockerfiles, no docker-compose.yml, no "volumes", no orchestration, no images failing to build because reasons, no qcow2 files blowing up with Docker For Mac (though that situation has improved), no pruning of orphaned containers/images, no worrying about whether your container should only be running a single process/service, and so on.
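For the cloning step, a hedged sketch of how cheap that can be with qcow2 backing files (file names and the forwarded port are illustrative, and on an M1 Mac you'd use qemu-system-aarch64 with -accel hvf instead of -enable-kvm):

```
# "clone" the base image without copying it: the new disk only stores deltas
qemu-img create -f qcow2 -b debian-base.qcow2 -F qcow2 thing2.qcow2

# boot the clone headless with SSH forwarded to localhost:2222
qemu-system-x86_64 -m 4G -enable-kvm -nographic \
  -drive file=thing2.qcow2,if=virtio \
  -nic user,hostfwd=tcp::2222-:22
```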
The only drawback to this is that, if the host is Linux, then it's indeed kinda dumb to be running other Linux kernels in VMs. But hey, it's 2023 and the difference in performance may not be meaningful depending on what you're doing. I know that, on an M1 Mac, I can run graphical applications at monitor resolution with incredible speed (perhaps moreso with Parallels instead of Qemu), so I'd struggle to care about any performance drawback if my host was Linux on the same hardware.
I'm sure that, for some, containers in development will be totally worthwhile. But if they don't seem worthwhile to you, dear reader, then that's probably because they wouldn't be.
As a consultant who jumps around between code bases with vastly different requirements, containers make for a much better experience than managing that on my main machine, or spinning up VMs for each separate project. It’s an ideal situation for me.
I don't really understand the point of a "dev box" that's hosted in a DC and shared, at all - at least not the way they're painting it here.
Hardware capabilities for even consumer level laptops and desktops have progressed much faster than average network connections.
Having testing/preview/branch named environments in a DC? Sure. But this line:
> They should be able to run any software they want inside their own workspaces without impacting each other.
What does that even mean?
Is this about someone working on a feature branch that uses some new dependency that needs to be installed?
That's 100% the sort of thing your local development environment is for, until you're ready to push it to your hosted test/whatever environment.
> But when it's running in a VM, the VM gobbles up as much memory as it can and, by itself, shows no inclination to give it back.
Someone, anyone, tell me which hypervisor doesn't enforce a memory limit on VMs?