I'm curious to know the answer in an honest, practical sense, not an ideological one.
IMO, the downside of this container/web-app solution is the memory all the SDKs would need, which adds up eventually. But I'm not sure that fact alone could win any hearts over that sweet ease of entry to development.
People were complaining about latency and bloat in webapps and Electron GUIs... yet here we are... even SpaceX's console is a Chromium instance.
Furthermore you could version and manage the evolution/drift of your workflow as underlying components change/get updated.
"Doesn't scale" in the sense of "other developers are pretty unwilling to learn that chain of they didn't grow up with it."
Funny, I've been running into this problem while trying to switch people to docker. Maybe we're all a bit guilty of this in our own way.
I've been using a containerized Rails app with VSCode for a while now and absolutely love it.
I basically used the container like a VM. Configured it with all the tools I normally use (e.g. OhMyZsh, etc) and had it constantly running in the background. I would use VS Code as a front end and work directly inside the container (cloning repos and pushing commits).
It had its quirks but the main benefit was that my local machine was no longer a snowflake. I could easily move to any machine, pull my "development" image, spin up the container and everything was exactly as I liked it.
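For anyone curious, the rough shape of that workflow is just a few commands (image name and paths here are made up, a sketch rather than my exact setup):

    docker build -t my-dev-image .                          # zsh, OhMyZsh, git, language toolchains baked in
    docker run -d --name devbox -v "$HOME/code":/code my-dev-image sleep infinity
    docker exec -it devbox zsh                               # or attach VS Code to the "devbox" container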
The container approach is lighter weight, and I found it easier to manage the configuration via Dockerfiles. Managing a full VM with the OS install is a bit of a pain.
That being said, I worked at an organization that did the VM approach using Vagrant. It wasn't as nice as the VS Code/Docker approach, but the results were similar.
On OSX I only had problems with GUIs running in Docker. I was used to sharing X between a Linux host and a Docker container also running Linux.
For some projects, the only working solution I found was to run a VNC server in docker.
Specific example: in a docker container, run a GUI built with Kivy and view the window on the OSX host. If anyone manages to do this without VNC, I'd like to know how!
I've been running Docker on Windows since Windows 10 1709, or roughly the time WSL 1 came around. That's since October 2017.
It's been really fast and stable here and now with WSL 2 it's even better.
There hasn't been a single Flask, Rails, Phoenix or Webpack related project I've developed in the last 3+ years where I felt like Docker was slowing me down. I'm using a desktop workstation with parts from 2014 too (i5 @ 3.2 GHz, 16 GB of memory and a 1st-gen SSD). About a month ago I made a video showing what the dev experience is like with this hardware while using Docker.
Code changes happen nearly instantly, live reloading works and even Webpack happily compiles down 100s of KBs of CSS and JS in ~200ms (which could further be improved by using Webpack cache).
The only exception to this is invalidating cached Docker layers when you install new dependencies. This experience kind of sucks, but fortunately it doesn't happen often, since most changes are code changes, not dependency changes.
The Windows solution is the equivalent of Smart Hulk figuring out time travel.
I'm not saying it's not good, because it is.
In my experience, Docker on Windows with WSL 2 has been pretty snappy.
This solution of VSCode + Docker containers seems to sidestep the whole WSL issue, since WSL is no longer necessary for development if you’re containerizing everything anyway. While I must admit I like the idea of a project having the same steps for all users regardless of platform (to each his own), I don’t believe the majority of people would like to ditch their IDE of choice for the one tool that does this somewhat seamlessly. I’m probably not characterizing this well, as I’m new to the workflow and I like to customize my own terminal workflow - which, from what I’ve seen, this doesn’t lend itself well to. Lemme just say it - I like the Linux CLI workflow more than Windows and have since the beginning. But that’s just me. I’m sure there are plenty of peeps who feel the opposite. What I don’t like, and I suspect I share this annoyance with others, is now having to know both the Windows and Linux command line interfaces.
Yes you’ve stepped into a rant, gotcha!
Here’s to hoping Microsoft goes all in, strips out Windows and maybe goes the Edge route: shifting to putting a pretty face on Linux. That would be awesome - I’d buy Microsoft’s distro. Frickin’ base it off Debian and let’s get this show going! Apple ain’t really doing anything special at this point - so your move, Satya!
I like this phrase. I would like to borrow it if you don’t mind :)
I work on Linux, but frequently try to onboard Windows devs ("windows dev", ikr ;) ). The experience is always painful, and despite many efforts from the WSL team to move things forward, Docker for development on Windows is still barely usable right now...
Granted I work on Azure and the cost of the vm is not something I have to worry about.
For example, I don't have ruby installed on the remote dev box, but it is installed inside docker on the remote host. I also don't have ruby or docker running locally.
I think all the linting plugins either expect ruby to be available on the remote host, or inside docker, but not this combo... Is there something I'm missing? (disclaimer: only played a bit with VScode, I use vim on the remote box usually).
Yes, it can. https://code.visualstudio.com/docs/remote/containers-advance... talks about setting it up.
A new terminal in vscode is a terminal on the remote.
Only the client GUI is local.
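If I remember those docs right (worth double-checking, setting names may have changed), you basically point your local Docker client and the extension at the remote engine over SSH, something like:

    # the docker CLI (and, per the docs, the Remote-Containers extension) then talk to the remote engine
    export DOCKER_HOST=ssh://me@remote-dev-box
    # or the equivalent entry in VS Code's settings.json:
    #   "docker.host": "ssh://me@remote-dev-box"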
If you want decent I/O - better than an old notebook - Azure is extremely expensive.
Originally I did this because of bad performance and bugs in WSL 1. I hear WSL 2 is better but I already have it all set up and it works great for me so I've just kept it.
Not sure if there's a way around it, but I was sitting down at my desk every morning to find a toasty laptop, lid closed and fans blaring.
I'm a bit of an old fart when it comes to software development. I prefer stable, slowly evolving solutions. I am a fan of the role of classical distributions. I abhor bundling every piece of software with all its particular versioned dependencies until everything works. I'm not gonna change. And that means I'm probably not the type to use Docker to deploy anything. That being said, I do see great value in it as a way for sysadmins to let semi-trusted people run their own OS on shared hardware without stepping on each other's feet. I love that it lets each of us run our OS of choice on our compute machines at work. But that's just it: when I use Docker, I pretend it's a VM. I really would like to learn to use it in a better and more appropriate way, but whenever I try to seek out information, quality search results are absolutely covered in garbage 10-second-attention-span "just type this until it works" blogposts.
(Alternative question: are there some cleaner solutions than Docker out there for the workflow I describe above?)
One is using Docker as a deployment packaging method.
The other is using Docker only for development and still deploying traditionally. Sure you can do both, but it doesn't have to be this way.
>when I use Docker, I pretend it's a VM
Also check anti-patterns 1 and 4 here
I feel like a lot of the complaints levelled at Docker pertain to the packaging and deployment use case. Where Docker really shines - even for small teams or solo devs - is as a development tool.
Could you give me some hints?
Basically learn the basics (cgroups, namespaces)
You should also study this https://github.com/p8952/bocker
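A quick way to poke at namespaces without Docker at all (needs root and util-linux; this is just a demo of the primitive, not how Docker itself is invoked):

    sudo unshare --fork --pid --mount-proc bash   # start a shell in a new PID namespace
    ps aux                                        # inside: only bash and ps are visible, and bash is PID 1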
I think I have a basic grasp of those things, but still don't get how Docker uses them.
> You should also study this https://github.com/p8952/bocker
Cool! That's very useful!
Docker is a treadmill. Docker leads to Docker Compose leads to Kubernetes leads to whatever. It's a lot of noise and motion; you will increasingly encounter engineers who grew up on this stuff and assume it as a prerequisite, and are eager to climb the treadmill, thinking it's a ladder. You know about other options, and can decide when to stop.
Or think of it as a set of databases that include the combination of the current state and the code ("migrations") to achieve that state, while allowing those databases to share the same history.
In other words, containers are a solution to state problems, not only "works on my machine" problems. The benefits are reproducibility and shared resources. It is "functional OS" as in "functional programming," a pipe of common operations to apply to inputs to generate a consistent output, and which can be forked anywhere in the history/pipeline as long as the pipeline does not hide held state.
Coming back to the `git` analogy, a traditional VM with Snapshots is like saving ProjectV1, ProjectV2, ProjectV3, ProjectV2_fixed, ProjectV2_fixed_final, while a container solution is saving only the history and places where histories diverge.
To answer your alternative question, nix package manager (which can also be run as a standalone OS, NixOS) is an interesting alternative solution from Docker. Reading its documentation may also help in appreciating the alternative set of perspectives.
In Docker...it depends.
Even assuming they have the same version of macOS, homebrew constantly evolves so getting everyone developing with the same version of dependencies as you use in prod is super painful. If they’re on Linux they likely don’t run the exact same distribution as prod.
Even if all that is the same, maybe they have to work on multiple projects simultaneously with different dependencies.
The issue arises when you develop on a non-Linux environment and deploy to Linux in containers. If you build your code locally, then you are debugging $DESKTOP issues, which might not be the same as Linux issues.
Also, language environments with poor dependency management (e.g. C, C++) benefit from having an installable system.
> Even assuming they have the same version of macOS, homebrew constantly evolves so getting everyone developing with the same version of dependencies as you use in prod is super painful.
If someone is using homebrew for their development dependencies, unless they are targeting a release to homebrew, kindly ask them to stop doing this.
Setting up pyenv, the correct version of Python, nvm, Postgres, etc. manually takes time, and our setup guides have grown a lot in the past 5 years...
With containers, all those long setup guides can boil down to `docker-compose up -d`.
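Something along these lines (service names and versions here are just an example, not our actual stack):

    cat > docker-compose.yml <<'EOF'
    version: "3.8"
    services:
      db:
        image: postgres:13
        environment:
          POSTGRES_PASSWORD: dev
      web:
        build: .
        ports: ["8000:8000"]
        depends_on: [db]
    EOF
    docker-compose up -d    # whole stack up, no pyenv/nvm/Postgres install on the host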
I do see value in it, though, after the comments and after doing some additional reading. I could see myself pushing this direction if the teams I worked on had higher churn rates or more frequent new hires. Also, I think that it lends itself better to certain tech stacks / languages / ecosystems than others.
Personally I would use virtualenvs with Python to solve that problem you described.
Note: I too default to virtualenv for local development; however, there are use cases where it becomes insufficient.
Docker is not a general solution to this problem because it ties you intimately to Linux, whether directly, or through VMs, or compat layers.
Having all versions of your language and framework installed at the top level is a huge pain in the ass, since they inevitably interfere with each other. Having separate containers with all the necessary dependencies for each app is a lot more manageable.
I don't use docker with any of these projects. They're mostly legacy for us at this point and are shipping to EC2 instances directly.
Our more current projects do use Docker, however, and we're doing development along side of Docker in those instances and that seems to be working fine for us.
I do appreciate that a dev container would / may be a better approach to this for other reasons and especially, potentially, other languages and ecosystems, though.
What's the advantage of running your dev environment in a container?
You could reconfigure your laptop to look exactly like the production target, but then you have to keep doing that every time you change projects.
Before Docker, I used VMs for this, but VMs have certain disadvantages that Docker addresses. Like size, and documentability. Every time someone wanted me to look at a project, we had to figure out a way to transfer and store a copy of a 20+ GB VM. And they couldn't tell me everything they did to create that VM from scratch, because VMware doesn't do that and neither does Hyper-V. With Docker it is just a small text file that describes everything it takes to create what was previously a massive, undocumented VM image. It forces you to document how to create the environment, and it saves on the space and time of transferring VMs around.
Until Docker came along, it was a royal PITA. I always dreaded getting a new laptop or something breaking, as it took forever to set everything back up again, and it was never quite the same.
Docker changed all that. It forces you to configure everything in a reproducible way in a Dockerfile - and it's much simpler than trying to come up with scripts to install and configure everything in Windows, and I'd say it's also quicker than trying to come up with scripts for a Linux VM, just because you can spin containers up and down so quickly.
Docker has been a game changer for dev/test.
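To make that concrete, the "documentation" is just the Dockerfile itself (this one is a made-up, minimal example of a Python-ish environment, not any particular project):

    cat > Dockerfile <<'EOF'
    FROM ubuntu:20.04
    RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential git python3 python3-pip
    COPY requirements.txt /tmp/
    RUN pip3 install -r /tmp/requirements.txt
    EOF
    docker build -t project-dev .   # a couple of KB of text instead of a 20+ GB VM image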
If you mean why use dev environments in containers:
2) re-use of container creation scripts for different environments,
3) isolation from your actual OS,
4) ability to run the same OS/libs/etc as the final deployment,
5) tons of base images with different environments already configured - from LAMP to data science,
6) easy sharing with others, team
7) ability to work with 1-2-5 or 100 different environments, with different OSes, versions, libs, python versions, whatever.
As for why have your IDE/editor work from inside a container (as described in the article):
Well, because you get all the benefits of containers (as above) PLUS get to use the editor as if you were programming directly on the target machine (including having visibility of installed libraries for autocomplete, running code directly there, and so on).
It's the same thing people have been doing with running Emacs in "server" mode inside another host, and programming with Emacs client on their machines as if they were locally at the machine.
For me, this is the major benefit. I don't have to worry about installing new tools or libraries and how they interact with my primary OS. While this isn't a huge issue for many, I don't want to have to worry about how Go, Python, Java, etc... are installed on my Mac. I like being able to pull in a Docker container with everything already setup (or a customized one). Then when I throw away a project, I don't have orphaned installations on my Mac.
Don't get me wrong, I am a fan of containers (in particular LXC), but I wouldn't list those benefits as if they are unique or novel to container based workflows.
Edit: To be clear, there _are_ benefits to containers over VMs, just not the things you listed above from my perspective.
Even the hurdle of SSH'ing to a VM is more cumbersome than `docker run`.
Certainly this could be automated and scripted, but the Docker solution is so... streamlined.
And I say this as someone that used to use Vagrant. With smaller installs available (such as Alpine), maybe more modern VMs would be just as easy as containers...
Also -- for me, VirtualBox was just "meh". It worked, but really wasn't that great. It always seemed like it took too many resources to run. That was another issue with Vagrant. (And yes, I did also use Vagrant with VMWare, but again -- that's a lot of overhead).
But I'd say VMs and containers solve different problems. In the case of dev environments, VMs are too "persistent" and accumulate personal cruft very quickly. Container tooling can be built to be noninteractive.
I also find the workflow of creating Dockerfiles to be much smoother than cobbling together scripts for a VM.
Plus Vagrant was a real PITA to get working on Windows (at least it used to be - I think I eventually gave up trying to get something running on Windows 7).
No, docker server runs directly on top of the OS as a native program.
As for Docker containers managed by the Docker server, they are running on top of a supervisor - not in a full VM.
"With the latest version of Windows 10 (or 10 Server) and the beta of Docker for Windows, there's native Linux Container support on Windows. That means there's no Virtual Machine or Hyper-V involved (unless you want), so Linux Containers run on Windows itself using Windows 10's built in container support".
In any case, as Linux and macOS prove, there's no need for docker to have to run on a VM. And it seems there's no need on Windows either since 10.
I don’t know what happened with that but I was not wrong and Docker for Windows does still run in a VM: https://docs.docker.com/docker-for-windows/install/
I assume you don’t use Docker on Windows and just pasted the first google result?
Well, VMs are like overweight containers. Containers make "all of those benefits" easier, more performant, and more lightweight.
Super easy for new team members to get started on a project. No need to manually install dependencies.
Environment versioning in git and docker. Your local environment gets automatically updated with a git pull.
I see this brought up a lot as an argument. So why do we want this? How often do people switch companies? Once every 3 years on average or something? Getting your development env set up takes what, a few hours max every 3 years?
It's not unusual for large organizations to have internally hosted registries (Artifactory), source control and network proxies. This usually requires setting up different config files (.npmrc for Node.js/npm), installing custom root certificates, SSH keys, etc. None of that includes project/team-specific configurations and workflows.
Take all that and multiply it by thousands of developers and you have a recipe for an endless stream of Slack chats, email chains, and Teams messages repeating the same config questions and answers.
If you can reduce all that down to a single docker pull, while making sure everyone's development environment is consistent, it can be a big win.
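For illustration (file and registry names here are hypothetical), the one-time pain gets captured in a base image instead of in a wiki page:

    cat > Dockerfile <<'EOF'
    FROM node:16
    COPY corp-root-ca.crt /usr/local/share/ca-certificates/
    RUN update-ca-certificates
    COPY .npmrc /root/.npmrc          # points npm at the internal registry
    EOF
    docker build -t registry.corp.example/node-base:1.0 .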
* You want means to keep all of the dev environments in sync so you don't get "works on my machine" problems.
* If you update something, then you need a way for everyone to have their environment reconfigured.
* As the number of projects/stacks/developers scales this becomes a bigger and bigger issue.
I've used some Ansible in combination with a shell script wrapper to handle some of this kind of stuff. Even still, it takes a lot of hands-on support to make it all work. So, if you can get something like this to scale, it might be a big win ... if...
In docker world, I create an MR that updates the dev container and deploy container dockerfiles at the same time, check that it runs tests, and merge it in. I push a new version of the dev dockerfile, and have the .vscode/devcontainer.json reference that new tag. Next time all the devs open up this repo, they'll get notified they need an update. You just updated a dev dependency across the whole group in a source-controlled way.
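Concretely (registry and tag names invented here), the devcontainer config can be as small as an image pin, and the MR just bumps the tag:

    mkdir -p .devcontainer
    cat > .devcontainer/devcontainer.json <<'EOF'
    { "image": "registry.example.com/team/dev-env:2021.07" }
    EOF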
What's your way to do it? Email everybody?
Updating a postgres version comes to mind as one of the possible differences and that usually only is an issue when working with pg_dump and pg_restore with different versions.
Good point nonetheless. I am not sure whether it is worth the work of maintaining dev containers, plus the performance hit you take vs. running a database directly, for example.
Just yesterday, I ran into an issue where a set of Node unit tests were failing. My colleague and I were both getting failures, but different failures. The reason: different versions of Chrome, and thus different versions of the Chrome integration plugin.
Given that we have effectively no control over Chrome's auto-updates, we'll never have truly identical development environments. A container with headless chrome would have resolved this for us.
On larger teams, setting up a consistent environment like that also makes it easier for developers to collaborate. I've had experiences where attempts to pair program or share utility scripts generally stumbles and fails due to everyone's environment being a special snowflake.
Syncing the changes is left as an exercise for the reader, but I use a common git repository.
It's not about you. The people coming in generally need confirmation and help when setting up their environments. Someone there would have to take time out of their day to help you. A few hours, a few days, a few weeks, a few deleted companies (https://news.ycombinator.com/item?id=11496947).
Here's an even more fun one: https://news.ycombinator.com/item?id=14476421
It's easy to load a container with some stored test state. It's easy to load a completely fresh environment. It's easy to run multiple instances of things (with docker-compose).
It's easy to totally shut down the environment.
It's easy to work on different branches with different/conflicting dependencies and juggle containers.
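E.g. two branches with conflicting dependencies can run side by side (names and ports here are invented):

    docker build -t myapp:main .                         # on main
    git checkout feature-x && docker build -t myapp:feature-x .
    docker run -d --name app-main      -p 8000:8000 myapp:main
    docker run -d --name app-feature-x -p 8001:8000 myapp:feature-x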
- Clone project
- Build container
If you're in a polyglot shop there are HUGE productivity gains in not needing to setup your environment manually, or worse, risk that vital information about it is distributed as tribal knowledge.
Plus, if your project has external dependencies like DBs, S3, etc... you can use docker-compose with VS Code as well.
Here's our current base go template, you only need Docker+VSCode on your system to get started: https://github.com/allaboutapps/go-starter
* As all IDE operations solely run within the local Docker container, all developers can expect that their IDE will work the same without manual configuration steps.
* We can easily support local development in all three major OSes (MacOS, Windows, Linux) and even support developing directly in your Browser through GitHub Codespaces.
* Developing directly inside a Docker container guarantees that you use the very same toolset, which our CI will use to build these images. There are no more excuses why your code builds differently locally versus in our CI.
I, personally, hope that JetBrains comes up with something similar which will allow devs to use the same workflow with the JetBrains IDEs.
Just launch the respective container and you are good to go.
If you are full stack or work with multiple programming languages there is no need to learn the "equivalent" of virtualenv everywhere else.
Also with docker there is no setup/installation involved. You just pull the image and that is it.
Also virtualenv requires that you already have pip/python installed. Docker requires nothing (apart from itself). So you can instantly launch Java/Node/Erlang/Haskell whatever without any SDK/libs installed.
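E.g. a throwaway environment for a language you don't even have installed (versions picked arbitrarily):

    docker run --rm -it python:3.9 python                      # REPL, no Python on the host
    docker run --rm -v "$PWD":/app -w /app node:16 npm test    # run a Node project without nvm/Node installed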
There’s a “pets vs cattle” angle here, too. Something goes wrong, just pave over it and start again.
YMMV, of course, but I can’t imagine going back.
If I use 4 programming languages why learn 4 tools instead of one (Docker)?
With docker its a single command run after installing docker.
And let's be honest, virtualenvs and all the various ways they're managed and updated and such aren't really bulletproof either.
It's really, really refreshing how much "yeah I managed to break my dev environment" or "I followed the wiki for how to start developing your project but it 'didn't work'" can be avoided if it's just "run docker(-compose)?".
And this is especially true for junior developers, who are probably fresh out of college and won't be familiar with lots of the tooling that exists in the world. Not that docker is a simple tool, but it can hide so much complexity that it is easier to just show people how to docker run and docker build and such.
And if you're working with multiple projects in multiple languages, why bother learning each language's equivalent of virtualenv (assuming it has one), when there's a universal method available?
The TL;DR is there's a lot of things to set up yourself without Docker in order to run a typical web application and it's different depending on what OS / version you use. Some of these things are unrelated to Python too, such as running PostgreSQL, Redis, etc. but these are very important to your overall application.
Docker unifies that entire set up and the barrier of entry is installing Docker once and then learning a bit about it.
It saves so much time to just pull down the repo, run "docker-compose up" and have everything running, almost exactly the way it's running in production. With the right node or php version, databases, Elasticsearch, Redis etc.
Just deploying changes affecting both assets carefully in production can be quite awkward relative to a code-only deployment. You might need to do this in several stages, updating your code to allow for but not require the DB changes, then updating the DB schema, then maybe updating existing DB records, then at least one more round of code updates so everything is running on the new version of the DB. And you might need to make sure no-one else on your team is doing anything conflicting in between.
Doing the same in a staging environment isn't so bad because you're running essentially the same process. However, for a development environment where you want the shortest possible feedback loops for efficiency, you need to be able to spin up a DB with the correct schema, and possibly also with some preloaded data that may be wholly, partially or not at all related to controlled data you use to initialise parts of your database in production or staging environments.
It is not always an easy task to keep the code you use to access the database, the current schema in the database, and any pre-configured data to be installed in your database all in sync, and to ensure that your production, staging/testing/CI facilities, and local developer test environment are also synchronised where they should be.
It kind of amazes me that there doesn't yet seem to be a way of handling this in the web development community that has become a de facto standard in the way most of us look at tools like Docker or Git these days.
If you're an individual, then it would benefit someone who has multiple devices and/or multiple operating systems and doesn't want to manage their environment across all those devices and operating systems. For example, I personally have OSX, Windows, and several different Linux distributions on my laptop itself. My desktop also runs several operating systems.
Managing software across all of those is a pain. With Docker, I only have to manage the containers, and just have Docker installed on all the operating systems. Instead of managing like 50 different dependencies across 7 systems (think 7x50), I only have to manage Docker inside each system.
If you're working for a company, they will have their own dev environment. Instead of setting up and troubleshooting all their dependencies on your computer, you can just use their containers.
It's frustrating at best and catastrophic at worst when you have code working on your machine, deploy to prod, and then discover an incompatibility.
VS Code can be problematic in that respect - typically the dependencies used by an extension assume they can reach out to other servers.
I treat the creation of the initial internet-facing container as a separately managed snapshot step that grabs dependencies; that dependency-collecting container is then used to build the particular dev, build, test, runtime and release containers.
I.e. something like VS Code isn't installed in the internet-facing container; it is installed in the offline build of a dev container. This is where the difficulties lie in my approach.
Nothing at that scale is a "mess". It's simply what was created by our collective distributed system of humans and we should appreciate that it's not really a problem that can be solved instead of talking about it as if we could "fix" it.
The fundamental problem, as you say, is that our dependency ecosystems don't meet our requirements. Docker is one way to avoid the problem without fixing it since it's easier. Forward progress would be to fix the problem.
On one hand, Docker removes some pressure to fix the problem and encourages perpetuating it. On the other hand, maybe it gets people to think about the problem more. I don't know which influence is stronger.
If you have a roadmap for how 150 or fewer engineers can "Clean up the inherent mess that is modern computing" in less than 5 years, then I'd be eager to read it. In the meantime, tools which enable people to manage the symptoms of that mess are good.
The Chunnel lets us work around the fact that the ocean has not yet been boiled away.
You’ve got to pick your battles. If you’re, for example, a front-end dev working right up at the top of the stack, then delivering value to your clients means getting them their marketing webpage, CRUD app, what-have-you. To do that you have to abstract away a vertiginous amount of stuff under you, all the way down the stack. We’re all standing on the shoulders of giants.
Docker is an amazing tool for just this sort of thing.
The great mistake happened way back in the 1980s (maybe earlier) when most OS developers didn't implement a proper permissions system for executables. Basically, the user should always be prompted to allow a program read/write access to the network, the filesystem and other external resources.
Had we had this, then executables could have been marked "pure" functional when they didn't have dependencies and didn't require access to a config file. On top of that, we could have used the refcount technique from Apple's Time Machine or ZFS to have a single canonical copy of any file on the drive (based on the hash of its contents), so that each executable could see its own local copy of libraries rather than descending into dependency hell by having to manage multiple library versions sharing the same directories.
Then, a high-level access granting system should have been developed with blanket rules for executables that have been vetted by someone. Note that much of this has happened in recent years with MacOS (tragically tied to the App Store rather than an open system of trust).
The only parts I admire about Docker are that they kinda sorta got everything working on Mac, Windows and Linux, and had the insight that each line of a Dockerfile can be treated like layers in an installer. The actual implementation (not abstracting network and volume modes enough so there's only one performant one, having a lot of idiosyncrasies between docker and docker-compose, etc.) still leaves me often reaching for the documentation and coming up short.
That said, Docker is great and I think it was possibly the major breakthrough of the 2010s. And I do love how its way of opening ports makes a mockery of all other port mapping software.
Like that completely destroys the reproducibility of your container!
Reproducibility is a continuum, not a binary. We've chosen a point on that continuum that we believe gives us the best trade-off between reliability and maintenance effort. 100% from-the-ground-up reproducibility would be ideal, of course, but there's a cost-benefit tradeoff, and we're not being paid to be perfectionists.
For all its warts this is something that npm gets right. Package management is a tool for software development, not software distribution to end users.
Do you really enjoy having separate packages for the same software for each flavor of package manager that does the same thing? Wouldn't it be nice if we didn't have to use containerization to distribute the same software to machines running a kernel with a stable ABI?
It makes absolutely no sense to pollute a global namespace with these.
Like it or not, users don't care if your software has interchangeable parts. They care if it runs on their system. The only sane way to guarantee a piece of software runs outside your developer machine is to include its dependencies during distribution and packaging (not refer to them - which is what package managers require). The less sane way is to use containers, but those are required when developers don't package their software sanely.
This doesn't preclude users from installing software or replacing interchangeable components should developers support it. What it prevents is disgusting bugs and workarounds because dev A built on distro B while user C wants to use it on distro D, and the packages have to be built separately for everyone because the distro package managers don't agree with each other on what things are named or how they should be built.
Containers are an even more insane approach, so maybe we are in violent agreement.
Contrast that with something like RiscOS AppDirs, classic Mac applications, or Next/Mac Application Bundles.
If what you're saying were true, unmodified software wouldn't work in a Docker container, either.
EDIT: Here's the exact command to do what you're saying isn't possible: apt-get download package; dpkg -i --force-not-root --root=$HOME package.deb
Regardless, let's assume it did work. Here's what it would do: unpack the package, replacing '/' with '$HOME' in the destination paths. That's it. That software will not magically be able to find its associated libraries and configurations without the user mucking with environment variables at best, or chrooting or sandboxing such that $HOME appears to it to be a wholly separate installation.
That's not how sane systems do this sort of thing. I have been trying to do this sort of thing in Linux for pretty much as long as I have been using Linux, because I loathe the way Linux installs software, and in 20 years it has never been straightforward. AppImage is as close as we get, and software needs to be carefully built and packaged for that.
> If what you're saying were true, unmodified software wouldn't work in a Docker container, either.
> [...] or using namespacing and chroot to build it a sandbox wherein its baked-in paths actually work.
EDIT: It seems to be significantly easier on dnf, to the point that it could be trivial to add full support for home-directory installs.
It also doesn't really, since this is a system level issue. Applications need to package their dependencies, not the other way around. Dependencies form graphs, not flat lists. The existence of a global cache of libraries shared by all programs is a total inversion of requirements.
For instance, we create base images configured with SDKs, libraries, frameworks, configurations, binaries, etc...
Those base images are then built, versioned, tagged and then pushed to our container repos ready to be used by developers, CI/CD, etc...
Images based on these base images never need an apt-get, pip install, etc... If there is a dependency missing, an update needed, etc., we'll create a new base image with it, following the steps above.
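Mechanically it's nothing exotic - the value is in doing it consistently (registry and tag names below are placeholders):

    docker build -f base.Dockerfile -t registry.example.com/base/python-sdk:1.4.0 .
    docker push registry.example.com/base/python-sdk:1.4.0
    # project images then simply start with:
    #   FROM registry.example.com/base/python-sdk:1.4.0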
I would love some constructive feedback.
The reason I like the approach you describe is because it keeps things simpler at the start of a project and consistent across most projects.
I also think it makes sense to have those support containers build on a schedule. For example, you build your build/CI container weekly and that’s the CI container for the week. On demand project builds use that CI container which has all dependencies, etc. baked in.
It would be nice if CI systems would let me explicitly tag builds as (non)reproducible.
And just to be clear, I'm not building the base image (no human is). The base image is also created within its own build pipeline that has all of the necessary things to track its materialization and lineage: logs, manifests, etc.
Once the image has been thoroughly tested and verified (both by humans and verification scripts) each time a change is merged, the git repo is tagged, docker image is built and tagged and then pushed to the container repo.
Perhaps you could explain what you mean by the other way? Why would you ever need to recreate the base image? Perhaps if the container repo dropped off the face of the earth and had to be created from scratch?
Oh no! CMake is too old a version to support a dependency we have to build in the image construction. So we better pull in a version of CMake from a PPA which is community maintained, and build it from source/etc.
Personally, I don’t see the issue with it if you’re at least being a little careful— don’t make obvious mistakes like installing latest/nightly packages automatically, etc.
In your image that extends from the base image, you'll typically update the package repo cache (it is typically cleared after building the base image, to reduce the size), then install whatever packages you want.
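I.e. the child image typically starts like this (base image name and package list are just an example):

    cat > Dockerfile <<'EOF'
    FROM mycompany/base:2021.06
    RUN apt-get update && apt-get install -y --no-install-recommends libpq-dev \
     && rm -rf /var/lib/apt/lists/*
    EOF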
Like you, I don't see a particular issue with updating system-level packages - especially from a security standpoint.
for a more streamlined experience...
(GitPod does a similar thing where you can append the URL to gitpod.io, but they can't use the VSCode extension marketplace).
All the services I mentioned are fully open for everybody today.
I know, I know. Unthinkable.
Remote SSH works. Local devcontainer works. But mixing the two requires configuring the docker engine settings to point to the remote. This forces other projects to also run on the remote machine.
This was a problem as of 2 months ago.
Oh! My sides!
As much as it is a bunch of small paper cuts to support varied developers, it makes you more resilient to a big shift.