A license term was not respected; the license allows use, modification, etc., but not removing the copyright notice or changing the copyright to Microsoft.
It's simply ignorance. For example, out of the 600 comments on this post, yours is the only one that clearly articulates what actually happened, and it's all the way at the bottom. It goes to show the headspace most developers are in. This mistake will be repeated by many others until the end of time.
EU Directive to mandate the creation of a Truth and Democracy Committee that will run a social media platform financed by an EU-wide tax on everything "obviously not good for you".
It’s not. Its core is open source, but the actual build that is branded VS Code and that people download is not. And that’s not even counting key extensions that many people use, such as the SSH remote and Pylance, which are themselves proprietary.
If you want to use only open source code, you need a rebuild like VSCodium.
Why not? I can't imagine you'd deploy an application without some form of service management (even if it's just throwing it in tmux), and unless you've gone out of your way to use a non-systemd distro, systemd is built in and works for both user and root containers.
Most places (though not all) that I've seen using docker or docker-compose are throwing them in systemd units anyway.
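For reference, one common pattern is a thin unit file that just shells out to compose. This is a hypothetical sketch (paths and the stack name are placeholders), not anyone's production config:

```ini
# /etc/systemd/system/myapp.service -- hypothetical wrapper around a compose stack
[Unit]
Description=myapp stack via docker compose
Requires=docker.service
After=docker.service network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/docker compose up -d --remove-orphans
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```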
I was more questioning why a .container file would work for system services versus application services, since basically all those same problems occur for system services too.
Either way this type of argument just comes down to "should I cluster or not", but to think out loud for a bit, that's just basic HA planning: the simplest solution is keepalived/etc. for stateful services and standard load balancing for stateless ones. Don't want load-balanced services running all the time? Socket activation. Don't have a separate machine? Auto-restart the service, and you can't cluster anyway. The only thing you'd really have to script is migrating application data over if you're not already using a shared storage solution, but I'm not sure there's any easier solution in Kubernetes.
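To make the keepalived part concrete, a minimal VRRP failover config really is only a handful of lines; the interface name and addresses below are placeholders and would differ per site:

```
# keepalived.conf -- minimal VRRP sketch; the standby node flips state/priority
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the standby node
    interface eth0
    virtual_router_id 51
    priority 100            # lower on the standby node
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24       # the floating service IP
    }
}
```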
Not having to install and manage Kubernetes? Unless you're paying someone else to run it for you (in which case this entire conversation is sort of moot as that's way out of scope for comparison) that stuff is all still running somewhere and you have to configure it. e.g. even in small-scale setups like k3s you have to set up shared storage or kube-vip yourself for true high availability. It's not some magic bullet for getting out of all operational planning.
Also, even split into separate components, it's not really "all of that". Assume an example setup where the application is a container with a volume on an NFS share for state: on a given node we'd need to install podman and keepalived, plus a .container file, a .volume file, and the keepalived conf. An average keepalived conf is probably 20-ish lines (including notification settings for failover, so drop about 8 lines if you don't care about that or monitor externally). Looking at an application I have deployed, a similar .container file is 24 lines (including whitespace and the normal systemd unit boilerplate) and the NFS .volume file is 5 lines. So ballpark 50 lines of config, if you or others want to compare configuration complexity.
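As a rough illustration (not my actual files, and assuming a reasonably recent Podman with Quadlet support), the .container/.volume pair for that NFS-backed setup could look something like this; the image, paths, and NFS server address are placeholders:

```ini
# myapp.container -- Quadlet unit for the application container
[Unit]
Description=myapp container
After=network-online.target
Wants=network-online.target

[Container]
Image=registry.example.com/myapp:1.2.3
PublishPort=8080:8080
# References the named volume defined in myapp-data.volume below
Volume=myapp-data.volume:/var/lib/myapp
Environment=APP_ENV=production

[Service]
Restart=always

[Install]
WantedBy=default.target

# myapp-data.volume -- NFS-backed named volume
[Volume]
Type=nfs
Options=addr=192.0.2.20,rw
Device=:/exports/myapp
```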
Also, fun fact, you could still use that Kubernetes manifest. Podman accepts .kube files to manage resources against itself or a Kubernetes cluster; I've been recommending it as a middle ground for transitioning slowly from a non-clustered deployment without going all in on k8s.
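A .kube file is about as small as Quadlet units get; a sketch like this (the manifest path is a placeholder) just points Quadlet at an ordinary Kubernetes YAML manifest and lets podman run it as a systemd service:

```ini
# myapp.kube -- hypothetical Quadlet unit wrapping a Kubernetes manifest
[Kube]
Yaml=/etc/containers/systemd/myapp.yaml

[Install]
WantedBy=default.target
```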
You can always use something like Docker Swarm or Nomad which achieves the same end result as Kubernetes (clustered container applications) without the complexity of having to manage Kubernetes.
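For example, with Swarm you can reuse an existing compose file more or less as-is; roughly (the stack name is a placeholder):

```sh
# Turn the current node into a single-node swarm, then deploy the compose file as a stack
docker swarm init
docker stack deploy -c docker-compose.yml myapp
```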
Just spawn another VPS with your application and connect it to the load balancer. Even better: use Fedora CoreOS with a Butane config and make that VPS immutable.
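A minimal Butane sketch (the SSH key, registry, and file contents are placeholders), which you'd transpile to Ignition with the butane CLI (`butane config.bu > config.ign`) before provisioning the VPS:

```yaml
# config.bu -- hypothetical Fedora CoreOS Butane config
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... # placeholder public key
storage:
  files:
    - path: /etc/containers/systemd/myapp.container
      mode: 0644
      contents:
        inline: |
          [Container]
          Image=registry.example.com/myapp:1.2.3
          PublishPort=8080:8080
```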
We run large, auto-scaling clusters with compose and a small orchestration thing we have been using since the 90s, written in Perl (before compose we had our own compose-like tool using chroot). No issues. For decades.
Cool - everyone knows it's possible to hack together something like that, or anything else. But is it a good idea to spend time and resources starting to do that now if you haven't already been doing it with a sketchy Perl script since the 90s? Not really.
You do you. I think k8s is overly complicated stuff that almost no companies (outside Google/Meta/etc.) need and that some people here peddle for some reason. That's fine. We like making a profit instead of wasting time on the latest nonsense.
I don't think it's a good idea to spend time and resources learning complexity you never need. You are probably going to say that's not true because your cloud host has this all set up, but yeah, that's where making a profit comes in. We save millions a year by not cloud hosting. And managing your own k8s cluster is apparently quite hard (at least that's what even the fanboiz here say).
Starting today, I would probably use Go instead of Perl, but I would do the same. It's much simpler than Kubernetes. But sure; for our goals, we like simplicity and don't need resume-driven development.
I migrated several services from Kubernetes to compose and couldn't be happier. K8s was a maintenance nightmare, constantly breaking in ways that were difficult to track down and unclear how to fix once you did. Compose configs are simple and readable with all the flexibility I need and I've literally never had an issue caused by the platform itself.
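For a sense of scale, a typical service definition is only a few lines; this is a generic illustration (image, port, and paths are arbitrary), not one of the migrated services:

```yaml
# docker-compose.yml -- minimal single-service example
services:
  web:
    image: nginx:1.27
    ports:
      - "8080:80"
    restart: unless-stopped
    volumes:
      - ./site:/usr/share/nginx/html:ro
```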
That's what I thought the first day, the first month and maybe even the first half year.
Now it's 5 years+ running, all fine. No major hassle (crossing fingers!), no configuration hell. Never touched k8s, and not going to in the near future.
Maybe it just works and is sufficient for my case.
Sure. But that’s not the point of the article. The article’s point is “Docker Compose is too complicated”, and then it proposes a half-baked implementation of a solution that hand-waves away all that complexity with a scenario that fulfills their specific use case but will completely fall apart when it diverges from the specific way they’ve decided to build the abstraction layer.
The problem the author refuses to accept is that deployment is inherently complex. It requires configuring a lot of things because it supports every use case. So you either have a generic, complex tool that everyone can use (like Docker Compose or Kube), or you have a simpler, specific tool that satisfies your use case but only works for a tiny subset of all users.
Note that I’m not saying Docker Compose is perfect. The syntax is a bit arcane, it’s complex to understand, etc. But removing the complexity by removing configuration options is not the solution here. Instead the author should focus on different approaches and tools to manage the same existing level of abstraction.
And for what it’s worth, that’s essentially what Helm is for kube - a way to manage and hide away the complexity of kube manifests (but still use those manifests under the hood). But frankly, docker compose doesn’t need a helm. Because docker compose, as you point out, has value not as a deployment tool, but as a single file that developers can manage and spin up on their local machines in a manageable way that doesn’t have the author fighting YAML all day.
I would say if the author was actually interested in solving this problem in a productive way, they should first try to see if docker itself is amenable to altering its constructs to provide optional higher abstractions over common concepts via the compose interface natively. If the source tools roll out those abstractions, everyone will get them and adopt them.
> I would say if the author was actually interested in solving this problem in a productive way, they should first try to see if docker itself is amenable to altering its constructs to provide optional higher abstractions over common concepts via the compose interface natively.
docker-compose is a lot of things to a lot of people. When it was created I doubt anyone realized it would eventually be the de facto standard for deploying to homelabs. It's an amazing tool, but it could be better for that specific use. I don't think that segment is important enough to the team that maintains it to warrant the change you're suggesting.
docker compose doesn't need to be overly complex though; I think if it starts to feel that way, you're likely doing it wrong(tm). K8s IS very complex and likely overkill, but it's exactly what you should use if you need it in production. This was a very long ad with an unconvincing argument for a product that addresses the wrong problem.
So you took the time to write a commentary about how the parent wasn't worthy of your counterpoints, but no actual rebuttal. That sort of low-effort / high opinion / no value / zero impact comment (and this post too) is perhaps what's really wrong with internet discourse. It's not about sharing and learning, but hearing your own voice.
you're too lazy to post counterpoints and then call the place sad. you are the problem, my friend! i don't like posting this kind of comment but felt it needed to be called out. i'd prefer if neither of our comments were posted.
I worked for a medium-size company that served, and still serves, ~150 clients (some Fortune 500 included) by deploying prod with docker-compose. It can be done.
Of course they will deny it, they have investors... Read the posts from the engineers - 30 people's research and large model training coming to a grinding halt for a quarter. That's easily worth billions in today's market, can you imagine if OpenAI or Google didn't report any progress on a major model for a quarter?
Can someone please explain why?