
Containers were a mistake. This is all radically more complicated than it needs to be. Running a computer program is not that complicated.



Being able to tell a fellow developer "run docker compose up and you can work" is a lot better than walking them through installing a bunch of tools, each with their own quirks.

I'm not convinced anyone below Google's scale needs Kubernetes, but containers for quickly setting up and running something on my machine are a blessing.


Agreed. Running a modern "computer program" is more than just executing a single binary file.

When you have an open-source application with five or six moving parts (monitors, backend, a database, etc.), being able to deploy it to a VPS with a single docker compose file and have all the containers talk over an internal Docker network without conflicting ports is a GODSEND.
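
For anyone who hasn't seen it, the whole setup is roughly a single file like this (a sketch only; the service names, images, and ports are made up for illustration, not taken from any particular app):

    # docker-compose.yml (illustrative)
    services:
      backend:
        image: ghcr.io/example/backend:latest   # placeholder image
        depends_on: [db]
        networks: [internal]
        ports:
          - "8080:8080"                          # the only port published on the host
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example
        networks: [internal]                     # reachable from other services as db:5432; not published on the host
      monitor:
        image: grafana/grafana:latest            # placeholder monitoring service
        networks: [internal]
    networks:
      internal: {}                               # private network; containers resolve each other by service name

One "docker compose up -d" on the VPS brings the whole stack up together.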


Nobody's pointing a gun. You can go back to making READMEs and hosting perpetual 12-hour 1-on-1s if you want.


Docker can be a huge pain, I agree. But as a senior dev who's had half their career before Docker and half after, it's totally worth it and lets you avoid much worse pain.


As a guy with 25 years in the industry, I have to agree containers are way too complicated. The abstraction mirrors the real world too closely, so now we literally "rack and stack" in YAML: every port, every dependency. Trade your handful of Cat6e cables for Notepad.

I've heard it said the greatest trick ever pulled was the devil convincing mankind he didn't exist.

The second greatest trick, apparently, was Google convincing the world to adopt a solution to a problem few other than Google actually have.


Nah fam. It's turtles all the way down. 40 years in IT, and nothing here is new. Containers aren't fucking new: virtualization capabilities have been in most monolithic kernels since the '80s. VMware decided to kick it off in the 80x86 space, and from there a physical host is still the root of all virtualization, regardless of hypervisor type or ephemeral system segmentation. It's the Devil you know, or the Devil you don't.


I do think something like serverless functions is the better abstraction, but there is no open format I'm aware of for taking them across providers (other than arguably serverless, I suppose), and there has generally been little support or interest in long-running or indefinitely running serverless functions, which makes some applications problematic.


Disclaimer: I work for DBOS. But the reason I took the job is that I think we are solving a lot of the problems that make people choose containers over serverless.

We have an open-source library called Transact[0] that you can run anywhere, including locally, and get durable serverless with state (and even some observability). Then you can deploy it to our cloud[1] and get reliability, scalability, more observability, and a time-travel debugger.

[0] https://github.com/dbos-inc/dbos-transact-py

[1] https://docs.dbos.dev


awesome, will check it out!


Containers are a convenient workaround for the problem of programs having incompatible dependencies, and also for the problem of security not being as good as it should be.

For instance, you want to run one program that was written for Python 3.y, but also another program written for Python 3.z. You might be able to just install 3.z and have them both work, but it's not guaranteed. Worse, your OS version only comes with version 3.x and upgrading is painful. With docker containers, you can just containerize each application with its own Python version and have a consistent environment that you can run on lots of different machines (even on different OSes).
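
Concretely, "its own Python version" just means pinning the base image per application; a rough sketch (file names and versions here are placeholders, not a specific recommendation):

    # Dockerfile for the app that needs the older interpreter (illustrative)
    FROM python:3.9-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    CMD ["python", "main.py"]

The other program gets an identical file starting FROM python:3.12-slim (or whatever it needs), and the two never see each other's interpreter or libraries.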

They're also a lot more convenient than having to go through the arcane and non-standard installation procedures that a lot of software applications (esp. proprietary ones) have.

Yeah, honestly it kinda sucks that we're adding this layer of inefficiency and bloat to things, but these tools were invented for a reason.


> For instance, you want to run one program that was written for Python 3.y, but also another program written for Python 3.z. You might be able to just install 3.z and have them both work, but it's not guaranteed. Worse, your OS version only comes with version 3.x and upgrading is painful.

This is because the Linux model of global, system-wide shared dependencies is stupid, bad, and wrong. Docker and friends are a roundabout way of having a program ship its dependencies.


The Linux model works fine (very well, in fact, because of less HD space and much more importantly, less memory used for shared libraries) for programs that are normally included in the Linux distribution, since the whole thing is built together by the same organization as a cohesive whole. If every random little 20kB utility program were packaged with all its dependencies, the bloat would be massive.

It doesn't work very well for 3rd-party software distributed separately from the OS distro and installed by end-users.

The problem I've seen is that, while pre-Docker there was really nothing preventing ISVs from packaging their own versions of dependencies, they still only targeted specific Linux distros and versions, because they still depended on things included in those distros instead of packaging their own. The big one is probably glibc.

As I recall, Windows went through a lot of similar problems, and had to go to great lengths to deal with it.


> because of less HD space and much more importantly, less memory used for shared libraries

Literally not in the Top 1000 problems for modern software.

> Windows went through a lot of similar problems, and had to go to great lengths to deal with it.

Not really. A 20-year-old piece of Windows software pretty much "just works". Meanwhile it's nigh impossible to compile a piece of Linux software that runs across every major distro in active use.


> A 20-year-old piece of Windows software pretty much "just works"

No, it only works because Windows includes something much like WINE (they call it WoW), so old pieces of software aren't running on the modern libraries.

>it’s nigh impossible to compile a piece of Linux software that runs across every major distro in active use.

Sure you can, with Docker. It's effectively doing the same thing Windows does with WoW. Windows just makes it a lot more invisible to the user.


Nope, that's not how WoW works. It might feel close enough for you but if that's the case then you aren't careful enough with the analogies you make. Think harder, aim for more clarity.


It is an abstraction.


Many abstractions are very bad. Stacking bad abstractions on bad abstractions is why modern software is so slow, laggy, and bloated.


What would be your solution to running, say, 15 different Python applications on the same machine, each requiring a unique set of library versions?


Create 15 pex files. Or possibly 15 PyInstaller executables. Then simply run them like normal programs.
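
For anyone who hasn't used pex, building one looks roughly like this (the entry point and file names are placeholders):

    # bundle one app and its dependencies into a single executable file (illustrative)
    pip install pex
    pex -r requirements.txt -e myapp.main:run -o myapp.pex
    ./myapp.pex    # deps are baked into the file, so they can't clash with the other 14 apps

Caveat: a .pex still runs on whatever Python interpreter is installed on the machine, so this isolates library versions but not the interpreter itself.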


Ah yes, let's not use bad abstractions like Docker, let's use PyInstaller...


Why would you load an entire userspace environment just to manage a Python package's Python dependencies? Seems a little heavy.


https://github.com/claceio/clace is a project I am building for that use case. You can run multiple apps on the same machine; Clace acts as an application server which manages the containers: https://clace.io/blog/appserver/


nix



