
Docker is moving more and more in a direction I don't like.

EDIT (adding the missing content): I mean it is currently mostly aimed at the big users. There isn't much for "small" users. The big things like kubernetes etc. are really hard to configure and maintain. It's easier to maintain ansible / puppet / chef / etc. scripts than to maintain a real "docker" environment. Even looking at deis, flynn or openshift, it's not just "run this, upgrade with that".

After you set up the whole thing, you need to create huge buildpack scripts or Dockerfiles or kubernetes configs or whatever. You just needed process isolation, and now you're building an infrastructure on top of an infrastructure.




I think docker looks cool at the moment, but it doesn't provide much value yet. Some people are using it just for packaging their apps, but that's something that was already solved, or was supposed to be solved, by the language being used.

The direction it is going in is to create an "internet operating system": you upload your application and don't care which server it is running on.

That problem has not been solved yet; what we have currently is just a bunch of tools that you have to put together yourself.

The real power of this will come once cloud providers allow you to simply upload the images without having to build that infrastructure yourself.

I think you're right to fall back to ansible / puppet / chef, because this technology is simply not ready yet.

This is especially true if you're using a public cloud (and while you can make it work on AWS, it will cost you more, both in the amount of effort and in the overhead it imposes. Remember, AWS still charges you per VM).

There might be some benefit to using it right now if you have your own datacenter with physical machines. It could provide cost savings compared to running your apps on dedicated physical servers or even VMs.


Certainly. There's an excellent opportunity there if someone is willing to execute. Would you (the community) pay for a product? Or support an open source project through consultancy? Or are we going to sit and wait for a large engineering organisation to build it internally and open source it?


I would pay, yes. However, it would need to be an excellent product, and I don't think any product in the near term could replace an existing ansible/puppet/etc. workflow that only amounts to a bunch of lines (less than 1000 for multiple projects).

As said, the only gain from docker would be process isolation, so it would need to be really, really simple and usable on low-end hardware (as the other solutions already are). And getting process isolation with cgroups isn't too hard on newer kernels (# systemctl set-property httpd.service CPUShares=500 MemoryLimit=500M).
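
For comparison, a rough sketch of that plain-systemd approach without docker (the unit name and binary path are made up, and the property names vary between systemd versions):

    # run an arbitrary binary as a transient, resource-limited unit
    systemd-run --unit=myapp -p CPUShares=500 -p MemoryLimit=500M /usr/local/bin/myapp

    # check that the limits were applied
    systemctl show myapp.service -p CPUShares -p MemoryLimit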

So what the product needs to have:

  - process isolation
  - easy configuration
  - configure the os/software and update it easily
  - nothing more than a bunch of lines per project (no Dockerfile fiddling)
  - binary / git rollouts


Installing kubernetes isn't actually as difficult as you've made it out to be. You'd be able to draft a workflow with less effort than you would in comparable circumstances using puppet, and you'll get things such as health checking and failover of your apps mostly for free.
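
For example, here's roughly what that failover looks like in practice (the image name is made up and the exact kubectl invocations vary by kubernetes version):

    # run three replicas of a (hypothetical) app image
    kubectl create deployment myapp --image=myorg/myapp:1.0
    kubectl scale deployment myapp --replicas=3

    # kill one of the pods and watch a replacement get scheduled automatically
    kubectl delete pod <one-of-the-myapp-pods>
    kubectl get pods -w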

If you're well invested in puppet, using puppet is going to be easier because you know it. You can happily use docker with puppet. Stop puppet from installing $APP and instead use it to docker pull && docker run $APP.
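
Concretely, puppet would just end up wrapping shell along these lines (image name, tag and port are made up):

    # fetch the image and run it detached, restarting it if it dies
    docker pull myorg/myapp:1.2.3
    docker run -d --name myapp --restart=always -p 8080:8080 myorg/myapp:1.2.3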

This means the logic for building your application has obviously moved to the Dockerfile. You cannot currently get rid of this logic, only hide it behind abstractions. I prefer it living in the app's repo as it's a nice separation of concerns, but you would obviously prefer it to be magic, which you can have, but at the price of versatility.

If you are able to write code to build and deploy apps, then moving over to using docker should be pretty trivial for you. However, I see docker as a replaceable part, whilst kubernetes might actually be here to stay.


* You need network isolation as well; there's no point in doing process isolation without it. And thus forwarded ports (see the sketch after this list).

* You need shared folders to persist necessary files. And thus volumes, and a few years later distributed volumes.

* You need to isolate not just the process but all its dependencies as well; there's no point in having a shared .so file which everybody can change while just a single process is isolated. And thus a whole sandboxed container.

* And then you have to deal with the size of a full sandbox, until you need some way to share unchanged files. And thus images and layers.

* And so on and so on.
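
A minimal sketch of where those pieces end up as docker flags (image name, port and host path are made up):

    # forwarded port, persistent volume and sandboxed process in one command
    docker run -d --name myapp -p 8080:80 -v /srv/myapp/data:/var/lib/myapp myorg/myapp:1.0

    # and the shared, unchanged files show up as layers of the image
    docker history myorg/myapp:1.0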

Big things always start small. At least in docker's case they did start small, and they're still small and lean individual projects. Feel free not to use docker compose or anything else.



I totally agree with you and that's why I started https://github.com/slicebuild

The project is only a month and a half old, so if you want, I can talk you through it privately.


You're describing a PaaS. There are already several.

My favourite is Cloud Foundry, because I've worked on it and I trust the way it's built.

Here's how I deploy an app:

    cf push
Done.


docker-compose and tmux and runit work.
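
For what it's worth, a rough sketch of how docker-compose and runit can fit together, assuming a runit service directory at /etc/sv/myapp and a compose project in /srv/myapp (both paths made up):

    #!/bin/sh
    # /etc/sv/myapp/run -- runit supervises docker-compose running in the foreground
    cd /srv/myapp || exit 1
    exec docker-compose up --no-color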


So how is docker helping you? I mean, tmux and runit are really good without docker, and docker just gives you a few things, like process isolation (which you could also get with cgroups and other container technologies that probably work better with runit).


The ability to reproduce a build environment, and to have others do the same. I can spend time working out which dependencies are needed on various distros, or I can just ship a Dockerfile in my Git repo.
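
That is, anyone who clones the repo can rebuild the same environment with something like this (the image tag and the test command are just placeholders):

    # build the image described by the Dockerfile, then run the test suite inside it
    docker build -t myapp-build .
    docker run --rm myapp-build make test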



