
Instead of containers, give me strong config and deploy primitives - awinter-py
https://abe-winter.github.io/blues/2017/04/27/config-vs-containers.html
======
erulabs
So... systemd + Ansible?

I really disliked SystemD before I got my hands dirty with it. Assuming you're
developing with any modern language, the language itself probably wraps OS
differences anyway (Node / Golang / Rust / Ruby / Python / anything-libUV-
based, to name a few); it's not like you can convince developers to change
their habits anyway.

People act like Docker makes builds reproducible by magic, then go on to not
pin any Golang deps, curl internet resources, or skip lockfiles for NPM, ad
infinitum.

There is not, and never will be, a shield operations people can put around their
developers. You just need good developers. If you want SoA scaling or
automatic cloud configuration, why not just use the most established tools
available? Linux's init system and kernel _already do_ resource management,
even outside of LXC. People act like a "cloud scheduler" is the only thing
that lets multiple processes work together.

This whole idea of isolation is also really inane. Each app will have
different disk, io, network, cpu requirements. I also dream of the day
Kubernetes can do this "sorting" better than an experienced operations team
can, but that day is _far off_ (and I will die before I put
redis/mysql/postgres/cassandra/any-other-mature-datastore behind docker and
iptables)

~~~
awinter-py
Yeah. Would be really interesting to see a container technology that people
were eager to use for DB management. Persistent storage is the thing that
hasn't clicked yet.

My guess is this technology will be some kind of cluster management. If there
were a 'platform' for horizontally scaling data that managed sharding,
rebalancing & backups plus gave me user-space hooks to override policies,
databases might slowly & cautiously build on top of it.

Most DBs have their own versions of these (pgpool, mongos, redis sentinel, all of
cassandra & hadoop) but would be cool to centralize the common reqs and
collaborate.

~~~
william20111
SmartOS gives you persistent storage btw. And its lovely ZFS storage...

~~~
justinsaccount
saying smartos gives you persistent storage is like saying docker gives you
persistent storage because you can use

    
    
      docker run .... -v /storage:/storage myapp
    

That's not what people are talking about. If you run a container on smartos
and the physical machine it is running on blows up, what happens to your
persistent storage?

~~~
awinter-py
technically it still persists, just in the form of hawking radiation instead
of bits & bytes. Some people think this is what makes amazon glacier so cheap.

------
wahern

      If you’re deploying C programs that rely on system
      libraries, things may get tricky if you cross flavors or
      versions of linux. But you can probably deploy static-linked
      executables more easily than setting up docker.
    

For various reasons static linking isn't a good idea for C applications. glibc
doesn't work well when statically linked. musl works well. But glibc has
strong backward compatibility using symbol versioning. It's not usually worth
the trouble to build and link against musl if your distro uses glibc. What you
should be worried about are all the other third-party libraries, which aren't
often written to be statically linked--because of namespace pollution, because
of slightly different semantics between dynamic and static code, especially at
application startup.

Fortunately, Linux supports packaging apps much like macOS bundles, where all
shared objects are kept under a relocatable subtree. When building a binary
that will be installed in bin/, for example, just use the following GCC flags:

    
    
      -Wl,-rpath,'$ORIGIN/../lib' -Wl,--enable-new-dtags
    

The first flag tells the linker to find libraries in the lib/ directory
adjacent to the binary file itself. The second flag tells the linker to try
LD_LIBRARY_PATH first rather than the embedded rpath. Debugging and regression
testing can be very difficult without the ability to use LD_LIBRARY_PATH, and
unfortunately LD_LIBRARY_PATH has lower precedence than embedded rpaths, thus
the need for --enable-new-dtags to change the default behavior.

Note that $ORIGIN is a special string that is expanded by the runtime linker,
not by the shell at compile time. It's an unfortunate choice of syntax.
Getting $ORIGIN to pass through to the compiler without being evaluated by a
shell can be tricky when dealing with recursive make and shell invocations,
such as from an RPM spec file.
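For instance, in a GNU makefile the usual trick is to double the dollar sign so make hands the shell a literal $ORIGIN, and to single-quote it so the shell leaves it alone; roughly:

      # GNU make: $$ becomes a literal $, and the single quotes stop the shell
      # from expanding it, so the string $ORIGIN/../lib reaches the linker intact
      LDFLAGS += -Wl,-rpath,'$$ORIGIN/../lib' -Wl,--enable-new-dtags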

Another example: when building a Lua module I'll use

    
    
      -Wl,-rpath,'$ORIGIN/../..' -Wl,--enable-new-dtags
    

because Lua modules are usually installed under $(prefix)/lib/lua/5.?/, two
directories below lib/.

~~~
rryan
Allowing LD_LIBRARY_PATH in production is a potential security risk.

~~~
Natsu
And then you might just end up using setcap on it anyhow, perhaps to bind
privileged ports. A binary with capabilities is treated like suid, so the
linker won't look at LD_LIBRARY_PATH, in which case you have to create a conf
file with your paths and feed it to ldconfig, after running ldd on the program
because you're confused about why it suddenly doesn't load its libraries.

And there's another fun security restriction in there where chown erases any
setcap permissions you had applied, just in case you had to adjust something
and didn't understand why you suddenly lost your setcap permissions.
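The dance ends up looking roughly like this (paths are made up):

      # file capabilities are treated like suid, so the linker ignores LD_LIBRARY_PATH
      sudo setcap cap_net_bind_service=+ep /opt/myapp/bin/myapp
      # register the app's private lib/ directory system-wide instead
      echo /opt/myapp/lib | sudo tee /etc/ld.so.conf.d/myapp.conf
      sudo ldconfig
      # and a later chown on the binary silently clears the capability again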

------
jph
> "If you develop on linux and run on linux and you’re in a deploy-as-source
> language, you may not care that much about repeatable builds."

If you develop in Elixir or Erlang, there's an additional issue that
containers (such as docker) tend to interfere with hot code reload.

There are work-in-progress solutions (such as Elixir Distillery) that
emphasize turning your app into a single package which can be deployed
anywhere.

Even if you don't use Elixir, you may enjoy seeing how the Distillery
packaging system works, and how hot code reload works.

A good intro is [http://www.east5th.co/blog/2016/12/26/deploying-elixir-
appli...](http://www.east5th.co/blog/2016/12/26/deploying-elixir-applications-
with-distillery/)

~~~
tie_
Hot code reload interferes with immutable deployments. I'd prefer the
immutability over hot code reloading.

~~~
tremon
Nope, hot code reload is the same thing. They've just moved the immutability
line even closer to the code.

------
olalonde
> What will it take to build this? 6 months x 3 smart devs who understand the
> linux kernel. So like $600k.

It seems like PaaSes are the new JavaScript frameworks[0]. Kubernetes now has
most of the "critical pieces" mentioned and Helm addresses the "rigid" config
criticism. It's not newbie proof yet but I doubt your 1.5 man-year project
would get anywhere close to where K8s is in terms of maturity and stability. Please
do your homework and consider contributing to an existing project before re-
inventing the wheel.

[0] [https://github.com/search?q=paas](https://github.com/search?q=paas)

~~~
TheSoftwareGuy
Do Linux kernel devs really make 6 figures on average??

~~~
callahad
The median salary for a software developer in the United States is six
figures: [https://www.bls.gov/ooh/Computer-and-Information-
Technology/...](https://www.bls.gov/ooh/Computer-and-Information-
Technology/Software-developers.htm)

------
zie
Nomad [0] does some of this already:

    
    
      * config file for resources.
      * It does log capture/delivery mostly.
      * it does chroot exec.
      * it handles network ports, file paths, etc
      * handles batch workloads.
    

It can't handle the load balancer piece directly, but if you use the consul
tie-in, you can make it work, since it does service discovery, and then
something like fabio [1] or traefik [2] can handle load-balancing.

[0] [https://www.nomadproject.io/](https://www.nomadproject.io/) [1]
[https://github.com/fabiolb/fabio](https://github.com/fabiolb/fabio) [2]
[https://docs.traefik.io/](https://docs.traefik.io/)

~~~
fred909
Using fabio with nomad and loving it.

------
DonbunEf7
Have you seen Nix? It can be used in this way, and its configuration language
is relatively good.

~~~
awinter-py
I love the idea of nix but haven't had a chance to try.

Can you post a link to how nix handles resource permissioning?

~~~
joshuata
A note about nix: I've found the language to be very obtuse and the
documentation out of date and fragmented. However, the worst sin has been the
attitude of several community members: insulting new developers, ignoring
valid questions, and self-promotion at the expense of civility. I love the
idea behind the system and like most of the community, but the very vocal
jerks really put me off.

~~~
dragandj
I just wish to report that I have had only good experiences communicating with
the Nix/NixOS/Nixpkgs community.

When someone could solve something that didn't work for me, they did. When no
one could, or no one had the time, they kept silent or politely tried to help
me find the solution myself. And all of it absolutely for free!

------
state_less
People like the idea of isolation that containers (LXC) provide. We don't have
to worry about stepping on toes; my container works with any number of other
containers running on the same host, not just sometimes, but every time.

The environment is moving fast, Kubernetes and Helm are on a tear. It's also
worth taking another look at cluster standup;
CoreOS+Vagrant, Tectonic, Minikube, and kubeadm are filling in the gaps.

~~~
awinter-py
minikube was not quite primetime when I tried it, but I saw release notes a
few weeks later that fixed everything I ran into. (IIRC it was ingress or DNS
problems).

~~~
alpb
Your impressions about Kubernetes sounded quite out of date when I read your
post. I strongly suggest you give things another try. Things move fairly fast
in this area.

------
xuejie
This reminds me of the awesome article by the founder of skyliner:
[https://blog.skyliner.io/the-happy-genius-of-my-
household-2f...](https://blog.skyliner.io/the-happy-genius-of-my-
household-2f76efba535a)

Google built Borg/Kubernetes because they own all the host machines, which are
super powerful, and they need multiple apps on one machine to cut costs. But
for the rest of us building apps, the different VMs provided by cloud offerings
such as EC2 already provide a nice way of resource isolation that is also
cheap, so why do we need to run multiple containers per machine? Suppose a
c4.4xlarge is too big for your app and, to be cost effective, you want to
leverage Kubernetes to run multiple apps on the same VM: why not just spawn
multiple c4.xlarge or c4.2xlarge VMs and run only one app per VM?

I admit this model won't fit us all, but for a significant number of us it is
already a solution as good as containers, and it works today without needing
to set up all kinds of fancy software-defined networks, which we never know
how long will last without signaling an issue.

------
colemickens
"It also gave me a duct tape feeling or batteries not included – like the
critical pieces (docker support, DNS, load balancing) weren’t integrated into
the original design."

I'm sorry, this article^H^H^H^H^H rant isn't worth reading. I don't understand
why people write these posts where they so poignantly display their complete
ignorance alongside their strongly held opinions. It's tacky and doesn't
reflect well.

Load balancing is literally built-in via kube-proxy, NodePort and/or
CloudProvider integration. Kube-DNS has existed for years (and is
intentionally removable, but not at all duct-taped on). And the claim that
Kubernetes didn't have Docker support... I mean, I literally don't even know
what to say to that.
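For what it's worth, something along these lines (deployment name and ports made up) gets you a load-balanced Service in one command:

      # kube-proxy routes the traffic; --type=LoadBalancer asks the cloud provider for an external LB
      kubectl expose deployment myapp --port=80 --target-port=8080 --type=LoadBalancer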

~~~
user5994461
These pieces were bolted on later, as an afterthought. They are critical and
should have been there from the start.

Many people have been forced into docker for a few years already, facing a ton
of critical features that are completely missing. kube is a brand new project
that only started to be usable recently; you seem to not take that into
account.

It doesn't help that marketing around containers is "over promise and under
deliver".

~~~
colemickens
> _These pieces were bolted on later, as an afterthought._

First, I don't know why you think this? Moreover, what are the practical
implications of the claim? Have you ever had a problem with DNS service
discovery in a Kubernetes cluster? Or else what is the point of bringing this
up at all? (Nor does this change the inaccuracy of the other claims)

> _Many people have been forced into docker for a few years already, facing a
> ton of critical features that are completely missing. kube is a brand new
> project that only started to be usable recently, you seem to not take that
> into account._

It's been 1.0 for well over a year and I still don't know what you're getting
at in regards to docker. Kubernetes is (more or less) powered by Docker (at
least historically, it's getting to a point where the container runtime is
pluggable via CRI/CRI-O).

> _It doesn't help that marketing around containers is "over promise and
> under deliver"._

Also no idea what this means. Kubernetes and containers have done everything
I've expected them to do. I usually see this comment from people who haven't
actually ever used them for more than a day and gleefully point to articles
talking about stale docker images with vulnerabilities, ignoring that nearly
the exact same problem happens with naive VMs + CM deployment methodologies.
Or otherwise think they're magic, when they're not.

Not seeing anything to change my initial conclusion here.

~~~
user5994461
Judging by the comment, you're the one who used Docker for a day or on nothing
critical. Maybe you're also lucky to have started with Kubernetes, maybe even
fully managed on Google Cloud.

I used it for 3 years at different companies with more or less hassle. The
route to containers has been full of dangers.

~~~
colemickens
> _You're the one who used Docker for a day or on nothing critical._

Not really, but thanks for coming for me instead of offering any substance
beyond hand waving.

> _Maybe you're also lucky to have started with Kubernetes, maybe even fully
> managed on Google Cloud._

I mean, sure, I think it's crazy to try to use Docker in Prod without an
orchestrator and don't think Swarm Mode is mature enough for such a use case.
And no, I don't use GKE.

Now that we've dropped anything that was really being discussed, let me agree
with you, Docker itself is not my favorite, but that has nothing to do with
containers or orchestrators in general and instead has to do with the QA of
Docker and the frequency of breaks and regressions.

~~~
user5994461
We both agree that orchestration is critical. Yet orchestrators are fairly
new; the docker hype was strong long before they were viable, or existed at
all.

You'll forgive me, but implying that one should just use an orchestrator
points to one ignoring their history (or lack thereof).

I have seen entire clusters taken out by a regression. Sadly, even the best
orchestration cannot do much better than the core it is running on. You can
google "docker in production" to get more substance.

~~~
colemickens
>I have seen entire clusters taken out by a regression.

I think we agree a lot actually :). Not sure Docker would call this one a
regression, but...
[https://github.com/Azure/ACS/issues/3#issue-201986759](https://github.com/Azure/ACS/issues/3#issue-201986759)
(that's me)

------
brango
I've started using K8s and so far so good. Minikube makes local dev easy.
However, I use K8s with ansible. So now I can:

* Build a fully baked container for local use

* Build a container that mounts my local filesystem for the code directory so I can have a sane dev-test cycle with hot-code reloading

* Mount a directory when running locally containing credentials for accessing Google Cloud services

* Spin up a K8s cluster in arbitrary GKE accounts, template and deploy my deployment and service accounts, pin some with external IPs, etc

* Push containers to GCR

* Deploy my containers wherever - shared directory ones can only run locally.

This is for an architecture with about 4 microservices and that will probably
grow.

Now I'm working on an ansible playbook to zip the code directory of one of my
microservices, upload it to GCS, then run a build container on my cluster to
build the docker image for my microservice and push to GCR from there so I
don't have to waste time pushing large containers up to GCR. Once this is done
I'll look at promoting containers through dev/test/prod environments since all
config will be done with env vars.

I've never read about using ansible with K8s, but to me it's a no-brainer.
Most people seem to cobble together bash scripts, but using the best of both
has really led to a good experience.

The benefit we hope to get is isolation and cost efficiency.

~~~
scaryclam
That's interesting. I'm using docker-compose for local development at the
moment. Do you have any experience with it and would you choose Minikube over
docker-compose?

~~~
cryptarch
You didn't ask me, but here you are:

I'm using docker-compose as well but I'm looking at moving to Minikube, so I
can more realistically test my K8s clusters before sending them down to
CI/staging.

I also don't want to maintain both docker-compose and K8s as I basically am
ops by myself and I have NaN tasks in my backlog.

------
djsumdog
So many docker containers I use at work just have a ton of apt-get or yum
statements (depending on a debian or centos base) and people rarely pin
dependency versions. I really feel for the "repeatable build" issue, not to
mention dependency rot.

If you want to keep things up to date, there are many example scripts out
there that either 1) check to see if your base container is out of date (if
someone updated jessie:8, you should rebuild) or 2) check if any packages
are out of date (sometimes limited to just security updates), meaning you run
apk, yum, apt, etc. within each container (or extract their package lists and
examine them outside the container).
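For the second kind of check, something roughly like this works for Debian-based images (image name made up):

      # simulate an upgrade inside the image and list the packages that would change
      docker run --rm myimage sh -c 'apt-get update -qq && apt-get -s upgrade' | grep '^Inst'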

You gain a lot in the isolation, but debugging can become more difficult and
you've introduced new areas of dependency rot.

I generally like containers, but hate how there are so many different
orchestration formats (k8s, marathon, nomad, swarm, etc.) many different
network layers (weave, flannel, etc.) and many different ways to package/fit
components together (CoreOS, DC/OS, etc.)

The ecosystem is messy.

~~~
kordless
I'd like to build something that attaches cryptocurrency payments to API
calls, basically a 402 payment required implemented with the Lightning
Network.

With such a system in place, pulling updates could be pinned to a particular
API call, with a particular payment address, which itself was attached to a
particular data store. In theory, as long as you had access to the wallet that
did the build before, you'd be able to do the same build at a later date.

I'm greatly simplifying this here, but my conclusion after getting a crypto
payment OpenStack instance launcher PoC working was that applying crypto
payments to deployment methods basically provided a lightweight federated
layer that allowed logically linking processes, similar to what you would get
with a Zapier/Apigee/IFTTT knockoff. Applied to microservice architectures it
would allow deployments to be both trustworthy and repeatable at later dates.

~~~
exabrial
JSON Web Token (JWT) is probably what you're looking for.

------
agermanov
"6 months x 3 smart devs who understand the linux kernel. So like $600k."

Show me that $400k/year linux kernel job.

~~~
user5994461
An employee costs much more than just the salary he receives at the end of the
month.

~~~
mgkimsal
Show me the $200k job with $200k of overhead instead then.

~~~
user5994461
Looks like any senior job at google/facebook/microsoft/apple.

150-200k in salary + 50-100k in stock + 50-100k for
health/401k/parental leave/food/office/hardware/concierge/gym + employer taxes
+ ...

------
vidarh
rkt seems to meet his "define resource access in a config file" requirement in
principle.

The separation in rkt between setting up the isolation and handling the images
or running of the code means that you can opt to launch a rkt image in
anything from an _actual_ chroot to a full container or even a full VM, or you
can write your own if you have weird needs (e.g. if you want to do a seccomp-
bpf type sandbox, you can do that).

E.g. I have deployments with etcd in a chroot (using rkt), and the rest
deployed as containers, which with rkt defaults to use systemd-nspawn, and
which means they have systemd running as the init in the container too and
logging goes straight to the journal just like everything else.
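Concretely, the isolation backend is chosen per run via the stage1 image; roughly (stage1 names from memory, app images made up):

      # "fly" stage1 is roughly a chroot; the default stage1 is an nspawn container; kvm is a full VM
      rkt run --stage1-name=coreos.com/rkt/stage1-fly example.com/etcd
      rkt run --stage1-name=coreos.com/rkt/stage1-kvm example.com/myapp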

Cron really depends on scale. For small deployments it's simple enough to use
cron or systemd plus a small wrapper that uses e.g. Etcd or Consul or even a
database table to arbitrate who gets to run, for jobs that shouldn't run more
than once. In practice, depending on exactly what you do, it's often
simpler/better to write the jobs so it doesn't matter if they're started more
than once anyway. I've seen too many nasty surprises in systems where people
just assumed the scripts would never overlap in time, only for something to
slow down processing until they do exactly that. Once you're doing that, all
you need to do to handle a smaller distributed system is to put your locking
mechanism somewhere shared.
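The Consul flavour of that wrapper is basically a one-liner (job name and script made up):

      # the command only runs while holding the lock, so at most one node executes it at a time
      consul lock locks/nightly-report /usr/local/bin/nightly-report.sh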

At scale it becomes more complex, but the overall principle still holds,
though you may want to separate out the scheduling, i.e. the various
distributed crons.

As for the load balancer, I agree it needs standardising, but it really is a
small enough issue for smaller setups that it's fairly easy to solve. I've
come to rely on setups that use Nginx and rewrite incoming hostnames to a
local hostname (e.g. foo.com => foo.com.service) and use resolvers in Nginx to
point them to a SkyDNS instance. That means all the container config needs to
do is set/delete Etcd keys to receive traffic for a given hostname.
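If I remember SkyDNS's etcd layout correctly (instance name, IP and port made up), registering looks something like:

      # SkyDNS answers foo.com.service with this backend for as long as the key exists
      etcdctl set /skydns/service/com/foo/web-1 '{"host":"10.0.1.12","port":8080}'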

But there are several load balancers out there now that can pull config
directly from a suitable shared datastore, be it Redis or Etcd etc.

------
jacques_chester
> _When I used kube in 2016 the cluster turnup support was bad; it seemed like
> you could use it managed in the G cloud but not anywhere else._

You might like to look at Kubo[1], which was jointly built by Googlers and
Pivots to make this more manageable. We built it based on requests from joint
customers who want BOSH as their infrastructure-level platform and the choice
of Cloud Foundry or Kubernetes for application-level platform.

[1] [https://github.com/pivotal-cf-experimental/kubo-
deployment](https://github.com/pivotal-cf-experimental/kubo-deployment)

edit: forgot my usual disclosure. I work on Cloud Foundry for Pivotal, though
not on Kubo.

~~~
colordrops
Are employees of Pivotal really called "Pivots"? Am I shallow for finding this
repulsive? Same with googlers for that matter.

~~~
twittertoss423
Twitter employees call themselves tweeps, which might be the worst of the
corporate self-nicknames.

------
scarface74
No real opinion about anything in the article except the use of config files.

After using Consul, I never want to go back to config files again.

[https://www.consul.io/intro/](https://www.consul.io/intro/)

~~~
mattbillenstein
Config files copied around by scp and edited by hand are a pita -- but checked
into source control as templates and staged to machines by your deploy,
they're the very simplest and most robust solution.

~~~
scarface74
Compared to going to your Consul web UI to change a value and having that
value automatically updated in all of the systems?

~~~
mattbillenstein
Actually yes, I want to version control that change and deploy it like I
deploy code. Not blindly fire changes that I can't track into the ether.

~~~
tehno
You could always disallow consul web interface updates and have `consul kv
put` scripts in SCM and validated/deployed automatically.
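i.e. the deploy step just replays the scripted values (keys made up):

      # values are reviewed in SCM like any other change, then pushed to the KV store on deploy
      consul kv put myapp/config/db_host db-1.internal
      consul kv put myapp/config/pool_size 25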

~~~
mattbillenstein
Yeah man, I could set up a quorum of consul boxes, integrate consul's API into
my application, figure out how to manage changes in that system, and then do
as you say -- make a change management system to apply deltas on deploy.

Or, I could put one more file on each box in addition to the other 100,000
files I put on there...

~~~
awinder
Obviously you wouldn't choose consul for this one feature, but if you're
already relying on consul as service discovery in your network, why not?

You don't need to invent a change management system, you just need to pick
from the myriad of existing consul OSS.

I'm not saying consul makes all the sense in the world for you, but there's
definitely sound architectural thinking behind what the parent is talking
about. It's not just a bunch of crazy talk :-D

------
noway421
>kind of like a fancy chroot

You might argue that containers are fancy chroots. They are just built from
scratch and packed each time.

------
oneplane
This exactly hits the problem with containers (and especially Docker and
Kubernetes, but to a lesser extent CoreOS's model as well). The crowd that
seems to like/use it tends to be the people who aren't responsible for
maintenance, security etc. and may not want to become the responsible party
either. It's often developers who are either simply a lone wolf or part of a
team that is big enough that release management falls on one or two people,
and 'the rest' just want to churn out code in 'their way' and have that 'way'
magically work in production too. Sadly, that is not how the world works.

You will always need some form of configuration management, release
management, infrastructure and operations, no matter how compartmentalised and
'serverless' the software coders think they are. Right now, the idea seems to
be that if you stick your code and practically a complete user land in a
container, you don't have to do those things, as if old style deployments were
the reason for those practises to be in place. Obviously, it is the other way
around, and by trying to bypass them, you will run into the exact problems
they are meant to prevent. (i.e. not knowing what is available/what is
released, not knowing the configuration within the infrastructure on a global
scale, not knowing the side-effects of two opposing configurations because you
have no management for that, not knowing what versions are used because it is
spread out over different/many unmanaged files inside containers somewhere,
not knowing what is secure/patched, not knowing what else is influencing your
environment, no control over parameters that should be globally overridden...)

We take a different approach: you get to write your software, and may only
configure requirements within its context. Those have to be defined in a
SaltStack Formula with sane defaults, that has to run in a local environment,
but also DTAP-style chains of environments with no changes to the app. You
don't get a say in what it will run on, but your required facilities will be
available. If it doesn't pass automated testing (i.e. sticking it in a VM,
running a highstate and expecting a working app) it doesn't get passed to
DevOps and you're stuck until you've fixed your stuff. With regard to scaling,
we make sure that a few calls are always available to the app and/or its
formula, so that we can register where it is and connect it to the correct
balancing pools (i.e. incoming requests, but also FS, DB, queue pools). When
it's up, it registers, and as long as the automatic health checks pass it gets
requests sent to it. If the load is too high, we scale up, if it's been idle
too long it scales down. But the app/software/code isn't aware of that nor
does it need to be, and using the reactor system, we don't need 'extra'
management software to make that happen. You set your KPIs and health checks
where you need them, just like you would on any other setup, and you're
golden.

~~~
awinder
Your last paragraph could clearly be implemented in k8s / etc. but the
defining characteristic is that you've built organizational controls to
protect & optimize a distributed system. And the main problems you bemoan in
the first paragraph are issues of, again, organizational control & maturity.

It's almost, like, technologies are just tools that can be either applied or
misapplied to a given domain, for better or for worse. Huh.

~~~
oneplane
It's obviously all about implementation and application, but instead of always
reading 'Docker is the solution' it would be nice if it were listed as 'just
another tool in your toolbox'. The problem I personally have with this and
some other 'emerging' technologies is that they are touted as a 'replacement'
for Ops, management, or infrastructure, while in practice they're obviously not.

------
discordianfish
I think the higher level goals are pretty much aligned with those of
kubernetes. That it uses 'containers' to achieve this is just a side effect.

And it's surprisingly flexible! You don't need any overlay networking; you can
just define an IP range and it will 'just work' as long as your nodes are in
the same ethernet segment. If you grow to more than, let's say, 500-1000 pods,
you need some routing but still can do without overlay networks.

I'm currently working on a project where I use the kubelet standalone. I can
give it a pod manifest and it will make sure the specified containers are
running. This doesn't need any cluster components or data stores and is
already useful for tiny use cases. Also using this now for my tests instead of
docker-compose.
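Roughly (flag from memory, using the conventional manifest directory):

      # no API server or etcd; the kubelet just keeps the pods described in this directory running
      kubelet --pod-manifest-path=/etc/kubernetes/manifests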

------
hosh
Isn't resource access like that found in CloudFoundry or Chef Habitat?

Isn't load balancer standardization found with Kubernetes Ingress? Though it
doesn't let you select the lb algorithm.

Isn't cron found in Kubernetes Cron Jobs?

------
exabrial
That's sort of what systemd could have been, before it tried to do everything.

KISS is not something the systemd designers seem to understand, nor is
modularity.

------
matt_wulfeck
> _my iterative dev setup often isn’t that similar to my ‘under docker’ setup_

This seems like a strange complaint to me, because it's one of the best things
about iterating with docker. Make a change, do a build (super quick because of
fs layers) then run it. If it works, ship it!

Not only that, but now I can do the same thing in Windows. It's a wonderful
time to develop linux applications.

~~~
lobster_johnson
I don't know about the OP, but a lot of people rely on hot code reloading to
see live changes, be it a Node.js app or a React UI. The Docker build workflow
is _way_ too slow for that. When I work on React apps, I expect the UI to
seamlessly reload every time I save a file.

You can accomplish this perfectly well with Docker, of course, though you have
to use volumes and so on. Docker for Mac's mounting support has
inotify/fsevents on both sides, though the volumes are not fast. (Fast enough
for a React app, but not enough for a Postgres database.)
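Something roughly like this (image, port and command are just examples):

      # bind-mount the source tree so the dev server inside the container sees every save immediately
      docker run --rm -it -p 3000:3000 -v "$(pwd):/app" -w /app node:6 npm start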

------
EGreg
_" What will it take to build this?

6 months x 3 smart devs who understand the linux kernel. So like $600k."_

Really? $600k for three smart developers for 6 months?

So a smart Linux kernel developer costs $400k per year all-in?

How about various web and node.js developers LOL. Where are the other fields
with comparable costs?

------
moondev
kubernetes + vault is a powerful combo.

------
apeace
Everything the author describes sounds like docker-compose to me. Combine that
with Convox and you've got a powerful, easy-to-use cloud platform.

~~~
educar
Can convox do docker compose?

~~~
apeace
Yes, the point of Convox is that it deploys your docker containers to AWS
using your docker-compose config. It sets up a VPC for you, ELBs, autoscaling
group, private docker repository, the whole deal :)

------
leecarraher
I like this concept, but maybe add sandboxing too.

------
alanfranzoni
Something like Ubuntu MAAS? [https://maas.io/](https://maas.io/)

------
unexistance
wisdom from (not-so-distant) past

[https://news.ycombinator.com/item?id=13262045](https://news.ycombinator.com/item?id=13262045)

[https://news.ycombinator.com/item?id=12804185](https://news.ycombinator.com/item?id=12804185)

------
exawsthrowaway
I'd be interested in a project like this if it were targeting FreeBSD.

------
siegecraft
What's wrong with strong config and deploy primitives + containers?

------
educar
So you are asking for cloudron?

------
bloaf
Wouldn't all of this be provided by the Plan9 paradigm?

~~~
paulddraper
No, I think it's unikernels that solve all problems.

------
mbrumlow
I just want bare metal servers. I really don't get the craze over containers.

EDIT: spelling, sorry bears, no metal for you.

~~~
devrandomguy
Anyone who serves metal to bears should be put in a container.

~~~
mbrumlow
bare :(

You know, the entire time I kept telling myself "don't type bear, don't type
bear", and then my wife distracted me and I typed bear.

------
anonyfox
Am I the only one who sees the container stuff as an intermediate stepping
stone until AWS Lambda (or similar offerings) can be applied to every coding
problem?

~~~
devrandomguy
> ls $home

No network connection.

> sudo ifup eth0 && ls $home

AWS coreutils: unable to connect to target filesystem host. Please open port
22 and ensure that the aws-guest user has full access to the filesystem.

> find -name aws -exec rm -rf {} +

> sudo apt-get install coreutils -y

