
Show HN: Nedoka – rapid creation and management of containerised environments - SmurfJuggler
https://github.com/stackscape/nedoka
======
SmurfJuggler
Demo video (no audio):
[https://www.youtube.com/watch?v=B6yD3Zo3LlY](https://www.youtube.com/watch?v=B6yD3Zo3LlY)

I'm forever jumping around between different stacks and I found myself really
hurting for something that would streamline the process, so I made it. It's
not for running production services and it's not polished, but it's very
powerful.

It comes with a combined bootstrap and control script which has been tested
with Ubuntu 16.04 and macOS. Windows and other distros will follow, but it
should work on anything x86 which runs Vagrant and VirtualBox if you do some
manual setup - the README in the repo gives a kind-of-detailed/kind-of-hand-
wavy explanation of how to set it up manually.

I should highlight up front that this was made as both a learning exercise and
to solve problems I experienced specifically when working on isolated
container ecosystems. It was built out organically so there's a lot of tidying
up to be done, and I've cut corners here and there to get it out the door.
There's no scaling or clustering, and you shouldn't attempt to run production
services on it. You _can_ enable an external interface in the Vagrantfile to
present the containers on your LAN, but you'd be wise not to expose it to the
internet in its current state. I aim to move it towards something tidy and
secure which exports production-ready containers and stacks in the future, but
for now, it is what it is.

I also haven't included a license yet - I wanted to ask you guys for opinions
on which one to run with. I've read loads about the various licenses, but I'd
like to see some discussion of the pros and cons from people with experience
before making a decision.

Anyway, with that all out of the way...

Nedoka lets you control containers, stacks, segments and groups of stacks with
painfully straightforward syntax. Your container landscape is defined in YAML
files anywhere within a config directory tree, and specs can be wired up
however you want, regardless of where they live. Any spec can be a container,
a stack, both or neither, and can have children which inherit and extend its
settings. Stacks, containers and partials can declare dependencies which come
up when required, can include other containers and stacks, and can be merged
together to create new things.
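To make the composition model concrete, here's a rough sketch of what a spec
tree along these lines could look like. The key names are my own guesses, not
Nedoka's actual schema - see the examples in the repo for the real format:

```yaml
# Hypothetical sketch only - key names are illustrative, not Nedoka's schema.
webapp:              # a spec that is both a container and a stack
  image: nginx
  depends_on:
    - appdb          # dependency; comes up automatically when webapp does
  children:
    webapp-dev:      # child spec: inherits webapp's settings and extends them
      env:
        APP_ENV: development
blog:
  merge:             # merge existing containers/stacks/partials into a new thing
    - webapp
    - appdb
```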

It sounds a bit wild, but it's all very straightforward in practice: the end
result is that you can wire any number of arbitrarily complex builds and
environments together however you want, and control any portion of it - or
the entirety - as easily as you can a single container.

Aside from letting you mash configs together to create all manner of
ridiculous superstructures, there's a lot more going on. Tags anywhere in your
specs will generate and/or retrieve Vault/Consul-based secrets, create and
switch users portably, trigger actions on the host or other containers, split
layers, inject cache busters, pull environment variables and so forth. Specs
can have onbuild, onrun, post-start, pre-backup, post-restore, host post-run,
host post-destroy, and arbitrary (and extremely handy) helper scripts defined
inline, and can be accompanied by file structures and/or database dumps which
are injected into the resulting builds automatically at baseline.
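The hook names above are real; how they sit in a spec is my own hypothetical
sketch, so treat the YAML shape as a guess:

```yaml
# Hypothetical layout - hook names are from the post, the structure is a guess.
api:
  onbuild: |         # runs during the image build
    apt-get update && apt-get install -y curl
  post-start: |      # runs after the container comes up
    ./wait-for-db.sh
  helpers:
    seed: |          # arbitrary helper script, invoked on demand
      psql -f seed.sql
```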

When containers come up, .test DNS records are created on split views for
traffic kicking around within the VM and over a host-only interface. If you
uncomment the external interface in the Vagrantfile, containers with endpoints
will be presented over a shared IP with reverse proxying where relevant. If
you enable ipvlan in the settings and provide some basic network config,
containers with defined service endpoints will automatically join an ipvlan
bridge where they'll act as distinct endpoints on the LAN.
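For the ipvlan case, the "basic network config" might look something like the
fragment below - every key name here is a guess on my part, not Nedoka's
actual settings file:

```yaml
# Hypothetical settings fragment - key names are illustrative only.
network:
  ipvlan: true            # join containers with endpoints to an ipvlan bridge
  parent_interface: eth1  # LAN-facing interface on the host
  lan_subnet: 192.168.1.0/24
```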

Specs can specify service endpoints and port forwarding. HTTP endpoints are
automatically reverse proxied. HTTPS endpoints will be reverse proxied and a
certificate will be created and signed by a local CA.
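A hypothetical sketch of how endpoints and port forwarding might be declared
in a spec (field names are illustrative, not the real schema):

```yaml
# Hypothetical sketch - field names are illustrative.
shop:
  endpoints:
    - https://shop.test   # reverse proxied; cert created and signed by the local CA
  ports:
    - "2222:22"           # plain port forwarding
```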

A basic database abstraction layer handles the creation, backup and
destruction of Postgres, MySQL and MariaDB databases and users - add a
database to a spec and it will be created automatically and backed up/restored
alongside the container. Databases are treated as persistent, but can be
destroyed or restored on demand. Environment variables for accessing each
database are automatically injected into the container it's defined in.
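Sketching that in spec form - again, the keys and the env var names are my
own hypothetical illustration, not what Nedoka actually generates:

```yaml
# Hypothetical sketch - keys and variable names are illustrative.
api:
  databases:
    - type: postgres
      name: api_db    # created automatically; backed up/restored with the container
# Connection details (e.g. something like API_DB_HOST / API_DB_PASSWORD)
# would then be injected into the api container as environment variables.
```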

Backups of containers or stacks can be taken manually, and one is taken
automatically when a container is destroyed. Restores happen just as
transparently on run, but you can baseline or roll back to earlier restore
points whenever you want.

It covers a lot of ground, and I'm sure for some people it will break in weird
and wonderful ways (you can generally just restart supervisor or the VM, but
there are bound to be bugs I haven't come across). There are also some obvious
pain points: renaming a container currently means manually renaming and
repacking its backups, there's no API or web frontend yet, and it needs a lot
of polish - "but other than that it's fine" :)

I'm very fond of Nedoka, not just as the creator but as an end user too. I
can't wait to see what people make with it.

