
Ask HN: How do you manage on-prem deployments? - MorganGallant
I've recently started a little home-lab with a few refurbished Dell rack servers, and wanted to ask about other people's experiences with managing software deployments. Each server runs Debian, and I want to be able to automatically deploy & run new code in a clean, easy-to-manage way. Ideally, each server would run identical software.

A few ideas I had:

- https://equinox.io is a good option - $29/mo for automatic release channels, deployments, etc.

- Periodic clone from GitHub is another solution here - every x minutes, clone, build and replace if needed. This works, but can lead to some annoyances.

- I'd guess the simplest way is to write a small script which copies the binaries over to all the machines, then restarts the servers (rough sketch below). This is fine I guess...?

Has anyone else worked on something similar? How did you / do you automatically update the binaries running on your on-prem servers?
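For reference, the naive version of that third option - with made-up hostnames, binary path, and service name - would be roughly:

    #!/usr/bin/env bash
    # Naive push deploy: copy the binary to every host, then restart the service.
    # Hostnames, binary path, and service name are placeholders.
    set -euo pipefail

    HOSTS="rack1 rack2 rack3"
    BIN=./build/myapp

    for h in $HOSTS; do
        scp "$BIN" "deploy@$h:/tmp/myapp.new"
        ssh "deploy@$h" 'sudo install -m755 /tmp/myapp.new /usr/local/bin/myapp && sudo systemctl restart myapp'
    done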
======
moondev
Same way it's done in the cloud. Immutable infra via virtual machine images.

vSphere is the fabric that connects everything; Packer builds the machine
images, and tf/govc deploys them.

To be honest, though, these days it's more for standing up K8s clusters;
workloads are managed via kube manifests rather than directly at the VM level.
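A sketch of that image pipeline (template file and VM/template names are
illustrative):

    # Build the machine image from a Packer template, then clone it out with govc.
    packer build debian-base.pkr.hcl

    # govc picks up connection details from GOVC_URL / GOVC_USERNAME / GOVC_PASSWORD.
    govc vm.clone -vm debian-base-template -on=true app-node-01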

~~~
MorganGallant
I've never been a huge fan of K8s - whenever I've used it in the past, it's
always made my deployments harder, just because of all the overhead from K8s
itself. Perhaps in a micro-service architecture spread across a large cluster
it would be great, but it's probably a little overkill for something like
this, where I need to auto-deploy a single binary onto multiple machines.

That being said, K8s is definitely an amazing technology, and I will continue
to follow it and other similar initiatives. Some of Kelsey Hightower's talks
on K8s are really well done and do a great job of showcasing its potential!

------
shoo
> I'd guess the simplest way is to write a small script which copies the
> binaries over to all the machines

I do roughly this. The custom application is rigged to run as a service on
Debian, managed by systemd - i.e. the path of least resistance: the app is
run in the idiomatic way for the operating system. I have an Ansible playbook
that copies over a new version, ensures all dependencies are installed via
apt, runs admin tasks to update anything that needs updating in the database,
then restarts the service to pick up the new version of the application code.
It's a bit sloppy and doesn't automate everything, but it does automate the
common path: rolling an application code change out to an existing deployment.
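A stripped-down version of that playbook looks something like this (host
group, package, paths, and service name are all placeholders):

    # deploy.yml - push a new build and restart the service.
    - hosts: appservers
      become: true
      tasks:
        - name: Ensure runtime dependencies are installed
          apt:
            name: [libpq5]
            state: present

        - name: Copy the new binary into place
          copy:
            src: build/myapp
            dest: /usr/local/bin/myapp
            mode: "0755"

        - name: Run pending admin/database tasks (placeholder command)
          command: /usr/local/bin/myapp migrate

        - name: Restart the service to pick up the new version
          systemd:
            name: myapp
            state: restarted

Run with 'ansible-playbook -i hosts.ini deploy.yml'.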

------
user_agent
Well, Ansible is exactly the kind of software that does this; it was created
to address this problem from the start, and it's an industry standard. You
define how your environment should be configured in a declarative way, and
Ansible connects to your machines via SSH and does the job for you; it can
also run periodic tasks, cron-style. It's a very useful tool for anyone who
has anything to do with bare-metal machines.

You can always go with Bash / Python scripts, but hey, Ansible is really
cool! I'd give it a try even for small things. I use it to manage my home
Raspberry Pi cluster.
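For example, with a tiny inventory file, one-off commands fan out to every
host over SSH (hostnames invented):

    # hosts.ini
    [pis]
    pi1.local
    pi2.local
    pi3.local

Then:

    # Check connectivity, then upgrade packages on every host.
    ansible pis -i hosts.ini -m ping
    ansible pis -i hosts.ini -m apt -a "upgrade=dist update_cache=yes" --become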

None of the above frees you from knowing how your operating systems work,
BTW! That's still a must.

Have fun!

~~~
user_agent
Plus, some of what you've mentioned, @OP, is begging to be packed into Docker
containers!
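For a single static binary, the Dockerfile can be tiny (binary name is a
placeholder):

    # Dockerfile - wrap one static binary in a minimal Debian image.
    FROM debian:stable-slim
    COPY myapp /usr/local/bin/myapp
    ENTRYPOINT ["/usr/local/bin/myapp"]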

------
asguy
I'd recommend you learn to use Debian's ecosystem for packaging and
distribution. We build binary packages out of our source tree using
dpkg-buildpackage and run our own set of apt repositories managed by aptly.
We control all of our own signing/key infrastructure and install it on
machines just like the rest of the OS.
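Once the signing key exists, the aptly side is only a few commands (repo
name, distribution, and package file below are examples):

    # Create a local repo, add a built .deb, and publish it (signed with your key).
    aptly repo create -distribution=stable -component=main internal
    aptly repo add internal ../myapp_1.2.3_amd64.deb
    aptly publish repo internal

Clients then point apt at the published tree and install/upgrade like any
other package.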

~~~
secondcoming
We work in a similar fashion. I find the art of creating a Debian package
from scratch tedious, though.

~~~
MorganGallant
Are there any resources you've used in the past to do this? I'm curious what
the process is for making / distributing packages via apt. I've always been a
fan of services that can be installed with a single apt-get and that also
automatically set up the systemd service. That's a super awesome combo.
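I.e. the end-user experience I mean (package name invented):

    sudo apt-get install myapp    # pulls dependencies, installs binary + unit file
    systemctl status myapp        # service already enabled and running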

~~~
secondcoming
We have a Jenkins job that builds our code from GitHub, runs the tests,
builds the package and uploads it to our private JFrog repo. Then we have
another job that builds VM images (GCP) by installing the package from JFrog
along with all its dependencies. One final job kicks off the rolling upgrade
of the live machines.

The hardest part is getting all the debian/* files right. Recently, I tried to
convert a 3rd party tar.gz into a .deb but failed. The documentation isn't
great. I think someone on our team mentioned using fpm, but I've not tried it
personally.

I think our command is just 'dpkg-buildpackage -us -uc $@', but like I said,
prepping all the required files for that to just work isn't my area of
expertise.
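For what it's worth, the minimal debian/ skeleton that dpkg-buildpackage
needs is small - it's getting the details right that bites. An illustrative
stub:

    # debian/changelog  - package name, version, target distribution (see dch(1))
    # debian/control    - Source/Package stanzas, Build-Depends, Depends
    # debian/rules      - executable makefile; with debhelper it's just:
    #
    #     #!/usr/bin/make -f
    #     %:
    #     	dh $@
    #
    # then, from the source tree:
    dpkg-buildpackage -us -uc

And if fpm pans out for repacking that third-party tarball, it's meant to be
a one-liner along the lines of 'fpm -s tar -t deb -n thirdparty -v 1.0
thirdparty.tar.gz' (untested; name and version invented).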

