

Deploying code with packages - balou
https://synack.me/blog/deploying-code-with-packages

======
grosskur
Packages are great because they simplify automation. Once you've got a package
built and uploaded to a repository, you can install it across a large fleet of
machines with one line of Chef or Puppet code.
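
For illustration, that one line might look like this in Puppet (the package name `myapp`, the version, and an already-configured repo are assumptions, not from the post):

```puppet
# Pin the fleet to a specific package version; the apt/yum repo
# serving `myapp` is assumed to be configured elsewhere.
package { 'myapp':
  ensure => '1.2.3-1',
}
```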

There are some pitfalls, however:

* It can be time-consuming dealing with the arcane details of Debian package metadata or RPM spec files. If you're deploying your own application code, you're likely better off using fpm to generate a package from a directory tree:

[https://github.com/jordansissel/fpm](https://github.com/jordansissel/fpm)
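
A minimal sketch of that workflow (the app name, version, and paths are made up, and the fpm invocation is guarded so the script still runs on machines where fpm isn't installed):

```shell
#!/bin/sh
set -e

# Stage the application into a throwaway directory tree that mirrors
# the final filesystem layout (here: everything under /opt/myapp).
mkdir -p /tmp/myapp-stage/opt/myapp
echo "hello" > /tmp/myapp-stage/opt/myapp/app.txt

# Turn the directory tree into a .deb. -s dir = source is a directory,
# -t deb = target format, -C = chdir into the staging root first.
if command -v fpm >/dev/null 2>&1; then
    fpm -s dir -t deb -n myapp -v 1.0.0 -C /tmp/myapp-stage .
else
    echo "fpm not installed; skipping package build"
fi
```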

* If you have a complex stack, e.g., a specific version of Ruby and a large number of gem dependencies, you should avoid trying to separate things into individual packages. Just create a single Omnibus-style package that installs everything into a directory in /opt:

[https://github.com/opscode/omnibus-ruby](https://github.com/opscode/omnibus-ruby)

* Maintaining build machines and repository servers takes ongoing effort. Shameless plug: this is why I created Package Lab, a hosted service for building packages and managing repositories. It's currently in private beta and I'd love feedback:

[https://packagelab.com/](https://packagelab.com/)

------
onli
Yeah, sure. Because it is so easy to build packages, as the shortness of the
article and the number of commands involved prove, it is surely a fast and
good way to produce those packages to deploy your code.

For reference, it is not. Packages solve a different problem, and he even
says so: well-made packages with dependencies enable everyone to use the
software, regardless of the system involved, given some constraints. They
don't need to be fast to build and they don't need to be easy (as much as I
would like them to be), because they are built by specialists in a lengthy
process.

But if one deploys code on a system, we know a bit more about the system than
"it is a computer". Maybe it is a standardized production instance, maybe it
is a VM - in any case, we have direct access. So it is possible to use easier
and faster methods to deploy code directly, without having to resort to arcane
voodoo.

If you really want to use debs for deployment, at least use checkinstall and
handle the dependencies manually. Then you need at most 3 commands
(./configure, make, checkinstall).
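
That three-command flow, sketched as a dry run that prints the commands rather than executing them (it only makes sense inside a real source tree, and checkinstall usually needs root; swap `run` for direct execution to use it for real):

```shell
#!/bin/sh
# Record and print each command instead of running it.
run() { CMDS="$CMDS$* ; "; printf '+ %s\n' "$*"; }

CMDS=""
run ./configure
run make
run checkinstall --default   # builds/installs a .deb in place of `make install`
```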

------
lamby
I've done this for a few years, and once it's all set up the integration with
the underlying system is absolutely wonderful. In particular, your app is
"just" another package - there's no magical special-casing you ever need to
think of.

You can also make your app quite modular - you can build multiple binary
packages from one source, which is perfect for different server roles that
share a lot of code or configuration.

The only drawbacks are the fair amount of knowledge you need to share within
your team, as well as quite a bit of machinery needed to get everything up and
running once you move beyond a single "dpkg -i"-able .deb (some sort of APT
repo, signing keys, blah blah).

------
datr
How does this work with:

1) Clusters of application servers, where I will only want operations on
shared resources to fire from one of the servers? E.g. database updates,
shared file changes, etc.

2) When I want to deploy the code to a different location on the server so
that I can have multiple versions of the application available? Do I have to
spin up new servers for each version?

3) You mention roll back by just specifying an earlier package but I don't see
how this would work with stuff like database changes either.

~~~
mbreese
You're overthinking this... None of these are specific to this type of
deployment. If you want to do any of the above, you'll have to work around
whatever production deployment solution you have.

1) How would you do this for _any_ deployment method? You'd have to be able to
ID one specific node in the cluster or make sure that your job could only run
on one node with some sort of locking.

2) If you're deploying multiple versions of an application _in production_,
you have other issues.

3) How would you rollback database changes to begin with? Come to think of it,
how are you going to deal with any DB schema changes? This isn't just an issue
with .deb/.rpm deployments of code - you'll have to figure this out for any
application deployment.
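
For (1), one common approach on Linux is an exclusive lock so the shared-resource step fires exactly once - a sketch using `flock` (the lock path and migration command are made up, and for a real cluster the lock file would have to live on storage all nodes share, which has its own caveats over NFS):

```shell
#!/bin/sh
# Only the process that wins the lock runs the one-time step; -n makes
# the losers fail immediately instead of queueing behind the winner.
LOCKFILE=/tmp/myapp-deploy.lock

if flock -n "$LOCKFILE" -c 'echo "running one-time migration"'; then
    echo "this node ran the shared-resource step"
else
    echo "another node holds the lock; skipping"
fi
```

In a package-based deploy this would typically sit in the postinst script, so every node installs the package but only one performs the migration.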

~~~
datr
Well, I suppose I mention some of these as the article likens package
deployment to Capistrano in the opening paragraph.

Using Capistrano, for (1) I can just mark a target server as the update server
to run these commands. With packages, I assume I either have to have a
different version of the package or rely on some sort of environment
variable which the package reacts to?

2) Yep that would be slightly ridiculous for production, but I was thinking
more along the lines of UAT, staging and other shared testing/development
environments. It's not ideal but I have been in the situation where this was
required.

3) True, but with Cap I can just issue a rollback and it will revert to a
database snapshot that was taken from the last version. I can't see how we
could do something like this with packages alone.

Maybe you mean that these problems are outside the scope of packages and that
I should be using packages with something like Capistrano to solve them, but
then why wouldn't I just use Capistrano on its own?

------
ABS
An (ex) colleague of mine blogged about the valid reasons behind this some
time ago: [http://www.thoughtworks.com/insights/blog/deploy-package-not-just-tag-branch-or-binary](http://www.thoughtworks.com/insights/blog/deploy-package-not-just-tag-branch-or-binary)

------
olgeni
"Just avoid Debian, and everything else related to .deb packages" seems a
fitting solution to me.

(even more so after rewriting Erlang packages to get something that a) works
and b) is not stale)

------
balou
Very wordy, but plenty of useful points!

------
cauliturtle
Opened my mind on deployment.

