
Ask HN: What's your dev environment setup look like? - andreygrehov
I know this has been discussed a lot, but things change rapidly.
I currently use a combination of Vagrant + local services that I run manually when required. Some others on the team use Docker, which I personally haven't touched for quite a while. I'm looking to standardize the dev setup in our organization and am curious what the best practices are in 2017. I'm pursuing a few goals:

- an easy way to spin up the entire infrastructure locally (say, for new hires);

- an easy way to share new services with the team;

- the environment should mimic staging/production as much as possible.

I'm leaning towards a Vagrant VM + Docker containers inside of it. In this case we can share a single Vagrantfile across the team, which to me solves the first two goals. Running multiple containers in the VM solves the third goal.

I sometimes hear that Vagrant is a useless layer in this setup, but I'm not sure.

Are there best practices today? What does your setup look like?

P.S. We use Chef to manage non-local environments. It could be nice to have a single tool for all environments (local, staging, prod, etc.), but I'm not a DevOps engineer, so I'm not sure whether that's a handy practice.
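The Vagrant VM + Docker combination described above can be sketched with Vagrant's built-in Docker provisioner, so a single Vagrantfile both creates the VM and starts the containers. This is only a sketch; the box name, memory size, and `redis` container are hypothetical stand-ins for your own base image and services:

```ruby
# Vagrantfile -- one shared VM for the whole team; Docker runs inside it.
# Pick a box matching your production OS (the box below is a placeholder).
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096
  end

  # Vagrant's built-in Docker provisioner installs Docker in the guest
  # and can pull and run containers declaratively.
  config.vm.provision "docker" do |d|
    d.pull_images "redis:3.2"
    d.run "redis", image: "redis:3.2", args: "-p 6379:6379"
  end
end
```

A new hire then only needs `vagrant up` to get the whole stack, which addresses the first two goals in one file.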
======
dozzie
> - an easy way to spin up the entire infrastructure locally (say, for new
> hires);

DEB/RPM packages, along with the source packages (they make preparing the OS
for development easier).

> - an easy way to share new services with the team;

DEB/RPM packages.

> \- the environment should mimic staging/production as much as possible.

DEB/RPM packages, inside a container or VM that runs the same OS as on
servers.
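One way to realize "the same OS as on the servers" is to install the internal package inside a container built from the production OS release. A sketch, assuming a Debian-based production fleet; the internal repo URL and package name are hypothetical:

```shell
# Start from the same OS release that runs in production
# (image tag, repo URL, and package name are placeholders).
docker run -it --rm debian:8 bash -c '
  echo "deb [trusted=yes] http://apt.internal.example/ stable main" \
    > /etc/apt/sources.list.d/company.list
  apt-get update
  apt-get install -y company-service
'
```

The same package that apt installs here is the one that ships to staging and production, which is what keeps the environments aligned.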

Don't make your environment overly complicated. You can safely use tools that
have been on the market for a dozen years; they haven't aged.

~~~
andreygrehov
Interesting. So the idea is to package all the internal services/GitHub
repositories/etc. and then share them all via an internal PPA repo?

~~~
dozzie
Something like that.

Building RPM and DEB packages is quite easy and very robust if you have a sane
build system and don't download random things from the internet during the
build.

If you do download things, you need to invest some time into packaging them
(maybe into one large bundle, if you've managed to create a fractal of
dependencies; such packages are atrocious if you want to publish them, but are
OK for internal use), so that you never hit the network while building the
package. This gives several benefits: control over what is actually installed,
resistance to software disappearing from upstream repositories, and a
repeatable installation process (dependencies don't magically change version
between two installations).
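A common way to guarantee the build never hits the network is to vendor dependencies into the source tree ahead of time. A sketch for a hypothetical Python service (the `vendor/` directory name is an assumption, not a convention the commenter mentions):

```shell
# One-time, online step: snapshot every dependency into the source tree,
# then commit vendor/ alongside the code.
pip download -r requirements.txt -d vendor/

# During the package build, offline: install only from the snapshot.
# --no-index forbids PyPI, so a missing vendored file fails loudly.
pip install --no-index --find-links=vendor/ -r requirements.txt
```

The same pattern works with any ecosystem that can resolve from a local directory instead of a remote registry.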

The resulting packages are very fast to install (you don't recompile modules
on every single deployment, and you don't need to install compilers, which can
be heavy), and they can pull in other OS packages with the necessary software.
Even the development process gains a little, because you have a canonical
place for build-time dependencies, so setting up a workstation for developing
a service is easier.
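Pulling in "other OS packages with necessary software" is just the `Depends` field of the binary package, and the "canonical place for build-time dependencies" is `Build-Depends`. A hypothetical `debian/control` for an internal service (all names and versions are placeholders):

```
Source: company-service
Section: misc
Priority: optional
Maintainer: Platform Team <platform@example.com>
# Needed only on build machines and developer workstations:
Build-Depends: debhelper (>= 9), python-all-dev

Package: company-service
Architecture: any
# Runtime dependencies apt resolves automatically on install:
Depends: ${misc:Depends}, ${shlibs:Depends}, python, redis-server
Description: Internal service packaged for repeatable deployment
 Installs the service and pulls in everything it needs from the OS repos.
```

`mk-build-deps` (or `apt-get build-dep`) can then read `Build-Depends` to prepare a workstation in one step.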

~~~
andreygrehov
Say I have a new hire. From my understanding, setting up their environment
would come down to `add-apt-repository company-repo` and then `sudo apt-get
install company-infra`. Does that sound about right?

I guess in this case I would have to keep the services running on local ports
of the VM?
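The two-command onboarding described above works if `company-infra` is a metapackage whose only job is to depend on every service package. A sketch; the repo URL and package names are hypothetical:

```shell
# Inside the shared VM (or any machine running the matching OS release):
sudo add-apt-repository "deb [trusted=yes] http://apt.internal.example/ stable main"
sudo apt-get update

# company-infra is an empty metapackage that Depends: on every service,
# so this one line pulls in the whole stack.
sudo apt-get install company-infra

# Each service then runs under the OS init system on the VM's local ports:
sudo service company-api status
```

Adding a new service to everyone's environment becomes a one-line change to the metapackage's `Depends` list.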

