
LXC – Running 14,000 tests per day and beyond - cleverjake
http://codeascraft.com/2013/09/23/lxc-running-14000-tests-per-day-and-beyond-part-1/
======
arohner
If you're interested in having this kind of setup for your own team, without
having to muck with servers, you can use something like
[https://circleci.com](https://circleci.com). We use LXC, and pay lots of
attention to having fast I/O. [disclosure: I'm founder/CTO of circle]

~~~
thejosh
Stupid question, do you also support bitbucket?

~~~
arohner
Not yet. We're github-only at the moment.

~~~
malandrew
I love github and have never used bitbucket or any other service, however I'm
always disappointed that no one has tried creating a library that provides an
abstraction layer for many of the features on github and other companies like
github. This would help foster diversity in a market that is currently a
monoculture.

------
pedoh
Part two is up: [http://codeascraft.com/2013/09/23/lxc-automating-containers-...](http://codeascraft.com/2013/09/23/lxc-automating-containers-aka-virtual-madness-part-2/)

It looks like they're not leveraging LXC via Docker. I wonder if that's
because they've been doing it this way pre-Docker, or if there are some
technical reasons why it made sense to skip it.

~~~
bobf
They've probably been doing it long before Docker, as most people who use LXC
have been. Also, it makes sense to skip it because Docker adds relatively
little for their use case.

~~~
jaytaylor
Precisely, you've nailed it.

Docker is great when it fits your use case, but there are lots of practical
situations in which LXC stands great all on its own.

I've commented on HN before about why I often choose plain LXC over Docker:
[https://news.ycombinator.com/item?id=6378823](https://news.ycombinator.com/item?id=6378823)

~~~
themckman
You have any good resources on vanilla LXC?

~~~
jaytaylor
The ubuntu docs are pretty good:
[https://help.ubuntu.com/lts/serverguide/lxc.html](https://help.ubuntu.com/lts/serverguide/lxc.html)

And also the lxc-users mailing list is a _great_ resource:
[https://lists.sourceforge.net/lists/listinfo/lxc-users](https://lists.sourceforge.net/lists/listinfo/lxc-users)

------
contingencies
Indeed, LXC rocks for this.

Drawbacks are that it's nontrivial to set up and requires some rigid formalism
in developer output that sometimes demands training and/or cultural change.
But it's definitely something everyone should consider.

In my (currently internal, heavily LXC-utilizing but) infrastructure and OS
neutral project, I am looking at specifically this sort of automation but for
complex topologies of interdependent services, HA clustering layers, complex
emulated network topologies (bonded links, multiple VLANs), etc. Plans are to
include failure testing at the communications level (slow links, lossy links,
cable snaps, switch failures, etc.) in addition to resource levels (disk,
memory, etc.).

Outputs of a successful automated testing environment can include amazingly
detailed information for capacity planning and automatically generated security
policies (for container-side, host-side, and infrastructure-side deployment).

It's a fascinating area and one that is ripe for great change. Many people
have needs here; the question is how to meet them at the intersection of
current infrastructure and codebases, existing teams, business-level concerns,
varying hardware availability, etc. Both pre-commit and post-commit hooks are
useful for different types of automation. IMHO LXC's blazing speed
significantly broadens what can be tested pre-commit.
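As a rough sketch of the pre-commit idea, a hook could run a fast test subset and block the commit on failure. Everything here is illustrative: `run_fast_tests` is a stand-in (an LXC-based setup might launch the subset inside an ephemeral container instead), and the no-op subprocess just keeps the sketch runnable anywhere.

```python
import subprocess
import sys

def run_fast_tests():
    """Stand-in for the real test runner.  In an LXC setup this might
    instead launch a fast test subset inside a throwaway container;
    here it's a no-op subprocess so the sketch runs anywhere."""
    result = subprocess.run([sys.executable, "-c", "pass"])
    return result.returncode

def pre_commit():
    """Return 0 to allow the commit, non-zero to block it."""
    status = run_fast_tests()
    if status != 0:
        print("pre-commit: fast tests failed; aborting commit",
              file=sys.stderr)
    return status

print("pre-commit hook status:", pre_commit())  # prints 0 when tests pass
```

The faster the containerized subset runs, the more of the suite you can afford to gate commits on.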

------
jtreminio
You say your devs' local development environment is different to prod. Are you
guys letting each dev set up their own environment by hand, or have you
provided a Puppet or Chef repo that they can clone and have an exact replica
up and running within minutes with Vagrant?

~~~
rorrr2
It doesn't matter. Developers almost never develop/test in an environment that
mimics production - multiple load balancers, multiple app servers, multiple
database servers, failover to a second data center, etc.

If your dev environment is not different from prod, you're either insanely
rich or your server setup is trivial.

~~~
npsimons
I would argue that a dev environment that is identical to prod, no exceptions,
is too constraining. As the OP points out, having root access in the VM to go
willy nilly and try out new tools is a must for developers.

I'm kind of surprised they didn't have Jenkins set up from the start; I'm also
a bit taken aback that they don't use automated code reviews before accepting
patches to their "deploy" branch. Even for a small project, it's not that hard
to set up Jenkins+Gerrit to reject patches that break tests (or fail whatever
other hurdles you want).

~~~
JPaulEtsy
We only really have one branch, "master". We encourage the engineers to push
small changes, behind config flags if necessary, all the time, so there are
never any huge merge conflicts, etc. This also means you don't push your code
to master until you are up in our push queue.
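A minimal sketch of that kind of config-flag gating (the flag and function names here are made up for illustration; this is not Etsy's actual config system):

```python
# Deployed defaults: unfinished features are merged to master but
# kept dark behind flags that default to off.
CONFIG_FLAGS = {
    "new_checkout_flow": False,
}

def is_enabled(flag, flags=CONFIG_FLAGS):
    """Unknown flags count as off, so missing config can't enable code."""
    return flags.get(flag, False)

def checkout():
    if is_enabled("new_checkout_flow"):
        return "new checkout flow"
    return "old checkout flow"

print(checkout())  # prints "old checkout flow" until the flag is flipped
```

Because the new path ships dark, half-finished work can land on master continuously without users ever seeing it.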

We've also had Jenkins set up for a long time now, we just used LXC to
drastically improve our performance and scalability.

Here is an old blog post explaining some of how it all works:
[http://codeascraft.com/2011/04/20/divide-and-concur/](http://codeascraft.com/2011/04/20/divide-and-concur/)

~~~
xiongchiamiov
When do you do code review?

~~~
JPaulEtsy
We use a review script that creates a temporary branch on GitHub and sends an
email to everyone you specify to review it. We then kill that branch when the
review is over. Any time you push code, you run our test suite on your changes
and then create a review. Since changes are encouraged to be small and behind
config flags so they don't affect all our users immediately, this happens
quite often. Once feedback from the review is addressed, you enter our push
queue and push the change out yourself. If the code is possibly dangerous, we
recommend those pushes wait from Friday night to Monday morning for safety's
sake.

------
sir_charles804
I would love to know the OP's (or anyone else's) thoughts on Vagrant and why
it is not in use to mimic prod on all of the devs' machines.

~~~
bazzargh
They've got 2 use cases. This test one has a rationale linked from 'workload'
in the article:

 _" Run end-to-end, the 7,000 trunk tests would take about half an hour to
execute. We split these tests up into subsets, and distribute those onto the
10 machines in our Jenkins cluster, where all the subsets can run
concurrently..."_

[http://codeascraft.com/2011/04/20/divide-and-concur/](http://codeascraft.com/2011/04/20/divide-and-concur/)
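The splitting described in that quote can be sketched as a greedy longest-first assignment onto the currently least-loaded machine (a rough illustration with made-up timings, not Etsy's actual scheduler):

```python
import heapq

def split_tests(durations, n_machines):
    """Assign tests to machines, longest-running first, always onto the
    least-loaded machine, so the subsets finish at about the same time."""
    heap = [(0.0, i) for i in range(n_machines)]  # (total seconds, machine)
    heapq.heapify(heap)
    subsets = [[] for _ in range(n_machines)]
    for test, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        total, i = heapq.heappop(heap)
        subsets[i].append(test)
        heapq.heappush(heap, (total + secs, i))
    return subsets

# toy example: six test files split across two machines
durations = {"a": 5, "b": 4, "c": 3, "d": 3, "e": 2, "f": 1}
print(split_tests(durations, 2))  # → [['a', 'd', 'f'], ['b', 'c', 'e']]
```

With the subsets balanced like this, wall-clock time for the whole suite approaches the total runtime divided by the number of machines.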

...so clearly running these tests on a single dev's machine would be a
bottleneck. The other use case is the dev env: in a previous blog post they
described how they're using their own internal cloud to run the dev VMs
faster on dedicated hardware (with easy, one-click provisioning):

[http://codeascraft.com/2012/03/13/making-it-virtually-easy-t...](http://codeascraft.com/2012/03/13/making-it-virtually-easy-to-deploy-on-day-one/)

which makes sense. Why emulate prod on the devs' own boxes when you can
pool the hardware for better utilisation, and at the same time run everything
faster?

~~~
contingencies
_Why emulate prod running on dev's own boxes_

It is a good way to have faith in your ability to execute your stuff on
new/unfamiliar/heterogeneous environments, which can be valuable. People may be
geographically distributed. People may wish to work offline. People may want a
degree of control not available or even feasible on shared hardware resources.

~~~
bazzargh
The 'offline' one I agree with, but I don't think that's precluded (they have
the Chef recipes, so why not); still, it's an option, not the common case. The
last one too - we've got a similar setup, and there were times when we explored
issues with our Xen VMs outside of the shared cluster. One more justification
would be overflow - this has happened here: we don't have room for all the VMs
on the cluster, so some of us run them locally.

So yes, don't rule it out, but still seems to make economic sense to share
hardware.

------
rurounijones
What on earth kind of tests are they running that can saturate the I/O of 3
SSDs?!

Maybe worth looking at something like a small Fusion-io or other PCIe flash
card.

------
nickstinemates
What a cool project! Great work.

