
Ask HN: How is the staging/dev environment setup at your workplace? - tsudot
Some background about me: I've been working on web APIs for the last 5 years. It started with a monolith codebase, plugging in all sorts of APIs and packaging everything together. Setting up a staging cluster was fairly easy because hosting the code and configuration was simple.

Over the years I've realised the importance of services and have adopted that strategy. With multiple microservices come the problems of discovery, config management and the tooling around them, so we created Ansible playbooks with configurations for 3 main environments. However, we were still hosting these services the old-school way, running them directly on VMs. A few major problems arise from this:

- There are 3 pre-defined environments: prod, stage and test. Creating a new environment, say for an individual developer, involves a lot of manual work.
- Infrastructure cost increases linearly with every new environment we spawn.
- Service discovery and config management become a mess.

A few months ago, after reading about Kubernetes and how it solves most of these problems, I created pods for most of our services (mostly by brute force). This is just a start, and I've made notes of all the technical debt accumulated in the process. The immediate consumers of this platform are the QA team. Long-term plans also include replaying production traffic on this cluster and benchmarking it.

My questions:

- How is the staging cluster set up in your organisation?
- What are some of the learnings and problems you faced while building it?
- Does having a working cluster for every developer make sense?
======
NicoJuicy
I did it myself (ASP.NET mostly).

Jenkins connected to git (GitLab), building the alpha, beta and master branches
separately. It can execute batch, PowerShell, MSBuild and .exe files in the
user environment.

All builds publish NuGet packages to our own NuGet server. The versioning
looks like (for alpha, beta): {packagename}
{MajorVersion}.{MinorVersion}.{PatchVersion}-{PreReleaseString}{BuildNumber}

{PreReleaseString} is removed for the master ( production) builds.

All NuGet packages are built continuously, including the API NuGet packages.
Just make sure you bump MajorVersion, MinorVersion and PatchVersion
accordingly when a new version changes features/functionality (mostly this
isn't required, because the Jenkins build number is incremented automatically).
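A minimal sketch of that versioning scheme (function and parameter names are illustrative, not the actual build script):

```python
def package_version(major, minor, patch, build_number, prerelease=None):
    """Compose a NuGet-style version string.

    Alpha/beta builds get a pre-release suffix plus the Jenkins build
    number; master (production) builds drop the suffix entirely.
    """
    base = f"{major}.{minor}.{patch}"
    if prerelease:
        return f"{base}-{prerelease}{build_number}"
    return base

print(package_version(1, 4, 2, 137, prerelease="beta"))  # 1.4.2-beta137
print(package_version(1, 4, 2, 137))                     # 1.4.2
```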

It supports parameters, and a publish can choose to include/exclude:
database migrations, NuGet packages, the API and the web frontend (normally
all of them are published/executed).

Alpha, beta and master were chosen because of the alphabetical ordering, so
the beta version appears above the alpha version.

I created a PowerShell script that runs EF Migrations against multiple
databases based on an auto-generated XML file (with all connection strings in
it), so it's server/database agnostic.
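As a rough sketch of the agnostic part, assuming a hypothetical layout for that XML file (the real file and element names may differ):

```python
import xml.etree.ElementTree as ET

# Hypothetical layout for the auto-generated connection-string file.
SAMPLE_XML = """
<databases>
  <database name="CustomersDb" connectionString="Server=sql1;Database=Customers;" />
  <database name="OrdersDb"    connectionString="Server=sql2;Database=Orders;" />
</databases>
"""

def load_connection_strings(xml_text):
    """Return a {database name: connection string} mapping from the XML."""
    root = ET.fromstring(xml_text)
    return {db.get("name"): db.get("connectionString")
            for db in root.findall("database")}

# A migration driver would then loop over these entries and invoke the
# EF migration command once per connection string.
for name, conn in load_connection_strings(SAMPLE_XML).items():
    print(name, "->", conn)
```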

PowerShell also takes the web applications offline/online accordingly, so the
user sees a notification on the web frontend ("we'll be back soon") and the
API returns a 503.

The API client libraries raise an error on 503, and this is explained in the
documentation for third parties (one update a month, with 30 minutes of
downtime in the morning from 06:30).
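On the client side, that behaviour might be surfaced roughly like this (a sketch; the exception name and helper are made up, not the actual library API):

```python
class MaintenanceWindowError(Exception):
    """Raised when the API answers 503 during the monthly maintenance window."""

def check_status(status_code):
    """Translate an HTTP status code into library-level behaviour."""
    # 503 means the API has been taken offline for the scheduled update;
    # third-party callers are expected to back off and retry later.
    if status_code == 503:
        raise MaintenanceWindowError("API offline for scheduled maintenance")
    if status_code >= 400:
        raise RuntimeError(f"unexpected HTTP status {status_code}")

check_status(200)  # OK: no exception raised
```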

Tests run after the builds; if the tests fail, the build fails, but at that
point nothing has been changed yet (neither the database nor the published code).

If the migration scripts fail for a database, the build is marked "unstable".
Mostly these need manual fixes.

The offline HTML page links internally to a Jenkins build monitor (for
duration estimation), so you can see how long a "normal" build run will take
to finish.

