
Beyond CI/CD: GitLab's DevOps Vision (2017) - FooBarWidget
https://about.gitlab.com/2017/10/04/devops-strategy/
======
rdsubhas
We have explored GitLab in the past, and a lot of our CI/CD is heavily inspired
by GitLab.

They seem to be consistently making the right decisions at the micro level.
Their CI/CD design and execution is far more usable and reliable than <name
suppressed> pipelines (still no manual stages, still no re-triggering, etc.).
Their design and integration with Kubernetes is also a great choice.

So at the micro level, things are quite good. At the macro level, they are
building a universe. On the Issues front, they're trying to be like Trello
(and in some places, reminiscent of Jira). They're trying to tie Issues to
customer support, edging into Zendesk/Freshdesk territory. They're building
deployment. Then monitoring: they support Prometheus. Now post-deployment and
post-monitoring. And of course, they are competing on core Git hosting as
well.

Of course, they'll do a great job. Heck, if they get into the messenger
domain, they'll pick a nice strategy and just integrate deeply with Slack
maybe.

But the core of the problem is: None of these are instrumentable/hookable.
Want to enforce some of your own organization policies before deployment? Want
to use everything _except_ with your own monitoring tool? Sorry. If you use
GitLab, you use their universe and everything that comes with it. There is no
graceful integration with a broader set of tools.

Maybe for a small startup, just "doing things the GitLab way" and subscribing
to all their micro choices (which are good) makes sense. But organizations
grow, and they'll outgrow these micro choices sooner rather than later. Then
the lack of extensibility, hooks, etc. will bubble up.

All this is assuming GitLab continues to build out a perfect application
platform with all the right choices (and all the version and time matrices of
those choices). Hugely laudable work so far though!

~~~
iamtew
On the topic of messaging, GitLab acquired Gitter about a year ago:

[https://about.gitlab.com/2017/03/15/gitter-acquisition/](https://about.gitlab.com/2017/03/15/gitter-acquisition/)

Edit: I first said "not too long ago", then realized it was just about a year
ago... Where is time going!?

~~~
lloeki
Also, omnibus GitLab includes Mattermost, and there are integrations between
both (notifications, slash commands, auto channels...).

------
eadz
I like GitLab, but a mixed open source / closed source product is always going
to be a challenge in terms of hearts and minds. Take this quote from the
article.

> The other way to look at it is that this is pretty advanced stuff, and
> frankly, it doesn’t deserve to be, free, open source.

So is all the stuff that GitLab builds on, like Git, Ruby, or Linux; that's
all pretty advanced stuff, I'd say.

~~~
Cthulhu_
Wow, that really is a poorly worded phrase. If someone from GitLab is reading
this, please fix it; it makes you look pretty bad.

Just be honest and say you want to charge for advanced features, because
there's little money to be made in open source and you're a business after all
and want to pay your employees a living wage.

~~~
mariusmg
>it makes you look pretty bad.

But they are being honest; they are saying what they are thinking...

~~~
oblio
They could rephrase it to: "we consider this an enterprise usage pattern for
our software so this feature will be included in the paid Enterprise Edition".

------
cabraca
Is it just me, or is GitLab becoming more and more bloated? Is it really a
good idea to bundle everything into one application? Why do I need integrated
artifact management when solutions like Sonatype Nexus exist? Is it worth
adding an "awesome environment for ops" on top of Kubernetes? Why add this
complexity?

Don't get me wrong: I like GitLab and have used it since the beginning. I just
have a problem with this "munch it all together" style of product.

~~~
foepys
GitLab in a way suffers from being open source. Their open source product is
so good and includes so many features (even a full-featured CI/CD pipeline!)
that they need something big to justify their paid version. So I don't blame
them. You can disable a lot of the stuff that eats away at your memory.

~~~
FooBarWidget
Corollary: do you hate GitLab becoming bloatware and an integrated suite?
Then pay them! Even if you only use their open source offering, pay them. And
not just "pity money" like 100 USD per year, but really pay them as if they're
a serious commercial software vendor.

Yeah, I know people won't do that. But think about this for a second: it does
not apply only to GitLab, but to lots of independent open source software
vendors too.

~~~
zingmars
How is that going to solve anything? The more you pay them, the more bloated
a version you're going to get. In fact, if you want to travel light, the
community edition is actually your best choice.

~~~
slgeorge
What it would solve is that if you pay, you can more directly influence their
roadmap. As a customer, they want you to be happy and to remain one, which
gives you the ability to influence direction. Individually it may not be
much, but in aggregate, customers have power.

------
tazjin
Slightly off-topic from what the article focuses on, but the thing that
bothers me about all these built-in CI/CD systems in software like GitLab is
that people seem to be perfectly content with building an artifact and then
just... pushing it out.

In traditional deployments that may be some kind of "copy this thing over SSH
and make it go!"; in Kubernetes-land it's more like "let's just modify this
API state to point at the new image tag!".

Neither of those produces a consistent record of the changes applied, and
state mutation like that is very error-prone.

We actually use Gitlab's CI at work, but we have our own deployment solutions
built on it that end up making git commits into the NixOS & Kontemplate
repositories which then run their own pipelines to deploy.

This way we can always answer the question "what _set_ of applications at
which versions was deployed at $time?" and also roll back consistently.
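For illustration, an entry in such a declarative cluster repository might look roughly like this (a hypothetical sketch; Kontemplate's actual format and these service names may differ):

```yaml
# cluster.yaml -- declarative record of what the cluster SHOULD run.
# Every deploy is a git commit to this file, so `git log` is the
# deployment history and a rollback is just a revert.
context: prod-cluster
include:
  - name: service-a
    values:
      image: registry.example.com/service-a
      tag: "1.4.2"    # pinned version, changed only via commit
  - name: service-b
    values:
      image: registry.example.com/service-b
      tag: "2.0.1"    # versions of A and B are related in one commit
```

Because both services live in one file, a single commit captures which versions were meant to run together.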

~~~
akvadrako
Sorry but it's not clear what you are saying.

What is the fundamental difference between modifying your k8s deployment to
point to a new tag and your solution?

You can easily roll back (except with database migrations) and easily see what
version is deployed when.

~~~
tazjin
When you update a Kubernetes deployment in the API directly you're essentially
modifying a global variable. If you do this as the result of some imperative
process (e.g. a CI pipeline), you don't have any record telling you what the
value _should_ be - you only know what it currently _is_.

You're also only modifying a single piece of your whole state at any given
time, meaning that if you have services A and B whose deployment pipelines
independently modify state, you have nothing that declares which versions of
the two should be deployed together.

There's a little piece of infrastructure wisdom I've learned over the years, I
refer to it sometimes as "tazjin's law":

Any infrastructure component not controlled by a reconciliation process will
eventually fail.

Versions of dependent components will get out of sync, configuration will be
deployed independently of the application, and so on.

In order to reconcile your current state with your desired state you must know
what your desired state is.
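As a concrete sketch of that reconciliation idea (illustrative Python, not any particular controller; the state shapes are made up):

```python
# Minimal reconciliation loop: compute the actions needed to converge
# actual state toward desired state. The desired state comes from a
# declarative source of truth (e.g. a git repository); the actual
# state is whatever the live system currently reports.

def reconcile(desired_state: dict, actual_state: dict) -> list:
    """Return (action, name, ...) tuples that make actual match desired."""
    actions = []
    # Anything missing or different from the desired state gets updated.
    for name, desired in desired_state.items():
        if actual_state.get(name) != desired:
            actions.append(("update", name, desired))
    # Anything running that the desired state doesn't mention gets removed.
    for name in actual_state:
        if name not in desired_state:
            actions.append(("delete", name))
    return actions
```

Note that without a `desired_state` to diff against, there is nothing to reconcile, which is the point of the "law" above.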

Does that explain it?

~~~
akvadrako
That doesn't explain it. You always need some global variable saying what the
current state is. In your case, maybe it's the branch pointer in your git
tree. In the k8s deployment, it's a tag value.

~~~
tazjin
No, those don't contain the same information.

Our Kontemplate repository contains the state of an entire cluster, including
_all_ versions and _all_ configuration.

A tag value is a single piece of mutable data that bears no relation to the
other relevant data.

~~~
akvadrako
The tag points to the current version of your Docker image, which contains
the current configuration (except secrets, which you don't want in a repo).
Older deployments point to older Docker images, which contain the older
versions.

I still don't see any advantage that justifies the additional complexity in
your system.

~~~
tazjin
We have more than one service, more than one environment, and auditability
requirements (due to being a financial institution).

I find it more complex to try and keep track of remote state modifications
than to have a single source of truth, but whatever floats your boat ;-)

~~~
pas
You can export k8s state as JSON/YAML and put that into a git repo if you
want.
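For example, something along these lines, run periodically or from a deploy hook (a sketch; it assumes `kubectl` is configured for the target cluster and the repo is already initialized):

```shell
# Snapshot the live deployment specs and commit them for an audit trail.
kubectl get deployments --all-namespaces -o yaml > cluster-state.yaml
git add cluster-state.yaml
git commit -m "cluster state snapshot $(date -u +%Y-%m-%dT%H:%M:%SZ)"
```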

------
frio
This is awesome; it's something that I've slapped together out of disparate
components (that have then gradually fallen apart) so many times I've lost
count. Having a tool which can manage it all -- from commit, to builds, to
deploys, to monitoring -- is a massive boon for small businesses (and
personal projects).

I look forward to the next releases of Gitlab!

------
Walkman
Now I'm pretty sure GitLab has a focus problem. It's already very complex,
with somewhat unrelated things bundled together. You can't be everything to
everybody. Spreading further will only decrease the quality of individual
parts of the system. It has been unusably slow for a long time; if they try
to do everything, they will have far bigger quality issues than slowness.

------
zerogvt
I'm upvoting this and adding it to my favorites if only for the pipeline
diagrams.

I don't know whether their vision will pan out, though. I don't use GitLab,
so I'm probably not that qualified to speak, but I wouldn't like using one
PaaS for all things CI/CD. I'd be afraid of locking myself in, plus losing a
few degrees of flexibility.

------
therealmarv
I don't get the "beyond" part, especially monitoring... Will GitLab only have
a Prometheus view in their UI, or do they want to create their own
monitoring? Building their own would be a bit crazy; Prometheus keeps getting
better every day.

~~~
joshlambert
We are indeed leveraging open source tools like Prometheus, and not seeking
to build our own. We do believe, however, that the data and insights
Prometheus provides can be much more impactful to an organization if they are
surfaced in the workflow developers already use, rather than in a separate
tool/UI.

For example, when looking at a CI/CD deploy to an environment, you can easily
access important Prometheus metrics (and, in the future, logs) from the same
console.

This also allows us to build more intelligence into the platform, as GitLab
becomes more aware of your application and its health. One example is
incorporating Prometheus monitoring to compare the performance of a new
release during an incremental deployment, and automatically pausing the
rollout if key metrics have degraded.
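That kind of gate could look something like this (an illustrative sketch only, not GitLab's implementation; the metric names and threshold are hypothetical):

```python
# Canary gate sketch: compare key metrics between the stable and canary
# releases and decide whether an incremental rollout should be paused.

def should_pause_rollout(stable_metrics: dict, canary_metrics: dict,
                         max_degradation: float = 0.10) -> bool:
    """Pause if any "lower is better" metric (e.g. p95 latency or
    error rate) is more than max_degradation worse on the canary."""
    for name, stable_value in stable_metrics.items():
        canary_value = canary_metrics.get(name)
        if canary_value is None:
            continue  # the canary has not reported this metric yet
        if stable_value > 0 and canary_value > stable_value * (1 + max_degradation):
            return True
    return False
```

For instance, a stable p95 latency of 200 ms against a canary p95 of 260 ms exceeds a 10% degradation budget, so the rollout would pause.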

------
oblio
Is anyone here using the Gitlab CI/CD for anything a bit more complex?

I'm especially interested in a pipeline with parallel executions of different
kinds of tests, maybe some manual checkpoint in there, a more complex chain
with execution on different kinds of hosts or containers.

~~~
FooBarWidget
Last time I checked (a few weeks ago), GitLab's CI/CD system was great for
most use cases, as long as you can describe your pipeline as a list of non-
interactive tasks -- some of which may be parallelized -- each of which can
run in a Docker container.

For one of the software projects I'm involved in, I have more complex needs.
We release binaries for multiple platforms so our Jenkins master delegates
certain tasks to slaves running on specific OSes. Then at the end the Jenkins
master downloads the built artifacts from all slaves and publishes everything
to our artifact hosting server. As far as I can tell, Gitlab CI does not
support this.

In future CI jobs I may even require user interaction; e.g., I may ask a
human to sign off on a report. I don't think GitLab CI can do this.

But if your needs aren't so complex, then GitLab CI is great. It's
UX-philosophically similar to Travis, and setting it up is super simple, as
opposed to Jenkins, which is a pain to use.
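That task model -- a list of non-interactive, possibly parallel jobs, each in a Docker container -- maps to a `.gitlab-ci.yml` roughly like this (a minimal sketch; the images and scripts are placeholders):

```yaml
stages:
  - test
  - deploy

# Jobs in the same stage run in parallel, each in its own container.
unit_tests:
  stage: test
  image: ruby:2.4
  script:
    - bundle exec rake test

lint:
  stage: test
  image: ruby:2.4
  script:
    - bundle exec rubocop

deploy_production:
  stage: deploy
  script:
    - ./deploy.sh production
  when: manual  # a human must trigger this job from the UI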

~~~
Already__Taken
Can't you put the GitLab runner on whatever you want and register CI
environments with the runner?

~~~
bpicolo
The issue is that the configuration format makes complex jobs difficult to
define. It's good for the simple case, but compared to, e.g., a
Jenkinsfile[0], where you get a complete Groovy DSL that can interact with
its environment across tasks, pipelines, and branches with ease, it's rough.
Fine-grained control becomes important.

[0]
[https://jenkins.io/doc/book/pipeline/jenkinsfile/](https://jenkins.io/doc/book/pipeline/jenkinsfile/)
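For comparison, a minimal declarative Jenkinsfile with parallel stages and a manual gate might look like this (a sketch; the stage names and shell commands are placeholders):

```groovy
// Declarative pipeline: parallel test stages, then a gated deploy.
pipeline {
    agent any
    stages {
        stage('Test') {
            parallel {
                stage('Unit') { steps { sh './run-unit-tests.sh' } }
                stage('Lint') { steps { sh './run-lint.sh' } }
            }
        }
        stage('Deploy') {
            steps {
                input message: 'Deploy to production?'  // manual checkpoint
                sh './deploy.sh production'
            }
        }
    }
}
```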

~~~
vorg
> a Jenkinsfile[0] where you get a complete groovy DSL

Apache Groovy _isn't_ "complete" as shipped with Jenkins. Its collections API
is deliberately crippled, so it doesn't work.

------
9034725985
I would like to know what the goal for auto devops is...

Will it be able to look at a project and compile it? For example:
[https://gitlab.com/postgres/postgres/-/jobs](https://gitlab.com/postgres/postgres/-/jobs)

~~~
Snappy
Yes, it attempts to detect the language/framework and build it. It doesn't
work for all languages, and since it is based on Heroku buildpacks, it has
similar limitations. If auto-detection fails but some Heroku buildpack would
work, you can specify it manually. Or just include a Dockerfile and it'll
build that instead.

------
deadbunny
From the team who knows basically nothing about ops? (Unless they've hired
actual ops people in the last year.)

