
Critical Security Release for GitLab 8.2 through 8.7 - iMerNibor
https://about.gitlab.com/2016/05/02/cve-2016-4340-patches/
======
beefhash
Is it just me or has GitLab issued a lot of critical security releases lately?

I'm not sure if this is because they're particularly open about these things
or because their product might be particularly insecure.

~~~
JamesSwift
Your comment is weirdly similar to this other comment that I saw in the
initial announcement:

[https://news.ycombinator.com/item?id=11593362](https://news.ycombinator.com/item?id=11593362)

~~~
sdesol
Since we are doing deja vu, I uploaded some new metrics that better
highlight how insane GitLab's churn rate is.

[http://imgur.com/a/4uaSR](http://imgur.com/a/4uaSR)

What's really interesting is the number of contributors.

~~~
StavrosK
What does this show, exactly? I'm not familiar with this tool and there are no
labels.

~~~
sdesol
It basically shows how much has changed in GitLab's master branch in the last
30 days. The main metric is cumulative code churn: lines added, changed, and
deleted, not counting comment or blank lines.

The first picture shows the churn was about 24,000 if you don't include
merge commits (nomerge:true) and if you ignore changes that were the result
of adding/deleting files (action:M).

The numbers next to the avatars break down as follows: the first column is
the number of commits, the second is cumulative code churn, and the third is
the percentage of churn that does not involve blank/comment lines.

The second and third pictures show the code churn grouped by top-level
directories. In GitLab's case, most of the churn occurred in the app
directory. If you drill down to app/assets/javascripts, you'll see the
following:

[http://imgur.com/B2GE1yk](http://imgur.com/B2GE1yk)

The charts and metrics basically show that GitLab's code base is changing a
lot. For comparison, this is the churn for Gogs in the last 30 days.

[http://imgur.com/ZmzyMsz](http://imgur.com/ZmzyMsz)

And I guess the question is: is the high rate of change a contributing
factor in the increased security issues? Statistically speaking, the more
code that changes, the more chances for something to be missed during code
review.

It's important to note the metrics aren't saying the quality is bad. They're
just saying a lot is changing.
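
If you want a rough sanity check of the nomerge number without the tool, git
can approximate it. A sketch (this counts raw added-plus-deleted lines and
skips the blank/comment filtering, so it won't match exactly):

    git log --since="30 days ago" --no-merges --numstat --format= \
      | awk 'NF >= 3 { add += $1; del += $2 } END { print add + del }'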

~~~
sytse
A lot is changing since GitLab is rapidly developing. If you have more churn
with the same number of developers, that might lead to increased security
issues. We've been quickly adding paid developers over the last year, so I'm
not sure what that ratio did. Also, some of the churn is due to refactorings;
although these are risky in themselves, over the long run they increase code
quality.

------
dewiz
A note for admins using Apache 2.4: when applying the manual quick fix
described, take into consideration the Apache auth config changes described
here:
[https://httpd.apache.org/docs/current/upgrading.html](https://httpd.apache.org/docs/current/upgrading.html)
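
The headline change from that guide is the access-control syntax, so a
quick-fix snippet written for 2.2 needs translating. A sketch of the mapping
(not the exact GitLab fix):

    # Apache 2.2 style:
    Order deny,allow
    Deny from all

    # Apache 2.4 equivalent:
    Require all denied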

------
kentonv
Gitlab can be run on Sandstorm.io (of which I am tech lead / co-founder).
Sandstorm claims to mitigate most vulnerabilities in apps:

[https://docs.sandstorm.io/en/latest/using/security-non-events/](https://docs.sandstorm.io/en/latest/using/security-non-events/)

Let's see how it scores here...

For background, on Sandstorm, each Gitlab project is placed in a separate
grain (container), isolated from all others. In order to communicate with a
grain at all, you must have been granted some level of access to it by its
owner -- Sandstorm does not let you send requests to private grains to which
you haven't been given access. So private Gitlab repos hosted on Sandstorm
are basically not exposed to any of these vulnerabilities.

Of course, Gitlab is the kind of thing you might intentionally make public to
all, e.g. to host an open source project. Therefore, it makes sense to analyze
whether a public repository would be exploitable.

    Privilege escalation via "impersonate" feature

On Sandstorm, authentication is handled by Sandstorm. The app receives an
unspoofable header indicating which user the request came from, and what
permissions they have. A well-written app uses this header on every request to
authenticate the user.
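
Concretely, these are plain HTTP headers injected by Sandstorm on every
request; any client-supplied values are stripped, so they can't be spoofed.
From memory, they look roughly like this (the docs have the full list):

    X-Sandstorm-User-Id: <stable, unspoofable user ID>
    X-Sandstorm-Username: <display name>
    X-Sandstorm-Permissions: <permissions granted by the grain owner>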

Unfortunately, our Gitlab package currently uses this information only when a
session first opens, then relies on the session cookie going forward. This
pattern is sometimes used as a "hack" on Sandstorm to more easily integrate
with existing login code designed to do upfront / one-time authentication. As
such, public Gitlab repositories hosted on Sandstorm would be vulnerable.

Had Gitlab on Sandstorm been implemented "properly", it would not be
vulnerable. That said, the "impersonate user" feature would not have worked at
all. That's probably for the best: a feature like this really ought to be
implemented by Sandstorm itself, which would have the ability to implement it
(securely) across all apps at once, rather than have each app implement its
own version.

Somewhat embarrassingly, the Sandstorm package of Gitlab actually predates
this feature being added, so Gitlab instances on Sandstorm today actually
aren't vulnerable. (Generally, if the upstream app author does not
directly maintain the Sandstorm package, then the Sandstorm package will tend
to fall behind. This should get better as Sandstorm gains popularity and
upstream authors target it explicitly.)

    Privilege escalation via notes API
    Privilege escalation via project webhook API
    Information disclosure via project labels
    Information disclosure via new merge request page

These vulnerabilities allow a user to manipulate a private project to which
they aren't supposed to have access. On Sandstorm, a private project would
live in its own grain, and if you hadn't been given access then Sandstorm
would deny you the ability to talk to the grain at all. Therefore, these
cannot be exploited.

    XSS vulnerability via branch and tag names
    XSS vulnerability via custom issue tracker URL
    XSS vulnerability via label drop-down

These vulnerabilities require that you have write access to one project on the
server in order to launch an attack. On Sandstorm, since every repository is
its own instance, the attack would be limited to the repository on which you
have write access -- you would not be able to use this attack to damage
someone else's Gitlab repository on which you lack write access.

    XSS vulnerability via window.opener

This is a subtle phishing issue that is very widespread. However, it mostly
doesn't work when the opener is a Sandstorm app: the app lives inside an
iframe which is prohibited by Content-Security-Policy from browsing away from
the server.

    Information disclosure via milestone API
    Information disclosure via snippet API

A public project hosted on Sandstorm would be vulnerable to these (leaking
confidential issues attached to public milestones, and leaking private
snippets attached to a public project). Sandstorm can only enforce access
control at the grain level; anything finer than that is up to the app, and is
subject to app bugs. I generally recommend that Sandstorm users try to put
confidential data in separate grains from public data -- e.g. creating a
separate issue tracker for confidential issues.

(The Sandstorm packaging again predates the milestone bug's introduction,
though may be affected by the snippet issue.)

We will update the Gitlab package tomorrow. Once an update is pushed, every
Sandstorm user will receive a notification within 24 hours and can apply the
update with one click.

Conclusion: Sandstorm mitigated 8/11 issues by design, 2/11 by accident, and
is vulnerable to 1/11. The biggest issue was only mitigated by accident in
this case, although a well-behaved Sandstorm app normally wouldn't have this
kind of issue by design. Overall, though, I'm disappointed in Sandstorm's
performance here -- it's much worse than the usual 95% mitigation rate.

~~~
mserdarsanli
From [https://sandstorm.io/install](https://sandstorm.io/install)

> Run this in a terminal:

> curl https://install.sandstorm.io | bash

kthxbye

~~~
zackp30
No need for the rudeness.

I always get bothered when people bring up the `curl [...] | bash` “argument”
(which is usually less of an argument and more of a rude dismissal of a good
product).

Sure, the script downloaded with curl should be validated.

Not sure, but I’m pretty sure you don’t validate every tarball you download,
and even if you do validate them, you certainly wouldn’t look through each
line of code making sure there isn’t a backdoor or something.

Any malicious person could break into a tarball download site and replace
the checksums as well (the only way to mitigate that is a signature tied to
the tarball maintainer). And even if something like PGP is used, the attacker
could replace the instructions for obtaining the PGP key with a key they
generated themselves, or simply remove the signature completely.
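
Spelled out, the "verified" tarball flow looks something like this (file
names are made up for illustration). Notice that every step trusts something
fetched over the same channel:

    curl -O https://example.org/foo-1.0.tar.gz
    curl -O https://example.org/foo-1.0.tar.gz.sha256
    curl -O https://example.org/foo-1.0.tar.gz.asc
    # the checksum file lives on the same site an attacker just broke into:
    sha256sum -c foo-1.0.tar.gz.sha256
    # the signature only helps if you obtained the key out-of-band:
    gpg --verify foo-1.0.tar.gz.asc foo-1.0.tar.gz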

This is, of course, the same for curl piped into bash, but my point is that
it’s only slightly worse than an unsecured package site (which I’m sure there
are many of). The only thing worse about it is that it’s less noticeable when
the script has been replaced.

Also, some scripts (such as the rvm installation script) /require/ PGP to be
used for the installation to even start.

In conclusion, please don’t be rude and dismiss an entire product because of a
single installation method, which isn’t actually worse than a malicious
tarball.

~~~
kogepathic
> I always get bothered when people bring up the `curl [...] | bash`
> “argument” (which is usually less of an argument and more of a rude
> dismissal of a good product).

Why? Package managers were created for a reason. Virtually every Linux
distribution anyone would use to host a service such as Sandstorm will include
a package manager.

> I can't know for sure, but I'd bet you don't validate every tarball you
> download, and even if you do validate them, you certainly don't look
> through each line of code making sure there isn't a backdoor or something.

I don't download tarballs. I install packages. Who downloads tarballs to
install software in 2016?

Anyone actively developing for a project will likely be using git, and I'm not
aware of too many projects which make use of toolchains which are so new that
they're not available in any rolling release/testing distro.

> Any malicious person could break into a tarball download site and replace
> the checksums as well (the only way to mitigate that is a signature tied to
> the tarball maintainer). And even if something like PGP is used, the
> attacker could replace the instructions for obtaining the PGP key with a
> key they generated themselves, or simply remove the signature completely.

Yes, fine. And that's why all major distributions sign their packages, so you
know that it's valid. Again, it's 2016, who seriously installs software from
tarballs?

> The only thing worse about it is that it’s less noticeable when the script
> has been replaced.

No, what is _much_ worse than that is that you are installing software
outside of your package manager. Also, companies who deploy via curl | bash
typically pull in their own dependencies outside of the package manager.

It's bad form.

If your application has specific dependencies, then ship it as an appliance
using LXC, Docker, or $TRENDY_CONTAINER_TECH. That way, all updates are atomic
and you don't risk eff-ing the OS by pulling in a bunch of stuff outside of
the package manager.
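
Something like this (image name and tag are hypothetical):

    # all dependencies live inside the image; updating is an atomic image swap
    docker pull example/someapp:1.2.3
    docker run -d --name someapp example/someapp:1.2.3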

> In conclusion, please don’t be rude and dismiss an entire product because of
> a single installation method, which isn’t actually worse than a malicious
> tarball.

In conclusion, package management is a solved problem. Companies which have
installers which do not make use of the distribution's package management
system are lazy, and anyone who defends curl | bash is an apologist.

Seriously, it's not difficult to generate a DEB/RPM/<insert distribution
package format> and create your own signed repo. Package management has been a
solved problem for at least a decade.

It annoys me that companies think it's okay to have curl | bash as an
installation method. If they think that will ever be acceptable in an
enterprise environment where change management is important, they're
delusional.

~~~
kentonv
> Virtually every Linux distribution anyone would use to host a service such
> as Sandstorm will include a package manager.

Yes, but:

* Most of them ship on a 6-month or even 2-year release cycle whereas Sandstorm updates every week.

* Most of them will not accept a package that wants to self-containerize with its own dependencies, which means Sandstorm would instead have to test against every different distro's dependency packages every week, whereas with self-containerization we only depend on the Linux kernel API -- which is insanely stable.

* If we publish the packages from our own repo, we're back to square one: how does the user get the signing key, if not from HTTPS download?

* Sandstorm actually _has_ a PGP-verified install option: [https://docs.sandstorm.io/en/latest/install/#option-3-pgp-verified-install](https://docs.sandstorm.io/en/latest/install/#option-3-pgp-verified-install)

* You should probably be installing Sandstorm in its own VM anyway.

I'm certainly not saying curl|bash is perfect. There are trade-offs. For now
this is the trade-off that makes the most sense for us. Later on, maybe
something else will make sense.
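
For what it's worth, the PGP-verified flow mentioned above is roughly this
(file names are illustrative; the linked docs have the exact commands):

    curl https://install.sandstorm.io > install.sh
    # fetch the corresponding signature and our release key per the docs, then:
    gpg --verify install.sh.sig install.sh
    bash install.sh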

> If they think that will ever be acceptable in an enterprise environment
> where change management is important, they're delusional.

We've never had an enterprise customer comment on this (other than requesting
the PGP-verified option, which we implemented). The vast majority of
complaints come from HN, Twitter, and Reddit. ::shrug::

~~~
kogepathic
> Most of them ship on a 6-month or even 2-year release cycle whereas
> Sandstorm updates every week.

That's fine. Run your own repo where you control the release cycle. Puppet
does this. GitLab does this. PostgreSQL does this. Sandstorm does _not_ do
this.

> Most of them will not accept a package that wants to self-containerize with
> its own dependencies, which means Sandstorm would instead have to test
> against every different distro's dependency packages every week, whereas
> with self-containerization we only depend on the Linux kernel API -- which
> is insanely stable.

GitLab ships an omnibus installer with their CE/EE product, and it works
great. I don't see why Sandstorm couldn't also publish an omnibus installer
which contains all the dependencies, in essence creating its own container
somewhere neutral like /opt.

This way, you have atomic releases you can install. How do I select the
version to install with 'curl | bash' without user interaction?

> If we publish the packages from our own repo, we're back to square one: how
> does the user get the signing key, if not from HTTPS download?

Publish your signing key as a distribution package. This is what most
organizations do (e.g. EPEL, PostgreSQL, Puppet).

Then the user does 'apt-get install sandstorm-release', 'apt-get update',
'apt-get install sandstorm' and you have an authenticated release.
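
That is:

    sudo apt-get install sandstorm-release  # installs the repo definition and signing key
    sudo apt-get update
    sudo apt-get install sandstorm          # now verified against that key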

Your signing infrastructure should be secure enough that you don't have to
change the signing key within the major release cycle of a Linux distribution
anyway.

> You should probably be installing Sandstorm in its own VM anyway.

This isn't an excuse for 'curl | bash' installs. If they want to recommend
that their customers run the product in a VM or container, they should provide
appliance/container images, as well as packages.

> We've never had an enterprise customer comment on this

I am in enterprise, although I'm not a Sandstorm customer. I will tell you
that one of the first considerations when deciding on a product is how well
it fits into our existing infrastructure. If I can't push a repo and install
a specific version of the software, tested and known working, using Puppet,
it's not getting deployed in our org.

Perhaps the reason none of Sandstorm's enterprise customers are asking for
packaged installers is that every enterprise customer who expects a packaged
installer sees Sandstorm's installation method and decides not to use it.

------
gramakri
Thanks for the release! We updated the cloudron.io app to use 8.7.1 as well.
Cloudron users will get GitLab auto-updated tonight.

