
Remote code execution vulnerability in apt/apt-get - justicz
https://justi.cz/security/2019/01/22/apt-rce.html
======
Sephr
Using HTTP for apt may seem bad, but you should really pay attention to Ubuntu
itself: [https://github.com/canonical-
websites/www.ubuntu.com/issues/...](https://github.com/canonical-
websites/www.ubuntu.com/issues/4394)

Ubuntu ISOs aren't served securely and are trivially easy to MitM attack.

This vulnerability is still being exploited:
[https://www.bleepingcomputer.com/news/security/turkish-
isp-s...](https://www.bleepingcomputer.com/news/security/turkish-isp-swapped-
downloads-of-popular-software-with-spyware-infected-apps/)

~~~
vvanders
Wow, that's pretty stunning and I'm honestly quite shocked it's still the case
in '19.

Just downloaded Ubuntu server for some local instances here at home and
realized that I hit this path without even knowing.

[edit] Looks like ISO signatures are served over HTTP as well.

~~~
blub
It's one of my pet peeves and I can say that a ton of organisations do this in
2019, including ones that should know better, like Ubuntu.

The only secure way to install is to somehow get their key, already have gpg
installed and then verify that way.

~~~
whydoyoucare
As long as verification happens independently, and the keys are obtained from
trusted sources, there is nothing wrong in downloading Ubuntu over http.

This appears to be the case; see the "How to verify Ubuntu download" tutorial
that provides detailed steps: [https://tutorials.ubuntu.com/tutorial/tutorial-
how-to-verify...](https://tutorials.ubuntu.com/tutorial/tutorial-how-to-
verify-ubuntu#0)

(If you plan to install an operating system, then I believe some homework is
in order -- you cannot expect the OS developers to spoon-feed you trivial
security steps; those are expected to be part of the skill set that you, the
operating system installer or system administrator, possess.)

~~~
vvanders
> and the keys are obtained from trusted sources

Isn't that a little hard if the verification keys (from that link) are served
over HTTP[1][2]?

[1]
[http://releases.ubuntu.com/18.04/SHA256SUMS.gpg](http://releases.ubuntu.com/18.04/SHA256SUMS.gpg)

[2]
[http://releases.ubuntu.com/?_ga=2.21478331.1690774166.154812...](http://releases.ubuntu.com/?_ga=2.21478331.1690774166.1548124147-631293260.1548124147)

~~~
bigiain
Those are GPG-signed SHA hashes. Sure, anyone MITMing the http connection
could change them, but for them to be valid signatures for the
theoretically-modified ISO you've ended up with, they either need the correct
GPG private key to generate that new signature, or they need to convince you
to accept a bogus public key for ubuntu-archive-keyring.gpg.

~~~
vvanders
Or they could serve all this over HTTPS where that process would happen
_automatically_ when you download.

As another poster noted the keyserver in that link will work over http so
you're still susceptible to a possible downgrade attack.

Given how easy it is to get an LE cert these days, it makes me question what
other security decisions Ubuntu is making that I'm not aware of or don't have
the expertise to evaluate.

~~~
bigiain
Heh. I think that "other poster" may also have been me...

With regards to LE certs, I discussed elsewhere here the problems faced by the
apt developers that make the obvious "just install an LE cert!" answer not
actually workable for them - they rely heavily on volunteer-run mirrors and
local proxies. I don't know what Ubuntu's ISO download "server" really is
underneath; it's possible they've got similar problems, where either they pay
for all the bandwidth themselves, or they let people assume that an
SSL-secured connection to "ubuntu-mirror.usyd.edu.au" or
"ubuntu-mirror.evilhacker.ru" is for some reason "safer" than an http
connection...

------
rabi_penguin
Hmm, it's almost as if the author of
[https://whydoesaptnotusehttps.com/](https://whydoesaptnotusehttps.com/) may
have overlooked a few things.

~~~
moviuro
OTOH, they would have been right if there had been (yet another) bug in
openssl or whatever lib is used for https.

FWIW: 16 vulns in apt in NVD [0]; but 202 for openssl [1]

[0]
[https://nvd.nist.gov/vuln/search/results?form_type=Advanced&...](https://nvd.nist.gov/vuln/search/results?form_type=Advanced&results_type=overview&search_type=all&cpe_vendor=cpe%3A%2F%3Adebian&cpe_product=cpe%3A%2F%3A%3Aapt)

[1]
[https://nvd.nist.gov/vuln/search/results?form_type=Advanced&...](https://nvd.nist.gov/vuln/search/results?form_type=Advanced&results_type=overview&search_type=all&cpe_vendor=cpe%3A%2F%3Aopenssl&cpe_product=cpe%3A%2F%3A%3Aopenssl)

~~~
skywhopper
So your argument is that bugs in OpenSSL (a necessarily very complex piece of
software) mean that using SSL to increase network security is a bad thing?

~~~
moviuro
No. My argument is that both arguments do exist and security is all about
middle-ground. Do we add (yet another) layer of (bug-riddled) software to
defeat one possible sort of exploits, or not? How much does it cost? (Time,
money, etc.)

Other options include: not handling http directly in the package manager but
using a known-good library (curl?); doing priv-sep; not having the package
manager execute code from a file before it has checked its authenticity...
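That last option -- authenticate first, then touch the file -- boils down to a small pattern. A sketch (a hypothetical helper, not apt's actual code; the expected hash is assumed to come from an already-verified, signed manifest):

```python
import hashlib

def read_verified(path: str, expected_sha256: str) -> bytes:
    """Return a file's contents only if they match a hash taken from an
    already-authenticated manifest; refuse to parse them otherwise."""
    data = open(path, "rb").read()
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        # Nothing downstream (dpkg, tar, a parser) ever sees the bytes.
        raise ValueError("checksum mismatch for %s; refusing to use it" % path)
    return data
```

The discipline is that untrusted bytes never reach complex parsing code until their hash matches a signed value.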

My personal opinion on the topic is that authenticating servers is a good
thing to do (https does this; encryption is an added benefit), but time has
shown that https is broken: libs are full of holes; the CA model is broken by
design. Maybe share updates using ssh?

~~~
nneonneo
Oh for god’s sake will you stop spreading nonsense FUD about https? https is
not “broken”. The _majority_ of websites run on https now; most system update
mechanisms, with the VERY notable exception of apt, use https to serve updates.

Sure, there are bugs in libraries, but seeing as https is already widespread
you’re not exposing yourself to MORE risk by using https over plain http, and
you mitigate attacks like the one in this post. Any random coffee shop, untrusted public
WiFi, or attacker with a Pineapple could have used this attack to MITM HTTP
apt, whereas the attacker would have to compromise an upstream mirror to pull
off the same attack over HTTPS.

And re: the CA model, if you’re THAT worried about compromised or fake certs,
then pin the cert for a root server like debian.org, then download PGP-signed
cert bundles for mirrors and enforce certificate pinning using those bundles
only. Done. Apple and Microsoft use cert pinning for their update systems
(IIRC).

~~~
moviuro
> The majority of websites run on https now

We are talking about a specific use case of https: software repositories,
which are far higher-value targets than your random website, with another set
of challenges. Your package manager can actually do some things as root; once
it's owned, your system is _Game Over_.

Adding yet another lib on top of the (?) most important piece of software on
your computer is not a risk to take lightly. There are more elegant solutions
(signatures, priv-sep, not trusting anything until authenticated, etc.) that
require less risky code to run, and fewer people to come into play.

> Sure, there are bugs in libraries, but seeing as https is already widespread
> you’re not exposing yourself to MORE risk by using https over plain http

Irrelevant. We're talking about instant game over if it goes to sh.. even if
just once. More attack surface = more vulnerabilities.

~~~
nneonneo
If you're that paranoid about OpenSSL, then just sandbox it. Throw the entire
`apt-transport-https` subprocess in an unprivileged context. Done.

------
peterwwillis
Am I crazy, or is the bigger problem here not the fact that Apt will just
install whatever random package the server provides, whether your system
trusts its GPG key or not? What the hell is the point of the keys if the
packages are installed anyway??

~~~
thriqon
Usually, the packages themselves are not signed with GPG, only the Release
file is (containing the hashes of all .deb files). This is actually the
default of both Debian and Ubuntu. I never quite understood the reasons behind
it... I'd not expect this vuln to happen, though.

More info: [https://blog.packagecloud.io/eng/2014/10/28/howto-gpg-
sign-v...](https://blog.packagecloud.io/eng/2014/10/28/howto-gpg-sign-verify-
deb-packages-apt-repositories/)

~~~
peterwwillis
I kind of get why they did it that way (my guess: managing lots of dev keys
was problematic, so they used one key to sign a list of "official-seeming"
files). But then why wasn't the Release file's contents verified (assuming
this doesn't involve generating collisions for those packages)?

 _"The parent process will trust the hashes returned in the injected 201 URI
Done response, and compare them with the values from the signed package
manifest. Since the attacker controls the reported hashes, they can use this
vulnerability to convincingly forge any package."_

Wtf? This sounds like Apt is just downloading a gpg file and checking if it
matches a hash in an HTTP header, and if it does, it just uses whatever is
specified, regardless of whether your system already had the right key
imported? This makes no sense. Any mirror could return whatever headers it
wanted.

This is the real vuln, not header injection. If Apt isn't verifying packages
against the keys I had before I started running Apt, there was never any
security to begin with. An attacker on a mirror could just provide their own
gpg key and Releases file and install arbitrary evil packages.

Can somebody who knows C++ please verify that their fixes actually stop
installing packages if the GPG key wasn't already imported into the system?
[https://github.com/Debian/apt/commit/690bc2923814b3620ace1ff...](https://github.com/Debian/apt/commit/690bc2923814b3620ace1ffcb710603f81fa217f)

~~~
justicz
The hash is done locally in the http worker process. I think you may be
confusing headers in the HTTP response with headers in the internal protocol
used to communicate with the worker process. The 201 response is not an HTTP
response.
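To make that concrete, here is a toy rendering of the worker-to-parent protocol (message and field names are simplified from the article; this illustrates the bug class, not apt's actual code). Because the vulnerable code pasted the redirect target into the message verbatim, a URL containing newlines could terminate the real message early and append a forged one:

```python
def render_redirect(new_uri: str) -> str:
    # Vulnerable rendering: the value is concatenated in verbatim, with
    # no check for embedded newlines. A blank line ends a message.
    return "103 Redirect\nNew-URI: " + new_uri + "\n\n"

# An attacker-controlled redirect target smuggling in a forged
# "201 URI Done" with attacker-chosen fields (values illustrative):
evil = ("http://mirror.example/x\n\n"
        "201 URI Done\n"
        "Filename: /tmp/evil.deb\n"
        "SHA256-Hash: attacker-chosen")
wire = render_redirect(evil)

# The parent now parses two messages where the worker meant to send one.
messages = [m for m in wire.split("\n\n") if m]
assert len(messages) == 2 and messages[1].startswith("201 URI Done")
```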

------
kuhhk
> Thank you to the apt maintainers for patching this vulnerability quickly,
> and to the Debian security team for coordinating the disclosure. This bug
> has been assigned CVE-2019-3462.

From the article. Good stuff

------
kanox
I was expecting a buffer overflow, but this is a quoting issue, which is
equally applicable to all languages. Are there any language-level methods
which make such bugs impossible (or much harder)?

Heavy-weight strongly-typed HTTP libraries can force you to always construct
headers in a way that handles quoting for you but people seem to love "light"
solutions.

~~~
aasasd
The method is to treat the protocol as structured data instead of a bunch of
concatenated text. I.e. use a library that processes each piece of data,
escaping any invalid characters, before putting the pieces together.

Strong typing isn't relevant here; this is applicable in any language. But the
lib needs to know when you're putting text in a header name and when in the
value.
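What that looks like at the seam, in a sketch (names are made up; a real library would also enforce the full token grammar for header names):

```python
def make_field(name: str, value: str) -> str:
    # The library, not the caller, owns the serialization: it knows
    # which side of the colon each string lands on, and it rejects
    # characters that would change the message structure (CR/LF
    # injection -- the same class of bug as this apt vulnerability).
    if any(c in "\r\n" for c in name + value):
        raise ValueError("control characters not allowed in fields")
    if ":" in name or " " in name:
        raise ValueError("invalid character in field name")
    return "%s: %s" % (name, value)
```

With this seam in place, a value like `"x\n201 URI Done"` raises instead of silently producing two messages.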

------
kpcyrd
It seems debian testing and unstable are still vulnerable:

[https://security-tracker.debian.org/tracker/CVE-2019-3462](https://security-
tracker.debian.org/tracker/CVE-2019-3462)

~~~
ChrisSD
Yep, this is our daily reminder that they're called "testing" and "unstable"
for a reason. They're not meant for production.

~~~
gtirloni
New fixes often land in unstable/testing before going to stable branches.

~~~
cwyers
I would hope so! Why have a testing branch if you're not going to test things
in it?

------
explainplease

        $ sudo sed -i 's/http:/https:/g' /etc/apt/sources.list
        $ sudo apt-get update
        ...
        Err https://us.archive.ubuntu.com trusty/main Sources                          
        Failed to connect to us.archive.ubuntu.com port 443: Connection refused
        ...
        Err https://security.ubuntu.com trusty-security/main amd64 Packages
        Failed to connect to security.ubuntu.com port 443: Connection refused
      

???

    
    
        $ curl -v https://security.ubuntu.com/ubuntu/
        * Hostname was NOT found in DNS cache
        *   Trying 91.189.88.149...
        * connect to 91.189.88.149 port 443 failed: Connection refused
        *   Trying 91.189.88.152...
        * connect to 91.189.88.152 port 443 failed: Connection refused
        *   Trying 91.189.88.161...
        * connect to 91.189.88.161 port 443 failed: Connection refused
        *   Trying 91.189.88.162...
        * connect to 91.189.88.162 port 443 failed: Connection refused
        *   Trying 91.189.91.23...
        * connect to 91.189.91.23 port 443 failed: Connection refused
        *   Trying 91.189.91.26...
        * connect to 91.189.91.26 port 443 failed: Connection refused
        * Failed to connect to security.ubuntu.com port 443: Connection refused
        * Closing connection 0
        curl: (7) Failed to connect to security.ubuntu.com port 443: Connection refused
    

So even _security.ubuntu.com_ is unavailable over HTTPS? Am I missing
something?

~~~
deathanatos
As was discussed recently on HN (and linked to elsewhere in the comments for
this article), packages are signed, and APT checks those signatures; however,
APT does download both the packages and the signatures in the clear. So,
normally, the signatures get checked, which ensures that you get the package
you intended. This is fine, mostly. (If you don't care about _privacy_, that
is -- but it does prevent tampering, normally.)

APT does not, however, give _privacy_, which HTTPS/TLS would. (Those in favor
argue that TLS doesn't help here, as you can still see that you're connecting
to Ubuntu, so it's still obvious that you're downloading updates. I personally
disagree w/ this stance: I think there is value in protecting _which_ packages
you're pulling updates for, as what packages you have installed can inform
someone about what you're doing. I think there's further argument that the
sizes of the responses give away which updates you're pulling, but IDK, that
seems harder to piece together, and TLS at least raises the bar for that sort
of thing.)

The bug discussed in the article circumvents the signature checking: it lies
to the parent process about the validity of the signature via what is
essentially a sort of XSS/injection attack.

~~~
explainplease
I think you misunderstood my comment. I'm aware of the apt security model and
the nature of this bug.

My point is that these Ubuntu repo servers are not available over HTTPS, which
seems like a problem. In the context of this bug, a serious one--who's to say
that there aren't more bugs like this lurking? There's no reason that these
servers _shouldn't_ be available over HTTPS.

------
ptx
So how do I safely update apt to the patched version on Debian Jessie?

It's apparently fixed in version 1.0.9.8.5: [https://security-
tracker.debian.org/tracker/CVE-2019-3462](https://security-
tracker.debian.org/tracker/CVE-2019-3462)

...but the suggested _apt -o Acquire::http::AllowRedirect=false update_ fails
because security.debian.org wants to do a redirect.

Manually downloading the packages listed in the announcement doesn't work
either, since that's the Stretch version.

I can get the source package here:
[https://packages.debian.org/jessie/apt](https://packages.debian.org/jessie/apt)

...but the key is not in /usr/share/keyrings/debian-archive-keyring.gpg. (And
I'm not entirely sure how to build a source package.)

I tried adding a different source, as suggested in the announcement:

    
    
      deb http://cdn-fastly.deb.debian.org/debian-security stable/updates main
    

...but it seems it can't find the right versions of all the dependencies:

    
    
      # apt -o Acquire::http::AllowRedirect=false install apt apt-utils libapt-pkg5.0 libapt-inst2.0 liblz4-1
      ...
      The following packages have unmet dependencies:
       libapt-pkg5.0 : Depends: liblz4-1 (>= 0.0~r127) but 0.0~r122-2 is to be installed
      E: Unable to correct problems, you have held broken packages.

~~~
jwilk
No idea why security.debian.org would make a redirect. Can you paste the
whole error message?

Regarding the other error, jessie ≠ stable. You want "jessie/updates", not
"stable/updates".

~~~
ptx
Ah, silly blind copy-pasting on my part. With "jessie/updates" it works.
Thanks!

The message I was referring to, which looks like it's indicating that
security.debian.org is trying to redirect:

    
    
      # apt -o Acquire::http::AllowRedirect=false update
      ...
      Err http://security.debian.org jessie/updates/main Sources
        302  Found [IP: 217.196.149.233 80]
      Err http://security.debian.org jessie/updates/main amd64 Packages
        302  Found [IP: 217.196.149.233 80]
      Err http://security.debian.org jessie/updates/non-free amd64 Packages
        302  Found [IP: 217.196.149.233 80]
      Err http://security.debian.org jessie/updates/main i386 Packages
        302  Found [IP: 217.196.149.233 80]
      Err http://security.debian.org jessie/updates/non-free i386 Packages
        302  Found [IP: 217.196.149.233 80]
      Fetched 422 kB in 2s (169 kB/s)
      W: Failed to fetch http://security.debian.org/dists/jessie/updates/main/source/Sources  302  Found [IP: 217.196.149.233 80]
    
      W: Failed to fetch http://security.debian.org/dists/jessie/updates/main/binary-amd64/Packages  302  Found [IP: 217.196.149.233 80]
    
      W: Failed to fetch http://security.debian.org/dists/jessie/updates/non-free/binary-amd64/Packages  302  Found [IP: 217.196.149.233 80]
    
      W: Failed to fetch http://security.debian.org/dists/jessie/updates/main/binary-i386/Packages  302  Found [IP: 217.196.149.233 80]
    
      W: Failed to fetch http://security.debian.org/dists/jessie/updates/non-free/binary-i386/Packages  302  Found [IP: 217.196.149.233 80]
    
      E: Some index files failed to download. They have been ignored, or old ones used instead.

------
nisa
FWIW you can have https today: just install apt-transport-https and use this
sources.list:

    
    
        deb https://deb.debian.org/debian stable main contrib non-free
        deb https://deb.debian.org/debian-security stable/updates main contrib non-free
        deb https://deb.debian.org/debian stretch-backports main contrib non-free

~~~
wtetzner
Do you have to install apt-transport-https over http?

~~~
richardwhiuk
No, but it might be a pain not to:

wget [https://cdn-aws.deb.debian.org/debian-
security/pool/updates/...](https://cdn-aws.deb.debian.org/debian-
security/pool/updates/main/a/apt/apt-transport-https_1.4.9_amd64.deb)

dpkg -i apt-transport-https_1.4.9_amd64.deb

apt-transport-https is going away in apt 1.5 anyway

------
jamieweb
All my servers do an update and dist-upgrade every 24 hours, and it emails me
the log. I saw this post just a few minutes after checking the log for today.

I imagine that this is a higher risk for virtualized servers in a public
cloud. I use Linode, so somebody else could have set up a Linode to MITM
everybody and serve the exploit. If it were a private home or corporate
network, somebody would either have to be on your network, or on a piece of
major infrastructure between you and the mirrors.

Is there a way to tell from the apt log whether I am affected? It looks like
you can see it trying to install an extra dependency package. Anyway, the logs
are not immutable or verifiable, so if somebody got root they could
theoretically kill apt, write a fake log in its place and then email that to
me...

I took full images of all my servers a few days ago, so at least I have those
should I need them.

~~~
perennate
> I imagine that this is a higher risk for virtualized servers in a public
> cloud.

I think it might be the other way around (at least in terms of virtualized
servers versus physical servers, both on public cloud) -- it is easier to
implement IP address and other filtering measures with virtualized servers
than inside physical network switches. Linode and other virtual machine
providers almost universally implement this filtering, but many dedicated
server providers are not as robust.

~~~
jamieweb
That's a good point actually - although when using dedicated hardware I
usually have in my mind that everything is raw rather than abstracted by a
hypervisor, so this sort of thing should be more expected.

With a public cloud you don't really know how it's set up on their end, as
there are countless different ways to do it.

------
aboutruby
> a malicious mirror could still exploit a bug like this, even with https. But
> I suspect that a network adversary serving an exploit is far more likely
> than deb.debian.org serving one or their TLS certificate getting compromised

Exactly, this is far more easily exploitable because apt uses HTTP by default
instead of HTTPS.

------
Kliment
If this were one of those bugs that do the full PR bullshit run and use a
catchy name and a landing page, I'd propose it be called "Inapt"

------
aasasd
I'm just wondering if the author should've given people more time to pull the
updated apt, before publishing the issue. This is only a few days old, right?

I'm not familiar with established procedures in such cases, and am curious
about the rationale for omitting a time window for the update.

~~~
pfg
Public disclosure once patches are available is a fairly common policy.
Google's Project Zero operates like that as well.

------
systematical
So will someone more in the know than myself tell me if using apt-transport-
https is a reasonable solution to this problem, or, at least mitigates the
problem?

As someone who uses Linux as their personal OS and administers some at work,
but doesn't think in bash, I'd really like an answer :-)

~~~
moosingin3space
It makes it so that your mirror would have to be exploiting apt, instead of
effectively anyone. As a result, using TLS for downloads would mitigate this
(but not fix it).

I've read the arguments against HTTPS for apt many times. They're wrong.

------
whydoyoucare
[https://www.openbsd.org/papers/bsdcan-
signify.html](https://www.openbsd.org/papers/bsdcan-signify.html) is a good
read on this topic.

------
mehrdadn
[https://news.ycombinator.com/item?id=16224684](https://news.ycombinator.com/item?id=16224684)

------
ChrisCinelli
Incidentally binaries from support.apple.com are also served on http.

~~~
aboutruby
The downloads are in an HTTPS page leading to HTTPS download links, and HTTP
redirects to HTTPS:
[https://support.apple.com/downloads/quicktime](https://support.apple.com/downloads/quicktime)
for instance.

~~~
ptx
> and HTTP redirects to HTTPS

...or to anywhere a MITM attacker wants to redirect you.

~~~
ptx
I guess some people disagree?

But if any response in the redirect chain is served over HTTP, it can be
replaced with a different response containing any "Location" header of the
attacker's choosing instead of the original one. So it doesn't matter if the
eventual intended URL is an HTTPS URL, because it will never be reached. The
redirect will go to the attacker's site instead. (And in the case of a
download, the user will never notice because the file URL is usually not
prominently displayed.)

So a HTTP response anywhere in a redirect chain is equivalent to serving it
over HTTP. Perhaps this was exactly the point of the parent post, but I
thought it would be useful to make it explicit.
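That "weakest hop" rule is mechanical enough to check. A sketch (the hop list would come from each response's Location header in turn; the helper is hypothetical):

```python
from urllib.parse import urlparse

def chain_is_https(hops):
    """True only if every hop in a redirect chain is HTTPS.

    One http:// hop lets a MITM rewrite the Location header and end
    the chain wherever it likes, regardless of where the chain was
    originally meant to terminate.
    """
    return all(urlparse(u).scheme == "https" for u in hops)
```

For example, `["https://support.apple.com/x", "http://cdn.example/y"]` fails the check even though the first hop is secure.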

------
feikname
Seems like the discovery of this vuln was a direct result of yesterday's
discussion about HTTPS on apt here on HN
([https://news.ycombinator.com/item?id=18958679](https://news.ycombinator.com/item?id=18958679)).

~~~
dreta
No. That would mean less than a day's heads-up from the researcher.

~~~
loeg
It would be a really impressive turn around on the fix from Debian, though
;-).

