Ubuntu ISOs aren't served securely and are trivially easy to MitM attack.
This vulnerability is still being exploited: https://www.bleepingcomputer.com/news/security/turkish-isp-s...
Just downloaded Ubuntu server for some local instances here at home and realized that I hit this path without even knowing.
Looks like ISO signatures are served over HTTP as well.
The only secure way to install is to somehow get their key, already have gpg installed and then verify that way.
This appears to be the case; see the "How to verify Ubuntu download" tutorial, which provides detailed steps: https://tutorials.ubuntu.com/tutorial/tutorial-how-to-verify...
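For reference, the flow in that tutorial boils down to roughly the following. This is only a sketch: the release path is illustrative and the signing-key ID is a placeholder you would need to obtain and cross-check out of band, since fetching it over plain HTTP defeats the purpose.

wget http://releases.ubuntu.com/18.04/SHA256SUMS
wget http://releases.ubuntu.com/18.04/SHA256SUMS.gpg
gpg --keyserver hkps://keyserver.ubuntu.com --recv-keys <SIGNING-KEY-ID>   # placeholder key ID
gpg --verify SHA256SUMS.gpg SHA256SUMS          # check the detached signature on the checksum file
sha256sum -c SHA256SUMS 2>/dev/null | grep OK   # then check the downloaded ISO against the checksums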
(If you plan to install an operating system, then I believe some homework is in order -- you cannot expect the OS developers to spoon-feed you basic security practices that are expected to be part of the skill set that you, the person installing the OS or administering the system, already possess.)
If you plan to publish an operating system, you should make every effort to reduce this homework as much as is humanly possible. For every step where a user is expected, but not explicitly required, to perform extra work to ensure their own security, the overwhelming majority of users will not take those extra steps.
Arguing that they "should" gets us nowhere.
Isn't that a little hard if the verification keys (from that link) are served over HTTP?
As another poster noted, the keyserver in that link will work over http, so you're still susceptible to a possible downgrade attack.
Given how easy it is to get an LE cert these days, it makes me question what other security decisions Ubuntu is making that I'm not aware of or lack the expertise to evaluate.
With regards to LE certs, I discussed elsewhere here the problems faced by the apt developers that make the obvious "just install an LE cert!" answer not actually workable for them - they rely heavily on volunteer-run mirrors and local proxies. I don't know what Ubuntu's ISO download "server" really is underneath; it's possible they've got similar problems, where either they pay for all the bandwidth themselves, or they let people assume that an ssl-secured connection to "ubuntu-mirror.usyd.edu.au" or "ubuntu-mirror.evilhacker.ru" is for some reason "safer" than an http connection...
keyserver.ubuntu.com is available over http as well as https, which means it's probably susceptible to a downgrade attack if you're mitming it - which would let you serve a bogus public key and therefore generate apparently valid signatures for the hash for a modified ISO.
That's not good...
In the abstract, this is true. In practice, however, the checksums are always downloaded from the same page as the OS and usually over the same (unencrypted) connection from the same servers.
Having said that, it seems keyserver.ubuntu.com is happy enough to allow connections via http instead of https, so there's a valid avenue to serve up a bogus public key...
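If you're fetching the key by hand, you can at least refuse that downgrade by spelling out hkps, and then compare the fingerprint against one obtained out of band (the key ID here is a placeholder):

gpg --keyserver hkps://keyserver.ubuntu.com --recv-keys <KEY-ID>   # hkps rather than hkp/http, so no silent downgrade
gpg --fingerprint <KEY-ID>                                         # compare against a fingerprint you already trust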
That's a much smaller window of opportunity, since if you're already an Ubuntu user you'll have a pre-existing copy of ubuntu-archive-keyring.gpg, rather than trying to download a possibly MITM-ed public key at the same time as you download the ISO. But I must admit I boggled a little bit when I saw that keyserver.ubuntu.com happily serves their public key over http instead of just https...
That's less of a problem than it sounds.
Sure you can MITM and change the sig on the fly, but without the private key you cannot generate a valid sig for a modified ISO. (And if Ubuntu have had that private key stolen, there are much much deeper problems than the ISOs being served over http...)
On the other hand, I suspect it's probably only a single-digit percentage, at most, of people who download the ISOs who then jump through the GPG hoops to check the signature is valid. And as with all PGP/GPG keys, you've got the bootstrapping problem of how do you know that your copy of ubuntu-archive-keyring.gpg is real to start with... (I've been to key signing parties, but not in the last ~30 years...)
VideoLAN and apt-get now. Here is why VideoLAN doesn't do it: https://www.beauzee.fr/2017/07/04/videolan-and-https/
TL;DR: They can't force HTTPS on 3rd parties, which is why they can't do it. It's not as simple as running LE.
It should require multiple things to go wrong for catastrophic failure. This is a lesson from engineering that hasn't made its way to software development yet (outside of security engineering, anyway).
The process of listing all the security failure points and documenting the redundant mechanisms to protect them is called threat modeling.
For a system that installs OS-level binaries as root, it would absolutely be appropriate to threat model it and hold it to a defense in depth standard. In defense systems, they often require three levels of defense in depth, the last being an air gap network.
Lookup "threat modeling" and you will see how abstract a notion it is (even your comment calls for a "redundant mechanism" that may not be exactly what you are looking for), and how little information is available. End result? Most do it for the "checkbox effect". Don't get me wrong, I am not trying to obliterate what you said, just putting some factual data around it.
Open source projects, unfortunately, rarely have such contributors. Probably because building stuff is more fun than threat modeling (which can be quite tedious to do properly).
As an example: an admin gets an AWS Security Group wrong, thereby exposing database servers / Redis / customer data. Consequence: multimillion-dollar fines and brand/reputation damage.
It's kind of sad how badly things are set up to fail sometimes. :(
FWIW: 16 vulns in NVD for apt, but 202 for OpenSSL.
APT ≥ 1.6 doesn't use libcurl; it uses GnuTLS directly.
Whether apt using OpenSSL would in fact increase network security is a separate and debatable question, but the argument as stated assumes it would not, and is sound.
SSL provides some security guarantees.
Using signed package databases also provide some security guarantees.
Both may overlap in what security they provide.
If one fails, the other can continue to provide a subset of the previously available guarantees.
Priv-sep, correctly handling untrusted files (e.g. 1. check signature, then 2. execute whatever; not the other way round), memory-safe languages, etc. would be more welcome additions.
Apt even has the hard part already implemented by separating the network I/O into other processes. Only problem is that those currently write directly to system directories, but that can be fixed.
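The ordering principle looks something like this in shell terms (mirror URL, paths and keyring are purely illustrative, and apt itself signs the repository metadata rather than individual .debs, but the point is the same): nothing privileged touches the file until a keyring that was already on disk has verified it.

tmp=$(mktemp -d)
curl -fsS -o "$tmp/pkg.deb"     http://mirror.example.org/pool/pkg.deb
curl -fsS -o "$tmp/pkg.deb.sig" http://mirror.example.org/pool/pkg.deb.sig
gpgv --keyring /usr/share/keyrings/archive-keyring.gpg "$tmp/pkg.deb.sig" "$tmp/pkg.deb" \
  && sudo dpkg -i "$tmp/pkg.deb"   # install only if the pre-existing keyring verifies the download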
Could you describe a way to have double the attack surface that would affect the majority of peer servers?
Other options include: not handling http directly in the package manager but using a known-good library (curl?); doing priv-sep; not having the package manager execute code from a file before it has checked its authenticity...
My personal opinion on the topic is that authenticating servers is a good thing to do (https does this; encryption is an added benefit), but time has shown that https is broken: libs are full of holes; the CA model is broken by design. Maybe share updates using ssh?
Sure, there are bugs in libraries, but seeing as https is already widespread you’re not exposing yourself to MORE risk by using https over plain http, and you mitigate attacks like this post. Any random coffee shop, untrusted public WiFi, or attacker with a Pineapple could have used this attack to MITM HTTP apt, whereas the attacker would have to compromise an upstream mirror to pull off the same attack over HTTPS.
And re: the CA model, if you’re THAT worried about compromised or fake certs, then pin the cert for a root server like debian.org, then download PGP-signed cert bundles for mirrors and enforce certificate pinning using those bundles only. Done. Apple and Microsoft use cert pinning for their update systems (IIRC).
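For a sense of what pinning looks like in practice, curl already supports it; the hash below is a placeholder, not debian.org's actual key:

curl --pinnedpubkey 'sha256//PLACEHOLDER_BASE64_HASH_OF_EXPECTED_PUBKEY=' \
     https://deb.debian.org/debian/dists/stable/Release   # fails closed if the server's key doesn't match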
Stop it, just stop it.
We are talking about a specific use case of https: software repositories, which are far higher-value targets than your random website, with another set of challenges. Your package manager can actually do some things as root; once it's owned, your system is Game Over.
Adding yet another lib on top of the (?) most important piece of software on your computer is not a risk to take lightly. There are more elegant solutions (signatures, priv-sep, not trusting anything until authenticated, etc.) that require less risky code to run, and fewer people to come into play.
> Sure, there are bugs in libraries, but seeing as https is already widespread you’re not exposing yourself to MORE risk by using https over plain http
Irrelevant. We're talking about instant game over if it goes to sh.. even if just once. More attack surface = more vulnerabilities.
Similarly, the apt team ignoring a bug like this "because it's protected by https anyway" is an invalid argument.
If an attacker can inject packets that break your SSL lib, but wouldn't have broken your package manager, you added a vuln.
Lest anyone forget:
(It should be noted that it /does/ match on "possible RCE", which buffer overflows are often tagged with.)
I'm not saying none of the results from your search are RCEs, but not all are, and many are fairly speculative.
The problem is that there seem to be many classifications of remote code execution including buffer overflow and "code injection" and you can't choose multiple. :(
There really is no excuse not to use HTTPS in 2019, period.
> Yes, a malicious mirror could still exploit a bug like this, even with https.
> I wouldn’t have been able to exploit the Dockerfile at the top of this post if the default package servers had been using https.
So which is it?
With HTTP, this can be exploited by anyone who can MITM a connection between you and the APT server or has control of your DNS.
If you consider all the cases like wi-fi hotspots, that's (potentially) quite a large set of attackers, and a relatively easy attack to pull off in a lot of cases.
With HTTPS, the attacker has either to compromise the whole APT mirror or has to get a valid HTTPS certificate for an APT mirror. This is likely harder to pull off, especially when you look at the work on improving CA security that the browser vendors have been doing over the last couple of years.
HTTP: Everyone can pwn you.
Not saying the first one is ideal, but the second one is definitely worse.
STOP IT. Thou shalt use HTTPS.
Meanwhile it's worth pointing out that OpenSSL has historically been one of the buggiest pieces of code in existence. Despite this being a game over RCE, it's the first of its kind in many years. If OpenSSL had been in the mix, Apt would have required forced upgrades /far/ more often. https://www.openssl.org/news/vulnerabilities.html
If you don't think OpenSSL is a high enough quality implementation, there are many others to choose from.
Even with a range of mirrors, it would still raise the bar for attackers, to require HTTPS.
Consider, why not double-wrap your stream? Put TLS on top of TLS on top of HTTP?
You can see this pro-HTTPS opinion all over this discussion.
As for your "consider", I personally do double-wrap many streams: I have a VPN for my browser. The VPN is great for hiding my home traffic from being spied on by my ISP. Without the VPN, HTTPS streams would reveal hostnames (SNI) and IP addresses to my ISP.
If it's the exact same implementation then that doesn't really add a second layer. If, however, I am provided the option to run HTTPS over a VPN tunnel, then I would happily do that in a heartbeat. In fact, I frequently do run my web traffic over a proxy, thereby giving it at least two layers of encryption.
Do you have a response to my question? "Consider, why not double-wrap your stream? Put TLS on top of TLS on top of HTTP?"
Sorry I don’t understand what double wrapping has to do with it, or why you’d ever do that.
Double-encrypting something with the same technique is pretty much always a sign of cargo cult crypto. Modern ciphers, like those used by TLS, are strong enough that there’s no reasonable way to break them applied once, and the downside is that applying them twice is making things slower than they need to be for zero added benefit.
On the other hand, TLS and PGP are very different things serving very different purposes, so nesting those makes sense. There is an added benefit from TLS, namely that you ensure that everything is protected in transit - including the HTTP protocol itself (which is currently not protected and which might be subject to manipulation as shown in this post). Plus, it provides some resistance to eavesdropping (and with eSNI + mirrors hosted on shared hosts, that resistance should improve further).
More info: https://blog.packagecloud.io/eng/2014/10/28/howto-gpg-sign-v...
"The parent process will trust the hashes returned in the injected 201 URI Done response, and compare them with the values from the signed package manifest. Since the attacker controls the reported hashes, they can use this vulnerability to convincingly forge any package."
Wtf? This sounds like Apt is just downloading a gpg file and checking if it matches a hash in an HTTP header, and if it does, it just uses whatever is specified, regardless of whether your system already had the right key imported? This makes no sense. Any mirror could return whatever headers it wanted.
This is the real vuln, not header injection. If Apt isn't verifying packages against the keys I had before I started running Apt, there was never any security to begin with. An attacker on a mirror could just provide their own gpg key and Releases file and install arbitrary evil packages.
Can somebody who knows C++ please verify that their fixes actually stop installing packages if the GPG key wasn't already imported into the system? https://github.com/Debian/apt/commit/690bc2923814b3620ace1ff...
Heavy-weight strongly-typed HTTP libraries can force you to always construct headers in a way that handles quoting for you, but people seem to love "light" solutions.
Strong typing isn't relevant here, this is applicable in any language. But the lib needs to know when you're putting text in a header name and when in the value.
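A toy illustration of what that buys you (this is not apt's actual code): hand-concatenating a header from an untrusted value lets a CR/LF inside the value smuggle in a whole extra header line, which a library with a proper header type would have rejected.

value=$'innocent.deb\r\nLast-Modified: whatever-the-attacker-wants'   # untrusted input containing CRLF
printf 'HTTP/1.1 200 OK\r\nContent-Disposition: attachment; filename=%s\r\n\r\n' "$value"
# the output now contains two header lines where the author intended one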
From the article. Good stuff
That's why the distribution is baked for months before being called stable.
deb https://deb.debian.org/debian stable main contrib non-free
deb https://deb.debian.org/debian-security stable/updates main contrib non-free
deb https://deb.debian.org/debian stretch-backports main contrib non-free
dpkg -i apt-transport-https_1.4.9_amd64.deb
apt-transport-https is going away in apt 1.5 anyway
I imagine that this is a higher risk for virtualized servers in a public cloud. I use Linode, so somebody else could have set up a Linode to MITM everybody and serve the exploit. If it were a private home or corporate network, somebody would either have to be on your network, or on a piece of major infrastructure between you and the mirrors.
Is there a way to tell from the apt log whether I am affected? It looks like you can see it trying to install an extra dependency package. Anyway, the logs are not immutable or verifiable, so if somebody got root they could theoretically kill apt, write a fake log in its place and then email that to me...
I took full images of all my servers a few days ago, so at least I have those should I need them.
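For what it's worth, one rough way to eyeball recent apt activity (with the caveat above that these logs are not tamper-proof; paths are the Debian/Ubuntu defaults):

grep -A3 'Start-Date' /var/log/apt/history.log         # recent transactions and what they installed
zgrep -A3 'Start-Date' /var/log/apt/history.log.*.gz   # same for rotated logs, if any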
I think it might be the other way around (at least in terms of virtualized servers versus physical servers, both on public cloud) -- it is easier to implement IP address and other filtering measures with virtualized servers than inside physical network switches. Linode and other virtual machine providers almost universally implement this filtering, but many dedicated server providers are not as robust.
With a public cloud you don't really know how it's set up on their end, as there are countless different ways to do it.
Exactly, this is far more easily exploitable because apt is using HTTP by default instead of HTTPS
$ sudo sed -i 's/http:/https:/g' /etc/apt/sources.list
$ sudo apt-get update
Err https://us.archive.ubuntu.com trusty/main Sources
Failed to connect to us.archive.ubuntu.com port 443: Connection refused
Err https://security.ubuntu.com trusty-security/main amd64 Packages
Failed to connect to security.ubuntu.com port 443: Connection refused
$ curl -v https://security.ubuntu.com/ubuntu/
* Hostname was NOT found in DNS cache
* Trying 184.108.40.206...
* connect to 220.127.116.11 port 443 failed: Connection refused
* Trying 18.104.22.168...
* connect to 22.214.171.124 port 443 failed: Connection refused
* Trying 126.96.36.199...
* connect to 188.8.131.52 port 443 failed: Connection refused
* Trying 184.108.40.206...
* connect to 220.127.116.11 port 443 failed: Connection refused
* Trying 18.104.22.168...
* connect to 22.214.171.124 port 443 failed: Connection refused
* Trying 126.96.36.199...
* connect to 188.8.131.52 port 443 failed: Connection refused
* Failed to connect to security.ubuntu.com port 443: Connection refused
* Closing connection 0
curl: (7) Failed to connect to security.ubuntu.com port 443: Connection refused
APT does not, however, give privacy, which HTTPS/TLS would. (Those in favor argue that TLS doesn't help here, as you can still see that you're connecting to Ubuntu, so it's still obvious that you're downloading updates. I personally disagree w/ this stance: I think there is value in protecting which packages you're pulling updates for, as what packages you have installed can inform someone about what you're doing. I think there's further argument that the sizes of the responses give away which updates you're pulling, but IDK, that seems harder to piece together, and TLS at least raises the bar for that sort of thing.)
The bug discussed in the article circumvents the signature checking by lying to the parent process about the validity of the signature, via what is essentially a sort of XSS/injection attack.
My point is that these Ubuntu repo servers are not available over HTTPS, which seems like a problem. In the context of this bug, a serious one--who's to say that there aren't more bugs like this lurking? There's no reason that these servers shouldn't be available over HTTPS.
It's apparently fixed in version 1.0.9.8.5: https://security-tracker.debian.org/tracker/CVE-2019-3462
...but the suggested apt -o Acquire::http::AllowRedirect=false update fails because security.debian.org wants to do a redirect.
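For reference, the suggestion from the announcement is along the lines of:

apt -o Acquire::http::AllowRedirect=false update
apt -o Acquire::http::AllowRedirect=false upgrade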
Manually downloading the packages listed in the announcement doesn't work either, since that's the Stretch version.
I can get the source package here: https://packages.debian.org/jessie/apt
...but the key is not in /usr/share/keyrings/debian-archive-keyring.gpg. (And I'm not entirely sure how to build a source package.)
I tried adding a different source, as suggested in the announcement:
deb http://cdn-fastly.deb.debian.org/debian-security stable/updates main
# apt -o Acquire::http::AllowRedirect=false install apt apt-utils libapt-pkg5.0 libapt-inst2.0 liblz4-1
The following packages have unmet dependencies:
libapt-pkg5.0 : Depends: liblz4-1 (>= 0.0~r127) but 0.0~r122-2 is to be installed
E: Unable to correct problems, you have held broken packages.
Regarding the other error, jessie ≠ stable. You want "jessie/updates", not "stable/updates".
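i.e. the extra source line from above should read:

deb http://cdn-fastly.deb.debian.org/debian-security jessie/updates main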
The message I was referring to, which looks like it's indicating that security.debian.org is trying to redirect:
# apt -o Acquire::http::AllowRedirect=false update
Err http://security.debian.org jessie/updates/main Sources
302 Found [IP: 220.127.116.11 80]
Err http://security.debian.org jessie/updates/main amd64 Packages
302 Found [IP: 18.104.22.168 80]
Err http://security.debian.org jessie/updates/non-free amd64 Packages
302 Found [IP: 22.214.171.124 80]
Err http://security.debian.org jessie/updates/main i386 Packages
302 Found [IP: 126.96.36.199 80]
Err http://security.debian.org jessie/updates/non-free i386 Packages
302 Found [IP: 188.8.131.52 80]
Fetched 422 kB in 2s (169 kB/s)
W: Failed to fetch http://security.debian.org/dists/jessie/updates/main/source/Sources 302 Found [IP: 184.108.40.206 80]
W: Failed to fetch http://security.debian.org/dists/jessie/updates/main/binary-amd64/Packages 302 Found [IP: 220.127.116.11 80]
W: Failed to fetch http://security.debian.org/dists/jessie/updates/non-free/binary-amd64/Packages 302 Found [IP: 18.104.22.168 80]
W: Failed to fetch http://security.debian.org/dists/jessie/updates/main/binary-i386/Packages 302 Found [IP: 22.214.171.124 80]
W: Failed to fetch http://security.debian.org/dists/jessie/updates/non-free/binary-i386/Packages 302 Found [IP: 126.96.36.199 80]
E: Some index files failed to download. They have been ignored, or old ones used instead.
As someone who uses Linux as their personal O.S. and administers some at work, but doesn't think in bash, I'd really like an answer :-)
I've read the arguments against HTTPS for apt many times. They're wrong.
I'm not familiar with established procedures in such cases, and am curious about the rationale for omitting a time window for the update.
...or to anywhere a MITM attacker wants to redirect you.
But if any response in the redirect chain is served over HTTP, it can be replaced with a different response containing any "Location" header of the attacker's choosing instead of the original one. So it doesn't matter if the eventual intended URL is an HTTPS URL, because it will never be reached. The redirect will go to the attacker's site instead. (And in the case of a download, the user will never notice because the file URL is usually not prominently displayed.)
So an HTTP response anywhere in a redirect chain is equivalent to serving the whole thing over HTTP. Perhaps this was exactly the point of the parent post, but I thought it would be useful to make it explicit.
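One quick way to see every hop in a redirect chain, and spot any plain-http one, is something along these lines (URL illustrative):

curl -sIL http://example.org/path/to/file.iso | grep -iE '^(HTTP/|Location:)'   # status line and Location header for each hop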