Remote Code Execution in Alpine Linux (justi.cz)
286 points by justicz 6 days ago | 106 comments





> When apk is pulling packages, it extracts them into / before checking that the hash matches what the signed manifest says it should be.

Pros: downloading and installing packages is faster.

Cons: vulnerabilities like this.

Reading the commit message in https://github.com/alpinelinux/apk-tools/commit/6484ed9849f0... I'm not convinced they fixed it.


Just to preempt some replies to this I can already picture: if you find more bugs, please report them to the maintainers I mentioned in the post and not via an HN comment!

Of course :-)

That has to be one of the more moronic ideas I’ve seen in a long time.

> This is especially bad because packages aren’t served over TLS when using the default repositories. This bug has been fixed and the Alpine base images have been updated [...]

Do I understand this right: They fixed the apk vulnerability but the packages are still downloaded without TLS in default repositories?


The bug isn't that an attacker can control apk package contents, but that you can trick apk into running a hook even if a package has a fingerprint mismatch. You want the simplest immediate fix for this vulnerability before you want to kick off the long discussion about moving all package management to TLS.

^^^ THIS

Before launching a campaign to ban guns, focus on triage for the current gaping bullet wound.


I'm usually the first to step up to the plate and insist on HTTPS, but it's not actually necessary here. The packages are PGP signed, and the public keys are established securely at install. This is actually more secure than SSL imo because it uses web-of-trust for key exchange rather than easily compromised certificate authorities.

In this case `apk` does a ton of processing on the package before checking that the hash matches the signed manifest, and the maintainers don't want to/can't change that behavior at this time. As a result, https would be a real security improvement here, at least for now.

I don't understand your argument - why would the SSL certificate need to be publicly signed when it's alright for self-signed PGP keys to be added as part of the initial install?

You may ask "well why not just use PGP if it's not going to be publicly signed" and to that I would say "so the attacker can't even establish an HTTP session, further reducing the attack surface".


Because I'm trusting individual package maintainers directly, not a suite of shady CAs. Have you ever looked at the list of trusted CAs? They're large, opaque, and shady businesses that I want no dealings with. CAs don't establish trust, they just establish privacy. I could go make myshadysite.com and get a certificate for it and serve malware. I trust Alpine Linux, not Alpine Linux's certificate authority.

And for the record, the initial iso download is secured with SSL. I can also establish trust through my other Alpine installs, with friends who have also installed Alpine, via magnet links from friends (which are inherently sha'd), by emailing several package maintainers and cross verifying their signatures, consulting keyservers, etc. However you do it, once you establish the initial trust PGP is a superior option to SSL.


> Because I'm trusting individual package maintainers directly, not a suite of shady CAs. Have you ever looked at the list of trusted CAs?

Again, CAs are only needed for PUBLICLY signed certs but we are talking about a PRIVATELY managed trust and repo. The repo cert can be self-signed by the Alpine group and added to the root store during install the same as a PGP key can be added during install. No 3rd party "trust" involved.


I addressed this comment here: https://news.ycombinator.com/item?id=17983271

> I trust Alpine Linux, not Alpine Linux's certificate authority.

Did you check and sign Alpine's maintainers key yourself or do you rely on the Web of Trust to create a path between you two? In the latter case every person in between of you two is acting like a certification authority.


Yes, but all of those authorities are just people. Not hundreds of businesses with ulterior motives. Plus, compromising one WoT PGP key doesn't break the whole system like one compromised CA will.

I agree that SSL/TLS doesn't add much security (though it does provide privacy regarding the packages being pulled), but don't conflate it with the CA system; they could just as easily pin the SSL/TLS cert at install.
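
A rough sketch of what install-time pinning could look like on the client side, assuming the repo cert's hash was shipped with the install image like the PGP keys are (all names here are illustrative):

    import hashlib
    import socket
    import ssl

    # Shipped at install time, alongside the PGP keys (placeholder value).
    PINNED_CERT_SHA256 = "replace-with-pinned-hex-digest"

    def connect_pinned(host, port=443):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False      # trust comes from the pin, not a CA
        ctx.verify_mode = ssl.CERT_NONE
        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        der = sock.getpeercert(binary_form=True)  # DER-encoded leaf cert
        if hashlib.sha256(der).hexdigest() != PINNED_CERT_SHA256:
            sock.close()
            raise ssl.SSLError("server cert does not match pinned hash")
        return sock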

Fair point, but this would still be less secure. Packages are generally signed by individual maintainers and as maintainers come and go, the trusted keyring evolves. It also puts the power of signing packages directly into the hands of maintainers, rather than granting access to a single all-powerful private key.

I don't think anyone is arguing that you should replace PGP signing with HTTPS.

I don't think that anyone is arguing that, either. I'm saying adding HTTPS isn't an improvement.

And yet the existence of this exploit, which as I understand it would have been prevented with HTTPS, undercuts your point. There's a reason security engineers advocate for defense in depth; even if one system fails you have a higher likelihood of preventing any issue, or at least of mitigating the scope/severity, because other systems are still there. Self-signed HTTPS certs seem about as safe and secure as self-signed PGP packages, so how could having both in any way reduce the overall security?

"The existence of an exploit" will undermine literally any system and you can't meaningfully use that argument.

"The existence of an exploit" is an argument in favor of defense-in-depth techniques, as an exploit in one layer does not result in full-system compromise.

I would again add that preventing a connection from forming at all (heading off protocol, buffer, filesystem, and timing attacks, and keeping a malicious payload from ever reaching disk) is itself a great improvement. It also carries the guarantee that once the transport is finished the contents are ready to be trusted, rather than the transport -> validate -> trust model used with PGP signatures.

For maximum security and user experience I'd say both though - validate that you are connecting to who you think you are, and also validate the PGP signature. Given how often packages are installed and how high a risk remote code poses to the system, I think it's a bit foolish to say we can only do one, and that when we do, it's perfect.


This is less easy if you have mirrors.

The web of trust doesn't come into the system you've described at all. Instead, everything is pinned to a particular key, and trust in that key is absolute. But you can do this perfectly well in TLS, indeed it's been discussed here on Hacker News more than once.

Well, if the signed package had been transferred over an SSL connection then this exploit would not have happened at all, so it's arguably more secure as it would have prevented a security bug in form of an RCE...

You can also always run your own CA and trust it for the purpose of the APK installer, or pin the CA, or some other method; it would only be a problem if a CA got compromised (which I doubt is "easy" as you say, esp. if you restrict which CAs are allowed) AND APK had this bug.
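
The private-CA variant is even simpler on the client side; e.g. with Python's requests (a sketch; both the URL path and the CA file path below are made up):

    import requests

    # Trust only the distro's own CA, not the system's public CA bundle.
    resp = requests.get("https://dl-cdn.alpinelinux.org/some/package.apk",
                        verify="/etc/apk/repo-ca.pem")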


The presence of a software exploit will undermine literally any secure system, including SSL, so this isn't a meaningful argument.

You are aware of such things as layered security, yes?

SSL isn't unbeatable security, it has holes, but it can help prevent a number of issues, such as the one in the OP to become critical at all.

Similarly, PGP signing helps against a certain class of attacks.

Both together cover a wider attack scenario than either alone.

A software exploit could undermine literally any secure system but in this case the addition of SSL would have prevented this exploit from being dangerous to a huge number of people.


I am aware of that, yes. Layered security and privacy of which packages you've downloaded are the two most compelling arguments in favor of SSL that I've seen in this thread.

But I think it's reasonable to talk about security systems by giving the developers the benefit of the doubt that their code doesn't have exploitable bugs. My main point is that, assuming it's implemented without bugs, PGP is more secure than SSL, and that SSL alone would be worse than PGP alone. Combining them would be an improvement, but I'm rejecting the idea that the current design is insecure.

Having the package maintainers personally tell you their package's checksums would have also prevented this problem, but it's clearly not practical. Sometimes SSL isn't practical. If you can design a system which is secure without it, it is an improvement, but not a requirement.


>assuming it's implemented without bugs,

The problem with this assumption is that available evidence points towards software inevitably having, developing, or gaining bugs in the implementation (through existing code, through new code, or through new threat vectors respectively).

The current design isn't insecure but it could be more secure at little cost. In current times, SSL is fairly practical and easy to use, a lot of popular webservers either have or are developing ACME certificate mechanisms (including nginx and apache to my knowledge).

Due to the low cost, designing SSL into the system should be a requirement since it's an improvement of low cost with great effect.


Sure, all software tends to have bugs and using multiple layers of security is a defense against this. But the assumption of bugs is not a logical foundation for a discussion of the design. Sure, if it's practical Alpine should deploy SSL. I'm just dispelling the idea that the PGP-based approach is insecure.

I don't like certificate authorities either but "easily compromised", especially in this context, is just disingenuous.

HTTPS would be a meaningful security improvement to this system.


It's not. Certificate authorities are regularly compromised by invasive workplaces, sketchy governments, and crappy CAs. PGP keys are much more secure.

HTTPS would not be a meaningful security improvement, not accounting for justicz's comment. At best, it would be the same amount of security, but more annoying to maintain.


> Certificate authorities are regularly compromised by invasive workplaces, sketchy governments, and crappy CAs.

They can't do that silently anymore, though, not without getting caught. Certificate Transparency addresses that.
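
For instance, anyone can audit what the CT logs have recorded for a domain via the crt.sh aggregator (a sketch using its public JSON query form):

    import requests

    # Every certificate a public CA logs for a domain is visible to anyone.
    rows = requests.get("https://crt.sh/",
                        params={"q": "alpinelinux.org", "output": "json"}).json()
    for row in rows[:5]:
        print(row["issuer_name"], row["not_before"])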


> It's not. Certificate authorities are regularly compromised by invasive workplaces, sketchy governments, and crappy CAs. PGP keys are much more secure.

Citation very much needed for this claim that (publicly trusted) Certificate Authorities are "regularly compromised" while some random guy's PGP keys on his laptop are "much more secure".

You claim to be worried about "sketchy governments" so here's a nice simple exercise. List the maintainers of the top say, 100 Alpine packages and for each the countries of which they are citizens and their country of residence. Someone worried about "sketchy governments" would definitely know all this of course, since it's _far_ easier for a "sketchy government" to lean on one individual.

Next I'd like to know what Alpine is doing to ensure all those PGP keys are kept safe and that, for example, nobody's PGP key for signing Alpine packages is on, say, an unencrypted backup they took back in 2017...

If one of the CAs in the Web PKI does something they shouldn't there's a public forum for discussing how it happened, what's being done about it, how it will be prevented from happening again, and so on. Where can I find the public forum where Alpine maintainers have to confess to anything that goes wrong?


I still have to trust some random guy, SSL or not. Let me put this in simpler terms. With PGP, I have to trust:

- The package maintainer (aka random guy with a laptop, aka ncopa)

- ncopa's government

With SSL, I have to trust:

- The package maintainer

- ncopa's government

- My government

- The governments of anyone along the link

- Certificate authorities

- The governments of every trusted certificate authority

- A workplace's mandatory root certificate which MITMs all browser traffic

- etc

Which of these lists has fewer threats? SSL is generally more secure than the alternative, but it doesn't add security on top of a PGP-signed package repository.


It's not a question of SSL versus PGP. It's a question of PGP + SSL versus PGP alone.

SSL provides:

- Privacy, i.e. making it harder to learn what packages you are installing, or any data about your system that can be gathered from request headers. (Traffic analysis is still possible, but much more difficult and won't disclose request headers.)

- An extra layer of defense if there are flaws in the system doing PGP verification, as in this instance (or perhaps in the HTTP protocol implementation itself). Of course, the most important thing is to fix that system, and IMO apk should continue to be regarded as insecure as long as it continues to unpack files onto the root FS before doing any verification; there are just too many ways to screw that up. Still, in practice, adding SSL would have made it much more difficult for an attacker to get to a position where they could exploit the vulnerability.

Both of the benefits above are relatively minor. But in a world where SSL is now the expected default for the vast majority of things served over HTTP, the cost of adding SSL to one more thing should be seen as extremely minor, making it easily justified.


These are both very good points, though I want to point out that the first doesn't lead to RCE, which is a much more severe problem than information disclosure. I think Alpine should add SSL, but my message is that I don't think the problem on display here is a result of a flaw in the current design.

What you're doing here is a technique called the Gish Gallop - in which rather than engage with debate you just try to introduce more and more "reasons" you're right, most of them entirely spurious, hoping to somehow win on the numbers.

That is not an honest way to have a conversation. Knock it off.


I'm doing no such thing. I have issued rebuttals and new arguments, not just new arguments and a tactical ignoring of the counterarguments. If you can't address my points then don't. Don't accuse me of arguing in bad faith. You can knock that off, thank you very much.

Thing you didn't provide:

1. Any citation whatsoever for your claim that "Certificate authorities are regularly compromised"

2. Any citation whatsoever for your claim that "PGP keys are much more secure"

3. The list of those maintainers and their countries you're actually trusting for some reason with your current stance even though you're supposedly worried about "sketchy countries". Yes, a list, I know, you don't have one. Which is illustrative that in fact this "sketchy countries" concern was pure invention and you don't actually care.

4. The measures Alpine puts in place to protect these keys. Again in reality there aren't any, and in reality you don't care, because your concerns are bullshit. In the technical sense.

5. The public forum where I, as a third party, can see that this is all above board and not, as is the reality, just nobody cares and it's all assumed to be fine. For reference for the Web PKI this forum is operated by Mozilla, mozilla.dev.security.policy

Things you did provide:

1. A big list of non-threats like on-path governments somehow tampering with HTTPS.

That's a Gish Gallop. You don't engage with the existing debate, you just keep spewing more "new arguments" of no value, in the hope people will accept you're right.


I can't provide what I'm not asked for.

1, 2: I didn't cite explanations for these because I explained them directly.

3: https://git.alpinelinux.org/cgit/aports/tree/main/alpine-key...

4: I don't know the particular steps ncopa et al take to safeguard their private keys, but that doesn't mean that my concerns are technical bullshit.

5: You're looking for alpine-devel, a public mailing list.

I'm not going to entertain further discussion with you if you aren't going to be civil. Ask questions and I will answer them. Accuse me of bad faith and call my concerns bullshit and I won't.


I didn't ask you for "explanation" I asked you for citations supporting such extraordinary (indeed I'd go so far as to say ridiculous) claims. You still haven't done so and I think it's entirely reasonable to conclude that this is because you cannot.

Not technical bullshit, bullshit in the technical sense.

"Bullshit" is a technical term that's distinct from a lie. Lies are told on the understanding that they're a falsehood, with the intent to deceive, but bullshit isn't interested in the veracity of the claim at all, it serves only a rhetorical purpose.


Comodo has had their signing keys compromised multiple times. I can probably dig up the old ZeroForce (ZF0) chat logs that show them poking around their forum servers that had copies of the signing certs. I am not even kidding. They had their signing certs sitting on their phpBB forum. That all started from one of their employees taunting some kids on irc.

Symantec has signed and provided signing certs for multiple government agencies and proxy vendors, allowing them to sign for anything. That is part of the reason Google is distrusting them.

There are many more, but a majority of certs in use today are from one of those two companies. They also have numerous re-sellers with signing certs, so the names won't always be Symantec or Comodo, hence one more need for intermediate certs.

TLS today just means the average Joe won't be able to see the specific bits you are transferring. Sorry Joe.


Whilst pwning Comodo's web server is presumably fun, the signing _keys_ don't live in a Linux box running phpBB or whatever nonsense that was, they're in a dedicated HSM. The certificates on the other hand are public documents, everybody has a copy of those.

Symantec certainly did sell certificates to the US government, and to companies that make breakfast cereal, and indeed to at least one bona fide children's party clown. But those didn't allow them to "sign for anything". I was part of the distrust discussions you're talking about so let's see how close to the facts you were.

Here are the real issues closest to what you've described:

Issue L: Symantec signed the US government's "Federal Bridge" PKI, which is a PKI for the use of the Federal Government, a vast sprawling mess.

Issue P: Symantec issued a subCA to UniCredit (an Italian bank) which was operated incorrectly and without an audit until October 2016

Issue T: The Korean firm "CrossCert" and three other Symantec partner companies were able to issue certificates as Registration Authorities, although Symantec was not effectively overseeing this activity. Non-compliant certificates were issued, mostly for Korean firms, and Symantec says CrossCert did not have any paperwork for these certificates, which Symantec should have known about back when the certs were issued.

Issue V: Symantec's "GeoRoot" programme provided subCAs under technical control to five companies to issue their own trusted certificates within certain constraints. These five were: Aetna, Apple, Google, Intel, and Unicredit (but see P above) and the audit paperwork for these subCAs was always reprehensible garbage, an unnamed root program (probably Microsoft, given that Apple and Google are on the list) was pressuring Symantec to fix this, but they had not done so before Symantec got themselves distrusted.

Issue Y: The "VeriSign Universal Root Certification Authority" CA has a couple of subCAs that are not themselves constrained to prevent issuance in the Web PKI but are also not sufficiently audited or subject to oversight. Symantec swore these subCAs aren't actually used in the Web PKI, but without a technical constraint that's hard to enforce.

Back when Symantec was operating a CA, the two companies together (all of Symantec, including VeriSign and other brands, plus Comodo) had less than 40% of "market share" in the top 10K web sites, and far less in the deeper corners of the web. Most certs are from Let's Encrypt and that's still true today.

Because of how modern TLS works, bad guys would need either to use a large quantum computer for each individual session they want to break (problem: those don't exist) or they'd need not only a publicly trusted certificate, but also a live active attack on the session as it happens, which would of course be detectable.


Comodo most certainly had copies of everything on their forum servers. They had to be revoked. Yes, some time later they started using HSMs. Too little, too late. They lost a lot of trust after those incidents.

The problem with the story "they had to be revoked" is that er, they weren't revoked. Comodo's roots from that era are still in use today. That's because though you've said so twice now, those keys weren't "on their forum servers" and remained in Comodo's hands.

I think you're conflating two incidents, one from 2008 when ZF0 broke into a poorly secured web forum on Comodo's systems and one in 2011 where "ComodoHacker" gained access to Comodo's issuance system because they used a single factor password login system, and was able to create a bunch of certificates for themselves using the account credentials they had.

You might even be further conflating problems at DigiNotar, which "ComodoHacker" also claimed responsibility for, where Iran was using certs DigiNotar had no record of to MITM its citizens. DigiNotar is gone, the company is defunct, that entire hierarchy was distrusted seven years ago.


> Citation very much needed for this claim that (publicly trusted) Certificate Authorities are "regularly compromised" while some random guy's PGP keys on his laptop are "much more secure".

Actually X.509 keys evolved under enterprise pressure and some devices have nice additional security measures. For example it's possible to attest that a given X.509 key was generated on a Yubikey (so it's only in hardware, no copies exist) [0]. Microsoft EV code signing certs require hardware keys. OpenPGP doesn't have this.

[0]: https://developers.yubico.com/yubico-piv-tool/Attestation.ht...


Almost every modern distro verifies package files by PGP signatures (either directly, or a signature that covers a file that contains a list of hashes, or a signature that covers a tarball which extracts a file containing a list of hashes, whatever you fancy).

TLS for downloading won't provide authentication anyway, seeing as anyone can run mirrors and serve whatever malicious content they like.
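
The hash-list pattern described above is easy to sketch: one detached PGP signature covers a manifest, and the manifest covers everything else (a toy manifest format of `<sha256> <filename>` lines, shelling out to the real gpg binary):

    import hashlib
    import subprocess

    def verify_repo(manifest="MANIFEST", sig="MANIFEST.sig"):
        # One signature covers the manifest; the manifest covers the rest.
        subprocess.run(["gpg", "--verify", sig, manifest], check=True)
        with open(manifest) as f:
            for line in f:
                digest, filename = line.split()
                with open(filename, "rb") as pkg:
                    actual = hashlib.sha256(pkg.read()).hexdigest()
                if actual != digest:
                    raise ValueError(f"{filename}: hash mismatch")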


A friend once tried to update Arch Linux on a wifi behind a captive portal. Pacman broke because instead of a package list it downloaded the captive portal login page and failed to parse it, leading to a mystery hunt for my friend troubleshooting the issue. With an HTTPS mirror it would have just failed to connect or handshake with the mirror.

That sounds like HTTPS would be a usability improvement, but not necessarily a security improvement.

TLS downloading would provide an improvement in privacy and prevent every random person in the same wifi network as you, or along the long chain of connections, from maliciously tampering with the contents.

It also prevents others from seeing which package versions you use. Some networks have ridiculous content inspection to filter out 'bad' executables like Wireshark. Basically, TLS is the best way to stop mitm of all sorts.

Hopefully the PGP signatures are downloaded over TLS, at least.

Why would the signatures need to be downloaded via https? You need to get the keys via a trusted mechanism, and the signing keys for a distribution are usually part of the install iso (which you should verify).

Sorry, I meant the PGP keys, not the signatures.

If the keys are referenced by the key fingerprint they don't strictly need to be fetched over HTTPS.

But the page that says "here's our fingerprint: xyz" would be better served over a secure connection (assuming someone is not using the WoT).


True. When using PGP I'm used to just directly importing a public key listed somewhere. Either a key or its fingerprint (if one is provided) needs to be provided over a secure channel. But I guess it doesn't matter when the OS bundles the fingerprint or key and you're able to verify the OS image.
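
A sketch of that flow: fetch the key file over any channel, then refuse it unless its fingerprint matches a pinned value obtained out of band (the show-only import is real GnuPG 2.1+ behavior; the pinned value here is a placeholder):

    import subprocess

    PINNED_FPR = "REPLACE-WITH-FINGERPRINT-OBTAINED-OUT-OF-BAND"

    def key_matches_pin(keyfile):
        # Modern GnuPG can parse a key file without importing it.
        out = subprocess.run(
            ["gpg", "--with-colons", "--import-options", "show-only",
             "--import", keyfile],
            capture_output=True, text=True, check=True).stdout
        fprs = [line.split(":")[9]
                for line in out.splitlines() if line.startswith("fpr:")]
        return PINNED_FPR in fprs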

If PGP signatures require TLS, that's a serious problem with PGP.

Sorry, I meant the PGP keys, not the signatures.

I see you meant keys, not signatures, so it's worth stating that the keys come with the distro when it is installed.

That's true, in which case I guess the "root" level of trust comes from validating the distro image's hash.

Not using TLS on repos is pretty standard in distro packaging, though in the days of Let's Encrypt it seems sensible to change that.

There may be a comment to be made about using random distros. Alpine was a tiny distro that became super popular despite presenting several problems, despite docker layers making size sort of unimportant, and despite the recommended base image being Debian.

Probably too late for that point though, and the distro's popularity does suggest some of these companies should indeed donate.


I'm not sure where you get "despite the recommended base image being debian". From the best practices docker docs:

> Whenever possible, use current official repositories as the basis for your images. We recommend the Alpine image as it is tightly controlled and small in size (currently under 5 MB), while still being a full Linux distribution. [1]

[1] https://docs.docker.com/develop/develop-images/dockerfile_be...


That's today.

Debian was the recommended base image previously, and Alpine grew in popularity in spite of this. The issues were more around DNS resolution, based on musl not having the same bugs as glibc (I believe; this is from memory).

At the end of the day, I suppose the community has spoken, and docker backed that up by hiring an Alpine developer, and I suppose the rest is history.

Not that I intend to be having a go at Alpine; all distros go through this. I was just making the point that it's a fairly small distro in contributor numbers, and it seems to have become fairly important to the ecosystem.


I see. It feels like apk (or for that matter, other package managers that use plain http) are just time bombs waiting to be exploited again and again.

TLS has nothing to do with it, though, and TLS-or-not isn't relevant to the vulnerability. This exact same issue would be a problem with TLS enabled on Alpine repositories, because any mirror could have just served you a malicious package exploiting this same bug, fingerprint mismatch and all, with a fancy, validated Let's Encrypt certificate.

TLS only provides privacy[1] and some measure of inline MITM/tamper protection in cases like this; it can't fix a broken signing mechanism.

Now, TLS, as a mechanism for establishing tamper-proof connections can mitigate some attacks, but that isn't the same thing. For example, another post here links to a packagecloud.io blog about Apt-without-TLS being insufficient and being vulnerable to many MitM attacks. For example, an attacker without TLS can just strip the signed metadata info and hand it to you, or they can do a freeze attack, so you pick up old packages (that might have security vulnerabilities). But these problems fundamentally have to do with the design of apt's update mechanism, and not the lack of TLS on the repositories. TLS is the transport layer; apt is what has to enforce the security policy (and indeed, if your TLS-powered Apt mirror has gone rogue, it can still pull off many of the same attacks anyway, regardless of TLS.) It can only mitigate the issue, not actually prevent it.

Systems like The Update Framework (https://theupdateframework.github.io/) outline a design for secure update/package management frameworks that not only don't need TLS, but also are designed to prevent many of the problems that plague Apt, as well (freezes, rollbacks, endless data, etc).
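
The anti-freeze and anti-rollback rules TUF specifies boil down to client-side policy checks on signed metadata; a minimal sketch of the idea (toy metadata fields, not TUF's actual format):

    import json
    import time

    def check_metadata(raw, last_seen_version):
        meta = json.loads(raw)  # assume the signature was already verified
        if meta["version"] < last_seen_version:
            raise ValueError("rollback: metadata version went backwards")
        if meta["expires"] < time.time():
            raise ValueError("freeze: metadata is past its expiry time")
        return meta["version"]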

All that said, TLS is great for a number of reasons and worth using and enabling where-ever you can, of course. (It's cheap these days, and PKI in general is impossible to ignore/live without, so you might as well jump on the bandwagon and be proactive!)

[1] Arguable, since the server can just log everything anyway and look at what you download. Most public package repository systems simply are not designed with transport-layer privacy in mind...


Lack of TLS allows this vulnerability to be attacked without compromising any of the mirrors, which makes it a lot more practical to attack.

When you have mirrors outside the control of the project HTTPS doesn't do anything to protect you from malicious actors (even then, project-controlled mirrors could be compromised just as well). It can provide privacy, which is still beneficial; but, again, only of limited benefit since there's never a guarantee the mirrors aren't compromised.

GPG signatures on packages are the correct solution, apk just did it wrong.


https was a practical issue at one time but it isn't anymore. This RCE would be far less severe if all communication was over TLS as the attacker couldn't just sit on the wire between the client and the server. It would still exist with TLS of course, but you can't say that TLS "can [only] provide privacy". The correct solution would be to do both TLS and GPG signatures.

You cannot MITM a .rpm downloaded with yum/dnf or a .deb downloaded with apt, because they verify the signature of the package as a whole before they even begin unpacking it. The entire reason an RCE exists here at all is because of an entirely broken signature verification mechanism. There is no need to protect against a MITM attack if you are verifying signatures correctly, because a modified package will be detected prior to installation.

Of course you can mitm it. You can supply old manifests and packages. You can send specially crafted rpms/debs that trigger bugs. Resource exhaustion via enormous packages is an option for embedded systems.

"the only reason this RCE exists is because of a vulnerability in apk"

The problem with this thinking is that it's always true, yet vulnerabilities keep on being discovered.


Https does nothing to protect you from a compromised or evil mirror. You cannot trust https at all in this context, you need to rely on signatures.

You have ignored what I wrote

It's astounding how many people in this thread seem to think that a compromised mirror or compromised certificate authority is somehow a comparable risk to plain old MITM.

It baffles me as well. A lot of comments either seem to imply that TLS and PGP cannot coexist in a package manager, or that because HTTPS doesn't protect you from Mossad replacing your phone with a brick of uranium so you die of cancer, the security advantage does not exist...

Somehow the term "defense in depth" (aka "layers of security") has lost its meaning.




And yet, it is full of bullshit like this:

"HTTPS does not provide meaningful privacy for obtaining packages. As an eavesdropper can usually see which hosts you are contacting, if you connect to your distribution's mirror network it would be fairly obvious that you are downloading updates."

The worst sort of neckbeard nonsense.

Signatures are of benefit in bandwidth-constrained distribution, and for store and forward. Microsoft uses them for package distribution too (X.509 code signing). But there is no doubt that TLS makes interfering much harder.

With the move to encrypted SNI and http/2 this will hopefully become moot.


In light of all the comments about the "missing" TLS, I want to point out that it would be a bit weird if the install processes used by the FOSS software ecosystem were dependent on a form of centralized authority such as TLS. PGP as currently used is distributed, which is arguably a good thing.

Serving large amounts of content over HTTPS is expensive. When the data is signed anyways you gain nothing by using TLS other than privacy.

Intel CPUs (since Westmere, 2010 [1][2]) encrypt with AES GCM (the recommended cipher suite for TLS) at gigabytes per second per core [3]. You can saturate 10Gbps pipe on a single core.

My laptop can do 1,400 RSA 2048 sign operations per second per core [4], 16,000 P-256 ops and 25,000 X25519 ops per second per core [5], so I wouldn't care about handshake performance either.

With large files, you don't care about the performance of the handshake, but if you do care about handshakes you can reuse http connections and use TLS session resumption for new connections, which avoids the relatively expensive asymmetric crypto (with modern servers, ECDHE and RSA digital signature). libcurl will do it, among others, so build your client using a good https library.

What is expensive about serving large amounts of content over HTTPS?

1 - https://en.wikipedia.org/wiki/AES_instruction_set

2 - https://en.wikipedia.org/wiki/CLMUL_instruction_set

3 - run "openssl speed -evp aes-128-gcm", my laptop does 4.5GB/s

4 - openssl speed rsa

5 - openssl speed ecdh
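
Those openssl numbers are easy to sanity-check from Python as well, e.g. with the pyca/cryptography package's AESGCM (a rough, single-core micro-benchmark):

    import os
    import time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    aead = AESGCM(AESGCM.generate_key(bit_length=128))
    nonce, data = os.urandom(12), os.urandom(64 * 1024 * 1024)  # 64 MiB

    start = time.perf_counter()
    aead.encrypt(nonce, data, None)
    print(f"{len(data) / (time.perf_counter() - start) / 1e9:.2f} GB/s")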


TLS only authenticates the endpoint, not the delivered packages. You need PGP/GPG anyway. TLS only provides a privacy that most organizations neither want nor need. It is overhead in terms of connection management, crypto processing, and additional PKI management for which there is no benefit.

The only benefit is the TLS-by-default mindset. It's not exactly a huge benefit, though on the other hand setting up TLS is not that hard nowadays.

Benefit or con? As demonstrated by a few comments here, the TLS by default mindset seems to encourage people to believe that TLS is all you'll ever need, the solution to all your problems.

the reason why apt (and I assume apk) don't use SSL has nothing to do with the CPU

https://whydoesaptnotusehttps.com/


>HTTPS does not provide meaningful privacy for obtaining packages. As an eavesdropper can usually see which hosts you are contacting, if you connect to your distribution's mirror network it would be fairly obvious that you are downloading updates.

This assumes that package hosts segregate themselves by all possible privacy-related dimensions.

They don't. If you host illegal-in-country-X content on AWS and someone downloads it there, HTTPS would've prevented that from being seen. HTTP does not.

---

HTTPS belongs everywhere. It has flaws, of course, but there's no reasonable alternative at the moment. You can't predict what governments will decide in the future, and HTTP-now means they can record it and apply rules retroactively if they change their mind. This kind of thing gets people killed.


I'm just replying to "Serving large amounts of content over HTTPS is expensive".

Privacy is still a worthwhile goal.

Also, HTTPS is not expensive, contrary to popular belief.

> On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.

(source: https://www.imperialviolet.org/2010/06/25/overclocking-ssl.h...)


> When the data is signed anyways you gain nothing

Wouldn't it have prevented this bug?


The author launched a bug bounty program for open source projects: https://bountygraph.com/.

That's a wonderful endeavor and I would gladly try to "convince" people I know to pledge money in it.

Monetary incentives are important for open source maintainers, as they cannot always afford the time to work on their projects.


I don't know exactly how that bug bounty program works, but it feels like there may be some fundamental issues with bug bounties in the context of open source projects.

For example, what is there to prevent someone from introducing a bug into an open source project in the first place, and then subsequently claiming a bounty for identifying and/or fixing it?

(Maybe this can end up by forcing more careful pull request auditing for security critical projects, but that seems like a big ask..)


@justicz: that's one clever hack, thanks for explaining in detail how it works!

In case somebody is wondering about Alpine Linux based postmarketOS, this is what I wrote on /r/linux regarding this issue:

postmarketOS is directly using Alpine's apk-tools package from their repositories, so a simple "apk upgrade" will install the latest apk version where this is fixed.

Regarding the pmbootstrap tool we are using for development to set up Alpine Linux chroots: just update to the latest git version with git pull as usual, and it will make sure that a fixed apk version is installed before you can use any of the chroots. Pulling from git may seem strange, but we do not have a stable release of pmbootstrap yet, so this is the normal way to update the code anyway. We're getting closer to a stable release of pmbootstrap though.


I can only think about this and be fearful due to the staggering number of Docker tutorials that happily whisk away the details as a sales pitch for streamlining the development process. Similar to the Node ecosystem, there's bound to be lots of junk out there that will never be updated.

Docker containers multiply faster than rabbits in a forkbomb. It behooves Docker to do vulnerability and malware scans of containers, because letting random people have semi-sandboxed write and execute permissions on random other people's systems is a dangerous combination. That will cover some cases, but end-to-end personal accountability of container publishers is a necessary evil, as there is unlikely to be a reasonable technical way to ensure there aren't malicious scripts in a given docker container. Signature-based malware scanning covers only a limited fraction of past threats, since it requires that a sample has been seen by a particular user AND submitted to a malware scanner publisher for analysis.

Scenario: an anonymous docker publisher installs a kiddie porn i2p or tor downloader service hidden in a container for Apache Kafka. A random corporate trainer accidentally picks said container, blogs a howto with it, and suggests its use in in-person and web-based courseware. IANAL, but who's liable? The end-users using it? The trainer? Docker?

Disclaimer: I love Docker, but it needs some chain-of-custody assurance and greater visibility of container contents.


Docker ships with support for image signing, execution authorization (based on trusted signatures), and distribution endpoint authentication (TLS / PKI). Most image registries offer deep package inspection, OSS package manifests, and CVE visibility.

If all that isn't enough, individual image layers can be accessed and inspected by hand (they're just tar files).

If you're looking for "end-to-end chain of custody" you're going to need to specify the ends. Image signatures get you point-in-stack authorship assertions. I'm not sure what else anyone could ask for here.
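
For instance, after `docker save myimage -o image.tar`, the layers really are plain tarballs you can walk with stock tooling (a sketch assuming the older docker save layout, where each layer sits at an inner `<id>/layer.tar`):

    import tarfile

    with tarfile.open("image.tar") as image:
        for name in image.getnames():
            if name.endswith("layer.tar"):
                with tarfile.open(fileobj=image.extractfile(name)) as layer:
                    print(name, len(layer.getnames()), "files")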


Alpine isn't junk, but it is a layer in a lot of things that people use (ex. `golang:1-alpine`).

Docker just makes it super opaque when you have to rotate images.


Isn't that the distro that the jre docker containers use as the base?

EDIT: Just looked into that.. it looks like just the java 8 containers.


Alpine is a popular choice for containers because it has a very small footprint.

Looks mostly avoidable if O_NOFOLLOW had been used? But the fix doesn't seem to add that, instead relying on other checks?
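
For reference, a sketch of what extraction code gains from O_NOFOLLOW (illustrative, not apk's code):

    import os

    def write_entry(path, data):
        # O_NOFOLLOW makes the open fail (ELOOP) if a previous, malicious
        # archive entry planted a symlink at this path; O_EXCL additionally
        # refuses any pre-existing file at all.
        fd = os.open(path,
                     os.O_WRONLY | os.O_CREAT | os.O_EXCL | os.O_NOFOLLOW,
                     0o644)
        with os.fdopen(fd, "wb") as f:
            f.write(data)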

It seems like the attack is not completely quiet as an error is printed while installing a modified package, either from compromised mirror or by man-in-the-middle:

    ERROR: ${package_name}: BAD signature

Remote code execution will happen, but at least you will get a warning that something suspicious is going on.

The error is even printed in red, and there's no obvious way for the attacker to suppress it. But do people really look at the build log as long as the build is successful and the intended containerized program seems to work just fine?

Maybe it's possible to overwrite already printed lines with terminal sequences.
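
It is; standard ANSI escape sequences can move the cursor up and clear a line, so a malicious post-install hook could paper over the error on a plain terminal. A quick demo (not apk-specific):

    import sys
    import time

    print("ERROR: evil-pkg: BAD signature")
    time.sleep(1)
    # Cursor up one line, erase it, print something innocuous instead.
    sys.stdout.write("\x1b[1A\x1b[2K")
    print("OK: 1 package installed")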

People really have to stop thinking they can write package managers.

This is a bummer, clearly. Otherwise, apk is pure joy to work with, as is indeed all of Alpine.




