You can't get it over SSL. Not to worry, the binary will be signed by Mozilla, right? Yeah, GPG only. Not X.509 signed.
But hey, the online install page supplies it over SSL, right? Well, sometimes. But it turns out they don't enforce SSL use. Cue SSLstrip.
PS: On Mac OS X 10.9, Apple by default prevents running unsigned binaries. Not to worry, Mozilla tells you how to bypass the check, without even hinting that the check has a very valid purpose.
I Googled Mozilla Thunderbird and the first hit was the download page, over HTTPS. You're right that SSL isn't enforced, but that's a chicken-and-egg problem for Firefox downloads, I guess, now that TLS 1.2 is enforced and the user may be stuck with a browser that doesn't support it.
SSL 3 is still fine for protecting integrity, just not confidentiality, so it is okay for downloads.
What use is having HTTPS to protect the very thing you need in order to implement it? It's a basic chicken-and-egg problem, replayed every time OpenSSL gets compromised, and it can only be worked around by securing the download with something else.
Think about it: there's an SSL break and you need to update OpenSSL. What good does it do you to have it available over SSL?
Not that people shouldn't be using GPG, but only using it means you only protect the paranoid.
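For what it's worth, the GPG route looks roughly like this (just a sketch; the key ID and filenames are placeholders you'd have to confirm out-of-band against the vendor's published fingerprint):

    # import the vendor's release signing key (placeholder ID; check the fingerprint out-of-band)
    gpg --recv-keys 0xDEADBEEFDEADBEEF
    # fetch the binary and its detached .asc signature, then verify
    gpg --verify firefox-XX.Y.tar.bz2.asc firefox-XX.Y.tar.bz2

Which is exactly the workflow only the paranoid bother with.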
However, the first C compiler was not written in C, but in NB, an intermediate step from B to C.
But anyway, the comparison isn't very good, because Haskell/GCC don't develop sudden security vulnerabilities that instantly render existing binaries unusable for getting new ones.
I find it so infuriating when I see a download page with hashes and the download links next to each other, as if that's any help at all.
`curl http://not-ssl.somesite.ru/ba.sh | bash`
By default, `curl https://blah.blah/` will only work if the TLS certs are proper & validated. This isn't about trusting the author (you'll be running their code anyway, one way or another) but the transport medium (HTTPS!).
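The more careful pattern is something like this (just a sketch with a placeholder URL and hash): fetch over HTTPS, check the file against a hash you obtained from a channel you actually trust, and only then run it.

    # curl validates the certificate chain by default over HTTPS
    curl -fsSLO https://example.com/install.sh
    # EXPECTED_SHA256 is a placeholder; it must come from somewhere other than the same page
    echo "EXPECTED_SHA256  install.sh" | sha256sum -c - && bash install.sh

Of course, if the hash was published on the same unauthenticated page, the second step buys you nothing.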
If I had your show-and-shame tumblr, I'd only include http:// links.
(A quick Google shows that indeed it can be: the Content-MD5 header <http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html>. Wonder how widely supported it is by HTTP software used by people who like to check hashes of things they download.)
* Checksums, as in XMODEM or a CRC
* Cryptographic hashes (including MACs)
* Cryptographic signatures (e.g. an OpenPGP key or a cert)
To say that it is difficult to implement all of these correctly and in concert is a grave understatement, but this is what modern crypto software, and the network protocols that use it, have to do (rough command-line equivalents of the layers are sketched below).
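Roughly, the layers map onto something like this at the command line (a sketch; the file names and secret are placeholders):

    # checksum: catches accidental corruption, not a deliberate attacker
    cksum file.tar.gz
    # cryptographic hash: tamper-evident, provided the hash itself arrives intact
    sha256sum file.tar.gz
    # MAC: a keyed hash, so both ends need a shared secret
    openssl dgst -sha256 -hmac "shared-secret" file.tar.gz
    # signature: integrity plus origin, given a key you already trust
    gpg --verify file.tar.gz.asc file.tar.gz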
Now back to the thread on HTTP header checksums :)
In the real world there are bugs in TCP stacks and in HTTP implementations that cause HTTP traffic to get corrupted; I see this every day. Some applications implement extra checking; most do not. Browsers, wget, and curl can't implement any extra checking because the way you check is application-specific. There is no standard way to do it; what you mention there is an esoteric feature.
Just for anecdotal fun: Logic Pro tries to download over 50 GB of assets over HTTP (each individual file is many GB). It has never worked for me on any of my networks (and in fact I wrote a tool to fix this).
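(Not the actual tool, but the idea is roughly: keep resuming until the transfer finally completes, then verify the result against a known hash before installing. Something like this sketch, with a placeholder URL and hash:)

    # -C - tells curl to resume a partial download; retry until it exits cleanly
    until curl -fL -C - -o asset.pkg "https://example.com/asset.pkg"; do sleep 5; done
    # verify before letting the installer touch it
    echo "EXPECTED_SHA256  asset.pkg" | shasum -a 256 -c -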
I'm very much aware of this. Hence my surprise that such checksumming is not commonly performed at the application layer.
> There is no standard way to do it; what you mention there is an esoteric feature.
The Content-MD5 header is defined in RFC 2616. It is, by definition, standard. If it's not widely supported, then I think that it would behoove the people who care about these things to switch to servers/clients which do support it.
(I suspect the intersection between "people who know/care how to use md5sum" and "people who know how to set/read an arbitrary HTTP header" is fairly large. Hence my surprise at the common practice of ignoring this capability.)
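(For reference, the header value is just the base64 of the raw MD5 digest of the body, so producing and checking one by hand is something like the following sketch, with a placeholder URL:)

    # what a server would put in Content-MD5: base64 of the binary MD5 digest
    openssl dgst -md5 -binary file.iso | base64
    # see whether a given server actually sends the header
    curl -sI https://example.com/file.iso | grep -i '^content-md5'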
(Although I will note that file systems, like application protocols, should maintain their own integrity; however, most do not, which also seems bizarre given that it's 2014.)
(My day job is working with enterprise-grade content-addressed block storage, so maybe I've just set my data-protection expectations too high.)
Both of these options would protect your important files against failing single disks without having to do any RAIDing. The unimportant data (e.g. the OS itself, caches, etc.) could be reduced-redundancy, since it doesn't need to be captured in a disk rescue.
For any archival DVDs I burn, I compress the data, then run it through PAR2: http://en.m.wikipedia.org/wiki/Parchive
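The PAR2 step is roughly this (a sketch assuming par2cmdline; pick whatever redundancy level suits the media):

    # create recovery blocks with ~10% redundancy alongside the archive
    par2 create -r10 archive.par2 archive.tar.xz
    # years later: check the disc, and repair from the recovery blocks if anything rotted
    par2 verify archive.par2
    par2 repair archive.par2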
I don't know the root cause in general, but for me the most common cause has been bugs in the TCP/IP stacks of cheap home routers.
 You've probably already sent out the unmodified hashes before you sent out the patched file. First idea: Keep a set of known binaries and a set of hash replacements. When you encounter a binary, first check if you've seen it before. If you haven't, deliver it unmodified, then compute hashes in a couple of known formats, and the hashes of the corresponding patched binary. When you deliver text content, check if any of the previously computed hashes match and replace them with your evil version. When you encounter the binary a second time, deliver the patched version, assuming that you either sent a modified hash or there was no hash in the first place.
It might make the first visit slower, but for small binaries (or large bandwidth) it's probably not that noticeable (especially over Tor, where slowness is expected).
Lots of times the hash is in a separate file, which makes the attack even less noticeable since you can patch and hash the binary as you detect it but before the hash file is downloaded by the user (and you can even delay the hash download as much as you need).
As I see it, downloading hashes through unsafe channels delivers a false sense of security, which is even worse than no security at all.
If it is an HTTPS connection where you trust the CA, you don't need the hash; the binary coming from that connection is just as safe. If it is signed by a key you can obtain securely, well, then sign the executable itself and forget about the hash.
most people most of the time trust signatures/hashes served over TLS. in cases where they don't trust the upstream CA (which is commonly influenced/chosen by the developers), there's also a good chance they shouldn't rationally trust the source of that code. i see your point technically, but in practice you're almost describing a "trusting trust" attack, which i don't think is the most common threat model for app downloads.
and i agree re: signed executables, but we live in a TOFU world and vanishingly few people personally verify fingerprints OOB with devs, so for many people not looking at trust paths and such, you're practically talking about trusting EFF's CA vs. mozilla's. perhaps that's a significant distinction to some, but i'd probably characterize it differently than you have.
> The application requested data from a website, but the response was not valid. For details, use Event Viewer to view the Application Logs\Microsoft\Windows\Bits-client\Operational log
Which seems pretty clear to me (although plausibly not to an end user).
This is SSL as security theater.
It's obviously not nearly as secure as end-to-end SSL, but it's probably still useful. The connection between the client's machine and Cloudflare's server is more likely to be under attack (unencrypted Wi-Fi, hacked personal routers, rogue Tor exit nodes, etc.) than the connection between datacenters.
Or is the false sense of true security a bigger detriment?
That's a really tough call.
Cloudflare makes no security guarantees. They don't even commit to keeping your public key secure when you give it to them. That's a bad sign. One wonders how they fund their free MITM service.
How about you randomly generate and write all your passwords down on a piece of paper in your wallet? For many threat models, that's far more secure than even using a password manager. For other threat models it's far less secure than using a password manager. Other than things that are just flat-out broken, "more secure" and "less secure" don't exist without qualification.
Or is this an indictment of Cloudflare offering to be your SSL termination point?
This suggests the exit relay itself is doing the patching. Isn't it more likely that some MITM between the exit relay and origin server is responsible?
Of course, the problem with that is that countries with censorship, like China, seriously throttle or outright block any SSL connections made to hosts outside the country. And sometimes they even use something like SSLstrip to mount a MITM attack with a self-signed certificate.
Average users there are also used to seeing self-signed certificates locally, and so never even think twice before dismissing a warning that an SSL certificate is not valid.
Yes. And source code, too. If you can't provide SSL for downloads, you should be using a third-party service like GitHub that can.
Except that this FixIt binary will have no signature and Windows will light up like a Xmas tree. So it really comes down to whether you pay attention to these warnings or you don't. And if you don't, despite all of Microsoft's efforts over the past 10 years, then you get what you deserve.
- patching (or rather, corrupting) binaries, hoping to break Windows Update
- intercepting and replacing the top 5 'Make Windows Update work again' downloads with a (signed if you want) application of your own?
Bonus points for injecting 'Already got this host' into requests from then on, so that Windows Update magically starts working again.
Programming can be learned easily enough by reading and practicing, but with IT security one doesn't know where to begin, or what the journey is like.
Being a good security researcher requires (among many other things) the ability to understand how things work. How ANY thing works, down to the LOWEST level. Idk about others, but I always considered 'security guys' to be the elite of elites in the IT world.