Oh, good: there's a standalone installer available (http://support.apple.com/kb/DL1726). But the download is served over HTTP. Maybe I can
just try the same URL with HTTPS:
$ curl --head https://support.apple.com/downloads/DL1726/en_US/OSXUpdCombo10.9.2.dmg
HTTP/1.1 302 Moved Temporarily
Server: Apache/2.2.24 (Unix)
Okay, I'll follow Apple's instructions for checking the certificate fingerprint in the installer (http://support.apple.com/kb/ht5044).
But that page (Last modified November 2011) displays a different fingerprint (9C864771 vs FA02790F)...and that fingerprint was also served over HTTP.
Gives up and opens the App Store.
Status: signed Apple Software
1. Software Update
SHA1 fingerprint: 1E 34 E3 91 C6 44 37 DD 24 BE 57 B1 66 7B 2F DA 09 76 E1 FD
2. Apple Software Update Certification Authority
SHA1 fingerprint: FA 02 79 0F CE 9D 93 00 89 C8 C2 51 0B BC 50 B4 85 8E 6F BF
3. Apple Root CA
SHA1 fingerprint: 61 1E 5B 66 2C 59 3A 08 FF 58 D1 4A E2 24 52 D1 98 DF 6C 60
TLS is not used to authenticate the update.
Yes, that's the marketing page explaining how Gatekeeper works, but in the end it's a feature of Gatekeeper that makes it harder for you to open unsigned packages and impossible to open packages with a broken signature.
So even when you don't know about pkgutil (most people don't), Gatekeeper will still help you.
1. Use linux/fbsd/obsd/win box to download update.
2. Verify authenticity of cert/sha1
3. scp dmg / copy to USB drive
4. Apply update and move on.
If you're 13, add "duh" at the end.
Did the mother of your thirteen-year-old preface her question with "I already tried to use curl but then I realized that would not work. Then I thought I could verify the SHA1 but I realized I was obtaining the sha1 value over an insecure channel."?
I agree that context matters. That's why, statistically, the proposed solution isn't a solution. It doesn't really work in a way that addresses the serious issue, because the serious issue is the sheer magnitude of the number of compromised systems.
To put it another way, if you have a Linux or Windows or BSD box, why keep a potentially deeply compromised OS X installation around at all? The patch isn't going to unpwn a pwned box. The hoops might ensure the patch isn't compromised, but in terms of system security the horse is out of the barn and all the way to the glue factory.
The only case where jumping through those hoops makes a difference is in the second best case. And that's statistically equivalent to the best case and preparing for the best case in regard to security goes by the name of "wishful thinking."
It's certainly a bad bug, and it ought to have been caught.
But it feels like this would be much harder to exploit than many other bugs which have had far less hoopla.
As I understand it, this SSL bug makes it rather trivial to perform MITM attacks against apps which use the default system SSL libs.
That's certainly a problem, but most people are using trustworthy ISPs (at least in this sense). Comcast seems unlikely to try to steal your bank password, and Verizon is unlikely to try to harvest your HN cookies.
It seems like this primarily affects people connecting to untrustworthy access points, such as coffee shops or airport Wi-Fi. While that's certainly something that needs to be fixed, it seems far less crucial than remote code execution, or many other bugs we see regularly.
I'm sure I'm missing something here, can someone help me understand?
Is it just the "Ick" factor of having something you thought was encrypted actually being fairly open?
I don't understand why this is getting more attention than other (seemingly) more dangerous exploits.
 - http://msisac.cisecurity.org/advisories/2013/2013-088.cfm
- It's an easy to spot bug,
- in the most critical part of the code,
- of a fundamental security library,
- and it's been there for a long time, nobody knows how many systems have already been compromised due to it.
With this bug, Apple's library isn't actually an SSL implementation. It does not perform the most essential part of an SSL implementation - verifying that the peer possesses the private key it claims to possess.
> it seems far less crucial than remote-code-execution
> I don't understand why this is getting more attention than other (seemingly) more dangerous exploits.
Because it's the foundation for a heap of applications and services that were believed to be secure due to use of SSL. If you consider those, this is worse than any bug in any application. It's not just one application with a critical vulnerability - effectively, it's half (or however many) of all OSX and iOS apps with a critical vulnerability. All you need to do is go browse the net in a coffee shop, and some stranger can easily do things like:
- Pwn your box (MITM the auto update). Actually, with this he can do the rest of the list too.
- Steal your money (MITM your bank connection).
- Steal your online accounts, including email.
It's not "just a bug". Yes, everyone makes mistakes, we're all human. But it's completely unacceptable that those mistakes go unnoticed and make it into the production code of such a critical component, deployed to millions of users.
> That's certainly a problem, but most people are using trustworthy ISPs
Your argument seems to be that it's not a big issue if the security is totally broken, since we don't need security in the first place.
The handling has been abysmal as well. They dropped a 0-day on themselves by releasing the iOS update, and then delayed the fix by several days, apparently so they could release it along with the Facetime integration.
And even then they don't mention it in the release notes! If you look at the release notes for this update, you'd have no idea how important this is, if you didn't already know.
 The release notes (http://support.apple.com/kb/HT6114) link to this: http://support.apple.com/kb/HT1222 , which as of right now, lists Dec. 16th as the most recent OS X security update.
The only alternative would have been to delay the iOS release, which they didn't do because almost certainly this bug was already being exploited in the wild. All this did was make more people aware of it, and only then for a few days.
As for the OS X release, I'm sure they released it as fast as they could. It has nothing to do with releasing along with FaceTime integration, and everything to do with 10.9.2 already going through the GM process: it was faster/easier to add this fix into that and continue trying to validate the GM than it was to spin up an entirely new train for a 10.9.1.1 with just this fix and try to validate that.
Right. This is basically Apple violating their own "responsible disclosure" policy and announcing a 0-day vulnerability in OS X.
They should have delayed the release of the iOS patch until the OS X one was ready. This is the whole point of responsible disclosure: maybe the vulnerability is being used in the wild, but by delaying disclosure until the vendor can patch it, the potential for exploitation is greatly reduced.
>All this did was make more people aware of it, and only then for a few days.
You say that as if it's not a big deal...
There is no justification for this bug. It never should have shipped. It never should have gone unnoticed for so long. It never should have been announced prior to a patch being available.
No matter how you slice it, Apple failed miserably, and "iOS was probably being exploited" is not an excuse. Apple has how much money? How much money do you think it costs to put their entire Core OS engineering staff on SHIPPING AN UPDATE FOR BOTH OPERATING SYSTEMS?
They could have afforded it. They were simply too incompetent, after a chain of incompetence, to do so.
Don't cargo cult 'common wisdom'; the only incompetence on display here is your axiomatic assertion of things you don't understand.
Stop trying to acquire internet points by being a jerk.
Please have the bridge delivered to my home between noon and six.
(Though, really, I should just accept this absurd statement, since it amounts to you admitting your own incompetence.)
> Stop trying to acquire internet points by being a jerk.
This from the guy who decided his scintillating contribution to the thread would be redundantly accusing people of "apologism" and "incompetence". You do understand the people who actually do work at Apple are human beings, and that you are flinging insults at them, right?
Why? Do you not already have a bridge to troll under?
> You do understand the people who actually do work at Apple are human beings, and that you are flinging insults at them, right?
Yes, and I know who they are.
If this is true, then their process could use some adjustment. Contrast with Google Chrome which has the regular motion of changes going through channels, but the ability to update virtually all clients within a matter of hours if a critical issue is found.
(I realize there is a lot more QA necessary for an OS update, but I'm not convinced that a fix for this specific bug would have taken a long time to QA. Certainly not anywhere near as long as we've waited for this update, or as long as a lot of people will delay installing it because it is huge.)
To play devil's advocate a bit: their process still needs some work. There isn't a good reason why they couldn't have released this patch through its own approval process, running simultaneously and at a higher priority than the FaceTime work.
This is not a reasonable argument. At Pwn2Own each year, how many browsers have vulnerabilities that allow remote code execution? All of them. How many of these vulnerabilities are zero-days? A significant number of those. This happens every single year. Even the advanced protections in e.g. Chrome don't stop new vulnerabilities from being found on a regular basis. And all of these products are deployed to millions of users.
You could complain about Apple's response to this bug, and that might be a reasonable complaint to make. At least Google patches bugs quickly when they surface at Pwn2Own. But that's different from claiming the bugs shouldn't have existed (or should have been caught before making it into production). Bugs are fundamentally hard to find, and it's not really getting any easier.
The Apple bug isn't really in that class of exploit. It's a simple coding/merge error, and it's actually a regression from previously working code. One of the things that worries me is that this bug would have been caught so easily with basic unit tests.
Test 1: Make connection to server with a valid SSL certificate [PASSED]
Test 2: Make connection to a server with an invalid SSL certificate [FAILED]
I mean, I certainly can't claim that all my code is run through extensive tests before every deploy, but then I'm not working on the security tools that underpin an entire operating system.
What would have caught the bug: automatic code indentation or any sort of compile-time warnings about dead code.
Indeed, to err is human. This is why you're a negligent jackass if you don't plan for errors and build multiple systems to prevent and detect them, at least until computers start programming themselves for us.
Vulnerabilities of all kinds exist. We need to find them, learn from them and we need to fix them.
Getting hung up on whether or not they are 'acceptable' is just kind of weird.
Bad stuff happens, incompetence happens, mistakes happen. None of that is 'acceptable', but it happens just the same.
Creating an environment where some kinds of mistakes are 'unacceptable' doesn't eliminate those kinds of mistakes, it just causes people to stop reporting them.
Complaining about their release cycle makes some sense. Complaining about the existence of an existing bug is basically just howling at the moon.
If you think you operate and hold yourself to a much higher standard, then go ahead and complain all you want. Maybe you do... maybe.
Wouldn't you need to have control of the router Starbucks is using to set up the MITM attack?
Also, a lot of smaller coffee shops will just set up a wifi router and give it a password and call it done, when many of them have inherent exploits.
On top of all that, there are lots of Asus routers out there running firmware that can be remotely exploited and for which there is no patch. Or Linksys routers. Or D-Link.
All an attacker needs to do is change your DNS server settings and they can send you to any server they want instead of the server that you expected. They could redirect all HTTP and HTTPS requests to common services through their servers, MITM your connections to Facebook, Twitter, Apple, Bank of America, your e-mail, your health insurance website, etc.
On its own it's safe, but if you have control of someone's WiFi router (which is apparently trivial) then it's entirely possible for someone to snoop on a huge swath of your supposedly secure internet traffic.
The only real saving grace here is that OS code signing hasn't been compromised, so the system won't install a backdoor'ed 10.9.2 update. At least that part of the chain is secure and people can update.
Also, WPA-PSK is useless for coffee shops because you have to give the PSK to everyone, and with that the ability to impersonate the AP. For that purpose, EAP-TLS is better, assuming you take care to verify the AP's certificate.
This isn't true of home networks using PSK, where the AP and the client need an identical PSK in order to authenticate each other. Yes, each other - if you successfully connect to an AP using a PSK, you can be pretty sure that AP knows the same key, and probably isn't someone impersonating it (note however that anyone with the PSK can impersonate the access point).
Remember kids, when you connect to a network without a PSK or more elaborate authentication where you verify the identity of the AP, you generally have no idea who is operating that network.
In SSL, you have the certificate and the key; the key is private and secret, and the certificate is public. A public certificate which is wrong (e.g. self-signed) does you no good because browsers won't trust it (and many of them have made it frustrating to try to bypass the warning).
- It really is a severe bug in a major platform (and by that I mean iOS not MacOS)
- It is easy to understand once spotted, so everybody who can read code in the entire planet can write about how stupid Apple is because they refuse to see the light and switch over to their favorite pet language/coding standard/methodology
- The NSA has massive MITM capabilities and is known to sabotage American products, and this looks like a very, very convenient bug for them to have, leading to speculation that it might not be a mistake
- Apple haters.
Because of the level of professional incompetence it exhibits, in a world where people think using iPods as part of an airline's essential safety process is a fucking whiz of an idea. It makes the first-order problem of using iPods to transmit confidential medical records feel rather trivial.
Yet there is nothing for which you need ask forgiveness. A model of the world where everyone lives in circumstances where either Comcast or Verizon is always available for one's internet connection [and it goes without saying that neither could possibly be compromised] is so absurd that you can only be speaking tongue in cheek.
Compare the reporting of this to the Chrome vulnerability in TLS patched yesterday... https://news.ycombinator.com/item?id=7295785
The difference is pretty major. It's believable that someone forgot to consider the case where a new certificate was negotiated. It's downright inconceivable that no one tested whether a bad certificate failed validation. That's like selling a pregnancy test without testing what happens if the person isn't pregnant.
But if you are tricked to go to bankofamericaa.com instead of bankofamerica.com, a crook can be the proxy between you and your bank and you are none the wiser.
Practically speaking, that probably doesn't matter, because someone who understands that won't click on an email and log in to bankofamericaa.com. But there is a difference.
You got it! Really when you think about it, why do we even bother with crypto on the Internet in the first place? It's not like your data is traveling over a series of networks and nodes with different owners/operators that you don't even know and certainly haven't vetted for trustworthiness. That'd be crazy.
Have you not seen the news, at all, for the past several months? There have been a few revelations that perhaps some ISPs are not entirely trustworthy, to say the least...
That, and also the fact that if Steve Jobs were still around, he'd be able to sweep this up, or else be able to assuage the public about the one extra line of goto fail;
I think that in the post-Jobs future, we'll see more and more "revelations" about bugs/issues within Apple/iOS/OS X with people aggressively posting as much info as possible to "stick it to Apple"
I don't know about that. Comcast doesn't seem to have any problem robbing its customers right now.
And that betrayed sense, which invokes a hint of paranoia - that bug looks too obvious to have been skipped in QA.
It makes a bit more sense why they'd make us wait a few days, now.
One problem was the failure to coordinate the release of 10.9.2 and iOS 7.0.6. The other problem is their patching cycle in general.
This update includes many high severity fixes from mid 2013 and one issue as far back as 2011. That tells you all you need to know about Apple's security program management. They release uncoordinated and when they feel like it, with no sense of urgency. It's convenient for them to just toss critical security fixes in to their 6 month OS updates, so they do.
Some criticize Microsoft for their "slow" patching, but they at least have a dedicated monthly cycle just for security updates, and they will send them out of band if a 0day gets exposed. And rarely will you see them drop 0day on themselves (though this wasn't always the case).
It's still terrible. They should have pushed out a fix for this vulnerability first and push back 10.9.2 if necessary. Or perhaps introduce a modern package system for the base system.
What this doesn't excuse is disclosing the iOS bug before all fixes were ready. THAT was the major screwup.
It was their decision to put the fix in 10.9.2 which is the problem. I agree rushing 10.9.2 would have been bad.
For quite a serious vulnerability, which requires removing one goto statement to solve? I am not sure by what standards that is a good response time. There is surely something wrong with Apple's procedures here.
I suspect what this points to is that Apple doesn't have automated testing and they need a bunch of old school "hands on keyboards testers" to run a test case list that takes 4 days.
If you find this 'totally unacceptable', my suggestion would be to either go join them and help them improve the situation or to move to another OS. Bitching on the internet merely indicates how little experience you have with the trade-offs that need to be made when pushing large volumes of software.
Uhh no, we're customers. I don't find it unacceptable but if I did, it's within my rights to say so without "help[ing] them" as I pay the Apple Tax happily and repeatedly.
So call some employees in!
The fiascos of the early days are long gone, and the SDL (http://www.microsoft.com/security/sdl/default.aspx) has performed really well.
Is it more critical to have 4 days of protection, or 4 days of "WTF is Apple incompetent" press?
Considering it needs to update the TLS support on the rescue partition too, doing it outside of single-user mode is probably not a good idea.
You're nitpicking, it seems.
Sure, they show a SHA1 on this page: http://support.apple.com/kb/DL1726 but that could be MITM'd as well.
(And no, this bug didn't break client-side signed package verification.)
If, however, we don't engage in circular reasoning and we assume your box isn't currently in the possession of the Russian mafia or (insert preferred APT here), then how can one be reasonably confident that the update one receives through the updater is legitimately the one Apple is distributing?
Because it is signed and the code-signing verification was not broken by this bug.
I agree with the point you're making, but you can also turn this idea around, after which it serves to highlight how insanely inadequate our current tools and infrastructure are from a security standpoint.
Basically, you can only reasonably hope to verify a patch if you're not already owned, so you also have to assume you're not in order to verify. It's as if there was a contagious disease that has a good chance of killing you after a number of years, but the diagnostic tests can only be counted on to work if you don't have the disease in the first place. So then why would anyone ever bother getting tested? Our current situation is that uncomfortable.
Being owned is less like having a virus and more like having schizophrenia. You can't ever expect to self-verify yourself, because if you're suffering from it, everything you're perceiving is being filtered through a compromised and untrustworthy system.
You have to trust some third-party that you believe to not be similarly compromised to do the verification for you.
Which effectively means that most people won't bother, unless a "trusted third party" is built into their machine.
But that has huge potential problems of its own.
says at the bottom "For detailed information about the security content of this update, see Apple security updates."
(Not trashing Mac, I am a Mac user.)
In corporate settings with desktop management, Macs are actually a huge pain to deal with; Windows maybe starts from crappier defaults but there's a much more mature industry around locking it down.
>Safari can't open the page "https://www.imperialviolet.org:1266" because Safari can't establish a secure connection to the server "www.imperialviolet.org".
Cool. Someday I'd like to be able to leave a FaceTime voicemail message if the receiver declines the FaceTime call.
So maybe it brought some eyes on the "gotofail" but that's just speculation…
Thanks anonymous Apple guy :)
Regular Jane and John Does or even power users would have no idea of the importance of upgrading their system for this. "500MB update? Nah, it can wait, I don't need any of that".
Awesome for those of us using FileVault who have to enter their login password each time they wake up their computer.
Command-Option-Media Eject (⏏)
Total: 196 B sent, 45.8 MB received
Outgoing to devimages.apple.com (184.108.40.206), Port http (80), Protocol TCP (6), 196 B sent, 45.8 MB received
Available for: OS X Mavericks 10.9 and 10.9.1
Impact: An attacker with a privileged network position may intercept user credentials or other sensitive information
Description: When using curl to connect to an HTTPS URL containing an IP address, the IP address was not validated against the certificate. This issue does not affect systems prior to OS X Mavericks v10.9.
CVE-ID CVE-2014-1263 : Roland Moriz of Moriz GmbH"
FWIW, applying the security update also installs iTunes 11, which users of Requiem may want to take note of.
Edit: I did check with Imperial Violet and gotofail.com, and the issue is not there after the update.
God only knows what incompetence and disregard for user privacy and sanity awaits in these "new" versions. What are they adding that we really need? Oh, the ability to use SSL PKI. Yeah, I guess you have to upgrade.
Why isn't HN discussing the effects this screwup has on email? Email is bigger than the web, believe it or not.
And most of the world appears to use webmail.
With this "bug" HTTPS for your webmail is futile.
You have no way to know you are connecting to the real googlemail, yahoomail, hotmail, etc.
The "authentication" functions of SSL need to be made a compile-time option, not a default.
It's obvious almost no one knows or cares how to use SSL's PKI mechanism properly.
SSL's encryption capabilities have been useful, but using SSL to do server authentication causes more problems than it solves.
History has shown it's just not easy enough to use.
SSH can do authentication without PKI. Alas, it is embedded into a program that only nerds use.
I'm using the authentication framework in CurveCP. I'm working on making it very easy to use.
THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
So yeah Apple can't be held responsible. If they could then the whole free software open source community would be in deep doo doo.
Available for: OS X Lion v10.7.5, OS X Lion Server v10.7.5,
OS X Mountain Lion v10.8.5, OS X Mavericks 10.9 and 10.9.1
Impact: Multiple vulnerabilities in PHP
Description: Multiple vulnerabilities existed in PHP, the most
serious of which may have led to arbitrary code execution <…>
Essentially, only Mac-owning web developers who enable these things (and serve PHP from their box to the world) are affected by any security problems in PHP. I imagine that most such web developers actually only dev locally and push the code to another server. It's nice that they updated them anyway.
Android file transfer utility stops the keyboard and touchpad from working.
I haven't upgraded to Mavericks at all yet because I'm worried that it will break software I depend on daily. Can someone please confirm whether upgrading has a significant risk of breaking compatibility with things like rails, mamp, netbeans, android studio, golang, vagrant, virtualbox, docker etc.
pkgutil --check-signature OSXUpd10.9.2.dmg
The manual-download pages on http://support.apple.com/downloads/ publish SHA1 sums. Unfortunately those pages aren't served over SSL. You could download the update and compare its hash against those of your friends: at least then you'd all be installing the same thing (preventing narrowly targeted attacks).
Unfortunately we'll probably always be in the dark about the HOW, WHO and WHY. :(
If it was an honest mistake, they're not going to publicly throw an employee under the bus.
If it was a malicious action from the NSA, they're not allowed to make it public.
I suspect the only hypothetical way we'd hear about the "who" is if an employee weakened it for his own personal gain, and criminal charges were raised.
Do they use the same code stack for iOS and OSX? This seems weird to me - even though parts of code could be used in both operating systems, I would imagine separate teams would be on iOS and OSX, each reviewing code bases on their own, running unit tests and whatnot. Still, this major flaw has been present for 2.5(?) years - all the more reason for paranoia about its presence.
Add to this, why wait so long with the OSX update? A security issue THIS serious MUST be patched instantly and rolled out as an individual/separate update as soon as possible, even if that means pushing back OSX 10.9.2. Or did they need some time to introduce a new flaw somewhere? o_O
See the Dev center for the instructions to set it up.
To do this, open Keychain Access, open Preferences, and click Reset Keychain. It will create a new login keychain and keep your old one backed up in the keychains folder.
Mid 2012 MBP (MacBookPro9,2).
The issue is present after a fresh reboot with no other applications running than Chrome.
My Chrome version is now 33.0.1750.117, which should be the current one.
iTunes and Safari audio output is always fine.
I run VMware Fusion 6 virtual machines often, but that doesn't seem to affect the bug one way or another. Also, I only installed VF6 long after updating to 10.9.
Audio in Windows or Linux Chrome running in a VF6 virtual machine is perfectly good.
Just tested again, the issue seems to only affect Chrome playing Macromedia Flash content. I don't have Flash player installed, only Chrome's internal PPAPI Flash player.
I think I'm just bothered by my perception that this Mavericks upgrade is being presented as a fix for an OS X security issue, rather than just offering a patch to Mavericks users.