
Apple's SSL/TLS bug - Aissen
https://www.imperialviolet.org/2014/02/22/applebug.html
======
rdl
It's interesting watching all the speculation about "was it a backdoor, or
just a bug?"

Lots of points in favor or against:

1) It's a huge compromise, and "open" to anyone to exploit, which would
ultimately get caught and fixed faster. But it's also not targeting anything
specific, so there's less of a signature of the attacker.

2) Incredibly simple, and thus a plausible mistake.

3) Hidden in plain sight

I'd generally come down on the side of "accident". The better question is if
a systematic testing system for exploitable weaknesses (or a lucky chance
finding) could find something like this (either a regression, or a badly
implemented new feature) and exploit it -- basically assuming there are going
to be errors in implementations. That's understood to be a standard technique
in the IC, with a whole industry built around it... and then there's the scale
of the compromise to consider.

There are lots of mitigation techniques for vulnerabilities like this
(essentially, devices which are too fast-changing and too difficult to fully
trust, but which have to touch sensitive data), but it's not as if people can
carry around a firewall in their pocket these days, sadly.

I'm certainly re-generating any keys which touched iOS or OSX devices
(thankfully very few), and reinstalling OSes, in the interim.

~~~
brudgers
My gut says the probability of a random typo executing without raising an
error or exception is rather low. The probability of it doing so in a way that
aligns well with the interests of nation-states, large corporations, and/or
criminal enterprises is even lower.

This might explain DROPOUTJEEP, particularly considering how much more
efficient it is than breaking messages after they are encrypted.

[http://mobile.eweek.com/security/nsa-spying-on-apple-iphones.html](http://mobile.eweek.com/security/nsa-spying-on-apple-iphones.html)

~~~
pjene
Sure, this aligns with those interests, but the bug's existence is predicated
on the _entire code base being written with substandard style rules and no
static analysis or tests_, which suggests to me that incompetence got here first.

~~~
terminus
> entire code base being written with substandard style rules and no static
> analysis or tests, which suggests to me that incompetence got here first.

That _and_ inadequate, bordering on zero, code review. Even a beginner C
programmer looking at this code could see how fishy it looks.

No code review while checking in code to libssl. That takes a lot of
incompetence.

~~~
Create
_That takes a lot of incompetence._

"Any sufficiently advanced incompetence is indistinguishable from malice"

[https://support.apple.com/library/APPLE/APPLECARE_ALLGEOS/HT...](https://support.apple.com/library/APPLE/APPLECARE_ALLGEOS/HT5780/Crypto_Officer_Role_Guide_for_FIPS%20140-2_Compliance_iOS_6.pdf)

We begin therefore where they are determined not to end, with the question
whether any form of democratic self-government, anywhere, is consistent with
the kind of massive, pervasive, surveillance into which the Unites States
government has led not only us but the world.

This should not actually be a complicated inquiry.

[http://snowdenandthefuture.info/events.html](http://snowdenandthefuture.info/events.html)

------
apaprocki
Worth noting that static analysis finds bugs like this immediately. TLS code
seems like the perfect candidate to run through static analysis on every
checkin. There are products such as Coverity and PVS-Studio that would have
immediately flagged this and probably some open-source ones built around LLVM
as well (unsure about this one, though). I personally use Coverity and have it
hooked up in the same way everyone connects Travis CI.

~~~
pjmlp
There is no excuse to bypass static analysis nowadays. At the very least it
should be part of the continuous integration build.

The problem is the many developers who still think they are perfect and know
the full C standard, including the undefined parts.

~~~
eliteraspberrie
Unfortunately the decision to use static analysis tools would have to come
from developers who are comfortable admitting they make mistakes sometimes.

It takes a special kind of ego to write an SSL library with no unit tests, not
turn on compiler warnings, and not use static analysis tools.

~~~
TwoBit
OpenSSL was written by monkeys:
[http://www.peereboom.us/assl/assl/html/openssl.html](http://www.peereboom.us/assl/assl/html/openssl.html)

~~~
MrBuddyCasino
Wow. That's really bad; who would have thought OpenSSL is such a piece of crap?
It's about time someone wrote a decent alternative!

------
rdl
Unfortunately OSX does not appear patched even in the latest developer center
10.9.2 build (13C62). Tested in both Safari and OS-distributed curl.
Chrome and Firefox are of course still fine since they use the NSS stuff, but
plenty of OS services use the OS crypto. (I'm violating NDA by commenting on
pre-release Apple stuff, of course.)

Windows or ubuntu bootcamp until they fix this, I think.

~~~
modeless
MITM of any SSL connection in Safari and other system apps, and they couldn't
even bother to have an OS X patch ready at the same time as the disclosure for
iOS? I think anyone relying on the security of OS X is going to have to
seriously rethink their OS choice after this.

~~~
atmosx
Indeed, the only secure OS nowadays is Linux. Everything else should be
considered _compromised by default_.

~~~
lotsofmangos
Linux is only secure if you secure it.

~~~
atmosx
Sure, but at least you have _a say_ there.

~~~
abalone
If you need to say something to make it secure then wouldn't it also be
compromised by default?

Security is not a Boolean. A better question is, which OS is more secure OOTB
for a given user.

~~~
lotsofmangos
For some users, none of them. Some users, no matter what, demand _password_ as
their password.

------
cantfindmypass
I just made this - it'll tell you if you're vulnerable.

[https://gotofail.com/](https://gotofail.com/)

Not very well tested, please let me know if it works for you. If you're on OS
X Mavericks or on iOS 7 and haven't patched you should get big scary red text.

Edit: posted here
[https://news.ycombinator.com/item?id=7282164](https://news.ycombinator.com/item?id=7282164)

~~~
plg
"If you're on OS X Mavericks or on iOS 7 and haven't patched"

how do I patch on OS X Mavericks? Software update shows nothing to update

~~~
cantfindmypass
There is no patch for Mavericks out yet. :-(

~~~
plg
I thought Apple's policy was to not release these sort of security notices
until a fix was in place?

~~~
cdcarter
That's partially why people are so upset. Mavericks is still very much
vulnerable to a publicly acknowledged bug with many PoCs out.

------
apaprocki
From looking at the code, it seems like there should be a way to patch the
instructions to "fix" the issue, albeit in a roundabout way. This code:

    
    
        if (sslVersionIsLikeTls12(ctx)) {
            /* Parse the algorithm field added in TLS1.2 */
            if((charPtr + 2) > endCp) {
                sslErrorLog("signedServerKeyExchange: msg len error 499\n");
                return errSSLProtocol;
            }
            sigAlg.hash = *charPtr++;
            sigAlg.signature = *charPtr++;
        }
    

is only executed in the TLS 1.2 case, but the sigAlg structure is always on
the stack. So if this code remains "skipped" in the non-TLS 1.2 case, then
later on:

    
    
        if (sslVersionIsLikeTls12(ctx))
        {
            err = SSLVerifySignedServerKeyExchangeTls12(ctx, sigAlg, signedParams,
                                                        signature, signatureLen);
        } else {
            err = SSLVerifySignedServerKeyExchange(ctx, isRsa, signedParams,
                                                   signature, signatureLen);
        }
    

the broken "else" case can be replaced with instructions to poke the proper
values into sigAlg and then a relative jmp to the code offset where the inlined
SSLVerifySignedServerKeyExchangeTls12 begins (0x86cb9), as that version of the
function does not have the bug.

I checked the SSLVerifySignedServerKeyExchange disassembly and, as expected,
the compiler optimized out the remainder of the function, so it isn't as
simple as sticking a few nops in.

~~~
lunixbochs
If Security.framework is fully buildable from their source, perhaps the
optimized out code isn't as much of a problem.

[http://opensource.apple.com/source/Security/Security-55471/](http://opensource.apple.com/source/Security/Security-55471/)

------
arielweisberg
How is this code not covered by a unit test?

I'll admit I don't think of myself as great at unit testing, but the first
thing I do when writing one for a new tool or class is use a code coverage
tool to look for uncovered lines and once I have written a few basic
behavioral tests I write tests to exercise and validate the output of the
uncovered lines.

This ensures that the tests I write to cover the public API don't miss
catching internal behaviors that require more subtlety to uncover.

~~~
llimllib
> How is this code not covered by a unit test?

...or an integration test. or a functional test.

When you write an SSL lib, I suspect that at some point you ought to test that
it checks f%^&ing certificates :/

~~~
sneak
Negligence or malice.

My theory is that Apple is spread too thin. Kayak.com reproducibly crashed
MobileSafari the day iOS7 shipped, and brand new iPad minis regularly kernel
panic and reboot.

It wasn't always like this.

~~~
andrewflnr
Hmm, not just my Nexus 7, then. Fortunately that's not very frequent.

------
badman_ting
If blocks without curly braces ಠ_ಠ

~~~
Pacabel
I cringe whenever I hear people advocating that it's okay to omit braces with
single-statement conditionals/loops/etc.

This is a perfect example of just how bad that suggestion is, and just how
disastrous it can be. It's totally worth typing a few extra characters here
and there to avoid these kinds of situations entirely.

~~~
ams6110
I do write single statement conditionals without braces, but I write them on
one line, which emphasizes the single-statement nature:

    
    
      if (something) do_something();
    

Instead of:

    
    
      if (something)
          do_something();
    

So it's much less likely that I'll confuse indentation with a block scope.

~~~
SideburnsOfDoom
> I do write single statement conditionals without braces, but I write them on
> one line

That's fine until you write

    
    
        if (something) do_something(); do_something();
    

or worse:

    
    
        if (something); do_something(); 
    

I've actually seen something very similar to that one.

You haven't really changed the dimensions of the problem by putting it all on
one line. Only the whitespace is different. Sure, it looks wrong to you; but so
does the incorrect code from Apple.

I don't understand the need to omit braces; it doesn't make the compiled
program any smaller. And denser source code is not always more readable, or
we'd prefer "? :" to "if" every time.

~~~
gsg
Note that braces won't save you from

    
    
        if (something); { do_something(); }
    

(Personally, I configure my editor to highlight if (...); in bright red.)

~~~
SideburnsOfDoom
This is true; there is no silver bullet. So we rely on defence in depth.

The first line of defence is consistent code layout; e.g. always use "{ }"
after an if, and only one statement per line. This aims to make simple typos
less likely to compile, make the code more readable and so make errors stand
out.

Then there is using and paying attention to the linter / compiler warnings /
jshint + 'use strict' / static analysis or whatever it’s called in your
language of choice.

Then there are unit tests or integration tests.

Any of these might have caught the apple error, but there are no absolute
guarantees.

IMHO it's a professionalism issue – gifted amateurs are great, and they can
succeed... most of the time. But a professional engineer would put aside ego
("I won’t make a mistake like that") and laziness ("I don’t need to type braces
this time") and do the careful thing every time. Because each step raises the
bar, and makes it harder to ship code with bugs like this. Because sometimes
it matters.

------
outside1234
I think what this really shows is that Apple has no unit tests to cover the
security of their products. And that is pretty scary.

~~~
rdl
I don't think you necessarily need unit tests to catch this; code review would
work (even I probably would have caught this if I'd actually read it; "fucking
gotos" would have drawn my attention to begin with, and if it was a _single_
line commit, even more so.)

The problem is Apple (intentionally) understaffs and overworks, so I doubt
they have the spare people to look at most commits.

~~~
sneak
If only they had the money to devote to quality engineers to ensure good
processes surrounding their most critical, security-sensitive code.

You can't really blame them for doing the best they can with limited
resources. Startup life is hard.

------
clukic
Reminiscent of the "Linux Backdoor Attempt of 2003":
[https://freedom-to-tinker.com/blog/felten/the-linux-backdoor-attempt-of-2003/](https://freedom-to-tinker.com/blog/felten/the-linux-backdoor-attempt-of-2003/)

------
Myrmornis
[http://opensource.apple.com/source/Security/Security-55471/l...](http://opensource.apple.com/source/Security/Security-55471/libsecurity_ssl/lib/sslKeyExchange.c)

Can someone explain to me what this code is? It appears to be implementing
standard crypto procedures -- are there not suitably licensed public
implementations that Apple can use? If they are writing it themselves, why are
they open-sourcing it? (I don't have a reason why they shouldn't; it's just
that I've never associated Apple with much interest in open-sourcing things.)

~~~
sillysaurus3
This isn't a defense of Apple's choices, but:

Using public implementations for crypto code has been difficult until pretty
recently. There just haven't been many high-quality, trustworthy, public
implementations of the types of crypto that real-world systems have wanted to
do.

Even now, the situation is pretty grim for anyone who wants to integrate
crypto code into an existing large codebase. The canonical answer is "just use
NaCl." However, I've had developers outright refuse to use NaCl because "it's
so hard to compile." They were saying NaCl has some issues compiling on
certain types of architectures, and issues compiling as a dynamic library.
When libsodium was suggested in response, they retorted that libsodium was
"NaCl but with modifications by people other than the ones you'd want making
modifications to crypto code." In other words, libsodium purports to be "NaCl,
but easy to integrate into your project," yet the reason it's easy to
integrate is because someone other than recognized cryptography experts
fiddled around with NaCl until it was easy to compile. So apparently it's an
open question whether libsodium is as trustworthy as NaCl, and NaCl is a pain
to integrate.

All of this implies that the current situation is _still_ not very good as of
2014 for people who want to just do crypto safely and in some standard way.
And rather than writing that code in 2014, Apple had to write it years ago,
when the situation was far more painful than the present day's shortcomings.

To answer your question, yes, there are public implementations which could
theoretically be used. But actually using them is... difficult. Not nearly as
difficult or as error-prone as rolling your own, but perhaps difficult enough
where someone who isn't an expert in cryptography might make the dire mistake
of believing that rolling their own is less difficult.

I wish Matasano would publish an open-source crypto library. An "NaCl that's
easy to use and that people actually trust." They have the resources and the
credit to pull it off.

Alternatively, I wish we could gather funds to have them audit libsodium.

Also, all of this is based on the assumption that NaCl or libsodium actually
meet the needs of what Apple was trying to do with this crypto code. It
probably doesn't, because NaCl / libsodium provides a complete replacement to
the entire crypto stack of an application. To take a wild guess, I'm thinking
that Apple needs to implement / interact with some other kinds of crypto than
what NaCl/libsodium provide. For example, certificate validation. So then
you'd suggest "use cryptlib!" but this is hampered by the fact that in
addition to all the previous considerations, cryptlib also isn't free.

~~~
Myrmornis
Thanks! Is it surprising that they open-source it? That gives their
adversaries an opportunity to look for weaknesses in their code. And as for
the benefit to Apple... it seems to be exposing the fact that they have a
mixture of tabs and spaces, and have let someone edit an absolutely critical
file with no code review or testing.

~~~
sillysaurus3
What you say is true. But security through obscurity doesn't work, because
people are very good at reverse engineering and decompiling. So your choices
are either: open source your crypto code and suffer some embarrassments which
people will forget and forgive, or have closed source crypto code that either
doesn't work or can be exploited in subtle ways. Pick your poison. :)

------
danieljh
Emerging tools such as gofmt and clang-format are able to automatically
reformat your code based on the language's rules rather than the habits of the
human behind the monitor. Using those tools, this specific category of issue
should at least be visible in the formatting diff.

Interesting. Never thought about it that way before.
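
The observation can be demonstrated directly: a syntax-driven formatter re-indents the second `goto` to the `if`'s own level, so the duplication stands out in the diff. A sketch, assuming clang-format is installed (the file is a toy reduction, not Apple's source):

```shell
# Write a toy reduction with the misleadingly indented duplicate goto.
cat > /tmp/fmt_demo.c <<'EOF'
int f(int x)
{
    int err = 0;
    if (x)
        goto fail;
        goto fail;
fail:
    return err;
}
EOF
# clang-format ignores the human's indentation and follows the syntax,
# so the second goto moves in the diff and catches the eye.
clang-format -style=LLVM /tmp/fmt_demo.c | diff /tmp/fmt_demo.c - || true
```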

~~~
Pacabel
Those kinds of tools aren't new or emerging. The "indent" utility for C has
been around for decades. I first remember using it on 4.3BSD, but it may have
been around before that.

~~~
tobiasu

        indent - indent and format C program source
    
        HISTORY
            The indent command appeared in 4.2BSD.
    

And for context, 4.2BSD was released in 1983.

~~~
kps
cb(1) was in Seventh Edition (1979).

    
    
      NAME
        cb - C program beautifier
    
      SYNOPSIS
        cb
    
      DESCRIPTION
        Cb  places a copy of the C program from the standard input on the stan-
        dard output with spacing and indentation that displays the structure of
        the program.

------
spiralpolitik
I'm going with accident, either through a merge or a double paste. I'm guessing
that if they had been running some kind of static analysis tool, it probably
would have flagged this as a duplicated line and someone would have caught
it earlier.

------
steeve
Does anybody know the .dylib or binary file where this function resides on
OSX? I can't find libsecurity_ssl on my system.

It should be relatively simple to NOP out the 2nd goto, if the rest of the
function hasn't been optimised away.

~~~
apaprocki
The static function that has the bug will most likely be inlined into the
other static function which will then be inlined into the outer public symbol
that calls it.

I believe you can dump the assembly of the entire function like this:

    
    
        otool -t -p _SSLProcessServerKeyExchange \
            -V /System/Library/Frameworks/Security.framework/Versions/A/Security | less
    

edit: I believe this could be the offending instruction:

    
    
        0000000000086df6        jmp     0x86e0d
    

edit2: That is actually in SSLVerifySignedServerKeyExchangeTls12(). Trying to
track down the real one..

edit3: Looks like the compiler did optimize it out if I'm reading it correctly
now. The code is around 0x86c97. It does the last if() call and then
immediately calls SSLFreeBuffer() and jumps to the end. :(

~~~
steeve
Confirmed here too :(

    
    
        0000000000086c80    callq   *%r14 ; SSLHashSHA1.update(&hashCtx, &signedParams))
        0000000000086c83    movl    %eax, %ebx
        0000000000086c85    testl   %ebx, %ebx
        0000000000086c87    jne 0x86c9c
        0000000000086c89    leaq    0xffffffffffffff38(%rbp), %rdi
        0000000000086c90    leaq    0xffffffffffffff58(%rbp), %rsi
        0000000000086c97    callq   *%r14
        0000000000086c9a    movl    %eax, %ebx
        0000000000086c9c    leaq    0xffffffffffffff08(%rbp), %rdi
        0000000000086ca3    callq   _SSLFreeBuffer

------
wreegab
The faulty block of code -- line 623 (function
SSLVerifySignedServerKeyExchange) -- looks like a cut and paste (or the
reverse) of the block of code at line 372 (function SSLSignServerKeyExchange),
except this one doesn't have the extra "goto fail;".

------
SimHacker
I'd love to see the git blame logs for that one.

Whoever wrote that code, even before it had the bug, was an incompetent cowboy
hotshot who wanted to show off how pedantically he knew the nuances of the
syntax, and to optimize the code for the smallest source file size by
penny-pinching brackets.

"ALWAYS USE BRACKETS" is one of the oldest and wisest rules in the book, and
this shows why. Anyone who actually writes security related code like that,
blatantly ignoring important coding practices in the name of "aesthetic
beauty" or however they rationalize omitting brackets, should be fired.

I bet they're the same cowboys who fanatically proselytize JavaScript
automatic semicolon insertion too, because that's another outlet for their
obsession with showing off how clever they are -- writing code that requires
them to deeply understand and consider every nuance of the syntax at every
keystroke, instead of facing the fact that they, or somebody who comes later,
might actually make a mistake, and coding defensively.

Maybe they really are fucking geniuses who never make mistakes, but that's not
true about everyone else who has to read and modify the code.

Apple should go through all their code and fix it, and dock the salary of
anyone caught not using brackets.

------
blazespin
One of the most valuable companies in the world, and they can't hire a
junior-level engineer to write unit tests with 100% code coverage? Unbelievable.

~~~
thrownaway2424
Or an intern to rewrite the entire thing from scratch. The code quality of the
entire library is awful. No comments. Impossible to determine ownership of
pointers. MIXED TABS AND SPACES?? WTF?

People are downvoting my comments all over this thread but really, if this is
not literally the worst code in iOS then I'm throwing away my iPad
immediately. The priesthood of cryptography is always warning the lay
programmer to avoid re-implementing crypto, and instead to use libraries
written by experts. But the crypto experts are apparently good at math and
terrible at programming computers. This Apple library certainly isn't the only
evidence of that. OpenSSL is largely uncommented and untested as well.

------
jdsnape
This shows another of the benefits of open-sourcing code for the community:
because we can see exactly where and what the bug was, steps can be taken in
other projects to stop it happening there (I think Adam mentioned he was going
to check for a test case in Chrome).

If the code was closed all we would have is Apple's release note which just
says validation steps were skipped...

~~~
ZoFreX
The code in question _is_ open source.

~~~
ZoFreX
Wish I could delete that comment - reading comprehension failure!

------
fulafel
Non-semantic indentation claims another victim.

------
lunixbochs
I loaded up the Xcode project for Security.framework and it complained about
disabled compiler warnings.

I also noticed their open-source download page [1] doesn't force you to https
by default, which probably matters to anyone looking at Security.framework to
fix SSL MITM problems :)

Doesn't look like they make their frameworks easy to compile for outsiders.
The only corecrypto headers I can find are in XNU, and they appear to be
outdated.

I'm hand-constructing what I can from context. Here are the defines from
ccasn1.h: [https://bochs.info/p/6jquf](https://bochs.info/p/6jquf)

You need these too:

    
    
        typedef int ccoid_t;
        extern size_t ccder_sizeof_raw_octet_string(signed long length);
    

Currently working on libaks.h and corecrypto/{cczp,ccec,ccec_priv}.h

[1]
[https://opensource.apple.com/release/os-x-109/](https://opensource.apple.com/release/os-x-109/)

------
jokoon
I wonder if this would have gotten as much attention without the Snowden
leaks. I guess yes.

Not to be a conspiracy theorist, but given the history of how computer
technology was released to the public, it still looks to me like there is no
absolutely safe, objective way for a layman to use a computer. It still looks
more like a tool of surveillance than a tool of communication.

If you want my opinion: were I Apple (or Linux, recalling the day a certain
commit with the "= 0" trick happened), I'd try to sue the person who was
responsible for this commit.

You could argue in court that it's impossible to prove whether it was
deliberate, but I guess we must set an example and start holding people
responsible for this. It's time to make the courts aware of computer security
standards...

~~~
tcheard
You cannot sue a programmer for writing a bug in software, or at least it
would be thrown out immediately. It is the company's responsibility to ensure
that the code is peer-reviewed and tested before deployment. There are so many
bugs in software that this would be a terrible standard to set, and no
developer would be safe. The best a company could do is fire them, but even
that is unlikely.

~~~
lutusp
> You cannot sue a programmer for writing a bug in software ...

Of course you can.

> It is the company's responsibility ...

Not if the programmer was hired for his independent expertise rather than to
be told what to do, and how to do it, by higher-ups. That's a common
arrangement in small companies.

> There are so many bugs in software, that this would be a terrible standard
> to set, and no developer would be safe.

All true, but surely you realize that lawyers and judges don't necessarily ask
themselves, "Is this fair?"

Remember that a group of scientists were successfully convicted of
manslaughter in Italy for _not_ predicting an earthquake. It was a different
country and legal system, but not that different.

[http://abcnews.go.com/International/scientists-convicted-
man...](http://abcnews.go.com/International/scientists-convicted-manslaughter-
failing-predict-italian-quake/story?id=17536977)

The handwriting is on the wall.

~~~
tcheard
> Not if the programmer was hired for his independent expertise rather than to
> be told what to do, and how to do it, by higher-ups. That's a common
> arrangement in small companies.

1. That does not excuse the software from a proper code review process; in
fact, any reasonable software company would probably see this as even more of
a reason to review the code. If I knew that a software company doesn't
properly review critical code, I would definitely think twice about using that
company's software.

2. Apple is not a small company, so that doesn't excuse them in this case.

> All true, but surely you realize that lawyers and judges don't necessarily
> ask themselves, "Is this fair?" Remember that a group of scientists were
> successfully convicted of manslaughter in Italy for not predicting an
> earthquake. It was a different country and legal system, but not that
> different.
> [http://abcnews.go.com/International/scientists-convicted-manslaughter-failing-predict-italian-quake/story?id=17536977](http://abcnews.go.com/International/scientists-convicted-manslaughter-failing-predict-italian-quake/story?id=17536977)

Come on this is the Italian legal system. No offense to Italy, I believe it to
be a beautiful country, but they have had a shocking record in their legal
system as of late. And I highly doubt that this would have stood up in many
other western legal systems.

But even if I concede to all your points, someone on HN should be smart enough
to realise that suing someone for a software bug would definitely not be a
smart move. And we definitely shouldn't be recommending such a thing. Just
like we shouldn't be recommending scientists to be convicted of manslaughter
for not predicting an earthquake.

~~~
lutusp
>> Not if the programmer was hired for his independent expertise rather than
>> to be told what to do, and how to do it, by higher-ups. That's a common
>> arrangement in small companies.

> 1\. That does not excuse software from a proper code review process, in fact
> any reasonable software company ...

You changed the subject. The original subject was whether a programmer could
be held accountable for a bug in a computer program. The answer is yes, under
civil law, that can happen.

There are companies whose computer science departments consist of one person,
which was my example, and in such a case, there is no company-wide software
review process, because -- as I already said -- the computer science
department consists of one person working alone.

Feel free to change the subject if you want.

------
fit2rule
Umm... code coverage, Apple? That's the scariest aspect of this issue: there
is clear evidence that Apple isn't testing with coverage. At all. Or else this
would never have made it to release...

------
RexRollman
I installed this last night and the update came in at 15MB. Surely a one-line
bug shouldn't cause a 15MB update? Or is it that some things in iOS might be
statically linked and those had to be pushed out as well?

~~~
simscitizen
The dynamic libraries in iOS aren't shipped as separate dylibs on the device.
Rather, they're essentially concatenated together as part of a single file
called the dyld shared cache that also lets Apple do some prebinding tricks,
interning of objc selector names across all system libraries, etc.

The system-wide iOS build scripts actually randomize the order of the
libraries in the shared cache by default (you can check this using
dyld_shared_cache_util against the shared cache file, which you can compile
from the dyld project on Apple's open source site). Since the ordering of
dylibs in this giant shared cache file varies between builds, you could easily
end up with a 15 MB diff even though all you've done is deleted a single goto
in source code.

------
lstamour
Can't wait for Apple to patch this bug? Downgrade to 10.8.5 ...

The answer here, folks, is don't update until all the bugs have been found and
fixed. Apple's golden days of OS updates were back in 10.6, when they spent
most of the time just fixing bugs rather than adding new features. Every OS
update since has had as many bugs as it fixed, though overall things were
getting better.

It's the old wait-until-SP1 advice. For security, it's often true. The
exception being if the patch cycle is fast enough... I'm not going to say for
sure whether one is better than the other.

~~~
tcheard
> The answer here, folks, is don't update until all the bugs have been found
> and fixed.

How can you ever be sure of this? Also, I can almost guarantee that there is
no non-trivial software that is bug-free. As a programmer, the only way you
can guarantee bug-free code is not to program at all.

So if you follow this advice, you will never update at all, which in my
opinion is bad because the benefits of an update outweigh the negatives.

~~~
lstamour
I was being facetious in that one line. I never expect "all" the bugs to be
fixed, but it's equally true that if you wait 8 months, you'll know better
what you're upgrading /to/. In this case, 10.9 had the bug, 10.8 did not. It's
usually the case that as new features are added, it's much more likely that
you've added new bugs. Whereas in slightly older code, you might be "out of
date" but at least you know where the bugs are, generally, by watching what
changes in newer releases and back-port as necessary. And as I think I've said
elsewhere, if code is updated frequently, this advice might change -- so long
as you're running a stable channel, you can both get new updates and enjoy
tested code.

------
aaronem
This seems to be the current thread on the topic, so I'll mention here in case
anyone cares that iOS 5.1.1 does not contain the vulnerability, according to
gotofail.com's test for it.

------
taylodl
Dijkstra always contended GOTOs were harmful!

In all seriousness, it's unfortunate that the C family of languages even
allows this kind of bug - we've all been bitten by it at one time or another.

~~~
kabdib
I'm calling you on the GOTO is evil thing. It's not. This is a well-structured
use of GOTO, and it has a long tradition in C shops. This is a well-understood
idiom, and you're seeing a _different_ failure.

Your choices for exception handling systems in the C-like languages are:

\- early return (whereupon, resource leaks and bloated cleanup code)

\- setjmp/longjmp (whereupon, utter chaos)

\- "chaining" success, where you don't use GOTO and have to _consistently_
check your current 'err' state before doing additional "work" (whereupon, lots
of bugs when you forget to handle an error case)

\- or GOTO to a single piece of error handling code (this is usually coupled
with an earlier hunk of code where you initialize variables and set up state;
this is essentially the C++ "constructor / destructor" idiom, but done by
hand).

The GOTO is the best choice, because everything else is worse.

(If you're using C++ exceptions you're in serious trouble. The effort required
to write correct code in the presence of arbitrary exceptions is very high,
and you're likely to get things wrong. Scott Meyers wrote like three whole
books on the subject, which should be strong evidence that something is Very
Wrong, and if you've ever used exceptions in a language that doesn't have
garbage collection you'll probably agree (and even GC doesn't save you). Most
production C++ code I've seen that uses exceptions simply catches them in a
bug-handling layer, which responds by scribbling some kind of report and then
restarting the app).

Often these GOTOs are wrapped with macros that hide the fact that GOTO is
being used. This _can_ hurt code quality, since things are less clear, but in
general they work well.

Basically they should have had the dead code warnings enabled, and listened to
them, and done better checkin reviews. GOTO is not the enemy here. In paranoid
code like this, generally you want to "claim success" as late as possible, so
setting 'err' to some failure code would have been a better choice.

But using GOTO wasn't a mistake.

~~~
Silhouette
_If you're using C++ exceptions you're in serious trouble._

Sorry, but I'm going to call you in turn on that one.

Of course there are some unwise things you can do in the presence of the C++
exception mechanism, and 15 or 20 years ago plenty of us did them. I think
probably Herb Sutter deserves more credit than anyone else for drawing
attention to these things, but the point is valid in any case.

However, we've figured out a lot since then, and for a long time writing
exception-safe code from scratch in C++ has been quite straightforward. Just
follow the common idioms, which mostly boil down to "use RAII for anything
that needs cleaning up".

The biggest difficulties with exceptions in C++, IME, generally come from
trying to retrofit them into code that was written without exceptions in mind.
If your existing code doesn't necessarily follow even basic good practices for
writing exception-safe code, the WarGames rule applies.

------
filtercake
So here is the shirt – proceeds will go to EFF and FSFE:
[http://teespring.com/goto-fail](http://teespring.com/goto-fail)

~~~
eric_h
in.

------
rythie
How do we even know we are downloading the real iOS 7.0.6? - surely it
downloads the new version using this library?

Edit: it's also digitally signed

------
jaimeyap
I continue to be baffled as to why Apple hasn't rolled out a security update
for this issue on OS X.

------
rlu
Come on Apple. This makes the whole blunder even more embarrassing.

Code Reviews? Static Analysis?

/sigh

------
javajosh
Something that concerned me was things like software update daemons connecting
with SSL. Can they be compromised? Or is this something that requires
connecting to a black-hat server?

~~~
MBCook
If all methods of updating need SSL, and SSL is broken, then there's no real
way to securely ship a patch.

------
Bootvis
Any theories on how this got committed and merged?

~~~
Codhisattva
Looks like the kind of mistake an automated merge can make.

~~~
thrownaway2424
Or even a manual merge.

------
nikbackm
This could not have happened with C++ exceptions and RAII instead of manual
error checking and goto for cleanup ;)

~~~
laureny
Any language that lets you omit braces for single statements is prone to this
bug, which includes C++.

~~~
hub_
Suggestion to clang and gcc developers: implement -Wmandatory-curly-braces

Short of making braces mandatory, it would at least flag bad style.

------
pechay
Looks like the result of a bad merge

------
callesgg
I hope someone releases a patch for jailbroken devices.

------
sitkack
We can assume that all Apple devices have now been backdoored.

------
IBM
If only it was open source this never would have happened.

~~~
gojomo
Once eyeballs are thinly spread over enough projects, all bugs are deep again.

~~~
throwawaykf05
In what possible reality was this not the inevitable outcome? The "enough
eyeballs" aphorism makes for a nice soundbite but in practice was always a red
herring.

------
thrownaway2424
The code is crap and while I can understand why a single person might have
written it this way, I think any organization of full-time professional
software developers should be collectively embarrassed to have accepted it.
The only reason to use the "goto fail;" idiom is to free two buffers before
returning. But the buffers in question are just sslBuffer structs that live on
the local stack. Their destructors will be called, in reverse order,
regardless of how the function is exited. If this code simply had a fucking
destructor for sslBuffer to delete the data pointer, none of the rest of these
call sites for SSLFreeBuffer would need to exist at all! And there are 24 such
call sites in this file alone. Anyone with any sense would choose the 2-line
destructor over dozens of calls to free buffers.

There's even this bullshit:

    if ((err = SSLFreeBuffer(&hashCtx)) != 0)
        goto fail;

Which is hilarious, because the only way for SSLFreeBuffer to fail is if the
buffer doesn't exist, but hashCtx is a local temporary object and it MUST
exist. Anyway the only thing we do by jumping to fail is to free the non-
existent object once again.

In short: wow.

~~~
_pmf_
> Their destructors will be called, in reverse order, regardless of how the
> function is exited.

Not in C.

~~~
thrownaway2424
Yes, one makes a conscious choice of implementation language. They aren't
dictated by the laws of physics. Choosing your implementation language is a
rather important step.

~~~
andrewflnr
Indeed, a step involving more factors than simply whether the language has
destructors.

