
From a quick reading of the TLS heartbeat RFC and the patched code, here's my understanding of the cause of the bug.

TLS heartbeat consists of a request packet including a payload; the other side reads and sends a response containing the same payload (plus some other padding).

In the code that handles TLS heartbeat requests, the payload size is read from the packet controlled by the attacker:

  n2s(p, payload);
  pl = p;
Here, p is a pointer to the request packet, and payload is the expected length of the payload (read as a 16-bit short integer: this is the origin of the 64K limit per request).

pl is the pointer to the actual payload in the request packet.
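
(As an aside, n2s is an OpenSSL macro that reads a 16-bit big-endian value from the buffer into a variable and advances the pointer by two bytes. From memory it is defined roughly like this, so treat the exact spelling as a sketch:)

  #define n2s(c, s)  ((s = (((unsigned int)((c)[0])) << 8) | \
                            ((unsigned int)((c)[1]))), (c) += 2)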

Then the response packet is constructed:

  /* Enter response type, length and copy payload */
  *bp++ = TLS1_HB_RESPONSE;
  s2n(payload, bp);
  memcpy(bp, pl, payload);
The payload length is stored into the destination packet, and then the payload is copied from the source packet pl to the destination packet bp.

The bug is that the payload length is never actually checked against the size of the request packet. By sending an arbitrary payload length (up to 64K) together with an undersized payload, an attacker can therefore make the memcpy() read data far beyond the storage location of the request.
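
For comparison, the fix in 1.0.1g adds exactly such a check before anything is copied. Paraphrasing the patch from memory (so treat the exact field names as approximate), the request handler now does roughly:

  hbtype = *p++;
  n2s(p, payload);
  /* 1 byte type + 2 bytes length + payload + at least 16 bytes of padding
   * must fit inside the record that was actually received */
  if (1 + 2 + payload + 16 > s->s3->rrec.length)
      return 0; /* silently discard, per RFC 6520 */
  pl = p;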

I find it hard to believe that the OpenSSL code does not have any better abstraction for handling streams of bytes; if the packets were represented as a (pointer, length) pair with simple wrapper functions to copy from one stream to another, this bug could have been avoided. C makes this sort of bug easy to write, but careful API design would make it much harder to do by accident.
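
To make that concrete, here is a minimal sketch of the kind of wrapper I mean; the names are made up and nothing like this exists in OpenSSL:

  #include <stddef.h>
  #include <string.h>

  typedef struct {
      const unsigned char *data;   /* current read position */
      size_t len;                  /* bytes remaining */
  } instream;

  /* Copy n bytes out of the stream, refusing to read past its end. */
  static int stream_read(instream *in, unsigned char *dst, size_t n)
  {
      if (n > in->len)
          return -1;               /* peer sent fewer bytes than claimed */
      memcpy(dst, in->data, n);
      in->data += n;
      in->len  -= n;
      return 0;
  }

With something like this, copying a 64K "payload" out of a 3-byte request fails at the API boundary instead of reading past the end of the record.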




It is indeed astonishing how simple-minded this bug is. But these bugs come in all levels of complexity, from simple overstuffed buffers to logical ping-pong that hurts your brain when you try to follow it. We need to get rid of them once and for all. If the whole world can't use a certain tool effectively, then the whole world isn't broken; the tool is bad.


Machine level languages like C and C++ aren't necessarily bad tools, even in their current states. However, I agree that they might be bad tools for the purpose of writing security libraries.


They are not bad tools, but not the best either. If you spend mental stamina on trivial things, you have less left for the important ones, the ones a compiler cannot check.

This kind of tool (SSL) should be written in Ada or Haskell.


Why not Go, or JavaScript? I'm sorry, but specifying which language should be used is petty.

C and C++ are just fine, the fact that the OpenSSL guys cocked it up is not the language's fault, it is theirs. There are efficient ways to prevent this type of bug.


What are the efficient ways of preventing this kind of bug, if not type systems?

The parent had a good point and you should really try to look at Haskell before you say that kind of nonsense.

All the tools that are available for static analysis are basically extra type systems bolted on top of existing languages.

If you try to detect buffer overflows in the Linux kernel using static analysis, what you need to do is go through the source code and define invariants. Those invariants are TYPES in languages powerful enough to express them.

For example, the invariant that memory, or any allocated resource, must be freed can be expressed in Haskell.

In C++ it cannot be expressed. There are workarounds like RAII, but that does not give any guarantees.

If you do not think type systems and thus languages make any differences, you also cannot believe that formal verification makes any difference, because type systems are a weak form of formal verification. How "weak" depends on the language.

You should also read up on the Curry-Howard correspondence to learn something about the deep connections between types, programs, and proofs.

http://en.wikipedia.org/wiki/Curry-Howard_correspondence


JavaScript would be terrible, because it's easy to hide unwanted behaviour in counter-intuitive corners of the language.

Besides the language peculiarities, a garbage collected or interpreted language is very vulnerable to side channel attacks because of the large amount of complicated behaviour that is being glossed over by the language runtime. (One example would be garbage collection rounds and timing attacks, but I'm sure smarter people would find tons of features that leak secret information. Another example is on-demand JIT'ing when code becomes hot in certain runtimes. The timing of such a JIT stall could publish information you thought secure.)


C and AsmJS are equally open to side channel attacks; the difference is that AsmJS is memory-safe and C is not.

I'd take javascript over C any day.


Guarding against side-channel attacks in any language is hard. Guarding against them in Javascript is probably impossible. Whether you would take Javascript over C is irrelevant. It would still be a terrible choice for a security framework. Perhaps modern system languages, such as D or Go might be suitable.


I've felt that C makes this kind of bug easy to write because it makes doing the right thing hard. What you are describing is just a lot of work in C, compared to a language with something akin to Java's generics, which are in turn only an afterthought next to what the ML family of languages offers. What we're asking for is not that complicated from a PL standpoint. A generic streams library?

Economics plays an invisible part here. Someone writing a library has a limited amount of time to implement some set of features, and to balance that against other needs, like making the code "clean"/pretty and secure. In this case, pretty code and secure code are akin. Consumers would likewise have to balance out feature needs with how likely the code is going to explode. What it comes down to is that you aren't likely to have secure, stable code in a language that doesn't inherently encourage it.

It starts to be clearer then, that the more modern, "prettier" languages offer material benefits in their efforts to be more elegant.


That's what I like about Ruby. ;)

Even in C, Go or Python, I column-align any text that is remotely similar, so differences are obvious.

Clean code might be extra work up front, but the total work, once maintenance is amortized in, should be less. The value of reducing cognitive load in a large, supportable production codebase cannot be overstated.


There's no "just use X" type of answer in security.

Sep 2013

"All versions of the open source Ruby on Rails Web application framework released in the past six years have a critical vulnerability that an attacker could exploit to execute arbitrary code, steal information from databases and crash servers."

https://groups.google.com/forum/#!topic/rubyonrails-security...

Nov 2013

"A lingering security issue in Ruby on Rails..."

http://threatpost.com/ruby-on-rails-cookiestore-vulnerabilit...

Dec 2013

"Ruby on Rails security updates patch XSS, DoS vulnerabilities"

http://www.infoworld.com/d/security/ruby-rails-security-upda...


Ruby != Rails. We do a lot of ruby, but practically no rails.


C != OpenSSL. Some [1] would argue OpenSSL is not representative at all of what C can do. Maybe you should check out Redis for beauty [2] and joy [3].

On the same note, C != C++ either, and you can write large systems in C++ without ever using manual memory allocation. You can use only bounds-checked functions.

And you can have large security holes if you're not careful, no matter which language you pick.

[1] http://news.ycombinator.com/item?id=7556407

[2] http://johnpwood.net/2012/07/18/the-beauty-of-redis/

[3] http://news.ycombinator.com/item?id=2275413


But rails is written in ruby. So it can't have security bugs, right?


Observing that Ruby eliminates entire classes of bugs doesn't mean that Ruby eliminates all bugs; just that your attack surface is smaller.


-smaller +different.


Sure it can, and I'm not surprised it has. But if you're trying to point out flaws in ruby, at least use examples of flaws in ruby - not flaws in something written in ruby. It's not like web frameworks in other languages magically don't suffer from XSS injection attacks.


The issue at hand is a flaw in something written in C, though. I agree the point wasn't well made (is there any reason to think those errors would not have been made had the project been written in C?) but your objection isn't quite right.


The issue at hand is an error that is typical for C (unchecked out-of-bounds memory access). It's a class of error that usually does not occur in other languages. The vulnerabilities in Rails were XSS vulnerabilities and an information leak - both classes of errors typically found in web application frameworks.

The first is an example of an error made more common by the language design, the other an example of errors typical for a class of applications. There's a fundamental difference here. There's a ton of reasons to criticize ruby and it brings its own set of flaws and problems, some rooted in the language and some rooted in its ecosystem - but the given examples just show that web applications are hard to get right. That's why this is not "a point not well made" but rather "sorry, you're attacking a strawman here".


I'm not arguing the other side. I think you are correct. I just think you needed to point to the reason the parallel construction didn't work.


There is a "just use X". If you code in a language where you can express the invariants in your code, and make the compiler check those invariants, then your code is immune to all of the vulnerabilities that we have seen in OpenSSL.

The fact that these languages don't automatically do all my system administration tasks for me is not an argument against using them.

http://en.wikipedia.org/wiki/Nirvana_fallacy#Perfect_solutio...


Rails does plenty of "make life easier for the programmer" things that I would expect to increase the risk of security issues. Do you have those kinds of problems with, e.g., Haskell?


Haskell probably has/would have the same kind of problems, but finding examples will be a lot harder in the absence of a large, well-used web platform à la RoR.


If you worked at it, you could create this problem in Haskell. However, it is in fact the case that Haskell would be, in its own way, screaming at you; your configuration (or whatever) parser takes in some text and then returns something of type "IO Configuration"... what is that IO doing there? You don't have to be very skilled in Haskell to stop right there and have a serious think about what's going on. And in the absence of IO, or some other really obviously wrong type signature, there isn't much malicious stuff you can do in the parser layer. You could still have a vulnerability by doing something wrong when given certain configurations, but there's not much we can do about straight-up bugs. Even a proof language will let you make straight-up errors, they'll just force you to deeply, profoundly make the error instead of superficially make it... but we humans are up to the task!


No, it simply does not, because the language forces you to write pure functions. The type system invites you to express invariants.

There are very fundamental connections between strong typing, program verification, and proofs.

http://en.wikipedia.org/wiki/Curry-Howard_correspondence

Thus, the argument that Haskell probably has the same problems is simply false.

There are large web platforms in Haskell. Yesod is probably the largest ecosystem. It is clearly not as well used as RoR, but anyone can dig through large amounts of code to try to find these bugs.

What Haskell has in common with everyone else is bugs and misunderstandings in how protocols are implemented. Sometimes there can be fundamental bugs in the runtime system. However, large classes of bugs are fundamentally less likely to appear than in less safe languages.


Once you are doing functional programming, a bunch of classes of problems, including a bunch of classes of security problems, go away.

For example, here, if the guarantee of functional programming is that a given input leads to a given output and has no memory side effects, then your attack surface area is a lot, lot smaller.


Remember - Rails is a framework for webapps. Haskell is a language. You should be comparing Ruby to Haskell.


Write everything in Coq.


Perhaps more reasonably: write the core of your application in Coq and have it expose a DSL for writing business logic atop this infallible core.


You get it, but the opinion that "C and C++ are fine for SSL, the OpenSSL guys just screwed up" is plain wrong.

This is a question of priorities. We have speed and security. If you choose C/C++ (non-existent automated checking of memory access), you are choosing speed first, security second.

If security is critical then you need to choose a language that makes array out-of-bounds access well nigh impossible. This is an easy problem -- we have languages that will give this to us.

What percentage of exploits in the wild come from array (and pointer) access out of bounds? I'd venture to say it is above 50%.

Rather than have programmers everywhere "try hard to be careful" writing this code, let them use a safer language and have a few really smart folk work on optimizing the compiler for said language to make the safety checks faster (e.g. removing provably unnecessary/redundant checks).

People think that choosing C/C++ has a better business case (i.e. better performance / scaling) because "being really careful" works most of the time. The problem is that when Heartbleed (or the next array out-of-bounds access bug) hits, the business case's ROI no longer looks so much better than the safer path.

A better language won't eliminate all security holes but it can eliminate a huge class of them and allow engineers to focus the energy they used to spend on "being really careful about array access and pointers" on other tasks (be they security, performance or feature related).

EDIT: stating the obvious... there are good uses for C-style languages, but writing large bodies of software that need to be resistant to malicious user attacks is not one of them.



Thanks for this. How is this reading arbitrary memory locations though? Isn't this always reading what is near the pl? As in, can you really scan the entire process's memory range this way or just a small subset where malloc (or the stack, whichever this is) places pl?


The latter, and AFAIK the buffer doesn't get reallocated on every connection, so it should be unlikely that any private keys actually get dumped. However, I could be missing a way to exploit it.


Reading between the lines in the announcement, it sounds like dropping and reconnecting may cause it to read memory freed up from a prior connection. It may "just" be a matter of keep trying, or it may be a matter of opening lots of connections to consume resources, dropping them all, then connecting and seeing what was left on the beach after the tide went out.

BTW Amazon AWS/ELB is vulnerable, confirmed publicly by their support.


> It may "just" be a matter of keep trying

I gave this some thought earlier today, and expect that address space randomisation can make this bug eventually expose the server keys. You need to hit an address that has just been vacated by a (crashed) httpd worker.

Most implementations clear encryption key material on exit, but a crashed process never got to run that code.


In most systems, this will only work within the same process. Contemporary Unix kernels always allocate zeroed pages to processes, so it's impossible for a process to recover data from another unless there's a kernel bug.


If it just reads the up-to-64KB after that allocation, wouldn't you expect to see the server process segfault before too long?

Of course, servers helpfully just start themselves back up again.

As for scanning for key material, I wonder how to tell that 256-bit random data is the 256-bit random data you want.


When, for instance, an AES key is being used by OpenSSL, it is put into a 'struct aes_key_st', which is not random at all but quite easily recognizable when scanning memory.

The Cold Boot attack paper by Halderman, Schoen et al. here

https://citp.princeton.edu/research/memory/

...discusses this in detail in chapter 6, Identifying Keys in Memory.
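
For reference, that structure (paraphrased from memory of OpenSSL's aes.h, so details may be slightly off) is essentially the expanded key schedule plus a round count, which is what makes it stand out when scanning a dump:

  # define AES_MAXNR 14
  struct aes_key_st {
      unsigned int rd_key[4 * (AES_MAXNR + 1)];   /* expanded round keys */
      int rounds;                                 /* 10, 12 or 14 */
  };

Since all the round keys are derived from the original key by the AES key schedule, a candidate block of memory can also be checked for that internal consistency, which is the approach the cold boot paper takes.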

EDIT: fixed the reference


Well, one way is to brute-force it: iterate through every potential 256-bit string you dredge out of the canal and test it against the known public key.

If you can dredge up 64kB of fresh data every time, that's 511,744 tests per shovelful, which is quite a bit to sift through from a performance perspective, but it's also a trivially parallel task.

Additionally, folk might know of even better ways to narrow that down. For example, the data representation in memory might have easy to grep for delimiters.


65505 tests as I work it out, since it's unlikely to be non-byte-aligned:-

256 bits is 32 bytes

If you get 64kB of payload data back each time then it can only contain 65536-31=65505 different consecutive strings of 32 bytes.
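
A minimal sketch of that counting logic, with test_candidate() standing in for whatever check you run against the known public key (both names are made up for illustration):

  #include <stddef.h>
  #include <stdint.h>

  #define LEAK_LEN 65536   /* one 64 kB heartbeat response */
  #define KEY_LEN  32      /* 256 bits */

  int test_candidate(const uint8_t *candidate, size_t len);  /* hypothetical */

  size_t scan_leak(const uint8_t leak[LEAK_LEN])
  {
      size_t hits = 0;
      /* LEAK_LEN - KEY_LEN + 1 = 65505 byte-aligned 32-byte windows */
      for (size_t off = 0; off + KEY_LEN <= LEAK_LEN; off++)
          if (test_candidate(leak + off, KEY_LEN))
              hits++;
      return hits;
  }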


I successfully obtained the private key for my local Apache install this way once, though I'm having trouble getting anything reliable.


How many tries did it take to get the whole key?


This reminds me of what another programmer told me a long time ago when we were discussing C: "The problem with C is that people make terrible memory managers." So true.


I agree that it seems like an abstraction for this is missing, but I always have the feeling that what you're doing is covering holes in a leaking dam: you might get good at it, but you'll always have leaks.


I have always detested C (also C++) because it's so unreadable... the snippets of code you cite are just so dense, e.g. a function like n2s() gives pretty much no indication of what it does to a casual reader. Just reading the RFC (it is pretty much written in a C style) gives me the creeps.

The RFC doesn't mention why there has to be a payload, why the payload has to be of random size, why they are doing an echo of this payload, or why there has to be padding after the payload. Nor is it clear whether this data is just a regular C struct like the RFC makes it out to be (I didn't know you could have a struct with a variable size, but apparently the fields are really pointers, or it's just a mental model and not a real struct).
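
For what it's worth, the RFC describes the message in TLS presentation language rather than real C; quoting it roughly from memory, the "struct" is really a wire-format description whose variable-length fields are sized by lengths carried elsewhere in the record:

  struct {
      HeartbeatMessageType type;
      uint16 payload_length;
      opaque payload[HeartbeatMessage.payload_length];
      opaque padding[padding_length];
  } HeartbeatMessage;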

Apparently the purpose of the payload is path MTU discovery, something that is supposed to happen at the IP layer, but I don't know enough about datagram packets. I guess an application may want to know about the MTU as well...

I'm not here to point fingers, I'm just saying C is a nightmare to me and a reason for me to never be involved with system programming or something like drafting RFC's ;-).

But if one can argue that C is a bad choice for writing this stuff, then that is not an isolated thing. "C" is also the language of the RFCs. "C" is also the mindset of the people doing that writing. After all, the language you speak determines how you think. It introduces concepts that become part of your mental models. I could give many examples, but that's not really the point.

And it's about style and what you give attention to. To me, that RFC is a really bad document. It starts to explain requirements for exceptional scenarios (like when the payload is too big) before even having introduced and explained the main concepts and the hows and whys.

So while you may argue that this is a C problem and not a protocol problem, it is really all related.

And you may also say, in response to someone blaming these coders, that blame is inappropriate (and it is) because these are volunteers donating their free time to something they find valuable; the whole distribution and burden of responsibility is, naturally, also part of the culture and how people self-organize and so on.

As someone else explained (https://news.ycombinator.com/item?id=7558394), the protocol is really bad, but it is the result of more or less political limitations around submitting RFCs for approval. There is no reason for the payload in TLS (though apparently there is in DTLS), but my point is simply this:

If you are doing inelegant design, this will spill over into inelegant implementation. And you're bound to end up with flaws.

Rather than trying to isolate the fault here or there, I would say this is a much larger cultural thing to become aware of.



