Multiple security vulnerabilities in Rails (groups.google.com)
229 points by alinajaf on Jan 25, 2016 | hide | past | favorite | 62 comments



I've grouped the patches for 4.1 and 4.2 here: https://drive.google.com/file/d/0BwnrE2iUdypUMkpqWVVPTXNzNVU... -- because downloading them one by one is tedious. Don't trust me: verify each file before patching. Some comments:

[CVE-2015-7581] Object leak vulnerability for wildcard controller routes in Action Pack: Look for routes that contain ":controller" and change them to something else. Hopefully you didn't have this odd construct in your routes.

[CVE-2015-7578/79] Possible XSS vulnerability in rails-html-sanitizer: You're safe if you use a single-page application that properly encodes for you. Stripping tags isn't the best way to filter XSS anyway, so if you're encoding, you're good.

[CVE-2016-0753] Possible Input Validation Circumvention in Active Model: Using params.permit! is negligent; you should not be doing that anyway.
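The mass-assignment risk here can be sketched in plain Ruby, using an ordinary Hash as a stand-in for ActionController::Parameters (the attribute names are hypothetical):

```ruby
# Plain-Ruby sketch of why whitelisting parameters matters. A Hash stands
# in for ActionController::Parameters; the keys are made-up examples.
raw_params = { "name" => "Alice", "email" => "a@example.com", "admin" => true }

# permit!-style: everything passes through, including the "admin" flag an
# attacker may have injected into the request.
permissive = raw_params.dup

# Explicit whitelist, analogous to params.require(:user).permit(:name, :email):
# unknown keys are silently dropped.
permitted = raw_params.slice("name", "email")
```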

[CVE-2016-0752] Possible Information Leak Vulnerability in Action View: render params[:id] is not defensive programming, so you should not be doing that either.
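One defensive pattern for this (a sketch, not the official fix; the template names are hypothetical) is to resolve user input against a fixed set of known templates instead of handing params to render directly:

```ruby
# Never pass request input straight to `render`. Resolve it against a
# fixed whitelist first; anything unknown falls back to a safe default.
ALLOWED_TEMPLATES = %w[summary detail].freeze

def safe_template(requested)
  ALLOWED_TEMPLATES.include?(requested) ? requested : "summary"
end

# In a controller this might then be used as:
#   render safe_template(params[:id])
```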

[CVE-2015-7577] Nested attributes rejection proc bypass in Active Record: Only applies if you're using nested attributes with a rejection proc. Wasn't my case. Just patch.

[CVE-2016-0751] Possible Object Leak and Denial of Service attack in Action Pack: DoS is bad, just patch.

[CVE-2015-7576] Timing attack vulnerability in basic authentication in Action Controller: Just patch.

-- Doesn't look THAT bad, but these need to be patched fast.


Wow you have a terrible attitude about security. "None of these are an issue, just program in this [very specific way that requires pre-knowledge of these vulnerabilities] and you're safe, anything else is basically negligence."

Your opinions about rails-html-sanitizer are particularly troubling as even if you use the sanitizer as suggested in the docs you're vulnerable and your retort is "well you should encode AND sanitise, not just rely on the sanitiser doing what the documentation says it should do!" Why?

I have no issue with the wording in the official CVEs. But this attempt at whitewashing the, frankly, pretty serious issues is deplorable.


Some programming styles are not defensive from a security perspective. As a programmer, one should acknowledge that passing a param straight from the request to "render", without validating it, is not a good thing, right? That's my point. Some issues here can be solved just by taking the right approach, though of course not all of them. XSS mitigation works better with encoding than with sanitization. If you want me to explain, I will.

But hey, you won't do any good complaining in face of this situation. Time to help people fix it. Peace.


I mean ideally you should 1) strip tags 2) encode 3) use X-XSS headers 4) use CSP headers. That way if any one of these fails, the others will catch it.

That said, there still shouldn't be security holes in the framework.
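As a sketch of the layered headers mentioned above (the values are illustrative defaults, not a policy tuned for any particular app):

```ruby
# Illustrative security headers for the layered approach described above.
# Values are examples only; a real CSP must be tuned per application.
SECURITY_HEADERS = {
  "X-XSS-Protection"        => "1; mode=block",
  "X-Content-Type-Options"  => "nosniff",
  "Content-Security-Policy" => "default-src 'self'"
}.freeze

# In a Rails controller these might be applied in a before_action, e.g.:
#   response.headers.merge!(SECURITY_HEADERS)
```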


Stripping tags before encoding is security theatre. Encoding is just replacing 2–4 characters; if it’s not implemented properly, there’s no way tag stripping is.


Well said, I agree.


Calm down a bit! The implication I take from the person you are responding to is not that the vulnerabilities don't matter, but that it's useful to reflect on them and make sure that you are always programming defensively.


They went through most of the issues and shifted blame for the issue from Rails to the developer/user.

"Hopefully you didn't have this weird name in your routes."

"Stripping tags isn't the best way anyway to filter XSS, so if you're encoding, you're good."

"is negligence, you should not be doing that anyway"

"is not defensive programming, so you should not be doing that too"

It isn't "reflecting", it is blame shifting. And there's a huge difference between defensive programming and being psychic; in this case it is more the latter, since apparently we should have known better than to use even features like the sanitiser the way the docs tell us to.


Stop making this about ego. The person was helping developers secure their apps and you're trying to make this about who is wrong. Making sure someone feels bad in the open source community won't make anything better.



More like "avoid these particular, obviously risky constructs, which any good security scanner will call out as high-priority trouble spots". The two particular constructs he warns against ("render params" and permit! followed by mass assignment) will both get high-priority warnings from Brakeman.


I have a much bigger problem with your post than his--and calling him out for a "terrible attitude" regarding security is a little bit funny seeing as he runs Gauntlet.io, which while not my favorite scanner out there is a legit tool that deserves more respect than you have afforded him. He has his bona fides; where are yours?

And, more concretely, I have no problem with his pointing out that some of--not all, but some--these issues are ones mitigated by decent, security-aware software development practices, like, oh, "don't spit something into a template out of your input parameters". Because tools break. Your libraries break. They will always break. Program defensively in all situations where hostile input is possible and never take for granted any attempts to defuse attacks; always favor belt-and-braces wherever possible and you'll generally do okay. It's entirely unnecessary to be a jerk towards him for saying this.


I'm not sure how who he is or what he does is at all relevant here.

Especially considering that he makes pretty obvious factual errors, e.g:

>[CVE-2015-7578/79] Possible XSS vulnerability in rails-html-sanitizer: You're safe if you use a single page application that properly encode for you. Stripping tags isn't the best way anyway to filter XSS, so if you're encoding, you're good.

If you don't want any HTML you aren't supposed to be using rails-html-sanitizer, it's specifically for scenarios where you can't do that.


True, but there are better alternatives in some situations, e.g., bbcode or markdown processing rather than stripping tags. The point for such scenarios is that whitelist is better than blacklist.


The point for such scenarios is that whitelist is better than blacklist.

from CVE-2015-7580:

Carefully crafted strings can cause user input to bypass the sanitization in the white list sanitizer

So people are using a whitelist, and this bug is in that whitelist. In other words, people are "doing the right thing" and are still vulnerable.


BBcode can go pretty horrifically wrong from a security perspective because you're still generating HTML markup based on unstructured, untrusted user input. There's a long and nasty history of XSS issues in forums that use BBcode where clever use of mismatched markup has made it possible to break the HTML generator enough to inject JavaScript and other nasties.


Absolutely, bbcode and markdown processing are significantly better alternatives.

Also, I'm pretty sure you're specifically supposed to use rails-html-sanitizer with a whitelist. (See: Rails::Html::WhiteListSanitizer)


Such arguments are nonsense. Sure the criticism might be a bit harsh but what on earth do one's credentials matter in such situations? Oh so if I'm not some superstar open source contributor I can't try to point out what could be better in the approaches of another guy who's famous and made huge contributions? We should only look at facts case by case and such a fame-based logic can only be detrimental.


> He has his bona fides; where are yours?

I called him out for his specific replies in this specific thread. I didn't call out his reputation or character.

Maybe you should stick to what they and I actually posted here today, and not try to draw the conversation off track into reputation wars.

> It's entirely unnecessary to be a jerk towards him for saying this.

I don't appreciate being called a "jerk."

I stand by what I said, and what I said was that I felt (and feel) that they have a bad attitude to security. They're trying to shift blame from the rails developers to every rails user, and their excuses are weak.

If you think that is "jerky," that's fine, but I feel like name calling and playing the reputation card for no reason is only going to take a conversation down a bad and unconstructive path.


>I called him out for his specific replies in this specific thread. I didn't call out his reputation or character.

Eh, I'm not sure about that. When I see "Wow you have a terrible attitude about security" I'm inclined to think that's exactly what you're doing. I don't have a horse in this race but you seem to be unnecessarily aggressive/antagonistic and you could have discussed the individual merits of his comment without doing that.


>>I don't appreciate being called a "jerk."

Then try not to sound like one? The OP went out of his way to compile and document the important bugs and post them here. You, on the other hand, completely ignored the value of his contribution and instead rudely called him out on a very minor (and frankly, subjective) tonal issue, and went so far as to call it "whitewashing," which implies that he has an agenda.


Did you not notice how OP had absolutely nothing to say about the more serious bugs and instead entirely focused on the less serious ones? (Except for the rails-html-sanitizer bug, which is fairly serious)

He does make multiple valid points though.


I don't understand comments like these.. He's pointing out a legitimate attitude problem which I, and many others, also agree with. There's absolutely no need for the flippant attitude.

If the users of your framework are consistently causing major security problems and the framework is built in a way that it can't be fixed without compromise.. I dunno.. document it? Maybe? Your documentation is basically the API to learning your framework, so if the API is broken to the extent of causing security problems, then it's not god damn production ready!

Remember, if every student is failing your class, the student probably isn't the one to blame.


>>Remember, if every student is failing your class, the student probably isn't the one to blame.

Except not every student is failing the class. Only those not following best practices are.

For example, if you're using params.permit!, you're simply being lazy. This is clearly documented[1]:

"Extreme care should be taken when using permit! as it will allow all current and future model attributes to be mass-assigned."

[1]https://github.com/rails/strong_parameters#permitted-scalar-...


But Rails Culture is to make developers smile!

Fixing security vulnerabilities and writing docs that actually reflect best security practices do not make developers smile.

And being accused of having a terrible attitude does not make a developer smile either, so this is obviously your fault!

/s


> [CVE-2015-7578/79] Possible XSS vulnerability in rails-html-sanitizer: You're safe if you use a single page application that properly encode for you. Stripping tags isn't the best way anyway to filter XSS, so if you're encoding, you're good.

So uh, what if your application is taking an input that's then parsed by another application with poor output encoding? Granted, the application that's properly encoding for the correct context is good, but any applications which don't do that but also use that same dataset are screwed because your application didn't perform proper input validation.

Defense in depth strategies exist for a reason. Input validation, output encoding for every single context (this includes using angular properly, for instance), anti-xss headers, CSP, the list goes on. These aren't all prescribed just to protect that one application. They're prescribed so that when applied correctly across all apps, they're all protected from negligence in one app harming the data.


Indeed, "any applications which don't do that but also use that same dataset are screwed". You're totally right. I didn't specify whether the encoded string is persisted or not. You considered that it isn't, which leads to the application with poor controls that you mentioned. Nice point. You're also right about defense in depth. I just wanted to highlight when an approach can be dangerous. That's one example, but there are others such as misuse of eval, constantize, etc. If the developer is aware of such risks and implements things properly, even some (not all, of course) newly disclosed vulnerabilities won't affect the application.


Yup to all the others you mentioned. I usually just teach about why trust matters and how to think about it as a developer. Works out well with training the devs at my current workplace. Back to your comments on the CVEs: the reason I replied the way I did was because your comments came across as presenting the only necessary mitigations per se. I know that's not at all how you intended it ("some comments" as opposed to "how to mitigate"), but my first pass over the text made me feel that way about it.

In any case, nice chat. I like productive exchanges.


As an everyday Rails user, most of these aren't really that surprising. Sure, one could cite 'defensive programming' or whatever, but a lot of these behaviors are warned against in the documentation.


and static analyzers would yell at you for one or two of them, too


Doesn't look too bad, although there are a lot of CVEs to go through:

- A timing attack if you're using HTTP basic auth

- A couple of GC related DoS attacks

- An issue with `accepts_nested_attributes_for` if you're using both the `allow_destroy` and `reject_if` options

- A validation bypass exploit if you're calling `SomeModel.new(params[:some_model])` instead of using StrongParams

- An information leak exploit if you're calling `render params[:something]` with raw user input

- A bunch of potential XSS exploits

The `render` issue looks like it could cause the most harm, but hopefully shouldn't be too prevalent. The XSS issues should be a quick fix as you only have to update `rails-html-sanitizer`, not Rails itself.


>- A timing attack if you're using HTTP basic auth

I'd say that qualifies as pretty bad.

How the hell does that even happen? Using constant-time string comparison is authentication 101. That's really not something you mess up by mistake; it's something you mess up by not understanding what you're doing. And that's all ignoring the fact that there's no reason not to use hashing here.


The vast majority of rails applications do not use HTTP basic authentication, and I would guess that most of the ones that do use nginx or apache to provide it. This was probably not caught until now because hardly anyone uses it.


I've seen a bunch of companies use rails HTTP basic auth internally.

And it's not that it wasn't caught until now, it's that it wasn't caught before the commit was accepted.


I expect (though I have not looked) that it's old code that wasn't eyeballed for security so much as for consistency and correctness. Once in, it doesn't leave.


I imagine every app on Heroku which uses basic auth is at risk then.



Turns out, deciding how to compare two strings when they may be differing lengths is tricky.


Do we really need CVE numbers assigned for all of these? More or less every application out there is vulnerable to some form of timing attack.


I see a timing attack in the list. It's fairly trivial to mitigate against this in the majority of languages nowadays [1] [2] [3] etc..

I presume this can also be mitigated by implementing rate limiting on your authentication endpoints, although that should also be implemented for other reasons.

[1] https://golang.org/pkg/crypto/subtle/#ConstantTimeCompare

[2] http://php.net/manual/en/function.hash-equals.php

[3] http://www.levigross.com/2014/02/07/constant-time-comparison...
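For Ruby specifically, the same idea is what ActiveSupport's secure_compare does; a minimal standalone sketch:

```ruby
# Constant-time comparison sketch in plain Ruby, in the spirit of
# ActiveSupport::SecurityUtils.secure_compare: bail out on unequal
# lengths, then XOR every byte so the loop's cost does not depend on
# where the strings first differ.
def secure_compare(a, b)
  return false unless a.bytesize == b.bytesize
  diff = 0
  a.bytes.zip(b.bytes) { |x, y| diff |= x ^ y }
  diff.zero?
end
```

Note that this still leaks the length of the secret via the early return, which is exactly the wrinkle the hash-then-compare approach discussed below-thread is meant to address.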



That's still broken. They've just pushed the problem deeper. Now instead of having a timing attack on the number of operations in the compare, the timing attack is pushed to the number of bytes that is hashed by sha256. Also, this opens up a new avenue in that now hash collisions (as unlikely as they may be) would be considered equal.


This is considered best practice for languages where you can't trust your "constant time" comparison won't be optimized out from under you.

Performing a timing attack requires control of the bytes being compared. If you can control the bytes of the output of a SHA256 then there are some Bitcoin miners who will pay you a lot of money.

If you want to be over-the-top about it you can get some secure randomness and add it to the values being compared before hashing, and then attacker would have even less control over the bytes being compared.
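A minimal sketch of that hash-then-compare idea in plain Ruby (sketch only; a production version would live behind a well-reviewed helper):

```ruby
require "digest"

# Hash-then-compare sketch: hash both inputs first so an attacker no
# longer controls the bytes that reach the comparison, then compare the
# fixed-length digests with a byte-wise XOR loop.
def hashed_compare(a, b)
  da = Digest::SHA256.digest(a)
  db = Digest::SHA256.digest(b)
  diff = 0
  da.bytes.zip(db.bytes) { |x, y| diff |= x ^ y }
  diff.zero?
end
```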


You don't need to control every byte for this to be catastrophic. You can't decode every password like you can with the previous comparison, but if you can generate a rainbow table that contains the password you're trying to crack, you can just do a timing attack using the hashes instead. My intuition is that this might even require fewer attempts than the original comparison assuming a reasonable password length, but I haven't done the math.


I suppose you could theoretically deduce a large enough part of the hash to perform a bruteforce attack though.

You don't need that much of the hash to perform wordlist attacks and find likely candidates.


Yes, this is true if you are talking about unsalted passwords, or if the attacker knows the salts. If there's an unknown salt, having the salted hash of a given plaintext input match the first few bytes of the target salted hash does not help you narrow the word list at all. If there is no salt, or the salt is public, then that's a case where you could append some ephemeral CS-PRNG output to both sides before hash-comparing... but probably better to fix the underlying issue.

I mean, it's funny to be far enough down the rabbit hole to be doing hash-compare. Then to say we wanted randomly keyed hash-compare is the final step. The nice thing is that adding some random bytes imposes a fairly minuscule performance hit, and it's purely computation, no additional storage. Probably still too slow to use for every comparison, but for constant-time critical comparison, it literally can't hurt.


>Also, this opens up a new avenue in that now hash collisions (as unlikely as they may be) would be considered equal.

Hash collision implies attacker has control of both the inputs, in this case we'd be talking about a preimage attack.

If your attacker can perform preimage attacks on SHA256 they can also most likely hack you via your package manager.


why

    return false unless a.bytesize == b.bytesize
instead of

    if a.bytesize != b.bytesize
        return false
    end
disclaimer: never programmed in ruby


It's idiomatic ruby.

First, as the conditional only has one statement, it's preferred to write it in its shorthand form. So instead of

  if a.bytesize != b.bytesize
    return false
  end
We start by writing

  return false if a.bytesize != b.bytesize
And then unless is the negated conditional, so we rewrite it as

  return false unless a.bytesize == b.bytesize
Which some people (myself included) consider easier to read: the 'unless' is easier to notice (more characters) than the '!='.


On one hand, I find that Rails often has security issues. On the other hand, perhaps it is just indicative of its popularity and interest. When a framework has no security issues, is it because there are none or is it just that no one can be bothered to look for some?


Excellent point. Also the fact is there will be people working on these immediately.


The last entries on the Rails security list before this were from June of 2015. Does that seem like a lot to you?


Aah, Aaron. Thanks. Everywhere he codes he refactors, fixes performance issues, and finds bugs. He's so my hero.


tenderlove is the best. Not only does he work on both Ruby and Ruby on Rails, he's also very active in the community (conferences, workshops, talks) and working hard on promoting a spirit of humility and respect (see the humorous Adequate HQ, the Friday Hug, the overwhelmingly positive and generous attitude he's always had towards anyone, famous or not, in the Ruby community). I have nothing but respect for that man and I'm glad he's working hard everyday on the language I so love.

Thank you Aaron.


Really really well said. So grateful to have him in our lovely community.


A quick `bundle update` appears to be just the ticket:

    …
    Installing rails-html-sanitizer 1.0.3 (was 1.0.2)
    Installing actionmailer 4.2.5.1 (was 4.2.5)
    Installing activemodel 4.2.5.1 (was 4.2.5)
    Installing activerecord 4.2.5.1 (was 4.2.5)
    Installing railties 4.2.5.1 (was 4.2.5)
    Installing rails 4.2.5.1 (was 4.2.5)
    …
    Bundle updated!


CVE-2016-0752 is explained in more detail here: https://nvisium.com/blog/2016/01/26/rails-dynamic-render-to-...


Great overview, thanks!


[flagged]


Is this the general view? What is the best course of action here? Stop using Rails or somehow protect it better?


The best course of action is to ignore trolls.


The HN handle is "trollian". Enough said.


What was the name of that still-in-development OS that is going to mitigate most forms of conventional attacks?

random quote: 'I used to consume cannabis on a daily basis, I suffer no short term memory loss, as far as I can remember....'



