Speaking as someone who isn't a Rails developer (but does use GitHub Enterprise for work projects), when this first broke I was on the side of github and thought homakov was acting irresponsibly.
Now that more background is coming out, I think he probably did the Rails community at large a huge favor here. Had this just been fixed quietly on GitHub, that would certainly have been better for GitHub's PR, but the wider community might never have realized the lurking horror that the Rails team appeared unlikely to do anything about, other than pointing people to the existing docs.
This situation shows that pointing people to those docs was clearly an inadequate solution. If GitHub (arguably the poster child for Rails apps outside of 37signals' own apps) could fuck this up, anyone using Rails could. All of this exposure to the problem is a net positive for everyone using Rails other than GitHub and the core Rails team, IMO.
It's not like this is a little known pitfall with Rails. Anyone who has read Hartl's Rails tutorial knows about it[1]. It's very commonly mentioned in basics for Rails security.
And I say all this as someone who has never professionally developed for Rails. My experience with Rails consists of a couple of half-done toy projects. I find it pretty surprising that GitHub made this mistake. But I don't think they should be burned at the stake for it. The bigger problem was how they initially handled the issue, which they're trying to rectify now.
[1] However, Hartl recommends using attr_accessible at the model level, while DHH says this preventative measure should be implemented in the controller, i.e.:
class PostsController < ActionController::Base
  def create
    Post.create(post_params)
  end

  def update
    Post.find(params[:id]).update_attributes!(post_params)
  end

  private

  def post_params
    params[:post].slice(:title, :content)
  end
end
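The whitelist behavior of that post_params helper can be seen in plain Ruby. (Hash#slice was an ActiveSupport extension in the Rails 3 era; it has been core Ruby since 2.5. The params hash below is a made-up example of what Rails would build from a form post.)

```ruby
# Hypothetical params hash, shaped like what Rails builds from a form
# post -- note the injected :user_id an attacker could have added.
params = { :post => { :title => "Hi", :content => "...", :user_id => 42 } }

# Mirror of post_params above: keep only the whitelisted keys.
post_params = params[:post].slice(:title, :content)

post_params           # => {:title=>"Hi", :content=>"..."}
post_params[:user_id] # => nil -- the injected foreign key is dropped
```

Anything not explicitly named in the slice, including the forged user_id, simply never reaches the model.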
I'm frankly amazed at how optimistic HN seems to be about "professional" coding practices. To this day I find "professional" developers writing fresh SQL injection vulnerabilities with some frequency.
I don't find it in any way surprising that this happened, any more than I would find it surprising that someone fell into a big hole you dug in a footpath.
"I don't believe it is the stunt that cost github five to six figures. The loss of wealth was already there from day 1, when Github developers did not read the Rails documentation and/or when Rails decided to make attributes publicly accessible by default. Today is merely a "correction" where, instead of Github's customers losing confidential company information without knowing it, it is now Github bearing the costs upfront, as it should be. In the "emperor has no clothes" story, would you say it was the kid who pointed out the emperor had no clothes who caused the emperor's embarrassment?"
You can also compare the whole situation with PHP's register_globals misfeature.
I think everyone won in the end. Homakov got the credit he deserved, Rails' security flaw got a lot of attention, and GitHub had the chance to prove themselves, which in my opinion they did.
It's an easy mistake to make, but arguably no easier than, for instance, not escaping input strings to guard against SQL injection. IMO it falls to the developer to set protected on vulnerable attributes. This is pretty basic Rails security practice.
EDIT: not 'escaping', but using hashes or formatted strings, etc., you get the idea.
I am glad you fixed this in your post btw, but I still think the initial thought of using escaping to solve issues like SQL injection shows the way we think about security is flawed.
In general we tend to see security flaws as programming flaws. In other words, the programmer makes a mistake and therefore there is a security hole. The problem with this approach is that programmers make mistakes all the time.
Certainly it is impossible to take all the weight off the programmer's shoulders. Mistakes will always allow a program to be misused, misdirected, and so forth. However, most security issues are best solved as architecture problems, not as developer problems.
For example, input sanitization is generally a bad idea* beyond rejecting certain things we should never see in inputs. It's far better to find ways of making the input safe for the database and for the web interface that don't depend on it being sanitized on input.
* This is because you can only sanitize based on how you want things to go on output, whether you are outputting from your program to the db or to a user interface of some sort. If you sanitize for HTML, you can't use the same info reasonably well for LaTeX, etc.....
So I am of the considered opinion that data should be checked for basic sanity on input (no termination sequences in the middle of input strings, etc), and escaped on use or output. If you have a framework to do this, then you centralize that logic so you don't have to think about it every time. This drastically cuts down on things like XSS, SQL Injection, and the like.
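That "escape on use or output, per output context" idea can be sketched with nothing but the Ruby standard library. (The comment string here is made up; CGI.escapeHTML is the stdlib HTML-escaping helper.)

```ruby
require 'cgi'

# Store the raw input; escape only at the output boundary, for the
# specific sink (HTML here -- a LaTeX or SQL sink would need its own
# escaping, which is exactly why sanitizing on input doesn't compose).
comment = %(<script>alert("xss")</script>)

html = CGI.escapeHTML(comment)
# => "&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;"
```

The raw string stays intact in storage, so the same record can later be rendered safely into any other output format.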
This sort of thing again strikes me as something the framework should prevent. That's not necessarily a flaw of Rails if you use Rails as a toolkit for your own application frameworks. However, it is an architectural flaw, not a programming mistake.
I'm not a Rails/Ruby user but any decent database abstraction layer or ORM should be using bound parameters for all literal values. "Escaping" of SQL strings is best left to the database driver.
You're correct, and Rails does do this (it handles parameters in a way that prevents SQL injection attacks). However, it is always possible to circumvent those protections and shoot yourself in the foot, e.g. by concatenating your own raw SQL string and pushing it through.
Or you could disallow raw SQL strings and always construct programmatically (e.g. building ASTs). All of these recurring holes are due to bad design, period. Imagine if your microwave manufacturer said "ultimately it's up to the consumer to avoid irradiating himself". Nobody expects you to be saved from sticking a drill into your face, but nor should it electrocute you by forgetting to do something.
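A toy illustration of that bound-parameter idea (select_where is a made-up helper, and the table and column names are assumed to be program constants, never user input): the value never gets spliced into the SQL text at all.

```ruby
# Toy query builder: the SQL string carries only a placeholder; the
# value rides alongside as a bind parameter for the driver to handle.
def select_where(table, column, value)
  ["SELECT * FROM #{table} WHERE #{column} = ?", [value]]
end

sql, binds = select_where("users", "name", "O'Brien; DROP TABLE users")
sql    # => "SELECT * FROM users WHERE name = ?"
binds  # => ["O'Brien; DROP TABLE users"]
```

Because the hostile value only ever travels in the binds array, there is no string for it to break out of; the driver sends it to the database out-of-band.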
Rails, by default, does things like escaping input and output strings, CSRF protection, masking password fields in the logs, etc. So why doesn't it do the same with attribute assignment?
I'll take a guess and say it's because it's not possible to magically generate that code. If I wrote a code generator, there's no way that program could know which attributes should be accessible. The only way to get a scaffold to work out of the box is to require some user configuration or allow all attributes to be modifiable.
As homakov suggested, you could at least define *_id attributes as "protected" by default. Only being able to change attributes on your own records would probably cause a lot less grief.
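That suggestion could be sketched in plain Ruby as a default filter over mass-assigned input (strip_foreign_keys is a made-up name for illustration):

```ruby
# Drop any attribute that looks like a foreign key (*_id) before mass
# assignment; such fields would then require explicit assignment by
# the controller rather than riding in on user-supplied params.
def strip_foreign_keys(attrs)
  attrs.reject { |key, _| key.to_s.end_with?("_id") }
end

strip_foreign_keys(:title => "hi", :user_id => 42)
# => {:title=>"hi"}
```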
(1) the nature of the suspension was not communicated to Egor at the onset of the situation, nor,
(2) noted in the blog post [1] describing how Github "detected the attack",
I am inclined to believe that this is a response to the furious reaction to their suspension decision and was not, as this post implies, the game plan from the beginning.
It's healthy that they've reversed their suspension but the lack of transparency (not to mention potential dissembling) on the decision process regarding the revocation is still troubling.
If you're investigating a security situation, you don't have to say squat while it's in progress. Whether you were in the right or the wrong, explain what happened and apologize to those who were affected once the facts are understood, as they have done here.
It takes time and energy to come up with responses such as this (not a lot, but every bit counts in an emergency), and those are resources you should be using to solve the problem. Not to mention that saying the wrong thing is worse than saying nothing at all.
It's basic emergency management: 1) stabilize the situation, 2) figure out what's going on, 3) fix it, and then 4) explain what happened to the stakeholders. I see nothing wrong with github's actions here.
That is irrelevant. The world doesn't stop turning on the weekend. It was in Github's best interests to get out ahead of this, and they did. They don't need to be commended for it just because it's Sunday.
Actually, the world _does_ stop turning because it's Sunday. Anything happening on the weekend is emergency management, which takes time to scramble. I work in network/information security, and if we had a security incident on a Sunday, it would go to our backup team (of one). If they (he) deemed it critical, they would notify the security director, who would notify my boss, who would notify the rest of the team. This would take 3-5 hours, plus the amount of time it took us to get to the office and actually investigate the emergency. While we were investigating, we would likely shut down everything (including user accounts that were involved).
Weekends are time away from the office. I don't understand how you could expect the same amount of service on a weekend opposed to a week day. Do they deserve to be commended just because it was a weekend? Internally maybe, yes, externally maybe not. Fact remains that the weekend is when they are most likely to be short-staffed.
I agree. But we're talking about software as a service here, and that kinda makes it a different ballgame. Folks are paying money (in some cases) to use your software living on your servers (or, at least servers that you manage). I would certainly hope that someone who can deal with outages or penetrations is actually working on the weekend (perhaps that person doesn't work Monday and Tuesday?).
Would you say "eh, but it was the weekend" if an attacker purged your paid enterprise repo on a Sunday morning?
Not all Muslim countries do, though. All North African countries have a Saturday/Sunday weekend except Egypt, and even in Egypt some companies make sure a subset of their employees work on the weekend because, well, most of their partners are working.
I think it could be reasonably inferred that when someone says "the world stops turning", it's being used metaphorically. Kind of like "stop the presses".
Playing nice with a hacker who just broke into your service shouldn't take priority over:
1. Making sure he doesn't continue breaking into your service (by suspending his account)
2. Fixing the security flaw he used to break into your service
3. Apprising your users of the situation.
I feel for the kid--he's just 18, and if he gets some good judgment to go along with his technical skill he'll go far. But I don't understand the nerdrage at Github. People trust Github's service and their software to protect proprietary code; their response has been everything you could hope for in the interests of 100%-1 of Github stakeholders, at the expense of not communicating well to Egor why and how long they were suspending him for breaking into their service.
I agree with you up to a point; those certainly should be their priorities. However, suspending his account didn't help them accomplish any of those goals.
I think we can all agree that it certainly didn't help them fix the vulnerability or communicate with users, right? (Actually, it arguably did the reverse...) But it also did absolutely nothing to stop him from breaking into the service; the exploit works for any user, and Github allows anyone to create an account instantly and for free. Until the exploit was fixed, anyone including him could have created an account and exploited their service.
A better defence of Github would be that they couldn't have been expected to know that, and so they shouldn't be slammed for doing something unproductive and pointless that distracted them from the three core priorities you list above. And I agree! If something looks fishy, banning everyone involved and sorting it out later is actually a pretty decent idea... even if (as here) it proves to be a complete waste of time in retrospect.
Because it's trivial for him to set up another account and break the service from there. His ability to do this exploit wasn't tied to his specific account. His point was that anyone could be doing this.
=edit= Sorry, I am apparently agreeing with the comment I replied to.
In retrospect, it wasn't that helpful unless they also suspended new account creation, or at least kept an eye on it. But they didn't have that information at that time.
During a security breach, is suspending a malicious user who's currently attacking you really something you sit around and discuss beforehand? Or do you just do it?
I'm inclined to believe GitHub did exactly what they thought was best. That is to say anyone's knee jerk reaction would be to suspend the account as quickly as possible, for whatever good it might do... I can't fault them for what they did. When someone hacks you, you need to act very quickly because your customers are more important than your ego. I do believe they should provide incentive for responsible disclosure though.
I agree with you, but even if this was a reaction to public outcry the real reason to be disturbed is that top-notch Ruby devs like the GitHub guys didn't use attr_accessible. I can't wrap my mind around that.
That you have to use attr_accessible is known throughout the Rails community since "ever". Only toy apps don't use it.
It's like saving passwords in plaintext, only arguably even worse.
This is true. We actually don't use mass assignment that often. He happened to catch 2 out of 3 spots that still used it. Everywhere else is explicit about what to accept.
We use this (posted by @dhh) https://gist.github.com/1975644 in some spots, or simply just Hash#slice. We have some other thoughts on making params access more explicit. The problem with explicit patterns is they can get left out if a developer forgets.
Or more specifically, since the public keys objects require an associated user, the old chestnut:
@user.public_keys.build(...)
.. where @user is retrieved in a role based manner (that is, you only get the right @user if you are authorized to get it.)
Ultimately, this is less an issue of mass assignment specifically and more an overarching one of allowing a user to perform an action in the guise of another. But, of course, these mistakes are commonly made by developers of all skill levels! :-) (me included)
You can do @user.public_keys.find(params[:id]).update_attributes(:user_id => 25)
It's the mass-assignment protection on foreign keys that prevents you from assigning one of your public keys to someone else; ensuring the chain is correct doesn't necessarily help with this scenario.
Sorry - updated. I was focusing more on the notion of GitHub suspending his account with the intention to re-instate it after clearing uncertainty. Their lack of communication with him is inconsistent with a rational reaction to that assumption.
I'm inclined to agree with you, but, to play the devil's advocate, "we're temporarily suspending you...oops, no, it's permanent after all" is not a position they want to find themselves in.
The usual phrase would be along the lines of "pending a full investigation", which is what they should have said, but people often fail to. PR is hard enough when you're calm and collected, in the middle of a shitstorm it's a miracle if you can fully articulate what is and will be happening.
The account was suspended, not deleted. That was made clear to everyone. Suspended does not mean banned, it does not mean canceled, it is impermanent by definition.
The problem I see with this blog post is something I haven't seen mentioned in the comments. It's not GitHub's place to set policy on what kind of disclosure is or isn't "responsible". Egor Homakov's responsibility is not to GitHub; his responsibility is to other users. His moral duty upon finding a security vulnerability is to act in such a way that other users will be minimally hurt. It appears that he has fulfilled that responsibility spectacularly in this case.
GitHub has no business demanding his, or your, agreement to a legal contract that prohibits you from exercising your best judgment in such a case.
Furthermore, "responsible disclosure" is a propaganda euphemism for "allowing irresponsible vendors to cover their asses, possibly at the expense of their users". Terms like "responsible disclosure" have no place in a serious discussion. Please see the blog post by the Google security team at http://googleonlinesecurity.blogspot.com/2010/07/rebooting-r... for further details.
Honestly, I am less likely to want to use github in light of this announcement. You handled this incident badly, and then didn't acknowledge it, nor offer the much-needed props to Egor for exposing an issue you guys didn't think was serious.
If this is how you react to someone who WANTS to tell you about a serious problem, what percentage of the people who don't love you enough to put a tattoo on themselves are likely to report an issue, versus sell it to one of the many buyers of 'sploits who exist out there?
The reality is that these folks generally don't want to hurt you; they just want you to understand the thing you won't admit. When it happens and you've got egg on your face, grow a pair and cop to the fact that you/the system failed, and GIVE PROPS. Fix the issue, move on, and award the guy who did you a solid by finding the issue his 15 minutes of fame.
He didn't WANT to tell github, he wanted Rails Core to pay attention to him, and github was the sacrificial lamb.
He doesn't deserve props from github; he just exploited their app to make a point (to rails core). He never disclosed anything to github, from what I can tell.
I hate to be the one pointing this out, but it's a shame that a company like GitHub will reward responsible disclosure with just a thank-you and the promise not to pursue legal action.
If what you're implying here is that they should be offering a bounty for discovery of bugs, I'm not necessarily disagreeing with you, but to expect them to get a policy about that and to allocate funding for those bounties on a Sunday, within 24 hours of a major, public breach seems a little unreasonable.
Another in the long line of Silicon Valley companies screwing up and then getting kudos from the community for handling the screw up. It's getting tiresome to see people get accolades for basically doing their job, after failing to do their job.
This is a great response to this clusterfuck. These guys have a responsibility to react slowly and deliberately. I'm impressed they got the problem solved and two public statements out on a Sunday. This is why I'm happy to be a paying customer.
Impressed by how Github handled this. It's easy to get red-faced and smack down with a hammer, and it's hard to remain level-headed and come to the best decision. This seems like it was, in fact, the best decision.
What the guy did was not only morally irresponsible but also criminal.
The security community has long had an accepted standard of responsible disclosure, which involves informing the vulnerable party beforehand and allowing them time to fix the problem before publicly disclosing it.
Publishing a vulnerability before giving those vulnerable a chance to fix it is irresponsible, using it to compromise a system is criminal. He was getting off light from getting his account suspended, GitHub could push for a criminal prosecution resulting in deportation and serious jail-time for his actions.
It doesn't matter what he did after the compromise (whether it was benevolent or not), the compromise of an account not held by him puts him clearly into the "black-hat" category.
The vulnerability was public, known for years, and no doubt already exploited. Making a splash about it on GitHub, popular as it is with rails hackers, is the best thing that could happen to the security of the rails ecosystem.
The class of vulnerabilities was known, but not this particular case. That's like saying people should publish 0-day buffer overflows because it's a known vulnerability class.
Gaining unauthorized access to an account and using it falls under pretty much any standard definition of "black-hat", and in practical terms it breaks computer security laws in pretty much every legal jurisdiction that has them.
If you steal a loaf of bread to feed your family it's still a crime.
Regardless of whether or not you think what he did was justified it's still illegal. And there's very few serious crimes for which "publicity stunt" will generally be regarded as a good reason.
The straight-up ban is a great short-term solution, but in the long run being easy to work with on security issues and not alienating your users is a better road. As they stated, this user wasn't malicious, so banning him only causes grief and could turn him from an ally into an enemy.
There was a great post a couple of days back that in effect said: It's not a matter of _if_ your security will be compromised but _when_. By being open to your users disclosing this information you're helping to keep your product secure. IMHO 37signals does a good job of this by linking and giving credit to those that have discovered security flaws in their apps (http://37signals.com/security-response).
There's a very good reason not to do that. I'm a Wikipedia admin, and we have a policy on blocking and banning that sort of makes sense. A block is done to "prevent damage or disruption". That might be short term or indefinitely, but not permanently. You put in a block when there's an issue. But a ban is a formal statement that you are no longer allowed in. You only do that after some thought and consultation.
Github is actually pretty similar given that the commercial side of Github is fuelled by the free, open-source side. There is a feeling of community ownership. Going straight for the ban without some thought and soul-searching is confusing fixing an issue with making a more detailed judgment about whether the user ought to be on the site at all.