Google, Facebook, and now over 70 companies do grant tacit permission for anyone to test their systems, and will pay the researchers for a disclosure, as long as they follow the program rules, which are usually quite reasonable. I'm serious when I say that few people are more thankful than myself for the existence of security bug bounties.
However, one thing has always crossed my mind: since the legal definition of authorization is still very fuzzy, what stops a third party from going after a researcher, even though the company who owns the server which was technically hacked has no interest in filing any complaint against the researcher?
To clarify my question: the recent Brazilian law regarding computer hacking establishes that only the owner of the hacked computer can file a complaint against the attacker, and legal proceedings can commence only after such a complaint has been filed. Does it work the same way in the U.S.? My understanding of American law is very weak, but I know that, for some crimes, the victim does not have a say, i.e. the state will prosecute regardless of the victim's wishes.
> only the owner of the hacked computer can file a complaint against the attacker, and legal proceedings can commence only after such a complaint has been filed. Does it work the same way in the U.S.?
After a US law enforcement agency is notified of a complaint by the victim of a crime, it forwards the case to a prosecutor. At that point the victim can no longer drop the charges; the only person who can drop the case is the prosecuting attorney, and they occasionally do drop cases that no longer make sense to pursue. But prosecutors don't get 'cybercrime' cases very often, and those cases often make headlines, especially these days, so I doubt many lawyers would voluntarily drop that opportunity for their résumés and work on the usual murder or drug trials instead.
There is something really disturbing about a system that allows personal ambition to play such an important role in how the institution of justice operates in practice, at least in specialised matters like this.
Expensive attorneys and ambitious prosecutors, each trying to sell half-truths to more or less ignorant judges and juries. It makes me wonder if some of these servants of justice are forgetting that, their specific roles aside, their common goal is to reach an honest conclusion about whether someone actually did something wrong, which implies an effort by everyone to understand in what ways the actions in question are harmful and how that harm balances against fundamental freedoms.
Judges and juries aren't ignorant. They just don't give a shit about the things that are important to you. To your typical juror off the street, "hacking" into a computer for ostensibly "white hat" reasons is no different from breaking into a store to "test the alarm system." The reaction is not "oh yes, we have to make sure our legal system is flexible enough to accommodate this sort of 'security research'" but rather, first, "I don't believe you" or, at best, "didn't your mother ever tell you it is wrong to mess with other people's things without permission?"
You don't think the supposed difference in values is the result of ignorance of how the internet works? Or of what a security researcher does? Or that security research exists as both a hobby and a profession? Or that the security of the internet at large depends on people who do this? That the every-other-month theft of giant numbers of credit cards or passwords can be prevented if white-hat hackers find the security hole first? I would expect most people don't know that big companies like Facebook or Google offer bounties to people who find exploits, or that bugs that threaten the entire internet are routinely found by people who donate their time to protect people they don't even know, and who don't know they exist.
The facts of the case: someone broke into a computer system without permission.
The inability to interpret those facts in the light of what a security researcher does isn't a result of different values, but a lack of knowledge of the context. People who don't know how computers or the internet work are open to being told whatever story the prosecution decides to spin.
Edit: I think the ignorance is actually made clear by the example in the GP. Imagine some good Samaritan is walking past a jewelry store after closing time. They notice that the front door is ajar, and upon testing they find that the alarm doesn't go off when they enter the store. So they call the owners and wait in the store until the owner can get there and make sure the store is secure.
Do you think it's likely that this person would be prosecuted? Or, if they were, that the prosecutors and judge would throw the book at them to "make an example"? People understand that scenario and are likely to treat it with leniency in a way that they don't understand the equivalent scenario in computing.
P.S., Always a pleasure to be slapped down by tptacek :)
In your jewellery store example I think it may be reasonable to prosecute the person.
In increasing levels of seriousness:
1. The person is walking by the store and, in the course of their everyday activity, sees that the door is ajar; they then contact the owner. This seems fine to me.
2. The person is walking by the store, sees the door ajar, and then, altering their normal activities, decides to actively test the door to see if they can break into the store; they can, and then they contact the owner. This seems dodgy to me.
3. The person chooses to visit each jewellery store in town to see if any have a door ajar. This definitely seems inappropriate.
The reason I come down opposed to the person in the second example is two-fold.
Firstly, ignoring intent, where do you draw the line on an acceptable level of 'break the security' activity?
- Thinking that the door is ajar and pushing on it?
- Seeing that the lock is vulnerable and picking it?
- Finding a ground floor window and breaking through it with a brick?
The resolution I choose is that if you have gone out of your way to subvert the security of my stuff without my consent then you have crossed the line. Gray is black.
Second, I don't care about your intent. Every security system will break at some point, and so I view the existence of doors and locks as mainly being about roughly outlining the boundaries that I expect to be respected. If I want to improve my security then I'll hire someone to advise me on how to do it. If I come home tonight to find a stranger who has broken into my house in order to prove that it's possible then (1) I already know, and (2) they have just caused the harm which they are nominally trying to protect me against.
But most likely a security researcher will fire off some multiple of a thousand probes to see if the door is open. Collateral damage is likely. This is not what is happening in your jewelry store door case.
> That the every-other-month theft of giant numbers of credit cards or passwords can be prevented

These things can be prevented by the folks in charge paying attention to the alarms going off in the back.
But they should give a shit before they can pass judgement, because these things aren't important just to me, and because there are actual victims involved (who might be different from the accusers).
What about the case where no private data were actually accessed, say, where the researcher only compromised his own account?
Or the case where he hacked a device that he bought, violating the producer's Acceptable Use Policy?
Or the case where someone automated the retrieval of data that he already had legal access to, like, if I recall correctly, Aaron Swartz?
All these examples are unique and would fail any physical-world analogy, so they should be examined and judged differently, by people who do give a shit, who want to make the effort to understand their unique aspects, and who are actually able to. I'm not sure that's the case.
My general point is about how we found ourselves in a system where servants of justice, like prosecutors, appear to treat their job "just like any job" (at least in cases they might consider abstract: "hacking", with less clear and direct effects than "murder"), where they can put their careers first and ignore any consequences to others. Or where someone has to bear enormous defense costs to stand a chance, or be coerced into pleading guilty or into abstaining from exercising what should be his right, out of fear of finding himself in such a situation.
The whole point of juries is for them to judge you against the norms of society at large. The fact that a small group of people might be operating under different norms is irrelevant. They don't have to understand your values in order to judge you. All they need to understand are the facts and the law.
The prevailing norm is that property rights are sacrosanct, any invasion of those rights is considered suspicious, and explanations about benevolent intent are disbelieved. There is no general right to "tinker" with other people's property without permission, for fun, for research, or for any other reason. We are not a society that requires security measures to be effective in order to serve as a signal to keep out. A velvet rope is as effective as a steel door for the purpose of signaling that access is not allowed.
This is not a matter of prosecutors putting their careers ahead of the spirit of the law. It's about hackers not understanding that we're a society that requires you to keep your hands to yourself.
NB: I have a beef with the CFAA, but it's not with the spirit of the law, but rather the fact that criminal penalties under the CFAA are totally out of line with those in analogous physical scenarios. The standards for trespass on digital networks shouldn't be higher than the standards for trespass in the physical world. But juries can't do anything about this problem, and judges really can't either. It's Congress's problem for putting the felony escalation provision in there.
Researchers who trespass on a digital network aren't the only ones affected, though.
Take, for example, this quote from the OP article:
"Lanier said that after finding severe vulnerabilities in an unnamed “embedded device marketed towards children” and reporting them to the manufacturer, he received calls from lawyers threatening him with action.
As is often the case with CFAA things when they go to court, the lawyers and even sometimes the technical people or business people don't understand what it is you actually did. There were claims that we were 'hacking into their systems'.
The threat of a CFAA prosecution forced Lanier and his team to walk away from the research."
There's nothing to that anecdote other than a company getting mad about exposing defects in a product and their lawyer making a nasty phone call.
The CFAA is vague and over-broad, you won't get any disagreement from me on that. Applying it in a case involving a device you bought and own is totally inconsistent with traditional norms of private property. But those are edge cases. The actual prosecutions people get up in arms about aren't edge cases. They pertain to conduct that clearly violates the norms of trespassing on private property, and hackers justify their actions by saying that those norms shouldn't apply to digital networks. Juries, unsurprisingly, don't buy that. So hackers and the broader tech community call them "ignorant."
You've also got the Sony vs. Hotz lawsuit, where Hotz was forced to back off. Edge cases, maybe, but they demonstrate that not everybody draws the line at the same place.
For you, someone finding a vulnerability in software that provides a network service, hosted on a server he doesn't own, is clearly trespassing on private property (even if he only accesses his own account's data), but finding a vulnerability in the software that comes bundled on a device he bought is not.
For Sony, let's say, both constitute violations of its property: it's Sony's software; Sony owns it and doesn't care whether the carrier is its server or the device it just sold you. In both cases Sony only gives you permission to use its software in a certain way, which excludes any sort of hacking.
Maybe the reason many draw the line at the medium is that it's easier to visually compare a computer network to physical property than it is a device you have bought (but which holds data you don't own)?
But is it the physical ownership of the medium that carries the data that matters, or the ownership of the actual data being accessed? If it's the medium, why, when the thing the owner really cares to protect is, in almost all cases, the data?
Not trying to argue, just raising some questions that I think are tricky and deserve more thought than they get. In any case, I think physical and digital property analogies can only take us so far, so I try to steer clear of them.
Sony vs. George Hotz was a civil case in which the CFAA played a small role compared to the numerous other statutes invoked, and that case ended in a settlement.
What we are talking about in this thread is the supposed criminalization of security research. If you're trying to get someone to take the other side of the argument that security research is needlessly legally risky, you're probably not going to find many takers. There is a world of difference, however, between being sued and being imprisoned.
Usually these cases go to special DA investigation units that invest huge sums of money to determine who was involved and to what extent (think full-blown computer forensic investigations to gather evidence). Even when the cases make absolutely no sense to pursue, they will persist with the sole aim of recovering some of those costs... I know first-hand; it's borderline extortion.
In 2011 I wrote an emulator for the LatticeMico32 processor. After it was written, I decided to play the chaos monkey game and change some instructions, like reversing "less than" and "greater than", just for kicks. So the code I was actually running would be the equivalent of taking a program and replacing all > with < (and vice versa). Much to my surprise, some emulated code still worked. It mostly did the wrong thing, but the fact that the code ran at all, printing stuff on the screen and everything, still baffles me.
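As a toy illustration (hypothetical, not the actual emulator code): a dispatch routine where the two comparison handlers are deliberately swapped. A loop guarded by the swapped "less than" simply runs zero iterations instead of crashing, which hints at why mutated code can still appear to run:

```java
// Tiny interpreter sketch: two comparison "opcodes" whose handlers are
// deliberately swapped, mimicking replacing every > with < in emulated code.
public class SwappedCompare {
    enum Op { LT, GT }

    // Evaluate a comparison opcode -- with the swapped (buggy) semantics.
    static boolean eval(Op op, int a, int b) {
        switch (op) {
            case LT: return a > b; // swapped: LT dispatches to greater-than
            case GT: return a < b; // swapped: GT dispatches to less-than
            default: throw new IllegalStateException("unknown op");
        }
    }

    public static void main(String[] args) {
        // A loop like `for (i = 0; i < 3; i++)` run under the swapped
        // semantics doesn't hang or crash here; it just runs zero
        // iterations, so the surrounding program keeps going.
        int count = 0;
        for (int i = 0; eval(Op.LT, i, 3); i++) count++;
        System.out.println(count); // prints 0
    }
}
```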
Your comment did it. I've been avoiding it, since I know I won't stop until I either finish all the levels or give up after spending too many hours, but I finally decided to give it a try. And it is amazing. As an emulator author myself ( http://www.ubercomp.com/jslm32/src/ ), I'm in awe of the care that was put into the CTF environment. Very nice.
To the people who are wondering if they'll be able to finish the CTF, here are my two cents: go ahead and try for a few hours. I'm sure you won't regret it. I haven't done assembly since the beginning of 2012, and I was able to do a couple of levels in less than an hour, and before I started I knew absolutely nothing about MSP430 assembly. The tutorial/manual are so good that I believe even someone who doesn't know assembly at all might be able to finish at least some levels.
To the same group: I've finished 18/19 with no previous assembly experience, so it's certainly possible. If this seems interesting you should try it, even if you've never written or really read assembly code.
Fixing XXEs in Java is not a trivial thing to do. The best reference I know comes from Apache Shindig, and you do have to make all those BUILDER_FACTORY.setAttribute calls; otherwise you block general external entities but still allow parameter entities, which leaves you vulnerable.
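For illustration, a minimal sketch of the kind of hardening involved (this is not Shindig's actual configuration; the class name is made up, and the feature URIs are Xerces-specific). Disallowing DOCTYPE declarations outright is the most robust option when you don't need them, with the entity features disabled as a belt-and-suspenders fallback:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XxeSafeParser {
    // Build a DOM parser hardened against XXE.
    public static DocumentBuilder newHardenedBuilder() throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Reject any DOCTYPE declaration outright -- the strongest defense.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Block both general AND parameter external entities; forgetting
        // the parameter-entity feature is the classic incomplete fix.
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf.newDocumentBuilder();
    }

    public static void main(String[] args) throws Exception {
        // Ordinary documents still parse fine.
        String xml = "<root>hello</root>";
        Document doc = newHardenedBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        System.out.println(doc.getDocumentElement().getTextContent()); // prints hello
    }
}
```

With this configuration, any input containing a DOCTYPE (the carrier for entity definitions) fails with a SAXParseException instead of being resolved.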
I don't know if you read it, but I sent you an email about this same bug (when I originally found it in Drupal) in 2012. Didn't know FB was vulnerable back then. By the way, I learned a lot from you here on HN. So let me take this opportunity and say thank you very much.
Hi HN, I'm the one who found the bug. My writeup is at http://www.ubercomp.com/posts/2014-01-16_facebook_remote_cod.... I'd be glad to answer any questions. I won't disclose the amount for now because I want to know what people think this would be worth, but eventually it will be disclosed. If you run an OpenID-enabled server now it's a great time to make sure your implementation is patched.
Apologies for assuming, based on how OP stated it, that he had full control over disclosure. I'd still prefer to hear from OP, as Facebook can say what they want or could be mistaken on the finer details of what was or wasn't agreed upon.
I'm curious: how much time would you say you worked on researching and identifying this bug? BTW, I don't begrudge you the payout one little bit, no matter how long you spent on it; such an amount is change down the back of the sofa for facebook, and the potential impact of the bug means they got a great deal!
Well, I originally found the OpenID bug in 2012, but hadn't noticed Facebook was vulnerable until very recently. After I found their OpenID endpoint, the hardest part was getting them to make a Yadis discovery request to me. Then I had to squash a little bug in the exploit. Most of the time was spent re-reading the OpenID spec. I'd say the total amount of work (including the time it took me to write the post) was about 2 days.
As I said in the post, I already had a strong suspicion that, once I could read files, escalating to RCE would be easy. But I decided not to do it without permission, and they fixed the bug very quickly. As much as I'd have loved to actually see the output of an ls or something like that, I think I made the right call.
I quoted that as a joke. I'm too familiar with bug bounties to ever expect one million dollars as reward for a bug. Let's hope people don't take it seriously. Lesson learned: since I'm not a native speaker, I shouldn't joke unless the joke is obvious.
A bug that lets you execute code on Facebook's servers is worth millions if not billions of dollars. You'll be rewarded with much less than that, but considering Facebook's market cap it is extraordinarily valuable.
No, it is not worth "millions or billions". It is worth whatever anyone is willing to pay for it. Since Facebook has very aggressive monitoring and will shut down hacks quite rapidly, the ROI for a bug like this would have to be realised very quickly, say on the order of days (or maybe even hours) rather than months. How would you monetise one week of running code on Facebook? Injecting malware would get the whole thing shut down even faster, so you'd have to either go passive or operate in a reduced window of opportunity.
There are no legal entities that would buy the bug: the USG can access any data with a warrant (that's free) vs. "millions or billions", and any other law enforcement agency could do the same thing. There is really no value there to them. So it would have to be blackhats, and that means some idiotic Russians mass-owning everyone with old Java bugs. Again: not worth much.
This sort of bug has very little value, except to facebook.
The last Stripe CTF literally changed my life. I've always been interested in security but didn't have the confidence, and honestly thought I didn't have what it takes to be successful in the field. I decided to try the CTF anyway, just for fun, and to my surprise I was able to finish everything. I had read about SHA-1 padding when it affected Flickr, so I knew just what to do on the level that involved it, which I thought was the hardest.
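For readers unfamiliar with the trick: SHA-1 appends a deterministic padding (a 0x80 byte, zeros, then the message's bit length as a big-endian 64-bit integer) before hashing, so an attacker who knows only the length of secret-plus-message can reconstruct the exact block boundary and extend the hash from there. A hypothetical sketch of just the padding computation (the class name is mine):

```java
public class Sha1Padding {
    // Returns the bytes SHA-1 appends to a message of msgLen bytes:
    // a 0x80 byte, zeros until the length is 56 mod 64, then the
    // message's bit length as a big-endian 64-bit integer.
    static byte[] padding(long msgLen) {
        // Smallest k >= 1 such that (msgLen + k) % 64 == 56.
        int padLen = (int) ((56 - (msgLen + 1) % 64 + 64) % 64) + 1;
        byte[] pad = new byte[padLen + 8];
        pad[0] = (byte) 0x80; // mandatory '1' bit, then zeros
        long bitLen = msgLen * 8;
        for (int i = 0; i < 8; i++) {
            pad[padLen + i] = (byte) (bitLen >>> (56 - 8 * i));
        }
        return pad;
    }

    public static void main(String[] args) {
        // An empty message pads out to exactly one 64-byte block.
        System.out.println(padding(0).length); // prints 64
    }
}
```

Since the padded length is always a multiple of 64 bytes, knowing msgLen is enough to resume hashing as if the padded message were a prefix, which is the heart of the length-extension attack.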
When I came to HN and saw a lot of people I admire talking about how hard it was, especially daeken, I remember thinking something along the lines of "well, I thought it was hard, but not that hard", and decided to try to find a few security bugs in open source software... Best thing I ever did... In just a few weeks I found some nice bugs in both Drupal and Wordpress, got my first CVE credited to me, and then I started to have fun (and some profit) with the various bug bounty programs around the web, most notably those run by Google (I'm currently 0x05 overall) and Facebook (7th).
After a year of doing security work on the side I was able to quit my day job last August, and now I make my living basically as a security consultant and also as a "bounty hunter". I've also received multiple job offers from US companies (I currently live in Brazil).
And all of this happened only because the stripe CTF gave me the confidence to actually follow my dreams. Oh, and I still don't know if I have what it takes to be really successful in the field, but frankly security bugs are everywhere so I go ahead and keep on finding them. I'm learning a lot every single day and the mean time between bugs is getting lower and lower, which is great. So thank you Stripe. Thank you very much.
Shameless plug: BTW, I'm in the committee for the W2SP conference, so if anyone has some interesting discovery to share, please submit a paper.