You missed the part where he immediately reported the vulnerability to his manager, security team and execs and got the assurance that it was being handled. If after that he is still thrown under the bus and fired, it's clear that someone at his own company dropped the ball and put the blame on him. I wouldn't be surprised if the actual fraud was committed by one of these people as well, using the very hack that he found and disclosed.
Finding a vulnerability is one thing. Using it is another. This person overstepped pretty severely and I'm not shocked at all that they got fired.
And honestly, writing this post at all continues to show bad judgment. I don't think this person understands where white-hat boundaries are, or how to deal with being accused of a crime.
If you think doing something like this to your competitors - or anyone - without some kind of program of consent on their part (like the previously mentioned Project Zero) is a good idea, please reconsider.
How did he "use" the vulnerability? What did he do that qualified as overstepping? He literally logged into a website and made requests to a hidden endpoint with random IDs. When he realized he was looking at production user info he stopped and reported it to higher ups. The process he has described is right out of the white hat guidebook. What would you have done differently?
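For anyone not following the mechanics, what's being described is the classic insecure direct object reference pattern. Here's a minimal sketch, with every route, name, and value invented (Flask-style Python, not the actual service's code), just to show the shape of the bug:

```python
# A minimal, hypothetical sketch of the class of bug being discussed: an
# endpoint that looks up a record purely by the ID in the URL, so anyone can
# read anyone else's record just by changing the number. All names invented.
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only"  # only needed so sessions work in this toy example

USERS = {
    100: {"name": "Alice", "email": "alice@example.com"},
    101: {"name": "Bob", "email": "bob@example.com"},
}

@app.route("/api/users/<int:user_id>")
def get_user_vulnerable(user_id):
    # No check that the requester owns this record: /api/users/101 returns
    # Bob's data to anyone who can reach the endpoint.
    user = USERS.get(user_id)
    return jsonify(user) if user else abort(404)

@app.route("/api/me")
def get_user_fixed():
    # The safer shape: derive the ID from the authenticated session rather
    # than trusting whatever ID the client put in the URL.
    user = USERS.get(session.get("user_id"))
    return jsonify(user) if user else abort(404)
```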
He didn’t exactly stop. He could have quit poking around once alarm bells started ringing but he wanted to see how bad it got so he continued to poke around. Knowing when to stop should probably be a white-hat quality.
Poking around is exactly how you determine how severe the vulnerability is.
Simple data extraction is hardly outside what a white-hat hacker should do. If there's an API endpoint that returns, say, `user.name`, it's reasonable to try other things like `user.email` or `user.credit-card` to see what else could be done BEFORE reporting it.
It might be totally valid to return `user.name` where `name` is simply the first name. But you'd never know unless you actually tried a few more endpoints.
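To make that concrete, here's a hypothetical sketch of the kind of check being described; the URL and field names are invented, not taken from the post:

```python
# Hypothetical sketch of "trying a few more fields": given an endpoint that is
# supposed to return only a user's first name, see whether the JSON body also
# carries more sensitive fields. The URL and field names are made up.
import requests

BASE = "https://api.example.invalid/v1/users"  # placeholder, not a real service

resp = requests.get(f"{BASE}/100", timeout=10)
resp.raise_for_status()
record = resp.json()

for field in ("name", "email", "credit-card"):
    print(f"{field}: {'present' if field in record else 'absent'}")
```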
I agree with a parent comment in that someone dropped the ball here and threw him under the bus to protect reputations and prevent any nastiness between the two companies.
No. Simple data extraction is what a contracted penetration tester does, because they secured (here's that word again) a contract that limits their liability when doing simple data extraction. Poking around to find out how severe a vulnerability might be is how you manage to get yourself in civil trouble even when you're testing a site that runs a bug bounty; god help you if you're doing it on a site that doesn't have one, or, worse, hasn't really ever heard of one.
People share a lot of really bad advice about security research. The best advice you'll get from people that don't work in the field is "stay away" and "don't try to help"; it's cynical, but at least it's not going to get you sued, like following this kind of "poking around is a white hat norm" advice will.
You can get into legal trouble simply by looking at the source code of a web page you are browsing, as has happened in several famous recent cases. However, the takeaway from that isn't "if you go into Chrome developer tools you totally had it coming".
There are bounds to what is or should be considered reasonable testing, and if you don't press against them then they will keep moving closer and closer to the point where simply using your own computer in ways the manufacturer didn't intend will be illegal (and that is already happening as well).
According to what the author describes he did absolutely nothing wrong morally or legally, and the attitude of victim blaming prevalent in this thread is a huge problem in the security industry. We should be supporting rather than crucifying him, because one of us will surely be next for looking at a website the wrong way.
It is absolutely not the case that the author described something absolutely legal. I don't care if you think it's "victim blaming"; I am making a simple positive statement, not a normative one. People will get hurt buying into what you're saying. There's no "reasonable testing" exception to the CFAA. You're very unlikely to be prosecuted for doing this stuff, but it's not that unlikely that you will get wrecked with legal bills reaching that conclusion.
'tptacek is 100% correct here. Anyone reading this: please listen to these words of wisdom, because not doing so is how you get seriously hurt in the eyes of the law.
Hack, but hack carefully, at least until there are laws that protect you. Today, there aren't.
There is a huge gray area between "absolutely legal in all circumstances" and "absolutely not legal under any circumstances". The fact that somebody has not (so far) been found guilty in court doesn't mean their conduct was definitely legal.
If you're trying to discuss whether something is legal in general, looking at a broad spectrum of case law is generally more instructive than focusing on the outcome of one particular case.
Because the behavior described in the blog post is open and shut CFAA-and-equivalents, regardless of weaseling over words like “authorized” and “access” and “computer”. The author’s own words complete a CFAA case. Not argue for. Complete. The narrative as described is prosecutable.
You could have done everything the author claims to have done in order to stop a Martian invasion or the extinction of every living thing or in the genuine spirit of trying to help and it’s still several prima facie violations of CFAA. It just is. I’m sorry. The why doesn’t matter, barring the contractual scenario tptacek points out (and which STILL requires diligence by both parties to avoid prosecution).
To be clear I think it sucks what happened to the author, but if weev goes down for enumerating primary IDs via a Web browser (and to be fair, also trying to sell the data like a complete tool), setting up an entire technical infrastructure to compromise this app in this way is trivially demonstrable intent. You and I both know what Charles does. Now wait until a prosecutor spins the whole setup as a giant technical hack that shows this person intended to compromise a competitor. I’m not even finished with law school and I’m certain I could prosecute this person successfully, but note that doesn’t mean I’m saying they should be.
Given what you said here and to your point, I’m going to preempt your likely retort and point out that I’ve described the behavior as afoul of CFAA and not the person. You’re right that they are entitled to due process. The blog post is literally evidence is all. I’d bet my Rams tickets next year there’s a subpoena on its way to this post. If not in a theoretical criminal case, then definitely in the civil litigation already underway (again, taking the author at face value).
tptacek is right, and I mean this with respect: you really need to be careful with your opinions on CFAA, particularly when potentially suggesting violation thereof. Your pronouncement that the person didn’t do anything illegal is actionable in a very distant, fucked up world with a bunch of prerequisites, but still a very very possible world. (IANAL/YL, comment is general opinion and not advice, etc)
> You're very unlikely to be prosecuted for doing this stuff, but it's not that unlikely that you will get wrecked with legal bills reaching that conclusion.
What he's saying seems to be exactly in line with what's happening. Even if the author doesn't end up being found guilty by any court, he'll still be wasting a bunch of money on lawyers.
See the recent(ish) "but, hacker, prosecute!" case with teacher PII in, um, some US state or another, where a journalist looked likely to be put up on charges of "computer hacking" based on exactly that: "view source".
> Poking around to find out how severe a vulnerability might be is how you manage to get yourself in civil trouble even when you're testing a site that runs a bug bounty
Suing people for doing it is a fantastic way to ensure well-intentioned people will never report vulnerabilities to you again.
You’re a curious person and see that the back door to a closed restaurant was left unlocked. You should let them know, but make sure you find out how big a screw-up the closing manager made, so s/he can be dealt with or the process fixed.
You open the door a little and peek inside and see the office door is open. “This can’t be,” you think as you walk in.
You bet there’s a safe left unlocked and customer reservations left unprotected on the computer. “How irresponsible can these people be…”
If their security is this bad, you wonder what their food safety processes are.
It’s a slippery slope, and maybe well-intentioned, but that doesn’t change the fact that you’re not allowed to wander in through this restaurant’s back door or be there when it’s closed. And now that you have, how do you prove you didn’t do anything malicious, when the only evidence is of you in the restaurant where you’re not supposed to be?
Maybe you can make an appealing public good argument against criminal accusations based on your stellar clean record, but how do you protect yourself from civil suits, which they have every right to spin up if they have damages and can link you to them?
Please don't conflate physical invasion with violation of data privacy. Everyone downloaded a car, 3D printed it, and is now driving around happily in it. This argument is beyond tired.
The laws regarding computer intrusion are stricter than those governing real-world breaking and entering. Trying someone's door isn't a federal crime (though it'll absolutely get you arrested, even though that's the open protocol a doorknob advertises).
Which is exactly the problem - this kind of rule-following behaviour for arbitrary and nonsensical rules allows more rules to be implemented without pushback.
We as developers and end users have to fight it and not simply argue for the sake of following rules.
It makes zero sense to prevent people from viewing source code of a page when that is how the entire tool chain was built to be used.
They should have made their own native app that couldn't be reverse engineered if they had any mind for real engineering rather than blame 'the web'.
You can be as pissed about it as you like, and I don't have a problem with that. Where I start having a problem is when people say things like "poking around to see what kind of data is exposed by a bug is reasonable testing and not illegal". You wanting something to be legal doesn't make it legal!
Point of order - upstream, what is being discussed is 'whitehat' behaviors and tendencies, and whether a behavior is reasonable. This got transmuted to a question of legality by someone downstream, and is in my opinion a good shout but wasn't really a rebuttal to any of the points raised. It seemed like there was some talking past each other going on, and I think that is the core of it.
I think I keyed off of "I agree with a parent comment in that someone dropped the ball here and threw him under the bus to protect reputations and prevent any nastiness between the two companies."
>...and is in my opinion a good shout but wasn't really a rebuttal to any of the points raised.
True, but it was not somehow 'out of order' because it was not a rebuttal of the 'whitehat' claims. Given that the article is not primarily about how the author discovered the vulnerability, but the legal problems that ensued, it was not at all unreasonable to point out that acting in accordance with 'whitehat' behaviors and intent is not enough to shield oneself from legal scrutiny. Furthermore, the ensuing discussion, in which various attempts were made to deny this distinction, shows that the point needed to be made!
> poking around to see what kind of data is exposed by a bug is reasonable testing and not illegal
Poking around to see what kind of data is exposed by a bug is reasonable.
I’d stop there, and I’d suggest people do that, because it makes for a better society for all of us, regardless of the law. What’s that called? Civil disobedience?
(Note: The post appears to have disappeared, so I don't have the full details of what the author did/didn't do. I have a vague understanding based off the parent conversation.)
I don't want to sound purposefully ignorant here, but is your advice basically that, if you find a vulnerability, the only way to remain a white hat hacker, assuming you were not hired to check for vulnerabilities, is to not report the security flaw?
If not, please explain.
If so, I can certainly understand the perspective, but for me, I feel a sort of obligation to make people aware of serious problems. Whether it be them driving around with a headlight burned out, or their website processing credit card info in an insanely insecure manner. Each are dangerous, and can cause many problems not just for the operator, but for the people around them as well.
To me, a white hat who finds a basic (or serious) issue and doesn't try to inform the owner (or in this case, their boss) in some fashion is ultimately at least as malicious as a grey or black hat.
Perhaps the law should include some kind of "good samaritan" provision for these sorts of situations.
There is no "good samaritan" clause in the CFAA, just for what it's worth to you.
I don't know what this "white hat hacker" stuff is. There's no rule of hats in the law. I know "white hat" mostly as a term of derision, not as a technical or legal term of art.
The problem this person has run into is that they went looking for vulnerabilities in someone else's website without permission to do that. You're not allowed to do that. If you do it, a lot of times you'll run into companies that are cool about it, but a lot of times the companies you run into aren't cool about it. The law is on the side of the people who aren't cool about it.
to the sibling comment, since replies are turned off: yes, US law. I doubt either 'tptacek or I know any detail of significance around non-US computer fraud / security law.
> I don't want to sound purposefully ignorant here, but is your advice basically that, if you find a vulnerability, the only way to remain a white hat hacker, assuming you were not hired to check for vulnerabilities, is to not report the security flaw?
I think the important point is: if you don't want to risk a world of legal trouble, don't look for vulnerabilities in other people's systems in the first place, unless and until you have a contract with them. At the very least, make sure they have a bug bounty program or some other public indication that they are interested in getting vulnerability reports.
Again, the most important point is that you do this before even checking for the vulnerability you think you may have found.
From a legal perspective, as far as I can tell, you're right. From a moral perspective, this only considers the well-being of the company with the security flaw. Limiting security investigations to those that have the full blessing of the company means that security flaws can be swept under the rug.
This is a bad analogy - it's more like you request access to an area, there are no locks, and you're invited in and told that the content is not restricted.
This happens a lot! It's gotten people in trouble even in situations like bug bounty programs, where some amount of testing is authorized --- there's a delicate set of norms about stopping your exploration when you find a problem. In a situation where there's no prior authorization --- as is likely the case here --- violating whatever implicit norms exist can easily get you in legal trouble.
At the very least, if you stumble across something that spits out a plaintext credit card number, stop right there and don't do another damn thing with the target. Don't see which credit card numbers you can see, don't change the `/100` in your URL to `/101`, just stop.
...and report it, apologizing for going too far in the first place but explaining how you got there and that you want to cooperate in any way to help them fix it.
And please, don't tell them you'll give them the bug if they pay you or give you a t-shirt - this is blackmail, most likely (IANAL), and sure to get you in trouble. (but it happens)
The crazy thing is how unintuitive the law is here. If we called customer support and asked "can you tell me the card number of [John Doe/customer 314/etc]" without ever claiming to be that person, wouldn't we put 100% of the blame on the company for answering that question? (If the caller then used the number, that'd be a different story.)
Most devs outside security would just assume that unless you're doing something encryption-related or at least trying multiple passwords, you're not hacking. You're just asking nicely for information.
It is amazing how little technical knowledge you need to violate the CFAA.
> Knowing when to stop should probably be a white-hat quality.
You don't stop until you fully explore just how severe the vulnerability is and which systems are affected. This isn't rocket science. You have to know just how bad things are if you are to have any hope of fixing them.
Isn't that the job of the person who is responsible for the code, and NOT the responsibility of a well-wishing outsider who stumbled across the bug in the first place? Unless you're actually angling to get the job of fixing the code.
> I don't think this person understands where white-hat boundaries are, or how to deal with being accused of a crime.
OK but where, exactly, ARE "white-hat" boundaries? Is there a rule-book somewhere that he missed? How does one know how to deal with being accused of a crime? Practice? Cop-shows on TV?
IMHO this person is someone who was just too naïve, overly trusting, and unlucky. If there's blame here, much of it falls on his organization for failing to take the initial incident report seriously.
I think the subtext of CFAA is that the law believes it shouldn't have to tell you not to try to pry credit cards out of someone else's website. I'm not speaking normatively, I'm just telling you the way it is.
I think publicized security research --- which I obviously support --- has created expectations among technologists that either weren't intended, or were more wishful than the facts support (lots of researchers, most probably, think the CFAA is a bad law and are happy to set expectations premised on its illegitimacy, but: see Wesley Snipes for how that turns out).
If there weren't lots of public security research, I don't think you'd really have to tell developers not to go looking for vulnerabilities.
Again, and to be clear: if it's something running on your own machine, or a machine you own, go nuts.
This is an important caveat; the issue at hand is looking for vulnerabilities on other people's machines, not in other people's software. If it is running on your machine (read: not SaaS), reverse away.
Releasing the exploit, on the other hand, is a different story (don't do that)
It depends on how you got the bug. If you test something on your own machine and find a vulnerability, you're fine publishing it (modulo you may be civilly liable if a binding contract prevents you from doing that kind of research --- a pretty rare case). If you find a vulnerability in someone's SaaS application running on their server and publish it, the publication itself probably isn't unlawful --- the problem you'll have is that you've also published evidence that you committed a federal crime.
To be clear: I meant releasing an exploit, not a bug.
Per my understanding, there is a distinct difference between releasing a bug (that you found in software on your own machine) vs. releasing a tool that exploits that bug.
I don't really think that's true either, though obviously it has to be at least a little riskier to release an exploit; like, a fact pattern that has actually happened before is "someone writes an exploit, and a friend uses that exploit to steal a bunch of credit cards; the exploit writer is culpable as a co-conspirator". But just releasing an exploit by itself is I think --- NOT A LAWYER, TALK TO A LAWYER --- pretty safe.
This also comes up in discussions about exploit markets: you can sell an exploit to a bounty program, or to Zerodium or whatever, but people always think you can make more (for instance, if it's some dumb SQLI, more than the $0 the public exploit markets will pay you for it) by selling "on the dark web". But if you're selling a bug that only lets you exploit some specific SaaS application "on the dark web" to some anonymous criminal (or someone a jury will think you should have known was a criminal), you're taking a chance that a prosecutor will make a case that you took money to facilitate a specific crime that they're now going to charge you with.
Part of it is that there's a specific law criminalizing copy protection bypasses (the DMCA), but it also includes an explicit exemption for security research.
I work in banking, and when probing (internal) APIs of partners/clients you accidentally find stuff. They are all very broken, and it is normal to find bugs and holes while just probing the use of the API, not trying to find issues. Not sure how to prevent that, and no, I don’t know how far I can go; I do know I cannot do my integration work if I don’t do it. Not sure how others can. I would never report bugs to them, though; I am old and cynical, and software is very broken, especially since the Big Move to modern tech. I see more and more ‘out-facing’ (still internal) APIs in banks written in JS (not blaming the language, but the vast amount of cheap and untrained coders in that language niche). We used to write those APIs against the real ‘basement’ backends; now they do, usually hiring an external consultancy corp like Cap Gemini for big $$$ who then has it done by their offshore teams, and it is a freakin’ horror show. Same for insurers. I assume going in that I can somehow query all the credit card numbers in there, so no need to tell them that.
I agree in a practical sense. But I think there is a huge social problem here.
Power relations between employers and employees, corporate goons (lawyers), lack of honest communication, lack of trust, lack of collaboration to make things better, general FUD. This all leads to massive inefficiencies; people get hurt, and progress is slowed down or even reverted.
The problem is that many people still insist on doing things in person or via a ‘quick call’. I have been around long enough that I insist on chat and email; voice is fine for fleshing out features and such, but things like this, or basically anything that could come back to you, should be in writing with tons of Ccs. I know many people who insist on phone or in-person because they are too lazy to read and write, but I definitely also know a % who do it because then there is no verifiable record. So I err on the side of safety and do everything in writing.
What you do in those cases, though they can deny they received the email or read it, is to send a sort of "meeting notes" email to them, where you say something like "as agreed during our phone discussion earlier today...".
That way you create a sort of trail that a discussion took place and at least that was your understanding at the time.
Even better (for you) if it can be proved that they received the email and read it, because in that case, if they don't reply to disagree, you can argue that their silence was approval of your account.
> though they can deny they received the email or read it,
That’s why I put many Ccs and ask them to respond whether they agree or not; I make sure they do. At the time it is almost never an adversarial type of thing, so people generally play ball automatically. When things go rotten, we have everything we need to win. This has happened a few times in court.
That's an important point, and it will make white hats think twice about whether to report stuff like this in the future or keep it to themselves and deny anything that might circle back.
> it's clear that someone at his own company dropped the ball and put the blame on him
Hard disagree. Why is he reporting it to his own company? It's in his company's best interest for a competitor to have a security issue. He should have gone directly to the competitor's security team. Could have potentially gone anonymous. Could have gone through another person. A reputable security researcher. Lots of other things he could have done, but didn't.
Sure he could, but his "mistake" was to act in good faith. He reported to his own employer to gain points inside his company. He's showing everybody how smart and competent he is. Maybe he will get a promotion.
Moral lesson: your employer isn't your friend. They will throw you under a bus in a blink. Always cover your a*.
Agreed that anyone who knew about the flaw should be as suspect as the employee in question. But what was this guy doing pentesting a competitor instead of performing his job duties - that’s grounds for being fired right there. Even with an “ok” outcome he’d have arguably caused harm to his employer.
Even ignoring that most software jobs understand that a small amount of time just noodling away on tech stuff is to be expected as part of the job, exploring a competitor's public APIs seems very closely aligned with many programmers' priorities.
Maybe, but if his company really had a problem with it he would have been fired, or at least reprimanded, when he initially brought up the problem, not months later.