My $.02 on this is that Prezi should not have awarded the researcher the cash under the bug bounty program; however, they should have given him a reward anyway. Awarding the money as part of the bug bounty wouldn't be fair play under the rules of that program, but he potentially saved them a TON of money and problems. As such, he should be rewarded somehow. Further, had he been less than honest, he may have been able to leverage the code itself to find more than one $500 bug.
I think Prezi should have done something like this:
* Acknowledge the problem and the seriousness of it
* Offer a reward, but not under the bounty, just a "thanks"
* Have him sign an NDA about the source itself, the specific details of the issue, and the amount of the award
* Allow him to write up the experience should he choose (good PR for Prezi)
* (Maybe) offer a contract for the researcher to find more such issues, or announce a different program as a result of it.
The reasoning behind doing it outside the program is that Prezi needs to walk a fine line: they don't want to say "just attack everything and we'll pay you!", they don't want to be so process-driven that it hurts them, and they don't want the bad press that comes from people who tried to follow the rules getting nothing while cheaters get paid.
>Further, had he been less than honest, he may have been able to leverage the code itself to find more than one $500 bug.
I'm not sure I agree with this particular argument; it essentially reduces the concept of a bug bounty to blackmail. This mindset is not a constructive one.
The tester should get rewarded for their hard work and helpfulness, not the decision to follow the law.
It was out of scope. The rules are pretty clear: http://prezi.com/bugbounty/ and he broke at least two of them.
And it seems like he knew it was out of scope when he submitted it too: "I had spent a total of 2 hours sifting and crawling through their services which were in scope, but wanted to see if I could locate any other subdomains..."
Now I think Prezi should probably have paid him anyway because that's a pretty boneheaded error and I'd be very grateful if someone politely pointed it out to me... but they aren't obligated to. You can put your pitchforks down.
Sometimes people and companies have their heads stuck so far in procedures and policies that they can't see the forest for the trees.
The Finder provided tremendous value by discovering this issue and reporting it responsibly. He certainly should be rewarded with something more substantial than swag.
Would Prezi have preferred that the Finder just not report this issue?
It's not like they got him on some legalistic technicality. The bug bounty clearly doesn't cover the bug he reported.
And I don't usually go looking for them, but if I come across a security problem (e.g. someone left login credentials unsecured in bitbucket) I would let them know because it's the right thing to do, not because I expect cash.
It's not a technicality, but you're just saying "well, that's the policy" without considering whether the policy is the best way to accomplish certain goals. That's the point.
You're not entitled to a bounty just because you found a bug. Some companies offer these bounties and it's good that they do, but that doesn't mean every company is obliged to offer them, or that a company that offers bounties for some bugs is obliged to offer them for all bugs.
How about a moral obligation? Honestly, it sounds like if a taxi driver returns a bag full of cash to the owner, it is perfectly alright if they just say "Thank you" and walk him to the road. Legally: nothing wrong. Morally: being a greedy asshole.
Frankly if a taxi driver bitched on his blog about someone doing that I'd be saying the same thing. It's nice when someone gives you a reward for doing the right thing. But you shouldn't act like you're entitled to it, because you're not.
> But you shouldn't act like you're entitled to it, because you're not.
Depends where you are. In Germany you are entitled to a finder's fee by law (in the case of the taxi only if the value is > 50€ and only 2.5% instead of the normal 5%)
It should absolutely be in the interest of companies to reward security researchers who find flaws in their systems. Otherwise, they will be screwed by the less scrupulous.
We are talking about different things. Sure it's in the company's best interest, just as it is in the interest of someone that loses their wallet to offer a reward. That said, when nothing is offered up front (possibly because the problem is unknown), to feel entitled to a reward and disgruntled when one isn't offered is not what I would call "moral" behavior, as brought up farther up-thread.
It's moral when you do it because it's obviously the right thing for everyone involved. When there's money involved, that's something else.
Just because you're complaining doesn't mean you feel entitled. If someone is rude to me and I complain about it, am I expressing that I feel entitled to non-rude interactions with this person? If I post a negative book review, am I feeling entitled to a good book?
But is it rude for someone to not monetarily reward you for doing something good? That's what I was replying to up-thread. To feel you deserve compensation for a good deed when there was no prior agreement as such is indeed entitlement.
This thread hasn't really been about the article for a while. It's been about someone feeling that people that don't reward for good deeds are greedy assholes, which I think sets a bad precedent. If you want to incentivize fine, but let's not confuse that with what the right thing to do is.
> How about a moral obligation? Honestly, it sounds like if a taxi driver returns a bag full of cash to the owner, it is perfectly alright if they just say "Thank you" and walk him to the road. Legally: nothing wrong. Morally: being a greedy asshole.
That's a false analogy. Taxi drivers are obligated to return lost property, but nobody is obligated to report bugs. That's why you create an incentive to report, i.e., the bug bounty.
Aye! It's not a perfect analogy, but I was pointing out why people should reward the guy given that he didn't exploit the situation in a wrong way. In this case, the whole source was available to him. Granted, he was more or less inclined to report the bug, but what if he hadn't, and had sold it somewhere instead? Why shouldn't the company reward his effort?
As an aside this very thing is an excellent example of how extrinsic motivators can "poison the well" as it supersedes intrinsic motivation. Dan Pink gave a great talk on this -- http://www.youtube.com/watch?v=tJr9QajdCNc (sorry, I prefer the illustrated version).
I think he means that if we're not holding Prezi ethically responsible to pay the bounty, then we can't then start saying the researcher is ethically bound not to sell the exploit.
Why not sell it? People sell URLs all the time, and bitbucket is clear written intent from the company that they wanted their source control systems accessible to the public, or else they would not have provided written notice to the world of their passwords.
Surely the creators of the software are competent software experts who fully understood the implications of making their repository public. Surely, they are not asserting that they were so negligent in the performance of their duties as to not check whether the repository would be made public.
Also, they've made numerous written affirmations that the issue found is not a bug, and would not qualify as part of their bug bounty for security flaws.
They are morons and deserve to be hacked because they are negligent and make affirmations that leaving their source control system passwords on public computers is not a security issue worthy of payment. They deem the risk to be so insignificant as to not even be worth $500.
That waives legal responsibility, but I fail to see how it affects ethics/morals. The ethical implications of an action are determined by the community/profession, so if the community agrees that this was unethical, it was.
This is some crazy entitlement culture. If you help someone out, you are not entitled to a reward. If you want a guaranteed reward for your efforts, get a contract first.
"Now I think Prezi should probably have paid him anyway because that's a pretty boneheaded error and I'd be very grateful if someone politely pointed it out to me"
But Shubham did one additional thing: he unintentionally embarrassed a founder. That's the real reason he's not getting paid; everything else is a technicality...
Why even have a limited scope on bounty programs? (This is not the only time I've seen that.) Is it only to limit payout? Are there legal reasons? For example, their client tablet applications are ineligible. I just don't get the reasoning.
In their position, I'd pay him the $500 and remove the idea of scope. I'm just curious if there's some counter-argument I'm not thinking about.
Having these kinds of rules on bug bounty programs is excellent for hackers though.
If I wanted to hack Prezi I now have a lot of very useful information.
1) Prezi is not interested in blocking access to people who already have the ID of the presentation. This is good news since it means I can enumerate the IDs and get access to private presentations - some of which could have useful private data. (A sketch of this follows the list.)
2) Prezi is not interested in blocking attacks which enumerate user ids, etc. This is great news - I can get a list of likely email addresses to use later.
3) Prezi disallows any form of attack that utilizes outside services. That means that while Prezi's core systems have now been nicely screened, other systems are going to be wide open because nobody has bothered to test them properly. This works well with the list of email addresses from above and possibly data obtained from the private presentations above.
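To make point 1 concrete, here's a minimal sketch of what such an enumeration could look like - the host, URL pattern, and ID range are entirely made up for illustration, not Prezi's real scheme:

```python
# Hypothetical sketch: sequential-ID enumeration against an assumed endpoint.
import requests

BASE = "https://prezi.example/view/{}"  # placeholder URL pattern, not real

for pid in range(100000, 100100):
    resp = requests.get(BASE.format(pid))
    if resp.status_code == 200:
        # A 200 on a guessed ID means the "private" presentation is readable.
        print("accessible presentation id:", pid)
```

If IDs are sequential (or guessable) and possession of the ID is the only access control, a loop like this is all it takes.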
EDIT: Just want to add that this shows a very large misconception in the corporate security world. Security is not something you can get a "B - good effort" for. Security is all encompassing. You either get an A+ and the hacker does not get in, or you get an F and your data is gone. There is no middle ground. Putting parts of your security off-limits means you shouldn't have even bothered to begin with.
'Just want to add that this shows a very large misconception in the corporate security world. Security is not something you can get a "B - good effort" for. Security is all encompassing. You either get an A+ and the hacker does not get in, or you get an F and your data is gone. There is no middle ground.'
That's not true. There are substantially different levels of security required depending on the expected resources an attacker can devote to attacking you, and you can be better or worse at resiliency and recovery (where dollars and hours very much form a continuum).
I disagree here - you've either lost the data or you haven't. You can make guesses as to the expected resources of the attacker, but if you're wrong and the attacker has more resources, then you might as well have not even bothered.
As an example, you have some fairly non-sensitive private health records. Here are three approaches:
(1) No security at all. You hope nobody is going to bother taking them and using them for anything malicious.
(2) You put in decent security, but a contractor for a new feature left open a vulnerability you didn't know about.
(3) You make sure everything is secure and run security audits that catch the vulnerabilities the contractor introduced.
The data for (1) and (2) gets hacked and used in a bigger hack on a different service that results in money being stolen.
Now you could say that (1) gets an F, (2) gets a B because at least they tried, and (3) gets an A+ because the data wasn't stolen. This is rubbish - both (1) and (2) resulted in data being stolen and lost customers / lost money / insurance penalties / whatever. The security teams for both (1) and (2) failed utterly and get an F.
If (2) had guessed correctly and nobody had actually devoted those resources, then (2) passes with flying colors because the data is safe - but it's just pure gambling. Gambling with security will always be a losing bet in the long run. Rather just make it secure. Going off some strange 'expected resources' is just asking for the time when your data somehow becomes valuable and those resources get brought to bear (or more likely, one of your employees annoys the wrong person with too much free time).
Explaining to your customers that their email addresses weren't valuable enough to do proper security is a great way to lose me as a customer.
I think the idea the parent was trying to express is that there are different risk appetites for various things/companies. If it would cost more than your profits to secure something 100%, obviously you need to look at other ways to go about it. Mitigation is a major force in information security. Mitigation doesn't eliminate the risk; it just makes sure that the impact of the risk is low if it does get exploited. Likewise, while PCI data needs to be as locked down as possible, other data doesn't need that level of security because the tradeoffs are too massive to be cost effective or business effective.
What you should realize is that "security teams" are generally not responsible for the level of security at organizations. The information security team will generally present the risk to the business owner of that process, that data, that application, etc and let the business owner decide if they want to accept the risk, mitigate the risk, or avoid the risk. If I went to the CEO of Dropbox and told him the biggest security flaw in Dropbox is that users can share files with each other, he's going to tell me to jump in a lake because that's their entire business.
Nothing is 100% secure, and nothing can be 100% secure. I'm not agreeing or disagreeing with what Prezi is doing, but your notions of all-or-nothing security seem a little out of touch with the reality of business.
> I disagree here - you've either lost the data or you haven't.
You seem to be implying that the fact there are two possible outcomes implies there are only two possible initial states - vulnerable and not vulnerable. If the attacker steals data, the initial state was vulnerable, and if the attacker fails, the initial state was not vulnerable.
This is what poker players call "results-oriented thinking". The initial state is much more like a range of continuous values, where 0 is "having literally no security whatsoever" and 1 is "having security no earthly force can overcome in any scenario".
No private company has perfect security, and perfect security is not desirable, because incremental security has non-zero cost. Does it make sense for a typical firm to spend millions of dollars hardening their office building against the threat of attack by a heavily armed private militia? No, because for most firms the cost of preparing against such an attack outweighs the risk-weighted value of preventing such an attack.
Incrementally improving security narrows the range of successful attacks. Incrementally improving security means fewer attackers will be skilled enough to successfully infiltrate, and fewer attackers with enough skill will go to the effort to successfully infiltrate. The goal is not to guard against every conceivable attacker, but, in a simplified model, to incrementally improve security until the marginal cost of the last improvement is equal to the marginal value of the reduction of attack scenarios.
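To make the marginal-cost argument concrete, here's a toy sketch - every number in it is hypothetical, chosen only to show the shape of the reasoning:

```python
# Toy expected-value model: an improvement is worth buying only while it
# reduces expected loss by more than it costs. All figures are made up.
BREACH_LOSS = 1_000_000  # assumed cost of a breach

# (marginal cost of improvement, breach probability after applying it)
steps = [
    (0,       0.10),   # baseline posture
    (20_000,  0.04),
    (50_000,  0.02),
    (400_000, 0.01),   # militia-proof office building territory
]

prev_expected_loss = None
for cost, p in steps:
    expected_loss = p * BREACH_LOSS
    if prev_expected_loss is not None:
        saved = prev_expected_loss - expected_loss
        verdict = "worth it" if saved > cost else "not worth it"
        print(f"spend {cost:>7}: saves {saved:>7.0f} in expected loss -> {verdict}")
    prev_expected_loss = expected_loss
```

The first improvement pays for itself three times over; the last one costs far more than the risk it removes, which is exactly why perfect security is not desirable.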
> If (2) had guessed correctly and nobody had actually devoted those resources then (2) gets a flying colors because the data is safe - but it's just pure gambling
"Gambling" has no particular meaning in this context, because every decision about security precautions involves weighing known costs against potential risks. The division of security plans is not between "gambling" and "not gambling" but rather between "positive expected value" and "negative expected value".
How do you relate this to something like home security? You have valuables at your home. People can come in and take it with varying degrees of force. Are you prepared for the maximum force attack, or do you accept the typical security features which you know to be minimally effective?
hahah was just gonna write about this, but you beat me to it.
Either way, the point is, there's a trade-off. Kinda like the 80-20 rule. It obviously takes 20% effort to protect against 80% of attacks (the casual opportunistic ones, like preventing SQL injections, or locking your front door) and 80% effort to prevent the last 20% of attacks (actual pros). So "you might as well not have bothered" is somewhat naive in my opinion.
By your logic, there are only two kinds of chess (or any game) players: those who win every game they play, and those that don't.
Unfortunately, the real world isn't so black and white. The resources someone will put into hacking your site depends on their perceived value of success, and if someone with enough resources values it enough, they will hack your site, no matter what you do.
Anyone who claims to have constructed an unhackable internet service has either constructed a trivial and useless service, or doesn't understand the complexity of software.
"Because they tried" doesn't get a B. You're not graded on effort. What you're graded on - when it comes to defense as opposed to recovery, though both are a part of this - is how likely a breach is. Unfortunately, you don't always learn your grade (and when you do, it's bad).
"Gambling with security will always be a losing bet in the long run. Rather just make it secure. Going off some strange 'expected resources' is just asking for the time when your data somehow becomes valuable and those resources get brought (or more likely, one of your employees annoys the wrong person with too much free time)."
So, every site you deploy is going to indefinitely withstand armed assault by government forces?
No. But if the site gets hacked, I failed. If I asked users for their credit cards and stored them in a publicly accessible plain text file, or in a secure system that still gets hacked, the end result is the same. My users are having unauthorized payments coming off their credit cards. I've failed.
Maybe I can sleep better at night if I didn't go storing them in plain text and I can make up excuses easier, but I still failed. Regardless of how likely any breach was, I failed. My customers have probably jumped ship.
If I store it in plain text and I never get hacked, then I've succeeded. I'm more likely to succeed the more security I add, but if it gets stolen then it doesn't matter anymore. Basically I'm trying to imply that success or failure is a boolean based on real world results and does not depend on the amount of effort placed into the security. The security can influence the result, but once the result occurs the security I used or did not use is irrelevant.
So skimping on security is always a terrible idea. If you know of a way to increase security, then you should increase it. If you offer a bug bounty to improve security, make sure you give a reward for any possible breach that could cause you to get hacked, regardless of whose 'fault' the vulnerability is. If someone can social engineer your developer, then pay out the bounty. Maybe it won't happen next time because now the developer has learned something.
This is a fascinating discussion because it betrays two fundamental attitudes of society to risk.
RyanZAG is "correct". If someone breaks into my house and steals my TV, then my security was a failure.
This leads to the next problem - it's not a catastrophic failure in today's (western) society. I am probably out at work, and I am insured, and the burglar is unlikely to be waiting when I get home to murder me.
However, there have been plenty of societies in the past, and are many now, where the expectation of loss would be almost total - someone breaches your security, they take the tv, kill you and your family and burn the house down on the way out.
So it's not a judgement on the resources of the attacker that matters, it is the expected consequences of the breach - the expected value of damage.
Which side of the argument you come down on depends on whether you see the Internet as basically a nice London suburb with a few bad eggs in it, or a violent amalgam of Feudal Middle England and Mogadishu on a bad day.
"So its not a judgement on the resources of the attacker that matters, it is the expected consequences of the breach - the expected value of damage."
I nodded at this when I mentioned resiliency and recovery, but I still think resources of the attacker matters. A determined attacker could doubtless breach your front door with a battering ram or axe and enough time. Part of the reason you don't worry about this, I assert, is that it's not likely because the costs to the attacker (in terms of chances of getting caught and penalties if they are) are too high. Part of it, as you say, is that we have some amount of resiliency against the threats posed. And probably part of it is that most of us are not terribly inclined to do damage to each other without provocation and there are many possible targets for the few who are - I'm not really sure the degree to which we should legitimately consider that bit a part of "security" but it certainly merits weight in calculating risks.
That is a good point - I factor in the security of an effective police force, a legal system that will not tolerate using threats to sign over a business for $1 - all of these are part of our security.
Curiously I am not convinced of the total damage done by these various break-ins. Stealing credit card numbers is not the same as getting the loot into a laundered bank account. Grabbing bitcoin wallets is closer, but the liquidity does not exist to extract much.
The damage is seemingly more reputational, or other internal costs to the hacked company (like paying security consultants). The actual "money the thieves ran off with and could convert into real cash" is pretty thin - I would value some pointers to studies here.
You're conflating two things, inappropriately in my opinion:
> If you offer a bug bounty to improve security, make sure you give a reward for any possible breach that could cause you to get hacked, regardless of whose 'fault' the vulnerability is.
This is true. There's no upside for rejecting this as "out of bounds" except for a relatively tiny sum of cash.
> If you know of a way to increase security, then you should increase it.
This I disagree with completely. If there's anything you can do with negligible cost you should do it, however there are all kinds of costs. There are usability costs, operational costs, training costs, etc, etc.
You can't hand-wave these away by declaring that any breach is failure without recognizing the fact that there is no such thing as perfect security. In fact all security is gambling, and it should be a gamble based on the best odds we can come up with professionally against the cost of failure. If something requires 100% perfect security then that thing should not be done, period.
'This is true. There's no upside for rejecting this as "out of bounds" except for a relatively tiny sum of cash.'
There can be. If the attack involved something that - done broadly - would itself cause problems even without a vulnerability, then you don't want to reward people for probing those ways without arranging it first. As a sort of extreme example, imagine hundreds of security researchers getting in the way of your paying customers while trying social engineering attacks on your staff.
No, security is about trade-offs. If you throw absurd resources toward protecting against entirely unrealistic threats and your company goes out of business, you've failed. If you have legitimately made the risks small enough, for the resources and threat model (and that threat model sufficiently matches reality), you've succeeded. There are of course some legitimate caveats, including talk of externalities and questions about how one would measure things, but I still assert my basic model is more correct than yours.
Recognition that security is only as strong as the weak link does not imply that all links must be infinitely strong.
> So skimping on security is always a terrible idea. If you know of a way to increase security, then you should increase it.
This is what all of the "security" vendors would like you to believe. It completely ignores the value of the assets you are securing.
How many rounds do you use with PBKDF2 if you want to slow down attackers? You can always add more rounds to slow down brute forcing, so how would you reconcile this with your statement of always increasing security? The same applies to bcrypt.
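For the curious, here's a minimal sketch of the trade-off using Python's standard library (timings are machine-dependent; the point is only that iterations cost the server exactly what they cost the attacker, per guess):

```python
# Each extra PBKDF2 iteration slows brute forcing, but also slows every
# legitimate login by the same amount - there is no free "more security".
import hashlib
import os
import time

def hash_password(password: str, iterations: int) -> bytes:
    salt = os.urandom(16)
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

for n in (10_000, 100_000, 1_000_000):
    start = time.perf_counter()
    hash_password("correct horse battery staple", n)
    print(f"{n:>9} iterations: {time.perf_counter() - start:.3f}s per hash")
```

At some point the login latency (and the CPU bill) stops being worth the extra brute-force resistance, which is the parent's point: "always increase security" has no stopping rule.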
>>> You can make guesses as to the expected resources of the attacker, but if you're wrong and the attacker has more resources, then you might as well have not even bothered.
"Butler spent months plotting to infiltrate and overtake his four competitors, culminating in the two-day hackfest in his overheated safe house high above the Tenderloin. The sites blinked out of existence, their thousands of forum posts later rematerializing on CardersMarket. Iceman now had upwards of 6,000 users on his site, making it by far the biggest carder site on the Internet."
Your security people work 8-5 and go home and leave their work at the office. Most hackers have the ability to go days or weeks at a time banging away on your system until they find a crack wide enough to get in, and then it's game over.
Almost by definition, there is only a secure access continuum for known points of attack. Once you breach the access layer--no matter how it's done--the game is over.
- One copy of your data that is publicly writable is very insecure
- One copy with credentialed access is better
- Redundant copies with credentialed access and PK-signed master-slave synchronization is better still
- Add periodic off-site backups to encrypted media with keys generated using a hash-based one-time password and it's even better
But, oops! Someone left a debug line in the JavaScript that runs your restore-from-backup webapp. The auth layer has been silently truncating passwords for the last 10 months to just 3 characters. All that extra security you added now means ... absolutely nothing. Anyone could have gotten in, and once they're in the backup app, they've got everything.
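A minimal sketch of that class of bug (hypothetical code, and in Python rather than the JavaScript in the story; real systems compare password hashes, plaintext here just keeps the sketch short):

```python
import hmac

DEBUG = True  # hypothetical leftover flag that was never removed

def verify_password(supplied: str, stored: str) -> bool:
    if DEBUG:
        # Debug shortcut: compare only a prefix "to speed up testing".
        # Shipped by accident, it silently truncates every password check.
        supplied, stored = supplied[:3], stored[:3]
    return hmac.compare_digest(supplied.encode(), stored.encode())

print(verify_password("hunter2", "hunter2"))    # True, as expected
print(verify_password("hunchback", "hunter2"))  # also True: "hun" == "hun"
```

Every layer in front of this check can be flawless and it buys you nothing once the check itself is broken.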
Beyond redundant copies to recover after malicious tampering, every single seal must be perfectly tight or you'll leak all your data. I've seen source code for some old Windows 98 malware (analyzed on MSDN, I think) and it's crazy how specific they are. One unchecked array index or untested struct size UINT before a memcpy is all it takes to do a privilege escalation.
Defense in depth is an important principle, and compromise of some but not all layers could be said to form something of a continuum, but I think a stronger case is that odds of a breach forms a continuum.
I think your post also shows a very large misconception in the disclosure world.
It sounds like you're saying that bug bounties should be a free-for-all.
Are you recognizing that these companies often already have security programs in place? Do you also concede that the companies may already be aware of where their vulnerabilities rest?
Large organizations know things that you don't when you're submitting bugs to a reward program. Constraints on a program help them focus on areas where they know they have unknowns. It also helps them deal with situations where they know fixes are scheduled, but not currently implemented.
How are things going to play out if you took the time to discover a bug and the company told you they're not going to pay for it because they already know about it and already have a fix scheduled?
The average 'researcher' is going to be pissed. You don't know if they're telling the truth, you put your valuable time into finding the bug, and you're wondering why you should bother next time.
Rules on a bug bounty program do not necessarily exist to constrain the reporters to only the "known strong areas". They're there to help avoid situations that might lead them to quite reasonably ask why they bothered to try to do a responsible disclosure in the first place.
Well of course there have to be rules. Does spear phishing employees' email accounts and using their passwords to access control panels count as a bug? I bet I could hack a lot of companies that way. Does being susceptible to a massive DDoS count as a bug? Cutting power to the building?
I can't speak for Prezi, but it seems like they want people to test the security of their app, but not of their employees or back office infrastructure. Maybe you disagree, but it's their bounty and I think those are fair rules.
A simple rule of thumb seems to be: does it cause a problem if all the bug bounty hunters take the same approach?
Phishing employees and DDoSing definitely cause problems if a large number of bug bounty hunters - or even one - take that approach.
It seems that even if all the bug bounty hunters had searched for and found http://intra.prezi.com:8081, performed Google searches, and tested the logins they found by hand, no problem would have resulted for Prezi.
So it seems like phishing employees and DDoSing are inherently different than the approach in the post.
Yes, it does. Customers do not care how the intruder got in, only that they got in. Spear phishing is an attack that makes the company look dumb. Leaving the credentials for your source code on the web makes you look even dumber.
To qualify for the bug bounty he should have inserted code into their codebase and then exploited that. Fuck these guys.
Flooding communications channels (in particular, the mental bandwidth of front-line employees) with spear phishing attempts is an attack that interferes with operations even when unsuccessful. It does not make sense to ask the world at large to persistently try such attacks.
How? A phishing site can relay any of this information by acting as a client to the real site while prompting the end user for the requested credentials.
The only way FIDO could prevent this would be to make the credentials dependent on the URL in the browser, but I don't see where it does this.
With FIDO, the user doesn't manually enter a 2FA token into a form field. Instead they press a button or something which directly transmits the token over SSL to the authentication server.
MITM is still possible, but there are other ways to combat that, such as TLS Channel IDs [1] or Bearer Tokens [2].
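A minimal sketch of the origin-binding idea being described (hypothetical code: real FIDO uses asymmetric signatures from hardware-held keys; an HMAC stands in here purely to keep the example short):

```python
# The authenticator signs over the origin the browser is actually talking to,
# so an assertion relayed through a phishing site fails verification.
import hashlib
import hmac

DEVICE_KEY = b"key-that-never-leaves-the-token"  # hardware-held in real FIDO

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    return hmac.new(DEVICE_KEY, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(challenge: bytes, assertion: bytes) -> bool:
    expected = sign_assertion(challenge, "https://real.example")
    return hmac.compare_digest(expected, assertion)

challenge = b"server-issued-nonce"
# Legitimate login: the browser is on the real origin.
assert server_verify(challenge, sign_assertion(challenge, "https://real.example"))
# Phished login: the relay only gets an assertion bound to the wrong origin.
assert not server_verify(challenge, sign_assertion(challenge, "https://evil.example"))
```

Unlike a typed-in OTP, the user can't hand this credential to the wrong site, because the browser, not the user, supplies the origin.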
Large tech companies routinely run pentest exercises against themselves that involve phishing their own employees. Good security has to include educating the human element as well: if you have great technical security but all you have to do to get in is ask an employee their password, you've lost.
Large companies also invest significantly in protection against massive DDoS and power cuts to the building, along with drills for earthquakes and zombie apocalypses.
I wasn't trying to say those things aren't really security problems... just that they perhaps aren't things you'd want random people on the internet attempting to exploit.
They also control the rate at how their own employees get phished, especially if they want the employees to report any suspicious attempts. Constant barrages from outsiders will make the employees stop reporting.
I can see why they would want to set up rules instead of allowing anything to happen.
For example, if I was to set up a bounty I really wouldn't want people at random contacting current or former clients trying to phish for passwords; I completely understand this is a threat, but I would want to personally manage something like that.
With that said, if something like this was found I'd pay the person. There's a point where you just recognize "Oh shit, that's a big hole, pay the man.".
Your point is good. I'd solve the problem like this: Instead of whitelisting certain kinds of attacks and parts of the company for bounty eligibility, I'd create a very limited blacklist. This blacklist would consist of actions which, despite being good-faith security research, would cause unacceptable damage to the company. For example, blacklisted actions might include:
- Deleting the company's data.
- Stealing from customers.
- DDoSing the site.
If you find a bug by taking any of the blacklisted actions, you get no bounty.
This approach protects the company without unduly limiting the thoroughness of the review.
> Why even have a limited scope on bounty programs?
There are a few reasons, most of them having to do with managing day to day operations and keeping the business operating, etc. It'd be great to have everything wide open and getting hammered until anything resembling a vulnerability is found, but that is sadly not really practical in most businesses.
Most bounty hunters aren't operating with precision. Without a doubt some are very meticulous, but a great many will throw every possible tool/option at their disposal at an application. This is great if it finds bugs, but it can also cause a lot of problems if their script generates a few hundred thousand help desk tickets that put your support/sales team way behind at a crucial time.
There's also a lot of politics that comes into play. A lot of times these bounty programs have a split fanbase within company management, and anything that interrupts the business, causes "bad" PR, and such will be quickly pointed out as a reason why the program should be discontinued.
Bug bounties != pen tests. Penetration testing takes a lot more for teams to work with and get something out of, and honestly a lot of organizations don't get anything out of a pen test. They either get a vuln assessment that a scanner jockey exported to PDF and showed up in a sports coat to present, or, if they get an actual pen test by some of the people really doing it, they get their ass handed to them so badly they have no idea what to do.
Bounties are to help a company understand the problems they have and get them fixed. Pen testing is about seeing how well you respond when everything goes to hell around you. Smaller orgs that are constantly beaten down aren't going to get anything done except putting out fires. (Beware, physical-world analogy ahead.) Learning to defend yourself involves working with an instructor and constantly getting better, not paying someone to whip your ass daily until you can't stand. Some people can work through the latter and become very well adapted to mitigating the attacks, but most will just get beat down and quit.
Maybe Prezi was trying to take a stand by not paying the guy for being out of scope, and that's fine - they're certainly dealing with the consequences of that decision - but it's completely understandable as to why they'd want some sort of scope to begin with.
I think it's ridiculous, I've reported similar "out of scope" bugs and got no bounty for them.
Even worse are the companies that DON'T state any kind of bug bounty or instructions to report a security bug...
I found a data leak issue in one of the web properties of an S&P 500 company last week and I'm not sure if I should report it, because I feel that if misunderstood it could have negative consequences for me; and not having a security contact means I can't be sure the person I'm talking to understands my motives.
Sorry, I have some problems with this attitude of expecting a reward for each and every action that benefits other human beings. Whatever happened to altruism?
I don't think you understand, it's not about a reward; it's about having a clearly defined process to report security bugs that is inclusive of every kind of bug.
If you don't have that, people don't know if they are breaking the law by sending you a bug report, and they might not report the issues.
Most of the time, the bounty is not going to pay for my time anyway; I just do it for the fun of it, but it definitely says "security issues are welcome"
We continue to have internal discussions about whether we should give this guy the $500 reward.
My advice would be to add up the hours of the people who have contributed internally to this discussion and multiply them by their hourly rate, then try and add in the rough cost of the delay from these people not spending time on their current projects, add a 30% fudge factor for organisational overhead, add on the $500 that you owe anyway, then double the result and pay that, just so you don't feel like doing this again.
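To put hypothetical numbers on that formula (all of them invented, just to show the order of magnitude):

```python
# Made-up figures plugged into the parent's suggestion.
hours_discussed = 20      # assumed staff hours spent debating the $500
hourly_rate     = 100     # assumed blended rate
delay_cost      = 1_000   # assumed cost of stalled projects
bounty          = 500

base  = hours_discussed * hourly_rate + delay_cost   # 3,000
total = (base * 1.30 + bounty) * 2                   # 30% overhead, then doubled
print(total)  # 8800.0 - more than 17x the bounty itself
```

Even with conservative guesses, the deliberation costs an order of magnitude more than just paying.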
To be frank, this isn't some minor display bug; he had access to your source.
In other words, this could have ended your company.
He could have sold or leaked it. If naivety is stopping you from grasping the possible consequences, then go ahead and read about Adobe's recent mishap.
Sure, it's true that nobody could build a full-fledged knockoff product from the source alone. But what about concepts or features that could result in cheap knockoffs? Designed attacks? Password leaks and user privacy breaches? Customer information that can be sold to competitors? All of the bad PR and loss of business as a result?
You would also have access to their development branches, which would give insight into future product features and bug fixes that have not yet been released. The former would be useful information to give to competitors and really put the company in a tough position to compete down the line, while the latter could be used to find possible critical holes to exploit.
Who says a competitor has to be the one to exploit security holes? It is much more likely that the source code would be sold on the black market to those who have no qualms about doing this for gain.
This is a no-brainer. Surely the risk of putting off skilled people from your bug bounty program due to the press from this could cost you a lot more than $500.
Bug bounties have a purpose and it is not to generate press or to be an equality outreach program. It is to find bugs.
If the rules are getting in the way of what the organisation is actually trying to use those rules for, then to be a stickler for rules is nuts when the same organisation wrote the rules in the first place and can change them at will.
edit - and if it is necessary due to corporate legal waffle to always be a stickler for rules, then make a rule that details the protocol for exceptions.
Someone at your company should probably be thinking about Prezi's reputation. That person should probably have a discussion with whomever is running the bounty program.
You should stop talking and you were smart to delete that other comment.
I was wondering about what truly happened but now I get the impression that Prezi is officious and bureaucratic and I wonder what kind of customer support such an organization would offer:
"Our Terms of Service say we are not responsible for your lost data. Have a nice day and here's a T-Shirt."
It seems that he pointed out a security vulnerability in your infrastructure - something you do have control over. And if a vulnerability is found in an external service you use, do you feel that you don't have a responsibility to mitigate the risk posed by the service whether or not you have direct control over it?
Often this is to keep from having to pay out for bugs you can't fix (the most common things to be out of scope are third party services). In this case the problem was actually on Prezi, but I imagine the rule was written to exclude bugs in their version control system from the bounty program.
The only reason I see: if you provide immunity in exchange for following the rules, you don't want to allow actions that can degrade your service, like DDoSing, online brute forcing, vulnerability scanners, etc.
There should be some neutral third party non-profit that adjudicates bug bounties so that security researchers don't need to worry that their efforts will go to waste.
Companies could sign on to using this third party and pay a fee and put up escrow for the service. This would motivate researchers to find bugs for those companies that utilize the service, knowing payment will be impartial.
A simple option is CrowdCurity - reward programs as a service. Private or public, dollars or bitcoin payments - everything setup and managed for the companies.
PS: the idea is pretty cool. So is the implementation =) Though how would you guys have handled it if an issue like this occurred on your platform? A submitter submits a bug but the company refuses to pay for it, citing "out of scope"?
You know, you are just harming yourself this way. If you must show your stuff on HN, why not post it as a Show HN? Why do this dishonorable thing to gain attention? IMO it actually harms you.
What is the gain in setting up a "Can you hack us?" and then making some parts out of scope?! It's not like a black hat hacker would go "Oh well, this isn't their usual domain, so it's not fair" -.-
The only thing this causes is exceptionally bad PR, or, even worse for the company, someone gets access and you don't know. Access to source code is like the gold mine of exploit finding, because you will know exactly where a vulnerability is, and you won't even have to blindly test for it.
> What is the gain in setting up a "Can you hack us?" and then making some parts out of scope?! It's not like a black hat hacker would go "Oh well, this isn't their usual domain, so it's not fair" -.-
This suggests that anything less than perfect security is worthless. Which is better, having pentesters look for vulnerabilities in 50% of your surface area, or having pentesters look for vulnerabilities in 0% of your surface area?
Setting up a bug bounty program has a cost, both in terms of processing the data submitted and in potential disruption of the provision of services. This cost will differ from attack vector to attack vector. Having pentesters dress up as utility workers and attempt to sneak into your company offices to install keyloggers will have an extremely high cost in terms of disruption. This cost may be higher than the potential benefit of learning about the company's vulnerabilities in this area.
There are also some attack vectors that may be problematic to allow pentesters to probe due to third-party contracts, data protection laws, compliance issues, etc.
You may disagree with the particular areas a company chooses to define as out-of-scope, but to claim that having any areas off-limits renders the whole enterprise pointless is reductive and incorrect.
> This suggests that anything less than perfect security is worthless. Which is better, having pentesters look for vulnerabilities in 50% of your surface area, or having pentesters look for vulnerabilities in 0% of your surface area?
Is this supposed to be rhetorical?
Say you buy a really good front door for your house, and forget to put a back door on your house. I would say that testing the security of the front door is a waste of time.
You should read the rest of that post instead of stopping at the point you quoted. I think he makes a good point: There are real costs associated with expanding security, and there are points at which those costs can become unreasonably high.
I think your point is too extreme. Locking your front door is most definitely NOT a waste of time, because with that move alone, you've automatically protected yourself against the subset of attackers who don't think to try the back door. Are you still vulnerable? Yes, of course. But decidedly less so. As the OP said, 50% is better than 0%.
The real conversation that should be taking place is not whether or not a limited scope should exist (it should), but how far that scope should extend given the costs of extending it.
Exhibit A of why having a scope for bug bounties is a terrible idea. What is the point of testing your app for esoteric bugs when your entire source code and passwords can be Google dorked?
I'm hp, co-founder and CTO of Prezi. We learn from our mistakes, and we have changed the program: from now on we will reward bug hunters who find bugs outside of the scope, provided that they do not violate our users' information and that their report triggers us to improve our code base. We will also retroactively check to see if other reports found issues that fall into this category. More info at engineering.prezi.com/blog/2013/12/03/a-bug-in-the-bugbounty/
This should be up-voted some more so people can see the resolution. I'm glad you guys decided to reward the bug hunter for his time as well as provide a response.
"Out of scope". Wow. Even more worthwhile that such a huge out of scope bug was found. These companies seem to try anything to keep from paying bug bounties.
To be fair, there was a scope set, and the author was fully aware of it:
> I had spent a total of 2 hours sifting and crawling through their services which were in scope, but wanted to see if I could locate any other subdomains, with the assistance of google.
While I agree that he most certainly found a "bug" (perhaps flaw would be a better word), it was out of scope. And using credentials from an employee to log in is nearly always out of scope.
That said, he could have gone "gray-hat" and used the source to find in-scope bugs. Such a resource would be invaluable to an exploit author or bug bounty hunter.
You're right, but it will still get you into legal trouble. Not only may you not get a bounty, but they might sue or press charges for essentially copying and scanning their source code.
Generally "gray hat" and "corporation/law-friendly" don't mix, even if there are some cases that call for it.
From Wikipedia, which agrees with my understanding of the phrase: "… such people sometimes act illegally, though in good will, to identify vulnerabilities in computing processes." My point, though, is that it's hardly out of scope when it's a valuable resource for developing novel attacks on in-scope domains.
Using login credentials that are not your own, found in a public place, to take source code is like finding someone's house key on a park bench and copying their secret invention designs or trade secrets.
As I read it, he didn't use the credentials to take the source code; he found the credentials in the source code. He used the credentials merely to verify the credentials were valid.
Define "take" source code. Do you mean "read" or "access" source code? I know this is an aside, but I think we as a community need to be more judicious in our use of criminally-accusatory words, especially when it comes to taking/stealing/theft vs copying vs distributing/selling vs reading/watching/accessing. They're all very, very different things.
You read my post in the ~5 secs window where it had the word "take." It was the wrong word, because in the case I was talking about it would not have deprived Prezi of access to their source code.
Hi, I just thought I would update everyone on my experience and the last 12 hours.
At the time when I found the bug and was not awarded for it, I was quite upset, as is evident from my tone in the email, where I decided that I did not want to receive any of their "swag" but rather give them some constructive criticism.
I wasn't expecting the blog post to get as noticed as it did, but as it has, I was able to observe great points on both sides of the argument of whether or not I should have received the bug bounty. These discussions were definitely required, as they brought out some important issues with bug bounties today and how security issues should really be dealt with.
Prezi has now both apologised to me and offered to pay me for my findings. I have updated my blog post to show this, as well as the emails exchanged between us. I'm glad that it ended this way - all within the last 12 hours.
Initially, I did not redact the developers' names, and after the blog post took off I had to rush to make sure that I had removed them from all places which were indexed by Google. My intention was not to negatively affect the careers of the Prezi developers affected by my findings.
I thank everyone here, and generally on the internet, for looking closer into my findings.
Break the rules, don't get the money. Surprise!!?? After reading the entire email thread, I think Prezi comes out better off than the OP:
Actually we're continuously thinking on your case and struggling on the right move. On one hand, your finding was very useful for us, and we learnt a lesson from it. On the other hand, intra.prezi.com is out of scope, and by using the credentials to log in you violated the terms and conditions of our bounty program.
...
In the past we turned down the bounty request of people finding issues in out-of-scope services. We had a lot internal discussions about your request: if we were about to pay, we couldn't justify our out-of-scope decisions for anyone else.
...if we were about to pay, we couldn't justify our out-of-scope decisions for anyone else.
What, are we in kindergarten? Does Prezi not have managers entrusted with taking decisions? They can run their bounty program however they want.
That they choose to run it in this fashion sends several messages in addition to the obvious, "we are obnoxious miserly prats". While hackers in white hats might be hearing "concentrate your efforts elsewhere", those in black hear exactly the opposite message. Many people who might previously have admired Prezi for their innovation and paid them money for their services, have now heard a reason to find other means to create presentations. Potential acquirers and potential hires have heard that this company's management finds running a bounty program challenging.
EDIT: Maybe I'm being too harsh. Apparently this is a largely Hungarian company; it's possible there are cultural misunderstandings in play. From a (perhaps cliched?) American perspective, however, following the rules is less important than accomplishing the goals of the program.
What this guy describes doing (using accidentally exposed credentials to log in to somewhere) is quite a bit more than what other people have been successfully prosecuted for violations of the CFAA for. I'd be careful.
Really? According to this monograph even logging into a non-password-protected wifi network which doesn't belong to you has been treated as a case of theft in Hungary:
"We're pretty sure your actions were taken in good faith". Ouch, their email response contained barely an iota of gratitude and it was almost on the verge of passing judgement on his character.
So let me get this straight: someone, aware of their bounty program or not, found their closed SOURCE CODE, and is getting a T-Shirt? How much do you value your own source code? At least $10,000, right? ;) (probably much, much more) Who cares about the scope? If someone found my wallet on the street with $10,000 in it, I would give them a bit more than a T-Shirt; I would buy them a whole wardrobe.
Think if someone found the source code for Windows / Office / Photoshop, without any bounty program, and responsibly disclosed it to the respective companies. If he didn't walk away with a nice amount of money, he could easily just put it on the nearest torrent site* without even feeling guilty (*this is wrong, and illegal, don't do it).
Ignoring the bounty thing for a second, their email response "we think it was in good faith" seems... not right to me. Am I reading that weird, or did they seem pissed about him finding something like that?
He plugged a huge issue for them, and they screw him over due to "scope"... That's their choice, but it still seems bureaucratic to me.
They're talking about viewing the source code and testing the login. The author could have just reported the leaked credentials and not logged in. Testing them, especially since it wasn't part of the program, falls under potentially extremely malicious behavior.
I don't understand why companies start those bug bounties and later try to avoid paying out the rewards. If it were me, I'd book the reward amount as "spent" the minute I decided on a bug bounty hunt.
I think this is (yet another) lesson that participating in these kinds of bounty hunts is very risky and should only be done if the company is reputable (which this one apparently is not).
Seems like Prezi has changed its mind about not paying. Prezi being a Hungarian startup, this story made a buzz in the local media, and one of the leading news sites reached out to them and got this reply:
"Prezi: Hibáztunk és fizetni fogunk", which means:
"We made a mistake, we will pay"
They also said that they will release a blog post and they will change the bounty program, so mistakes like this will not happen again (hopefully)
The analogies are beside the point. Logging in to a system which you don't have permission to access just is illegal in many countries, whether you think that it ought to be or not.
I don't see how those two statements are consistent with each other. The first says you're trying to judge whether or not the law is reasonable, and the second says that you're not trying to comment on what the law ought to be.
Apparently neither Prezi nor the guy who found the login are American, so this particular law might not apply, but many other countries have similar laws.
One wonders if he wouldn't have been better[1] off downloading their app source and using that to find 'in-scope' vulns much more easily than everyone else. They might catch on if you're too effective, though. Maybe a spot of plausible parallel construction.
[1] Except for the totally illegal aspect, obviously. And the not-telling-them-their-source-is-open-to-the-world bit.
Presumably the goal of the bounty was to make Prezi more secure. OP found a serious security hole, without using a "violent" approach (spear phishing, cutting the power, etc). OP reported this security hole.
In a legal sense, they aren't obligated to pay. There are a lot of legal loopholes. By not paying for something that they obviously want to know about, they are discouraging other security researchers from disclosing "out of scope" holes. To what end?
If you succeed, we will give you cash. That’s right; we’ll pay cold hard currency into your bank account. Think of it as a thank you. (Prezi bug bounty site)
I guess the right way to read this is as a (legal, of course) fuck you.
This sends a worrying message to others - in future don't bother reporting vulnerabilities to Prezi, just obtain the source and sell exploits to the highest bidder.
It's no wonder security researchers turn to black hat methods, when they're treated/compensated like shit for their effort. "Swag" in return for your source code? What a joke
"It's no wonder security researchers turn to black hat methods" -- this seems such a binary and pointless reduction of the options available. Yes, Prezi could have turned this into a PR and security win, and failed to capitalize; but the assumption that now the only option for a security researcher is to turn to the dark side is... pretty ridiculous.
Those who "turn to blackhat methods" do so because they want to make money and don't place a premium on the potential moral/legal/ethical issues at play in how they're doing it. They make a choice, irrespective of the shortsightedness on display by Prezi here. Don't conflate the two behaviors.
I'm noticing yet another instance of HN modifying post titles. I originally titled this post "Finding Prezi's Source Code" specifically because I did not write the article. Now the post title reads (at first glance) as if I'm taking credit for the author's hard work.
I think they acted pretty fairly by pointing out that it's the logging in that they have issue with. Although it's not as satisfying, I think Shubham could have submitted the link and credentials to Prezi without actually accessing the repo. In particular, the report email contains the snippet "... I explored the nexus console to confirm that ..." and I can understand Prezi not wanting to encourage pen testers to explore their systems, even if they find them open to the world.
I don't get how there seems to be absolutely no human side to these cases.
Guy discovers critical vulnerability and could have completely fucked the company over. Instead he responsibly reports it, and he gets back a big fuck you. How can you possibly think that's fair? The fact that it's out of scope only means they should give him an out of scope reward - much higher!
Saying he could have not checked the credentials is a bit silly, because if the credentials were invalid (quite likely), it goes from CRITICAL to MINOR.
And isn't the entire point in bug bounties to encourage pen testers to explore your system? Sure, you don't really want them poking around your source control, but better that than black hats.
All of the above aside: they really couldn't spare $500 for someone who could have caused millions of dollars in damage?
> Guy discovers critical vulnerability and could have completely fucked the company over.
We all frequently have the opportunity to cause damage, but we don't get rewarded for _not_ doing so. I think Prezi may have given the cash reward if the pentester hadn't logged in and browsed around. They probably don't want to set a precedent (take the data you find, get cash reward).
> ... because if the credentials were invalid (quite likely), it goes from CRITICAL to MINOR.
Agreed, but either way the pentester won't be able to fix it. All he can do is report his findings.
> ... but better that than black hats.
Agreed, but if you stray outside the terms of the bounty then you're no longer guaranteed the rewards. I think the pentester tried his best to report responsibly but I don't think Prezi are obligated to give the reward, based on the terms.
This seems to be key. Did he just verify the credentials, or did he poke around thereafter? If the latter, Prezi has a better case but they should have stated it more clearly.
I suspect that the biggest reason is that this amazingly gigantic, critical vulnerability was so ridiculously easy to find that they cannot stand the idea of paying someone a large amount of money to "fix" it, when the fix is to simply deny access to that service from outside a LAN or whatever. Prezi thought that they found all of the easy ones. Not quite.
My problem here is that the OP did not mask the names. Actually he did quite the opposite: he bolded them. This is no good. I can imagine the dev searching for his name on Google and finding that post.
Hi, I'm the author of the blog post. I've masked last names from the post and PDF, hopefully meaning that they won't be indexed with that post. Thanks for bringing that to my attention.
This is definitely out of the scope of their "bughunt", although I think the guy should be rewarded anyway.
But I'm also quite upset with the fact that OP is outing the dev. Everybody makes mistakes, no need to out any individual developer because OP is pissed at the company management.
I realised 2-3 hours after my blog post, and rushed to redact the last names from the post + pdf. I have now also redacted last names from the screenshots. Sorry about that! But thank you for letting me know. :)
> "Anyways, they did try and get it right, by emailing me an apology as well as responding to my constructive criticism. This blog post, is by no means attempting to discourage people from participating from Prezi’s bug bounty, but rather just a blog post about how finding Prezi’s source code was not eligible for their bug bounty."
Passive aggressive much?
I think he should have got a bounty: if not the official one, then a special, bigger one. However, this is an odd way to conclude the post. "Oh, I'm not at all trying to discourage others from participating, oh no no". Of course he's trying to discourage others. With justification. I don't get it.
I think he's just being humble: he disagrees with their policy, but isn't claiming that everyone else should just because he does; make your own decision. Fair enough.
Oh I see. You mean like, "Here's my experience; I decided to stop participating. But I'm not advising you to. Offer not valid in all areas. Yada yada..."
This would be unethical and I would never do it, but the interesting scenario would have been if he'd secretly pulled the source code and used his access to it to find a bunch more bugs. He would look like a genius and pocket a bunch more money.
The rules seem to allow a reward for this kind of vulnerability:
> What’s up with other vulnerabilities? ... we will consider if they are eligible for a bounty or not
> What is the bounty? ... we will increase it at our discretion for distinctly creative or severe bugs
Prezi explicitly designed the rules to be flexible, so they could give the award in this case, but decided not to because "intra.prezi.com is out of scope".
The rules about scope appear to exclude vulnerabilities in 3rd-party services such as AWS, not backends:
> e.g., the backends for our iPad and desktop applications are in scope
It would have been easy for him to steal the source code and blackmail them for bitcoins. By not giving fair rewards, companies are encouraging others to turn to the dark side. I'm pretty sure there are lots of smart people living in difficult economic conditions who will now think twice before reporting a serious vulnerability, given the risk of an unfair reward. If Synack can solve this, it would be a major win for everyone.
A bounty program is to get 'white hat' hackers to find and report vulnerabilities. The bounty is small, nowhere near what an extortionist could charge to keep the source secret for instance.
By paying nothing for what could have been sold back to them for a huge sum, they may disaffect hackers who could do them real harm. You become a sucker for volunteering under their 'bounty', and might decide to turn to the dark side instead.
I think Prezi are very silly to be splitting hairs about this. They stuck the stick in the hornets' nest, now they are arguing with the hornets.
The guy found and brought to their attention a simple exploit that could have seen valuable source code released into the wild, and the guys at Prezi are debating whether to pay him a bounty?
Does this mean that Prezi do not value their code and don't believe there would have been any significant loss if that code became public?
Are they saying that the next person who discovers a serious flaw in their security should just keep quiet - or sell it on to some hacker, so at least they can make some money from it?
Just what message are the Prezi people trying to send by nit-picking over $500?
Silly PR move on their part. They should've given this guy some shush money to prevent this (now) PR nightmare. Shoddy security practices, shoddy marketing and PR. Tsk, tsk.
So the question I haven't seen asked in this thread is: Why is anyone still using something other than SSH to connect to their version control system? Why is any software still using usernames and passwords stored in plain text anywhere? With SSH, you create SSH key pairs and set a passphrase on the private key... which shouldn't end up in any public place, ever.
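A minimal sketch of what I mean (host and file names are hypothetical, not Prezi's):

    # generate a key pair; ssh-keygen prompts for the passphrase
    # that protects the private key on disk
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_vcs

    # ~/.ssh/config -- point ssh at that key for the (hypothetical) VCS host
    Host code.example.com
        User git
        IdentityFile ~/.ssh/id_ed25519_vcs

Only the public half (~/.ssh/id_ed25519_vcs.pub) ever goes to the server; there is no shared secret sitting around in a properties file.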
Well, the credentials in the properties file shouldn't have ended up in a public place, ever. So if you replace the username/password with a key, a human can still accidentally publicize the key.
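For illustration, the sort of thing that leaks looks like this (all values hypothetical):

    # build.properties -- checked into the repo "temporarily", then forgotten
    nexus.url=https://intra.example.com/nexus
    nexus.username=deploy
    nexus.password=hunter2

Swap the password line for a path to a private key file, and the same careless commit can expose the key itself just as easily.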
This policy of limiting security assessments/bug bounties to only certain things is really stupid.
Do you really think that any extremely motivated hacker would just stick to the arbitrary terms you set?
He will do whatever it takes to get in, and by limiting security research you're making yourself vulnerable in other areas not defined in that assessment request.
But that can be said of the class files of any Java (jar) program. It is also not difficult to decipher the assembly of a disassembled exe file, but to equate that with finding the source code of the program would be disingenuous.
Decompilation of C executables is much less accurate and usable than decompilation of Java class files, which usually produces near-verbatim Java source. I don't know whether source was directly disclosed here, but if they leaked vanilla Java class files, that's basically equivalent to their source code.
You can drag and drop that jar file into http://jd.benow.ca/ and in two clicks you have essentially all of the source code, variable names and all. It's not the same as decompiling a C executable by any means.
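To make that concrete, here's a toy class (not Prezi's code); compile it with Maven's defaults, which pass javac the -g flag and so keep the local-variable table:

    // Greeter.java -- throwaway example
    public class Greeter {
        public String greet(String name) {
            // local variable names survive in the class file when built with -g
            String message = "Hello, " + name;
            return message;
        }
    }

Feed the resulting Greeter.class to JD (or a similar decompiler like CFR) and what comes back is essentially this same text, names and all; only the comments are gone for good.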
I don't know of enough places that use Nexus to say whether it is common practice or not; however, we do not bundle sources into our jar files at my place of employment, where we do use Nexus. If we wanted to bundle sources into a jar, that would have to be done explicitly, as it would require something like Maven's source plugin. In fact, in Maven the standard seems to be to include sources in a separate jar if one wants to publish them at all, i.e. again requiring explicit choice and configuration - see the sketch below.
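For reference, that explicit configuration looks roughly like this in a pom.xml (standard maven-source-plugin coordinates; the rest of the build is assumed):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-source-plugin</artifactId>
      <executions>
        <execution>
          <id>attach-sources</id>
          <goals>
            <!-- produces a separate *-sources.jar alongside the main artifact -->
            <goal>jar-no-fork</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

Without something like this, the deployed jar holds class files only, which (as noted above) a decompiler handles just fine anyway.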
Having stringent terms for a bug bounty program basically means you're trying to get the community to do your team's job. Agree with @nikcub - it should be wide open, because finding this out was huge, no matter how "simple" it may have been.
Bug bounty program or not, I would be pretty afraid to try to log into a source code repository without authorization to do so. It seems like a lawyer could really go after you for doing something like this.
The main point is that what OP found is really important to Prezi. I don't really understand why they have to figure out whether the vulnerability is in "the scope" or not.
So, the message they are sending is "if you find an 'out of scope' bug, sell it on the black market, because even if it could wreak havoc, we won't pay you for it."
Are bug bounties roughly the market value of security holes in software? I wonder if this guy or less scrupulous developers could make more for them on the black market?
If the exploits are for the right targets, you bet they're worth more on the black market, but with great reward comes great risk: now you're doing something that can possibly get you jail time.
I removed the last names from the blog post and from the PDF, as they could be indexed by Google. I have now also removed them from the screenshots. Thanks. My intention was not to negatively affect these developers' careers.