Hacker News

It's a bug bounty, not a "only if we have time to fix it" bounty.

He found a security problem, they decided not to act on it, but it was still an acknowledged security problem




>It's a bug bounty, not a "only if we have time to fix it" bounty

It's only a bug if it's not intended


I think a lot of developers and companies interpret "that's the way the code or process works" as intentional behavior, which is not always the case.


Do some companies intend for their platform to feature remote code execution?


Some might very well do. E.g. a company with a service for training hackers and security researchers.

In this case the question is moot, as this doesn't involve remote code execution.


Make a general point, get a general answer.

If the criterion for a bug is "not intended", and that's judged solely by the company, then broken auth et al. suddenly become part of their product design.

If it quacks like a bug, it's a bug.


Remote code execution is literally a feature of GitHub…


Sandboxed code execution is a bit different than RCE.


The point of a bug bounty is for companies to find new security problems.

If the (class of) problem is already known, it’s not worth rewarding.


I can see this argument making some sense, but if they only documented this 3 years after the issue was reported, they have no way to demonstrate that they truly knew about it already.

In the end it boils down to: is Github being honest and fair in answering bug bounty reports?

If you think it is, cool.

If you don't, maybe it's not worth playing ball with Github's bug bounty process.


It doesn't matter if they knew. If they don't deem it a security vulnerability --- and they have put their money where their mouth is, by documenting it as part of the platform behavior --- it's not eligible for a payout. It can be a bug, but if it's not the kind of bug the bounty program is designed to address, it's not getting paid out. The incentives you create by paying for every random non-vulnerability are really bad.

The subtext of this thread is that companies should reward any research that turns up surprising or user-hostile behavior in products. It's good to want things. But that is not the point of a security bug bounty.


> The incentives you create by paying for every random non-vulnerability are really bad.

So much this. It's pretty clear that most people commenting on this thread have never been involved in a bug bounty program on the company's side.

Bug bounty programs get a lot of reports, most of which are frankly useless and many of which are cases of intended behavior subjectively perceived as problematic. Sifting through that mess is a lot of work, and if you regularly pay out on unhelpful reports you end up with many more unhelpful reports.

This particular case definitely feels like one where the intended behavior is horribly broken, but there are absolutely many cases where "this is intended" is the only valid answer to a report.


I would argue that even if the behaviour was as intended, at least the fact that it was not documented was a bug (and a pretty serious one at that).


Again: you don't generally get bounties for finding "bugs"; you get them exclusively for finding qualified vulnerabilities.


That's true, but what's stopping a company from documenting a security issue as a known (mis)behaviour/bug? [*]

Companies can join or set up a bug bounty program and just use it as a fig leaf for pretending to care about their own product/service's security.

Of course, bug bounties can be and are abused daily by people who report trivial non-issues in the hope of compensation.

But in the same way, companies can also be bad actors in the way that they engage with bounties. I would usually expect big names (like Google, Apple, Github, etc.) to be trustworthy...

[*] Of course what stops companies is precisely them not being seen as trustworthy actors in the bug bounty system anymore... And for now, that's a decision that individuals have to make themselves


No large company cares even a tiny bit about the money they're spending on bug bounties. They would literally lose money trying to cheat, because it would cost them more in labor to argue with people than to pay out. In reality, the bounty teams at Google and Apple are incentivized to maximize payouts, not minimize them.

If you don't trust the company running a bounty, don't participate. There are more lucrative ways to put vulnerability research skill to use.


If a renowned company won't pay a bug bounty, a foreign government often will.


Why would a foreign government pay for a commonly known security limitation of a product?


Good luck selling this to a foreign (or domestic) government. It doesn’t seem valuable to me, but who knows, maybe someone finds it worth payout.


The property (“bug”) in question is an inherent and intentional property of meekly-tree type storage systems such as git.

Calling this a bug is like reporting that telnet sends information unencrypted.

The actual bug is in the way that their UX paradigm sets user expectations.
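To make the content-addressed ("Merkle tree") property concrete: every git object's name is a hash of its contents, so anyone who knows the name can ask for the object regardless of whether any ref still reaches it. A minimal sketch of how git names a blob (this mirrors `git hash-object`; the function name here is just for illustration):

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    # Git names a blob by SHA-1 over the header "blob <size>\0" plus the raw content.
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches `printf 'hello\n' | git hash-object --stdin`:
print(git_blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```

Since the name is derived purely from the content, the storage layer has no notion of "who is allowed to reach this object"; reachability lives in the refs, not in the object store.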


Don't blame Git for Github decisions.

Github chooses to store all "Github forks" in the same repository, and allow accessing things in that repository even when they are not reachable by the refs in the namespace of one "fork". That is purely a Github decision.
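The "unreachable but still addressable" behavior is easy to see locally with plain git, no Github involved (a throwaway-repo sketch; deleting the only ref to a commit does not delete the commit object, so anything that knows the hash can still read it until `git gc` prunes it):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email you@example.com   # throwaway identity for the demo
git config user.name demo
git symbolic-ref HEAD refs/heads/main   # pin the branch name across git versions
echo "secret token" > file.txt
git add file.txt
git commit -qm "add secret"
sha=$(git rev-parse HEAD)
git checkout -q --orphan scratch        # move off the branch...
git commit -qm "unrelated"
git branch -D main                      # ...and delete the only ref to the commit
# No ref reaches $sha anymore, yet the object store still serves it by hash:
git cat-file -p "$sha:file.txt"         # prints "secret token"
```

Github's twist is that one shared object store backs an entire fork network, so "knows the hash" extends across fork boundaries; git itself only ever promises that reachable objects are the ones you meant to publish.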


They could have split forks off into new repos, but then they wouldn’t be forks, in the repository sense. It was never hard to just copy a repo instead of forking it. The UX just leads people into holding it wrong.


s/meekly/Merkle/g


lol. Someday autocorrect is going to take over my social media entirely.




