Slack account takeovers using HTTP Request Smuggling (hackerone.com)
459 points by bartkappenburg on March 13, 2020 | 102 comments



Protecting against request smuggling:

- If you don't have a proxy fronting traffic, no action required

- If you're behind Fastly/Cloudflare [1] or Akamai [2], no action required / they protect against this attack

- If you're behind AWS CloudFront, no action required / they protect against this attack

- If you're behind AWS ALB, you're vulnerable by default but can opt in to protection by enabling the "routing.http.drop_invalid_header_fields.enabled" attribute [3] (a configuration sketch follows the references below). AWS initially had it on by default, but it broke some customers' traffic

- If you have a different proxy (e.g. some other provider or your own nginx, haproxy before 2.0.6 [2], etc), you might be vulnerable

[1]: https://portswigger.net/research/http-desync-attacks-request...

[2]: https://portswigger.net/research/http-desync-attacks-what-ha...

[3]: https://docs.aws.amazon.com/elasticloadbalancing/latest/APIR...
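
For the ALB case, flipping that switch looks roughly like this (a sketch using boto3; the ARN is a placeholder):

  import boto3

  elbv2 = boto3.client("elbv2")

  # Opt in to dropping headers with invalid names, which closes off the
  # desync vector described in [3]. Off by default because enabling it
  # broke some customers' traffic.
  elbv2.modify_load_balancer_attributes(
      LoadBalancerArn="arn:aws:elasticloadbalancing:region:acct:loadbalancer/app/my-alb/123",  # placeholder
      Attributes=[{
          "Key": "routing.http.drop_invalid_header_fields.enabled",
          "Value": "true",
      }],
  )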


This is only partially true. The attack exploits a desync between your application and the proxy, so you can be vulnerable to this attack even if you are behind a proxy that respects the HTTP spec, if your application doesn't. I actually wrote a Medium article about this, since I fixed the vulnerability in gunicorn [1] a while ago, but I hadn't released it yet (I will, and will repost it here). What you linked to for the AWS ALB only fixes ONE way to create a desync between the proxy and the server.

EDIT: Here is the link to my blog post https://medium.com/@emilefugulin/http-desync-attacks-with-py...

[1] https://github.com/benoitc/gunicorn/pull/2181


> If you have a different proxy (e.g. some other provider or your own nginx, haproxy before 2.0.6 [2], etc), you might be vulnerable

So I think I'm vulnerable, because my setup is nginx->customhttp and my customhttp doesn't understand Transfer-Encoding: chunked (if such a request comes in, I just return an error).

As far as I understand, this problem only arises when both Content-Length and Transfer-Encoding are present, in which case Content-Length will be ignored.

Shouldn't this attack be very easily avoided if nginx just discarded the Content-Length header in such cases? Why should nginx ever send both to the backend?


(I'm assuming here that customhttp means you've got hand-rolled code and isn't somebody's terrible name for their product)

If your customhttp isn't smart enough to handle chunked encoding, it might also just always Connection: close every request, or even just act like an HTTP/0.9 server, in which case it isn't vulnerable.

Request Smuggling requires that both the intermediary (for you, nginx) and backend (your customhttp) believe it is possible for one TLS connection to contain multiple HTTP requests; they just disagree on where the boundaries are between those requests.

If either of them insists that no, one TLS connection = one HTTP request, that's legal (but has poor performance which may or may not matter to you) and immunises against Request Smuggling.
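
To illustrate (a minimal Python sketch using the stdlib, obviously not your customhttp): a server that speaks HTTP/1.0 never keeps the connection alive, so there is no "next request" on the wire to smuggle into:

  from http.server import BaseHTTPRequestHandler, HTTPServer

  class OneShotHandler(BaseHTTPRequestHandler):
      # HTTP/1.0 is the default protocol_version; with it, the handler
      # always closes the connection after responding, so one
      # connection = one request.
      protocol_version = "HTTP/1.0"

      def do_GET(self):
          body = b"hello\n"
          self.send_response(200)
          self.send_header("Content-Length", str(len(body)))
          self.send_header("Connection", "close")  # belt and braces
          self.end_headers()
          self.wfile.write(body)

  HTTPServer(("127.0.0.1", 8080), OneShotHandler).serve_forever()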


By "customhttp" I meant my hand-rolled (subset of)http-server. It doesn't support chunked encoding because I also control the client-code but it does support "connection keep-alive" for better performance.

I haven't tested it yet, but I'm pretty sure it's vulnerable: my server just looks for Content-Length and if it doesn't find it, it returns an error. So if nginx sends Content-Length but then continues to send chunked content, it should be possible that the next request won't come from nginx but from an attacker (probably not really a problem at the time, but nevertheless not the expected behaviour).

My failure was to always expect "valid" HTTP input from nginx, where "valid" meant my limited knowledge of HTTP.

But the question remains: Is there a reason why nginx should send both headers to the backend?


If you don't implement an encoding, you should obey the HTTP/1.1 standard and "return 501 (Unimplemented), and close the connection". In your case, even though not implementing chunked encoding is prohibited, this fallback would save you.

Valid means what the standard says it means. What you don't know can hurt you.

However, if modifying your backend code is hard, you can apparently tell nginx that you don't do chunked encoding and it will sort out the 501 on your behalf. I have not tried this and YMMV.
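
A minimal sketch of that guard (hypothetical code, assuming a hand-rolled server with a raw socket `conn` and headers already parsed into a lower-cased dict):

  # Refuse any transfer coding we don't implement, per the standard.
  def read_body(conn, headers):
      if "transfer-encoding" in headers:
          # We never implemented chunked (or any other) transfer coding:
          # answer 501 and close, so leftover body bytes can't be
          # reinterpreted as the start of a smuggled next request.
          conn.sendall(
              b"HTTP/1.1 501 Not Implemented\r\n"
              b"Connection: close\r\n"
              b"Content-Length: 0\r\n\r\n"
          )
          conn.close()
          return None
      # Real code must loop: recv() may return fewer bytes than asked for.
      length = int(headers.get("content-length", "0"))
      return conn.recv(length) if length else b""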


I completely agree, and I'm sure nginx is doing it right by following the specification, but isn't this a bug in the specification? Why should there ever be a Content-Length together with a chunked body?

And if this is a bug in the specification, shouldn't nginx fix it to help us backend-fools?


Regarding Cloudflare/Fastly, you do need to make sure you're only allowing requests that originate from the proxy, either via IP-based firewall rules or something like CF's authenticated origin pulls [0]. Otherwise someone could find your origin server's IP and potentially perform this attack (and generally bypass your security settings).

0: https://support.cloudflare.com/hc/en-us/articles/204899617-A...


Allowing connections to your backend directly might make you vulnerable to certain types of attack, but it doesn't impact Request Smuggling.

The trick in Request Smuggling is that you're trusting an intermediary (in this case a frontend reverse proxy) to mingle everybody's requests into a single pile for you to process and they don't agree with you about how to do this. Chuck thus gets to submit a request which is mingled with Alice's and you end up letting Chuck modify Alice's request. Oops.

But Chuck sending requests directly to your backend doesn't allow him to do this. You're definitely not going to think Chuck's weird garbled nonsense is part of Alice's request when it isn't even on the same TLS connection.


Also keep in mind that if you use IP-based whitelisting, an attacker can register their own CF/Fastly account and target your origin server with whatever CDN settings they want (assuming they can discover your origin server). With Fastly at least you can even do this from the free tier.


Took me a second to wrap my head around what you were saying, so I'll point it out: they'd be pointing their CDN account to your origin server, and making requests through it.


Same for Cloudflare - to mitigate this, your server should only respond to the correct Host header for your website.


I was unfamiliar with request smuggling; here's an explainer: https://portswigger.net/web-security/request-smuggling

The first in-band signalling attack I came across was the blue box [1], invented in the late 1960s. It still occasionally worked in the 1980s on older phone systems. It amazes me that we're still creating new systems vulnerable to in-band attacks.

[1] https://en.wikipedia.org/wiki/Blue_box


Can someone explain the redirect/cookie-stealing part of this? I read through the post explaining request smuggling, and then re-read their exploit description, and the request smuggling part all makes sense. However, I don't fully understand the significance of the 301 in relation to the browser sending the attacker's server the cookie with the request.

If the back-end server is already proxying whatever request it receives from the front-end proxy server, why was it necessary to first get it to redirect an HTTP 1.1 header to an https request? The only thing I could think of was that maybe it has something to do with getting the unencrypted cookie, but according to their description, the forwarded request with the cookie doesn't happen until after the redirect to https anyway.

I know I'm missing something obvious here.


The collab_2.png screenshot shows `User-Agent: ... Slack/4.1.2 ... Electron/6.0.10 ...`, so it's their own desktop app doing the https://slackb.com/... HTTP 301 -> https://*.burpcollaborator.com request. Perhaps their client implements its own quirky redirect-following, which keeps the original `Cookie: ...` headers in the redirected request?

I find it hard to believe that any browser would keep the original `Cookie: ...` headers in a redirected request to a different origin.


Interesting! That's definitely something I missed, thank you.

I still wouldn't expect an Electron app to subvert basic browser sandboxing by default, particularly where they wouldn't have expected to need to redirect users to other domains with cookies intact. It seems like they'd need to go out of their way to enable that.

I wonder if it has to do with the sign-in tokens they send, or otherwise allowing the user to move between the browser and the app within their account. For example, when you're in the app and click "Manage Users" and it sends you to a management dashboard in the browser, or when you click a link with an auth token in the browser and it launches you into the app.


Some proxies follow redirects... It enables devs to do things like "redirect request to the old server", and the client never needs to know (which is important for maintaining compatibility with an old api)

In that case, I'd expect all cookies etc. to be forwarded.


What's subtle is that the middlebox is being tricked into parsing the body of an HTTP request as the envelope, or vice versa. The requests hitting the backend are being scrambled. The browser is doing what it can to ensure cookies only go to the right domain, but the servers are chopping those careful requests up and sending them to random places.


> the servers are chopping those careful requests up and sending them to random places.

I understand that the backend is parsing different requests than the frontend, but I don't see how the cookies ever get sent to the attacker's domain.

I understand three requests as moving like this:

    1. attacker -> frontend -> backend (partially) -> frontend -> attacker
    2. victim -> frontend -> backend (including a bit of 1) -> frontend -> victim
    3. (after 301) victim -> malicious server
The second request results in a 301, causing the victim's client to make a new request to the target of the 301. This new request does not include cookies, because it is to a different domain.

What am I misunderstanding? When do the cookies get sent to the malicious server?


Alongside a legitimate request with cookies, there is queued an evil request that uses mismatched Content-Length and chunked encoding headers to graft the body of the evil request to the envelope of the legitimate request. The evil body becomes the envelope for the entire legitimate request, and the evil envelope generates the redirect.

Skip down to the "Explanation of malicious request" section.


All of your explanation (which is appreciated) just further describes the request smuggling, not how that ends up sending the Slack cookies to the hacker's server.

If you take all of this together, it's all leading to Slack's back-end server returning to the user's client a 301 response that tells the user's browser to send a new request to the hacker's domain. However, when the browser does this, it's to a different domain, so typically the slack cookies would not get sent with that request, unless I'm missing something.

The "Explanation of malicious request" section doesn't explain this, either. All it explains is that all this effort was to get Slack's server to send a 301 response to the user's browser pointing to the hacker's server, but then it glosses over any information about how the cookies are getting sent with a single sentence, "and all cookies (including 'd') get redirected there too.... :("

My question is, how?

Is something going on where Slack's back-end servers are explicitly setting response headers that allow their cookies to be sent to the hacker's domain? Is there some additional vulnerability that's allowing the browser to send the cookie across domains on a 301 redirect?


If you zoom in on the last image in the report, you see that the victim's client is not a web browser, but Slack's Electron-based client. Perhaps that client is willing to send Slack cookies cross-domain after a redirect for an auth'd request?

Edit: looks like terom above me had the same idea before I did


I get that the evil body essentially gets prefixed to the legitimate request, making the server generate the redirect, but this redirect is returned to the victim. The victim then makes a request to the malicious server, and this request should not contain cookies because the origin is different. How do the cookies get to the attacker?


Are you saying it's Slack's front-end server that is processing the 301, and not the user's web browser? And that Slack's front-end server is configured such that when it gets a 301, it just resends the entire request (including all headers) to the redirect destination?

Edit: this seems likely to not be what he was saying and also not the case.


The hackerone write up which this submission points to actually goes into explicit detail on what happens step by step, along with nice infographics.

The TL;DR is that you can make the front-end see something like GET /my/supplied/url\nX:X at the end of your request, but the back-end sees it as the beginning of the next request (and the X:X turns the real GET/POST into a custom header of X:XGET /real/request/url), and Slack returns that request with cookies for that person, but to a domain controlled by you when it comes back. The diagrams included in the post and the accompanying text do a better job of explaining it than I am, I think.
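
Schematically (illustrative payload and byte counts, not the exact request used against Slack), a CL.TE-style request looks like:

  POST / HTTP/1.1
  Host: target.example
  Content-Length: 37
  Transfer-Encoding: chunked

  0

  GET /attacker/url HTTP/1.1
  X: X

The front-end honours Content-Length and forwards all 37 bytes after the blank line as the body of one request. The back-end honours Transfer-Encoding, treats "0\r\n\r\n" as an empty chunked body, and leaves "GET /attacker/url HTTP/1.1\r\nX: X" in its buffer, where it becomes the prefix of the victim's next request (whose real request line glues onto "X: X" as a junk header value).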


I'm also similarly confused. The post explains request smuggling really well, but doesn't really explain how that equals getting the victim's cookies.

This is my understanding of the bug:

* Attacker sends malformed request to slack servers. Request gets split in two. First part results in something sent back to attacker, next part of request is treated as a prefix to the next legit (victim's) http request made

* Slack backend server responds to victim's request treating it as having part of the attacker's request prefixed. The merged request results in a 301 http redirect response, redirecting the victim to an attacker controlled domain.

* Victim's browser gets the 301 and follows the redirect. When following the redirect, the cookie header is somehow sent to the attacker's site <-- the part I don't understand

I don't understand why the victim's browser would send the cookie header when following a redirect to a different domain. I don't understand how causing the victim to follow a cross-domain redirect would allow the attacker to extract the victim's cookie.


Yes, this was precisely my question. I agree with you that the post does a great job of explaining the how request smuggling works, and how they used it to trick Slack's server into responding to the user's client with a 301 redirect to the hacker's server. But then, when they got to the part where the cookies are sent, they just said, "and the cookies are sent" with no further explanation.

That's the part I'm stuck on. Is there some browser behavior I'm not remembering where it sends sensitive cookies across domains just because one redirected the browser to the other?


I think this rather applies to API clients, which can handle that differently. A quick test with Python's requests shows that all headers are forwarded on redirect regardless:

  requests.request("GET", "http://localhost/redirect-to?url=http%3A%2F%2Fhttpbin.org%2Fget", headers={"x-foo": "bar", "Cookies": "abc=dcv;"}).json()
  
  {
    "args": {},
    "headers": {
      "Accept": "*/*",
      "Accept-Encoding": "gzip, deflate",
      "Cookies": "abc=dcv;",
      "Host": "httpbin.org",
      "User-Agent": "python-requests/2.22.0",
      "X-Amzn-Trace-Id": "Root=1-5e6bebe1-cd6e71729a818015a06aa7cb",
      "X-Foo": "bar"
    },
    "origin": "91.58.8.128",
    "url": "http://httpbin.org/get"
  }
  

I'm running a local copy of httpbin on localhost, so Python's requests should not send sensitive headers on the redirect, but it does. Golang is a bit more explicit about its HTTP client behavior:

> • when forwarding sensitive headers like "Authorization", "WWW-Authenticate", and "Cookie" to untrusted targets. These headers will be ignored when following a redirect to a domain that is not a subdomain match or exact match of the initial domain. For example, a redirect from "foo.com" to either "foo.com" or "sub.foo.com" will forward the sensitive headers, but a redirect to "bar.com" will not.

https://golang.org/pkg/net/http/#Client

Though this might also cause problems if you are using sensitive non-standard headers such as X-Token for token authentication, etc.

So while you could probably mitigate this vulnerability on some clients, you are trusting the server to only redirect you to trusted URLs which is not the case here.
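
If you control the client, one defensive pattern (a sketch with a made-up helper, not something requests does for you) is to turn off automatic redirects and re-issue the request yourself, dropping sensitive headers whenever the redirect leaves the original host:

  from urllib.parse import urljoin, urlsplit
  import requests

  SENSITIVE = {"authorization", "cookie", "x-token"}  # x-token: the custom-header example above

  def get_with_safe_redirect(url, headers):
      # Hypothetical helper: follow at most one redirect, manually.
      resp = requests.get(url, headers=headers, allow_redirects=False)
      if resp.is_redirect:
          target = urljoin(url, resp.headers["Location"])  # Location may be relative
          if urlsplit(target).hostname != urlsplit(url).hostname:
              headers = {k: v for k, v in headers.items()
                         if k.lower() not in SENSITIVE}
          resp = requests.get(target, headers=headers, allow_redirects=False)
      return resp

Even then, as noted, you are still trusting the server not to redirect you somewhere hostile in the first place.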


Hmm, that would make sense. The user-agent header does have "slack" in it, so probably not a web browser, and it's not far-fetched that most libraries don't implement the same-origin policy when it comes to headers and redirects.


"Cookies" is not a sensitive header in a request. "Cookie" is.


I imagine one way to do it is to aim the attacking request at a redirect. Since the back-end conflates the attacking request and the next request (or a portion of the next request), you can get that next request, including the headers (i.e. cookies), as the body/payload redirected to that external site. If the attacker controls that external site, they can then see the cookie data.

Another way would be to make it so the next request is the content of a param which is displayed in some manner. For example, if it's the content to a message post to a slack forum.

Those are different from what they outline in the article, but neither seems unlikely to be found on the back-end in some manner. Redirects are often locked down, but not always, and finding something that sends a variable back to you (such as a form post target that you know will error "helpfully") doesn't seem like it would be hard to me.


I'm not sure I am understanding

> I imagine one way to do it is to make the attacking request to a redirect. Since the back-end conflates the attacking request and the next request (or a portion of the next request), you can get that next request, including the headers (i.e. cookies) as the body/payload redirected to that external site. If the attacker controls that external site, they then can see the cookie data.

Are you suggesting that if the backend server was set up as an HTTP proxy, you could get the backend server to proxy the merged request somewhere else?

That's true, but it seems pretty unlikely that the backend servers would be set up that way.

Although I think I might just be misunderstanding what you mean.

> Another way would be to make it so the next request is the content of a param which is displayed in some manner. For example, if it's the content to a message post to a slack forum.

I.e. you're suggesting basically using this as a CSRF attack to post part of the victim's request (including the secret cookie) somewhere public to retrieve later?

I think this would work in certain situations but would be difficult to pull off. It's a bit more complicated because if the site was using CSRF tokens you would need to make sure the victim request was a POST with a token, but that is easy enough: retry until it happens. The bigger issue is the request body would be kind of malformed. If the endpoint you are posting to accepts application/x-www-form-urlencoded (and not JSON), I imagine it's interpreted super liberally. I guess the biggest issue is if ; is considered a form field separator (since user-agent usually has ; in it). I think some standards tried to promote that, but I don't know if web servers actually implement it. If they do, it would be very difficult for the attacker to control the name of the form field, and thus actually get the part they wanted publicly posted.


> Are you suggesting that if the backend server was set up as an HTTP proxy, you could get the backend server to proxy the merged request somewhere else?

I'm suggesting that since the back-end server may see the requests as a single request, with the second request as part of the payload of your crafted request, the question then becomes one of "In what ways can I get this site to show one of my params?". That's similar to a CSRF, but simpler in that you don't care that it's unescaped; you just care that it's in the source in some way, since it's what should be displayed back to you for your request.

My form example is straightforward. Imagine you are requesting http://slack.com/help/search. If you reorder the params slightly, such that the "query" param is last, you might get a good chunk of the next request sent to the back-end as your query param, which Slack will helpfully display for you, both on the page and in the search input on the resulting page. It will clean out newlines, and likely some other characters, but that hardly seems a problem.

> I.e. you're suggesting basically using this as a CSRF attack to post part of the victims request (including secret cookie) somewhere public to retrieve later?

Yep.

> I think this would work in certain situations but would be difficult to pull off. It's a bit more complicated because if the site was using CSRF tokens...

I just found one and used it in an example above. Took me less than 5 minutes (I tried a different form that did have a csrf token first).

> The bigger issue is the request body would be kind of malformed.

That's only a problem if you want it to look legit to someone else. If you just have to note that it has the equivalent of /\sCookie:\s(\S+)\s/ in it... well, that's not too bad. Removing the newlines makes it all one string, but in the example above they helpfully replace them with spaces. Even if they didn't, it's probably not hard to see where the next header begins.

> I guess the biggest issue is if ; is considered a form field separator (since user-agent usually has ; in it).

Depends on how you do it. My example above uses GET, so any ampersand (which is probably likely to exist in many queries) will throw it off. A POST might offer more options. Even with the GET, you might be able to run it enough times to find a post as the included second request, and that's less likely to have an ampersand in the URL, and you would probably scoop up all the headers at least. If you post with multipart mime, depending on how lenient it is on a final terminator, you might not have any problem at all if you find the right page to post to.

I did just confirm that that same search page doesn't seem to honor params passed in a POST, so that's good (for Slack).

I think the bottom line is that once you can make part or all of someone else's HTTP request show as part of your request's payload, it's a drastically lower bar for exploiting, as there are many possible ways to exfiltrate the data at that point, and they only have to screw up by allowing one of them to work.


> I'm suggesting that since the back-end server may see the requests as a single request, with the second request as part of the payload of your crafted request, the question then becomes one of "In what ways can I get this site to show one of my params?". That's similar to a CSRF, but simpler in that you don't care that it's unescaped; you just care that it's in the source in some way, since it's what should be displayed back to you for your request.

But the response to the mangled request goes to the victim, not the attacker. You need some way to get it from the victim to the attacker (hence the use of redirects in the original bug bounty).

> My example above uses GET, so any ampersand (which is probably likely to exist in many queries) will throw it off.

The point I was trying to make is that various specs suggest that semicolons should be treated as an alternative to ampersands, and they are much more common in headers (especially in user-agent). However, I imagine much server software doesn't follow that recommendation, so maybe it's a moot thing to worry about.


> But the response to the mangled request goes to the victim, not the attacker. You need some way to get it from the victim to the attacker (hence the use of redirects in the original bug bounty).

That's the route they took, but it's unclear to me if it wouldn't work the other way as well (or if that's more likely to work with a TE.CL setup than the CL.TE one in place).

If you can use the difference in what is specced to make the back-end think your request is smaller than it is, can you reverse that to make it think it's bigger than it is, so some of the next request leaks in? If that makes the next request invalid, is that gracefully handled by failing that one (victim) request, or does it cause the whole set of requests to fail? I don't know the answer to these questions, but it certainly seems like there's a lot to explore here.


Wow, I was fascinated by phreaking when I was younger and first getting into programming. I've been working on Mac/iOS for years and never knew that the guys responsible for the most enjoyable part of my career, the Steves, built and sold a blue box!


From all the stories I've heard, Woz was way more interested in using them. When Jobs found out it was illegal, he dropped selling them and distanced himself from the other phreakers he was hanging out with, too afraid of the legal ramifications of getting caught with this equipment.

Most of the books I've read basically paint Jobs as a goody-two-shoes type who cringed at the illegality of phreaking - while saying he probably was thinking ahead, knowing he already wanted to start his own company and didn't want stuff like this to come back and haunt him.

It was an interesting dichotomy to me.


How can you avoid in-band attacks? I'm not going to send a postcard to a webserver with metadata about every HTTP request.


The phone system switched call routing from in-band signalling (that is, tones you could hear on the wire) to out-of-band signalling (where control information went over separate channels to content).

Another approach is reliable encapsulation. Ethernet and TCP/IP, for example, are technically all in one channel. But packet content won't be mistaken for protocol information. You could also look at things like SSH and how it has multiple channels: https://tools.ietf.org/html/rfc4254

I'm sure there are other approaches, too, and I hope people will mention them.


Is this what Steve Jobs and his friend used to hijack telephone lines and call the Pope, free of cost?


Of all the places on the internet to hear Steve Wozniak referred to simply as Steve Jobs's friend...


Sorry, I didn't realise that the other friend in this incident was Steve Wozniak specifically. I saw his interview after his return to Apple.



Anyone know if the `smuggler` tool used is available online? Can't find any reference to it in github or elsewhere, and I'm not familiar with it.

Edit: Found it here: https://github.com/gwen001/pentest-tools/blob/master/smuggle...


Thank you, I was looking for this as well.


> I did not expect for this finding to go from submit to fix/bounty in a matter of 24 hours.

I didn't expect that either. I am very pleasantly surprised. I hope my own company would do as well as Slack did here, but I am not certain whether we would.


This was probably a one-line code fix in the end. I'm sure they added extra tests and guards, but at its core, this is a trivial change. I've worked at less-than-agile companies before, but if the turnaround on something like this is more than 24 hours, that's pretty bad.


Greatly detailed report and impressive resolution speed, indeed, but sadly it fell through the cracks when it came to disclosure.


What do you mean by "it fell through the cracks when it came to disclosure"?


After the issue was resolved, the reporter was asked to wait before disclosure. The reporter waited three months before asking about it again. After that there was a response, but it took another month to approve the disclosure.


Oh, now I get it. Thanks for the clarification.


I'm impressed by how competent and professional some independent security researchers are.

How do people learn this stuff? Are there any resources that anyone here can recommend?


In this particular instance, it's a savvy understanding of the HTTP protocol. The 'Transfer-Encoding' and 'Content-Length' headers have conflicting goals: one says exactly how many bytes of body data there are; the other says the body keeps coming in chunks until a zero-length chunk is transmitted.

Different systems resolve the question of "Well, what should I (the system) do when I get both headers?" differently, and once you realize that, you realize you can break the atomicity of the HTTP request, which opens up a really interesting (and severe) class of exploits.

I'm not very good at finding vulnerabilities, beyond the obvious SQL-injection attack types that we are generally trained to avoid. I imagine, however, that getting good at finding these is a skill that can be learned with time. Alternatively, finding vulnerabilities might be more akin to pharmaceutical development: with 1,000 swings, you're missing at least 950 of them, or more. A dash of luck in there.


It probably helps to know that this particular bug was a major announcement at Black Hat last year, by James Kettle, who is a vulnerability research celebrity. So lots of people are looking for this particular bug.

Black Hat talks are eventually published online for free; here's this one:

https://www.youtube.com/watch?v=upEMlJeU_Ik


Given how widely publicized request smuggling was, I'm surprised it's still a problem for apps with large user bases like Slack.


Other commenters note that some platforms intentionally leave this vuln open because closing it breaks some buggy clients. Convenience over security.


I suppose if the convenience is worth having account takeovers, sure; it's a tradeoff each company will have to make individually.


I found it very similar, conceptually speaking, to buffer overflows; it’s just that, instead of messing with boundary checks in memory, you are doing it on boundary checks in connections.

It also reminds me of the issue of character encoding in (x)html, where you have different ways to specify it (http headers, xml declaration, meta tags...) and if they don’t agree you can get corruptions downstream.


> Re: disclosure - a redacted disclosure will be fine, but we'll need to hold off for a little bit while we perform our investigation. We'll keep you updated in the meantime, and once we've concluded there wasn't a customer impact we can disclose this. Thanks for your patience!

Does this mean they wouldn't want to disclose it if the same exploit was used against users by another hacker?


No, it means they'd want affected customers to hear it from them first.


Maybe they could have had similar vulnerabilities in other parts of their stack?



The fact that you can steal cookies like this is such a flaw in the way the web works. We need to move to tokens that the browser is in control of, which prove that this device has access to this resource.


And the real lesson here is HTTP probably isn't a good protocol between your proxies and your backends, and you should probably use HTTP/3 to fully eliminate this entire class of bug.


HTTP/2 also works to fix this, right?


Right: HTTP/2's binary framing gives every request an explicit length, and proxies are forced to actually parse the framing rather than guess at message boundaries from headers, so there is no Content-Length/Transfer-Encoding ambiguity to exploit (unless the proxy downgrades to HTTP/1.1 on the backend connection).
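
For reference (a sketch of the RFC 7540 frame header; frame_header is a made-up helper, not from any particular library): every HTTP/2 frame carries an explicit 24-bit payload length up front, so there is nothing like the CL/TE guesswork to disagree about:

  import struct

  # RFC 7540 section 4.1: 24-bit length, 8-bit type, 8-bit flags,
  # 1 reserved bit + 31-bit stream identifier.
  def frame_header(length, ftype, flags, stream_id):
      return (struct.pack(">I", length)[1:]   # low 3 bytes = 24-bit length
              + bytes([ftype, flags])
              + struct.pack(">I", stream_id & 0x7FFFFFFF))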


Wow, an ATO exploit that only received a $6,500 bounty? This signals to grey-/black-hat researchers that their research efforts or Slack bug disclosures are best directed elsewhere.


The literal hacker themselves indicated it was a fair payout.

I don't understand unsolicited complaining for other people.


>I don't understand unsolicited complaining for other people.

If you observe someone getting paid a much lower-than-market amount for something you can complain that the company is being cheap and likely driving away a lot of potential sellers (security researchers in this case) regardless of how happy the one person is.


This is not a below-market rate.


I noticed that throughout this thread you have been making this assertion. Could you share any data or citations to support this?

Would you feel any differently about the value of this bug if it affected, e.g. Google or Facebook?


No, I would not feel differently about it. The same dynamic would apply.

People on HN seem generally to believe that for any malicious activity you could do with a bug, there's a bidding group of willing buyers somewhere on some darknet site. That's not the case. Random bugs like this may get passed around, but the bugs that command a price all fit a couple specific molds: they're things you can drop into someone's existing operational process.

A Firefox drive-by RCE has some value: many organizations are set up to actively exploit Firefox browsers. So does an iOS jailbreak: lots of people stockpile iOS jailbreaks, for malware implants and for other purposes.

An important common thread among the bugs with liquid markets is that they have a meaningful half-life: once they're burned, it still takes time to eradicate the vulnerable installations. Serverside bugs are fixed worldwide instantaneously. You can see this dynamic in how grey-market payments are tranched.


This ignores the black market. This exploit would have been extremely valuable to insider trading rings.


They're not 'complaining' about the amount in the way that you're thinking.

They're suggesting that the severity of such a bug warrants a larger payout, because not doing so creates possible incentives for future explorers to consider selling these sorts of things on the black market.

This person may be satisfied, the next person that finds something similar may think twice.


I do wonder what the FSB would have paid for it tho.


It's a free market. Don't like the payout? Don't submit the bug. Someone else probably will anyway


Probably took weeks to build his own suite of automated tests for these types of exploits. All he has to do at this point is reconfigure the scripts to point to a new site/app with a bounty program, post a report, and reap the benefits.

Assuming he has done this for several other companies, I would say he has effectively earned more than what he put in.


How much should they have been paid? This seems like a week of pay for a pretty good software dev. That's not fair comp?


The value is not about the time spent finding the bug. It's about the severity of the issue, the scale, & the competitive cost of me selling it on the black market. If Apple left open a 0-day rootkit exploit that took me somehow 1 day to find it's still worth hundreds of thousands of dollars.


This thread is interesting because it shows different ways people value their work.

This is reasonable if you look at it as "just another job" -- you're being paid to build Good Software, so just another day at work. Or you're doing a Good Thing by helping a lot of people not get pwned.

This is unreasonable if you look at it as value-creation: "how much is this worth on the black market" or "what is this worth to Slack as a company".

Other people can get into the socioeconomic or means-of-production or entrepreneuring implications of all this, but I just think whether you downvoted or upvoted this provides a useful mirror into how one values one's own professional work.


The difference I think most people are missing is that Hackerone will not pay you if you don't find a bug. So they do not value your time, they value the results.

Leads to the conclusion that they should pay for the results and not for the time it took to find it. Also, there might have been weeks of failed attempts before finding this.


And then there are the people who want to be paid whichever is the higher of the two approaches.


Well yeah if your comparison is that the person's morals allow them to just turn around and sell it in the black market, maybe they could've paid more. But the reality of HackerOne is that most people are really just doing it as a hobby or side project that happens to generate cash.

Some people build 10 different static website generators, others do bug bounties. It doesn't mean they'd go on to sell these exploits and risk going to jail.


It's not the people using HackerOne to be concerned about. It's the ones who don't use HackerOne because they realize they'd get more money on the black market.

When it comes to vulnerabilities with a large enough impact it isn't enough to learn about most of them, because all it takes is one financially motivated actor to weaponize things.


There is almost certainly no liquid black market for this bug, even though Slack is very important to lots of businesses. It had no half-life at all (the fix was one-and-done) and doesn't fit into any existing business/operational model (nobody has an infrastructure where different targeted Slack bugs are pin-compatible drop-ins).


This assumes the researcher is indifferent to white/black hatting. In all likelihood, the researcher may have some personal preference to be a white or black hat, and it could depend on the ethics of the company in question.

There is also the cost of the likelihood of being caught while selling vulnerabilities on the black markets. If fascists have some personal stake in the company, black hatting would likely involve more careful and high stakes anonymity measures. Remember that US federal law enforcement agencies operate tor exit nodes!


I'm genuinely curious about this. If you ask for black-market rate compensations for disclosing a vulnerability, how is that not extortion? "Pay me this sum or I / someone will use this against you." seems to be what you are suggesting.


I think you're looking at this from not quite the perspective I'm taking. I'm not saying that any individual is going to go "Pay me this or I will attack you. No that's not enough. I want $X". That is extortion of course.

I'm looking at it from the perspective of the market economics. Reward programs are about incentivizing people to do responsible disclosure. If the market (i.e. the black market here) pays significantly higher for an exploit, then a reasonable company will try to adjust their payout to match what the "market" has valued that exploit to be worth. This way someone who would otherwise have sold on the black market may be incentivized to do responsible disclosure instead (significant payout, maybe not as high, but 100% legal with no legal risk). It's all about shifting the incentives and structure before people even make any decision. I think it's silly when companies get dinged because their responsible disclosure programs don't pay out at the same rate as the black market; there's a legal-risk element they're not factoring into the math. But it should be roughly comparable (my uneducated gut check is within ~20%).

Think of it like drugs. Marijuana on the black market is cheaper. People still opt to buy legal marijuana even though it's slightly more expensive because it's safer, legal, & vendors are accountable to their customers & community. If the cost grows too large then the black market starts to grow again (e.g. cigarettes are a notorious example of this due to taxation as an attempted lever to kill it).


It's very different than selling drugs. This bug's only value is for someone to use it in order to defraud someone else, and that's clear to understand. They are both illegal, but the two actions are not morally equivalent to most people.


The black market does not outbid bounties for this kind of bug.


Good to know. Is there some resource you use to track this kind of stuff?


Nothing super authoritative; mostly from talking to people who've done it. But: there's Maor Schwartz's excellent Black Hat talk from last year:

https://i.blackhat.com/USA-19/Wednesday/us-19-Shwartz-Sellin...

And you can always look at things like the Zerodium Price List. I don't know that it's taken very seriously in its particulars, but the general structure of it mirrors what I've heard from other sources.

You'll notice on the Zerodium list that they will pay for serverside RCE in web apps --- but only a particular kind of web app: the kind that is deployed in lots of places. National IC agencies will, for instance, pay for phpBB RCE, because they have targets that use phpBB, and there are lots of phpBB's (when you hear people talking about how valuable a web bug is, ask yourself whether that person has mentioned the weird market for phpBB bugs --- something I've had firsthand [refused!] experience with). What you won't see are bugs in SAAS applications. Again, the reason is that a phpBB vulnerability has a half-life: everyone has to install the patch once it's burned.

I have no evidence for what I'm about to say here, so take it with a grain of salt:

I assume you can get paid for a vulnerability even as esoteric as this Slack ATO bug. But you'll get paid by people who are buying Slack accounts, not Slack bugs. That is: you'll have to be the one exploiting it, and you'll be making one-off deals to use it to get targeted accounts. People sell all kinds of accounts; it would not surprise me even a little to hear that there was a market for company Slack accounts.

But to participate in that market, you'd almost certainly have to directly enter a criminal conspiracy. You wouldn't be selling a bug to a market; you'd be participating.


There's definitely a bunch of cottage industry opportunities for crooks in this space, yes, if you are morally flexible and either can't be extradited to the victim's country or are confident of your OpSec.

For me the plausible deniability stops at something like the firm we hired (years ago; they were subsequently bought by CSID, which was in turn purchased by another of my employers, because it's a small world and apparently some people have an unbounded appetite for risk), which steals credentials from people who steal credentials.

I can just about imagine sleeping after doing that all day. Because at least some of the time, indirectly, and after somebody gets paid, you're helping. It'd be like working in insurance sales. Your employer creams a profit off misery, but arguably the misery would be worse without them, maybe.

But any of these "Well I could get more money from bad guys" arguments miss in my opinion the most important constraint which is that most people don't want to work for the bad guys. Peanut farmer or crook is an easy question for most of us.


It is, as the bounty reporter says themselves, eminently fair. People on this site have very weird ideas of what the going rates for bounties are.


> This seems like a week of pay for a pretty good software dev.

Wait, 6500 * 4 * 12 = 312,000/yr

Might be a bit high for a week :P

EDIT: Okay, turns out that if you live in the Bay Area this isn't unheard of; the rest of us make 4x to 5x less than that (saying this as a mid-level software dev from West Michigan).


Salary information on HN is skewed by Bay Area engineers, for whom $312k is not out of the realm of possibility for a senior developer, afaik.


In Germany you will be VERY lucky to make 100,000 euro / year. Germany has the lowest software-dev-salary to average-salary ratio of any country on the planet.


Unless you are doing anything more or less related to SAP. Then you can make $150k/a easily.

If only other companies would realize that that's one of the reasons why SAP is Germany's no. 1 software company.


That matches up with the mid to senior range you'll see for FAANG at https://www.levels.fyi/


60-75k is on the low side for a qualified dev, even in Michigan.


If he worked a week and didn't find a vuln, he'd get 0. Average that in.

Bug bounties are the uberification of security research.


Is there any website that teaches this kind of hacking? I'm interested in it, but not as a full-time job.



