I'd like to mention that we never suggest running Sentry in DEBUG mode, nor do we document how to do this. Sentry does use Django, so it's pretty easy to put pieces together and figure out how to do it.
So while we can help obfuscate this specific issue, running in DEBUG mode always has the potential to surface insecure information due to the nature of it.
Our patch to suppress this information entirely can be found here: https://github.com/getsentry/sentry/pull/9516
And if the secret key were secure, the pickle use would not be vulnerable.
Still, multiple layers of security, yadda yadda, sure.
But this is beyond the pickle issue. I'm not completely convinced you should never use pickle for browser cookies that are properly cryptographically verified. (Although FWIW I believe Rails changed its default cookie serialization to JSON instead of the Ruby equivalent of pickle, which suffered from the same issues.)
(but yes you still need to implement a safe unserialize white list)
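For what it's worth, Django made the same move away from pickle: since 1.6 the session serializer is configurable, and JSON has been the default. The opt-in is a one-line setting:

```python
# settings.py -- use the JSON serializer for session data instead of pickle.
# JSONSerializer can only round-trip plain data (dicts, lists, strings,
# numbers), so a forged-but-validly-signed cookie cannot trigger code
# execution the way a pickled payload can via __reduce__.
SESSION_SERIALIZER = 'django.contrib.sessions.serializers.JSONSerializer'
```

The trade-off is exactly the one mentioned above: JSON can't represent arbitrary objects, so anything fancier than plain data needs an explicit (whitelisted) encoding step.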
2. There are many situations where the person who we want to see the stack trace and debug output does not have access to the console output. E.g. hosting the service internally for a QA team.
In general, it is an error often made by people who focus on security to think that security considerations always trump convenience. In fact, it is a trade off like any other.
That might not work for all edge cases, but broadly there's a lot of machines which are only for non-development/production code, and a system-wide setting makes you a lot safer.
I used to feel the same way. It’s worth changing.
Well, at least it was clarified. It got a couple upvotes, so I guess others misinterpreted it too. Sorry.
For sake of argument, this is pretty easy to do just by looking at job postings.
There's still a chance of this getting overridden down the line, and all apps would have to conform to one style, but at least it's possible?
There are no specific technological challenges; this is entirely political: getting half a dozen or more different projects with different priorities to check the same variable for the same purpose.
With nodejs in a docker container, for example, you specify whether the server is designated for production at build time, not when you run it.
Small to middle size companies whose main business is totally unrelated to selling software just care that their stuff works somehow. And everything that IT does is a cost center.
The Django project also has a lot of warnings about the pickle serializer: https://docs.djangoproject.com/en/2.1/topics/http/sessions/
Facebook has had vulnerabilities and exposures, but nothing like that.
I'll have to remember this next time I think an exploit scenario is too unlikely.
People are naturally more lighthearted and organic than the sterile software and documentation we tend to write, and we're more social too, wanting to connect with others even if it means through a simple in-joke or catch phrase.
FWIW, I suggest the book On Writing Well. William Zinsser has valuable advice on how we can still retain humanity in our writing without being dull and lackluster when dealing with dry and "sterile" topics like software.
(My pet peeve being "$thing 2: electric boogaloo", which just seems to be filling up space with nonsense words - at least "for fun and profit" makes grammatical sense...)
Just stop with these. They aren’t funny or original.
MODCON: Revenge of the Boring Moderator
Dijkstra framed it best:
In my opinion there are two date formats that make sense: DMY (decreasing granularity) or YMD (increasing granularity). I tend to favor the latter because of ISO. MDY definitely doesn't make any sense at all.
One wonders why that is even there. Was Django's own session code not good enough?
This abstraction exists because some options can be set from the admin UI.
1. Enabling debug mode in production
2. Running a publicly-accessible app with publicly-accessible crash screens without any monitoring system noticing it's happening
3. Relying on auto-cleaning in debug facilities to sanitize security information (never works)
4. Using over-powered serialization protocol which allows for code execution for storing user-accessible data
5. Thinking that merely signing content prevents abuse of (4)
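Point 5 is worth spelling out: signing authenticates who serialized the bytes, not what loading them does. A minimal sketch (key value and class name hypothetical) of why a leaked SECRET_KEY turns signed-pickle data into code execution:

```python
import hashlib
import hmac
import pickle

SECRET_KEY = b"leaked-from-a-debug-page"  # hypothetical key the attacker now has

class Boom:
    # pickle's __reduce__ hook returns a (callable, args) pair; pickle.loads()
    # will invoke that callable, so the attacker controls what runs server-side.
    def __reduce__(self):
        return (print, ("arbitrary code just ran",))

payload = pickle.dumps(Boom())
signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

# The server's integrity check passes, because the attacker signed with the
# real key -- and the subsequent deserialization executes the payload anyway.
assert hmac.compare_digest(
    signature, hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
)
pickle.loads(payload)  # prints "arbitrary code just ran"
```

The signature only ever tells you "someone with the key produced these bytes"; once the key leaks, that guarantee is worthless, and the serializer's power becomes the attack surface.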
3 as well I think is unfair. That isn't something facebook implemented or is relying on; it's just the default behavior of django's debug stack. That's entirely on django to do that and lull people into a false sense of security in some cases (though it also probably helps in many cases too, so it might be okay).
The real issues are 1 (leaving debug mode on by accident), 2 (not noticing 1), and 4 (which is a django issue I think, not a facebook one).
Only if the secret key is not compromised. So you have to ask yourself: why send the user info so sensitive that it needs signing? Why not keep that info to yourself and send an opaque ID instead? Yes, I know there are issues with that too, but at least this particular issue goes away.
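The opaque-ID approach is only a handful of lines server-side. A minimal sketch (the in-memory store and function names are hypothetical; a real deployment would back this with Redis or a database):

```python
import secrets

# The client only ever sees a random token; the sensitive state never leaves
# the server, so there is nothing to tamper with and no signature to forge.
_sessions = {}

def create_session(data):
    token = secrets.token_urlsafe(32)   # unguessable, carries no information
    _sessions[token] = data
    return token

def load_session(token):
    return _sessions.get(token)         # unknown or revoked token -> None

def revoke_session(token):
    _sessions.pop(token, None)          # revocation is just deletion
```

A side benefit over signed cookies: individual sessions can be revoked without rotating a global key and invalidating everyone.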
> 3 as well I think is unfair. That isn't something facebook implemented
I didn't say it's Facebook's fault - though ultimately, of course, it is, in the sense that if you run certain software on your servers and don't configure it properly, it's your fault. So there's a failure in keeping the secret key somewhere so easily accessible that debug mode dumps it without even asking. Not necessarily a direct Facebook failure, but a failure.
If you want disposability with the ID method, you need some sort of datastore or cache to hold the session info.
Though, there's no excuse for leaving Django debug on in production.
But on the other hand, the server or the network should not trust the application to be secure at all. The infrastructural setup should assume the application to be an <exec($_POST['do_me']);>. And that's why the application should be isolated on a system level, on a network level, and as much as possible. That's the good part I mean - the part that worked.
The severity of security vulnerabilities should be judged on their context, not on their classification or category.
Think it was this one: http://exfiltrated.com/research-Instagram-RCE.php (actually it's even worse than I thought with Facebook threatening legal action against him)
If Facebook wanted to discourage pivoting access, they should have clearly stated so as Google and Microsoft have.
There's a whole thread on HN about this.
Here's Alex Stamos' writeup:
>In the case of Facebook, the rules can be seen at https://www.facebook.com/whitehat. There is no rule which states what to do when a vulnerability is discovered, but there are several which imply that my testing was valid. These include:
Report a bug that could ... enable access to a system within our infrastructure
Remote Code Execution
We both agree the initial RCE was in-scope. The researcher reported the RCE immediately, then reported the privilege escalation by weak user passwords, then reported the API key escalation.
Moreover, you're factually incorrect about the "fit of pique." The researcher stopped poking around immediately after receiving the email; he simply continued before Facebook contacted him. When Alex said,
Please be mindful that taking additional action after locating a bug violates our bounty policy.
There was no reference whatsoever to that policy in the official Bug Bounty guidelines. Alex fibbed. Indeed, how is a canonically "in-scope" privilege escalation supposed to work if researchers are to stop at the first bug?
Lastly, if Facebook's idea of defense-in-depth is a master API key to all Instagram S3 buckets, accessible from a simple diagnostic panel, any bug bounty program is merely window-dressing.
Remember, he didn't simply pivot. He back-pocketed credentials, didn't tell anyone he had them, and used them later to hit out-of-scope systems. Nobody is OK with that.
You've changed my view.
>He back-pocketed credentials, didn't tell anyone he had them
It should be assumed that any data on the pwned server is now accessible to attacker, just like in any real world scenario.
It's pentesting not penthieving. That's like saying military training is useless because they don't actually kill people.
He didn't say that at all. Your thesis here is that preventative discovery has no utility if it does not perfectly simulate real world conditions. That's a pretty extreme position; I don't think you'll sell many people on it.
> It should be assumed that any data on the pwned server is now accessible to attacker, just like in any real world scenario.
I think you'll have a hard time finding companies who are okay with security professionals taking sensitive data for themselves just because they're reporting a vulnerability.
Regardless, I feel like he deserves at least $15,000 for this, since it is full RCE.
"09.08.2018 20:10 CEST : a 5000$ bounty is awarded – the server was in a separate VLAN with no users’ specific data."
Please don't insinuate that someone hasn't read an article.
ping -4 facebook.com
results in 184.108.40.206. Maybe the author used some other way to get those IPs. Can anyone throw some light on this?
Run the same query from a bunch of sites with different connectivity in the same town and you might get different answers depending on who's got peering agreements with who. Then spin that query again using other places in the world and you can get even more variety.
That is, assuming the site in question has worked out a way to vary the A/AAAA records for their actual domain (as opposed to the hosts within it, like "www"). Some of them might just point it at a single POP/frontend/whatever, and when that one goes down for whatever reason... things get interesting. Not that I would know anything about that.
30.07.2018 00:00 CEST : initial disclosure with full details.
09.08.2018 18:10 CEST : patch in place.
One suggestion: You use "However" quite a bit. Not sure if you intended to show your thought process as it evolved, but that is the feeling I got.
Why does it run __reduce__?
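Short answer: because that's the pickle protocol itself. A minimal illustration (class name made up):

```python
import pickle

class Evil:
    # __reduce__ is how pickle describes object reconstruction: it returns a
    # (callable, args) pair, and pickle.loads() calls that callable with those
    # args to "rebuild" the object. Nothing restricts which callable you name,
    # which is why unpickling untrusted data is arbitrary code execution.
    def __reduce__(self):
        return (eval, ("1 + 1",))

restored = pickle.loads(pickle.dumps(Evil()))
print(restored)  # 2 -- the "object" came back as whatever eval() returned
```

So pickle doesn't "run" `__reduce__` at load time so much as it faithfully executes the reconstruction recipe that `__reduce__` embedded in the byte stream.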
I am thinking something like selinux, docker or chroot - a bit like internal firewall for Django (or any other webapp).
Any suggestions on the links to latest best practices?
Of course, in a sense, Facebook appears to have accomplished something similar simply by sacrificing an instance to this application and putting it on a lonely, isolated VLAN.
The other responses to your question are pretty weird, since it's obvious that there are things you can do to mitigate the possibility of your Django program literally calling execve or whatever.
That said, I'd think it's pretty obvious that you can only address the second part, so that's what's interesting. If your application speaks HTTP, the OS can't do much to keep it from doing stupid things over HTTP.
Huh, this isn't something I've ever come across before.
Off the top of my head, I guess it would be possible on Windows using a kernel mode driver, but that's pretty hardcore, and really easy to get wrong.
I know you can easily audit syscalls on Linux with auditd, but haven't seen preventing them before.
Is this an option on both Linux and Windows, and is it commonly used? Interested to know more!
(Docker is the most typical way this gets deployed but not the only way.)
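On Linux the mechanism is seccomp, and the two usual entry points look like this (image name and profile path hypothetical):

```shell
# Docker wraps every container in a default seccomp profile that blocks a
# few dozen dangerous syscalls; you can supply a stricter custom profile:
docker run --security-opt seccomp=/path/to/profile.json myapp

# On a plain systemd host, the service unit can whitelist syscall groups
# instead, no containers required:
#   [Service]
#   SystemCallFilter=@system-service
#   SystemCallErrorNumber=EPERM
```

I'm not aware of an equally common equivalent on Windows, which matches the kernel-driver guess above.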
Ultimately the only thing you can do is not run debug mode in production, not use insecure serialization formats, and not leak secret keys.

If you want to be a bit more pre-emptive about it, avoid language-specific serialization formats, especially in dynamic languages ("serialization" that executes arbitrary code seems more common in those), and use taint checking or, better, a type system that can avoid leaking secrets (you likely need rank-2 types to be able to have values that you can use in certain contexts but never access outside them, a la http://okmij.org/ftp/Computation/resource-aware-prog/region-... ).

It sounds like Django already has some level of taint-like functionality, but this plugin/addon (Sentry) didn't use it properly. So, multiple failures interacting to create this vulnerability - I guess we should take that as a sign of progress compared to the days of basic buffer overflows?
Maybe if you tell your OS "never let any data containing this string leave the machine" you can kind of mitigate this, but it's unlikely to work. The HTTP response is probably compressed. Someone probably base64-encoded the thing and stuck it inside some JSON, which is then base64-encoded again.
Ultimately, it's about managing complexity. Django and the extension punted: Django says it will print anything and everything it has access to, and the extension has a documentation caveat about how bad that would be. One could imagine an API that tries to be resistant to this sort of thing; when the extension is initialized, it deletes the key from the environment dictionary (only partially possible), stores it in a private attribute, and only provides public EncryptAndSignCookie and DecryptAndVerifyCookie methods. This will be better than a documentation caveat, maybe, but the truly ambitious debug mode will probably get the key and print it out.
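The API shape described above can be sketched in a few stdlib lines (class and method names are made up, and this is signing-only, with encryption omitted for brevity):

```python
import base64
import hashlib
import hmac
import json

class CookieSigner:
    """Hypothetical key-encapsulating API: the key is captured at init and
    never returned; callers only get sign/verify operations."""

    def __init__(self, key: bytes):
        # "Private" by convention only -- a sufficiently ambitious debug mode
        # can still walk the object graph and print it out.
        self._key = key

    def sign_cookie(self, data: dict) -> str:
        body = base64.urlsafe_b64encode(json.dumps(data).encode()).decode()
        sig = hmac.new(self._key, body.encode(), hashlib.sha256).hexdigest()
        return f"{body}.{sig}"

    def verify_cookie(self, cookie: str):
        body, sig = cookie.rsplit(".", 1)
        expected = hmac.new(self._key, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return None  # tampered, or signed with a different key
        return json.loads(base64.urlsafe_b64decode(body))
```

Note the deliberate choice of JSON over pickle inside the envelope: even a forged-but-valid cookie can only yield inert data, never code.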
(I would also point out that I'm not a fan of storing state in encrypted+signed cookies, if only because there is no way to revoke a stolen cookie without revoking every cookie ever. If you have the state on the server to store a revocation list, you might as well just store everything there and never have this problem.)
But Facebook may view a compromise on their edge network and potentially one of their trusted servers as serious even if actual damage on that server itself is limited.
> wow, it looks like it’s a sort of Django SECRET-KEY override!
Sentry uses its "own" secret key, so Django doesn't know it should be stripped from the stacktrace.
A simple `DEBUG = False` in `settings.py` would fix the bug. You just don't run Django in debug mode in production.
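A common belt-and-suspenders pattern is to make debug opt-in via the environment, so a production box can never inherit it by accident (the variable name here is a convention of this sketch, not something Django defines):

```python
# settings.py -- default to DEBUG off; require an explicit opt-in so that a
# developer's local setting can never leak into production via a copied file.
import os

DEBUG = os.environ.get("DJANGO_DEBUG", "") == "1"
```

Developers export `DJANGO_DEBUG=1` locally; servers that never set it stay safe by default.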
I don't remember off the top of my head, but I think there are a few scanners that can find all the running apps on a remote machine. If you're aware of any, please share.
I've also seen plenty of references to shodan.io, but I have no experience with it.
Did the author tell Django about this yet, or is this a (possibly unintentional) 0-day?
Besides the above interestingness, the morals I take from this story are:
- Stay persistent and leave your scanners running; you never know what new things will turn up.
- Crashdumps _are_ interesting
- Yay, $5,000!
- Middleware and frameworks will always clash in useful and interesting ways?
Wait, so that means Sentry kind of has a vulnerability.
Basically, if someone requests a password reset on your account, the PIN gets sent to all email addresses associated with your account, not only the primary one. This is an issue because many people have one locked-down email address for things like registering accounts, but others they use to talk with people, delegate to their staff, use with CRM apps, etc. (But you still need your everyday email addresses linked to your account so that people can find you by email, see your email on your profile, etc.)
The FB security team just says that delegating your email address isn’t secure so it’s not their problem. Like no shit, that’s why it’s a vulnerability. But for some reason the FB security team thinks it’s a good idea to let anyone immediately bypass 2FA and hijack your account.
As an analogous example, what if you did the "forgot my password" flow and they sent a recovery code over SMS to any listed phone number on your profile, sent a twitter DM to your listed twitter account, and sent postal mail to any postal address on your profile? (All at once, without waiting or confirmation.) That would expand the attack surface significantly, and it would be pretty easy to steal someone's account by stealing their postal mail. This case is similar: a secure email for account recovery and a less secure email for contact info, but Facebook forces you to use both for account recovery.
On the other hand, some people intentionally want multiple account recovery emails so they're less likely to lose access to their account. I imagine Facebook's hesitation with this feature is that it's hard to clearly communicate the distinction between the two use cases, and they want to bias toward simplicity.
I mean if you're delegating your email address, your staff aren't adversaries. But they shouldn't be able to, say, drain all your retirement accounts either.
Just because I want people who search for email@example.com to be able to find my Facebook account doesn't mean I want password reset requests sent there. It wouldn't be at all unreasonable to send them there if that was explained in the UI, but just immediately sending a password reset pin to a non-primary email address without any warning is crazy. At least wait a few days if the user doesn't take any action after it's sent to their primary address.
That's how it should work. You shouldn't have to remember which email address you signed up with in order to request a password reset. But that's fine as long as the pin number only gets sent to an account that's locked down to a degree that's appropriate relative to the assets under protection.
In other words, cases where I had no intention of ever allowing a password reset request to be sent there, knew fully well that it wouldn't be safe, but did so anyway because Facebook provided zero indication that they would do this and it's not at all intuitive that they would.
So, in order to get in there, you would need to know the account you're trying to get into, and have access to an associated account in order to lift the PIN from there.
That isn't to say it cannot be done, but it does sound like a reasonable barrier.