Okay, so who has been fired?
That's what "zero tolerance" means: no excuses, not even "someone tricked me." And no punishment but the maximum.
Anything less would involve some degree of tolerance, and when you say "zero" that means no tolerance whatsoever.
It's obviously stupid to manage any organization that way, of course. It's a fatuous, dishonest phrase.
So stop talking about "zero tolerance" since all it means is "we make hyperbolic claims that we have no intention of living up to."
I'm not trying to split hairs or be a Twitter apologist, but there is a meaningful distinction here. Intentional misuse of credentials is ultimately insubordination (which is immediately fireable in most situations), whereas accidental exposure is a mistake. Twitter is effectively reinforcing that employees are forbidden to peruse private data. They are not making the point that they will fire anyone accidentally involved in a security breach.
It does depend on that, you're right. "Zero tolerance" sounds so clear, it even has a number in there! But, nevertheless, one can rationalize just about any outcome by invoking it.
Specifically, any administrator who hasn't worked out a detailed meaning will have to crystallize their understanding when it comes time to apply the idea. That process of rationalization will differ depending on the situation and their biases.
The supposedly clear policy becomes capricious or arbitrary. And if it's not arbitrary because they have some actual doctrine that can be consistently applied, then it would make more sense to use that doctrine.
> I'm not trying to split hairs or be a Twitter apologist here...
Splitting hairs is the raison d'etre of this site.
I'm not annoyed at Twitter specifically, as they're hardly the inventors of the phrase. My issue is with the concept itself, and the broader mindset that you see in legal concepts like strict liability.
> Intentional misuse of credentials is ultimately [in]subordination...
Well, intentional is your head-canon since they didn't use that word. But intent is useful to the discussion; let me explain why I don't think zero tolerance allows for intent and other mitigating factors.
The point of tolerance is that some harm is done, and the injured party is going to limit their response to it.
Law typically breaks it out as the action that caused the harm, the intent to cause that harm, and the certainty of your knowledge of the facts.
As soon as you bring intent into the equation, you're willing to tolerate a great deal of harm. Someone can get hurt in a car crash, and if it's clearly an accident, the injured party is generally not going to hold a grudge.
If there's sufficient uncertainty, we aren't even sure we can direct our response to the harm at the correct party. Then we're stuck tolerating it, or taking it out on some scapegoat. And I'd even argue that the fact that we inevitably have to tolerate some harm makes the concept of zero tolerance fundamentally contradictory.
Tolerance is what civilized people do in response to real life situations, and when they don't you get feuding and war. This isn't a new problem; the point of "an eye for an eye" in Mosaic law was to limit vengeance and vigilantism with a doctrine of proportionality. Not surprisingly, people still didn't get it, which was why Christ revised it to "turn the other cheek."
In this case “zero tolerance” is short for something like, “except for understandable slip-ups that aren’t fully your fault, we’re not going to tolerate any slip-ups”.
Just like when I used to rock climb, I felt like we were basically following safe practices - but there was no one to adjudicate them, and probably far less statistical testing of various practices than there is for bikes. Also, where we climbed there wasn't expected rockfall; I had barely heard of that being an issue, and we never wore helmets. Later on I realized that was something I might have missed, and then the next step was "what other safety practices was I unaware of?" ;-)
And of course I get that rock climbing is much more dangerous than hiking.
They have been caught banning people based on their political stances, and refuse to remove the algorithmic timeline sorting method which is designed to strip adolescents (and easily persuaded adults) of their critical thinking skills.
Perhaps have another layer where each use of specific functions requires unlocking through an incident management tool?
'Zero tolerance' only means that some action would happen without any subjective application of the rule.
So much effort for so little gain... With proper preparation (i.e. a simple app ready to download everything from an account), they could have made off with the full data of all 130 accounts, silently, before tweeting the hopeless scam message. Instead, this seems like a mostly-manual effort, done in haste.
Just dumb thieves pulling off the scam of their life, or cover for a targeted attack of one or two of those 7 they actually siphoned properly? I hope US authorities will figure stuff out beyond the usual "it was a Chinese/Russian/Eastern European gang", which is just code for "fuck knows".
Most of the time, the "character assassination routine" is to emphasize the first point.
As it is, the only people who actually lost something are a bunch of Bitcoin owners that got scammed, and it's just not a big deal.
Having insider knowledge of what certain billionaires say to whom, though, can be immensely valuable. It might well be that a few of them will light a fire under the FBI and friends. We'll see.
Trump is a bit of a special case because he's POTUS. I'm pretty sure there are no remotely interesting DMs on his account, but if there's one thing the US doesn't like, it's anyone even perceived of messing with national security. An attacker merely logging into Trump's account could be seen as a national security issue, and attract an entirely different level of law enforcement attention.
So I'm confused by the idea "even infosec people think training will stop this".
At the bank that I see this used at, the employees are far less trusting of emails and such.
Training works if it's done right.
This is carefully planned phone-based spear phishing, though, and that's a lot tougher to protect against. It can be easy for a skilled con artist to gain someone's confidence over the phone, no matter how much you warn about vishing (voice phishing). I'm sure training can still help there, but attackers just keep trying again and again until they find someone it works on.
Any successful attack vector can be turned into a training scenario and repeated until better responses are trained into the target group.
Military casualty drills are very effective at instilling near instinctive responses... same principle applies.
I also believe if a real phishing email makes it to a user then there’s a problem. Some of the real ones I get were easy to spot, “we tried to deliver a package” or “your order is on its way” type stuff. Spam filters should’ve picked them up.
As-is the grammar is ambiguous whether badrabbit meant "some" or "all".
Granted, there is a place for some of these things temporarily while working to fix the actual problem, but that's a mitigation, not a solution.
There are shops where the goal is to have someone to blame when you get owned, and there are rare shops where the goal is to do it right and catch/stop the bad guys even if it means you get blamed (because management understands security is not absolute).
If they fire for repeated failed tests, perhaps the person who keeps failing is not well suited for a role where you have to resist phishing.
That depends on what you mean by "entry point". If you define the entry point as a person, then yes don't focus on that. But if you define the entry point as phishable credentials, then focusing on that is good, it will prompt companies to switch to phishing-resistant credentials (U2F security keys).
What is "Exploitation" standing for here? Exploitation of... what? How and by who?
And even in that attack, the victim's long term credential is protected if they use FIDO authenticators - the bad guys can't use the authenticator without help from the legitimate user and they don't gain any enduring credentials.
So you need to do the attack live and then hope the victim not only doesn't realise you just infected them with malware, but conveniently signs into something at the moment you need a signature, for which you can hijack their expectation to press contact on the authenticator. Then you get one authentication. If you need another one, for any reason (timeout, subsequent operation asks to re-authenticate, anything) you have to do it again because you do not gain enduring credentials.
And this specific Twitter attack might not have needed enduring credentials. It seemed to happen over a short time period.
Nothing compels Twitter to design their user administration tool so that it says "Oh you have a cookie well then it's fine for you to change Elon Musk's email address and switch off his 2FA".
For example it's perfectly easy to have a "Confirm" step for a privileged operation that requires WebAuthn authentication. But if you're the attacker that means a cookie doesn't help you.
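One way to picture that "Confirm" step server-side is a sketch like the following. Everything here is invented for illustration (the 60-second freshness window, the class and function names); the point is that privileged operations demand a recent security-key confirmation, so a stolen session cookie alone accomplishes nothing.

```python
import time

# Toy model: privileged operations require a *fresh* second-factor
# confirmation (e.g. a verified WebAuthn assertion), not just a session.
FRESHNESS = 60  # seconds; an invented policy value

class Session:
    def __init__(self):
        self.last_strong_auth = None  # set when a WebAuthn assertion is verified

    def record_strong_auth(self, now):
        self.last_strong_auth = now

def privileged(op):
    # Decorator: refuse the operation unless strong auth happened recently
    def wrapper(session, now, *args):
        if session.last_strong_auth is None or now - session.last_strong_auth > FRESHNESS:
            raise PermissionError("fresh security-key confirmation required")
        return op(session, now, *args)
    return wrapper

@privileged
def change_user_email(session, now, user, new_email):
    return f"email for {user} changed to {new_email}"

s = Session()
try:
    change_user_email(s, time.time(), "someuser", "x@example.com")
except PermissionError as e:
    print(e)  # a cookie (session) alone isn't enough

s.record_strong_auth(time.time())
print(change_user_email(s, time.time(), "someuser", "x@example.com"))
```

An attacker riding a stolen cookie never triggers `record_strong_auth`, because that requires the physical authenticator.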
Any good literature which you'd recommend to read to avoid something like this?
With U2F it's impossible to "enter" a 2FA code on the wrong domain, making you immune to phishing attacks by most definitions. This Krebs article from a while back says that Google had zero phishing incidents after making this switch: https://krebsonsecurity.com/2018/07/google-security-keys-neu...
I don't know that they even need to go that far. Just U2F on the god-mode admin tool would have been reasonable.
I don't think published textbooks are very useful — attackers also have access to them, and if an attack has been written down it'll likely be encoded into firewall software or a security-process rulebook already (though it might still work for smaller companies lagging behind the curve).
Microsoft, for example, recommends privileged access workstations. If Twitter's employees used a separate set of credentials and workstations for privileged Twitter moderation than their regular account/machine used for email and day-to-day stuff, I bet the attack would have failed.
As people pointed out here, hijacking Twitter accounts can lead to big stock market crashes, mass panics ("bomb found at XXX") and maybe even military escalations.
Under these circumstances, leaving a platform with an unknown number of compromised accounts online seems irresponsible to me. In such a case you must stop the bleeding ASAP, either by locking up "important" accounts (which they eventually did, after a few hours!) or taking the site offline.
Next time this happens, we might not be so lucky.
This was good. I hope it keeps happening.
But yes u2f/webauthn would probably prevent this.
That said keep in mind they also do PR/damage control so we only know what they tell us. For all we know maybe they have u2f and an employee still did bad stuff while a phone was somehow involved. Or whatever else.
The recent hack just shows how lackadaisical their attitude to security is.
I used to work at FB some years ago and they had U2F for everyone, even back then. Also, regular phishing test drills and red-team exercises.
Phishing email attacks? Why do employees have business emails at all?
Phishing phone attacks? Why would employees have phones with external access?
Front of the house (dealing with users) should probably be disconnected from back of the house (admin access).
Before you know it you're in DoD or Bank territory. No Wifi allowed etc, where's your badge buddy?!
Things get complex quickly with security.
They really do. And so you should design security systems with the assumption that your employees will actively undermine security "to be helpful" to adversaries.
> And generally speaking the cost for stopping it at non-secure businesses is going to be too high until a security incident happens.
The cost for Yubico's "Security Key" is $20 and there is a volume discount. You should buy each employee a key, and if there's no secure means by which they can be re-authorised when they inevitably lose it, a second one to keep safely for that case.
The attackers correctly anticipated that while "Can you get me Jenny in user assistance's phone number?" is just being helpful, "Can you disable Elon Musk's 2FA and give me control over his account?" is a bit... obvious. So they got themselves credentials to do that stuff. But there is no need for Twitter employees to be able to give away those credentials.
For example, if a Twitter user was logged in with a dongle but the attacker had gained remote desktop access via social engineering, the dongle could still mean access to private data.
But yes. As far as I know, Google and Facebook require them. Google also sometimes requires permission from another co-worker to access data.
It depends on the dongle. YubiKeys and similar devices require the user to physically touch/tap it to enable U2F auth, and it automatically powers down after a timeout to prevent remote desktop attacks.
I would hope Twitter already had this kind of setup, but their blog posts about this are all targeted at a more general audience, so I doubt we'll get that kind of detail anytime soon.
How often is the tap needed? Is it needed on every action or 1/day or 1/month? It would stay valid via browser cookies valid for that period. If it's 1/day the employee might have tapped it in the morning, then went to lunch, then the attackers hit with the remote desktop attack.
If the app maintains a session, then that depends on how long the app allows sessions/tokens to live for at that point. The Yubikey won't come into play until login is required again. So, I think you're getting at a different part of the security model at that point.
I guess one option would be to ask the victim to read out the token from memory or disk. That seems pretty hard though. It's debatable whether that would be considered credential phishing.
A more likely method would be to trick someone into going into devtools and copy and pasting something from there, possibly a curl command, like in this epic "bug report". That's also debatable whether it would be credential phishing.
This FIDO authenticator has absolutely no idea what your per-site keys are. Instead, the random-looking ID provided to the site when you register and then given back by the site when logging back in actually is your private key for that site... encrypted using AEAD with symmetric keys only the FIDO authenticator knows.
One of the ingredients for decrypting the key is rpIdHash which is SHA256(dnsName) where dnsName is the FQDN of the site you're looking at or some suffix of that FQDN chosen by the site. So here it could be news.ycombinator.com or ycombinator.com (Public Suffixes like com or co.uk are prohibited). The browser is responsible for calculating rpIdHash.
Thus on a phishing attempt usually the AEAD fails, the authenticator not only doesn't give the phishing site a signature that can be used to sign in on a different site, it will ignore this ID and act as though the user doesn't have a FIDO authenticator plugged in at all.
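A toy model of that per-site scoping (illustrative only; real authenticators use an AEAD key-wrapping scheme per CTAP2, not this HMAC sketch, and the domain names here are made up):

```python
import hashlib
import hmac
import os

# The authenticator's device secret never leaves the hardware
DEVICE_SECRET = os.urandom(32)

def rp_id_hash(rp_id: str) -> bytes:
    # The *browser* computes SHA-256 of the relying party ID it actually sees,
    # so a phishing site can't lie about which domain it is
    return hashlib.sha256(rp_id.encode()).digest()

def make_credential(rp_id: str) -> bytes:
    # The "credential ID" handed to the site is bound to this RP;
    # here it's just a MAC tag standing in for the wrapped private key
    return hmac.new(DEVICE_SECRET, rp_id_hash(rp_id), "sha256").digest()

def try_unwrap(credential_id: bytes, rp_id: str) -> bool:
    # On login, unwrapping only succeeds if the browser-computed rpIdHash
    # matches the one the credential was created under
    expected = hmac.new(DEVICE_SECRET, rp_id_hash(rp_id), "sha256").digest()
    return hmac.compare_digest(credential_id, expected)

cred = make_credential("ycombinator.com")
print(try_unwrap(cred, "ycombinator.com"))    # True: legitimate site
print(try_unwrap(cred, "ycombinator.evil"))   # False: phishing domain gets nothing
```

On the mismatch, a real authenticator just acts as if no credential exists, exactly as described above.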
I guess you would need to ring someone up pretending to be tech support and convince them to disclose the private key.
But a practical attack along the lines of what you mentioned would be to ring someone up and convince them to disclose their cookie. There has even been a case where the victim disclosed their cookie without the attacker even asking for it.
They aren't sufficient by themselves, however; what they don't protect against is malicious internal employees.
Do a cursory amount of preparation. Outside of basic measures, you're probably doing more harm to the business than good. The likelihood of internal malicious attackers is very low in the grand scheme of things, and the attack surface is huge.
Most companies are going to be compromised by outside attackers; it's there that you should focus your energy. If internal attackers are your biggest threat, you've done a fantastic job.
If you're hit by a paywall:
From a defense-in-depth perspective, agreed: most attacks involve privilege escalation on the inside as soon as they switch from attack vector to breach, even if just host-level, so teams should absolutely "assume breach". Attackers will phish folks, get on their devices, get root, and then have fun there and potentially elsewhere. Ransomware is a more common goal than what Twitter got hit with as it is easily profitable, and it means a takeover. Controls on what most users can do and the ability to scope & report is part of growing up (in the US). It's good Twitter was able to map the attack - I bet many popular social networks couldn't, esp outside of the US or non-top-10.
Shameless plug: A lot of folks use our tool for mapping network logs, and I always encourage them to map out host / app / cloud logs as well, such as logins and the oftentimes black hole that is winlogs.
Almost nothing :(
Why isn't there a whitehouse.gov ActivityPub instance that no single admin can censor or subvert?
Account for your time in 6-minute increments. Milestones I recall off the top of my head were preliminary design, detailed design, 3-5% of your time coding, software integration, hardware software integration, acceptance.
It was stable, predictable, and (to me) very soul-crushing.
Technical question for you, how does this time tracking work in practice? Do you pause every 6 minutes and note what you’re doing? Or just roughly remember at the end of the hour/day?
So recording 1.1 against project 3456 would charge that project for 1 hour and 6 minutes of your time.
You had to do the same thing for a dentist appointment. 1.0 hours for an "overhead project".
(I should mention this was years ago)
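The arithmetic behind those entries is simple: 0.1 hours is one 6-minute increment. A hypothetical helper (the function name and project number are invented):

```python
# Convert a decimal-hour timesheet entry (0.1 hr = one 6-minute
# increment) into whole hours and minutes for a project charge.
def decimal_hours_to_hm(entry):
    total_minutes = round(entry * 60)
    return divmod(total_minutes, 60)  # (hours, minutes)

print(decimal_hours_to_hm(1.1))  # (1, 6): 1 hour 6 minutes, as in "1.1 against project 3456"
print(decimal_hours_to_hm(0.1))  # (0, 6): one billing increment
```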
Also, lots of the people who worked there were ex-government employees and were fine with it, because software folks got to go home to their family every day at a predictable time, you would get training at regular intervals and although the pay wasn't super competitive it was a good job, indoors and in an air-conditioned office.
Legislation is sorely needed for public institutions to make public announcement messages (microblog posts, or "tweets") using publicly managed and controlled infrastructure, contributing back to the commons.
Stop building into broken commercial services, the standards exist today to rebuild a commons-oriented Internet.
The general public doesn't need to know anything about the underlying standards.
Does a salesperson care about how SMTP works in order to send and receive emails from their customers?
Anyone here with any literature / sessions one could go through for a good gist of things with respect to Blast Radius?
The recommendation you read was probably about limiting the blast radius. It's a general security best practice, and you implement it through techniques like federating (compartmentalizing) services away from each other, limited lifetime credentials, attribution, SSO for single point of control for invalidation of credentials, principle of least access (PoLA), privilege separation with role-based access control (RBAC), session logging/audit logging, etc. Most importantly the underlying system needs to have a well-defined and pentested authentication/authorization architecture. The hallmark of systems that limit the blast radius is that they have well-defined limits on how much they trust each other.
OWASP (https://owasp.org/) is a great starting point for reading about this stuff.
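The RBAC / least-access part of that list can be sketched in a few lines. The roles and permissions below are invented examples, not Twitter's actual model; the important property is deny-by-default:

```python
# Minimal role-based access control (RBAC) sketch with least privilege.
# Role and permission names are illustrative only.
ROLE_PERMISSIONS = {
    "support_tier1": {"view_profile"},
    "support_tier2": {"view_profile", "reset_password"},
    "admin":         {"view_profile", "reset_password", "change_email"},
}

def is_allowed(role, action):
    # Deny by default: unknown roles or unlisted actions get nothing
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("support_tier1", "change_email"))  # False: tier 1 can't touch email
print(is_allowed("admin", "change_email"))          # True
print(is_allowed("intern", "view_profile"))         # False: unknown role
```

Pair a check like this with audit logging of every allowed call and you get both the blast-radius limit and the attribution mentioned above.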
So, they limited account support access to a smaller trusted team. This I can understand, but it might also delay identifying such an attack if it happens today.
It's not easy to say those words about your own company.
I really believe them.
“They” only “care” to the extent it materially affects the business.
But - what is the reason to allow support personnel to pose as specific users and send tweets from their accounts? There is more than a security issue here. There is a complete security breakdown.
The specific text is, "Using the credentials of employees with access to these tools, the attackers targeted 130 Twitter accounts, ultimately Tweeting from 45, accessing the DM inbox of 36, and downloading the Twitter Data of 7."
So, this doesn't say what actually happened. If it was employees posing as users in order to post, that is a permission which should not be granted. If it was, as you suggest, a password reset, then there is a separate issue with the 2FA that would be expected on these accounts.
Either way, there are serious security issues here. This is similar to Oracle calling itself "Unbreakable" and then getting broken. If Twitter cannot safeguard against so many accounts getting injected with tweets, then something is broken with Twitter's security model.
I believe the tool also allowed deleting 2FA from the account.
> But because the attackers were able to change the email address tied to the @6 account and disable multi-factor authentication,
> The social engineering that occurred on July 15, 2020, targeted a small number of employees through a phone spear phishing attack. A successful attack required the attackers to obtain access to both our internal network as well as specific employee credentials that granted them access to our internal support tools. Not all of the employees that were initially targeted had permissions to use account management tools, but the attackers used their credentials to access our internal systems and gain information about our processes. This knowledge then enabled them to target additional employees who did have access to our account support tools. Using the credentials of employees with access to these tools, the attackers targeted 130 Twitter accounts, ultimately Tweeting from 45, accessing the DM inbox of 36, and downloading the Twitter Data of 7.
So while the tool did not directly have the ability to tweet, it effectively did.
Also, admin tools are often "afterthoughts": there is usually a motley collection of them, and they are often treated as an expense/cost to be minimized rather than a revenue-generating asset that gets more budget and attention.
Of course, some of the targeted users presumably had 2FA enabled. How to do account recovery with 2FA in a consumer context is a complicated problem and I'm not aware of any good answers, but there's certainly an argument that the protections in place there weren't adequate and I wouldn't be surprised to see them changing soon.
I would also hope that rank-and-file support staff can't change users' email addresses, and the attackers had to spear-phish one of a smallish number of people whom more complicated account-recovery cases are escalated to. But who knows if that's how it works.
I've always wondered why there isn't more use of time delays for this sort of thing.
If there was a notification e-mail and a 7-day wait, that would offer a fair chance for the real account holder to cancel the change. Not 100% - the user might be on holiday - but it would catch a lot, and hence decrease attackers' motivation. And while a 7-day wait is inconvenient, for services like Twitter and Steam losing access for a week isn't the end of the world.
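That flow is easy to sketch: stage the change, notify the old address, and only apply it after the waiting period unless the real owner cancels. A toy version (class and field names are invented; notification delivery is elided):

```python
import datetime

WAIT = datetime.timedelta(days=7)  # the 7-day window suggested above

class Account:
    def __init__(self, email):
        self.email = email
        self.pending = None  # (new_email, effective_at)

    def request_email_change(self, new_email, now):
        # Stage the change instead of applying it immediately;
        # a real system would email self.email with a cancel link here.
        self.pending = (new_email, now + WAIT)

    def cancel_pending_change(self):
        # The legitimate owner clicks the cancel link
        self.pending = None

    def apply_if_due(self, now):
        if self.pending and now >= self.pending[1]:
            self.email = self.pending[0]
            self.pending = None

acct = Account("owner@example.com")
t0 = datetime.datetime(2020, 7, 15)
acct.request_email_change("attacker@example.com", t0)

acct.apply_if_due(t0 + datetime.timedelta(days=1))
print(acct.email)  # still owner@example.com: change not yet effective

acct.cancel_pending_change()
acct.apply_if_due(t0 + WAIT)
print(acct.email)  # owner@example.com: owner cancelled in time
```

An attacker who resets the password still can't lock the owner out of email-based recovery until the window expires.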
We had a running gag in our social media startup of tweeting "poop" from people who left their phones/computers unlocked ... someone did it to an employee who was logged in under a customer (corporate brand) context, and that was the end of that 'joke'.
> For 45 of those accounts, the attackers were able to initiate a password reset, login to the account, and send Tweets.
Furthermore, they're a classic cost center, not a lot of love or budget goes into reducing their tech debt, or bulwarking them up against a sophisticated adversary. Red teaming yourself full time is expensive and not profitable. What's the worst that happens from a breach like that? Well, Equifax is still going strong!
I recall being party to an amusing conversation at a major network services provider at a team meeting for people with access to such tools, to the effect of:
- Alright, we're modifying <internal tool A> to lock down access to accounts related to <major political figure>. You will no longer be able to use <internal tool A> on <accounts>, only select supervisors will have that access.
%%% ah, okay, that makes sense
# uh, hey, regarding <internal tool B>, which allows us to look up <thing that would provide equivalent access to internal tool A>? does that still work the same?
- Uh, yeah, it does.
%%% ... silence ...
- Alright, next item!
To the best of my knowledge, that was never addressed. <internal tool A> has audit logs. <internal tool B> doesn't.
I don't know what all Twitter uses, but I know that many companies have various methods of authentication depending on how much damage can be done:
- Logging on using a username/password and 2FA is enough for some activities.
- More sensitive operations have to be done on hardware that has a certificate installed and backed by something like Windows Hello.
- Even more sensitive operations require a JIT account and a certificate stored on a separate hardware key such as a yubikey.
- Very sensitive work gets done on a secure device that is very locked down and can detect changes to the hardware that may suggest tampering.
- Some stuff simply isn't allowed to be done remotely, even with the above restrictions.
Obviously not every company needs such a complex setup, but for someone as high profile as Twitter, you'd expect more thought to be put into this.
This isn’t the right way to do it, but given they work at Twitter I could imagine this isn’t the first big mistake they’ve made.
If that is the case, that same proxy software could proxy the security key requests too.
Are you sure there wasn't a notification?
But I don't know what "The right kind of xss vulnerability would enable them to bypass 2fa too" means. If the attacker doesn't have 2FA I would think the attacker can't log in, thus meaning the first link of the chain has no purpose.
But I also think XSS in this case is not very likely. From interviews with the attackers it sounds like they're social engineering experts who hang out on social engineering forums, not XSS experts.
for a moment I thought it read `tweeting from 45's`
Hands down the easiest way to make Shaun Cassidy sound like one of the Chipmunks.
Many security researchers have already established that the benefits of a VPN especially in the modern distributed world are marginal at best.
Basically, yes a VPN makes you a tiny bit safer but it also adds a lot of networking complexity and adds more friction to the job of your employees. It also becomes an attack vector for malicious parties, since once they get VPN access they can theoretically access at least the first layer of protected resources.
So in layman's terms an attacker just needs to phish for VPN credentials, maybe steal an OTP token and they will have access to a non-trivial amount of network protected resources.
On the other hand if every service you use has its own authentication then the attacker needs to target each service and to know what services to attack they need knowledge that is possibly contained in another system that also requires authentication and is definitely not guaranteed for the attacker that all the systems will have the same password and/or have 2FA disabled.
Honestly, in my opinion VPNs are just an excuse to monitor traffic. This is a bit of a cynical take, but I'm convinced that companies that use VPNs are more interested in seeing what goes in and out of their network than in protecting their resources.
If your enterprise is a global network with millions of nodes operating a blend of modern and legacy systems accumulated through hundreds of acquisitions in 100+ countries over the course of the last 50 years, a VPN with hardware tokens isn't a bad additional layer. It isn't even mutually exclusive with zero trust, it's just another layer of auth and access.
Twitter? Largely a different story and commando zero trust might be a viable option. As observed many other places, this sounds like a poor authentication model and probably poor governance for highly privileged access. Presumably they will take a look at their authentication, which sounds like it's making some bad assumptions, and improve.
This would be a nightmare for the people managing any nontrivial system. There are good reasons to use something like Active Directory and tie systems and applications to it for easier policy enforcement and management. There are good reasons to avoid this centralization for certain things too. Either extreme would be an exercise in frustration.
I’m not so sure that it works that well once it becomes the actual authentication middleware. But as a single sign on directory it definitely reduces the complexity for the employees and for IT departments.
Either way I think more than systems, people need training. I know there are sophisticated phishing attacks but someone who has been trained to understand and acknowledge these situations should be able to detect when someone is trying to steal information.
I think Twitter’s failure was to not properly train their employees especially when they are such a visible and juicy target for bad actors.
Yes, with the (wrong) assumption that after you have connected to a VPN, all other services are free for the taking, without any further authentication.
On our vpn we require a non-exportable certificate in the tpm chip, normal user credentials, then we have a captive portal that forwards to our SSO that requires a yubikey.
* You go to your office, connect to the network
* Now you have access to internal services, by virtue of being on the network
In a Zero Trust network it does not matter what network you are on. Trust is handed out individually, based on the identity/ role of the user and the context of their session (is their os patched? running security tools?).
The attacker can surely use a patched OS. Are the security tools secret? If not, then the attacker can run the security tools too.
User agent is a great place for a version 0, sure. 99% of your assets aren't compromised, so worrying about a bypass isn't important to most of them. For a v0 just knowing that most of your boxes are patched is a huge win.
Of course you'll want client certificates on devices, or some sort of TPM, which is how Chromebooks work. The attacker having a box is not enough - identity is a key principle of zero trust networks.
Security comes in layers. That first layer of requiring a VPN can stop many types of attacks from happening.
Next layer is requiring MFA for VPN access. Then for admin access, require MFA and allow it only from approved devices on the domain.
Large banks and the DoD have been doing this for years.
The "fail often and fail fast" crew are always reinventing the wheel after bad experiences. I honestly feel sorry for them.
That said, much of this is atrocious news. For all of their engineering prowess, seeing such opsec failures combined with the lack of basic security principles like "containment of blast radius" and "fast response to critical failures" is not something you can easily forgive at this scale.
But who am I kidding... it would be especially rich if some of these takeovers were enabled by simjacking-like attacks. Not that long ago, the only two-factor auth mechanism that worked for me was SMS.
Admitting that to yourself is a huge step forward in being able to detect it. Believing yourself immune increases your chances of being spearphished.
I bet there are some that are immune. But yes, 99% of employees can be phished.
Assuming that none of your employees will fall for phishing, much less targeted phishing, is woefully unrealistic. Especially at Twitter's scale.
Assuming humans won't do stupid things 100% of the time is never an effective security control.
I also worry that the emails might not represent real attack emails, and we end up training users to identify the test emails but not real attack emails.
(Not that I have any better solution.)
It’s not a solution to the problem, but it certainly helps.
My gut feeling is that for engineers, the phishing training most companies use is wholly ineffective, and in particular it is especially ineffective against targeted attacks. But I have yet to see any research one way or the other.
I suspect less technical users might benefit a bit more from such training (but still not that much).
All of them.
In short, I do not know my ebay password, but I could have fallen for this phishing attack.
Nobody is perfect.
Everyone is vulnerable given time/effort.