So this was primarily a social engineering hack: Slack was used to get access to a trusted messaging channel.
Are we talking about a physical token like an old-fashioned SecurID OTP keyfob or a YubiKey? Or something custom?
Or are we just talking about a code that real employees get via TOTP or, worse, SMS?
It seems like the same week large employers figured out they'd need to FedEx the new guy a laptop (since he can't come to the office), they'd likewise realise they want to make sure they FedEx that laptop to his actual home, not some building site a scammer told them to send it to. So you'd hope physical tokens likewise can't just be sent to some random social engineer based on one chat.
Even if you do succeed in getting them to FedEx the token to a building site, that's not a trivial extra step to retrieve. If some teenage "criminal mastermind" gives their parents' home address to get the token delivered, they've also told the cops where to start looking for the "hacker".
Whereas if it's just a code then I can more easily imagine IT support just pastes it into Slack for you.
How is something like this advertised on forums?
Valid auth token for employee at EA Games for sale? Does it list the expiration? Refunds if it isn’t valid at time of sale?
I used to have a Citrix fob that gave me an OTP, but then my employer switched to Azure and now it's all text-message based. That was just last year.
Maybe the IT team/policy is just weak across the board, and they would've handed keys over the phone or through internal email. But it's not impossible for the IT team to have a complacent mindset around Slack.
Was bored the other week and found a ton of Slack webhook URLs in public GitHub repos. I think it would be a pretty great way to do some phishing: just scrape the URLs and brute-force channel names with messages linking to websites you own.
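The flip side is that the same predictability makes leaked webhooks easy to audit for. A minimal defensive sketch: the regex below approximates Slack's documented incoming-webhook path shape (`/services/T…/B…/token`), and the sample string is a made-up placeholder, not a real hook.

```python
import re

# Slack incoming-webhook URLs follow a well-known public pattern,
# so a copy that leaked into a repo is easy to spot mechanically.
WEBHOOK_RE = re.compile(
    r"https://hooks\.slack\.com/services/T[A-Z0-9]+/B[A-Z0-9]+/[A-Za-z0-9]+"
)

def find_leaked_webhooks(text: str) -> list[str]:
    """Return any Slack webhook URLs embedded in a blob of source code."""
    return WEBHOOK_RE.findall(text)

sample = 'SLACK_URL = "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"'
print(find_leaked_webhooks(sample))
```

Running something like this over your own org's repositories (and rotating anything it finds) closes the exact hole described above before someone else scrapes it.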
I think with Slack the danger is exacerbated by the fact that this value gets stored in a cookie that is a lot easier to gain access to; and once you have it, it has a much better documented format than whatever MSN Messenger might roll in-house. We're essentially just talking about an added layer of security by obscurity when it comes to the format, but actually nabbing a cookie is easier than gaining arbitrary file access.
But the problem goes beyond that. Many organizations have disabled sharing executable file formats as attachments over e-mail. Gmail flat-out prevents you from sharing executables and macro-enabled Word documents as attachments, even when they're put into a zip file.
But on Microsoft teams? I sent a zip file with 8 unsigned executables to a colleague a few days ago. No warnings, no messages, no nothing :)
As for me, I really hate this "feature". I work with IHVs and often have to share private binaries, and it's a chore using xcopy/sfpcopy to their bespoke network path, from where I guess someone then manually copies them over to their local subnet.
We should have a more robust mechanism in place than an outright ban on sharing executable files. Windows SmartScreen's and macOS Gatekeeper's method of online checksum/signature verification is sort of interesting.
You need to build security for the lowest common denominator.
This is like banning guns and cars because they can kill people.
Also, you do 24 hours of lemons?
I don't think the signing itself does much.
On top of that, once you're in Slack you can access every public channel, search for strings, etc. I can tell you that large companies have a lot of things displayed in Slack (CI/CD pipeline results, credentials in logs, alerts, etc.); you don't even need to talk to someone to gather a lot of info.
If you want some domain to be documented, it happens outside of Slack. Secrets go in a dedicated secret management resource that requires 2FA for every login with strict timeouts and audits.
For the team I work on, this means piling more crap into Jira and Confluence. If a decision is made over Slack, that decision is then codified in a ticket or in a Confluence document. This also means some people constantly send links to the same Confluence pages over and over again, since there's no history for someone to search through.
I think overall it's a decent solution if you're diligent managing the tradeoffs. I can't really think of a better way to keep things off of a platform where they shouldn't exist, other than taking the nuclear option like they're doing now (albeit with a generous countdown timer).
If they did this in your company, then all they'd have to do is scan through the Slack channels till they found a link to the internal company Jira and Confluence sites, and then they'd have free rein to start mapping out your network and preparing an attack.
I think an effective mitigation that could be implemented on the Slack side would be to sign the cookies and include the origin IP as part of the cookie. If you get a request with a cookie issued to a different IP, then you invalidate it and have the user log in again.
This might be problematic on mobile devices, so maybe another option might be to include a device id and a nonce in the signed token and each time the cookie is used to establish a connection, the device is issued a new signed token with which to establish the next connection. If a user logs out on a device or the same token is used twice, then Slack could immediately invalidate all tokens.
How do I mitigate a stolen cookie from successfully authenticating someone else?
Do I store user browser-agent/IP and check that on every request?
I can often switch thru many IP addresses in an hour (especially travelling) - various WiFi points, 4G, etc. Services will appear incredibly broken these days if they require a new login per IP address.
Obviously you could force all traffic to be routed thru a VPN and list it there, but it seems people are moving away from that approach.
To me, the better question is how these cookies get stolen in the first place.
Otherwise you probably need short-lived cookies that get renewed by the client in the background, with a hard expiry of some reasonable "work day" length such as 8, 12, 16 hrs. Then even if it's stolen, there's a fairly short window of time that it's useful to anyone.
If you want to avoid this, you have to use something in your authentication scheme that can't leave the device/user, so we're talking certificate or other public key crypto based schemes.
TLS mutual authentication is one common tactic for this, although the scenario itself is uncommon.
Instead, what I think I'd like is for the risk assessment to be performed by a third party when I'm handling authentication (i.e. a third party that has a broader view of what's happening across multiple services over time). I just send the pieces of information that I'm willing to share as an API call, and they make the best risk assessment they can.
Then I can take that risk assessment result and make a final decision if authentication succeeds or not.
https://sift.com/ is one you call out to that gives you a risk score.
https://datadome.co/ can sit within your CDN layer and do the risk assessment there.
Besides, if you're large enough it makes business sense to do it yourself anyway.
1) Slack does not invalidate sessions after a short enough period of time or inactivity (for a cookie to make it to another site, be purchased and used, probably takes some time).
2) Slack does not properly terminate sessions on logout or inactivity (allowing cookies to be reused after logout).
3) Slack is not using any more clever techniques to make cookies useless to attackers.
Having the same cookie used on a different computer should be the reddest flag of them all. This could be identified by the user agent (a different version of the Slack client) and a different IP (the cookie being used from a different country than the one it was originally created in).
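The check itself is almost trivially small. A toy sketch, assuming you record a (user agent, country) pair when the cookie is minted; real systems use much richer device fingerprints and risk scoring:

```python
def fingerprint(user_agent: str, country: str) -> tuple[str, str]:
    """What the server records at the moment the cookie is first issued."""
    return (user_agent, country)

def looks_stolen(minted: tuple[str, str], presented: tuple[str, str]) -> bool:
    # A different client AND a different country at once is the reddest flag;
    # either alone could just be a software update or someone travelling.
    return minted[0] != presented[0] and minted[1] != presented[1]

home = fingerprint("SlackDesktop/4.23 (Windows)", "US")
print(looks_stolen(home, ("SlackDesktop/4.20 (Linux)", "RO")))
```

Requiring both signals at once sidesteps the "I switch IPs constantly while travelling" complaint from upthread, at the cost of missing attackers who spoof the user agent.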
Fun fact: if you want SSO for your Slack, you have to pay extra unless you're already on one of the top-tier Slack plans. So either pay up, or have multiple ways of administering your users, with the security implications that causes.
Once you get your hands on one of those, you've got a fair shot at a phishing attack.
In pre-COVID-19 days, IT handling that kind of request would have required it to come in face to face. But since everyone's working from home, that's no longer practical.
The failure point here is that IT should have confirmed via an independent secondary channel the identity of the requester, but it appears they either got lazy or their protocols assumed Slack could not be compromised in this way so the request was already authentic.
In pre-COVID times, it wasn't uncommon for people in my office (which didn't have a dedicated on-site IT person) to ping IT on Slack on behalf of the person sitting next to them to ask for a password reset.
Face to face doesn't solve anything. If it's a 2000-person company and you can get into the building, chances are you could walk up to the IT desk and say "hey, my name is John Doe, I'm an engineer here and I locked myself out of X, can I get a reset?" and you'd be given it without any verification.
And in all but the smallest companies I've worked for, you needed a card swipe at the door (or an escort) to pass from public to internal areas.
This is just really, really basic physical security stuff.
> And in all but the smallest companies I've worked for, you needed a card swipe at the door (or an escort) to pass from public to internal areas.
And once you're past the barrier of those internal areas, you're unlikely to be questioned in all but the most strictly controlled places.
Plus that's still vulnerable to googling photos of the employee you're impersonating and photoshopping a piece of paper with the date.
It’s not foolproof, but it’s incredibly difficult to prevent tailgating because people don’t want to start an incident by incorrectly challenging a coworker. I’ve heard of it somehow happening even with card turnstiles, double door scans, and required photo confirmation by lobby security.
Most employees have a photo badge, so they could scan that and look up your records in the HR database. If you've lost your badge, they could ask your name and look that up in the HR database, which hopefully has a photo on file.
I did it accidentally once at a big hedge-fund sort of company. I thought I was being escorted by my friend’s friend. But it was actually just someone familiar with them/their names, not fully understanding what I was saying, letting me through the ~3 times a badge scan was required. I wasn’t paying attention. Only when my friend got scared he would get in trouble did I realize what had happened.