
The key bit is "The hackers then requested a multifactor authentication token from EA IT support to gain access to EA's corporate network. The representative said this was successful two times."

So this was primarily a social engineering hack after Slack was used to get access to a trusted messaging channel.

I'm interested in what sort of "multifactor authentication token" this was, and how IT support were able to grant this request.

Are we talking about a physical token like an old-fashioned SecurID OTP keyfob or a Yubikey? Or something custom?

Or are we just talking about a code that real employees get via TOTP or worse SMS?

It seems like the same week large employers figured out they would need to FedEx the new guy a laptop since he can't come to the office, they would likewise realise they want to make sure they FedEx that laptop to his actual home, not some building site a scammer told them to send it to. And so you'd hope that physical tokens likewise can't just be sent to some random social engineer based on one chat.

Even if you do succeed in getting them to FedEx the token to a building site, that's not a trivial extra step to retrieve. If some teenage "criminal mastermind" gives their parents home address to get the token delivered, they've also told cops where to start looking for the "hacker".

Whereas if it's just a code then I can more easily imagine IT support just pastes it into Slack for you.

It was probably a one-time-use bypass code offered by a 2FA provider.

sounds like it's a simple software token. The hackers needed an account for this, which they got by paying $10 for stolen cookies lol. So now they can log into some random employee's account, then get an admin-issued 2FA code. Never seen these "bypass" codes before, but that's my best guess.

The stolen cookies imply full access to one of the employees machines.

How is something like this advertised on forums?

Valid auth token for employee at EA Games for sale? Does it list the expiration? Refunds if it isn’t valid at time of sale?

nah, not to the full machine. This is just a Slack token. Someone basically had the employee click on something with XSS, or stole the cookies some other way (there's a bunch of ways you can steal cookies out of someone's browser)

Token most likely refers to a software OTP token. That would require having an account, but the same would be true for a hardware token.

A simple Zoom call with your camera on for anything private could have stopped all of this.

Not really; IT generally have no idea what you look like. I suppose if the company provided your ID photo to tech support, that could help, though. However, I've never seen a company do that.

I’ve definitely worked at companies with a formal corporate directory that had your name, phone #, office location, and badge photo. Seems less common in startups though.

Even just having your face recorded (after which, if it turned out you were a hacker, it could go on wanted posters or whatever) might be a significant deterrent.

I assume with some OS plug-in you could have an AI model slightly change your face, enough that it can't be traced back to you.

Does it look legit? Visual glitches would raise a red flag.

Having seen some of the filters people show off on TikTok... Yes. Yes they do.

we have a company directory with everyone's photo in it that all employees can access. It's part of HR onboarding to put a photo in your profile on day 1.

>Are we talking about a physical token like an old-fashioned SecurID OTP keyfob or a Yubikey? Or something custom?

I used to have a Citrix fob that gave me an OTP, but then my employer switched to Azure and now it's all text-message based. That was just last year.

Exactly, doesn't sound like this has anything to do with Slack in particular.

The problem probably isn't Slack itself, but EA's policies around Slack. I think it's not nothing that the social engineering happened over Slack. For starters, somehow login cookies to the Slack were stolen and then sold, and those were enough to get into the Slack. And then in 2 separate instances, the hackers were able to convince IT to give them 2 new tokens.

Maybe the IT team/policy is just weak across the board, and they would've handed keys over the phone or through internal email. But it's not impossible for the IT team to have a complacent mindset around Slack.

Yea, I think most users think Slack is a safe space. They haven't been conditioned to be suspicious of messages on Slack like they have with email. It's a pretty ripe attack vector.

Was bored the other week and found a ton of Slack webhook URLs in public GitHub repos. I think it would be a pretty great way to do some phishing: just scrape the URLs and brute-force channel names, posting messages with links to websites you own.

I think it's actually a pretty unsurprising assumption. Slack is a significantly more vulnerable service than a lot of other chat applications, because it has a native web client that allows cookie-based authentication. Every chat program in existence is going to use something extremely similar to maintain user sessions: either authentication credentials or (preferably) an access token written to disk, so users can avoid re-entering passwords every time the program falls out of memory.

I think with Slack the danger is exacerbated by the fact that this value gets stored in a cookie, which is a lot easier to gain access to; and once you have it, it has a much better documented format than whatever MSN Messenger might roll in-house. We're essentially just talking about an added layer of security by obscurity when it comes to the format, but actually nabbing a cookie is easier than gaining arbitrary file access.

The modern web doesn't need cookie auth, so let's hope they move away from it. Cookies in general are an idea past their expiration date.

It's the same old same old. Like you correctly identified it's a new place, it looks different. People don't think an attacker could approach them over IM, but they can.

But the problem goes beyond that. Many organizations have disabled sharing executable file formats as attachments over e-mail. Gmail flat-out prevents you from sharing executables and macro-enabled Word documents as attachments, even when they're put into a zip file.

But on Microsoft teams? I sent a zip file with 8 unsigned executables to a colleague a few days ago. No warnings, no messages, no nothing :)

For the general public I can see how that is helpful, if they can't decide whether a foreign executable is trustworthy, or if they execute everything with admin/root privileges.

As for me, I really hate this "feature". I work with IHVs and often have to share private binaries, and it's a chore using xcopy/sfpcopy to their bespoke network path, from where I guess someone then manually copies them over to their local subnet.

We should have a more robust mechanism in place than to outright ban sharing of executable files. Windows Smartscreen and Mac's Gatekeeper method of online checksum/signature verification is sort of interesting.

1000 times no.

You need to build security for the lowest common denominator.

Which is what Defender SmartScreen and Gatekeeper are. With their full-on setting, they give you a big fat warning when running files from the internet.

This is like banning guns and cars because they can kill people.

Also, you do 24 hours of lemons?

I don't know if it's true, but a co-worker of mine said that, "in the wild" signed binaries are positively correlated with being malware.

I don't think the signing itself does much.

Perhaps not in itself. The lack of scans is concerning though

Signed by real vendors?

Depends what you mean by that, but probably not. But signed, definitely. The main problem here is that there doesn't seem to be an authoritative list of which vendors are "real".

The thing is, you need MFA to log into Slack, but having a valid cookie bypasses that.

On top of that, once you're in Slack you can access every public channel, search strings, etc. I can tell you that large companies have a lot of things displayed in Slack (CI/CD pipeline results, credentials in logs, alerts, etc.); you don't even need to talk to anyone to get a lot of info.

I work with a large enterprise company that is actively thinking about this problem within their organization. The long and short of it is that everything in Slack is ephemeral past a month or so. Everything gets wiped on a rolling basis, including all media and messages.

If you want some domain to be documented, it happens outside of Slack. Secrets go in a dedicated secret management resource that requires 2FA for every login with strict timeouts and audits.

For the team I work on, this means piling more crap into Jira and Confluence. If a decision is made over Slack, that decision is then codified in a ticket or in a Confluence document. This also means some people constantly send links to the same Confluence pages over and over again, since there's no history for someone to search through.

I think overall it's a decent solution if you're diligent managing the tradeoffs. I can't really think of a better way to keep things off of a platform where they shouldn't exist, other than taking the nuclear option like they're doing now (albeit with a generous countdown timer).

I don't see how the policies you described would help at all in this situation. The main use of Slack here was to get the one-time passcode, which allowed them to log into the corporate network.

If they did this in your company, then all they'd have to do is scan through the Slack channels till they found a link to the internal company Jira and Confluence sites, and then they'd have free rein to start mapping out your network and preparing an attack.

I think an effective mitigation that could be implemented on the Slack side would be to sign the cookies and include the origin IP as part of the cookie. If you get a request with a cookie issued to a different IP, then you invalidate it and have the user login again.

This might be problematic on mobile devices, so maybe another option might be to include a device id and a nonce in the signed token and each time the cookie is used to establish a connection, the device is issued a new signed token with which to establish the next connection. If a user logs out on a device or the same token is used twice, then Slack could immediately invalidate all tokens.
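The device-id-plus-nonce rotation described above can be sketched roughly like this (a toy illustration with hypothetical names, not how Slack actually works): each token carries a one-time nonce signed with HMAC, presenting a token consumes its nonce and issues the next one, and a replayed token is rejected.

```python
import base64
import hashlib
import hmac
import secrets

SECRET = b"server-side signing key"   # hypothetical; keep out of source control
seen_nonces = {}                      # device_id -> set of consumed nonces

def issue_token(device_id: str) -> str:
    """Mint a signed token carrying the device id and a one-time nonce."""
    nonce = secrets.token_hex(8)
    payload = f"{device_id}:{nonce}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def redeem_token(token: str):
    """Validate the signature, reject nonce reuse, and rotate the token.

    Returns the next-connection token, or None if the token is forged
    or has been presented before (i.e. a replayed/stolen cookie).
    """
    encoded, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                                   # tampered token
    device_id, nonce = payload.decode().split(":", 1)
    used = seen_nonces.setdefault(device_id, set())
    if nonce in used:
        return None                                   # replay: same token used twice
    used.add(nonce)
    return issue_token(device_id)                     # rotate for next connection
```

In a real deployment the nonce bookkeeping would live in a shared store, and a detected replay would invalidate all of the device's tokens rather than just refusing the one request.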

This is a concern of mine on systems I design/maintain.

How do I mitigate a stolen cookie from successfully authenticating someone else?

Do I store user browser-agent/IP and check that on every request?

I think we are living in a post-"IP address" world, to be totally honest.

I can often switch through many IP addresses in an hour (especially when travelling): various WiFi points, 4G, etc. Services will appear incredibly broken these days if they require a new login per IP address.

Obviously you could force all traffic to be routed through a VPN and allowlist it there, but it seems people are moving away from that approach.

To me, the better question is how these cookies got stolen in the first place.

IP based auth is super annoying for legitimate users since it logs them out frequently.

If it's an internal corporate system where all the users sit at assigned machines and have fixed IP addresses, yes you can do stuff like IP address checking.

Otherwise you probably need short-lived cookies that get renewed by the client in the background, with a hard expiry of some reasonable "work day" length such as 8, 12, 16 hrs. Then even if it's stolen, there's a fairly short window of time that it's useful to anyone.
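The short-lived-cookie idea above can be sketched as a session store with both a sliding renewal window and a hard "work day" cap (names and TTL values are my own, purely illustrative):

```python
import time

SLIDING_TTL = 15 * 60       # renew window: 15 minutes of inactivity
HARD_TTL = 12 * 60 * 60     # absolute cap: one "work day"

sessions = {}               # session_id -> {"created": ts, "refreshed": ts}

def touch_session(session_id, now=None):
    """Renew a session in the background, enforcing both expiries."""
    now = time.time() if now is None else now
    s = sessions.get(session_id)
    if s is None:
        return False
    # Hard expiry: even a continuously renewed (or stolen) cookie
    # dies HARD_TTL after it was first issued.
    if now - s["created"] > HARD_TTL:
        del sessions[session_id]
        return False
    # Sliding expiry: sessions idle longer than SLIDING_TTL are dropped.
    if now - s["refreshed"] > SLIDING_TTL:
        del sessions[session_id]
        return False
    s["refreshed"] = now
    return True
```

The hard cap is what limits the value of a stolen cookie: however diligently the thief keeps the session warm, it stops working within the day.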

As long as your authentication scheme is based on a bearer token, you can't really prevent it, but binding to an IP and setting a short expiry can help mitigate it.

If you want to avoid this, you have to use something in your authentication scheme that can't leave the device/user, so we're talking certificate or other public key crypto based schemes.

TLS mutual authentication is one common tactic for this, although the scenario itself is uncommon.

In my opinion, you don't. Rely on the authentication provider to handle that responsibility. Services like Duo/Okta perform this risk assessment and may opt to trigger an MFA challenge.

I've never wanted to completely hand over authentication to a third-party.

Instead, what I think I'd like is for just the risk assessment to be performed by a third party when I'm handling authentication (i.e. a third party that has a broader view of what's happening across multiple services over time). I just send the pieces of information that I'm willing to share as an API call, and they make the best risk assessment they can.

Then I can take that risk assessment result and make a final decision if authentication succeeds or not.
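That split of responsibility might look like this (a hypothetical sketch, not any real provider's API: `score_fn` stands in for whatever call the risk service exposes):

```python
def assess_login(signals, score_fn, threshold=0.7):
    """Delegate scoring to an external service; keep the decision local.

    `signals` is the subset of login metadata we're willing to share;
    `score_fn` wraps the third-party API call and returns a risk in [0, 1].
    """
    score = score_fn(signals)
    if score >= threshold:
        return "deny"    # or step up: force a fresh MFA challenge
    return "allow"
```

Because the threshold and final decision stay in your own code, you can tighten or loosen policy per endpoint without handing the auth flow itself to the third party.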

There are risk services out there.

https://sift.com/ is one you call out to that gives you a risk score.

https://datadome.co/ can sit within your CDN layer and do risk assessment.

That's not always an option.

You can downvote all you want. Some projects are sensitive enough to not allow third-party authentication (military systems, anyone?).

Besides, if you're large enough it makes business sense to do it yourself anyway.

If the client device has a TPM or some sort of hardware that can manage the secret you can leverage that. Otherwise, protecting against "attacker has a valid session" is not very easy. Even in the TPM case attackers with code execution on the device can likely bypass it.

Well, since the attacker was able to gain access with a stolen cookie, these things are possibly true:

1) Slack does not invalidate sessions after a short enough period of time or inactivity (for a cookie to make it to another site, be purchased and used, probably takes some time).

2) Slack does not properly terminate sessions on logout or inactivity (allowing cookies to be reused after logout).

3) Slack is not using any more clever techniques to make cookies useless to attackers.

This, and especially #3.

Having the same cookie used on a different computer should be the reddest flag of them all. This could be identified by the user agent (a different version of the Slack client) and a different IP (the cookie being used from a different country than the one it was originally created in).
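That check could be as simple as hashing a coarse client fingerprint at login and comparing it on every request (a toy sketch with made-up names; real systems would weigh signals rather than hard-match, since user agents and countries do legitimately change):

```python
import hashlib

def fingerprint(user_agent, country):
    """Coarse client fingerprint captured when the session is created."""
    return hashlib.sha256(f"{user_agent}|{country}".encode()).hexdigest()

def request_looks_legit(session, user_agent, country):
    """Compare the stored fingerprint against this request's fingerprint;
    a mismatch suggests the cookie is being replayed from another client."""
    return session["fp"] == fingerprint(user_agent, country)
```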

Session invalidation time is something the Slack admin (that is, EA in this case) configures themselves.

Fun fact: if you want SSO for your Slack, you have to pay extra unless you are already using one of the top-tier Slack editions. So either pay up, or have multiple ways of administering your users, with the security implications that causes.

I've rarely seen any enterprise Slack ever configured for session timeouts. Misplaced trust on the employee device combined with convenience over security.

Something important to keep in mind is that Slack has a lot of not so great defaults. Go check your settings - infinite sessions, infinite channel retention, etc.

Aside from that small part of a stolen, long-lived cookie working on an untrusted device.

It could happen in other scenarios, if you generate incoming webhook URLs and don't treat them as secrets.

Once you get your hands on one of those, you've got a fair shot at a phishing attack.

Exactly. It has about as much to do with COVID-19 as with Slack.

In pre-COVID-19 days, IT handling that kind of request would have required it to come in face-to-face. But since everyone's working from home, that's intractable.

The failure point here is that IT should have confirmed the identity of the requester via an independent secondary channel, but it appears they either got lazy, or their protocols assumed Slack could not be compromised in this way, so the request was treated as already authentic.

> In pre-COVID-19 days, IT handling that kind of request would have required it to come in face-to-face. But since everyone's working from home, that's intractable.

In pre-COVID times it wasn't uncommon for people in my office (which didn't have a dedicated onsite IT person) to ping IT on Slack on behalf of the person sitting next to them to ask for a PW reset.

Face-to-face doesn't solve anything. If it's a 2000-person company and you can get into the building, chances are you could walk up to the IT desk and say "hey, my name is John Doe, I'm an engineer here and I locked myself out of X, can I get a reset?" and you'd be given it without any verification.

The one time I locked myself out hard at work and had to go "in person" for a password reset, I had to show ID even when the person who was resetting my password knew me by sight. And this was well over a decade ago.

And in all but the smallest companies I've worked for, you needed a card swipe at the door (or an escort) to pass from public to internal areas.

This is just really, really basic physical security stuff.

That might be true at some companies, but has not been my experience at 3 different companies.

> And in all but the smallest companies I've worked for, you needed a card swipe at the door (or an escort) to pass from public to internal areas.

And once you're past the barrier of those internal areas, you're unlikely to be questioned in all but the most strictly controlled places.

In my case, everyone is supposed to be visibly wearing their company ID, on a necklace/lanyard or something like that. However, even if challenged without one, "oh sorry, I left my ID at my desk" is likely to work. That said, everyone but brand-new employees is at least vaguely familiar to everyone else, so I'm not sure how easily a complete stranger would be able to move around.

The previous company I worked at had a policy that you needed to include a photo of yourself with the current date written on paper with any PW reset request, but of course that doesn't work as well at a 2000-person company as at a 100-person startup where IT knows everyone's face.

Plus that's still vulnerable to googling photos of the employee you're impersonating and photoshopping a piece of paper with the date.

If you’re over 2000 employees, there’s little excuse for not requiring a badge scan to queue for all IT desk interactions. That scan should display name and photo to both desk workers and on a ‘now helping’ screen others can see.

It's not foolproof, and tailgating is incredibly difficult to prevent because people don't want to start an incident by incorrectly challenging a coworker. I've heard of it somehow happening even with card turnstiles, double-door scans, and required photo confirmation by lobby security.

If you walk up to the IT desk, they should check your identity. (At least for security-sensitive things like password resets or locked accounts.)

Most employees have a photo badge, so they could scan that and look up your records in the HR database. If you've lost your badge, they could ask your name and look that up in the HR database, which hopefully has a photo on file.

Most 2,000 person office buildings have security guards and doors/gates/turnstiles that won’t let just anyone waltz in.

Right, but once you get past that area nobody will question it. In the same way that someone who works for your company on slack pings you, you assume they work for your company.

In the companies that I am familiar with the internal space has doors and every single door opens only with a badge

I think more than 50% of people would let you tailgate their badge

A lot of people in this thread are acting like it’s super hard to get through or in. You can “social engineer” your way in.

I did it accidentally once at a big hedge-fund sort of company. I thought I was being escorted by my friend's friend, but it was actually just someone familiar with them/their names, not fully understanding what I was saying, letting me through the ~3 times a badge scan was required. I wasn't paying attention. Only when my friend got scared he would get in trouble did I realize what had happened.

Risk profiles and scale for Slack attacks vs. in-person social engineering are way different.

I appreciate being able to open the comments and read the essence of the article along with a clarification correcting the messaging in the clickbaity title.

Thank you.

At the companies I've worked for, we were prohibited from ever giving out OTP codes. If you lost your ability to generate one, it meant you needed to submit a request through the ticket system and come into the office to re-provision it. Certainly a request like this should never come through Slack.

That should never have succeeded, because the request and the requester's identity should be confirmed using other data known and trusted by both parties, and possibly by requiring confirmation from someone up your management chain to boot.

Humans, the unpatchable vulnerability.

Humans are somewhat patchable but I like your point.

