How we found the file that was used to Hack RSA (f-secure.com)
277 points by Garbage on Aug 26, 2011 | 78 comments

> So, was this an Advanced attack? The email wasn't advanced. The backdoor they dropped wasn't advanced. But the exploit was advanced. And the ultimate target of the attacker was advanced. If somebody hacks a security vendor just to gain access to their customers' systems, we'd say the attack is advanced, even if some of the interim steps weren't very complicated.

The whole post read to me as "meh. We haz hack for months. not impressed".

But they neglected to mention the most sophisticated hacks of the whole incident:

1. cracked the vault/SCM where the SecurID token generation algo (which generates a pseudo-random number) is stored

2. reverse engineered the SecurID token (ok, not that hard, as apparently the crypto allowed regenerating the seed and associated token, but still more sophisticated than an Excel/Flash exploit)
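SecurID's actual algorithm is proprietary (current tokens reportedly derive each code from a 128-bit seed and the clock via AES), so as a rough, purely illustrative sketch of why a stolen seed is game over, here is a standard RFC 6238-style TOTP generator - not RSA's code, just the same token = f(seed, time) shape:

```python
import hashlib
import hmac
import struct

def token(seed: bytes, unix_time: int, step: int = 60, digits: int = 6) -> str:
    """RFC 6238-style time-based one-time code: token = f(seed, time)."""
    counter = struct.pack(">Q", unix_time // step)  # which 60-second window we're in
    digest = hmac.new(seed, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Whoever holds the seed can compute every past and future token:
print(token(b"stolen-seed", 1314316800))
```

Anyone who exfiltrates a seed database can precompute tokens at will; the only missing factor is the user's PIN, which is exactly what the follow-up social engineering targets.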

The whole Excel/Flash exploit was just to get through the patio screen door (equates to the same access an employee has). What was more serious and not mentioned was the attackers cracked the safe in the house.

Disclosure: I work at EMC.

It's sad to see that the most detailed account of the attack we employees of EMC have seen comes from an outside source. The infiltration clearly began by targeting EMC employee(s), not RSA. It's been many months since the attack, with nothing being told to the employees whatsoever. I've even heard additional stories recently about other attempts at social engineering to gain more information.

I'll answer questions that I can if anyone is interested but as mentioned above, we really aren't informed ourselves.

To address your first point: I was under the impression seeds were stored for most (all?) customers for "backup" purposes. These were taken and matched against SecurID serials for use in further social engineering attacks (getting the other part of the two-factor auth).

Are they still storing 'backup' seeds? Because to me, that sounds like a pretty huge target at this point.

I cannot say with certainty that they aren't, as I'm not privy to that information. I'd hope that with the new tokens released, backup of that information relies on the owner's infrastructure. RSA has been put on a secluded network away from the EMC network.

Is there a description somewhere that talks about points 1 and 2 in greater detail?

I was reading the article waiting for something about that too. It's a shame all it mentioned was that the hackers used a relic of a RAT (Poison Ivy) in combination with a spreadsheet + flash object.

Agreed. One could argue that the post/title is totally linkbait. Plus they have a GIANT pic of the SecurID key fob, but the content has nothing to do with it being cracked.

Forgive my ignorance, but isn't the hash algorithm publicly known? Doesn't "reverse engineered the SecurID token" basically mean the same thing as breaking the RSA algorithm?

"why the heck does Excel support embedded Flash is a great question"

Gratuitously enabling embedding of stuff like this in applications which don't really need it always makes me shudder when some software vendor is touting buzzwords like 'rich content' and 'rich user experience'. Most of the time it means unnecessary bloat and, as we can see from this example, a security hole too.

Presumably because the Flash widget is OLE? You might as well ask why "enable" Flash over HTTP. The answer is of course that no-one "enables" anything, you write something generic like OLE or HTTP and people just use it for whatever they want to do.

It is ActiveX. When you install Flash, it registers itself as an ActiveX object. If you run through your registry, you would probably be shocked to see all the different libraries that have been registered.

To Microsoft's credit, running embedded ActiveX objects in Office is switched off by default. In large enterprise networks this can be set as part of a group policy. EMC must have switched it on. There is no granularity where you can say 'enable ActiveX for video files but not for Flash'; it is all or nothing.

What you can do is set it to only allow embedded loads from the 'trusted' network, e.g. the intranet (a great feature which is rarely used). The reason they attacked with an ActiveX exploit in an Office document instead of a web page is that Office documents allow you to embed the entire object so that it is loaded locally - but newer Windows will still know that the original source is from the web, so it will treat it as such. EMC must have had liberal group policies.

But you see the situation often - sales guys receive powerpoint presentations with embedded objects, or are creating them, and need it enabled.

You can review the various objects you have registered in windows by going to HKEY_CLASSES_ROOT\CLSID in your registry, or using a tool such as:


To change your Office security settings, see this msft article:


Edit: just to add, OLE is just the old ActiveX. OLE on COM was very difficult to implement - so Microsoft stripped it down, simplified it and called the new set of interfaces ActiveX.

> What you can do is set it to only allow embedded loads from the 'trusted' network, for eg. the intranet [...] newer Windows will still know that the original source is from the web, so it will treat it as such.

I think Windows uses alternate data streams to mark files as coming from the web; if the web server is Unix it can't, and will not do such a thing. Files will appear as coming from the intranet.

> I think Windows uses alternate data streams to mark files as coming from the web, if the web server is Unix it can't, and will not do such a thing.

Doesn't that depend on the e-mail client / browser rather than the server? The client will mark the file as downloaded from the internet when downloading it. The data in the HTTP protocol is not altered.
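For the curious, the "mark of the web" being discussed is a small Zone.Identifier alternate data stream that the downloading client writes next to the file on NTFS. A minimal sketch - the [ZoneTransfer]/ZoneId format is the real one clients write, and on non-NTFS filesystems the colon simply creates an ordinary file, which conveniently lets the demo run anywhere:

```python
import os
import tempfile

ZONE_INTERNET = 3  # URLZONE_INTERNET -- the value browsers write for downloads

def mark_of_the_web(path):
    """Return the ZoneId from a file's Zone.Identifier stream, or None."""
    try:
        with open(path + ":Zone.Identifier") as stream:
            for line in stream:
                if line.startswith("ZoneId="):
                    return int(line.split("=", 1)[1])
    except OSError:
        return None
    return None

# Simulate what a mail client / browser does when saving an attachment:
path = os.path.join(tempfile.mkdtemp(), "attachment.xls")
open(path, "wb").close()
with open(path + ":Zone.Identifier", "w") as stream:
    stream.write("[ZoneTransfer]\nZoneId=3\n")

print(mark_of_the_web(path))  # -> 3, i.e. "this came from the internet"
```

Office and Explorer consult this stream when deciding whether to treat a file as internet content, which is why it's the client doing the marking, not anything in the HTTP data itself.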

Well OLE (and ActiveX) was a terrible idea.

/edit: to the guys downvoting, does anyone really think running arbitrary, unknown code is a good idea?

I loved it when it first came out. We built some really nice applications using ActiveX. The problem was when they enabled it in IE without thinking through the implications.

IIRC, by default, only intranet apps would be allowed to run ActiveX controls that were signed. Internet apps would prompt a warning - but as we all know now, users just clicked through them and enabled them to run.

ActiveX itself is very nice and very simple - it is, after all, just a documented interface for components.

It allowed us to port all those enterprise desktop apps into the browser. We had very efficient and fast web applications running in the browser long before the ajax revolution that came some years later. It completely changed the cost of IT and administration for a large number of businesses - you no longer had to maintain all these different custom-built desktop applications for every business unit - you just pointed their browser to the web app (we all know these advantages today, but back then it was completely revolutionary).

You couldn't do it with just HTML then; you can now thanks to the new input types, XMLHttpRequest (from Microsoft), etc.

Once you've got sandboxing it seems only logical, until then it's madness.

[offtopic] But, Flash games inside excel has given immense pleasure to the employees where Flash games are not allowed directly. [/offtopic]

That question actually made me question the technical competence of F-Secure. You would think an anti-virus company would be more knowledgeable about embedded objects in Office documents.

Someone at EMC ran it, not F-Secure - F-Secure found the email in their submitted virus sample inventory, i.e. someone at EMC uploaded it to F-Secure as a sample.

As far as I can tell, the statement "why the heck does Excel support embedded Flash is a great question" was made by the F-Secure blogger. Microsoft Office applications have a long history of allowing embedded OLE/ActiveX objects. There are a bunch of companies that sell Flash dashboard tools for Excel. While I'm not a fan of Flash dashboards, the question just gave me the impression that the author doesn't have a good understanding of the environment that their anti-virus products are trying to protect.

It was a rhetorical question. I'm sure most of their researchers know why it is enabled (because the registry is marked blah blah). They were asking why did Microsoft decide that users should be capable of embedding executables inside a document format that inherently cannot have executable code stuffed inside it and frankly does not need executable code stuffed inside it.

> I forward this file to you for review. Please open and view it. -web master

I cannot fathom why anyone would open an attachment in an email like this. Someone at EMC has dropped the ball if an email from "web master" doesn't raise every eyebrow in the house.

I'm sure most people didn't open it. But it only takes one person.

It shouldn't, that's the thing. Your employees should be able to get hacked without any one of them potentially exposing your crown jewels.

I agree. That why, as other comments have pointed out, this is not the interesting part of the hack.

They hacked a company called 'RSA Security' [1] and not the cryptography algorithm RSA [2]. While the former is still interesting, the latter would be big news indeed.

[1] http://en.wikipedia.org/wiki/RSA_Security

[2] http://en.wikipedia.org/wiki/RSA

Since RSA Security is the company that supplies the tokens that are used to secure most banks and defense contractors, etc., and these tokens were in turn compromised via the hack, I would argue that this hack is still big news indeed.

Indeed... Extremely interesting.

Obviously a very advanced team of people was involved in this, and the whole story has only been hinted at.

I personally would have expected much higher standards of security from a company selling security.

To me the worst part was how they managed the incident. They should never have been hacked, but once it happened they should have been honest about the scope of impact.

I'm sure they were very honest to their defense and security agency customers. First. Then the rest of the world.

As soon as notice that they had been breached was leaked, all the admins I knew assumed their keyfobs were compromised.

Can you order RSA tokens WITHOUT them storing the seed for 'backup'?

Just wondering if anyone else noted the irony ... of demonstrating the vulnerability of opening a random Flash video by ... posting a random Flash video. In a security article.

So: who among you still played that video?

Just askin'.

I played it, but my laptop does not have flash installed, so nyah.

> Turns out somebody (most likely an EMC/RSA employee) had uploaded the email and attachment to the Virustotal online scanning service on 19th of March.

Would that be some automated system that sends samples - or would the user have to manually find the .msg file and upload it?

Because I really can't see a generic 'office drone' at EMC uploading every bit of malware that comes into their inbox, especially if they're also likely to open this kind of dodgy-looking email...

You just drag the message to the browser window or the desktop; if you drag a message from Outlook to the desktop, it is automatically saved as a .MSG file.

If anyone from F-Secure is reading this, can you give a link to the Virus Total report? I'm very curious which 18 of the 41 antivirus applications flagged this file:


It’s a shame that they obscured incriminating details in the screenshots with simple image operations like low-radius 1- and 2-dimensional blurring. Using a guessable point-spread function on an image region with so many known characteristics (because we know the font) is very, very leaky. In this case it would not surprise me if someone could deconvolve the e-mail addresses enough to match them against EMC employee profiles and such.

If you want to hide text in an image, you should, at minimum, replace every pixel in its bounding box. That still leaves spacing data, but it’s a start. Smearing or lightly jumbling the pixels is barely a notch more secure than rot13.
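To make the point concrete: blurring is a linear filter, and if the kernel is known (or guessable, as with a standard Gaussian), the original can be solved for exactly in the noise-free case. A toy 1-D example with a hypothetical 2-tap kernel, invertible by simple back-substitution:

```python
def blur(signal, k=0.5):
    """Convolve with the known 2-tap kernel [k, 1-k] (a crude motion blur)."""
    out, prev = [], 0.0
    for x in signal:
        out.append(k * x + (1 - k) * prev)
        prev = x
    return out

def deblur(blurred, k=0.5):
    """Invert blur() exactly: x[i] = (y[i] - (1-k)*x[i-1]) / k."""
    out, prev = [], 0.0
    for y in blurred:
        x = (y - (1 - k) * prev) / k
        out.append(x)
        prev = x
    return out

original = [0, 9, 1, 7, 3, 8]   # stand-in for a row of text pixels
print(deblur(blur(original)))   # -> [0.0, 9.0, 1.0, 7.0, 3.0, 8.0]
```

Real unblurring of a screenshot means estimating a 2-D point-spread function and fighting noise and quantization (e.g. with Wiener deconvolution), but with a known font and a short list of candidate strings the attacker doesn't even need a clean inverse - just a best match.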

A neat trick would be to overwrite the original details (such as email subject) with something like "Did you really think you could unblur this?" in an identical font. With a security company, whose followers probably enjoy puzzles, you could go a step further and make a game out of it. For example, the unblurred text could contain a URL for a microsite that has a little game related to one of their products. It'd be fun and promotional at the same time.

The downside is that this trains people to always try unblurring your blurred text and almost guarantees someone will find the one instance where somebody forgets to blur something.

Heh, yeah. I was amused by VesselTracker’s strategy. Go to http://www.vesseltracker.com/en/Ships/Mv.-Deira-9149768.html (for example) and look at the “blurred” pay-only data. Rot13 spoiler: it all says “Ab qngn sbe lbh!”

I was looking forward to how they sleuthed the thousands of computers at EMC/RSA to determine what the attack vector was and how they found the file.

Instead, what I learned was that back in April they knew exactly what file it was, but they'd deleted it, and EMC doesn't back up its email.

Why isn't Outlook (or any email client) using a sandbox when opening attachments today? It would be a real benefit to users if the next version of Outlook had that virtual machine capability.

How would that work? Is Excel also running in that VM? Not much of a sandbox.

Sandboxes are a fairly new idea, and they are hard to write, and hard to get right.

Mail clients (at least Outlook and Thunderbird) mark attachments as untrusted, and then it's up to the application to open them safely.

Slightly related: does anyone know how they found all the domains that were hosted on that IP? I've tried a couple of online tools, entering '', and none seem to return anything.

You normally need a reverse dns lookup tool like http://remote.12dt.com/ or http://whois.domaintools.com/ or http://whois.webhosting.info/

But none of these tools could find details of the above mentioned IP!

I've tried a couple of online tools (the ones mentioned above, and a few more), and also dig:

dig @<myDnsServer> -x

No dice on any of them. Also, I was under the impression that most times there's only one PTR record for a given IP, which means you'd get at most one domain name. Not sure about that, though.
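For reference, a PTR lookup just resolves the reversed-octet name under in-addr.arpa, and the zone owner typically publishes at most one name per IP. A small sketch using only the standard library:

```python
import socket

def ptr_name(ip):
    """Build the reverse-DNS query name for an IPv4 address."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa"

print(ptr_name("192.0.2.1"))  # -> 1.2.0.192.in-addr.arpa

def reverse_lookup(ip):
    """Resolve the (at most one) PTR name for an IP; needs network access."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return None
```

The "reverse IP" services that list many domains per IP generally work from crawled or passive forward-DNS data rather than PTR records, which is why a plain `dig -x` can come up empty even when such a service has results.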

There can only be one PTR record, but there doesn't have to be one. "Reverse DNS" works by caching the forward ones and can thus only work for names it's seen.

Not to be glib, but that isn't how reverse DNS works at all. http://en.wikipedia.org/wiki/Reverse_DNS

Google "Reverse IP Lookup".

He said the Poison Ivy backdoor wasn't advanced - did the attacker create this trojan or is it something available?

If it's available, why didn't the virus scanner catch it?

Poison Ivy is the new Sub7 (or the new Back Orifice, if that's your thing). While it is actually an advanced trojan, the stub (the small file it begins with) is relatively easy to detect. PI will commonly inject itself into the default browser's process. PI is a reverse-connecting trojan, so an attacker must host the PI server somewhere. PI also encrypts the data going back and forth using RC4. So if the attacker is really smart, they'll have PI attach to the default browser and connect to a remote server on port 443. To the average user it looks like HTTPS traffic.
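Since RC4 comes up: it is a public, extremely simple stream cipher, which is part of why it was popular in malware. A textbook implementation for illustration (encryption and decryption are the same operation; the beacon string is made up):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: key scheduling (KSA), then keystream XOR (PRGA)."""
    # Key-scheduling algorithm
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    # Pseudo-random generation, XORed with the data
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(byte ^ s[(s[i] + s[j]) % 256])
    return bytes(out)

ciphertext = rc4(b"secret-key", b"beacon: host=WS-042 user=jdoe")
assert rc4(b"secret-key", ciphertext) == b"beacon: host=WS-042 user=jdoe"
```

Wrapped in a TCP connection to port 443, the RC4 stream has no readable structure, so to a casual observer it passes for TLS traffic.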

The reason that PI was not detected is because the attacker embedded a Flash object inside the Excel file. The Flash file was a 0day exploit that could download and execute a file, which in this case was the attacker's PI client.

From there, the game is up. Once inside the network, an attacker has a whole new set of doors to open. The simplest route, if the target is on an older operating system, is to dump the SAM file, which contains both local and cached passwords. Cached passwords are nice because they are network logins; however, 9 times out of 10 the local Administrator password is the same on all systems, because system administrators frequently use the same local admin password when imaging lots of computers. Additionally, cached passwords are sometimes out of date due to password-update policies. Local passwords are usually not subject to this, or if they are, it becomes irrelevant, since a system administrator hasn't likely gone through and changed each local password.

Once you have a network password or local password, things get fun. There are two routes here. The attacker can go the frontend route and attack the internal CRM that EMC has, or they can attack the development servers. Alternatively, they could just keep hacking each workstation, but that is unnecessary. Assuming we went the CRM route, we likely have, or can easily obtain, a valid login from our first target's computer. Once inside, unless there are solid permissions, we may have won. RSA likely had a record of each customer's purchase, which then had a record of each device and potentially some sort of key or code needed to predict the next token. I'll give RSA the benefit of being slightly smart, so those sorts of keys probably won't be on the same CRM, or perhaps our login doesn't have access. Either way, we are in, and with some digging through materials available on other drives or within emails, the attacker could easily determine the location of the keys.

Once the attacker has whatever he needs, it's a quick trip to LinkedIn to find people who work at Lockheed Martin or whatever company you fancy. Then it's another spear-phishing attack on that target, pointing to a page that looks like the target company's VPN. Grab the username, password, and PIN (also log the time) and you're good to go.

Now repeat the part where you enter the internal network and scour for information. Congratulations, you're now an Advanced Persistent Threat. Pick your certificate up at the door.

I don't understand why there wasn't an air gap between this accounting desktop and the production key servers. The Excel 0-day vector is interesting, but what this tells me is that even without it, all those SecurID tokens could also have been compromised by one malicious janitor.

Just to be clear, it's a Flash 0-day, not an Excel 0-day.

Fantastic reply, Thanks.

It is easily available: http://www.poisonivy-rat.com/

So if I download (which I already did) and install this file, how can I know it's not going to inject code into my own computer?

You can't, obviously. So start up a virtual machine, run a fresh copy of your OS, and install it there. You should also take care to isolate the network connection of your VM as much as possible (and/or monitor it). I'm not promising this is sufficient - good luck :o)

PS: to answer your other question - antivirus scanners look for patterns in the file itself, so they don't need to install it, but they are vulnerable to alternative packaging, modified code, etc. (Of course, scanners also check for problems with installed files, but the first line of defense is to inspect the data - including unpacking zip files and the like.)
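A toy version of that "inspect the data, including unpacking zip files" idea - a byte-pattern scanner that recurses into zip containers. The signature and filenames here are invented for illustration; real engines use far more than substring matching:

```python
import io
import zipfile

# Hypothetical signature database: byte pattern -> detection name
SIGNATURES = {b"poison-ivy-stub": "Backdoor.PoisonIvy (demo signature)"}

def scan_bytes(data):
    """Return the first matching signature name, recursing into zip archives."""
    for pattern, name in SIGNATURES.items():
        if pattern in data:
            return name
    if data[:4] == b"PK\x03\x04":  # zip magic: unpack and scan members too
        with zipfile.ZipFile(io.BytesIO(data)) as archive:
            for member in archive.namelist():
                hit = scan_bytes(archive.read(member))
                if hit:
                    return hit
    return None

# A 'clean' zip wrapping an 'infected' payload is still caught:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as archive:
    archive.writestr("plan.xls", b"...poison-ivy-stub...")
print(scan_bytes(buf.getvalue()))  # -> Backdoor.PoisonIvy (demo signature)
```

This is also why scanning alone fails against the RSA attack chain: the Flash exploit was a 0-day with no signature on file, so there was no pattern to match until someone submitted a sample.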

Trust no one.

What's the best way, then, to scan a file (Windows) to ensure it's clean? I believe this file to be clean; I suppose I'm just asking to better understand the theory.

Specifically for Poison Ivy, off the top of my head, I would run a virtualized instance of Windows inside a different OS, and then monitor all network activity between the virtualized OS and the host system and verify every IP it connects to during installation and once installed.

Maybe somebody else can jump in here and offer better advice?

The virtual machine isn't guaranteed to work: http://www.zdnet.co.uk/news/security-threats/2009/06/09/virt...

Unless you know exactly what it can do, you should probably run it on an old machine without [direct] internet access.

From your link:

Cloudburst uses a vulnerability in the virtual-machine display functions of VMware Workstation that can be exploited by a specially crafted video file.


However, the Cloudburst exploit currently has certain limitations: it will only succeed on Workstation 6.5.0 or 6.5.1 or the associated Player versions. In addition, the guest and host must be Windows-based, among other requirements, Immunity said in its release notes.

Remember, that's a publicly released exploit that's not even very new.

Assume that if that's been publicly released, more advanced stuff has already been seen in the wild.

Makes sense. That is good advice, but as oconnore pointed out, even a VM can be exploited - though I think your solution would work well in the majority of cases. I suppose using a virtual copy of Windows on my OS X machine wouldn't be 100% safe because of the exploit. I'll be getting out my old Dell Windows XP machine then to test this out until I am sure it is safe (which I imagine it is, but who knows), and if something happens to it I'll just wipe the drive and re-install Windows. Poison Ivy seems like it would be an awesome tool to know, which would be worth my time.

Did you read the actual article about the VM exploit? It requires both OSs to be Windows based AND the use of a malformed video file.

But, yeah, paranoia is healthy in this circumstance.

I think you're missing the point of "Trust no one".

You don't scan it, just use it in a disposable environment (usually a VM, on a non-valuable machine) and see what it does.

If this software is so widely available, why didn't EMC's antivirus software detect it?

This is answered in a response[1] to the grandparent post.

"The reason that PI was not detected is because the attacker embedded a Flash object inside the Excel file. The Flash file was a 0day exploit that could download and execute a file, which in this case was the attacker's PI client."


> The reason that PI was not detected is because the attacker embedded a Flash object inside the Excel file. The Flash file was a 0day exploit that could download and execute a file, which in this case was the attacker's PI client.

The Poison Ivy client was downloaded to the target system. Why did the anti-malware software installed there not pick it up? (Attempting to hand-wave this away by talking about 0-day flash exploits really isn't answering the question.)

It's common for the free detectable version of popular trojans to be used as the advertisement for the paid undetectable one.

Looking at the Poison Ivy website, they have a customer portal, so presumably this is how they did it.

There are also methods to pay without leaving a paper trail back to you (pre-paid cards I think).

Edit: It's also possible to modify detectable executables to make them undetectable if you don't want to pay. Virus scanners for the most part work by reading a few bytes from an executable at a particular point, hashing those bytes, and reporting the file as a virus if the hash matches a known one. By finding those parts of the executable (there are often multiple signatures, and different vendors will have different signatures too) and modifying them slightly, the resultant hash will be different and the executable undetected.
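A sketch of that hash-a-region scheme, with invented sample bytes, showing why a one-bit change inside the signed region defeats the match (real engines layer many signatures plus heuristics on top of this):

```python
import hashlib

def region_hash(data, offset, length):
    """Hash the byte region a hypothetical signature covers."""
    return hashlib.sha256(data[offset:offset + length]).hexdigest()

# Invented 'malware' sample: fake header, padding, then a distinctive stub
malware = b"MZ\x90\x00" + b"\xcc" * 60 + b"decrypt-stub-bytes" + b"\x00" * 40

# Signature = (offset, length, digest), taken from the known sample
signature = (64, 18, region_hash(malware, 64, 18))

def detected(data):
    offset, length, digest = signature
    return region_hash(data, offset, length) == digest

repacked = bytearray(malware)
repacked[70] ^= 0x01  # flip one bit inside the signed region
print(detected(malware), detected(bytes(repacked)))  # -> True False
```

This is also why different vendors flag different subsets of samples: each one hashes different regions, so a tweak that evades one product's signature may leave another's intact.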

Some paradoxes...

1) By running an antivirus, your emails end up in some publicly searchable and disclosable database?

2) They couldn't hack RSA clients that were using SecurID, but they could hack RSA itself? That's the worst case of not eating your own dog food in history.

Did you even read the article?

1) The file was uploaded manually by a security researcher.

2) They couldn't access a particular part of LM/NG protected by SecurID. They could've sent them an email also, they just wouldn't have gotten access to the information they needed. I'm sure RSA is using SecurID also, but someone somewhere fucked up and the attacker was able to find a security breach starting with the infected workstation. From there, it's easy to get personal info for social engineering, access network drives, etc.

Look, if you have a determined, well-funded nation state hell-bent on cracking into your system, all the security in the world won't protect you.

1) This is not just "an antivirus". VirusTotal is a website where you go and upload a suspicious file, and they scan it. Also, this says nothing about the file being "publicly searchable"; it says it's made available to security industry professionals.

2) Yes. Typically you don't attack strong crypto; you find a weakness in its implementation. In this case, RSA Security's network was that weakness.

On the second point, a strong cryptosystem wouldn't have such a poor security design that RSA could break into any of its customer's networks. I'm sure a number of other RSA customers were surprised that RSA retained this power; I was.

The theory that this was an advanced persistent threat trying to get into Lockheed doesn't pass the smell test. There are faster and easier ways to get hold of an RSA fob.

A faster and easier way to get an RSA fob belonging to a person with access to Lockheed? It's not just about getting a generic RSA fob (afaik), it was about accessing the specific data of a client. Additionally, if the person was targeting other RSA clients, a seat inside the organization would become exponentially more valuable.

It seems safer to attack RSA from a distance over the internet and get the data you want than to try the old-fashioned physical infiltration method, breaking into either the data center or an individual's home where a fob could be found.

Please correct me if I'm misunderstanding something.
