The whole post read to me as "meh. We haz hack for months. not impressed".
But they neglected to mention the most sophisticated hacks of the whole incident:
1. cracked the vault/SCM where the SecurID token-generation algorithm (which generates a pseudo-random number) is stored
2. reverse engineered the SecurID token (OK, not that hard, as apparently the crypto allowed regenerating the seed and associated token, but still more sophisticated than an Excel/Flash exploit)
The whole Excel/Flash exploit was just to get through the patio screen door (equates to the same access an employee has). What was more serious and not mentioned was the attackers cracked the safe in the house.
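To make the seed-regeneration point concrete: the actual SecurID algorithm is proprietary (AES-based in modern tokens), but the general shape of any time-based token is the same. Here is a minimal TOTP-style sketch, purely illustrative and not RSA's algorithm, showing why possession of the seed is game over: anyone holding it computes the same codes the fob does.

```python
import hashlib
import hmac
import struct

def token_code(seed: bytes, t: float, step: int = 60, digits: int = 6) -> str:
    """Derive the current code from a secret seed and the clock.

    Hypothetical TOTP-style sketch -- NOT the proprietary SecurID
    algorithm, just an illustration of seed + time -> code.
    """
    counter = int(t // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    (value,) = struct.unpack(">I", digest[offset:offset + 4])
    value &= 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# Anyone holding the seed (e.g. a stolen "backup" copy) computes the
# same code as the hardware fob for any point in time:
seed = b"stolen-backup-seed"
assert token_code(seed, 0.0) == token_code(seed, 59.0)  # same 60-second window
```

The point of the sketch: the code is a pure function of (seed, time), so the token hardware adds no secrecy once the seed database is exfiltrated.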
It's sad that the most detailed account of the attack we EMC employees have seen comes from an outside source. The infiltration clearly began by targeting EMC employee(s), not RSA. Many months have passed since the attack with nothing being told to employees whatsoever. I've even heard stories recently about additional social-engineering attempts to gain more information.
I'll answer questions that I can if anyone is interested but as mentioned above, we really aren't informed ourselves.
To address your first point: I was under the impression seeds were stored for most (all?) customers for "backup" purposes. These were taken and matched against SecurID serials for use in further social-engineering attacks (to get the other part of the two-factor auth).
Gratuitously enabling embedding of stuff like this in applications which don't really need it always makes me shudder when some software vendor is touting buzzwords like 'rich content' and 'rich user experience'. Most of the time it means unnecessary bloat, and as we can see from this example, a security hole too.
To Microsoft's credit, running embedded ActiveX objects in Office is switched off by default. In large enterprise networks this can be set as part of a group policy. EMC must have switched it on. There is no granularity to it where you can say 'enable activex for video files but not for flash', so it is all or nothing.
What you can do is set it to only allow embedded loads from the 'trusted' network, e.g. the intranet (a great feature which is rarely used). The reason they attacked with an ActiveX exploit in an Office document instead of a web page is that Office documents let you embed the entire object so it is loaded locally. Newer versions of Windows will still know that the original source was the web and treat it as such, so EMC must have had liberal group policies.
But you see the situation often - sales guys receive powerpoint presentations with embedded objects, or are creating them, and need it enabled.
You can review the various objects you have registered in windows by going to HKEY_CLASSES_ROOT\CLSID in your registry, or using a tool such as:
To change your Office security settings, see this msft article:
Edit: just to add, OLE is just the old ActiveX. OLE on COM was very difficult to implement - so Microsoft stripped it down, simplified it and called the new set of interfaces ActiveX.
I think Windows uses alternate data streams to mark files as coming from the web; if the web server is Unix, it can't and won't do such a thing. Files will appear as coming from the intranet.
Doesn't that depend on the e-mail client / browser rather than the server? The client will mark the file as downloaded from the internet when downloading it. The data in the HTTP protocol is not altered.
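Right, the client writes the mark. On NTFS it lands in an alternate data stream named `Zone.Identifier` attached to the downloaded file (readable on Windows via the path `somefile.xls:Zone.Identifier`), and its content is just a tiny INI fragment. A sketch of parsing one, with the zone-ID meanings as assumptions based on the Windows security-zone numbering:

```python
import configparser

# Assumed mapping of Windows URL security zone IDs to names.
ZONE_NAMES = {0: "Local machine", 1: "Intranet", 2: "Trusted", 3: "Internet", 4: "Restricted"}

# A typical mark-of-the-web stream as written by a browser or mail
# client; ZoneId=3 is what makes Office/Windows treat the file as
# coming from the Internet.
stream = "[ZoneTransfer]\nZoneId=3\n"

parser = configparser.ConfigParser()
parser.read_string(stream)
zone = parser.getint("ZoneTransfer", "ZoneId")
print(ZONE_NAMES[zone])  # Internet
```

So if the client never writes this stream (or the file crosses a non-NTFS filesystem, which can't store it), the mark is simply absent and the file looks local.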
/edit: to the guys downvoting, does anyone really think running arbitrary, unknown code is a good idea?
IIRC, by default, only intranet apps would be allowed to run ActiveX controls that were signed. Internet apps would prompt a warning - but as we all know now, users just clicked through them and enabled them to run.
ActiveX itself is very nice and very simple - it is, after all, just a documented interface for components.
It allowed us to port all those enterprise desktop apps into the browser. We had very efficient and fast web applications running in the browser long before the ajax revolution that came some years later. It completely changed the cost of IT and administration for a large number of businesses - you no longer had to maintain all these different custom-built desktop applications for every business unit - you just pointed their browser to the web app (we all know these advantages today, but back then it was completely revolutionary).
You couldn't do it with just HTML then; you can now, thanks to the new input types, XMLHttpRequest (from Microsoft), etc.
I cannot fathom why anyone would open an attachment in an email like this. Someone at EMC has dropped the ball if an email from "web master" doesn't raise every eyebrow in the house.
Obviously some very advanced people (a team?) were involved in this, and the whole story has only been hinted at.
I personally would have expected much higher standards of security from a company selling security.
Can you order RSA tokens WITHOUT them storing the seed for 'backup'?
So: who among you still played that video?
Would that be some automated system that sends samples - or would the user have to manually find the .msg file and upload it?
Because I really can't see a generic 'office drone' at EMC uploading every bit of malware that comes into their inbox, especially if they're also likely to open this kind of dodgy-looking email...
(Not an F-Secure employee, by the way)
If you want to hide text in an image, you should, at minimum, replace every pixel in its bounding box. That still leaves spacing data, but it’s a start. Smearing or lightly jumbling the pixels is barely a notch more secure than rot13.
The downside is that this trains people to always try unblurring your blurred text and almost guarantees someone will find the one instance where somebody forgets to blur something.
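The "replace every pixel in the bounding box" approach is trivial to do properly. A minimal sketch on a plain 2D grid of pixel values (no imaging library assumed), to contrast with blurring: a constant fill leaves literally nothing inside the box to reconstruct.

```python
def redact(pixels, top, left, bottom, right, fill=0):
    """Overwrite every pixel in the bounding box with a constant value.

    Unlike blurring or jumbling, this destroys the information outright:
    nothing of the original text survives inside the box. `pixels` is a
    list of rows; box bounds are inclusive.
    """
    for y in range(top, bottom + 1):
        for x in range(left, right + 1):
            pixels[y][x] = fill
    return pixels

# Toy 4x5 "image" with distinct pixel values.
image = [[row * 10 + col for col in range(5)] for row in range(4)]
redact(image, 1, 1, 2, 3)
# rows 1-2, cols 1-3 are now 0; everything outside the box is untouched
```

As the comment notes, this still leaks the box's size and position (word-length and spacing data), so for serious redaction you'd also normalize the box dimensions.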
Instead, what I learned was that back in April they knew exactly what file it was, but they'd deleted it, and EMC doesn't back up its email.
But none of these tools could find details of the above mentioned IP!
dig @<myDnsServer> -x 18.104.22.168
No dice on any of them. Also, I was under the impression that most times there's only one PTR record for a given IP, which means you'd get at most one domain name. Not sure about that, though.
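For anyone wanting to reproduce the lookup without `dig`: a PTR query is just a name lookup on the octets reversed under `in-addr.arpa`, and Python's standard `ipaddress` module will build that name for you. The actual resolution step (commented out below) needs network access and only succeeds if the IP has a PTR record at all, and many don't, which alone would explain the "no dice".

```python
import ipaddress

ip = "18.104.22.168"

# The PTR query name: octets reversed, under in-addr.arpa.
name = ipaddress.ip_address(ip).reverse_pointer
print(name)  # 168.22.104.18.in-addr.arpa

# The actual lookup (requires network, and a PTR record to exist):
# hostname, aliases, addrs = socket.gethostbyaddr(ip)
```

And yes, you're right that one PTR record per IP is the common convention; multiple PTR records are legal but rare, so a successful lookup usually yields a single name.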
If it's available, why didn't the virus scanner catch it?
The reason that PI was not detected is because the attacker embedded a Flash object inside the Excel file. The Flash file was a 0day exploit that could download and execute a file, which in this case was the attacker's PI client.
From there, the game is up. Once inside the network, an attacker has a whole new set of doors to open. The simplest route, if the target is on an older operating system, is to dump the SAM file, which contains both local passwords and cached passwords. Cached passwords are nice because they are network logins. However, nine times out of ten the local Administrator password is the same on all systems, because system administrators frequently reuse the same local admin password when imaging lots of computers. Additionally, cached passwords are sometimes out of date due to password-update policies. Local passwords are usually not subject to these, or if they are, it hardly matters, since a system administrator is unlikely to have gone through and changed each local password.
Once you have a network password or local password, things get fun. There are two routes here. The attacker can go the frontend route and attack the internal CRM that EMC has, or they can attack the development servers. Alternatively, they could just keep hacking each workstation, but that is unnecessary. Assuming we went the CRM route, we likely have, or can easily obtain, a valid login from our first target's computer. Once inside, unless there are solid permissions, we may have won. RSA likely had a record of each customer's purchase, which in turn had a record of each device and potentially some sort of key or code needed to predict the next token. I'll give RSA the benefit of being slightly smart, so those keys probably won't be on the same CRM, or perhaps our login doesn't have access. Either way, we are in, and by digging through materials available on other drives or within emails, the attacker could easily determine the location of the keys.
Once the attacker has whatever he needs, it's a quick trip to LinkedIn to find people who work at Lockheed Martin or whatever company you fancy. Then it's another spear-phishing attack on that target, pointing to a page that looks like the target company's VPN. Grab the username, password, and PIN (also log the time) and you're good to go.
Now repeat the part where you enter the internal network and scour for information. Congratulations, you're now an Advanced Persistent Threat. Pick your certificate up at the door.
P.S. To answer your other question: antivirus scanners look for patterns in the file itself, so they don't need to install it, but they are vulnerable to alternative packaging, modified code, etc. (Of course, scanners also check installed files for problems, but the first line of defense is to inspect the data, including unpacking zip files and so on.)
Maybe somebody else can jump in here and offer better advice?
Unless you know exactly what it can do, you should probably run it on an old machine without [direct] internet access.
Cloudburst uses a vulnerability in the virtual-machine display functions of VMware Workstation that can be exploited by a specially crafted video file.
However, the Cloudburst exploit currently has certain limitations: it will only succeed on Workstation 6.5.0 or 6.5.1 or the associated Player versions. In addition, the guest and host must be Windows-based, among other requirements, Immunity said in its release notes.
Assume that if that's been publicly released, more advanced stuff has already been seen in the wild.
But, yeah, paranoia is healthy in this circumstance.
You don't scan it, just use it in a disposable environment (usually a VM, on a non-valuable machine) and see what it does.
"The reason that PI was not detected is because the attacker embedded a Flash object inside the Excel file. The Flash file was a 0day exploit that could download and execute a file, which in this case was the attacker's PI client."
The Poison Ivy client was downloaded to the target system. Why did the anti-malware software installed there not pick it up? (Attempting to hand-wave this away by talking about 0-day flash exploits really isn't answering the question.)
Looking at the poison ivy website they have a customer portal, so presumably this is how they did it.
There are also methods to pay without leaving a paper trail back to you (pre-paid cards I think).
Edit: It's also possible to modify detectable executables to make them undetectable if you don't want to pay. Virus scanners, for the most part, work by reading a few bytes from an executable at a particular offset, hashing those bytes, and reporting the file as a virus if the hash matches a known one. By finding those parts of the executable (there are often multiple signatures, and different vendors use different signatures too) and modifying them slightly, the resulting hash will be different and the executable goes undetected.
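A toy sketch of that scheme, grossly simplified compared to a real AV engine but showing the fragility: a signature is a hash over a fixed window of bytes, so flipping a single byte inside that window defeats the match while (in a real binary) the behavior could be preserved.

```python
import hashlib

def make_signature(sample: bytes, offset: int, length: int):
    """Toy signature: (offset, length, sha256 of those bytes)."""
    return (offset, length, hashlib.sha256(sample[offset:offset + length]).hexdigest())

def matches(data: bytes, sig) -> bool:
    """Hash the same window of the candidate file and compare."""
    offset, length, digest = sig
    return hashlib.sha256(data[offset:offset + length]).hexdigest() == digest

# Fabricated stand-in bytes for a known-bad sample.
malware = b"MZ\x90\x00...payload-bytes-go-here...decoder-stub..."
sig = make_signature(malware, 10, 16)

assert matches(malware, sig)             # known sample: detected
tweaked = bytearray(malware)
tweaked[12] ^= 0xFF                      # flip one byte inside the signed region
assert not matches(bytes(tweaked), sig)  # hash no longer matches: undetected
```

This is why vendors layer multiple signatures, heuristics, and behavioral analysis on top; a single byte-window hash on its own is exactly as brittle as the parent comment describes.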
1) By running an antivirus, your emails end up in some public, searchable, disclosable database?
2) They couldn't hack RSA clients that were using SecurID, but they could hack RSA itself? That's the worst case of not eating your own dog food in history.
1) The file was uploaded manually by a security researcher.
2) They couldn't access a particular part of LM/NG protected by SecurID. They could've sent them an email also, they just wouldn't have gotten access to the information they needed. I'm sure RSA is using SecurID also, but someone somewhere fucked up and the attacker was able to find a security breach starting with the infected workstation. From there, it's easy to get personal info for social engineering, access network drives, etc.
Look, if you have a determined, well-funded nation state hell-bent on cracking into your system, all the security in the world won't protect you.
2) yes. typically you don't attack strong crypto, you find a weakness in its implementation. In this case, RSA Security's network was that weakness.
It seems safer to attack RSA from a distance over the internet and get the data you want than to try the old-fashioned physical-infiltration method, breaking into either the data center or an individual's home where a fob could be found.
Please correct me if I'm misunderstanding something.