Because each country is on a different continent, following different rules and laws, accessing data is extremely hard.
Many countries in different geographical locations won't be happy to give access to banking information.
Before anyone can track the money, it will be deposited into a shell company. With amounts like that, there are often bigger powers that will help make the funds disappear, like politicians and bank executives.
I have personally dealt with banks at a fairly senior level, and their willingness to ignore the rules and regulators is enormous. That's in the EU; I imagine in places like Gibraltar or the Cayman Islands it must be pretty easy.
1) In black markets, reputation is meaningful... if word gets around that you rip people off, then you might have trouble finding more customers.
2) Stealing that much money is a good way to get killed. I wouldn't put it past a powerful white collar criminal to organize a murder or two.
Probably someone knew the withdrawals were iffy but got paid off; Manila's a fairly corrupt place. They are arguing about whose fault that was: http://newsinfo.inquirer.net/772643/rcbc-branch-head-rogue-o...
Registering for SWIFT training is easy enough, especially in developing countries where verification can be harder to achieve.
Gaining access to swift.com through either "proper" registration or phishing will get you access to the SWIFT SDK as well as tons of other material.
So no, sorry, but to me and to anyone else who's even remotely familiar with how shoddy SWIFT and typical internal banking security are, it doesn't sound like an inside job, just a job well done.
This strongly points to an inside job, but it does not conclusively prove one.
If you can present an all-clear signal to the bank staff at all the immediate human-readable interfaces, no one would notice; at best, the fraudulent transactions would be detected 30+ days down the line when the banks perform account reconciliation.
This is a good post about how money moves around between banks: https://gendal.me/2013/11/24/a-simple-explanation-of-how-mon... It doesn't go into payment/messaging systems (i.e. SWIFT) too much, which is a good thing, but it does explain how banks handle and settle transfers.
SWIFT is a glorified messaging service for banks; that's how it started. Today a lot of "out of the box" applications have been developed on top of it, but at its core SWIFT is just a trusted network that enables its members to securely transfer messages between each other. These messages often end up being used to facilitate transactions, but they aren't what actually moves money around.
However they gained the knowledge, it is incredible.
To get a swift.com account you need a few details: a name, an institution code (a BIC code for banks), and some additional information.
SWIFT doesn't handle the authorization; the institution on whose behalf you are registering does, which means it's probably even easier to phish/social-engineer yourself in.
Gaining access to swift.com credentials also shouldn't be that hard, especially if you have already compromised the bank's network or any of its employees.
To get access to the SWIFT documentation and other materials you also don't have to compromise your initial target, which makes it even easier.
Registration for SWIFT's "MyStandards" is completely open to the public, which both gives you access to quite a bit of information and allows you to participate in discussion groups, which may lead to a few more vectors for social engineering.
tl;dr - the recent SWIFT / Bangladesh heist has been analysed from the outside by BAE Systems, of all people, and they examine some of the malware. It looks like little more than what I would expect a (malicious) set of scripts developed by a good in-house IT team to look like as they solve some MIS problem. It's that custom-built.
The main .exe replaces the eponymous two bytes in the SWIFT system, so that a failed check no longer stops execution (presumably a SWIFT authorisation check guarding access to the underlying Oracle DB). The patched bytes are a JNZ instruction in the target application, and even I remember that trick.
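As a hypothetical sketch of that trick (the offsets, byte values, and surrounding instructions here are invented for illustration, not taken from the actual malware), the classic two-byte patch turns a short conditional jump (JNZ, opcode 0x75) into two NOPs (0x90 0x90) so the failure branch of a check is never taken:

```python
# Hypothetical sketch: patch a short JNZ (0x75 <rel8>) at a known offset
# into two NOPs (0x90 0x90) so the check's failure branch never runs.
# The byte sequences below are invented, not the real malware's targets.

def patch_jnz_to_nops(data: bytes, offset: int) -> bytes:
    """Return a copy of `data` with the 2-byte JNZ at `offset` NOP'd out."""
    if data[offset] != 0x75:  # 0x75 is the opcode for a short JNZ
        raise ValueError("no JNZ at expected offset")
    patched = bytearray(data)
    patched[offset:offset + 2] = b"\x90\x90"  # NOP; NOP
    return bytes(patched)

# Toy demonstration on a fake code sequence:
code = bytes([0x85, 0xC0,   # TEST EAX, EAX
              0x75, 0x04,   # JNZ +4 (jump taken when the check fails)
              0xC3])        # RET
patched = patch_jnz_to_nops(code, 2)
assert patched == bytes([0x85, 0xC0, 0x90, 0x90, 0xC3])
```

The same idea applied in memory (rather than to a file on disk) is what lets the check "pass" without the binary on disk ever changing.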
Then there is code dealing with SQL statements, so it can both delete malicious SWIFT instructions from the local database and inject its own(?). It even tampers with the local printer to delete confirmation messages (hard copies of each transaction are presumably printed there). The actual printer model is, it seems, hard-coded into the attackers' toolkit.
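The database-tampering half of that can be sketched in miniature. This uses sqlite3 with an invented schema and message IDs purely for illustration; the real system sat on an Oracle database with a different layout:

```python
import sqlite3

# Invented schema and IDs, for illustration only; the real system used
# an Oracle DB with a different layout.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id TEXT PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO messages VALUES (?, ?)",
                 [("MT103-001", "legitimate transfer"),
                  ("MT103-002", "fraudulent transfer")])

# The malware's trick, in miniature: delete the rows that would reveal
# the fraudulent instruction before anyone queries or prints them.
conn.execute("DELETE FROM messages WHERE id = ?", ("MT103-002",))
remaining = [row[0] for row in conn.execute("SELECT id FROM messages")]
print(remaining)  # only the legitimate message is left
```

Once an attacker can issue arbitrary SQL against the local store, the application's own view of history is no longer trustworthy, which is why the printed confirmations had to be suppressed too.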
This has several lessons. Firstly, if you have something valuable, someone will work really hard to attack you specifically. Secondly, there is really no excuse anymore not to move every OS over to randomised memory layout (ASLR) and more. But even so, I am not convinced this would have helped here. The specificity of the attack is incredible.
Lastly, modern software development already seems to be about duct-taping together other people's code and stopping once it "works". The cost of developing secure systems is way beyond the cost of developing "works on my machine" systems, and that cost needs to be raised at a business level as an insurance premium. Then we can make sensible trade-offs. Not sure there is a 961M-dollar trade-off, but still.
It might make it harder... but in this case couldn't an attacker search the entire address space for the location of the library? ASLR protects against buffer overflows as an attack vector, but here the attacker already has access.
(I'm no expert and would appreciate correction.)
At some point we need to go back to secure kernels only a few thousand lines long that dole out permissions and access, making all attack vectors ridiculously harder.
Can we do it? Will Facebook hand over its billions to the project? Will anyone?
At some point the cost of not doing it will exceed the cost of doing it. I'm not sure what it would actually take; shoddy security already costs the global economy tens of billions a year, and people just accept it.
Will something like this really cost more than $900M?
If so, I'm scratching my head, as it appears they do have knowledgeable infosec people, but their security is laughable. Anyone want schematics and parts lists for anything they make? There's an email address you can send a message to that will respond with them. All plain-text, no validation.
The attitude that good security costs trillions and is therefore unattainable is all pervasive.
Sure, there are some technologies I wouldn't want leaked even to allies, where significant parts of how they work aren't public knowledge (stealth material/coating composition, maybe?). But those are the only things I would actually care about keeping secret if I were a government/military buying hardware. If something isn't hard to figure out, I'd rather the knowledge be easy to access so my engineers/technicians can get at it.
BAE Systems is a significant weapons manufacturer. Sorting out security 'problems' like the one you are describing isn't that expensive, so if it was bothering their clients, you'd think they would do it. It is more plausible that they just don't care that this information is so easy to access, because easy access is more convenient for them or their clients.
This seems like an odd complaint. I can do this with my car and dishwasher, too. Unless BAE makes classified military stuff they're exposing in this manner it seems less like a security hole and more like a useful thing for people maintaining the stuff they manufacture.
Could you expand on this?
Google SCOMP, XTS-400, or SAGE guard to know what kind of INFOSEC goes back to 80's in military before firewalls were invented or security went mainstream.
As much of that article describes, their software relies on Oracle DB for just about everything.
But now looking back, I'm not sure what better option they would have had.
I think they were probably just in it for the money.
As a separate note, state actors can be very amateurish. The Chinese did things, as disclosed in the Mandiant APT1 report, like taunting users, sending phishing messages in very non-native English, and leaving plaintext signatures as a way of bragging.
Why were they uploaded there, and where are said repositories?
Well, we have a clue for piece #2:
"... sends result to attacker domain over HTTP"
How the hell does that happen in this day and age? You trust any traffic coming out of your network??
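One standard mitigation is strict egress filtering: hosts near the payment system may only talk outbound to an explicit allowlist, so a callback to an attacker domain is dropped by default. A minimal deny-by-default sketch (the host names here are invented, and in practice this logic lives in a firewall or proxy, not in application code):

```python
# Deny-by-default egress check, as a sketch. The allowed destinations
# are invented examples; real deployments enforce this at the network
# layer (firewall/proxy), not in application code.
ALLOWED_EGRESS = {"swiftnet.example.internal", "hsm.example.internal"}

def egress_permitted(dest_host: str) -> bool:
    """Permit only explicitly allowlisted destinations; everything
    else, including an attacker's C2 domain, is refused."""
    return dest_host in ALLOWED_EGRESS

assert egress_permitted("swiftnet.example.internal")
assert not egress_permitted("attacker-c2.example.com")
```

With a rule like that in place, the malware's "send result over HTTP" step fails loudly instead of succeeding silently.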
Apparently, evtdiag.exe was irreparably ambiguous. =|
1. Multi-signature transactions could require a hacker to compromise multiple machines possibly on separate network segments.
2. Multiple reporting and auditing machines could be employed on several separate networks to again increase intrusion requirements.
I suspect SWIFT already allows for or could employ similar methods on their network to mitigate these types of scenarios as well.
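The multi-signature idea in point 1 amounts to an m-of-n approval rule: a payment is released only when enough distinct authorised officers, ideally working from machines on separate network segments, have signed off. A toy sketch (the officer names and threshold are illustrative, not SWIFT's actual mechanism):

```python
# m-of-n approval rule for releasing a payment, as a sketch.
# Names and threshold are illustrative, not SWIFT's real mechanism.

def release_payment(approvals: set[str], authorised: set[str], m: int) -> bool:
    """Release only if at least m distinct authorised officers approved.
    Unauthorised approvers are ignored via the set intersection."""
    return len(approvals & authorised) >= m

authorised = {"officer_a", "officer_b", "officer_c"}
assert not release_payment({"officer_a"}, authorised, 2)           # one signer: blocked
assert release_payment({"officer_a", "officer_c"}, authorised, 2)  # two signers: released
```

The security benefit comes from where the keys live: if each signer's credential sits on a different machine, the attacker must compromise several independent systems instead of one.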
In fact it seems the whole situation could have been avoided if the bank had followed the recommended practice of having a secure wire room with a computer that's not connected to any network other than SWIFT. And if they don't do that they can have the same problem on a private blockchain network.
That wouldn't do anything for this. If you can change arbitrary bytes of the binary and have it execute, you can rewrite the whole thing, including patching out all these extra checks too.
Very shoddy terminal security :(
We can guess that the Bank of Bangladesh uses not-locked-down Windows desktops to run their system.
Nothing in the post indicates that it was specific to a single victim.
Patching out a JNZ was something everyone used to do to get rid of nag screens in shareware. I'd be very surprised if there isn't a generic utility for extracting the locations of the right bytes at this point, but either way finding them is a simple process.
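Finding candidate patch sites really is a few lines of code: scan the binary for a distinctive byte sequence ending in the conditional jump. The byte pattern below is a made-up example, not the real malware's signature:

```python
# Generic byte-pattern scanner, as a sketch. The pattern and blob are
# made-up examples, not the actual signature used against SWIFT software.

def find_pattern(data: bytes, pattern: bytes) -> list[int]:
    """Return every offset at which `pattern` occurs in `data`."""
    hits, start = [], 0
    while (i := data.find(pattern, start)) != -1:
        hits.append(i)
        start = i + 1  # allow overlapping matches
    return hits

# Locate TEST EAX,EAX (0x85 0xC0) followed by a short JNZ (0x75):
blob = bytes([0x55, 0x85, 0xC0, 0x75, 0x07, 0xC3, 0x85, 0xC0, 0x75, 0x02])
print(find_pattern(blob, bytes([0x85, 0xC0, 0x75])))  # [1, 6]
```

In practice you would then disassemble around each hit to confirm it is the check you want before patching, but the search itself is trivial.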
Going after the print files just indicates familiarity with SWIFT and PCL.
It doesn't actually look like a particularly clever attack to me, at least not from what's in this article. Very basic security mitigations would have prevented it (like validating the integrity of the Oracle files).