I’m a bit disappointed the mechanism to exfiltrate data is based on sharing a USB drive between an internet-connected device and an air-gapped one. It would have been cool if it used some other side channel, like acoustic signals.
I felt like the article spent way too many words explaining the idea of "the agency shared data across the air gap using USB drives, and a vulnerability was used to surreptitiously copy the malware onto the USB and then onto the target machine", and AFAICT none explaining what that vulnerability is or why it exists (or existed). Then the rest is standard malware-reversing stuff that doesn't say anything interesting except to other malware reverse engineers. The inner workings of the tools aren't interesting from a security perspective; the compromise of the air gap is.
(As for acoustic etc. side-channel attacks: these would require a level of physical access at which point the air gap is moot. E.g. if you can get a physical listening device into the room to listen to fan noise etc. and deduce something about the computation currently being performed, and then eventually turn that into espionage... you could far more easily just directly use the listening device for espionage in the form of listening to the humans operating the computers.)
There was no novel vulnerability. The pwned machine just replaced a recently-accessed folder on the stick with an exe to trick the user into executing it on the target machine.
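For the curious, the trick is mechanical enough to sketch. Here's a minimal Python detector for the pattern, assuming the common variant where the real folder is marked hidden and the dropped .exe takes its name (Windows-only; the drive letter below is made up):

    import os
    import stat

    def find_folder_impersonators(usb_root):
        """Flag hidden directories sitting next to an .exe of the same
        name; with "hide extensions for known file types" enabled,
        Explorer shows that .exe as a plain folder."""
        hits = []
        for name in os.listdir(usb_root):
            path = os.path.join(usb_root, name)
            if not os.path.isdir(path):
                continue
            # st_file_attributes exists only on Windows.
            attrs = os.stat(path).st_file_attributes
            if attrs & stat.FILE_ATTRIBUTE_HIDDEN and os.path.isfile(path + ".exe"):
                hits.append(path + ".exe")
        return hits

    # e.g. on a freshly inserted stick:
    # print(find_folder_impersonators("E:\\"))

Nothing clever: the whole attack rides on Explorer's default of hiding file extensions.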
Yeah, it is very bloated. I suspect the article was bloated by AI rather than a human, though. I wonder if they either generated the first section as a summary or extended the later sections unnecessarily.
For example, early on it says:
" collect interesting information, process the information, exfiltrate files, and distribute files, configurations and commands to other systems."
and later on: " they were used, among other things, to collect and process interesting information, to distribute files, configurations, and commands to other systems, and to exfiltrate files."
It also mentions several times that the attack on a South Asian country's embassy was the first time this software was seen.
Repeated info like this used to be a telltale sign of partially applied AI edits with RAG a while ago; it might still be true today.
Yup, no respect for the people who published the article. It was one paragraph of content impossibly diluted. TLDR: some idiots allowed USB sticks to be plugged into the supposedly air-gapped system. Hilarity ensued.
Such side-channel attacks are academic. In fact, someone on HN pointed out there's a researcher who invents new ones by the dozen, and the media run with it whenever he presents another one.
It's fun, and not hard to come by. Everything anything does - which includes everything an air-gapped computer does - constantly radiates information about what it's doing, at the speed of light (note: think causality, not light). We know the data is there, and inventing new and interesting ways to tap into that stream is a really nice hobby to have.
I mean, someone who researches the security of air-gapped computers continually coming up with new ways to break them seems like the expected outcome. It's their job, after all.
I would start by asking what they need computers for.
You don't really need one just to read text from a screen. Most of that would be old documents that for the most part should be public anyway. What remains besides reading is most likely 95% stuff they shouldn't be doing.
The most secure part is the stuff we wish they were doing.
I’m having a real hard time understanding what this comment is saying. Are you asking what high side computers are used for besides reading classified information?
Maybe. I could also be asking why you would use a computer if all you want is to read documents.
If you have an operator send a telegram for you, that person is capable of doing a lot more with your text than you want. On the other end is another telegram operator, to further increase the risk. You might want to send a letter instead. It's slower but more secure.
If you want to read text from a monitor, a computer is super convenient, but like the operator it can do other things you don't want. You don't need a computer to put text on a screen. Alternatives might be slow and expensive, but in this case you don't have to send things to the other side of the world; that would be the thing you specifically don't want.
One of my favorite hacks of yore: somehow, some folks managed to compromise the iPod to the point that they could run some of their own code and make a beep.
They compressed the ROM and "beeped" it out, wrapping the iPod in an acoustic box, recording it, and then decoding the recording to recover the ROM.
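The encoding side of a trick like that takes surprisingly little code. A toy sketch of the idea in Python (every parameter here is invented; the comment above doesn't say what scheme the real hack used):

    import math
    import struct
    import wave

    def bytes_to_beeps(data, wav_path, rate=44100, bit_ms=50,
                       f_zero=1000.0, f_one=2000.0):
        """Encode each bit of `data` as a short tone: one frequency
        for 0, another for 1. Crude FSK with no sync or error
        correction; a real channel would need both."""
        frames = bytearray()
        samples_per_bit = int(rate * bit_ms / 1000)
        for byte in data:
            for i in range(8):
                freq = f_one if (byte >> (7 - i)) & 1 else f_zero
                for t in range(samples_per_bit):
                    s = int(16000 * math.sin(2 * math.pi * freq * t / rate))
                    frames += struct.pack("<h", s)  # 16-bit PCM sample
        with wave.open(wav_path, "wb") as w:
            w.setnchannels(1)   # mono
            w.setsampwidth(2)   # 16-bit
            w.setframerate(rate)
            w.writeframes(bytes(frames))

    # bytes_to_beeps(open("rom.bin", "rb").read(), "rom.wav")

At 50 ms per bit that's only 20 bits/s, which is why compressing the ROM first mattered.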
This is the plot of most of Ghost in the Shell. That series looks more and more prescient as time goes on. Another big plot point is that most of the internet is just AIs talking to each other. 10 years ago that sounded ridiculous, now not so much.
"Ralfi was sitting at his usual table. Owing me a lot of money. I had hundreds of megabytes stashed in my head on an idiot savant basis, information I had no conscious access to. Ralfi had left it there. He hadn't, however, came back for it." -- Johnny Mnemonic, William Gibson, 1981
If you're a gamer, you should try Cyberpunk 2077 :D I'm currently playing it, over 200 hours in, and it really feels like a scarily accurate, techno-dystopian version of our world.
I am not sure why you are being downvoted. Just like fridges, cars, and ovens gained internet access, enhanced humans will extremely likely, eventually -- and possibly with interesting consequences -- be hacked.
<< You can already hack people by just telling them things.
True, but language fluctuates, the zeitgeist changes, and while the underlying techniques remain largely the same, what nation-state would not dream of being able to simply have people obey when it tells them to behave in a particular way? Yes, you can regiment people through propaganda, but what if you could do it more easily this way?
To offer a contributory not-really-metaphor for viewing things: After a "grey goo" apocalypse covers the world in ruthlessly replicating nanobots, eventually there arise massive swarms of trillions of allied units that in turn develop hivemind intelligences, which attempt to influence and "hack" one-another.
I am one of them, so are you, and I just made you think of something against--or at least without--your will.
> True, but language fluctuates, the zeitgeist changes, and while the underlying techniques remain largely the same
This applies to software as well
> Yes, you can regiment people through propaganda, but what if you could do it more easily this way?
Widespread use of BCIs would help with this for sure, but don’t be under the impression that individual and population level manipulation techniques haven’t progressed well past simple propaganda.
<< don’t be under the impression that individual and population level manipulation techniques haven’t progressed well past simple propaganda.
I absolutely buy it based merely on the glimpses of documents from various whistleblowers over the years. At this point, I can only imagine how well-oiled a machine it must be.
Certainly people would like an API for others without needing to reverse engineer them. Agreed that there is a threshold of simplicity past which it becomes easier to organize than having to give speeches and run propaganda.
Like the January 6 question, I’m assuming that anyone who had a neuralink would likely be ineligible for any sort of clearance to access information like this.
I am not as certain. Sure, Musk and his product are no longer 'cool' given his move to the US political right, but tech is tech. Some tried banning cell phones and whatnot, and the old guard there had to adjust their expectations.
In short, I am not sure you are right about it. If anything, and I personally see it as a worst-case scenario, use of that contraption will be effectively mandatory the way having a cell phone is now (edit: if you work for any bigger corp and want to log in from home).
As far as I am aware, no electronic devices from outside, and no devices that transmit anything, are allowed in these high security areas. That’s inclusive of cell phones, for example.
That is: the point I am making is more nuanced than whether something is popular (like cell phones or other tech).
Oh, I am sure there are restrictions for the rank and file, but the higher-ups with such access can (and apparently do) get exceptions[1], and while this is one of the more visible examples, I sincerely doubt he is the only one.
What are you smoking? We hear about breaches of super important databases all the time and that doesn't seem to convince any company to give a single shit more than just enough to avoid negligence. Not to mention social media's entire business model is hacking people - keep them on your platform by any means necessary.
> we hear about breaches of super important databases all the time and that doesn't seem to convince any company to give a single shit more than just enough to avoid negligence.
I'm not sure why you think this is counter to my point (perhaps we should wonder what you yourself are smoking?), which to reiterate was that:
1. Most current security issues are due to the various insecure foundations we build our technology on, and
2. By the time Neuralink type implants are common, that won't be the case anymore.
We have both cars and pacemakers that can kill people if you send the right wireless commands. Why would Neuralink be different?
I agree that we do have the technology to make it secure if we want to. We made flight software secure back in the '80s or so.
What we don't have is the incentives. We've built everything on insecure foundations to get to market cheaper and faster. These incentives don't change for Neuralink. In fact, they create gold rush conditions that make things worse.
What could change things dramatically overnight would be the government stepping in and enforcing safety regulations, even at the cost of red tape and slow bureaucratic processes. And it's starting, slowly. But e.g. the EU is promoting SBOMs, so their underlying mental model is still one where you tape random software together quickly.
At some point in the future no one will be using x86 or any variation of it, and we will all be using a secure architecture. Same with insecure languages: far enough in the future, every language in common use will be safe.
I believe by the time brain implants are common, we will be far enough in the future that we will be using secure foundations for those brain implants.
> What could change things dramatically overnight would be the government stepping in and enforcing safety regulations,
For a damn brain implant I don't see why they wouldn't.
I can tell you're high because of #2. The only way Neuralink is secure is if we get rid of the system that incentivizes #1, aka capitalism, and don't replace it with something equally bad or worse.
Oh, and Musk isn't allowed a Neuralink tripwire to blow up your brain via his invention because he saw pronouns listed somewhere and got triggered.
> The only way Neuralink is secure is if we get rid of the system that incentivizes #1, aka capitalism, and not replace it with something equally bad or worse.
Oh man, you've ingested that anti-capitalism koolaid like so many young college kids are so quick to do. It's always such a shame.
This isn't really anything to do with capitalism; it's a question of regulation, e.g. what the FDA does, and also a question of time, because when enough time passes, most computing will be secure by default due to having gotten rid of the insecure foundations.
And more than that, it's an issue with democracy more than capitalism. Fix the way people vote if you want to fix the world, or prevent the types of people who want to believe the earth is flat from having a vote at all.
Security will never be a "largely solved problem", when there are humans involved (and probably even when humans are not involved).
There is no technical solution to people uploading high res photos with location metadata to social network du jour. Or the CEO who wants access to all his email on his shiny new gadget. Or the three-letter agency who think ubiquitous surveillance is a great way to do their job. Or the politician who can be easily convinced the backdoors that can only be used by "the good guys" exist. Or the team who does all their internal chat including production secrets in a 3rd party chat app, only to have them popped and their prod credentials leaked on some TOR site. Or the sweatshop IT outsourcing firm that browbeats underpaid devs into meeting pointless Jira ticket closure targets. Or the "move fast and break things" startup culture that's desperately cutting corners to be first-to-market.
None of the people involved in bringing "enhanced human" tech to market will be immune to any of those pressures. (I mean, FFS, in the short term we're really talking about a product that _Elon_ is applying his massive billionaire brain to, right? I wonder what the media-friendly equivalent term to "Rapid Unscheduled Disassembly" for when Neuralink starts blowing up people's brains is going to be?)
> Security will never be a "largely solved problem", when there are humans involved (and probably even when humans are not involved).
It absolutely will. I didn't say completely solved, I said largely solved.
> There is no technical solution to people uploading high res photos with location metadata to social network du jour.
Bad example honestly, since most social media sites strip out exif data by default these days. Not sure there are any that don't.
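The stripping itself is trivial to do during re-encoding, which is presumably why it's near-universal. A rough sketch of the idea with Pillow (filenames are placeholders, and real upload pipelines do this as part of resizing/recompressing anyway):

    from PIL import Image  # Pillow

    def strip_metadata(src, dst):
        """Copy only the pixel data into a fresh image, dropping
        EXIF (GPS coordinates included) and any other metadata."""
        img = Image.open(src)
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst)

    strip_metadata("photo.jpg", "photo_clean.jpg")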
> Or the CEO who wants access to all his email on his shiny new gadget. Or the three letter agency who think ubiquitous surveillance is a great way to do their job. Or the politician who can be easily convinced the backdoors that can only be used by "the good guys" exist. Or the team who does all their internal chat including production secrets in a 3rd party chat app, only to have them popped and their prod credentials leaked on some TOR site. Or the sweatshop IT outsourcing firm that browbeats underpaid devs into meeting pointless Jira ticket closure targets. Or the "move fast and break things" startup culture that's desperately cutting corners to be first-to-market.
Yes yes, humans can be selfish and take risks and be bribed and negligent and blah blah blah.
The context of the comment was Neuralink implants getting hacked the way an out-of-date smart TV might. And when it comes to the actual tech, security will be a solved problem, because most of the problems we see today are due to everything being built on top of insecure foundations on top of insecure foundations.