1) A virus provides some additional benefit. For example, if a virus's first stage acts as a sort of installer and pulls in some additional libraries for use, then at some point of public infection rate there's a pretty good chance those libraries are available to anyone programming for those systems. This becomes more feasible if AV cleaned the infectious and malicious portions while leaving the libraries behind. Imagine every system having libpcap on it... (a rough sketch of this idea follows point 2 below).
2) Viruses start opening previously locked-down portions of the OS to achieve their goals, and developers start taking advantage of these openings to extract more value for what they provide (in fact, being somewhat malicious themselves). This is already the case to some small degree with browser exploits. Keygen utilities that install malware seem to fit this somewhat.
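To make point 1 concrete, here's a minimal sketch (entirely my own, nothing from the article) of a program that only uses libpcap if something else already left it on the system; the library filename and the pcap_lib_version() lookup are just illustrative assumptions:

    /* Hypothetical sketch: opportunistically use libpcap if some earlier
     * actor (legitimate or not) installed it; otherwise degrade gracefully.
     * Build with: cc sketch.c -ldl */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void) {
        void *pcap = dlopen("libpcap.so.1", RTLD_NOW);  /* is it on this box? */
        if (!pcap) {
            puts("libpcap not found; falling back to reduced functionality");
            return 0;
        }
        /* pcap_lib_version() is a real libpcap call returning a version string */
        const char *(*lib_version)(void) =
            (const char *(*)(void))dlsym(pcap, "pcap_lib_version");
        if (lib_version)
            printf("using whatever libpcap is installed: %s\n", lib_version());
        dlclose(pcap);
        return 0;
    }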
This all presupposes a much more computer-integrated and yet less computer-aware society, which is where I think we are heading. More and more items have integrated computers and operating systems, yet we are much less aware of when this is the case (of course certain groups, such as HN regulars, are more likely to be aware of these items).
Edit: Here's a sample scenario for you: You turn on your new computer and try to use it to watch videos on your favorite site, but it doesn't work. You try again tomorrow and it does. What happened is that in the meantime a virus infected your system, installed a hacked DivX codec, and made some changes to your browser so you're redirected to certain sites more often. The author gets information about your system far beyond what is normally allowed, and you get access to videos as you normally would (possibly illegally).
It seems illogical to discuss possible benefits of real malware.
I work as a malware and enterprise security analyst, and I can certainly see no current or future benefits to malware. I think the "cons" of them harvesting your email, banking, and social networking credentials, abusing your system resources to spam and DoS, and siphoning important information from your hard drive to a remote server certainly outweigh any incidental benefits that may occur.
Of course. I made no argument towards it being beneficial, just scenarios in which we could possibly come to a homeostatic situation with malware, and as I note it assumes a fairly different software ecology than today. Imagine a world more in the vein of classic Neal Stephenson or Charles Stross as a prerequisite, if that helps.
I find it very unlikely that developers would start to rely on maliciously installed libraries. That would mean all the developer and tester machines need to be infected, and that none of them do testing on clean installs.
Additionally, for point 2, locked-down OSes usually have a locked-down app store. This means those app store tester machines would also need to be compromised, as well as the app store testing suite. Otherwise the testing suite would notice a dependency on not-allowed functionality.
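As a toy illustration of that kind of check (hypothetical, just my sketch of the idea), a submission tester could flag any binary that links against a library it doesn't allow:

    /* Hypothetical "app store"-style check: reject a submitted binary if it
     * links against a library that isn't allowed. Shells out to ldd, so it
     * assumes a Unix-like build machine. */
    #include <stdio.h>
    #include <string.h>

    static const char *disallowed[] = { "libpcap", NULL };

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 2; }
        char cmd[512], line[512];
        snprintf(cmd, sizeof cmd, "ldd %s", argv[1]);
        FILE *p = popen(cmd, "r");
        if (!p) { perror("popen"); return 2; }
        int bad = 0;
        while (fgets(line, sizeof line, p)) {
            for (int i = 0; disallowed[i]; i++)
                if (strstr(line, disallowed[i])) {
                    printf("dependency not allowed: %s", line);
                    bad = 1;
                }
        }
        pclose(p);
        return bad;  /* non-zero exit = reject the submission */
    }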
I think your interpretation of "developer" is stricter than mine. I meant anyone that writes a program (the "developer" of that program), not developer in the sense of a profession, which generally implies a bit more rigor (hopefully). That said, yes, it's unlikely. I wasn't making any assertions as to how those scenarios will come about, just ways in which I think they could, given the right conditions.
As for OS lockdown, while I can see why you went towards that interpretation, and on re-reading it does make sense, I was actually thinking of something much more general (so the OS is a bit of a red herring): any hole that allows root/administrator access (so any multi-user secured OS), or even something that loosens browser security to give web pages more, and probably less safe, access.
The critical difference is that parasites and viruses evolved, whereas bugs and malware were created by humans - and in the case of the latter, created deliberately.
Intent is not a factor here. Both organic and software parasites are simply capitalizing on the opportunity. This pattern is as close to a natural law as anything.
No, I think the critical difference is that the majority (no source here, just my gut feeling) of viruses and parasites are not beneficial to their hosts, and the few that are got that way by chance.
In nature, a virus or parasite can't spread too fast or hurt its host too much or its reproductive cycle will not be able to sustain itself. We see this in software too: a virus that is too nasty will attract too much attention and everyone will end up installing tools to detect and remove it.
So, while you may find the occasional symbiosis in nature between host and parasite, there is little reason for that to happen in the software world. It's far easier, safer, and potentially more lucrative to fly under the radar. For example, once your software infects someone's computer and you get access to their bank account, you only take $7.69 out once every couple of months or so instead of draining it in one go.
This link auto-plays a video with sound enabled for me (about Snowden, in the upper right corner, totally unrelated to the fine article about the ISS).
I don't see what the fuss is all about. They are open to infection. They're built on Linux rather than an entirely custom system, so they're vulnerable to existing malware floating around.
I think the implication is that had they been built on an internally developed OS they would not have been open to already-existing infections, not that Linux is more prone to infection than alternative OSes.
I can see why people would assume that open source is a big security risk, but how deep does your head have to be buried in the sand for you to think that Windows (which, AFAIK, is the only other OS often used in large government systems like this) has not been open to infection?
Who says that Windows is not open to infection? All that was stated is that Linux is open to infection. There's no implication that using another popular OS would have been a better choice.
Agreed, but it was still quite disingenuous of them to say that Linux was the specific reason they were open to infection. The fact that the space station has computers and ways to connect to those computers means there's a chance of infection.
Viruses are harder to implement on custom software, though. Perhaps the ISS should have used a custom OS to protect against malware? Popular off-the-shelf software is going to be targeted before an in-house and not-publicly-documented piece of software.
Building an OS from the ground up would take a lot of money and manpower, and NASA aren't exactly rolling in cash nowadays.
And while, yes, in-house software would be safe against most existing and general-purpose malware, the ISS would not be safe from direct attacks. Those are the real threat anyway...
VxWorks is pretty popular with space agencies (NASA uses it, as does SpaceX; Curiosity and Dragon both run the VxWorks RTOS). It has a 64-bit mode for Intel processors, supports the HRFS, DOSFS, and NFS filesystems, has IPv6 built in, and is POSIX compliant.
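Since the POSIX compliance is the part that matters most for porting existing code, here's a minimal sketch of what that buys you: the same pthread calls build against a desktop libc or a VxWorks image configured with POSIX thread support (the "telemetry" task name is made up purely for illustration):

    /* Minimal POSIX threads example; illustrative only, not flight code. */
    #include <pthread.h>
    #include <stdio.h>

    static void *telemetry_task(void *arg) {      /* hypothetical task name */
        printf("telemetry task running on %s\n", (const char *)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t;
        if (pthread_create(&t, NULL, telemetry_task, (void *)"VxWorks or Linux") != 0) {
            perror("pthread_create");
            return 1;
        }
        pthread_join(t, NULL);
        return 0;
    }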
While direct attacks are a real threat, I don't see anywhere saying this was a direct attack. Not using an off-the-shelf OS could have mitigated the attack if it wasn't a direct attack.
>Kaspersky told the Press Club that creating malware like Stuxnet, Gauss, Flame and Red October is a highly complex process which would cost up to $10 million to develop.
Really, $10 million to disable one strategic facility (or maybe N facilities) is expensive? That is probably the cost of a dozen smart bombs, and you can use the digital counterpart much more stealthily.
>Kaspersky revealed that Russian astronauts carried a removable device into space which infected systems on the space station. He did not elaborate on the impact of the infection on operations of the International Space Station (ISS).
>Kaspersky said he had been told that from time to time there were "virus epidemics" on the station.
Given the total lack of supporting evidence here, I'm going to stick a big ol' [citation needed] sticker on this.
Thanks for the references (and no worries on the "grumpiness" :) ) but I still don't think this qualifies as an "epidemic", to use Kaspersky's wording.
Yes, I was thinking perhaps this was trying to get the Kaspersky name out there in front of people when they are renewing their anti-virus packages. Since a lot of students end up with computers at Christmas time, there seems to be a renewal hump at the same time, and it seems to correlate with the people who make such systems trying to get into the press about how aware/good/active they are. It looks like a great example of a self-organizing system :-)
Perhaps, but this is the first time I've seen the news presented on a website with an autoplaying video of unrelated material under a full page overlay popup that nags me to like them on facebook.
What are you talking about? Stuxnet eventually deploys native infected code to a non-Windows PLC device; Windows is just a carrier, not the final target:
"the Stuxnet worm incorporates several sophisticated means of propagation with the goal of eventually reaching and infecting STEP 7 project files used to program the PLC devices.
For initial propagation purposes, the worm targets computers running the Windows operating systems. However, the PLC itself is not a Windows-based system but rather a proprietary machine-language device."
Interesting, because a common suggestion for increasing the security of a sensitive system is to maintain an "air gap". They have an "off-the-planet gap" and they still got compromised.
http://www.extremetech.com/extreme/155392-international-spac...
Kaspersky never specifically claims a recent or Stuxnet infection of the ISS, just of a power plant, so he's probably referring to the above as an example of an air gap not being enough to prevent attacks.
The commercial laptops (originally running Win98, then NT, then XP, now moving to Debian 6) are used for normal computing and interfacing with segment control. The segment computers (60+) which actually run the station are heterogeneous, bespoke aerospace-heritage hardware and software. The Russian and American sides especially are very different and almost entirely independent. Pretty darn sure there are no Siemens systems running up there, so Stuxnet wouldn't be a problem.
The embedded systems are probably not all that hardened, but they are not widely distributed (to say the least) and are hard to get to, so it would be awfully hard to target them. Possible, but mostly through attacking the developers on the ground, I should think.