
German nuclear plant infected with computer viruses, operator says (2016) - edward
https://www.reuters.com/article/us-nuclearpower-cyber-germany-idUSKCN0XN2OS
======
Nokinside
If a virus can cause an accident that releases radiation, that's not a
computer security problem, it's a failure in nuclear plant automation design.

Modern nuclear plants usually have analog monitoring and safety systems
parallel to the digitized controls in critical parts like neutron flux
measurement and calibration, fuel loading monitoring, control rod position
indicators, control rod actuators, and so on. Any error caused by a virus
attempting to sabotage the plant should result in a shutdown.

You can divide the system into 1) process automation, 2) limiting systems,
and 3) protection systems.

Process automation can be fully digitized and computerized. If it fails, or a
computer virus attempts to sabotage the system, that should not cause critical
conditions, because the limiting systems have diversified digital and analog
controls that keep the given parameters within bounds. If that fails,
independent protection systems trigger a shutdown when those same parameters
are exceeded.
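The three layers can be illustrated with a toy sketch (not any real plant's logic; the parameter names, setpoints, and units are all hypothetical). The point is that the limiting and protection layers act on the same measured parameter but are independent of the possibly compromised automation layer:

```python
PRESSURE_LIMIT = 100.0   # limiting-system setpoint (hypothetical units)
PRESSURE_TRIP = 110.0    # protection-system trip point

def process_automation(demand: float) -> float:
    """Digitized control layer; assume a virus can make it demand anything."""
    return demand

def limiting_system(pressure: float) -> float:
    """Independent layer that clamps the parameter to its limit."""
    return min(pressure, PRESSURE_LIMIT)

def protection_system(pressure: float) -> bool:
    """Independent layer: trips a shutdown if the parameter is exceeded."""
    return pressure > PRESSURE_TRIP

# Sabotaged automation demands an unsafe pressure...
demanded = process_automation(150.0)
# ...the limiting system clamps it to a safe value...
limited = limiting_system(demanded)
# ...and even if limiting had failed, protection would trip a shutdown.
shutdown = protection_system(demanded)
```

Here sabotage of layer 1 is caught by layer 2, and a failure of both layers still ends in a shutdown rather than critical conditions.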

Another way to divide safety is into four principles:

1) parallel principle: parallel subsystems

2) separation principle: parallel subsystems are placed so that simultaneous
damage to all of them is very unlikely.

3) diversity principle: the same function is implemented with different
operating principles. The same valve can be operated by an electric motor,
compressed air, or manually.

4) safe-state principle: a failing system falls back to a safe state
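The diversity and safe-state principles together can be sketched in code (a toy illustration; the actuator names follow the valve example above, and the spring-close fallback is an assumed design):

```python
def close_valve(actuators) -> str:
    """Try diverse actuators in turn; fall back to the safe state."""
    for name, actuate in actuators:
        try:
            actuate()
            return f"closed via {name}"
        except RuntimeError:
            continue  # this operating principle failed; try the next one
    # Safe-state principle: with no working actuator, a spring (or gravity)
    # drives the valve to its designed safe position.
    return "fail-safe: spring closes valve"

def broken():
    raise RuntimeError("actuator fault")

# Electric motor fails, but the diverse compressed-air path still works:
result = close_valve([("electric motor", broken),
                      ("compressed air", lambda: None)])
```

A common-mode failure that defeats one operating principle (say, loss of electric power) leaves the others intact, and losing all of them still ends in the safe state.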

~~~
roywiggins
A shutdown is still a denial-of-service attack, which might be bad enough on
its own. How well can the grid compensate for one or several plants suddenly
dropping offline? We know cascading grid failures can happen.

But, I guess the risks there are about the same for a nuclear plant as they
would be for a computerized non-nuclear plant.

~~~
Nokinside
This is a unique new category for networked digital automation that is absent
from other systems. I don't think computer security people take it seriously
enough.

Say you sabotage a "dumb" automation system so that it suddenly turns the car
right. When the failures start to happen, the manufacturer eventually notices
and recalls the vehicles. Maybe tens or hundreds of people will be injured.

But what if the system has access to the date and time through the CAN bus? If
you design a system that steers all affected cars suddenly right at the same
time, the cascade can kill and injure thousands and cause huge panic. People
will be afraid to drive until the problem is fixed.

~~~
nickpsecurity
High-assurance security has been pushing separation kernels to solve that kind
of problem for a long time. Vendors even list automotive in their Solutions
menus. Two got evaluated by the NSA for security. I think the NSA cancelled
that protection profile because the hardware and firmware had too many leaks
for a software-only solution to be believable. The boards aimed at that market
are also increasingly complex. Still, the separation kernels and
virtualization platforms were the strongest thing produced for this goal, and
some are still available.

Here's a good explanation of the default architectural style used in
high-assurance security, from a team that open-sources a lot of their
prototypes:

[https://os.inf.tu-dresden.de/papers_ps/nizza.pdf](https://os.inf.tu-dresden.de/papers_ps/nizza.pdf)

Here's a commercial one used in an automotive platform that got certified to
high assurance at EAL6+:

[https://www.ghs.com/products/safety_critical/integrity-do-17...](https://www.ghs.com/products/safety_critical/integrity-do-178b.html)

I'm not endorsing the [closed-source] product so much as the methods it
claimed to use to get the job done. Look at the architecture, the minimalist
runtimes (especially for Ada), the determinism, the partitioning, and the
certification data at the bottom right. That last part hints at the kinds of
things they have to do and get reviewed for the certification. If a developer
claims to do security, ask to see the Covert Channel Analysis of their
product. That certification requirement/activity is how cache-based timing
channels were discovered in 1992 by the VAX Security Kernel team, among other
leaks. Covert and side channels are getting popular again, but most people
still don't apply storage/timing analysis.

You'll find that most developers and "security professionals" don't build
secure stuff using methods proven to work in the past. Companies building tech
like this sell it at a high price, since there are so few buyers. The
networking-oriented ones started at $50,000 when I priced them. You can bet
the big companies whose products are getting hit could have afforded them,
though. Especially the royalty-free options.

~~~
Nokinside
Partitioning kernels have been used for decades in aviation and other
safety-critical systems. I have developed software for INTEGRITY, for example.
It's commercial software and not proper open source, but the code is available
to their customers.

These systems address the issue of separating different functions as well as
possible. What they don't address are the problems that widely used
homogeneous systems face once a way to compromise the system is found.

~~~
nickpsecurity
I first encountered them made for aerospace, for DO-178B. What's the oldest
one you know of like a separation kernel whose design details are publicly
described? I like giving proper credit and knowing the history.

For me, we went from security kernels to Rushby's concept to separation
kernels. Then they did aerospace versions first, since safety requirements
were already a subset of security requirements. Plus, aerospace companies
would actually buy it. Hard to sell that stuff in non-regulated markets.

~~~
Nokinside
I'm not good at history of operating systems.

The separation kernel is an efficiency thing. The oldest and most secure way
to solve the same problem is separate computers, with data diodes between them
if necessary. The need for integrated modular avionics was the driving force
for the development: aircraft have space and weight limits.
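The "separate computers plus data diode" idea can be sketched in software as a strictly one-way channel where the sending side holds no read end at all (a real data diode enforces this in hardware, e.g. an optical link with no return path; this is only an illustration of the principle):

```python
import os

def make_diode():
    """Return (send, receive) callables over a strictly one-way channel."""
    r, w = os.pipe()

    def send(data: bytes) -> None:
        os.write(w, data)    # the source side holds only the write end

    def receive(n: int) -> bytes:
        return os.read(r, n)  # the sink side holds only the read end

    return send, receive

send, receive = make_diode()
send(b"telemetry: pressure nominal")   # hypothetical message
message = receive(64)                  # the sink sees it; nothing flows back
```

Because the source side has no path by which data can come back, a compromise of the sink cannot reach the source, which is exactly the property a hardware diode gives between physically separate computers.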

------
tjoff
This quote sounds strange to me:

" _As an example, Hypponen said he had recently spoken to a European aircraft
maker that said it cleans the cockpits of its planes every week of malware
designed for Android phones. The malware spread to the planes only because
factory employees were charging their phones with the USB port in the cockpit.

Because the plane runs a different operating system, nothing would befall it.
But it would pass the virus on to other devices that plugged into the
charger._"

What kind of malware would that be? Also, obviously the OS is at risk, since
it must also have been infected to be able to pass the malware along (I'm
guessing this might be the entertainment system or something, and nothing
critical).

~~~
ztjio
You are thinking too conventionally. There are numerous ways to attack USB
devices that don’t involve the OS.
[https://www.bleepingcomputer.com/news/security/heres-a-list-...](https://www.bleepingcomputer.com/news/security/heres-a-list-of-29-different-types-of-usb-attacks/)

~~~
tjoff
I guess that a regular app would not have the necessary low-level access
(which I assume is required) to do anything like that.

So that (another guess) means that the android phone needs to be under
complete control of the malware.

Is that common or are my guesses/assumptions wrong?

------
4ad
I know this is not going to be a popular opinion, but using Windows for
critical systems, even air-gapped and everything, is simply irresponsible.
Windows seems to be too hard to secure, and these problems keep happening
again and again.

It's not that Linux or some RTOS would necessarily be immune to a targeted
attack like Stuxnet, but they would be immune to the random crap malware that
exists in the Windows world.

~~~
tetha
I've been thinking about that on and off for quite some time. These kinds of
embedded systems are actually a really hard problem, because updates range
from hard to impossible. So eventually there will be 10 years of known bugs in
that system.

Digesting a lot of DEF CON / Black Hat talks, it seems that the best way to
minimize the number of bugs is to deploy less code. And I think that's where a
BSD or a Linux has a strong option to be more secure: you could strip down the
kernel and the system as much as possible. Don't deploy any USB-related code,
and potentially remove the entire network stack if you can.
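The stripping idea can be sketched as a kernel build-configuration fragment (a hypothetical selection of options; in a real Linux `.config`, disabled options appear as `# CONFIG_... is not set` rather than `=n`):

```
# Hypothetical config fragment for a minimal embedded Linux build:
# compile whole subsystems out so their bugs are simply never deployed.
CONFIG_USB_SUPPORT=n   # no USB stack at all
CONFIG_NET=n           # no network stack, if the device can live without one
CONFIG_MODULES=n       # no loadable modules: no new kernel code at runtime
```

Code that was never compiled in cannot be exploited, which is the whole appeal for a device that may never see a security update.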

That's the theory, anyway. The IoT shows how a badly managed Linux can be
equally open.

~~~
laythea
My experience of redundant embedded controller systems is that whilst they may
not run Windows™ software, they are usually based on an RTOS such as VxWorks,
QNX, etc.

I don't suppose they are any harder to "hack" than Windows. In fact, it is
possible that they may be easier, depending on the security mentality of the
manufacturer and given the number of eyeballs on Windows™.

I can only imagine this will get worse as these controllers become more and
more sophisticated.

The only way, in my opinion, to safeguard against this is to have a fully
disconnected system with no USB access. However, USB is an easy way in.

~~~
tetha
> I can only imagine this will get worse as these controllers become more and
> more sophisticated.

That is exactly the thing though.

If you could reduce a digital control unit to a small program and run it from
write-once memory on a dead-simple controller, you'd avoid Spectre and
Meltdown, you'd avoid the vulnerability of the program being overwritten
post-deployment, and you'd simplify ensuring the correctness of the control
program. Verification might actually be possible there.
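A hint at why a small, fixed control program becomes tractable to verify: with a narrow enough state space you can simply check every input. This sketch assumes a hypothetical 8-bit temperature sensor and a bang-bang heater controller with hysteresis:

```python
def control(temp: int, heater_on: bool) -> bool:
    """Turn the heater on below 40, off above 60, hold state in between."""
    if temp < 40:
        return True
    if temp > 60:
        return False
    return heater_on

# Exhaustive check over the entire 8-bit input space -- a brute-force
# stand-in for formal verification, feasible only because the program
# is this small and this fixed.
for t in range(256):
    for prev in (False, True):
        out = control(t, prev)
        assert not (t > 60 and out)       # never heating when too hot
        assert not (t < 40 and not out)   # never off when too cold
```

Real verification of real controllers uses model checking or proof tools rather than enumeration, but the underlying advantage is the same: a tiny, non-updatable program has a state space you can actually reason about.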

~~~
laythea
Hi, yes, but they won't do that. Normally there is no write-once memory. Most
oil rig control systems, for example, are not hammered down to that level.
Heck, most aviation flight control systems are not like that. In fact, early
in my career I was responsible for writing a Windows C# communications driver
for an entire oil rig. It's still working :)

------
GuB-42
It doesn't seem to affect the nuclear operation part in any way.

Nuclear power plants also have Windows PCs running MS Office. There are people
there who plan budgets, write reports, send e-mails, etc. The boring stuff
found in every company.

Nuclear plant operation, at least the critical part, is still surprisingly
low-tech. There are panels full of gauges, lights, switches, and levers. And
there are people with paper sheets following procedures like "if pressure X
exceeds Y, push button Z". I don't expect a virus to have much influence on
that part. It is bad for the economic operation, but not really a safety
issue.

I don't know enough to say whether a well-targeted, Stuxnet-style attack could
be a safety issue. I guess not, but it could probably cause a shutdown.

------
Theodores
In the UK they did not update the control rooms of the nuclear plants, so,
last time I checked (which was about a decade ago), they still looked like
1960s-vintage hardware. So I was sceptical about how a computer virus could
possibly affect things.

Looking for a picture of a 'modern' control room from a British nuclear plant
(with not so much as a Windows XP machine in sight) I came across this story
of how a decommissioned nuclear plant is now a school:

[https://www.businesswest.co.uk/blog/site-berkeley-nuclear-po...](https://www.businesswest.co.uk/blog/site-berkeley-nuclear-power-station-begins-new-chapter)

Girls and boys can go there to learn about STEM things, including our friend
the atom. Clearly that talk about how toxic nuclear power plants were going to
be for future generations was idle speculation and fear-mongering.

~~~
tonyedgecombe
_Clearly that talk about how toxic nuclear power plants were going to be for
future generations was idle speculation and fear-mongering._

We are spending a lot of money decommissioning legacy nuclear power stations.

------
5DFractalTetris
People are quick to forget that the human mind is a firewall in computing
environments. A car that can be remotely turned off by satellite signal is not
a car with which I would want to share the road. Process automation will
probably create the Runaway Train effect in due time.

~~~
winrid
A lot of modern cars with OnStar have this.

------
djsumdog
> infections of critical infrastructure were surprisingly common, but that
> they were generally not dangerous unless the plant had been targeted
> specifically.

Stuxnet

------
ccnafr
It's an article from 2016. What's the point of sharing this?

------
tosh
2016

------
doener
This article is from 2016.

