
On Safety Critical Software
https://alexgaynor.net/2019/nov/07/on-safety-critical-software/
======
roymurdock
Bait and switch. The author starts with one sentence on safety-critical systems,
then transitions entirely to discussing the challenges of insecure,
non-safety-critical software such as mobile phone OSs, email services, and web
browsers used by important people (e.g. politicians and policy makers). Yes,
safety and security are intertwined; no, they're not the same, and it just
confuses the matter to talk about functional safety-criticality in the context
of secure communications. People who need secure communication channels often do
their research or have teams to recommend or build systems for them. I don't
think it's necessary for Google Chrome to tell the general public what tradeoffs
it is making between usability and security (nice to have somewhere in the
documentation? sure. but not necessary).

~~~
tptacek
Your "bait and switch" is the author's whole point: that we intuitively
understand the safety criticality of embedded infrastructure components, and
have expectations about how they're designed, implemented, and maintained, but
don't recognize when those expectations are implicitly imputed to commodity
software that happens to serve those roles for some users.

If every once in a while a nuclear power plant wound up using an Internet-
connected Mac Mini for its control systems, we'd immediately see the problem.
But we don't see that problem when browsers and email systems get embedded
deep into our political system, thus becoming (perhaps unintentionally)
critical infrastructure.

------
CivBase
The article mostly focuses on software security. "Safety critical" software
standards generally focus more on _stability_ than security. Security is part
of it. However, "safety critical" standards are overkill if you just want to
improve security.

Regardless, I doubt the most popular consumer-focused software will ever meet
the same "safety critical" standards as medical or avionics software. I work
on safety critical software and the bureaucracy that comes with it drives
innovation to a snail's pace. As long as the average user is willing to give up
a little stability for cutting-edge features, developers who prioritize
"safety critical" standards simply won't be able to compete.

------
WalterBright
The article is about security, not safety-critical software. A couple of
articles I wrote on safety-critical software:

Safe Systems from Unreliable Parts
[https://www.digitalmars.com/articles/b39.html](https://www.digitalmars.com/articles/b39.html)

Designing Safe Software Systems Part 2
[https://www.digitalmars.com/articles/b40.html](https://www.digitalmars.com/articles/b40.html)

------
glitchc
This is an article written by someone who’s never actually worked on a safety
critical system. Such systems have an exhaustive threat model with likelihood
estimates developed and verified well before a single line of code is written.

~~~
tptacek
As someone who has (perhaps like the author) done security assessment work for
safety critical systems, I sure would like to hear more about these exhaustive
threat models with likelihood estimates, because they sure didn't seem to have
much to do with preventing me from popping a shell.

~~~
tedunangst
You popped a shell, therefore it wasn't a true safety critical system.

~~~
NovemberWhiskey
Although we cannot preclude the GP's experience, as a former developer and
verifier of safety-critical systems, my experience (from the 2000s) agrees
with the parent post.

Safety-critical systems (at the time: DO-178B design assurance level A in
airborne systems) were about custom processor boards based on mature
technologies (PowerPC was popular; caches often disabled for maximally
predictable performance), quadruplex redundancy, software with 100% MC/DC test
coverage, verification of traceability between source code and object code,
zero runtimes, no dynamic memory allocation, extensive time-budgeting analysis
for control loops, worst-case stack utilization analyses, hardware watchdogs,
extensive built-in test, and multiple independence objectives (the verification
team had to be independent of the implementation team, etc.), as well as formal
methods like data and information flow analysis.

These systems are virtually nothing like conventional "information technology"
systems.

~~~
tptacek
This is approximately what people used to say about EAL4+ certified products,
until everyone realized you can just look at the list of EAL4+ products and
the CVE database and compare.

Look, I don't doubt that there are avionics systems that are secure by design
(largely because their inputs _and their functionality_ are extremely
constrained). But that's a question-begging definition of "safety critical",
because --- my expertise isn't in avionics but I've done a fair bit of
utilities and medical work, for example --- those "safety critical" systems
tend to be embedded in larger distributed compute systems that are riddled
with vulnerabilities, with an ultimate systemic security result of "some
doofus on the Internet can blow up a transformer".

So if the argument is "you can in fact build safety-critical systems by
massively expanding the cost in money and time to build drastically simplified
and less useful components", sure, you win. But if your argument is "the
safety-critical industries all know how to field software-based infrastructure
that is safe from attack", then, based simply on personal experience, I'm
going to call "shenanigans", because no, they cannot.

~~~
pnako
Safety-critical systems and secure systems are completely orthogonal. Although
the article makes a good point, I don't think it was very sensible to use the
expression "safety-critical software", for that reason.

In a safety-critical _system_ (which might include software), you are trying
to protect yourself against Mother Nature, which is tricky but doable, because
we can predict it. Generally, the security assessment is minimal because the
system is assumed to be "behind the firewall": in a secure cabinet in a plane,
in an access-controlled building or factory, etc.

Of course, all hell breaks loose the day someone decides to connect that
system to the Internet, where thousands of smart humans with bad intentions
try to break it.

I think the author would have helped his point better if he simply talked
about secure software, not safety-critical software.

~~~
killjoywashere
> Safety-critical systems and secure systems are completely orthogonal

This strikes me as the most true statement in this conversation. I feel like I
need this as a bumper sticker. I think a significant issue in this space is
figuring out how to get the security people to put as much skin in the game as
the safety-critical people have. The security people often get to continue
paying their mortgage as long as they say "no". And as a physician leading a
development team, I really struggle with this. If the budgets need to go up to
solve the compliance problems, that's fine, but the compliance people seem to
always have one more tissue-thin layer of requirements to add. It's like
trying to drill your way out of a growing onion.

~~~
tptacek
Compliance people and software security people aren't the same people.
Software security people aren't generally in the business of saying "no"; they
start with the requirements of the system, established elsewhere, and try to
ensure that the implementation of those requirements doesn't cough up
calc.exe.

~~~
irundebian
Saying "no" because of compliance requirements is, imho, pretty similar to
saying "no" because a software bug was revealed. Neither helps much in
building secure systems.

At least with certain compliance requirements you can focus on avoiding
certain bad practices or bug classes, whereas when you find some bugs you
usually can't be sure you have found all the relevant ones. Ideally bug
hunting acts as a validation of the compliance requirements: checking that
they are fulfilling their overall goal, and improving them if not.

------
kahlonel
It’s not “safety critical” unless there’s a chance of someone dying as a
direct result of the system’s failure. Stop trying to mix two totally
different ideas together.

~~~
Tomte
Injury or environmental damage also counts.

Mnemonic: Safety is concerned with whether the system can harm its
environment. Security is concerned with whether the environment can harm the
system.

