
Inevitability of Failure: The Flawed Assumption of Security in Modern Computing [pdf] - singold
https://www.nsa.gov/research/_files/publications/inevitability.pdf
======
nickpsecurity
A classic. Bell's Looking Back Addendum [1] also traces the beginning of the
high assurance security market (their solution) and how DOD/NSA totally killed
it. I specialized in extending such high assurance systems and approaches to
handle modern problems. That market is gone, though, thanks to DOD/NSA policies
of using, and even pushing, low assurance systems. Post-Snowden, it's fair to
wonder whether it was mismanagement or intentional that they marketed low
assurance alternatives (eg DTW, MDDS) to the kinds of OS's and guards they
couldn't beat with 2-5 years of pentesting (esp the Boeing SNS Server).

Yet, I think I can say we all focused too much on the OS and software security
side of things despite what we accomplished. As Brian Snow noted, we're really
just trying to implement forms of isolation on machines designed for pervasive
sharing (read: insecurity). It's why I counterpointed Geer here [2] saying our
software security crisis isn't inevitable: we just need to use hardware and
tools that make secure software easier to write. I gave examples of many
security-improving features that past systems had, including immunity to code
injection via application-level attacks. Fortunately, at least a few groups
(esp DARPA/NSF sponsored) took notice and are working on such architectures
today.

[1] http://lukemuehlhauser.com/wp-content/uploads/Bell-Looking-Back-Addendum.pdf

[2] https://www.schneier.com/blog/archives/2014/04/dan_geer_on_hea.html#c5598568

~~~
Animats
I used to work in that area back in the 1980s, and worked on one of the early
high-security operating systems (KSOS for the PDP-11, written in Modula I. It
was really slow; trying to cram it into 64K of 16-bit words of code just didn't
work.)

NSA has an attack side and a defense side. The defense side builds the US
military's security systems. The attack side has more prestige within NSA. The
defense side does things like evaluate filing cabinet locks.

The NSA's Orange Book criteria for OS evaluation (which were actually Grace
Nibaldi's master's thesis) set out tough requirements. NSA set up an evaluation
operation at its Friendship Annex (named for Friendship Airport, which is now
Baltimore Washington International).

There was heavy industry opposition to NSA's secure OS efforts. The evaluation
procedure was borrowed from the safe and lock evaluation operation - a vendor
could submit a product for evaluation, and if it failed, they'd get a report
on the flaws found. They could then resubmit the product one more time. If it
failed on the second try, the product was rejected, just like they did with
padlocks. Acceptance and rejection were public. So an OS could officially be
stamped insecure.

A few OSs passed those tests. Most of them are forgotten. Wang, the word
processor company, had a secure OS. Multics is at least remembered. There was
a secure version of IBM VM. But most commercial OSs could not pass.

The industry fought those tough standards, resulting in the "Common Criteria"
for security evaluation. The Common Criteria were much weaker. Instead of NSA
doing the evaluations, they were done by private testing labs paid by the OS
vendor. Vendors could try over and over again to pass. Failures were not
publicized. Microsoft was able to meet those new lower hurdles, as were other
vendors.

This was all pre-mass-market Internet (the academic and DoD communities had
Internet connections from about 1983, but the Internet world wasn't that big),
and the security criteria were very single-system oriented. Also, the internal
use of cryptography within computer systems was very rare, and not trusted
within DoD. NSA took the position that crypto could only be done by special-
purpose secured boxes. (For high security, they're right.)

~~~
nickpsecurity
You're one of only two people I've met that worked on one of the exemplar
systems. I appreciate you sharing a few pieces of the puzzle. For instance, I
have KSOS's architecture but didn't know it was written in Modula. I have
papers on most of the others (incl little known ASOS). Also, confirming that
NSA preferred crypto on dedicated boxes (eg LOCK's SIDEARM). Two issues,
though.

One, the NSA rarely builds any secure product: they contract them out instead
while providing security engineering assistance. That includes most Type 1
products, guards, and even most components of EKMS. There are very few things
in the high security market that they built themselves. Worse, many, like the
"High Assurance" Platform, aren't high assurance by a long shot.

This brings me to two. Bell pointed out that what made the high assurance
market successful was a clear set of standards plus an expected return on
investment. Companies lined up to build secure versions of everything. He
claimed that changed when NSA started competing with vendors directly under
MISSI, on top of export issues. An already risky market turned into almost
total risk and no reward. So it imploded, with the entrenched defense
contractors and their few products being all that's left. NSA and DOD policy
are as much to blame as the market, given that they (a) are the only ones with
the power to force the market to build secure stuff and (b) made sure it went
away.

I think the biggest failure, though, was not focusing on hardware. DARPA is
focusing on it this time. I think the best thing we can do is find, like with
CHERI or hardware Control-flow Integrity, a technology that stops most
critical attacks with low overhead and compatibility with legacy software. The
government can give taxpayer grants to hardware companies like Intel/AMD,
legacy software vendors such as Microsoft/Oracle, and mainframe builders (esp
IBM & Unisys) to incorporate the stuff into their hardware and software. If
done right, most of the software just needs a recompile with only the
assembler or lowest level things needing to be changed by hand. This seems to
be the most promising approach given the market will not spontaneously do high
security or give up backward compatibility.

In parallel, we do clean-slate architectures like SAFE that inherently and
efficiently enforce arbitrary security policies at the word level. Like the old
LISP or Burroughs machines, we might target a language like Java or C# to them,
with all software written in the same high-level language and executed
natively. New deployments and security appliances build on these types of
systems. Careful interfaces between the legacy-supporting and clean-slate
systems make up the overall IT infrastructure.

~~~
Animats
KSOS-11 was done at Ford Aerospace in Palo Alto, CA, from 1978-1982. It wasn't
that big a job; it was about 5-8 people. I wrote the file system and all the
drivers. We had a PDP 11/45 for debugging and a PDP 11/70 for development.
Supervision came from DARPA; they sent out an evaluation team every 6 weeks to
review progress. Some NSA people would show up on their team. The project was
unclassified but most of the people on it had SECRET clearances. KSOS was only
used for one special-purpose secure system; the PDP-11 era was over by the
time it was done.

Parts of it were specified in SPECIAL, SRI's language for formal
specification. That was mostly an API spec, written at SRI. SRI had designed
KSOS at an abstract level. We were just implementing it. This didn't work
well.

The most complex thing we were able to do in SPECIAL was to specify what
validity meant for the file system. A "weak invariant" was true at the end of
each write, and a "strong invariant" was true when the file system was
unmounted. File systems were not well understood back then, and UNIX file
systems tended to degenerate into garbage after crashes. We were able to avoid
that.
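
The weak/strong invariant distinction can be sketched in a few lines. This is a
hypothetical toy, not KSOS's actual SPECIAL spec: `ToyFS` and its methods are
invented names, and a real spec would cover far more conditions. The weak
invariant (checked after every write) says no block is both allocated and free;
the strong invariant (checked at unmount) additionally says every block is
accounted for.

```python
# Toy file system illustrating weak vs. strong invariants.
# Hypothetical sketch; real KSOS specs were far more detailed.

NUM_BLOCKS = 16

class ToyFS:
    def __init__(self):
        self.blocks = {}                      # block number -> data
        self.free = set(range(NUM_BLOCKS))    # free list
        self.files = {}                       # name -> list of block numbers

    def weak_invariant(self):
        # Holds after every completed write: no block is both
        # allocated to a file and on the free list.
        used = {b for blks in self.files.values() for b in blks}
        return used.isdisjoint(self.free)

    def strong_invariant(self):
        # Holds at unmount: weak invariant, plus every block is
        # accounted for (no leaked or orphaned blocks).
        used = {b for blks in self.files.values() for b in blks}
        return self.weak_invariant() and used | self.free == set(range(NUM_BLOCKS))

    def write(self, name, data):
        blk = self.free.pop()                 # allocate a free block
        self.blocks[blk] = data
        self.files.setdefault(name, []).append(blk)
        assert self.weak_invariant()          # checked at end of each write

    def unmount(self):
        assert self.strong_invariant()        # checked only at unmount

fs = ToyFS()
fs.write("readme", b"hello")
fs.write("readme", b"world")
fs.unmount()
```

The point of the split is that intermediate states (mid-operation) only need
the cheap weak check, while the expensive global consistency check is deferred
to unmount, which is roughly why the post-crash "garbage" degeneration of
early UNIX file systems could be avoided.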

We didn't have much in the way of machine proofs, although we tried. We used
the Boyer-Moore theorem prover and the Stanford Pascal Verifier, but the tools
were not there yet. I later headed the Pascal-F effort, Ford Motor's in-house
code verifier for engine control programs. That turned out not to be useful
because the Pascal-F compiler team at Ford couldn't get their compiler to
generate usable code. We could write small programs and verify them, but could
only run them in a slow byte code interpreter.

DARPA was trying to get a secure OS. They really wanted a secure microkernel.
They pulled the plug on Berkeley BSD because it was too bloated, and put money
into Mach at CMU instead. It was a surprise to DARPA when BSD didn't die, but
went commercial (with Sun) instead.

~~~
nickpsecurity
Appreciate the details. I knew it was a demo but not that it was done on that
small a budget. The use was a guard for communications or mail, wasn't it?

re abstract vs implementation

More evidence for the claim made by many lessons-learned papers in formal
methods: people doing specs/proofs should work closely with implementers, and
in parallel where possible. That way, issues spotted in one place get to the
other quickly. Better compromises, too. Neat that it helped with the file
system as well.

re verification tools

Yeah, that sounds as painful as I expected. The only one I read about from that
time was Gypsy and the attempts to convert it to Ada. The proofs were done by
hand in most. Every paper had researchers griping about the tools. The main
benefit those tools gave, per users, was making verification easier simply
because the prover's needs (and weaknesses) forced simplification of specs. Did
your projects see at least that benefit?

re DARPA

That's my impression. The consensus at the time was on a security kernel
approach. It's what all the A1 and B3 systems used. They learned a lot from
their endless Mach-related projects but the drawbacks (esp performance) were
too great. Personally, I think it was the hardware context-switch performance
& message passing overhead holding them back. Users wanted power, performance,
and price without much worry for security. Naturally, they built on BSD.

And now one of DARPA's best, CheriBSD, is a port of FreeBSD to the capability-
secure CHERI processor. They finally learned to be practical, eh? ;)

------
bediger4000
The youngest references are from 1998, so I'm guessing this was written by
2000, before the Very Bad Indeed 9/11 incidents. Also, it doesn't mention
terrorism at all.

I suppose this just shows that the NSA, at least in the past, had various
factions inside it, one of which led to SELinux.

~~~
digi_owl
NSA has been divided from the word go.

On the one hand they are to spy on the communications of "enemies"; on the
other they are to defend the nation against such spying.

The problem is that you basically can't do one without compromising the other.

~~~
nickpsecurity
That they have to defend the nation is a common misconception. They were only
legally required to protect the COMSEC of the military. This was later extended
to protect defense contractors as well. They don't have to protect the rest of
us, and they have a conflict of interest against doing it. They'll provide some
support (e.g. SELinux, "secure" configs) but ensure they can bypass it.

------
bnewbold
Just downloaded and opened a .pdf from nsa.gov.

Oops.

~~~
nickpsecurity
Compile your PDF viewer with SoftBound [1], load a stripped Linux LiveCD
containing it on a disposable machine, download the PDF with a sandboxed
browser to a RAMdisk, and load it with that viewer in a sandbox. There's only
so much they can do at that point. Use a non-Intel, non-ARM processor to throw
them off further. Hand-type into a trusted computer any insights you glean from
the PDF.

A lot of work just to safely read a PDF from the enemy, eh?

[1] http://www.cs.rutgers.edu/~santosh.nagarakatte/softbound/

~~~
dfc
Or open pdf on computer at public library and print a hard copy.

~~~
nickpsecurity
The old school methods are often easier. ;)

------
Cieplak
Even if an operating system were provably secure in the software layer, it
might still be vulnerable to hardware backdoors:

http://www.eteknix.com/expert-says-nsa-have-backdoors-built-into-intel-and-amd-processors/

~~~
naveen99
What if you used multiple computers for the hardware and manually controlled
communication between the machines? Basically, use hardware only for pure
functions and store state out of band... Project output to a shared screen.
This is similar to how the manufacturing of parts is split across different
untrusted factories. Do the merging step on a breadboard or a $10 commodity
circuit that you fully control.

~~~
nickpsecurity
Clive Robinson and I came up with the same solution to this problem: running
same software on several different hardware with voting protocol. This is used
in safety-critical market where you need redundancy. I modified the scheme so
that different teams implemented the same spec in a compatible way on hardware
that came from different countries. My model is mutually suspicious countries
that work against each other too much to cooperate on a backdoor. Secrets
might leak to any of them but integrity can at least be protected.
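
The voting step itself can be sketched in a few lines. This is a minimal,
hypothetical illustration (the function name and replica setup are invented;
real safety-critical voters operate on I/O at defined synchronization points,
not on final results): take the outputs of the independently built replicas
and accept a value only if a strict majority of them agree.

```python
from collections import Counter

def majority_vote(outputs):
    """Return the value a strict majority of replicas agree on.

    outputs: results of running the same spec on independently
    sourced hardware (hypothetical setup). Raises if no majority,
    which signals a fault or a possible compromise of one replica.
    """
    value, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority: possible fault or compromise")
    return value

# Three replicas; one (say, a backdoored board) returns a bad result.
# Integrity is preserved as long as a majority is honest.
replica_outputs = ["APPROVE", "APPROVE", "DENY"]
print(majority_vote(replica_outputs))  # prints "APPROVE"
```

With three replicas this tolerates one bad board; note it protects integrity
only, since a single leaky replica still sees the secrets, which is the
trade-off described above.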

Far as plugging leaks, air gap strategies plus high assurance guards built on
cheap boards they probably didn't backdoor. Especially old microcontrollers,
UNIX servers, gaming systems with Linux ports, and so on. Keep using different
stuff with simple, protected interfaces. They can't hit what they can't see.

------
stcredzero
It should be relatively easy to develop an automated tool to reformat LaTeX
papers into a single-column format more suitable for web distribution.
Actually, LaTeX itself should be able to provide all of the infrastructure for
this. Given that, isn't it odd that papers get published to the web as
two-column PDFs? I know how/why it happens, but still.

~~~
jackpirate
This sounds like an impossible task to me. The advantage of LaTeX over
HTML/Markdown is that you get very precise control over the placement of things
like figures. Even something as minor as changing the paper size from US Letter
to A4 completely messes this up and requires lots of manual adjustment. That's
the price of using LaTeX, and in exchange we get documents that look much nicer
than HTML/Markdown.

~~~
vmorgulis
It is possible to achieve that in SVG more precisely (than HTML).

~~~
jackpirate
I don't understand. Does SVG = Scalable Vector Graphics? If so, how is that
related to positioning text on a page?

~~~
vmorgulis
CSS layout properties apply to SVG. It is possible to manage the flow of the
text (http://blog.scottlogic.com/2015/02/02/svg-layout-flexbox.html).

------
dredmorbius
Does anyone have a fix for the missing-ligatures problem this PDF exhibits?
"fl", "fi", and "ff" render blank as viewed under evince and xpdf.

~~~
dredmorbius
Sigh: [http://pastebin.com/88VknLVZ](http://pastebin.com/88VknLVZ)

