
Dramatically Reducing Software Vulnerabilities [pdf] - neiesc
http://nvlpubs.nist.gov/nistpubs/ir/2016/NIST.IR.8151.pdf
======
Animats
Those are the usual answers. But they're too broad.

A good way to look at the problem is that trusted software needs to be far
less vulnerable, and untrusted software needs to be kept in a cage where it
can't make trouble.

On the untrusted side, all games, for example, should be caged or sandboxed.
(Yes, this breaks some intrusive anti-cheat mechanisms. Tough.) Applications
on phone-type platforms should have far fewer privileges. (Yes, that breaks
some ad networks. Tough.)

Until somebody with enough power to make it stick takes a hard-ass position
and sets standards, there's not going to be progress. It would be progress if
AT&T or Comcast or Verizon deployed secure routers, for example.

~~~
chubot
It's astonishing to me that I hit Ctrl-F and searched for "sandbox" and
"software compartmentalization" and there are no hits in that doc.

Chrome-like sandboxing seems like the current state of the art, and it's
complementary to all the techniques mentioned. There will be vulnerabilities,
but making attackers chain vulnerabilities to get into the system will have
the effect of "dramatically reducing them".

I don't see any numbers in that paper either, which seems like a big
oversight.

Chrome has proven to be more difficult than other browsers to exploit, and
this is with hundreds of thousands of dollars on the line. I think the Pwn2Own
escape in 2016 required the attacker to chain 4 exploits. I don't think people
hacking servers need 4 exploits today.

[https://en.wikipedia.org/wiki/Pwn2Own](https://en.wikipedia.org/wiki/Pwn2Own)

The browser operates in a more hostile environment than a server: it contains
tens of millions of lines of C++ code, and it is exposed to attackers (web
sites) by billions of people continuously throughout the day.

This is also the solution that doesn't require rewriting hundreds of millions
of lines of code, which is an economic impossibility in the short term.

There is some current research on how to help programmers split programs into
trusted and untrusted parts, e.g.:

[http://dl.acm.org/citation.cfm?id=2813611](http://dl.acm.org/citation.cfm?id=2813611)

~~~
tptacek
People hacking servers don't need 4 exploits, but hacking servers is still
difficult relative to hacking clientside code, because servers (a) have a more
limited attack surface than clients and (b) are less performance-sensitive and
thus can rely on memory-safe languages.

Docker and seccomp-bpf are techniques for sandboxing server applications, and
they're useful, but they run into the same problem as sandboxing any complex
system: the privilege to do many of the things the application ordinarily does
--- writing a row in a database table, for instance, or creating a new queue
entry --- can unpredictably result in privilege escalation.

The techniques you need to apply to make sure that basic application actions
don't escalate privileges are largely the same as the ones you'd take to
ensure the application doesn't have bugs.

Don't get me wrong: I think more and more applications will take advantage of
serverside sandboxes, to good effect. But I don't think it's going to be a
sea-change for security, and I don't think it's valid to compare the
"4-exploit chain" needed for Chrome to the situation in a complicated web
application.

~~~
tannhaeuser
I don't know that Docker should be considered security sandboxing. From
[https://docs.docker.com/engine/security/security/](https://docs.docker.com/engine/security/security/)
:

    
    
        Running containers (and applications) with Docker
        implies running the Docker daemon. This daemon
        currently requires root privileges, and you should
        therefore be aware of some important details.

~~~
chubot
Docker definitely doesn't follow least privilege. If it were least privilege,
then the Docker daemon WOULDN'T EVEN EXIST.

For example, Chrome's sandboxing tool (minijail I think) and systemd-nspawn
are NOT DAEMONS. They just set the process state and exec().
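
That set-state-then-exec pattern can be sketched in a few lines. This is a
hypothetical illustration, not any tool's actual code: it uses POSIX resource
limits as the "process state", whereas real jails like minijail or
systemd-nspawn also unshare namespaces, install seccomp filters, remap uids,
and so on.

```python
import resource
import subprocess

def run_caged(argv, mem_bytes=512_000_000, cpu_seconds=2):
    """Daemonless sandbox sketch: set process state, then exec().

    The parent forks; the child restricts itself (here with
    address-space and CPU rlimits only, as a stand-in for the
    richer state a real jail sets up) and only then exec()s the
    untrusted program. No long-lived privileged daemon exists.
    """
    def restrict():  # runs in the child between fork() and exec()
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))

    return subprocess.run(argv, preexec_fn=restrict)

# A child that tries to grab ~1 GB fails under the 512 MB address-space cap,
# while a well-behaved child runs normally:
caged = run_caged(["python3", "-c", "x = bytearray(1_000_000_000)"])
print(caged.returncode != 0)
```

Note that once the limits are set and exec() happens, the parent holds no
special role at all, which is exactly the opposite of the Docker-daemon model.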

Docker is sloppy as hell from a security perspective. It is indeed
embarrassing that they mention it in this paper.

Docker has also enshrined the dubious practice of dumping an entire Linux
image into a container, and then installing a bunch of packages from the
network on top of that.

And now you need a "container security service" as mentioned in the article.
How about you just understand what your application depends on, rather than
throwing the whole OS plus an overestimation of dependencies in there?

------
tyingq
Fairly in-depth. I'm surprised, though, at the generally positive tone around
containers/docker. No mention of the current widespread practice of containers
running as root, and nothing about the relative lack of protection against
local kernel exploits escaping the container, etc.

Was expecting something a little more balanced on the topic.

~~~
hueving
Note that it doesn't say that containers are secure. It just implies that they
can be used to support security practices like the principle of least
privilege for processes.

In other words, containers are better for security than running normal
processes. Not better than running a VM.

~~~
tyingq
Agreed. But, seems odd to mention "principle of least privilege" when
unprivileged containers aren't the default :)

(yes, I get those are two different scopes of the word privilege)

------
ctz
I don't really understand why this doesn't cover memory safety.

~~~
duneroadrunner
Yes, after reading a draft[1] of this document, I suggested to them that they
seemed to be insufficiently emphasizing remote execution vulnerabilities (due
to invalid memory access). I also pointed out that they neglected to mention
Rust and the Clang/LLVM sanitizers. (And SaferCPlusPlus[2] too.) They
acknowledged my comments, but it doesn't seem to have had much effect on the
document.

[1]
[https://news.ycombinator.com/item?id=12643463](https://news.ycombinator.com/item?id=12643463)

[2] shameless plug:
[https://github.com/duneroadrunner/SaferCPlusPlus](https://github.com/duneroadrunner/SaferCPlusPlus)

------
PaulHoule
It seems like I am seeing something about SAT solvers almost every day now.

------
gravypod
Some of these are great, some of these are OK, and some of these are horrible
ideas. I wish that instead of "studies" we did RFCs.

~~~
shpat
Out of curiosity, what do you think the horrible ideas are?

~~~
gravypod
There are a few things I noticed and I'll cover a few right now.

Firstly, they are talking about using microservices, which would be OK (I've
used a few microservices for specific applications that actually make sense to
service-ify), but I would by no means consider this a safer way of doing
things. When you're talking about services run by our government, which is
notorious for not getting network security right, I'm very wary of them moving
to a microservice architecture.

MTD (moving target defense) is another thing that sounds concerning. This
seems like the bottom of the barrel of security ideas, and it looks like it
would be far more complicated than the other methods mentioned. If used alone,
it would probably introduce more bugs.

"Education and Training" can basically be summed up as universities being
stuck in the 70s and not teaching CS but teaching the Math that CS needs.

The "Liability" section has me torn.

~~~
Clubber
I think the idea of using microservices is that it lessens the surface area of
what you need to harden. In other words, it's easier to harden a simple
service that does just one or a few things than to harden a complex monolithic
application.

I'd liken it to OOP encapsulation or the idea behind linux executables.

------
godmodus
"A weakness is an undesired characteristic of a system’s requirements, design
or implementation [Black11a]. This definition excludes:

* ...

* insider malfeasance, such as exfiltration by Edward Snowden"

Heh.

~~~
eutectic
Fool me once...

