
Making sure software stays insecure [pdf] - thristian
http://cr.yp.to/talks/2014.07.10/slides-djb-20140710-a4.pdf
======
lstamour
FYI, for Internet-sourced files, both Windows and Mac (though I only have
firsthand knowledge of Mac, via mdfind/spotlight) actually do keep track of the
source of files. Hence the extra message you get: you downloaded this
program from X on Date; are you sure you want to run it?

Of course, like antivirus, this is only as useful as the security-mindedness
of your users. The most secure approaches I'm aware of involve locking down
the system to prevent unauthorized OS modifications (e.g. Chromebook, iOS,
etc.), combined with whitelisting all applications or activities allowed on
the system, and shipping extensive Windows audit logs to a remote server so
that abnormal behavior can be detected quickly. Automatic updates are a must,
though I'd rather certificate pinning were used, given government-level
exploitation of Microsoft's certificates (since revoked) to install spyware
via Windows Update. Ultimately... how can you be so sure that 2014.07 is
still insecure, yet not acknowledge that security is system-specific? My
definition of insecure activity may differ on a case-by-case basis, because
security, ultimately, is made up of tradeoffs, restrictions and oversight.
And it's not just software; humans can indeed be the weak link, but ideally
the most secure systems create environments that prevent or monitor this.

------
im3w1l
His ideas about subverting the security community are interesting.

But I think his emphasis on knowing the source of everything is weird. In my
opinion the main security responsibility of the kernel is to not let users
escalate privileges: preventing users from writing and modifying where they
shouldn't, and from reading what they shouldn't. That's followed by preventing
denial of service by enforcing quotas.

His idea of tagging processes with sources based on the files they read, and
then tagging written files with those sources, sounds like a bad one. Many
processes would read files from a lot of unrelated users, which would lead to
noise. Tags pile up, since there is no principle for how to remove them. And
tags are contagious: when a commonly opened file (say a config file) is
written by an editor, it acquires the tags of all files opened by that editor,
and when the editor then creates an output file, all the config-file tags are
transferred over.

When everything has source *, people would just stop caring.
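The pile-up is easy to demonstrate with a toy model (nothing here is from the
slides; files and processes are just dicts of tag sets, and `run_process` is a
made-up helper):

```python
# Toy model of source-tag propagation: a process accumulates the tags
# of every file it reads, and every file it writes inherits all of the
# process's accumulated tags.

def run_process(files, reads, writes):
    """Simulate one process: read some files, then write others."""
    acquired = set()
    for name in reads:
        acquired |= files.get(name, set())
    for name in writes:
        files[name] = files.get(name, set()) | acquired
    return files

files = {
    ".editorconfig": {"alice"},   # shared config file touched by everyone
    "report.txt":    {"bob"},
}

# Bob's editor session: it reads the shared config plus his document,
# then saves the document and a backup copy.
run_process(files, reads=[".editorconfig", "report.txt"],
            writes=["report.txt", "report.txt~"])

print(files["report.txt"])    # now tagged with alice as well as bob
print(files["report.txt~"])   # backup inherits the same tag set
```

After a single editing session the document carries Alice's tag just because
the editor read a shared config file; iterate this across a few users and
every file converges toward source *.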

~~~
pflanze
He probably assumes that processes are usually short-lived or limited in their
scope (those are the design principles he used in his own software, anyway).
That way "source creep" doesn't happen so quickly.

It reminds me of the taint principle (-T) provided by Perl 5. The differences
are that there the set of sources consists of just "secure" and "insecure",
and processes themselves are not tainted. In a world of C programs, tainting
processes themselves (as they come into contact with a potential "contagion")
may actually make sense.
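The contrast can be sketched in Python (a rough analogue of Perl's -T
behavior; the `Tainted` wrapper is invented for illustration, since Python has
no built-in taint tracking):

```python
import re

class Tainted:
    """Rough Python analogue of a Perl -T tainted scalar: a single
    binary 'came from outside' mark instead of a set of named sources."""
    def __init__(self, value):
        self.value = value

    def untaint(self, pattern):
        """Launder the value the way Perl does: only data explicitly
        extracted via a regex capture group becomes trusted again."""
        m = re.fullmatch(pattern, self.value)
        if not m:
            raise ValueError("refusing to untaint: no match")
        return m.group(1)

user_input = Tainted("report.txt; rm -rf /")   # anything external is tainted
try:
    user_input.untaint(r"([\w.]+)")            # fails: shell metacharacters
except ValueError as e:
    print(e)

safe = Tainted("report.txt").untaint(r"([\w.]+)")
print(safe)   # a plain str again, i.e. untainted
```

In Perl the flag propagates automatically through string operations; the
sketch only shows the key design point: one binary "insecure" mark, cleared
exclusively by an explicit regex extraction, versus djb's named, ever-growing
source sets.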

------
dobbsbob
Reminds me of a similar talk Poul-Henning Kamp gave, in which he presents a
thought experiment on how it would be possible to subvert open source projects
to ensure that the priority is insecurity:
[http://youtu.be/fwcl17Q0bpk](http://youtu.be/fwcl17Q0bpk)

------
4ad
The ideas presented here are somewhat similar to IX[1], a security-oriented
operating system implemented by Doug McIlroy (inventor of Unix pipes) starting
from Tenth Edition Research Unix.

[1]
[http://www.cs.dartmouth.edu/~doug/IX/](http://www.cs.dartmouth.edu/~doug/IX/)

------
im3w1l
The slides repeat themselves and have a line length of ~5 words, so I made a
reformatted version:

[http://pastebin.com/UW2hc2vZ](http://pastebin.com/UW2hc2vZ)

~~~
pflanze
I didn't find this very readable, so I've added formatting.

[https://github.com/pflanze/slides-djb-20140710/blob/master/slides.md](https://github.com/pflanze/slides-djb-20140710/blob/master/slides.md)

------
Zigurd
I like how this thought experiment leads to questioning the current security
orthodoxy. The "security community" is made to serve at least two masters: If
security works, forensics can't work. If forensics works well enough to have
standard tools, that implies longstanding unresolved security problems. Yet
many security consultants work both sides of the fence.

------
bluetech
I don't really understand the point of tracking sources when copying files. A
security policy that restricts my actions on an object I have copied (with
permission to do so) seems a bit strange to me, and the slides don't really
explain why this is wrong.

~~~
dredmorbius
I'm frankly puzzled by this as well.

1. Why is it a mandatory requirement?

2. How can it possibly be uniformly implemented?

E.g.: curl <some source> | perl -e <some filter> > file

How the hell is _any_ OS going to track origin of 'file'?

What _is_ origin of 'file'?

Add an arbitrary number of netcat-over-UDP transfers between start and end of
process.

Ownership/origin metadata is impossible to track unless it's overlaid on
every single bit of information transiting every process and every network
node.

djb simply does a massive hand-wave over this, as far as I can tell. I respect
the man a lot, but this totally loses me.

~~~
sitkack
How is this different from the concept of 'taint' in Perl? Or from setting
metadata on a file object?

    
    
        mdls -name kMDItemWhereFroms slides-djb-20140710-a4.pdf 
        kMDItemWhereFroms = (
            "http://cr.yp.to/talks/2014.07.10/slides-djb-20140710-a4.pdf",
            ""
        )

Set some metadata on process streams. One could write a kernel module to track
this flow.

I'd love to set filters on file actions like: "after an image/PDF/etc. has
been downloaded, prevent any program from loading it until it has been
scrubbed through a whitelisting format normalizer running in a VM; if problems
are found, alert and blacklist the source."

Free technical PDFs make a great vehicle into a research org.
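As a sketch of such a policy (everything here is hypothetical: a real
implementation would live in a kernel module or a MAC hook, not a Python
dict), the rule reduces to a few checks over imagined file metadata:

```python
# Hypothetical quarantine policy for downloaded files: nothing may be
# opened until a format normalizer (imagined here as a 'scrubbed' flag
# set by an external step) has rewritten it; failures blacklist the
# source.

blacklist = set()

def may_open(meta):
    """Decide whether a program may load a downloaded file; `meta` is
    imagined metadata: where the file came from, and whether the
    normalizer has already rewritten it."""
    if meta["source"] in blacklist:
        return False
    if not meta.get("scrubbed"):
        return False            # still quarantined
    return True

def report_malformed(meta):
    """Called if the normalizer finds problems: alert and blacklist."""
    blacklist.add(meta["source"])

pdf = {"source": "http://example.com/free-paper.pdf", "scrubbed": False}
print(may_open(pdf))            # False: not scrubbed yet

pdf["scrubbed"] = True
print(may_open(pdf))            # True: normalizer has rewritten it

report_malformed(pdf)
print(may_open(pdf))            # False: source is now blacklisted
```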

~~~
dredmorbius
Taint isn't telling you _who_ data came from. Only that it was _external_ to
the program. That's a much simpler challenge, particularly as it takes place
within a single process's context.

------
zobzu
It stops at sharing. What he describes works OK at the OS level; in fact,
mobile OSes are starting to enforce similar things.

RSBAC, SELinux and whatnot have done this for years. NSA, you say? The SELinux
codebase is ridiculously small, and it's the concepts that matter. But you
know what? These tools are just patchwork for the current OSes, knowingly so.
They're far from a silver bullet.

When you add the web, it's harder. What if Frank's computer is offline? What
if it's slow as hell?

=> You get these files from Google, an abstract entity which you do not know,
yet trust.

What if you want to modify the file? => Now you own it, including the
potentially bad parts.

Now, there are quite a few attempts at writing secure software with a much
more complete concept, much closer to the silver bullet.

There is NO financial interest in it, and no financial interest means we'll
get there very slowly. Plan 9 died. Singularity died.

What's left? Living with a larger risk is what's left. That's why risk
analysis is used for security (just as it's used elsewhere). NIST's framework
isn't actually dumb. Sure, dollars should probably go into secure software,
but that doesn't mean sysadmins today will make things more secure by "not
watching logs or not updating to 2014.07 when 2014.06 has a remote exploit".

All that to say: the PDF is interesting because of who wrote it, but it's
confused, and he hasn't found the direction he's looking for yet.

------
fit2rule
For as long as we keep re-inventing the wheel, there will be holes in the
spokes. For every company that reinvents a daemon process, there's another set
of security holes to be exploited. It's human nature to continually alter
working/running systems, throw away code, etc. These are cultural/ethical
issues which the security spooks know they can continually exploit, and have
done so for centuries.

The problem with security is entirely an ethical one, and not at all a
technological one. No amount of technology can make up for an ethical dilemma.
What we have with the NSA/GCHQ/five-eyes is entirely a cultural artifact
perpetuated upon us all by those who stand to profit - greatly - from hatred,
distrust, enmity among people. The fact is, as long as we continue to support
the machinations of the military-industrial complex, and its endless streams
of justifications for why it is we need to 'hate those people', we will be
subject to their rule.

Shut down the warrior class, and we have a better chance at a secure future.

~~~
ejr
There are times when the wheel at hand is an ill fit, or perhaps there
genuinely is a better way to do things (e.g. LibreSSL). And so the wheel
needs reinvention.

Reinvention of the wheel, reimplementation of a paradigm or some other
repetition in a slightly different or totally different way isn't necessarily
a bad thing as long as the underlying concepts are well understood and it is
executed with competence. I still believe malicious tampering is unnecessary
when carelessness - or tiredness - on the part of the developer will do just
as well.

~~~
x1798DE
I'm not sure that you can use LibreSSL as an example of something that is a
"genuinely better way to do things", given that it's an essentially brand new
project. There are reasons to believe it's a better way to do things, but we
won't know how well it performs until there's time for people to start trying
to exploit it.

------
higherpurpose
Is there a video of this talk? It's kind of annoying to read the same thing
over and over again because the slides repeat themselves.

~~~
nhaehnle
Use the presentation mode of a proper PDF reader to go through the slides.

------
notacoward
So an OS kernel constitutes a reasonably sized TCB, but a browser is "massive"
and "very expensive" to check? Should I look for that on #StuffHNSays, or is
there one just for DJB?

~~~
cbsmith
Comparatively speaking, the functions of an OS kernel are fairly small and
contained. Sure, you can find bloated kernels out there and small secure
browsers, but I can read the kernel portion of the POSIX specs without crying.
I can't say the same for all the browser specifications out there.

~~~
notacoward
> I can read the kernel portion of the POSIX specs without crying.

POSIX is an incredibly poor guide to what a modern OS does. Even in the parts
related to storage, they're hopelessly outdated - either forcing onerous
compatibility with systems that haven't been sold in twenty years, or
completely failing to address issues that have gained in importance over that
same time. The network parts are even worse. There isn't even a POSIX spec for
virtualization, and internal issues such as NUMA or PCI/USB enumeration were
never under their purview to begin with.

If the requirements for a modern kernel were specified to the same degree as
those for a modern browser, they'd be much _longer_. Note that external
protocols and formats are not part of the browser's own specification unless
you apply the same rule for external protocols and formats used by the kernel
- pulling in a ton of stuff from IETF, PCI SIG, IBTA, SATA-IO, ANSI T10, etc.
It's only fair, but again you'd end up with something far longer than the
browser equivalent.

The responsibilities of an OS kernel are far deeper and broader than almost
anyone out in browser-land thinks. I'm quite sure that works the other way
too. Just because it's harder to see detail from further away doesn't mean
it's not there.

