
“Security problems are primarily just bugs” - peterhunt
http://blog.erratasec.com/2017/11/why-linus-is-right-as-usual.html#.WhKEqnXytCU
======
pjmlp
> Also, since most security people aren't developers, they are also a bit
> clueless how things actually work. Bounds-checking, which they define as
> purely a security feature to stop buffer-overflows is actually
> overwhelmingly a debugging feature. Developers know this, security "experts"
> tend not to. These kernel changes were made by security people who failed to
> understand this, who failed to realize that their changes would uncover lots
> of bugs in existing code, and that killing buggy code was hugely
> inappropriate.

Sorry, but I completely disagree here. Many of us who care about security are
developers who have to clean up the mess left by such ideas.

And to quote once more from Hoare's Turing Award speech:

"Many years later we asked our customers whether they wished us to provide an
option to switch off these checks in the interests of efficiency on production
runs. Unanimously, they urged us not to--they already knew how frequently
subscript errors occur on production runs where failure to detect them could
be disastrous. I note with fear and horror that even in 1980, language
designers and users have not learned this lesson. In any respectable branch of
engineering, failure to observe such elementary precautions would have long
been against the law."

~~~
geofft
Yeah, I am extremely confused by this, too. I am a developer by profession who
happens to care about security because I care about delivering a good product.
I'm not a "security person." Bounds-checking is a way for me to deliver a good
product, and killing buggy code is sort of my _job_. Bounds-checking should be
enabled in production.

~~~
ryandrake
One size does not fit all. Any debugging safety net will have a cost in terms
of performance, and you usually have to pay that cost regardless of whether
your code has bugs. It's why we don't release debug builds to customers or use
them in production.

If it is vital to your application to detect a certain kind of bug in
production, then turn the checking on in production and pay the cost. Maybe it
is not vital to my application, or my application does not have those bugs; in
that case paying the cost would be needless.
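
Concretely, the trade-off looks something like this (a minimal C sketch; the
function names are illustrative, not from any real codebase):

    #include <assert.h>
    #include <stdio.h>

    /* Debug-only check: assert() is compiled out when NDEBUG is
       defined, so a release build pays nothing -- and catches
       nothing. */
    int get_debug(const int *a, size_t n, size_t i) {
        assert(i < n);
        return a[i];
    }

    /* Always-on check: every caller pays for the branch, even in
       production, but an out-of-range access is always caught. */
    int get_checked(const int *a, size_t n, size_t i) {
        if (i >= n) {
            fprintf(stderr, "index %zu out of range\n", i);
            return 0;  /* or abort(), depending on policy */
        }
        return a[i];
    }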

To me, what is essential to delivering a good product is detecting and killing
defective code before delivery. I recognize, however, that this attitude is
becoming more and more old-fashioned.

~~~
Aaargh20318
> To me, essential to delivering a good product is detecting and killing
> defective code before delivery.

The problem is that you cannot prove the non-existence of bugs. Even if you
formally verify all your code, you've only proven that it conforms to the
spec, not that the spec itself is flawless.

This is why you should always code defensively. Assume all code is buggy and
take steps to deal with that. Code is written by humans, and humans make
mistakes.

That doesn't mean that you shouldn't try to deliver a flawless product, only
that you should acknowledge the fact that you're going to fail.

~~~
eru
Your argument would be in favour of a defensive spec, not in favour of a
defensive implementation.

To explain: your spec will have a few hygiene clauses like 'no stack overflow'
or 'no crashes', plus some clauses about what useful things the program should
actually do.

The latter are hard to formalise and deserve defense in depth. Lots of the
former are easy to formalise, and you can rely on the compiler to get them
right.

And that's what we are doing already anyway: even in C we just trust that a
loop will compile to the appropriate conditional jumps and don't add defensive
measures to check that.

~~~
Aaargh20318
The point is that you're not going to do formal verification of your software.
There is actually very little formally verified software out there, and the
verified code that does exist is relatively small.

And no, you can't rely on the compiler to get it right. Maybe if you only
write Haskell you can get 99% of the way there, but there will always be human
errors in code that the compiler can't find.

~~~
eru
We rely on computers to get parts of our spec right all the time. Yes, a
higher-level language like Haskell lets you offload more to the computer, but
even in C we are doing some of that.

Yes, in practice you seldom have much of a formal spec, agreed.

------
jph
Linus advocates: 1) first, do no harm to the kernel; 2) better to phase in
warnings and fixes rather than enforce a panic; 3) there's a sufficient track
record for this approach to security in practice.

This sounds fully reasonable to me. If you're an OS builder and prefer an
immediate-panic hardened system, then catch the warnings.

Linux kernel post:
[http://lkml.iu.edu/hypermail/linux/kernel/1711.2/01701.html](http://lkml.iu.edu/hypermail/linux/kernel/1711.2/01701.html)
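
The difference in policy is easy to see in miniature (a userspace sketch;
WARN_ON and BUG_ON are the real kernel macro names, but these definitions are
simplified stand-ins, not the kernel's):

    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified stand-ins for the kernel's WARN_ON/BUG_ON. */
    #define WARN_ON(cond) \
        ((cond) ? (fprintf(stderr, "warning: %s\n", #cond), 1) : 0)
    #define BUG_ON(cond) \
        do { if (cond) { fprintf(stderr, "bug: %s\n", #cond); abort(); } } while (0)

    int main(void) {
        int n = -1;

        /* The phase-in policy Linus prefers: log the violation,
           repair, and keep running. */
        if (WARN_ON(n < 0))
            n = 0;

        /* The hardened alternative: treat the violation as fatal.
           Here n has already been repaired, so this passes. */
        BUG_ON(n < 0);

        printf("n = %d\n", n);
        return 0;
    }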

------
vog
This strongly reminds me of the famous paper by Daniel J. Bernstein:

"Some thoughts on security after ten years of qmail 1.0"

[https://cr.yp.to/qmail/qmailsec-20071101.pdf](https://cr.yp.to/qmail/qmailsec-20071101.pdf)

~~~
oftenwrong
I found a txt version of this:
[http://ondoc.logand.com/d/721/txt](http://ondoc.logand.com/d/721/txt)

It's a bit easier for me to read without a fixed page layout.

------
ryangittins
I am also in whole-hearted agreement with Linus here. In fact, I ran into
this just recently. I discovered one of our systems at work actually stored
_encrypted_ passwords rather than just hashed ones, and _decrypted_ them for
validation. Yuck!

Of course, I put a fix and a database migration in place as soon as I could,
and all is well now, but this worries me. It worries me because it must have
been done out of ignorance (bad) or intentionally (worse) and billed as a
feature or something. Neither of these things are mere bugs. This gaping flaw
wasn't introduced by accident.
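
For reference, the hash-only pattern is tiny (a sketch using POSIX crypt();
real code should use a modern KDF such as bcrypt or argon2 via a maintained
library, and a constant-time comparison):

    #include <crypt.h>    /* glibc: link with -lcrypt */
    #include <string.h>

    /* Store only the hash; verify by re-hashing the candidate.
       crypt() reads the salt and parameters back out of the stored
       hash, so nothing ever needs to be decrypted. (Illustrative
       only: production code wants a modern KDF and a constant-time
       comparison instead of strcmp.) */
    int password_matches(const char *candidate, const char *stored_hash) {
        char *h = crypt(candidate, stored_hash);
        return h != NULL && strcmp(h, stored_hash) == 0;
    }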

~~~
thisisit
Or maybe an incomplete understanding of the system. I had to lobby for a year
just to get passwords encrypted at all. "Why do we need encrypted passwords
behind a corporate firewall?" they asked. That sucked all the energy out of me.

So when they wrote a script to run a db extract, pull all company salary data
into a csv, and share it with literally 50+ people, I gave up. Getting them to
even encrypt data seemed impossible.

~~~
ryangittins
Oh geez, that's nuts. That's like asking, "Why do I need to put my valuables
in a safe if I have a lock on my front door?"

~~~
scruple
> That's like asking, "Why do I need to put my valuables in a safe if I have a
> lock on my front door?"

Not the GP, but I've experienced similar things during my career, and these
sorts of analogies tend to work really well when explaining technical points
to non-technical (and, sadly, even "technical") people.

------
partycoder
Security problems are bugs, sure. But a different kind: security is a
non-functional requirement.

In an analogy with construction, each room is a functional requirement, while
the non-functional requirements are the materials you build with, a sort of
quality attribute.

And just like in construction, if after building a house you are handed
stricter seismic requirements, fire-prevention codes, etc., you might need to
rebuild the entire house.

~~~
pessimizer
Is a room any more a functional requirement than security? A roof keeps the
rain off, walls keep the wind out and keep things private, but rooms are more
containers than requirements. And if closing a door at the right angle causes
the roof to fall in, that's a failure of function.

~~~
qznc
My understanding of a _functional_ requirement is that it is binary: the
function is either there or not. A non-functional requirement can sometimes be
measured, but not in a yes-or-no fashion. For example, frames per second is
non-functional. Things are not secure or insecure in general, thus security is
non-functional.

You can turn a non-functional requirement into a functional one if you specify
a boundary. For example, a 60-frames-per-second requirement is functional. For
security, you can require "secure under certain attacker models" to make it
functional.

Example: Is Signal communication secure if you assume the attacker can only
read data on the server? There is an answer, and it is probably "yes". That
does not mean Signal is secure in general: an attacker who can access your
phone breaks the security.

(It is also not really binary, because someone might find a hole somewhere in
the future, but for practical purposes we can assume the crypto holds.)

~~~
provost
> Things are not secure or insecure in general, thus non-functional.

I would disagree with this assertion. A really common phrase in the infosec
community is that "security is not binary".

~~~
fusiongyro
Are you literate? Go back and reread the first sentence of the comment you are
replying to.

------
alanfranzoni
Another, similar take, from ALE2014:

[https://docs.google.com/presentation/d/1rXyl_YF-0lg3W8yY9mSo...](https://docs.google.com/presentation/d/1rXyl_YF-0lg3W8yY9mSo4UaoUrEWo8hKAng-GsF9XeI/pub?start=false&loop=false&delayms=3000#slide=id.g376b8bd03_011)

Security problems are, first and foremost, about CORRECTNESS. If you find an
issue, most probably something was done incorrectly by somebody who didn't
really understand it. It is much rarer to find a security bug that is not also
a functional bug (for some part of the input domain, not just the happy path).

Of course, that doesn't really apply to C or C++, where it's easy to make dumb
mistakes by accident.

------
qaq
Considering that a large number of breaches happen through social engineering,
that statement is largely off.

------
jstewartmobile
DJB's approach to the problem:

" _Nowadays I am much more insistent on programming language support for
smaller-scale partitioning, sane bounds checking, automatic updates of
“summary” variables (e.g., “the number of nonzero elements of this array”),
etc. By “sane bounds checking” I don’t mean what people normally mean by
“bounds checking,” namely raising an exception if an index is out of range;
what I mean is automatic array extension on writes, and automatic zero-fill on
reads. (Out of memory? See Section 4.2.) Doing the same work by hand is silly_
"[0]

[0]
[https://cr.yp.to/qmail/qmailsec-20071101.pdf](https://cr.yp.to/qmail/qmailsec-20071101.pdf)
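
In code, that idea comes out roughly like this (a sketch; the type and
function names are mine, not DJB's):

    #include <stdlib.h>
    #include <string.h>

    /* Sketch of "sane bounds checking": writes extend the array,
       out-of-range reads zero-fill. Initialise with {0}. */
    typedef struct {
        int *data;
        size_t len;          /* elements currently allocated */
    } autoarray;

    void aa_write(autoarray *a, size_t i, int v) {
        if (i >= a->len) {   /* automatic extension on write */
            size_t newlen = i + 1;
            int *p = realloc(a->data, newlen * sizeof *p);
            if (!p) abort(); /* out of memory: see DJB's Section 4.2 */
            memset(p + a->len, 0, (newlen - a->len) * sizeof *p);
            a->data = p;
            a->len = newlen;
        }
        a->data[i] = v;
    }

    int aa_read(const autoarray *a, size_t i) {
        return i < a->len ? a->data[i] : 0;  /* zero-fill on read */
    }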

------
mtgx
I guess this is why Google is doing the right thing by working on its own
microkernel. Similarly for the Grsecurity folks, who forked Linux.

Fundamental differences of opinion like these are why forks should happen in
the open source world. I would like to see someone do a "LibreSSL" version of
the Linux kernel, by cleaning it up of all the unneeded legacy cruft,
modernizing its architecture, and making it more secure.

I imagine only someone like Google (for servers, Chrome OS, and Android) or
Microsoft (cloud services) could take on such a project, but of course they
could only _lead_ such a project. They would also need an _alliance_ of
partners to support the project.

However, if they are committed to it and serious about making it far better
from a security point of view, I could see many companies jumping ship from
Linux Classic, especially in the automotive and IoT worlds, and also in the
web hosting world, and so on.

Alternatively, perhaps the large companies could start supporting Rust
OS/kernels such as Redox.

------
hguhghuff
Well yes in hindsight of course.

------
scandox
> Despite his unreasonable tone, Linus is a hugely reasonable person.

Is this a legitimate leadership technique? I mean I presume the intention is
to have a kind of megaphone which will get the attention of a widely
dispersed, highly independent group of people. The ultimate cat herding
weapon. On the other hand I find it really unpleasant and feel like there must
be better ways.

~~~
gaius
Linus is extremely lucky to have found his niche, as he is otherwise
unemployable. One of his sweary rants at a cow-orker in any normal company
would see him fired for gross misconduct and probably a restraining order
taken out. Much of the toxic behaviour in the tech industry can be traced back
to Linus "getting away with it".

~~~
user5994461
It's the "designated asshole" archetype, the role can have a purpose in large
companies in some rare circumstances.

~~~
gaius
_the "designated asshole" archetype_

A previous employer had someone in that "role", I don't think he will ever be
able to work at any company that employs a former cow-orker of his. It may be
fun for a short while for a certain kind of person, but it's career suicide.

~~~
user5994461
I don't think it's fun, actually. It's just the job.

There are some roles, like the one Linus has, where a critical aspect of the
work is to say NO to people and reject most of their requests.

~~~
gaius
It's perfectly possible to say "no" without being obnoxious. But some people
revel in it.

