
Why do we still write insecure software? - luu
http://www.jerf.org/iri/post/2942
======
cwyers
> (Sidebar: It is a common misconception in the programming world for some
> reason that the adage "It's a poor craftsman that blames his tools." means
> that someone blaming their tools proves they are a poor workman. The true
> meaning of the phrase is that it is a poor craftsman that has bad tools in
> the first place. A skilled craftsman does not use a dull saw without
> complaint. They get or make a sharp one, without complaint. A craftsman who
> uses poor tools without complaint is even poorer than the one who is at
> least complaining!)

This is fantastic.

------
CM30
No mention of the obvious answer here: coders have jobs, employers and
lives. Hence a lot of the time, code has to be written quickly, and the
company usually doesn't give a toss whether something's been done the 'right'
way... so long as it works.

If you've got a deadline in thirty minutes, you're probably not going to try
to do everything the best possible way. Same deal if you've got family or
friends to get back to, an event to go to, or anything else that requires you
to get things done quickly.

You're rewarded more for getting things done quickly than you are for getting
them done properly or securely. When your client wants their site or program
done as quickly as possible for a low cost, or your startup needs to 'move
fast and break things'... security will usually suffer in the process.

~~~
TheAndruu
To add to this, coders frequently have managers who don't allow time to test
and improve much beyond what works, even when asked.

~~~
collyw
Exactly! It is often the coders who are pushing to do things more correctly,
but management wants the latest feature out.

~~~
TheAndruu
I've been multiple places where requests to fix what was broken in the
production system were shot down repeatedly because there "wasn't money in the
budget to fix that system".

Yet if anyone added up the man-hours it took to fix the same production
issues that repeatedly popped up, it would clearly have been less expensive
to fix the recurring bugs.

------
xyzzy123
The first thing to fix is bad tooling, yes, but it's not the whole picture.

Once all your queries are safe by default, your languages have memory safety,
your templates XSS-encode by default etc etc (this is basically the situation
at most sane workplaces) - there are still security issues.

There are _fewer_ security issues left, but they're gnarlier. What we see now
are "application logic" and business logic issues. Overflow and language-level
sharp edges exist, but these are a small proportion of the mistakes developers
(web application developers in particular) seem to be making.

Situations where the server unintentionally signs user-controlled data, bad
OAuth2 flows, missing or incorrect authorisation checks, accepting data from
the client as "validated" - these are much more common in my experience.

The other interesting thing I've noticed is that when developers who are used
to "safe by default" frameworks have to step outside the framework for any
reason (e.g. front-end is 90% angular, but 10% "bespoke" JS) they will make
mistakes with very high likelihood.

The solution is tooling yes, but also education and process.

~~~
zzzcpan
I think such security issues are similar to the ones the author described.
They require decent language-level abstractions to disappear as well: some
sort of lightweight mechanism to deal with trust boundaries explicitly and
easily.

~~~
Natanael_L
What you'd need is almost a dynamic live proof checker hooked into your IDE,
capable of tracking the effects of code changes / refactoring, linked to test
cases that themselves carry proofs of testing the spec completely.

How else will you be guaranteed to catch Goto Fail and similar?

What's worse is when you hit the issue of "composability" in cryptography -
two servers run different algorithms making different assumptions, and when
they interact the assumptions fail to translate and neither provide the
guarantee they should. Like cross-protocol attacks, such as when one server
becomes a signing oracle for another.

Edit: and beyond that, we have cross-architecture disagreement on the results
of calculations intended to be deterministic. Like how SPARC Bitcoin Qt
binaries previously would have had the Bitcoin reward schedule loop every 255
years, go back to 50 coins per block, and restart the countdown.

------
Animats
We write insecure software because the costs of insecure software do not fall
on the companies selling it.

------
jacquesm
There is actually a very simple answer to this question: we have hundreds of
implementations of roughly the same thing instead of just one implementation
that is made bulletproof. Every new framework, every new HTTP server, every
new language will cause another round of re-implementation, and with it yet
another round of the same old bugs and security problems.

~~~
pjmorris
The problem with a monoculture (e.g. the Irish potato famine [1]) is that the
bug that takes down one machine takes them all down.

[1]
[http://evolution.berkeley.edu/evolibrary/article/agriculture...](http://evolution.berkeley.edu/evolibrary/article/agriculture_02)

~~~
jacquesm
That's definitely a valid objection, and I have absolutely no idea what could
be done about that. But as it is, the bug that takes down one machine today
will take down other machines tomorrow, and yet other ones five years from
now.

So for entirely new bugs your objection stands: they could (and likely would)
be disastrous. But even today a 'zero day' exploit for a major platform can
be dealt with, and I don't see why we would not be able to deal with such
exploits in a scenario where there is only one implementation of something.
It's not as if right now we use the other implementations to keep things
running; it's mostly a matter of the impact arriving all at once rather than
spread out over time and infinitely repeating.

~~~
rev_bird
>Even today a 'zero day' exploit for a major platform can be dealt with though

But they can only be dealt with once they're discovered by people with an
incentive to fix it. The NSA _says_ they weren't using Heartbleed[1], but I
can't think of a single reason they wouldn't lie about it. In any case, that
was a massive security flaw that could have conceivably been exploited for
_more than a decade_. If it affects 30 percent of systems instead of 90
percent, that seems like worthwhile hedging of bets.

[1] [http://www.engadget.com/2014/04/11/nsa-heartbleed/](http://www.engadget.com/2014/04/11/nsa-heartbleed/)

~~~
jacquesm
Afterthought: If we really only had _one_ implementation of something _all_
our energy could go into making that one thing really good, which might even
push the quality beyond the point where zero days would be an issue.

------
legulere
> int8(100) + int8(100) = -56

is just one possible answer. Signed integer overflow depends on signed integer
representation in go (it's undefined for unsigned integers in C):

[https://golang.org/ref/spec#Integer_overflow](https://golang.org/ref/spec#Integer_overflow)

So the SafeAddInt8 in the blogpost may not be correct on architectures that
use really strange representations.

Lots of C developers who wanted to make their code safer ran into a related
trap in C. There, signed overflow is totally undefined, and the compiler may
just throw the overflow check out altogether.

I wonder why Go doesn't just define signed integers to be in 2's complement
representation. Are there any architectures out there that use something else
for integers?

~~~
agwa
> (it's undefined for unsigned integers in C):

No, unsigned integer overflow is defined to wrap around in C. Only signed
integer overflow is undefined.

> I wonder why Go doesn't just define signed integers to be in 2's complement
> representation

It does: "The value of an n-bit integer is n bits wide and represented using
two's complement arithmetic" (from
[https://golang.org/ref/spec#Numeric_types](https://golang.org/ref/spec#Numeric_types))
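
Since the spec guarantees two's-complement wraparound, a post-hoc overflow
check in Go is well-defined behavior (unlike in C). A sketch of what the
post's SafeAddInt8 might look like - this reconstruction is my assumption,
not the author's exact code:

```go
package main

import "fmt"

// SafeAddInt8 returns a+b and reports whether the addition stayed in range.
// Go guarantees two's-complement wraparound on overflow, so checking the
// result after the fact is well-defined (in C, signed overflow is UB and
// the compiler may delete such a check entirely).
func SafeAddInt8(a, b int8) (int8, bool) {
	sum := a + b // wraps around on overflow, per the Go spec
	if (b > 0 && sum < a) || (b < 0 && sum > a) {
		return sum, false // overflowed
	}
	return sum, true
}

func main() {
	var a, b int8 = 100, 100
	fmt.Println(a + b) // -56: 200 wraps past int8's maximum of 127

	sum, ok := SafeAddInt8(100, 100)
	fmt.Println(sum, ok) // -56 false

	sum, ok = SafeAddInt8(100, 27)
	fmt.Println(sum, ok) // 127 true
}
```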

~~~
legulere
You're right, I wanted to write signed integers.

I wonder what they mean by "representation" in the spec then?

------
Natanael_L
Define what's secure first. Any unintended behavior can become a security
flaw, and there will always be unintended behavior in anything complex.

~~~
cwyers
He actually addresses this.

------
amai
Stop complaining and use Ada or
[https://en.wikipedia.org/wiki/SPARK_%28programming_language%...](https://en.wikipedia.org/wiki/SPARK_%28programming_language%29).
There is also a web Server in Ada:
[http://libre.adacore.com/tools/aws/](http://libre.adacore.com/tools/aws/)

------
Mindless2112
> Whether you want an exception or an error or some sort of Either/Option type
> or whatever depends on your language, but something ought to twitch here.

The problem with handling integer overflow is that the mechanisms for it are
clunky. Is there a programming language that handles integer overflow using a
better mechanism than those listed? (I mean without resorting to a BigNum.)

------
philip142au
It's really simple: my boss said I have to have done in 3 weeks what takes 3
months.

------
ratsimihah
Because software is meant to be written by computers, not humans.

------
NickHaflinger
Because the underlying hardware MMU can't prevent one process from accessing
memory that doesn't belong to it.

