
The State of Software Security in 2019 - zdw
https://noncombatant.org/2019/01/06/state-of-security-2019/
======
noir_lord
What a beautifully written article with informative links.

[http://lucacardelli.name/Papers/TypefulProg.pdf](http://lucacardelli.name/Papers/TypefulProg.pdf)
is now next on my list when I finish reading A Philosophy of Software Design
(which is brilliant if you haven't seen it).

~~~
nickpsecurity
Cardelli's Modula-3, a C++ alternative, also illustrates an excellent balance
of simplicity, programming-in-the-large, compile time, and run time. Expanding
on that carefully, e.g. with macros, a borrow checker, and LLVM integration,
would have made for a simpler, safer systems language. Give it a C-like syntax
with C compatibility, too, for adoption.

[https://en.m.wikipedia.org/wiki/Modula-3](https://en.m.wikipedia.org/wiki/Modula-3)

~~~
pjmlp
You mean C# I guess. :)

Actually, with .NET Native, the GC improvements in .NET 4.6
(_TryStartNoGCRegion()_ and other companion methods), and the C# 7.x
improvements taken from Midori, it is quite close.

~~~
nickpsecurity
C# looked a lot more complex than Modula-3 when I last looked at it. The book
was _thick_. They definitely did nice things in C#. I just don't know if it's
fair to equate it with Modula-3's mix of simplicity, get-shit-done
pragmatism, and small runtime.

I'd be up for you elaborating a bit on the second sentence, since it sounds
interesting. Not doing .NET, I don't know what any of those are except the
middle one, which sounds like SYSTEM/UNSAFE sections.

~~~
pjmlp
It is more complex, but Modula-3 isn't that tiny either, around Algol 68/Ada
83 "tiny".

They integrated the improvements from M#/System C# (Midori) into C#.

Namely return ref, ref for local variables, stack allocation for arrays in
safe code, spans (slices) across all memory types, allocation free pipelines.

------
saagarjha
Off topic, but I just love this article: it's 22.8 KB, with a bit of hand-
written JavaScript that the website works well without, and just enough CSS to
make reading pleasant.

------
stcredzero
_it is just barely possible to significantly raise the cost of exploiting
memory corruption vulnerabilities for projects implemented in unsafe
languages._

If there is something positive walled gardens can bring to the table, it would
be more of ecosystems where entire classes of exploits are completely
mitigated. I think it's something that users and devs should be asking for.
(We did it with USB plugs you don't have to flip over.)

------
nickpsecurity
I like this article. I want to correct something on C and C++ side.

He hints that it's barely possible to write correct C. He doesn't mention how
the safety-critical sector, especially where regulated, cranks out reliable C
on a regular basis. They use C subsets with careful coding, reviews, and every
automated tool they can get: static analyzers, automated testing, you name it.
There are also analyzers that, when they work, can prove the absence of entire
classes of errors. RV-Match, the Astrée analyzer (used by Airbus), and Frama-C
are examples. CompCert eliminates compiler errors afterward. Most
security-critical development doesn't use tools like these even when companies
can afford them. It's apathy, not technological capability.

Now, C++ is special, given it was designed to reduce the risks of using C.
Yet almost all the verification tooling targets C and Java. C++ is just so
complicated that it's hard to statically analyze or even build tools for.
That means a C program done like the above might have fewer errors than a C++
program. For that reason, I recommend avoiding C++ in favor of either C or
safe systems languages compiling to C, to take advantage of C's verification
tooling. They're getting hit with these vulnerabilities because they're using
a harder-to-analyze language, possibly with a coding strategy that adds
difficulty. See Cleanroom Software Engineering or Praxis's
Correct-by-Construction for examples of low-defect development strategies.

~~~
blub
I completely agree with your assertion regarding the verification tools for
C. Unfortunately, most commercial projects (i.e. not regulated ones) won't
pick between, e.g., Frama-C and C++; they'll choose between plain old C and
C++, thereby making C a very poor choice.

~~~
snazz
Nitpick: The idea of your comment is sound, but Frama-C is a C program static
analysis framework, not a C subset. You still write regular old C even with
Frama-C.

------
inetknght
I agree with most of this dude's points, except about not starting new
projects with C++.

I think C++ still has good strong merit but it definitely needs some critical
thinking about _where_ it's used. Other languages are likely better candidates
for most things.

~~~
acdha
The author works on Google Chrome so I'm pretty sure they're familiar with the
pros and cons of C++. It's possible that there's a place for using C++ on a
new project in 2019 but that would definitely be a decision to justify with a
comprehensive process for mitigating the risks of not using e.g. Rust.

~~~
blub
Although on HN it looks like there's some kind of competition happening, Rust
is a rounding error compared to languages like Java, C#, or even Swift, which
are actually chosen instead of C++ for many projects.

I'm looking at job ads periodically and Rust is almost entirely absent. I've
talked to someone from the Rust community and they told me that "many"
companies are using Rust, but not advertising it and not hiring for it on
public channels.

Whatever the reason, the Rust skills market seems to be non-existent, which is
a bad position to be in when considering whether to start a project in Rust.
Project = commercial product that makes money, not OSS or start-up dreams.

~~~
tptacek
The subtext of Rust in posts like this is that Rust compiles programs that are
pin-compatible with C/C++ programs. Without rethinking the architecture of a
kernel, for instance, you're meant to be able to write Rust drivers.

I prefer other higher-level languages to Rust; I'd rather just work with
garbage collection than wrangle the Rust borrow-checker. But if you need to
write software that can't abide GC, Rust is basically the only memory-safe
game in town.

~~~
pjmlp
Although I like Rust for those scenarios where GC isn't an option, I bet
Ada/SPARK still have more commercial deployments out there.

It might change given the price of their compilers versus the free rustc;
however, those are industries that also certify compilers, which isn't going
to happen for Rust toolchains for the time being.

Also, Rust still isn't pin-compatible with COM/UWP or the Objective-C
runtime, and I guess we need to wait around a year until mixed-language
debugging is a thing across all major IDEs.

~~~
pcwalton
> Although I like Rust for those scenarios where GC isn't an option, I bet
> Ada/SPARK still have more commercial deployments out there.

I doubt it, if you're counting users. Hundreds of millions of users of Firefox
and Dropbox, just to name the first two that come to mind, use Rust code.

> Also Rust still isn't pin-compatible with COM/UWP or Objective-C runtime

What? Of course it is. We're shipping Rust code that uses COM and Objective-C
in Firefox beta, right now (WebRender).

~~~
pjmlp
Of course, statistics is the art of making the numbers say what we like.

In my case I am counting systems in production.

Using COM is a tiny portion of the COM development experience.

Where is the Rust support for creating COM and UWP components, including
mixed-mode debugging in Windows IDEs, and likewise for Objective-C
integration?

If I missed that, where can I find more about it?

~~~
pcwalton
Again, we ship COM components. You can't load local fonts on all versions of
DirectWrite without that.

Likewise, Rust has had support for creating Objective-C classes with methods
written in Rust for ages. You can't usefully render OpenGL into a window on
macOS without doing that.

~~~
pjmlp
In that case it would be nice to actually answer my question.

> If I missed that, where can I find more about it?

How do COM type libraries get generated in Rust?

------
wahern
The advice to use -Weverything is very bad advice. See, e.g.,
[https://quuxplusone.github.io/blog/2018/12/06/dont-use-
wever...](https://quuxplusone.github.io/blog/2018/12/06/dont-use-weverything/)

Using -Weverything for anything other than a one-off, manual build is a
sure-fire way to end up with one or more of: 1) hacky code that works around
some compiler author's style preferences, 2) _less_ safe code because of the
workaround hacks, or 3) coworkers quickly learning to ignore or disable
diagnostics (e.g. with inline #pragmas).

For both GCC and clang, -Wall comes close to the line of rejecting objectively
correct and good code, but doesn't quite cross the line. -Wextra crosses that
line (expect it to reject some correct and good code) but the pros often
outweigh the cons. -Weverything blows past the line and wraps around the earth
several times.

------
cheschire
I’m happy to see npm in the ugly section. I wish the code security review
pipeline got as much attention as the package deployment one.

~~~
tcd
It's hilarious: after the event-stream fiasco last year, NPM has changed
nothing. NodeJS made a 'package maintenance' repo, but failed to address the
actual root cause of the problem.

We'll see plenty of other NPM disasters this year, and as always, nothing
will change, because security is an afterthought, not a core mission
principle.

If it were, NPM wouldn't exist in its current form. It's confusing; adding
extra security wouldn't even be that difficult or time-consuming. They just
don't.

I wish it were possible to take over a project with individuals who actually
care. The people in charge of NPM are incompetent at running a secure, large
scale distribution system, why they're still allowed to be a part of it is
beyond me...

If security isn't your top goal you don't belong in charge of a very
important, widely used ecosystem which many rely on.

~~~
staticassertion
NPM announced support for 2FA and read-only tokens soon after the incident.

[https://blog.npmjs.org/post/166039777883/protect-your-npm-
ac...](https://blog.npmjs.org/post/166039777883/protect-your-npm-account-with-
two-factor)

Can't say I'm aware of any other package manager even discussing these issues.

~~~
ixrec720
Since you said "even discussing", might as well mention that the Rust
community has discussed adding 2FA support to cargo: [https://internals.rust-
lang.org/t/requiring-2fa-to-publish-t...](https://internals.rust-
lang.org/t/requiring-2fa-to-publish-to-crates-io/7931). But as far as I know,
crates.io/cargo still has no 2FA feature of its own.

On the other hand, if your crates.io account is linked to a GitHub account,
you could have 2FA via GitHub. So I'm not entirely sure whether "cargo
supports 2FA" would count as true or false at the moment.

~~~
staticassertion
Yes, I'm the author of that thread. It went nowhere.

------
redwards510
> Throwing away the idea of using ‘engagement’ as the sole or primary metric.

What are some alternatives that are not a synonym for engagement?

~~~
inetknght
Customer satisfaction, perhaps?

I can be satisfied with products that I don't use at every moment of every
day.

I am usually more satisfied with my life when I am not being gamed to be
addicted to something such as a "free" product.

~~~
diminoten
But what if I want to know why you're not using my product every moment of
every day, and what I'd have to change to do that?

If you're satisfied with my product, but you only buy it once a year, that's
bad for me. If you're not very satisfied with my product, but you buy it 30
times a year, that's better.

I'd rather have 30 sales than 1 sale, even if you badmouth my product and lose
me 15 more sales down the line.

TBH I may not give a crap how "happy" you are if you're still engaging.

Satisfaction is trailing, engagement is leading.

~~~
inetknght
> _But what if I want to know why you're not using my product every moment of
> every day_

Did you try asking me?

> _and what I'd have to change to do that?_

Again, did you try asking me?

> If you're satisfied with my product, but you only buy it once a year, that's
> bad for me. If you're not very satisfied with my product, but you buy it 30
> times a year, that's better.

If I'm satisfied with your product what makes you think I'd be _more_
satisfied with it if I had to buy it multiple times per year?

> _I'd rather have 30 sales than 1 sale, even if you badmouth my product and
> lose me 15 more sales down the line._

You assume every sale is the same price. If you sell your product to 30
people, are those 30 people generating the same amount of revenue as the 15
people could have if they were _more_ satisfied with your product?

> TBH I may not give a crap how "happy" you are if you're still engaging.

That is, in a nutshell, exactly why I'm not satisfied with many online
products and have actively _disengaged_ from them. Your line of thinking leads
down a road where the end user is nothing except a source of income to you.
You've completely forgotten that you're supposed to provide a service _for_
humans. It shows a complete lack of respect of users' intent and instead you
wish to push users to generate more money for you potentially at the cost of
the users themselves.

~~~
diminoten
My line of thinking is what makes companies profitable.

You can moralize all you want, but the incentives align for a company to
maximize on profitability, not on "hey my customers like me". You can't pay
for dinner with goodwill, but you can pay for dinner with money.

If I'm being so disrespectful, why is my engagement high? Do people love to be
disrespected?

You're talking about what ought to be, as if it were how it is. It's not that
way. It's how I describe. Maybe it should be some other way, but it's not.

~~~
inetknght
> _My line of thinking is what makes companies profitable._

I don't disagree, however unfortunate that is.

> _You can moralize all you want, but the incentives align for a company to
> maximize on profitability, not on "hey my customers like me"._

Let's change that so that the incentives _do_ work for being liked.

> _You can't pay for dinner with goodwill, but you can pay for dinner with
> money._

Somewhere there's some middle ground which represents the failure of our
government to incentivize; where companies may still be profitable (even
obscenely so), and employees _are_ able to pay for dinner, and customers
_are_ happy, and people _aren't_ being manipulated.

> If I'm being so disrespectful, why is my engagement high? Do people love to
> be disrespected?

Love to be? No. Most have become desensitized to disrespect such that they
_expect_ to be abused by collusion between big corporations and big
government.

> _You're talking about what ought to be, as if it were how it is. It's not
> that way. It's how I describe. Maybe it should be some other way, but it's
> not._

I'm talking about what ought to be, indeed. I recognize it's not that way. Do
you recognize that it doesn't have to be that way? Don't you agree that it
should be some other way?

~~~
diminoten
I don't agree with your yanking what I've written out of context... Respond in
full please, not line by line.

~~~
inetknght
I replied to every statement. I don't agree with super-large replies, where
people just start looking for a tl;dr.

------
ademup
Interesting write up. Having seen iOS in 'The Good' section with no mention of
Linux or Windows, I thought things might get interesting in "The Bad" or "The
Ugly". But there was no mention at all. That was a bit jarring, though I
respect the author for omitting them if it is just ignorance in those areas.
I'm not sure if there is a connection or intent, but the result may be a
(tacit?) endorsement of Apple.

~~~
DyslexicAtheist
I'm an enthusiastic Linux user (called myself evangelist the first years of
using it) and was always on a mission to defend it against the dark evil
empire that was Microsoft. Apple was only used by graphic design types back
then.

Now that I've established how much I love Linux and the FOSS movement (I'll
still defend it over proprietary works), I'm also a realist, because when you
do a lot of security dev, reality on Linux looks different today. Sticking,
for the sake of discussion, with monolithic kernels (comparing oranges with
oranges): I absolutely hate the security design choices of modern Linux.
Linux could do a lot better in this regard, and Torvalds doesn't do the
industry a favor when he calls security experts a bunch of _masturbating
monkeys_. I can see where he comes from, but as for the attack surface of any
type of Linux (embedded or not), I'd go as far as to call it total trash
compared to its BSD cousins. A lot of this isn't just Linux's fault,
considering how big the system is today. Realistically securing a Linux
system is almost impossible (within budget and time, that is) compared to
BSD.

The only reason why I still put up with Linus is because of FOSS. But if it
were purely for security (on a desktop system) I'd have to say Linux is really
poor.

It hasn't always been that way. My theory (really just a theory) is that
Microsoft got hammered for decades for having shit security (remember
trustworthycomputing.com?), and the message slowly but gradually sank in at
Redmond. Microsoft (and I still get hiccups saying it) somehow got their act
together. I believe all that pressure amounted to something in the end, while
the Linux community missed the train. Not to say that there aren't brilliant
ideas coming out of that community, but there is very little pressure to
provide integrated security that actually protects the user in a coherent
way. You get lots of little pockets of good and bad (as is the nature of
FOSS), and integrating them isn't even done well by the major distributions.

Again, not saying Linux can't be secure, but making it good depends a lot on
who integrates and sets up the system. For the end user with only a
semi-technical background, who has never heard of threat modeling or grsec,
it's a dumpster fire.

I'm not comparing Linux to Apple here, because Apple security is brilliant
and thought through all the way to the UI/UX level. Linux in that regard
doesn't even stand a chance. @thegrugq has a pretty good guide for
OpSec/ComSec which is still relevant to this very day and spells out the
tough reality (which a Linux fan like myself doesn't always appreciate
hearing):
[https://news.ycombinator.com/item?id=8950875](https://news.ycombinator.com/item?id=8950875)

~~~
DyslexicAtheist
Also this, a thousand times:

 _" Dependency slurping systems like NPM, CPAN, go get, and so on continue to
freak me out. They might potentially be more dangerous than manual dependency
management, despite the huge risks of that practice, precisely because they
make it ‘easy’ to grow your project’s dependency graph — and hence the number
of individuals and organizations that you implicitly trust. (And their
trustworthiness can suddenly change for the worse.)"_

~~~
ThePhysicist
I always pin dependencies to fixed versions in production software, which
makes it less likely that my toolchain gets poisoned by a rogue version
update. Also, for languages that allow it I bundle dependencies locally (in
Python e.g. using Wheels) or run my own registry proxy, which makes poisoning
by registry takeover also less likely. Personally I think there are bigger
threats than malicious dependencies, especially if an adversary specifically
targets me. My biggest concern is what happens when someone takes over our
CI/CD setup, as that gives way more access to an adversary.

------
jason46
Does anyone else think Alexa (and other services that record and retain
audio) should be on the list of bad?

------
Symmetry
ARM is a bit more secure than x86 in that there's one unambiguous way to read
a sequence of ARM instructions but with x86 you might have two entirely
different but completely valid instruction sequences if you start reading at
address FOO versus address FOO+1.

~~~
ChrisSD
While that is nice, I'm not quite understanding how this is a security issue.
Can code-reuse attacks exploit it?

~~~
Symmetry
Here's an example

[http://mainisusuallyafunction.blogspot.com/2012/11/attacking...](http://mainisusuallyafunction.blogspot.com/2012/11/attacking-
hardened-linux-systems-with.html)

~~~
inetknght
> _Actually, file descriptions. The description is the kernel state pertaining
> to an open file. The descriptor is a small integer referring to a file
> description. When we send an FD into a UNIX socket, the descriptor number
> received on the other end might be different, but it will refer to the same
> description._

Oh man, that perfectly explains a formerly-unanswered question I had about how
that mechanism worked. Thanks for linking.

------
waynecochran
ELI5: It is 2019. Why isn't everybody using pubkey crypto for email?

    i.e., transparent support of:
    * I publish a public key
    * Anyone can use it to send me an encrypted email
    * I use my private key to read
    * As a bonus, I sign my email with my private key
    * Anyone can verify with my public key

~~~
jcranmer
There are several reasons:

1\. Key distribution is a difficult problem. Where do you go to look up what
the public keys are? The only obvious answer is the email provider, which
brings us to...

2\. Your email is already, in practice, secure from everyone except you,
anyone with access to your email account, and your email provider. Your IMAP,
SMTP, and POP connections are all secured with TLS anyway, and the MX
connections between your email provider and your recipients' email providers
are likely to be secured with TLS as well if at least one of them is a Big
Email Provider. (Even if that's not the case, this communication occurs over
the Internet backbone, which is generally only tappable by the sorts of
people who could probably get into your personal email account via other
means anyway; the extra value is probably illusory.)

3\. Secure email doesn't actually secure any headers. Given that From/To/Date
will be revealed by SMTP logs anyway, and these are the most useful headers
for intelligence analysis, there's not much that could have been improved.

4\. You lose the ability to do full-text search (or even header search, if you
designed it to hide the headers as well).

5\. Anti-spam measures become basically impossible with encrypted email;
you'd have to push everything client-side, which is going to get annoying
when you're running spam detection on your smartphone on 10x the messages and
have to spend extra time decrypting each one.

6\. Deploying any automated public-key infrastructure requires somehow
automating the verification process. The verification process that can be
automated is "this came from a person who has access to this email account,"
but the verification process that people want is "this came from the person
whose name is associated with this email account," which we don't know how to
automate.

~~~
stcredzero
_Deploying any automated public-key infrastructure requires somehow automating
the verification process. The verification process that can be automated is
"this came from a person who has access to this email account,"_

What if all Gmail and Microsoft email accounts came with a 4096-bit public
key, with an option for the state DMV to verify the public key's association
with a driver's license? I'm sure this would come with downsides, but would
they be much worse than state DMVs using your Gmail account to reset their
web app passwords?

