I find it pretty funny how no matter how many times we're shown that unsafe languages blow up on all sorts of code by all sorts of programmers, anyone would still try to defend the language.
FFS in this case they even found the bug and fixed it, but didn't notice how it could be a vulnerability. So even with eyes directly on issues, we (human programmers excluding djb) can't seem to get it right.
> I find it pretty funny how no matter how many times we're shown that unsafe languages blow up on all sorts of code by all sorts of programmers, anyone would still try to defend the language.
Heartbleed, Shellshock, Ghost. OpenSSL implemented its own memory allocator, so you would have gotten the same result in another language. Shellshock was a parsing failure; memory safety had nothing to do with it, yet it was still arbitrary code execution. Ghost is very hard to exploit, which is why people didn't notice that it could be. It's like trying to exploit an off-by-one error.
Bugs in production code are not "safe" regardless of what language you use. What we need are better ways to find bugs before the code is put into production.
Shellshock is somewhat atypical for systems vulns, no? Looking at all the CVEs for Microsoft for a couple of years, essentially all critical security exploits are due to their use of C/C++.
Heartbleed would not have happened just because of a custom allocator. E.g. Rust lets you write a custom allocator too, but its bounds checks would still have caught that over-read.
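To make that concrete, here's a minimal sketch of the Heartbleed-style over-read in safe Rust (hypothetical names, not OpenSSL's actual code): the attacker-supplied length is checked against the buffer, so the worst case is a refused request or a panic, never a leak of adjacent heap memory.

```rust
/// Hypothetical heartbeat echo: copy `claimed_len` bytes out of `payload`.
/// In C, a memcpy with an attacker-controlled length reads past the buffer
/// and leaks adjacent heap memory. In safe Rust the slice is bounds-checked.
fn heartbeat_response(payload: &[u8], claimed_len: usize) -> Option<Vec<u8>> {
    // `get` returns None instead of reading out of bounds;
    // indexing (`&payload[..claimed_len]`) would panic rather than leak.
    payload.get(..claimed_len).map(|p| p.to_vec())
}

fn main() {
    let payload = b"bird";
    // Honest request: claimed length matches the payload.
    assert_eq!(heartbeat_response(payload, 4), Some(b"bird".to_vec()));
    // Heartbleed-style request: claims 64 KB but sends 4 bytes.
    // No adjacent memory is returned; the over-read is simply refused.
    assert_eq!(heartbeat_response(payload, 65535), None);
}
```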
Basically, using C/C++ means that in addition to all the normal security logic errors like command injection, you've got to worry that an errant copy or overflow hands total execution control to an attacker. It's bizarre not to realise this is a huge language failing and that most of the systems-level exploits are purely due to poor languages. Even with all the crazy codegen hardening and memory-layout shuffling modern compilers and OSes do, even with some hardware support, it's still happening.
> Shellshock is somewhat atypical for systems vulns no? Looking at all the CVEs for Microsoft for a couple of years, essentially all critical security exploits are due to their use of C/C++.
You're kind of answering your own question. Most OS bugs are in C because most OS code is in C.
> Heartbleed would not have happened just because of a custom allocator. E.g. Rust lets you write a custom allocator too, but its bounds checks would still have caught that over-read.
If you get a large buffer and then "allocate" it by returning pointers to pieces of it (or offsets if you don't have pointers), now the compiler/runtime only knows where the end of the buffer is, not where the end of the allocation is supposed to be. You can write dumb code in any language.
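A minimal sketch of that failure mode (a toy bump allocator with hypothetical names, not any real codebase): sub-allocations are carved out of one shared buffer, so the language's bounds checks only know about the parent buffer. An over-read of one sub-allocation quietly returns its neighbour's data, Heartbleed-style, with no panic.

```rust
// A toy bump allocator carving sub-allocations out of one big Vec<u8>.
// The runtime only knows the parent buffer's bounds, so a read with the
// wrong length can return a *neighbouring* allocation's data without
// any panic -- exactly the failure mode described above.
struct Arena {
    buf: Vec<u8>,
    next: usize,
}

impl Arena {
    fn new(size: usize) -> Self {
        Arena { buf: vec![0; size], next: 0 }
    }

    // "Allocate" by handing out an offset into the shared buffer.
    fn alloc(&mut self, data: &[u8]) -> usize {
        let off = self.next;
        self.buf[off..off + data.len()].copy_from_slice(data);
        self.next += data.len();
        off
    }

    // Buggy read: trusts the caller's length, which is checked only
    // against the parent buffer, not the individual allocation.
    fn read(&self, off: usize, len: usize) -> &[u8] {
        &self.buf[off..off + len]
    }
}

fn main() {
    let mut arena = Arena::new(64);
    let a = arena.alloc(b"public");
    let _secret = arena.alloc(b"hunter2");
    // Over-reading allocation `a` silently leaks the secret next door:
    assert_eq!(arena.read(a, 13), &b"publichunter2"[..]);
}
```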
> Basically, using C/C++ means that in addition to all the normal security logic errors like command injection, you've got to worry that an errant copy or overflow hands total execution control to an attacker. It's bizarre to not realise this is a huge language failing and that most of the systems level exploits are purely due to poor languages.
The problem with this reasoning is that it's solving the problem in the wrong place. Yes, if you screw up very badly then it's better for the language to blow up the program than let the attacker control it. But you still have to solve the other problem, which is that the attacker can blow up the program or possibly do other things even with "safe" languages because the program is parsing unvalidated input etc. And solving that problem, which needs to happen regardless, causes the first problem to go away.
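For instance (a hypothetical sketch, not any real protocol): a parser that trusts a length field in a memory-safe language doesn't leak memory, but an attacker can still crash it at will, so the input still has to be validated either way.

```rust
// Hypothetical wire format: [1-byte length][payload...].
// Safe languages turn the memory error into a crash, but the crash is
// still an attacker-triggered denial of service if input isn't validated.
fn parse_message(input: &[u8]) -> &[u8] {
    let len = input[0] as usize;
    // If the attacker claims a length longer than what they sent,
    // this panics: no memory is leaked, but the process still dies.
    &input[1..1 + len]
}

fn main() {
    assert_eq!(parse_message(&[3, b'a', b'b', b'c']), &b"abc"[..]);
    // An input like &[200, b'x'] would panic here -- a DoS, not an RCE.
}
```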
You're not reading it correctly. Microsoft's critical vulns are nearly all of the class of errors that, say, Rust, solves. Memory safety issues. If Windows was written in, e.g. Rust, all those security issues simply would not have happened. I'm not sure how I can make this more clear.
While you can write dumb code in any language, programmers in other languages somehow manage not to turn simple copies into remote code execution. Yet in C, this keeps happening.
> You're not reading it correctly. Microsoft's critical vulns are nearly all of the class of errors that, say, Rust, solves. Memory safety issues. If Windows was written in, e.g. Rust, all those security issues simply would not have happened. I'm not sure how I can make this more clear.
And what I'm saying is that you're solving the problem in the wrong place. I'll take a static analysis tool that will find a buffer overrun at compile time over a runtime that blows up the program in production, every time.
> While you can write dumb code in any language, programmers in other languages somehow manage not to turn simple copies into remote code execution. Yet in C, this keeps happening.
Shellshock, eval, SQL injection, people will write dumb code that results in remote code execution using whatever you like.
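For example, a rough sketch of command injection in a memory-safe language (hypothetical function, with `echo` standing in for a real command): memory safety is irrelevant once untrusted input is spliced into a shell string.

```rust
use std::process::Command;

// BAD: user input is spliced into a shell command string. This is an
// RCE in any language, no matter how memory-safe.
// (Hypothetical example; `echo` stands in for a real network tool.)
fn ping_unsafe(host: &str) -> std::process::Output {
    Command::new("sh")
        .arg("-c")
        .arg(format!("echo pinging {}", host))
        .output()
        .expect("failed to run sh")
}

fn main() {
    // A "hostname" like this runs the attacker's command too:
    let out = ping_unsafe("example.com; echo PWNED");
    let text = String::from_utf8_lossy(&out.stdout);
    assert!(text.contains("PWNED"));
}
```

The fix is the same in every language: pass untrusted input as a discrete argument (e.g. `Command::new("ping").arg(host)`) instead of building a shell string.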
> I'll take a static analysis tool that will find a buffer overrun at compile time over a runtime that blows up the program in production, every time.
Then you'll love Rust, where the compiler is essentially one ultra-comprehensive static analyzer. :)
Cool, but after all this time, C's static and dynamic security tooling is still failing. So today, in the real world, your choices seem to be: fail at runtime, or fail and execute arbitrary code.
Care to point out all the RCEs that exist in the millions of lines of C# and Java out there? Apart from exec/eval I don't recall seeing a single one (I'm sure there are a few where they interop or use unsafe code).