Coordinating disclosure around security issues is hard, especially with a project like Linux where you have an extremely intricate mixture of long-term support kernels, vendor trees and distributions to deal with. Companies who maintain security-critical components really need to ensure that everyone knows how to approach this correctly, otherwise we end up with situations like this where the disclosure process actually increased the associated risks.
(I was part of Google security during the timeline included in this post, but wasn't involved with or aware of this vulnerability)
I mean, it's still altruistic because they're going to 'lose wealth' by donating. You're never going to earn back in tax write-offs as much as you spent by donating.
from the guys who brought you "the spectre patch in the kernel that's disabled by default" and "ignore your doubts, hyperthreading is still safe" comes "the incredible patch built solely around shareholder confidence and breakroom communication"
EDIT: spectre, not meltdown. oops.
I'm not sure what you're referring to here.
I was one of the people who worked on the Linux PTI implementation that mitigated Meltdown. My memory is not perfect, but I honestly don't know what you're referring to.
> The whole IBRS_ALL feature to me very clearly says "Intel is not serious about this, we'll have an ugly hack that will be so expensive that we don't want to enable it by default, because that would look bad in benchmarks".
Live by the SPEC, die by the SPEC
I apologize to the readers for the rant, but this whataboutism is so demeaning to the multitude of very intelligent people working in the field, and it creates harmful divides between the low-level people and the userspace/web people, with huge mutual distrust as shown in the parent comment and in mine.
Something similar to Wuffs (posted on HN very recently), which compiles down to C, might be a good compromise between portability and safe languages. (There may be some contorted way to have Rust emit C, too.)
I wonder if an LLVM backend that issues a very simple and predictable subset of C would be a viable way to support exotic old architectures which only have a C compiler. LLVM-cbe is a thing: https://github.com/JuliaComputing/llvm-cbe
Naturally the runtime also comes along, but that is another matter.
I advise you to read his Turing award speech from 1981.
There was also failure in prediction, in estimation of program size and speed, of effort required, in planning the coordination and interaction of programs, in providing an early warning that things were going wrong. There were faults in our control of program changes, documentation, liaison with other departments, with our management, and with our customers. We failed in giving clear and stable definitions of the responsibilities of individual programmers and project leaders,--Oh, need I go on?
What was amazing was that a large team of highly intelligent programmers could labor so hard and so long on such an unpromising project. You know, you shouldn't trust us intelligent programmers. We can think up such good arguments for convincing ourselves and each other of the utterly absurd. Especially don't believe us when we promise to repeat an earlier success, only bigger and better next time.
And indeed "the system is to blame" in that it's hard to get drivers using safe techniques into the Linux kernel, and Linus is famously anti-security. But I think for the individual programmer, they still end up choosing one of the above 3 mental models.
So I still think it would be interesting to know how people think about security when writing parsers in C, in ring 0, for Linux drivers that are exposed by default in billions of devices.
It's not like NASA sent a rover, written almost fully in C, to Mars. It's not like billions of cars and even more billions of their ECUs are written in C. It's not like the firmware of the keyboard you're writing your comment on, or even the OS/browser you're using is written in C. C bad, Rust good.
Re Rover code and ECUs - this is the difference between safety critical code and security critical code at the attack surface.
The first kind deals primarily with "don't keel over or go crazy when natural phenomena throws unexpected circumstances at you", the second deals with inputs crafted by intelligent adversaries who can see your code and test & iterate attacks to exploit any flaws they uncovered through analysis or experimentation against your implementation. (Of course if we nitpick, an intelligent attacker is a natural phenomenon.)
Those have tons of exploitable bugs!!
NASA rovers and car ECUs have minimal people looking to exploit them, so I'm not overly convinced they're exploit free either.
I'm not a Rust evangelist or even a user, but the current paradigm of "THIS TIME, we'll write safe, complex, performant C/C++ code properly" isn't the solution, nor is manually squashing bugs one by one.
The solution seems to be a combination of improving the tooling around existing C/C++, and starting new projects in safer languages when possible.
That quip was originally about something else, I think.
> "A recent study found that "~70% of the vulnerabilities addressed through a security update each year continue to be memory safety issues.” Another analysis on security issues in the ubiquitous `curl` command line tool showed that 53 out of 95 bugs would have been completely prevented by using a memory-safe language. [...]"
You're going to get owned in the future by people obtaining creds to important stuff (say, AWS creds) and by crappy userspace applications. We can hope that OS security continues to improve, but even if it gets bulletproof, the story is far from over while our apps are all piles of garbage.
At least, that's what I reckon.
Which is not to say that "memory safety" is not a significant issue in C/C++. I wonder why Wuffs is rarely used in C projects to parse untrusted data, given that it can be translated to C.
(This would be useful to have on Google's site here, but I understand if it's supposed to be for academic audiences)
If so, they wouldn't want the public to know why they are or that they are "buying and using cyberweapons" - both to stay effective and for political/international relation reasons. So probably hard to find or contact them.
I wonder how often unsolicited emails are sent to *@cia.gov with subject: "I have RCE on ___, wanna buy it?" and how they respond.
From what I see, we are moving away from general-purpose computers running an old version of Windows XP or Red Hat toward special-purpose Linux system-on-module devices.
For example, I recall reading about a car that wouldn't trigger collision avoidance during a phone call because it was erroneously applying the brakes to a very small degree, and the logic was not to trigger the brakes when the user was already braking.
There is every reason to believe security is as mediocre on cars as elsewhere.
The code is an utter shitshow, inviting disaster through seemingly normal use of the language. It contains a mess of malpractices that make any modern C++ or Rust developer cringe: goto, memcpy, naked pointers, type unsafe casts, raw loops, using malloc to allocate memory for input buffers.
Why do people continue to juggle chainsaws? I think it's fear of new things, fear of change. Old habits die hard.
They don’t add features that break backward compatibility, or features that sacrifice runtime performance especially if you don’t use those features. Most new features added to C++ are actually just additions to the STL, which you aren’t forced to use.
It is an open standard, so there is plentiful competition in the realm of compilers. Rust only has 1 so you are SOL if you don’t like something or if the project goes bust one day.
Your holes are its seducing.
It slays from a distance,
You're on the list for
Hmmm. Either BrokenTooth or BlenderWave are apropos to the year.
Going to put this comment here as a reference to quote later when I see a zero click RCE for iOS devices using Bluetooth for drive-by exploitation.
The way I see it, all those vulnerabilities prove the opposite. If there were no "many eyes", I doubt most of those vulnerabilities would have been exposed to the public at all. But I bet that malicious actors would still be using those.
The argument you made reads like "hospital theory is disproven, because whenever we get more hospitals and doctors, more people end up with a diagnosis".
The only conclusion people should be drawing from the last 20 years of security being taken seriously is that writing secure software is hard, finding bugs is hard, and the business model doesn't really matter.
It's quite possible that for 10 years, the number of eyeballs had not been enough, until it was. The open source model makes it more likely to gather more code reviews.
I hope you and the grandparent get your horses into rehab once you finish your ride. ;-)
Even throwing out the fact that equivalent closed source software has a stupendous amount of money spent on code reviews, the open source model makes those reviews possible. It doesn't necessarily make them likely. That is a very important difference, theoretical eyes make no bugs shallow.
> I hope you and the grandparent get your horses into rehab once you finish your ride
Please refrain from making condescending, smug comments like this here. They do not in any way contribute to the debate.
HN lately (over the course of the previous weeks) has been very quick with broad-swipe sensationalist statements; at least that's the sentiment I'm getting:
— The law of enough eyeballs is disproved by a decade-old bug!
— Sleep deprivation is used for some depression cases, therefore, let's banish sleep and crank all-nighters!
— SOLID is obsolete and debunked, and moreover, the old boomer Robert Martin defends it, so let's banish SOLID!
Repeat ad nauseam about any "mainstream" viewpoint or paradigm. It's getting old very quickly. Thus my abrasive passage that you quoted.
I'd like to see instead a more elaborate discussion of the limitations of this observation (about eyeballs and bugs), which has proved itself more than once, rather than a sweeping statement. Right now the thread reads like a call to abolish all Newtonian mechanics and use relativistic calculations for everything, just because Newtonian physics got "debunked".
I'd argue that maybe a codebase can grow so much that no number of human eyeballs, even using eyeball enhancers like fuzzing and analysis tools similar to Coverity or PVS Studio, will ever bring all the bugs to the surface (and of course there can be design flaws undetectable with tools). And maybe realizing this should alter the way we design complex systems that should be as bug-free as it gets.