It's a damning indictment of the computer security industry, IMO.
They push wildly complicated, performance-costly band-aids of dubious benefit against vanishingly unlikely exploits, and pay very little attention to holes like this that you can drive a truck through. https://xkcd.com/538/ is very fitting.
The fact that a database hacker found it while it slipped past all these supposed scanners and auditors and security experts is the icing on the cake. But even if it had been one of them that found it, it was already in the wild.
The technical programming tools and techniques for security are important, don't get me wrong; the security industry just seems to have very little concept of cost-benefit, or of the big picture of how attackers and their targets operate and what motivates them. They seem to mostly exist to justify their own existence.
I've already seen "security experts" wheel out the big scary (and meaningless) term "nation state" over this, which makes me laugh. Sure, it could have been the CCP. It could also have just been some bored kid in his parents' basement; that just wouldn't look good to admit, though.
I don't agree with you fully. Look at SolarWinds, for example, or many other supply chain attacks: they were discovered much later, after successful abuse.
The public availability of the software helped catch the attack much faster than would have been possible with commercial software. Even with intensive scrutiny of changes to the project, a person familiar with the process can still come up with hard-to-detect backdoors.
Future backdoors may be stealthier, but what this case demonstrated is that even a database hacker who doesn't do security audits could catch it, simply because it's open source. The expectation should be that such backdoors would be detected months or years after the fact if they were in your typical closed-source popular application.
This is a case for open source software usage and funding. The security industry can't do much in terms of prevention against a malicious insider who knows the codebase better than any outsider. And open source or not, people can be paid or planted to sabotage software.
I imagine, though, that in this case the attacker has a much easier time staying anonymous than with commercial software built by paid and vetted employees. Or am I missing something about how this was contributed?
It's not always obvious, but devs adding backdoors and vulns is nothing new.
The guy may have been anonymous here, but a compromise of a legit dev's GitHub account could lead to the same outcome.
Each open source project decides how much vetting is applied to contributors. I don't think you can contribute to Linux without using your real name and email, for example. In some countries, getting a job for the express purpose of sabotage is very common. People using stolen IDs to get remote dev jobs is also a thing (although I haven't heard of that being abused for backdooring). At least with open source, you can audit the code from anonymous contributors and look at the project's policy on them.
> The fact that a database hacker found it while it slipped past all these supposed scanners and auditors and security experts is the icing on the cake
It's been reported that both Fedora and Ubuntu's valgrind rigs flagged this, actually, which is why the package changes hadn't propagated to those distros yet. It's true that they didn't get as far as recognizing the root cause because Andres beat them to it, but the security infrastructure was absolutely doing its job.
Which seems even worse for the security industry. They had these tools, didn't understand the warnings, so ignored them and pushed the things upstream anyway.
It wasn't just that they were beaten fair and square on equal terms -- they had the opportunity before the packages got into their distros, and they (allegedly) are the ones who look for and audit security issues. The database guy found it by noticing a peripheral performance issue it caused, then pursued and tracked down the problem. A stark contrast to the incurious attitude of the security theater that was supposed to be actively looking for these things and examining warnings from tools.
> so ignored them and pushed the things upstream anyway
Again, that happened only in Debian testing and Fedora rawhide (and maybe a few other downstreams, though really the exploit requires a particular systemd setup and won't work on arbitrary "linux" systems). Those are rolling release variants deliberately intended to take upstreams rapidly with minimal review, precisely so that integration testing can be done. And it was, and it flagged the issue via at least one symptom.
Only one person gets the cookie for finding any given problem, usually. And this time it was Andres, and we should absolutely celebrate that. But that doesn't mean we run off and shit on the losers, right?
> Again, that happened only in Debian testing and Fedora rawhide
Right, lots of failures happened. I'm not sure of the point.
> Those are rolling release variants deliberately intended to take upstreams rapidly with minimal review, precisely so that integration testing can be done. And it was, and it flagged the issue via at least one symptom.
And nothing was done about it.
> Only one person gets the cookie for finding any given problem, usually. And this time it was Andres, and we should absolutely celebrate that. But that doesn't mean we run off and shit on the losers, right?
The problem isn't that they weren't the first to find it; the problem is that they weren't even in the race. And people and processes are not immune to criticism for failures, so we absolutely can "shit on" the failures at many levels that helped make this situation possible.
A bored kid would be bad because of the potential for this type of threat to become more prolific.
A nation state would utilize this attack chain to steal copious amounts of sensitive data and to prepare infrastructure for coordinated attacks on critical infrastructure and intellectual property.
The threat actor is relevant because it informs us about their strategy. If you expect security experts to prevent this, they need to know why they're preventing it and have some concept of the realism of any given notional threat. There aren't enough of us to address every threat.
As an aside, nation-state is a specific geopolitical term. I don't know why it got co-opted by the security circus; perhaps just an attempt to give themselves more gravitas. They just mean country or state, right? Even that doesn't say anything quantitative or useful about security anyway.
And I really disagree that it matters: whether it's a bored kid, an underhanded corporation, an organized crime ring, a terrorist group, or a government security agency doesn't change much. Some understanding of motivation sure helps, but they don't need to play CIA agent. It's not like the PLA would use this type of hole, the FSB a different type, and ISIS some completely different route. They're all just looking for ways to infiltrate, sabotage, deny, steal, etc. So just work on the security problems, prioritized roughly from the best cost-to-benefit ratio to the worst.
You don't need to know anything about nation states or any of that claptrap to know that rogue maintainers and contributors are a major problem. And you don't even have to know that to know that linking your privileged sshd process to libsystemd is a stupid idea. Yet millions are spent on ever more esoteric and complicated hardening, speculative-execution mitigations, and other things that have never been shown to stop feasible remote attacks.
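(To make that concrete, the linkage is easy to check for yourself. A rough sketch in Python, assuming a Linux box with ldd available and sshd in the usual /usr/sbin location; the path and the ldd-wrapping approach are just my assumptions for illustration, not anything from the advisory:)

    # Sketch: list which shared libraries the local sshd binary resolves,
    # and flag libsystemd / liblzma if they appear among them.
    # Assumes Linux with ldd on the PATH; sshd path is a guess if not on PATH.
    import shutil, subprocess

    sshd = shutil.which("sshd") or "/usr/sbin/sshd"
    deps = subprocess.run(["ldd", sshd], capture_output=True, text=True).stdout
    for line in deps.splitlines():
        if "libsystemd" in line or "liblzma" in line:
            print(line.strip())

If either library shows up in the output, your sshd carries the dependency chain that made this backdoor reachable in the first place.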
Probably because those are cool and techy, and working on people problems is much harder and less interesting to many tech types.
Not to say don't do any of the technical stuff, but the calls for more funding of OSS I'm hearing won't solve problems like this if the funding goes to more of that kind of thing. It's not a funding problem, it's a spending problem. What is desperately needed is some exceptional people with a big-picture understanding of the problem, including good people skills, to hold the purse strings and set some direction.