Glad to see you’re still doing great stuff, and also very glad to see your new employer supports such things, especially compared to our old employer! Part of why I retired around the same time you left was because I wanted to make and share things.
They should have a legal obligation to engage in coordinated/responsible disclosure, and it should be a crime to sell or disclose a 0day to anyone other than a state-designated security organization or the vendor/provider.
If it won’t be handled through criminal law, then it’ll be handled through civil litigation: anyone who was exploited as a result of this disclosure should sue the discloser for contributing to the damage they’ve suffered.
The disclosure doesn't appear very "full". Looks like this was slipped into mainline Linux among dozens of other mostly-irrelevant "CVEs" with nobody highlighting the fact that it is, in fact, Dirty COW on steroids.
Or is everyone expected to upgrade and reboot every 48 hours for all eternity and just deal with potential regressions all the time?
I think this reflects poorly on the original reporters. If you have a weaponized 700-byte universal local root exploit script ready to go, perhaps you should coordinate with major distros for patches to be available before unleashing it on the world. No matter how "veteran" you are.
I think I must misunderstand. Are you saying that you upgrade and reboot every production system that you administer to apply each commit to the kernel branch it's using, essentially immediately?
That doesn't make sense to me for a few reasons, but I struggle to find a different reading that applies "upgrade and reboot on a moment's notice" to the "slipped into mainline linux" scenario. Kindly help me to do so.
No: your posture with respect to having to cycle servers is a super complicated subject and you address it both with process and with architecture (for instance: you can be blasé about things like CopyFail if you don't allow multitenant shared-kernel in your design in the first place). But no matter what process and design you have, if you're hosting sensitive workloads, you always have to be in a position where you can metabolize having to cycle your servers.
It's a category error to talk about a disclosure event like this as something that would destabilize someone's fleet operations. The Linux kernel is fallible. So is the x64 architecture. You already have to be ready to lock things down and reboot (or mitigate) at a moment's notice.
Remember: whatever else grumpy sysadmins have to say about this, Xint are the good guys. Contrast them with the bad guys, who have vulnerabilities just as bad as CopyFail, but aren't disclosing them at all --- you only find out about them when it's discovered they're actively being exploited. There's no patch at all. There isn't even a characterization of how they work, so that you could quickly see what to seccomp. That's the actual threat environment serious Linux shops operate in.
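To make the seccomp point concrete: once you have even a rough characterization of a bug, a stopgap can be a few lines of libseccomp. A minimal sketch, assuming purely for illustration that the exploit depends on one identifiable syscall (userfaultfd here is my stand-in, not the actual CopyFail trigger); compile with -lseccomp:

    #include <errno.h>
    #include <seccomp.h>

    /* Deny just the syscall the published characterization says the
       exploit needs; everything else keeps working until you can patch. */
    int deny_trigger_syscall(void)
    {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW); /* default: allow */
        if (!ctx)
            return -1;
        if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM),
                             SCMP_SYS(userfaultfd), 0) < 0 ||
            seccomp_load(ctx) < 0) {
            seccomp_release(ctx);
            return -1;
        }
        seccomp_release(ctx);
        return 0;
    }

That's the value of getting a characterization at all: without one, you don't even know which syscall to refuse.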
I find it curious to call someone dropping a weaponized root exploit before major distros or even LTS kernel git branches have patches ready "good guys". This could have been handled with much more grace.
Again: I made the actual distinction between bad guys and good guys clear. Good guys don't become bad guys simply because kernel security is an inconvenience to you.
There are more than just good guys and bad guys; in particular, there are also opportunists.
Opportunists are the ones who will sell a 0day to bad guys. Or who will drop a 0day publicly to promote their services. And they’ll fight tooth and nail against any actual legal obligation to engage in responsible and coordinated disclosure, because they make more money without that.
Seems like a classification you just made up to navigate a message board debate: the category that equates commercial vulnerability research for security products and people who sell zero-day vulnerabilities to bad guys.
People who sell zero-day vulnerabilities currently sell to both good guys and bad guys, they’re a third thing (mercenaries). However, that third thing is also bad, just a different kind of bad than what you’re calling “bad guys.”
The people selling weapons to the Taliban aren’t bad in the same way the Taliban are; one is bad for ideological reasons, the other is bad for enabling bad actors, even if they also sell to the good guys.
Whatever the entity you're thinking of that sells exploits/"CNE enablement packages", they're not in the same bucket as entities that find and disclose vulnerabilities.
Sounds like bounties are unnecessary then. The argument I’ve always seen for them is that if they don’t exist and aren’t substantial enough, the research will still happen but the results will go to the highest bidder.
To be fair, once Xint gave the heads-up and the kernel team committed a patch, what was Xint supposed to do? Keep asking the kernel security team to backport patches for the LTS kernels?
As soon as a patch is committed, the clock starts ticking: the exploit will be discovered by reverse-engineering recent commits. The commit was made on April 1st, Xint disclosed it on the 29th. If the kernel security team had wanted to, they had 28 days to backport patches to the LTS branches...
It was clear that the original comment didn't say that, since we can see it right above. It was clear to me that the quotes were being used as a rhetorical device, not to imply that the GP literally said those words.
It requires the people contributing the work to have the integrity to actually follow the project’s rules. It’s not OK to violate the project’s rules just because you don’t think you’ll be found out as a filthy fucking liar.
I mean best of luck policing this is all I'm going to say. We will soon be back to the "core contributors only" kind of policy in many projects I imagine to avoid the slop spam. The verification will be at the conferences.
Crazy that someone would use this pseudonym while at the same time saying that all of society's problems are caused by a socialist and communist conspiracy.
They mostly relied on OS/Toolbox implementation quirks though, not hardware implementation quirks, because applications that relied on the latter wouldn’t run on the Macintosh XL and that mattered to certain market segments. (Like some people using spreadsheets, who were willing to trade CPU speed for screen size.) Similarly anything that tried to use floppy copy protection tricks wouldn’t work due to the different system design, so that wasn’t common among applications.
So even things that wrote directly to the framebuffer would ask the OS for the address and bounds rather than hardcode them, copy protection would be implemented using license keys (crypto/hashes, not dongles) rather than weird track layouts on floppies, etc. It led to good enough forward compatibility that the substantial architectural changes in the Macintosh II were possible, and things just improved from there.
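To illustrate, screen-drawing code of that era would look something like the following sketch (names are mine; this assumes Universal-Headers-style QuickDraw globals and that the app has already called InitGraf):

    #include <QuickDraw.h>

    /* Write to the screen using the geometry the OS reports in
       qd.screenBits (baseAddr, rowBytes, bounds) instead of
       hardcoding the original 512x342 framebuffer address. */
    void InvertTopScanline(void)
    {
        unsigned char *row = (unsigned char *)qd.screenBits.baseAddr;
        short          n   = qd.screenBits.rowBytes;  /* bytes per row */
        short          i;

        for (i = 0; i < n; i++)
            row[i] = ~row[i];   /* flip one scanline of 1-bit pixels */
    }

Because the address and stride come from the OS rather than from ROM-era constants, the same binary kept working as display geometry changed.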
Eh, there were plenty of games that were coded for a particular clock speed and then, once the SE came out, got an update that included a software version of a turbo button, letting you select which of two speeds to run at. They run FAST on an SE/30 or Mac II and unusably fast on anything newer.
I didn’t encounter too many of those back in the day, I think because there was the VBL task mechanism for synchronizing with screen refresh that made it easy to avoid using instruction loops for timing.
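For anyone who never used it, installing a VBL task looked roughly like this (a from-memory sketch against the Retrace Manager in <Retrace.h>, 68K-style; the function names are mine, and real interrupt code also had to handle A5-world setup before touching globals, which I'm eliding):

    #include <Retrace.h>

    static VBLTask       gFrameVBL;
    static volatile long gFrameCount;

    /* Runs at interrupt time on every vertical retrace. */
    static void FrameTick(void)
    {
        gFrameCount++;              /* one tick per screen refresh */
        gFrameVBL.vblCount = 1;     /* re-arm for the next retrace */
    }

    void InstallFrameTick(void)
    {
        gFrameVBL.qType    = vType;
        gFrameVBL.vblAddr  = (VBLUPP)FrameTick;  /* a plain ProcPtr on 68K */
        gFrameVBL.vblCount = 1;                  /* fire on the next retrace */
        gFrameVBL.vblPhase = 0;
        VInstall((QElemPtr)&gFrameVBL);
    }

The main loop would then pace itself by watching gFrameCount rather than burning cycles in timed instruction loops, which is why such games kept running at the right speed on faster machines.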
Much more common in my experience was the assumption that the framebuffer was 1-bit, but such games would still run on my IIci if I switched to black & white—they’d just use the upper left 3/4 of the screen since they still paid proper attention to the bytes-per-row in its GrafPort.
Could be that by the time I was using a Mac II though that all the games that didn’t meet that minimum bar had already been weeded out.
It was actually mostly written in assembly, but used Pascal calling conventions and structure layouts since that was expected to be the primary language for application developers. As it had been for Lisa, as it was for “large” applications on Apple II, and as was the case for much of the rest of the microcomputer and minicomputer industry and even the nascent workstation industry (eg Apollo).
It was the Lisa system software that was mostly implemented in Pascal, and some blamed this for its size and its poor performance. Compilers and linkers weren’t great back then; most compiler code generation was pretty rigid, and most linkers didn’t even coalesce identical string literals across compilation unit boundaries!
Lisa Workshop C introduced the “pascal” keyword for function declarations and definitions to indicate they used Pascal calling conventions, and otherwise followed Lisa Pascal structure layout rules, so as to minimize the overhead of interoperating with the OS. (I’m not sure whether it introduced the “\p” Pascal string literal convention too or if that came later with Stanford or THINK Lightspeed C.)
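For readers who never used those toolchains, the two conventions looked roughly like this (MyAction and SayHello are invented names; the control-action signature and DrawString are as I remember them from the Toolbox headers):

    #include <Controls.h>
    #include <QuickDraw.h>

    /* "pascal" tells the compiler to push parameters left-to-right
       and have the callee clean up the stack, matching the Toolbox ABI. */
    pascal void MyAction(ControlHandle theControl, short partCode)
    {
        /* ... respond to the control part being tracked ... */
    }

    void SayHello(void)
    {
        /* "\p" builds a Pascal string: a length byte followed by the
           characters, the format Toolbox routines like DrawString expect. */
        DrawString("\pHello, world");
    }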
That brings back the memories. I had a copy of Lightspeed C for the Mac in college.
In the workstation world, most companies used C and not Pascal. Apollo was different in that regard, as their operating system, Domain, was unique to them, while most of the other workstation companies (Sun, HP, DEC, and IBM) were using Unix variants of some kind (either BSD-based or System V-based in most cases). Apollo Domain was written in Pascal and was definitely not Unix-based. It had many unique and interesting features; in particular, it had very sophisticated authentication and file sharing capabilities. A user could log in on any machine that was part of the domain (hence the name) and the user’s complete file system would be made available over the network on that hardware. Every system on the network shared a domain-level file system, which removed the need for many Unix solutions like NFS.

I had just accepted a job offer out of college from HP’s workstation division when HP bought Apollo. By the time I started, a couple of months later, I was part of the HP side of the Apollo Systems Division.
You’re talking about the workstation world circa 1985 and later, but prior to then the victory of C and UNIX wasn’t a sure thing. Apollo was the big player, but they weren’t the only ones.
In particular, many minicomputer vendors had some type of graphics and engineering workstation system built around their minicomputer product line, whether multi-user (where you’d have one minicomputer or even mainframe serving multiple bitmap or vector graphics terminals) or single-user (whether using a dedicated low-end minicomputer as a single-user system or using a new CPU design).
The Xerox Alto is what everyone cites as the start of the workstation trend, but it didn’t just beget the Xerox Star, the Lisp Machine, and the Lisa, it also led to the Three Rivers PERQ and CAD/CAE environments built on top of modular hardware from Data General and DEC, to the point where eventually DG, DEC, HP, and others released their own graphical workstations based on their minicomputer architectures.
All of these used vendor operating systems, not UNIX, and almost all emphasized the use of Pascal and FORTRAN for high-level application development. (The ones that didn’t had vendor languages too, like InterLISP and Mesa for Xerox.)
A good example of this dichotomy is the Puzzle Desk Accessory: originally written in Pascal (as an example of how to write a DA that way), it was too large to include on a 400 KB micro-floppy disk, so it was rewritten in assembly language, going from 6 KB to 600 bytes.