Intel told quite a few members of the Linux community about this well before late December. Some were told because they work for traditional distributions; others were contacted directly because they are community maintainers or subject matter experts.
Unfortunately, Greg was not one of the folks who was told early. It's pretty clear at this point that it would have been a lot better had he and folks like him been involved earlier. This is especially true for Greg since he plays such a crucial role in stable kernel maintenance, which is how a lot of the world consumes its kernels (including distros like Debian).
I'm glad Greg thinks "Intel has gotten better at this." I'd like to think so too.
FWIW, I don't think PR had anything to do with this. Without arguing the merits of embargoes, the goal of them is to give the good guys a head start before the bad guys. But, if you tell too many good guys, the assumption is that the bad guys find out. So, you try to tell as few good guys as possible.
This was (from my blatantly biased perspective) an honest, but imperfect attempt at keeping the bad guys from finding out. A lot of lessons have been learned since this, and the group of good guys involved in recent issues has been much more comprehensive than with the original Spectre/Meltdown bits. I think Greg was alluding to this when he said "Intel has gotten better at this."
People actually admitting fault/guilt, publicly, means that heads are getting chopped off.
If you want to see companies' behavior change, we have to change our entire model on what "economic performance" means, and whether it's even something worth pursuing above all else.
Some misbehaviours result in PR disasters, but for whatever reason, no one ever calls out a company specifically for putting PR over customers' interests.
I'm biased, of course. But, I think things are much better now. We have ways of having relatively "normal" conversations about these things where a broad set of community folks are involved. It's not perfect, but it's way better than it was a year ago.
The embargo was really painful to work with, even for folks like myself who were inside Intel during the whole thing. I'm used to being able to ask folks for help and to lean on the incredible knowledge of the kernel community. For Meltdown, I was much more on my own than I was comfortable with. It wasn't fun.
Why wasn't he one of the first people informed by Intel?
I'll give you an engineer's perspective, and specifically about Meltdown since that's what I worked on.
I posted the first Meltdown mitigations in late October, publicly claiming to mitigate a less dangerous class of side-channel (https://lkml.org/lkml/2017/10/31/884). I figured that Linus would merge it (Linus knew the real reason by that point) and would "wink" at Greg and the other stable maintainers enough to get them to merge it too. At this point, we had months before the embargo ended.
In retrospect, that was a kinda silly assumption, but I was heads down trying to get the patch to work and didn't dwell on it much. I wish I had made more noise about it, especially as we got closer to the embargo ending.
Are there any actual statistics to back this up? I feel like RHEL, and to a lesser extent CentOS, have a stranglehold on the big enterprise-y environments, and I see Ubuntu basically everywhere else, and Canonical does their own kernels.
Edit: To be clear, I am aware Ubuntu is a Debian derivative, but since we're talking specifically about who was informed for kernel-level mitigations, and Canonical does their own kernels, it seems weird to talk about how Debian wasn't informed and thus people were affected, when Ubuntu being updated wasn't reliant on Debian being updated.
It feels like someone at Debian is trying to avoid real numbers or relative comparisons to make their case sound more important, but all it results in is statements like this where you look at it and go "WTF are they saying?", because what it kind of sounds like seems highly unlikely...
As far as I'm aware, Debian doesn't have any real numbers, because it's not an enterprise. However, the GNU/Linux Distributions Timeline gives a pretty good idea of how widespread Debian is in comparison to other Linux distributions.
I understand it's hard to count, but if they're going to make the case that their group deserved to be on the short list of groups that needed to know by calling on the prestige of their position, something a bit more concrete than an extremely vague statement like this is needed.
Debian is very important and has a lot of prestige in Linux, due to its principled stance on software and its position as a base for other popular distros. Personally, I think it's just not the type of importance and prestige that leads it to be as important to notify about these issues as the large enterprise distros (and their non-enterprise offerings). If you want to cover the most systems with the fewest people involved, you hit the list Intel hit, and at some point you need to make a call on when to stop including people and groups.

In the case of Debian, the type of contributors it attracts might actually work against it, as NDAs and threats of lawsuits work a lot better against people who are working in their capacity as a representative of their company (even if as a kernel developer) and who would face major consequences for breaking the embargo. Including random developers from across the world working in their spare time is much more dangerous, as there is no existing relationship with a company to fall back on to vet the individuals, and the negative consequences of breaking the embargo might be quite small for those individuals, depending on circumstance. I imagine it's quite a bit more work for Intel's legal department (possibly in background checks they don't even want to think about having to figure out) to sign off on whoever the suggested people to include from Debian might be.
According to the graph, Slackware is way less widespread than Red Hat, and Red Hat is way less widespread than Debian. It seems to be a correct representation of reality, given the popularity of the .deb package format and the size of the apt package repositories in comparison to .rpm and yum, respectively.
I agree that Intel might have chosen not to inform Debian because of its non-enterprise nature, but it doesn't change the fact that Debian might be the second most popular Linux server distribution in the world, with a market share 20x that of SUSE, which was informed about the bug by Intel ahead of time.
As I noted before, this graph is just showing a family tree of distributions, not use. If I made a hundred different distributions, they would show as a hundred different branches here. That wouldn't mean anyone actually ran or runs them.
"Widespread" only really makes sense with respect to install or end users, neither of which are represented here at all.
> Debian might be the second most popular Linux server distribution in the world
That's not what your link says. It says it's the second most common distribution for powering websites (so it doesn't count dedicated mail servers, cache servers, database servers, file servers, etc.). It also says that "Unix" is almost twice as popular as Linux under the same methodology. That likely means BSD variants, but any methodology showing that ~70% of all websites are running on Unix seems somewhat questionable to me.
> It says it's the second most common distribution for powering websites
There is no other way to measure and compare the popularity of open-source operating systems. Any search for "operating systems market share" shows Debian only seriously behind Ubuntu, and close to CentOS.
Do you also question the popularity of Ubuntu and CentOS?
I'm claiming that Debian's wording was somewhat weird, and that they aren't necessarily as large as it makes them sound.
Additionally, I'm claiming that the nature of Debian itself may make it harder to get its people vetted for access (I suspect the kernel developers that got access are employed to work on the kernel by large companies, a fairly common scenario for Google and Red Hat, for example, which is another point in favor of including them).
Also, I'm stating that public numbers for operating systems by website are a very poor metric, which I'll cover below.
> There is no other way to measure and compare the popularity of open-source operating systems. Any search for "operating systems market share" shows Debian only seriously behind Ubuntu, and close to CentOS.
Just because only some numbers are available does not mean they are worth using. For example, in the SimilarTech link you provided, you might notice that the operating system usage percentages shown for the top 1k sites add up to less than 5%. For the "entire internet" statistics, it's just over 1.5%. I interpret this as the vast majority of sites having no discernible data as to what operating system they run (by their testing criteria, at least). I suspect it's really a measure of servers that were started with the default HTTP server serving a landing page saying "Thanks for installing CentOS", or that have some poorly configured protocol with an OS identifier in it (which has been deemed poor security for quite a while, and might explain ancient Unix systems having a large showing...).
For the Datanyze stats, the total websites shown add up to just over 2.1 million. What we don't know, since I can't find any info on their methodology, is whether they suffer from the same problem I suspect SimilarTech does, in that they are reporting vastly biased data because they can only determine the OS of a small subset of the total sample.
Additionally, this doesn't count desktop installs. Google alone likely has over 20k developers, and the default desktop for them is Goobuntu (a Google variant of Ubuntu). While I expect Windows and OS X to have a large share of that market overall, the relative number of desktops compared to servers means that it may affect the numbers greatly.
There is actually a way to semi-accurately measure operating system usage, at least for operating systems that use default mirror lists and auto-update (most Linux distros, to my knowledge) and spread requests across the mirrors randomly (or with some specific leveling algorithm). If you could get stats on actual downloads of a core package that had a security update within the last six months to a year (preferably daily or monthly stats) for a few mirrors, you could statistically derive a lot of information about active installs; a rough sketch of the arithmetic is below. You'll miss people that auto-install with special configurations pointing to internal mirrors (common in some business settings), but you might get within 10% of a real number for a distribution.
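To make that concrete, here's a back-of-the-envelope sketch of the idea. Everything in it is invented for illustration (the download counts, mirror counts, and the fraction of installs using public mirrors are all assumptions), and it assumes clients really do pick mirrors roughly uniformly at random:

    /* Back-of-the-envelope sketch of the mirror-sampling idea above.
     * All numbers are invented; the key assumption is that clients pick
     * mirrors roughly uniformly, so a handful of mirrors is a
     * representative sample of total downloads of a security update. */
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical download counts for one security update of a core
         * package, observed at 3 of 400 public mirrors. */
        double sampled_downloads[] = { 81234, 79890, 83417 };
        int sampled_mirrors = 3;
        int total_mirrors = 400;

        /* Guess at the fraction of installs pulling from public mirrors;
         * internal corporate mirrors make the result a lower bound. */
        double public_mirror_fraction = 0.9;

        double sum = 0;
        for (int i = 0; i < sampled_mirrors; i++)
            sum += sampled_downloads[i];
        double per_mirror_avg = sum / sampled_mirrors;

        /* Scale the sample up to all mirrors, then correct for installs
         * that never touch the public mirror pool. */
        double estimate = per_mirror_avg * total_mirrors / public_mirror_fraction;
        printf("Estimated active installs: %.0f\n", estimate);
        return 0;
    }

In practice you'd want counts from many mirrors over a long window and per-architecture breakdowns, but the arithmetic stays about this simple.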
> Do you also question the popularity of Ubuntu and CentOS?
I question any number presented by these sites. I have plenty of anecdotal evidence to suspect they are popular. If I did not have even that anecdotal evidence, I wouldn't assume to know much from the data we have seen so far.
In fact, it was not Debian's wording but Greg Kroah-Hartman's; he is currently responsible for the Linux kernel's -stable branches and previously worked for SUSE, where he initiated the development of openSUSE Tumbleweed. So he is in no way associated with Debian, and he is one of the most important employees of the Linux Foundation.
> Additionally, this doesn't count desktop installs. Google alone likely has over 20k developers, and the default desktop for them is Goobuntu (a Google variant of Ubuntu).
Interestingly, Google ditched Ubuntu for Debian in January 2018, which means that the entire Google internal engineering environment is now based on Debian.
Ah, thanks for the correction. I was under the impression it was Debian speaking. I still think it's an odd way to describe the group in question (those running vanilla kernels), but there's less negative connotation since the obvious beneficiary of that wording wasn't responsible for the statement.
That said, it's clear that some of my prior statements were erroneous.
> Interestingly, Google ditched Ubuntu for Debian in January 2018, which means that the entire Google internal engineering environment is now based on Debian.
Interesting. I wonder if they did an easy upgrade for the existing Goobuntu installs, or if it's just for new installs. To my knowledge, Google has a recommended Linux distro, and developers can choose it or another supported distro if they have reason (e.g. there are devs running Windows and OS X). In that type of situation, I would think some portion of developers might have switched quickly, others as they had problems and needed a reinstall, and finally new hires. There's probably a good mix of the two distros, which will shift towards the new one consistently over time. It will undoubtedly increase install numbers, though, as it's not a small number of people.
I still wish we had some better numbers on this. :/
Just because they're offline doesn't mean that they're safe, only safer than an online system.
A vector other than the network could be exploited to leverage an attack that utilizes Meltdown/Spectre.
Edit: never mind, Android was apparently indeed affected.
High-end Android phones with out-of-order processors (the minority) are only vulnerable to Spectre, but it's a little complicated. There is only one ARM core (the Cortex-A75) that is vulnerable to Meltdown, but it was released after the vulnerability was published, so software could ship with mitigations in place.
I can say that I've personally administered roughly a 1:1 ratio of CentOS to Debian, despite coming from a country whose servers tend to be CentOS.
Of course, this is anecdotal, but don't undersell Debian.
Here in Germany it is mostly Red Hat/CentOS or Ubuntu on the projects I have been involved in.
So, anecdotal data as well.
He is not limiting "the majority" to just Debian, though. He is comparing installs of distributions backed by companies (like Red Hat+Canonical+SuSE) as "the minority" vs. every possible non-corporate-backed distribution or source.
No, there is more to a distro than sharing a common package manager/format. The kernel, which is heavily patched by Canonical, differs from what Debian ships. Canonical also has a bad habit of carrying a lot of non-upstream patches for other core system components. Ubuntu may have been derived from Debian at one point, but they are fairly different at this point. So much so that you cannot take a system running Debian and 'upgrade' (or downgrade, depending on your point of view) it to one running Ubuntu.
That doesn't say much, as sometimes Ubuntu and Debian upgrades are far from smooth.
As a long-time Debian & Ubuntu user, I believe you're exaggerating the work invested by Ubuntu to customize Debian. Beyond the default desktop environment and the attitude regarding proprietary drivers, the work done by Ubuntu is rather negligible.
- https://www.debian.org/security/2018/dsa-4078 (January 4th)
- https://usn.ubuntu.com/3522-1/ (January 9th)
Canonical got caught in the shortened embargo and didn't modify their schedule while everyone else did.
This allows you to have a stable "base" with unchanging software requirements, configuration and features to work on for extended periods while also staying very secure, which is why it's favored by enterprises. You can find more information on this on their Life Cycle documentation.
This behavior actually extends to the kernel itself, which is likely why the Debian developer is quoted talking about how "the majority of the world runs Debian or they run their own kernel", as Red Hat, Fedora, Ubuntu, etc. run custom kernels that they've patched. You can see this in different bugfixes for a package, where they have the same version number for the software but different release numbers, which come after the version in the filename (a tiny sketch of that split is below).
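To make the version-vs-release distinction concrete, here's a tiny sketch that splits a package filename into its pieces. The filenames are invented and real packaging rules have more corner cases than this simple split handles, but it shows how two builds of the same upstream version differ only in the distro's release field:

    /* Split a "name-version-release.arch" style package filename. This is
     * a simplified illustration, not a real package-manager parser. */
    #include <stdio.h>
    #include <string.h>

    static void split_nvr(const char *filename)
    {
        char buf[256];
        strncpy(buf, filename, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';

        /* Release is everything after the last '-'; version sits between
         * the last two dashes; the name is whatever precedes them. */
        char *release = strrchr(buf, '-');
        *release++ = '\0';
        char *version = strrchr(buf, '-');
        *version++ = '\0';

        printf("name=%s  version=%s  release=%s\n", buf, version, release);
    }

    int main(void)
    {
        /* Hypothetical kernel packages: same upstream version, different
         * distro release numbers (and therefore different patch sets). */
        split_nvr("kernel-4.15.0-1.distroA.x86_64");
        split_nvr("kernel-4.15.0-23.distroB.x86_64");
        return 0;
    }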
There are some more nuances to this model, where you can actually get newer software versions at select points, or where they will actually backport new features and not just bugfixes, but those are somewhat separate, and happen much less frequently (point releases in RHEL terminology).
What can be confusing about CentOS/RHEL is that many shops using them are on a release that came out years ago. But that's not really an issue, because Red Hat supports 10-year lifecycles for their releases, and will go on supporting them indefinitely if you pay them enough. All the bundled software gets feature and security updates through its lifecycle, just without changing any interfaces, ABIs, or version numbers. This is great for shops that aren't looking to operate on the bleeding edge and have expensive COTS or bespoke software stacks sitting on top.
Of course, you still have to patch, and organizations can't be forced to do that. Hopefully what you were encountering was just the former and not the latter at those organizations.
RHEL and CentOS 7 had updates for all of Spectre, Meltdown and Foreshadow on the day of unembargo-ing.
3. 99% fit with RedHat
But it seems likely that this isn't just exaggerated or hyperbolic, but just not close to accurate, in which case even if it's off the record it's silly.
I wonder which operating system vendor pressured Intel to tell the Linux dev community, especially because it sounds like it was a non-Linux OS vendor. Whoever it was, good job!
But it seems like Intel has angered the Linux community as well as the various BSD operating systems. You would think Intel would be doing whatever it can to please all operating system vendors especially now that AMD is getting competitive again.
Edit: I found a link confirming that Microsoft uses Linux for networking in their Azure data centers.
Things started shifting around 2002 or 2003 and by the time Vector bought out Corel in 2004, I was happily using Emacs and SVN with nobody batting an eyelid. We also worked with MS to implement the shared source version of .Net, which probably nobody remembers. It was supposed to be an "open" reference implementation of .Net. Mono really took on that role, I think mainly because the shared source reference implementation had a completely useless license. You could see the wheels turning in the heads of the MS people on that project. They were actually doing really good work, but everyone knew that the project was going to be meaningless.
IMHO, it was never a kind of binary switch from "open source is cancer" to "we can make money through open source". It was more of a slow internalisation that open source was a better business model for a lot of stuff that MS was doing. It may be that the switch in CEO helped that transition, but it was clear (at least from my perspective) that the wheels were already in motion for a long time prior to that.
I'm sure there are a lot of people still in MS that are rabid about having to control every last scrap of their "IP", but as MS starts to solve some of their revenue problems with open source solutions I think these people will lose relevance. I don't know if the corporate culture of MS (or even most large organisations) will ever get to the point of embracing free software ideals, but at least they seem to see the advantages of engineering collaboration in some circumstances.
After, it was like a flood with the gates opened. We went from spending a year to get approval to use Boost (and failing!), to a streamlined approval system for pretty much any piece of OSS out there. It's very visible in Visual Studio if you look closely at the files installed - compare, say, VS 2012 to VS 2015, and note how many more OSS bits are in the full install of the latter.
The same goes for releasing under OSS. You might notice that a lot more developer tools are OSS these days - even many new bits written for closed-source products like VS. Hell, I shipped some Microsoft code under GPLv2 a couple years ago - and it was easier to do than getting Boost approved under the old regime!
So I think it's fair to say that Satya and his cadre of execs did drastically re-imagine the company in that regard, rather than just finalizing an existing process.
SSCLI, or Rotor, for those interested in archaeology. It sure was an invaluable resource for learning how the innards of .NET worked back then. The official release is no longer available from Microsoft, but there are some mirrors on GitHub.
Of course, Nadella has been in charge for, what, 4 years now? So there had been plenty of time for the company culture to change in that regard. And it did, big time! "GPL is actually a pretty neat hack" is not a phrase I thought I'd ever hear from a Microsoft corporate lawyer, much less in the context of shipping a Microsoft product under that license.
> Experts called for a new generation of secure-by-design computers at the Hot Chips conference here. In small steps in that direction, Microsoft and Google described their separate but similar hardware security architectures.
This isn't true at all. Please stop spreading misinformation.
Edit: Actually, the other guy did post a link that explains it, so your comment is silly for two reasons.
I feel like this is a thing people repeat, specifically about KRACK, and it's been shown to be false multiple times (like here: https://lobste.rs/s/dwzplh/krack_attacks_breaking_wpa2#c_pbh...).
Someone should let the BSD folks know and see what they think...
Disable SMT/Hyperthreading in all Intel BIOSes https://news.ycombinator.com/item?id=17829790
"I'm going to spend my money at a more trustworthy vendor in the future." - Theo de Raadt
This is almost like satire.
That post seems to make it pretty clear that they are trying to position themselves as the experts rather than the people asking for help. That is the part that is absurd.
(tl;dr: an Intel contractor was shipping an OS that used the grsec trademark in marketing for a non-grsec-approved kernel)
Pretty much, yeah. You did. Pretending that declaring "off the record" during a presentation like that falls anywhere near normal journalistic practice is essentially the same thing as imputing magical properties to that phrase. Confidentiality needs to be negotiated ahead of time, usually with an NDA or embargo agreement before journalists are given any detail. When presenting at a conference that lets journalists attend for free, it's unreasonable to expect any confidentiality for the content of your presentation unless your presentation is part of a closed session that the free media passes don't grant access to.
Future hardware will include mitigations for these side-channel issues. There's a nice table here showing how things are mitigated on future processors:
But, not everything is mitigated in the silicon or microcode. The mitigation for Spectre / Variant 1 / Bounds Check Bypass, for instance, will continue to be in software.
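For anyone curious what a software mitigation for variant 1 even looks like, here's a minimal sketch of the index-clamping idea, modeled loosely on what the Linux kernel's array_index_nospec() helper does; it is simplified and not the kernel's actual code:

    /* Spectre variant 1 (bounds check bypass) software mitigation sketch:
     * clamp the index with branch-free arithmetic so that even a
     * mispredicted, speculatively executed load cannot use an
     * out-of-bounds, attacker-controlled index. Simplified illustration,
     * not the kernel's real implementation. */
    #include <stdio.h>
    #include <stdint.h>

    #define ARRAY_SIZE 16UL
    static uint8_t table[ARRAY_SIZE];

    /* All-ones when index < size, zero otherwise, computed without a
     * conditional branch the CPU could mispredict. Assumes the usual
     * arithmetic right shift of negative signed values. */
    static unsigned long mask_if_in_bounds(unsigned long index, unsigned long size)
    {
        return ~(long)(index | (size - index - 1UL)) >> (sizeof(long) * 8 - 1);
    }

    uint8_t read_element(unsigned long index)
    {
        if (index < ARRAY_SIZE) {
            /* Even if the branch above is speculated the wrong way, the
             * mask forces the speculative load to index 0 rather than
             * somewhere past the end of the array. */
            index &= mask_if_in_bounds(index, ARRAY_SIZE);
            return table[index];
        }
        return 0;
    }

    int main(void)
    {
        printf("%u\n", read_element(3));   /* in bounds: index preserved */
        printf("%u\n", read_element(999)); /* out of bounds: returns 0 */
        return 0;
    }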
I found this page explaining how to do it https://www.cyberciti.biz/faq/install-update-intel-microcode...
I checked, and my microcode is from 2018-01-21. Since this is a 2014 laptop, that means it got updated by the package manager.
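For anyone who wants a quick check without working through that whole page, one way on Linux is to read the microcode revision that each logical CPU reports in /proc/cpuinfo. This is just a small sketch; the revision is a hex number, and mapping it to a release date still requires Intel's published revision guidance or your distro's package changelog:

    /* Print the "microcode" field from /proc/cpuinfo (Linux-specific).
     * The first matching line is enough for a quick check; all cores
     * normally report the same revision. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/cpuinfo", "r");
        if (!f) {
            perror("/proc/cpuinfo");
            return 1;
        }

        char line[256];
        while (fgets(line, sizeof(line), f)) {
            if (strncmp(line, "microcode", 9) == 0) {
                fputs(line, stdout);  /* e.g. "microcode : 0x24" */
                break;
            }
        }
        fclose(f);
        return 0;
    }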
One note from his talk though, his slides say that Foreshadow was fixed in April -- but it's clearly supposed to be August. He mentioned he was fixing the updates even on the flight out to OSSNA.
These flaws were found in the first place because of security researchers; hopefully they can now make their cases to get large research grants and bring about a more secure future.
On the other hand, if you want to talk about tin-foil-hat "CPUs working against users", look no further than Intel ME and its equivalents on other processors. Closed-source and highly privileged code running all the time inside the CPU - it's not unthinkable that Intel would bend the knee to some state actors, especially if that leads to higher profits (i.e. a tax break or entry into a new market (cough, China)). It's enough of a concern that a lot of reverse engineering effort has gone into preventing it from running.
They have tons of testing already. The problem is that this is the kind of bug that is easy to miss with tests, especially automated ones, because everything worked correctly and no real code would ever have been affected by the side effects.
Hopefully this will lead to better automated testing procedures for Intel, other processor manufacturers and security researchers.
The emphasis is on "better", as there is obviously automated testing in existence already.
Following the money suggests Intel was chasing an enterprise market and either didn't bother to think through the consequences (unlikely) or made a car-manufacturing-style decision where the costs of failure were deemed to be overshadowed by the profits on their balance sheets.
For seven years the Ford Motor Company sold Pinto cars in which it knew hundreds of people would needlessly burn to death. It had a better design, but implementing it would have cost production delays. It was cheaper to let people die than to fix the issue when it was noticed in testing.
While I’m not equating Intel’s actions to Ford’s in direct comparison, I would say the bean counter thought process is far more likely than conspiracy.
Suppose they did this. Now, that the vulnerabilities are public, Intel takes the fall. Oopsie, huh?