Linux Kernel Developer Criticizes Intel for Meltdown, Spectre Response (eweek.com)



I worked on the Meltdown mitigations, at Intel, during the embargo. I still work on Linux at Intel.

Intel told quite a few members of the Linux community about this well before late December. Some got told because they work for traditional distributions and others were contacted directly because they are community maintainers or subject matter experts.

Unfortunately, Greg was not one of those folks that was told early. It's pretty clear at this point that it would have been a lot better had he and folks like him been involved earlier. This is especially true for Greg since he plays such a crucial role in stable kernel maintenance, which is how a lot of the world consumes their kernels (including distros like Debian).

I'm glad Greg thinks "Intel has gotten better at this." I'd like to think so too.


Shouldn't Greg and Linus be some of the very first people to be informed, considering the amount of impact they both have on the entire Linux ecosystem? I find it alarming that they were not contacted immediately, given how much of the world/internet relies on Linux.


That does not address the stupid embargo that prevented the Linux developers who were involved from speaking about it to each other.


It shows that legal and marketing (PR spin) sat higher on the security decision tree than security-minded individuals.


It's a real pity that a company caught putting PR before customers' interests so rarely faces a backlash, even when it's plain as day.


Disclaimer: I work on Linux at Intel.

FWIW, I don't think PR had anything to do with this. Without arguing the merits of embargoes, their goal is to give the good guys a head start on the bad guys. But if you tell too many good guys, the assumption is that the bad guys will find out. So, you try to tell as few good guys as possible.

This was (from my blatantly biased perspective) an honest, but imperfect attempt at keeping the bad guys from finding out. A lot of lessons have been learned since this, and the group of good guys involved in recent issues has been much more comprehensive than with the original Spectre/Meltdown bits. I think Greg was alluding to this when he said "Intel has gotten better at this."


The part that I find amazing about this is that Greg and Linus weren't at the top of the list of people contacted to say 'this is a very sensitive issue and we'd like to work with you to figure out who we need to get involved in this' from a Linux standpoint. Intel has had people working on Linux support for many years now, so it's not like they didn't have people on the payroll who knew the landscape and could readily figure out who to contact. This reeks of incompetence at the senior management levels of Intel.


Greg and Linus are also some of the most visible participants, and as far as I understand, neither is considered particularly expert in the platform intricacies that would be involved in mitigating something like Spectre. Disclosing vulns to high-profile targets is a risky practice; their activity, both public and private, attracts a lot of attention. It makes sense to avoid that risk if possible.


Don't hate the player, hate the game. It's how the entire economic model is set up. For a publicly traded company, stock price matters above all else. Stock goes down, people lose their jobs. PR departments' job consists of damage control and limiting the impact of bad news on the stock. I'm not aware of any company that would've done something different.

People actually admitting fault/guilt, publicly, means that heads are getting chopped off.

If we want to see companies' behavior change, we have to change our entire model of what "economic performance" means, and whether it's even something worth pursuing above all else.


I don't think that's really it.

Some misbehaviours result in PR disasters, but for whatever reason, no one ever calls out a company specifically for putting PR over customers' interests.


Disclaimer: I work on Linux at Intel.

I'm biased, of course. But, I think things are much better now. We have ways of having relatively "normal" conversations about these things where a broad set of community folks are involved. It's not perfect, but it's way better than it was a year ago.

The embargo was really painful to work with, even for folks like myself who were inside Intel during the whole thing. I'm used to being able to ask folks for help and to lean on the incredible knowledge of the kernel community. For Meltdown, I was much more on my own than I was comfortable with. It wasn't fun.


> Unfortunately, Greg was not one of those folks that was told early.

Why wasn't he one of the first people informed by Intel?


Disclaimer: I work on Linux at Intel.

I'll give you an engineer's perspective, and specifically about Meltdown since that's what I worked on.

I posted the first Meltdown mitigations in late October, publicly claiming to mitigate a less dangerous class of side-channel (https://lkml.org/lkml/2017/10/31/884). I figured that Linus would merge it (Linus knew the real reason by that point) and would "wink" at Greg and the other stable maintainers enough to get them to merge it too. At this point, we had months before the embargo ended.

In retrospect, that was a kind of silly assumption, but I was heads down trying to get the patch to work and didn't dwell on it much. I wish I had made more noise about it, especially as we got closer to the end of the embargo.


>"The majority of the world runs Debian or they run their own kernel," Kroah-Hartman said. "Debian was not allowed to be part of the disclosure, so the majority of the world was caught with their pants down, and that's not good."

Are there any actual statistics to back this up? I feel like RHEL and, to a lesser extent, CentOS have a stranglehold on the big enterprise-y environments, and I see Ubuntu basically everywhere else, and Canonical does their own kernels.

Edit: To be clear, I am aware Ubuntu is a Debian derivative, but since we're talking specifically about who was informed for kernel-level mitigations, and Canonical does their own kernels, it seems weird to talk about how Debian wasn't informed and thus people were affected, when Ubuntu being updated wasn't reliant on Debian being updated.


> The Linux distribution can be positively identified in around 30% of cases, and of these 1.39 million Linux computers, just over half are running Ubuntu Linux, nearly a quarter are running CentOS, and around a fifth are running Debian Linux.

https://news.netcraft.com/archives/2017/09/11/september-2017...


I think what he meant by saying that "the majority of the world runs Debian or they run their own kernel" is that almost everyone who is not running a product by Red Hat, SUSE, or Canonical is running a system based on a Debian kernel. And therefore, Debian should have been informed.


Even that interpretation contains little useful information. It's like some hobby-kit auto engine maker saying "The majority of drivers use our engine kits or they buy a brand name car." Sure, that might be true, but it doesn't say anything about how common that case actually is. In the example I put forth, it's obviously minuscule, and Debian is likely much more common than that, but how much?

It feels like someone at Debian is trying to avoid real numbers or relative comparisons to make their case sound more important, but all it results in is statements like this where you look at it and go "WTF are they saying, because what it kind of sounds like seems highly unlikely..."


> It feels like someone at Debian is trying to avoid real numbers

As far as I'm aware, Debian doesn't have any real numbers, because it's not an enterprise. However, the GNU/Linux Distributions Timeline[1] gives a pretty good idea of how widespread Debian is in comparison to other Linux distributions.

[1] https://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Di...


The Debian popularity-contest package can give you an approximate lower bound, though since it's purely opt-in it's very much an underestimate.


If that actually denoted how common a distribution was in any way, we'd see a lot more Slackware in the world. Some distributions are much easier to fork than others, but a hundred forks of a thousand users each doesn't compare well to four forks of a million users each, and the number of users is itself a nebulous concept (do we count installs, or people served by that install?).

I understand it's hard to count, but if they're going to make the case that their group deserved to be on the short list of groups that needed to know by calling on the prestige of their position, they need something a bit more concrete than an extremely vague statement like this.

Debian is very important and has a lot of prestige in Linux, due to its principled stance on software and its position as a base for other popular distros. Personally, I think it's just not the type of importance and prestige that makes it as important to notify about these issues as the large enterprise distros (and their non-enterprise offerings). If you want to cover the most systems with the least number of people involved, you hit the list Intel hit, and at some point you need to make a call on when to stop including people and groups.

In the case of Debian, the type of contributors it attracts might actually work against it. NDAs and threats of lawsuits work a lot better against people who are working in their capacity as a representative of their company (even if as a kernel developer) and who would face major consequences for breaking the embargo. Including random developers from across the world working in their spare time is much more dangerous: there is no existing relationship with a company to fall back on to vet the individuals, and the negative consequences of breaking the embargo for those individuals might be quite small, depending on circumstance. I imagine it's quite a bit more work for Intel's legal department (possibly in background checks they don't even want to think about having to figure out) to sign off on whoever might be the suggested people to include from Debian.


> If that actually denoted how common a distribution was in any way, we'd see a lot more Slackware in the world.

According to the graph, Slackware is way less widespread than Red Hat, and Red Hat is way less widespread than Debian. That seems to be a correct representation of reality, given the popularity of the .deb package format and the size of the apt package repositories, in comparison to .rpm and yum, respectively.

I agree that Intel might have chosen not to inform Debian because of its non-enterprise nature, but that doesn't change the fact that Debian might be the second most popular Linux server distribution in the world[1], with a market share 20x that of SUSE, which was informed about the bug by Intel ahead of time.

[1] https://w3techs.com/technologies/details/os-linux/all/all


> According to the graph, Slackware is way less widespread than Red Hat, and Red Hat is way less widespread than Debian. That seems to be a correct representation of reality, given the popularity of the .deb package format and the size of the apt package repositories, in comparison to .rpm and yum, respectively.

As I noted before, this graph just shows a family tree of distributions, not use. If I made a hundred different distributions, they would show up as a hundred different branches here. That wouldn't mean anyone actually ran or runs them.

"Widespread" only really makes sense with respect to install or end users, neither of which are represented here at all.

> Debian might be the second most popular Linux server distribution in the world

That's not what your link says. It says it's the second most common distribution for powering websites (so it doesn't count dedicated mail servers, cache servers, database servers, file servers, etc). It also says that "Unix" is almost twice as popular as Linux under the same methodology. That likely means BSD variants, but whatever methodology is showing that ~70% of all websites are running on Unix seems somewhat questionable to me.


Are you claiming that there is not enough evidence of Debian being a major Linux distribution, whose developers should be informed of major security bugs before they become public?

> It says it's the second most common distribution for powering websites

There is no other way to measure and compare the popularity of open-source operating systems. Any search for "operating systems market share" shows Debian only seriously behind Ubuntu, and close to CentOS[1][2].

Do you also question the popularity of Ubuntu and CentOS?

[1] https://www.datanyze.com/market-share/operating-systems/debi...

[2] https://www.similartech.com/categories/operating-system


> Are you claiming that there is not enough evidence of Debian being a major Linux distribution, whose developers should be informed of major security bugs before they become public?

I'm claiming that Debian's wording was somewhat weird, and that they aren't necessarily as large as it makes them sound.

Additionally, I'm claiming that the nature of Debian itself may make it harder to get its developers vetted for access (I suspect the kernel developers that got access are employed to work on the kernel by large companies, a fairly common scenario for Google and Red Hat for example, which is another point for including them).

Also, I'm stating that public numbers for operating systems by website are a very poor metric, which I'll cover below.

> There is no other way to measure and compare the popularity of open-source operating systems. Any search for "operating systems market share" shows Debian only seriously behind Ubuntu, and close to CentOS[1][2].

Just because only some numbers are available does not mean they are worth using. For example, in the SimilarTech link you provided[1], you might notice that the operating system usage percentages shown for the top 1k sites add up to less than 5%. For the "entire internet" statistics it's just over 1.5%. I interpret this as the vast majority of sites having no discernible data as to what operating system they run (for their testing criteria, at least). I suspect it's really a measure of servers started that have the default HTTP server running with a landing page saying "Thanks for installing CentOS", or some poorly configured protocol with an OS identifier in it (which has been deemed poor security practice for quite a while, which might explain ancient Unix systems having a large showing...).

For the Datanyze stats[2], the total websites shown add up to just over 2.1 million. What we don't know, since I can't find any info on their methodology, is whether they suffer from the same problem I suspect SimilarTech does, in that they are reporting vastly biased data because they can only determine the OS of a small subset of the total sample.

Additionally, this doesn't count desktop installs. Google alone likely has over 20k developers[3], and the default desktop for them is Goobuntu (a Google variant of Ubuntu). While I expect Windows and OS X to have a large share of that market overall, the relative number of desktops compared to servers means that it may affect the numbers greatly.

There is actually a way to semi-accurately measure operating system usage, at least for operating systems that use default mirror lists and auto-update (most Linux distros, to my knowledge) and spread requests across the mirrors randomly (or using some specific leveling algorithm). If you could get stats on actual downloads of a core package that had a security update within the last 6 months to a year (preferably daily or monthly stats) for a few mirrors, you could statistically derive a lot of information about active installs. You'll miss people that auto-install with special configurations that point to internal mirrors (common in some business settings), but you might get within 10% of a real number for a distribution.
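
A minimal sketch of that kind of extrapolation (Python, with made-up numbers and hypothetical mirror names; it assumes roughly uniform mirror selection and one download of the patched package per active install):

    # Rough extrapolation of active installs from a sample of package mirrors.
    # Assumptions (hypothetical data): mirror selection is roughly uniform, each
    # active install downloads the security update exactly once, and a few
    # cooperating mirrors share their download counts for the patched package.

    sampled_downloads = {
        "mirror-a.example.org": 182_000,
        "mirror-b.example.org": 176_500,
        "mirror-c.example.org": 191_300,
    }
    total_mirrors = 400  # mirrors in the distro's default rotation (assumed)

    mean_per_mirror = sum(sampled_downloads.values()) / len(sampled_downloads)

    # If traffic is spread evenly, each mirror sees ~1/total_mirrors of installs.
    estimated_installs = mean_per_mirror * total_mirrors

    print(f"sampled mirrors: {len(sampled_downloads)}")
    print(f"mean downloads per mirror: {mean_per_mirror:,.0f}")
    print(f"estimated active installs: {estimated_installs:,.0f}")

As noted above, installs pointed at internal mirrors never show up in such counts, so the result is closer to a lower bound than an exact figure.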

> Do you also question the popularity of Ubuntu and CentOS?

I question any number presented by these sites. I have plenty of anecdotal evidence to suspect they are popular. If I did not have even that anecdotal evidence, I wouldn't assume to know much from the data we have seen so far.

1: https://www.similartech.com/categories/operating-system

2: https://www.datanyze.com/market-share/operating-systems

3: https://www.quora.com/How-many-software-engineers-does-Googl...


> I'm claiming that Debian's wording was somewhat weird, and that they aren't necessarily as large as it makes them sound.

In fact, it was not Debian's wording, but Greg Kroah-Hartman's. He is currently responsible for the Linux kernel -stable branch, and he previously worked for SUSE, where he initiated the development of openSUSE Tumbleweed[1]. So he is in no way associated with Debian, and he is one of the most important employees of the Linux Foundation.

> Additionally, this doesn't count desktop installs. Google alone likely has over 20k developers[3], and the default desktop for them is Goobuntu (a Google variant of Ubuntu).

Interestingly, Google ditched Ubuntu for Debian in January 2018[2], which means that the entire Google internal engineering environment is now based on Debian.

[1] https://en.wikipedia.org/wiki/Greg_Kroah-Hartman

[2] https://www.theinquirer.net/inquirer/news/3024623/google-dit...


> In fact, it was not Debian's wording, but Greg Kroah-Hartman's

Ah, thanks for the correction. I was under the impression it was Debian speaking. I still think it's an odd way to describe the group in question (those running vanilla kernels), but there's less negative connotation since the obvious beneficiary of that wording wasn't responsible for the statement.

That said, it's clear that some of my prior statements were erroneous.

> Interestingly, Google ditched Ubuntu for Debian in January 2018[2], which means that the entire Google internal engineering environment is now based on Debian.

Interesting. I wonder if they did an easy upgrade for the Goobuntu installs, or if it's just for new installs. To my knowledge, Google has a recommended Linux distro, and developers can choose it or choose another supported distro if they have reason to (e.g., there are devs running Windows and OS X). In that type of situation, I would think some portion of developers might have switched quickly, others as they had problems and needed a reinstall, and finally new hires. There's probably a good mix of the two distros, which will shift towards the new one consistently over time. It will undoubtedly increase install numbers though, as it's not a small number of people.

I still wish we had some better numbers on this. :/


Honestly, I didn't even know SUSE still existed. Linux distros are dominated by Debian/Ubuntu and Red Hat/CentOS.


That's very likely. I only posted that Netcraft survey because it was the only statistical data I know of :)


There are a massive number of systems running Linux that are not publicly facing (e.g. compute/service nodes in supercomputers, systems for providing internal infrastructure for R&D, manufacturing, etc).


Such systems also most likely are not vulnerable to Meltdown/Spectre.


Why would you begin to think that at all?

Just because they're offline doesn't mean that they're safe; it only means they're safer than an online system.


Those systems might be immune to attacks that networked machines are vulnerable to, but often networked attacks are used in order to leverage another exploit, such as Meltdown/Spectre.

A vector other than the network could be exploited to leverage an attack that utilizes Meltdown/Spectre.


Most of the biggest software vulnerability catastrophes actually involve just such systems. A "secure internal network" is a 90s mirage. See e.g. https://www.wired.com/story/notpetya-cyberattack-ukraine-rus...


Meltdown & co. only become relevant once you can execute on a CPU. If an adversary can run software on your industrial robot, you are already compromised.


Think browsers and virtualization, and the various VPNs that invariably connect "internal networks" to the outside world via computers that straddle, or alternate between, other networks.


The majority of Linux installations are probably Android smartphones, and Google was informed all right, but 95% of those phones were still caught with their pants down, because their pants had fallen down a long, long time ago: they last received security updates years ago.


How many Android installations use Intel processors?

Edit: never mind, Android was apparently indeed affected.

https://www.androidcentral.com/meltdown-spectre


Spectre affected virtually every out-of-order execution CPU in existence; Meltdown was much more limited in scope (and very few non-Intel CPUs were affected, but some were).


Spectre and Meltdown are a problem for Android, but it's a smaller problem than on PCs. The vast majority of Android phones use processors that are in-order, and lack the out-of-order execution necessary for these exploits.

High-end Android phones with out-of-order processors (the minority) are only vulnerable to Spectre, but it's a little complicated [1]. Then there is only one ARM core (Cortex A75) that is vulnerable to Meltdown, but that was released after the vulnerability was published, so software could ship with mitigations in place.

[1]: https://github.com/lgeek/spec_poc_arm


Some of Apple's A-series processors were affected by Meltdown too, AFAIK, though obviously that doesn't affect Android. (But is another ARM core vulnerable to it.)


I can speak to the companies I've worked for: US companies (and this includes the UK, because apparently we love to be like America) use CentOS predominantly, but mainland Europe (Sweden, Germany, Finland) seems to prefer Debian.

I can say that I've personally administered roughly a 1:1 ratio of CentOS to Debian, despite coming from a country whose servers tend to be CentOS.

Of course, this is anecdotal, but don't undersell Debian.


Since 2008, working for US/UK companies, I have only seen RHEL and CentOS (prior to that, mostly SunOS/Solaris). I prefer Debian personally, but for the last few years I have used CentOS for my own projects too; it's too much effort to maintain two parallel sets of skills now that I am not really into OS-level stuff anymore.


Is SuSE still popular in Europe? (Apologies if I got the capitalization wrong)


Besides SAP deployments, I guess not much.

Here in Germany it is mostly Red Hat/CentOS or Ubuntu on the projects I have been involved in.

So, anecdotal data.


https://youtu.be/lQZzm9z8g_U?t=1565 seems to be the same talk delivered at another instance of this conference. He does cite one statistic: an unnamed "top 3" cloud provider told him that fewer than 10% of their customers install company-based kernels and over 90% install community-based kernels like Debian or kernel.org code.

He is not limiting "the majority" to just Debian, though. He is comparing installs of distributions backed by companies (like Red Hat+Canonical+SuSE) as "the minority" vs. every possible non-corporate-backed distribution or source.


By "majority" he may have meant majority of Linux distributions, not majority of Linux users. There are a ton of distributions derived from Debian: https://wiki.debian.org/Derivatives/Census . All of them would have to update their distros to respond to Meltdown.


Perhaps Greg-KH was implicitly including Ubuntu under Debian?


As mentioned in the article, Canonical were brought in much earlier by Intel. They appear to have been among the "chosen few": RedHat, SuSE and Canonical.


Yeah, I don't use Linux too often, but how different is an Ubuntu kernel compared to a Debian kernel? I know Ubuntu is downstream from Debian, or at least it was. So how much extra work/code does Canonical add to its kernels?


Canonical had Spectre and Meltdown mitigations out for their kernels before Debian did, and he also specifically mentions Canonical as getting alerted, so apparently enough.


I think there should be a distinction between Canonical and Debian, but Ubuntu is certainly based on Debian, and the latter is what should be counted when talking at the scale he did. Many cloud services and providers run on Ubuntu, and as long as Ubuntu uses apt and .deb, it's Debian (along with Linux Mint, elementaryOS, etc.)


> and as long as Ubuntu uses apt and .deb, it's Debian

No, there is more to a distro than sharing a common package manager/format. The kernel, which is heavily patched by Canonical, differs from what Debian ships. Canonical also has a bad habit of carrying a lot of non-upstream patches for other core system components. Ubuntu may have been derived from Debian at one point, but they are fairly different at this point. So much so that you cannot take a system running Debian and 'upgrade' (or downgrade, depending on your point of view) it to one running Ubuntu.


A distro, for the most part, is 1) package manager + repos, and 2) system organization (where things get installed, where configs are, how services are managed, etc). In that respect, Ubuntu is still Debian: most packages are vanilla Debian packages, and they both use systemd.


Actually, you can smash Debian & Ubuntu together; you just get a terrible monster called FrankenDebian. As a youngster I tried this, and it didn't pan out.


> So much so that you cannot take a system running Debian and 'upgrade' (or downgrade, depending on your point of view) it to one running Ubuntu.

That doesn't say much, as sometimes Ubuntu and Debian upgrades are far from smooth.

As a long-time Debian & Ubuntu user, I believe you're exaggerating the work invested by Ubuntu to customize Debian. Beyond the default desktop environment and the attitude regarding proprietary drivers, the work done by Ubuntu is rather negligible.


That's not true.

- https://www.debian.org/security/2018/dsa-4078 (January 4th)
- https://usn.ubuntu.com/3522-1/ (January 9th)

Canonical got caught in the shortened embargo and didn't modify their schedule while everyone else did.


Canonical was alerted, but they still had a delay of several weeks before they had an update.


I'm sure he cares a lot about Debian, and a lot of my early learning was on Debian derivatives, but I feel that the production server-side world has largely standardised on Red Hat. Non-RedHat/CentOS, non-Ubuntu Debian distros are probably a distant fifth in terms of business-critical production servers, so it's no wonder they were not included in the disclosure early on...


Does CentOS get immediate updates like this? My overwhelming impression of that platform is that everything is years out of date... of course maybe that's just the fault of the organizations I've encountered that have used CentOS.


What you're likely seeing, and what the sibling comments are referring to with the "support" you get for years, is that Red Hat (and CentOS by extension) do "back-patching": they either take the existing patch diff, alter it to work with the slightly older version of the software, apply it, and ship that as an update (with the same software version number but an increased release number), or they create a wholly new patch to fix the issue if required and do the same.

This allows you to have a stable "base" with unchanging software requirements, configuration and features to work on for extended periods while also staying very secure, which is why it's favored by enterprises. You can find more information on this on their Life Cycle documentation.[1]

This behavior actually extends to the kernel itself, which is likely why the Debian developer is quoted talking about "the majority of the world runs Debian or they run their own kernel", as Red Hat, Fedora, Ubuntu, etc. run custom kernels that they've patched. You can see this in different bugfixes for a package, and how they keep the same version number for the software but different release numbers, which come after the version in the filename.[2][3] (A small sketch of this version/release pattern follows the links below.)

There are some more nuances to this model, where you can actually get newer software versions at select points, or where they will actually backport new features and not just bugfixes, but those are somewhat separate, and happen much less frequently (point releases in RHEL terminology).

1: https://access.redhat.com/support/policy/updates/errata/

2: https://access.redhat.com/errata/RHBA-2017:2576

3: https://access.redhat.com/errata/RHBA-2017:2926
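
To illustrate that version/release split, here is a minimal sketch (Python, with hypothetical RPM-style file names rather than real Red Hat errata): a back-patched update keeps the upstream version and only bumps the distribution's release field.

    # Hypothetical RPM-style file names: name-version-release.arch.rpm.
    # In a back-patching model the "version" stays pinned to what the release
    # shipped with; only the "release" field increments as fixes are applied.
    import re

    NVRA = re.compile(
        r"^(?P<name>.+)-(?P<version>[^-]+)-(?P<release>[^-]+)\.(?P<arch>[^.]+)\.rpm$"
    )

    before = "examplepkg-2.4.6-17.el7.x86_64.rpm"  # hypothetical pre-fix build
    after = "examplepkg-2.4.6-18.el7.x86_64.rpm"   # same version, new release

    for filename in (before, after):
        m = NVRA.match(filename)
        print(f"{filename}: version={m.group('version')} release={m.group('release')}")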


CentOS gets patches as soon as Red Hat publishes the source RPMs upstream or pushes changes to git.centos.org.

What can be confusing about CentOS/RHEL is that many shops using them are on a release that came out years ago. But that's not really an issue, because Red Hat supports 10-year lifecycles for their releases, and will go on supporting them indefinitely if you pay them enough. All the bundled software gets feature and security updates through its lifecycle, just without changing any interfaces, ABIs, or version numbers. This is great for shops that aren't looking to operate on the bleeding edge and have expensive COTS or bespoke software stacks sitting on top.

Of course, you still have to patch, and organizations can't be forced to do that. Hopefully what you were encountering was just the former and not the latter at those organizations.


Yes it does. Both RHEL and CentOS have a kernel that is "officially" 3.10 but feature-wise is probably something like 4.10-ish for RHEL 7.5 (which was released last April).

RHEL and CentOS 7 had updates for all of Spectre, Meltdown and Foreshadow on the day of unembargo-ing.


I'm pretty sure it's being kept fairly up to date since Red Hat acquired it. Bear in mind that it's a RHEL clone, so it's usually about as out of date as RHEL is, which is what we generally want in the infrastructure/telco/banking space because stability is preferred over shiny new features.


I run CentOS in production. That's because of the following:

     1. SELinux 
     2. OpenSCAP
     3. 99% fit with RedHat
None of the other Linux variants have those tools, or the time put in for compliance goodies. My time == $$$


Why would an off-the-record statement, which is what he said this was, need to be 100% without exaggeration or hyperbole?


The article doesn't mention it was off the record, and I wasn't there, so how would I know it's off the record?

But it seems likely that this isn't just exaggerated or hyperbolic, but just not close to accurate, in which case even if it's off the record it's silly.


Ubuntu is a Debian derivative[1]; I imagine that is what he means:

[1] https://www.debian.org/derivatives/


Yes, but the problem is we're talking specifically about kernel level mitigations, and Canonical rolls their own kernels. They do NOT use debian kernels.


Please re-read the context. I was simply explaining to the OP why this individual may have said this.


That doesn't make sense, though. Debian and Ubuntu each roll their own kernels. Canonical was one of the chosen few that had advance warning from Intel. Debian did not, so it wouldn't make sense for Greg K-H to be talking about Ubuntu at all when he referenced Debian.


Translation: s/The majority of the world runs/My fellow Debian kernel developers run/


On desktops and personal PCs Ubuntu is dominant, but servers are 70% CentOS. Nobody trusts Debian's security record and gcc/kernel abilities.

He is right about the affected numbers in this case, because this time the bugs can be exploited by simple web-page drive-by JavaScript attacks, like a common virus. So it affects Debian and Android more than servers and cloud services. And those are the systems which are not regularly updated, so the security impact is bigger than on a server.


> "That's a long time, and we only heard rumors because another very large operating system vendor told Intel to get off their tails and tell us about it."

I wonder which operating system vendor pressured Intel to tell the Linux dev community, especially because it sounds like it was a non-Linux OS vendor. Whoever it was, good job!

But it seems like Intel has angered the Linux community as well as the various BSD operating systems. You would think Intel would be doing whatever it can to please all operating system vendors especially now that AMD is getting competitive again.


My guess is Microsoft, seeing how they've recently been pouring a lot of resources into their Linux subsystem and how they're trying to seem more "developer-oriented" overall in recent months.


That was my guess as well; also, doesn't Microsoft run some Linux in their Azure cloud? That adds yet another incentive for Microsoft to help the Linux folks out.

Edit: I found a link confirming that Microsoft uses Linux for networking in their Azure data centers [1].

[1]: https://www.datacenterdynamics.com/news/microsoft-runs-azure...


I work for Microsoft. We build customer facing business applications on Linux, Java, Go, etc. Microsoft is Big!


I'm sure Microsoft is doing this now, but wasn't that frowned upon before Satya Nadella took over as CEO, or was building on non-Microsoft products okay before?


I worked for Corel at the turn of the century (ha ha, feels weird saying that). We worked very closely with MS from about 2000 (when MS injected about $130 million into the company). While Corel had their Linux distro, when I worked there we were forbidden from using free software (and specifically GPL software) other than on the Linux code. I heard it was due to an agreement with MS (hearsay, but I believe it). I had my knuckles rapped a few times for disobediently editing my code using Emacs -- eventually I got used to Visual Studio (and even learned to appreciate it to a certain degree).

Things started shifting around 2002 or 2003 and by the time Vector bought out Corel in 2004, I was happily using Emacs and SVN with nobody batting an eyelid. We also worked with MS to implement the shared source version of .Net, which probably nobody remembers. It was supposed to be an "open" reference implementation of .Net. Mono really took on that role, I think mainly because the shared source reference implementation had a completely useless license. You could see the wheels turning in the heads of the MS people on that project. They were actually doing really good work, but everyone knew that the project was going to be meaningless.

IMHO, it was never a kind of binary switch from "open source is cancer" to "we can make money through open source". It was more of a slow internalisation that open source was a better business model for a lot of stuff that MS was doing. It may be that the switch in CEO helped that transition, but it was clear (at least from my perspective) that the wheels were already in motion for a long time prior to that.

I'm sure there are a lot of people still in MS that are rabid about having to control every last scrap of their "IP", but as MS starts to solve some of their revenue problems with open source solutions I think these people will lose relevance. I don't know if the corporate culture of MS (or even most large organisations) will ever get to the point of embracing free software ideals, but at least they seem to see the advantages of engineering collaboration in some circumstances.


It was a slow transition even before Satya - remember the Ms-PL license, and ASP.NET MVC being open sourced under that with great fanfare? Or IronPython and IronRuby? The difference before and after was that before, it felt like carefully controlled and contained experiments, that were still treated as decidedly different and unusual on the inside, and on which the plug could be pulled at any moment.

After, it was like a flood with the gates opened. We went from spending a year to get approval to use Boost (and failing!), to a streamlined approval system for pretty much any piece of OSS out there. It's very visible in Visual Studio if you look closely at the files installed - compare, say, VS 2012 to VS 2015, and note how many more OSS bits are in the full install of the latter.

The same goes for releasing under OSS. You might notice that a lot more developer tools are OSS these days - even many new bits written for closed-source products like VS. Hell, I shipped some Microsoft code under GPLv2 a couple years ago - and it was easier to do than getting Boost approved under the old regime!

So I think it's fair to say that Satya and his cadre of execs did drastically re-imagine the company in that regard, rather than just finalizing an existing process.


> We also worked with MS to implement the shared source version of .Net, which probably nobody remembers. It was supposed to be an "open" reference implementation of .Net

SSCLI, or Rotor, for those interested in archaeology. It sure was an invaluable resource for learning how innards of .NET worked back then. Official release is no longer available from Microsoft, but there are some mirrors on github [1].

[1] https://github.com/SSCLI/sscli20_20060311


It was much more difficult to get all the necessary approvals etc. But possible if you had a particularly strong business case for it.

Of course, Nadella has been in charge for, what, 4 years now? So there had been plenty of time for the company culture to change in that regard. And it did, big time! "GPL is actually a pretty neat hack" is not a phrase I thought I'd ever hear from a Microsoft corporate lawyer, much less in the context of shipping a Microsoft product under that license.


I worked in RedWest for Microsoft in the mid 2000s. Firefox, Chrome, Macs were all common in my building.


Pretty sure MS has a ton of internal services running on Linux based systems too.


Along with that I'd imagine that they're really invested in it with Azure and such too. If all their customers were vulnerable to this they could end up in a bad position. They could obviously work on their own patches but they'd have to get them upstream into distros without telling them what's going on because of the nda/silence desired by Intel. That would leave them in a terrible position.


Azure has a sizable customer base running Linux, so I imagine it was Microsoft.


BTW

> Experts called for a new generation of secure-by-design computers at the Hot Chips conference here. In small steps in that direction, Microsoft and Google described their separate but similar hardware security architectures.

https://www.eetimes.com/document.asp?doc_id=1333616


I wonder how many NSA back-doors will go into those.


The vast majority of regular users would voluntarily install a dozen NSA backdoors on their computers if it guaranteed them full protection from script kiddies, phishing, malware and ransomware.


I feel like you just described Google.


I feel that Apple would fit the description much better.


Or Microsoft.


Users in the US perhaps but elsewhere? I doubt it.


Elsewhere, regular users routinely use e.g. cheap Chinese outdoor and indoor surveillance cams which are most often streaming everything to some Chinese cloud - and nobody actually cares.


As opposed to what? He said secure by design.


Greg's response is understandably frustrated, though seemingly less so than the OpenBSD devs. Why are they being repeatedly left out of the loop?


OpenBSD has been accused of breaking embargoes in the past. They are pretty open about their policy of pushing their fix as soon as it's ready and not doing anything to obfuscate what they're fixing and why.


It's not that OpenBSD breaks embargoes; it's that they won't just be strung along as embargoes are extended forever. So they can't be trusted to extend an embargo. They probably would be happy to if there were a good reason, but not for bad reasons.


> They are pretty open about their policy of pushing their fix as soon as it's ready and not doing anything to obfuscate what they're fixing and why.

This isn't true at all. Please stop spreading misinformation.


Neither of you have provided citation or reasoning. You basically just said "nah uh"


Perhaps, but it rather behooves the one making the accusation to substantiate it with evidence. I don't see how the one claiming that they don't leak can provide evidence of not leaking!


What is the reason then? (And what has led people to have that perception?)


amaranth didn't provide any citation or reasoning either. The difference, of course, being that he holds the burden of proof.

Edit: Actually, the other guy did post a link that explains it, so your comment is silly for two reasons.


Can you provide some links for that?

I feel like this is a thing people repeat, specifically about KRACK, and it's been shown to be false multiple times (like here: https://lobste.rs/s/dwzplh/krack_attacks_breaking_wpa2#c_pbh...).


Theo de Raadt answers that at https://youtu.be/UaQpvXSa4X8?t=4m7s


> "Intel has gotten better at this," he said.

Someone should let the BSD folks know and see what they think...


I get the impression OpenBSD isn't so happy with Intel:

Disable SMT/Hyperthreading in all Intel BIOSes https://news.ycombinator.com/item?id=17829790


And also:

"I'm going to spend my money at a more trustworthy vendor in the future." - Theo de Raadt


My understanding is that they've gotten better. FreeBSD has had advance notice of some issues. Last I heard they offered to let OpenBSD in too, but hadn't found anyone willing to sign an NDA.


I'm reasonably certain that OpenBSD has never agreed to any NDA from anyone as a matter of principle. It's one of the things that makes me love the project so much.


I don't know if "never" is accurate, but certainly they are very NDA-averse. That's their right, but it means they're going to get left out of things like this; it's simply not possible to organize coordinated disclosure of issues if the participants don't agree to not blab ahead of the agreed disclosure date.


An NDA is a legal agreement. It's entirely possible to organise coordinated disclosures without a legal agreement. The folks pushing NDAs, however, don't seem to be interested in other sorts of agreements.


The alternative would be a "gentleman's agreement"? An NDA would seem to be much more transparent, with everyone understanding what was agreed to, rather than something agreed upon over cigars and cognac. Refusing to sign NDAs as a matter of principle doesn't seem like a very mature way to conduct business.


It doesn't have to be a handshake and a nod. Things can still be clearly written down. But formal contracts with consequences take it up a notch. And this isn't about how you "conduct business"; that's a very business-oriented view of what's going on.


To be clear, we do routinely operate on the basis of "gentlemen's agreements"... but Intel is a corporation full of lawyers, so I would be astonished if they were willing to work on that basis.


Why would anyone sign an NDA for vulnerabilities? The problem is on Intel's end, not the OS devs.


So that embargoes can be enforced, so that the OS dev doesn't jump the gun and publicly announce the vulnerability to the world before all the other devs have had a chance to ready their patches.


They are getting better, even if they didn't set a very high bar to begin with. I believe they let the FreeBSD folks know about the last round of Intel bugs as opposed to trying to keep them in the dark like the first few rounds of Intel bugs.


While I agree that Intel's response was far from ideal, I find it a bit rich for Linux kernel developers to be criticizing them. Remember, the completely uncoordinated disclosure happened because Linux kernel developers started discussing the vulnerability -- while under NDA -- on a public mailing list.


You sure those devs were under NDA?


Because Intel is just that good at security and hardening, why not use Intel's newly minted special secure Linux distribution?

https://01.org/blogs/imad/2018/letter-industry

This is almost like satire.


Not sure what you mean. This just looks to me like they're admitting they're awful at security and they're pleading/threatening the global community to help secure their ecosystem or else you're all going to be in a ton of trouble because the robots/cars are going to kill you and your workers and what'll you do then?


> This just looks to me like they're admitting they're awful at security and they're pleading/threatening the global community to help secure their ecosystem

That post seems to make it pretty clear that they are trying to position themselves as the experts rather than the people asking for help. That is the part that is absurd.


I think an Intel contractor was even indirectly responsible for grsecurity going behind a paywall.

(tl;dr: an Intel contractor was shipping an OS that used the grsecurity trademark in its marketing for a kernel grsecurity had not approved)


I was in the room. Greg specifically said “off the record”.


That's not a magic phrase, it's something you negotiate with reporters in advance. You can't really make "off the record" comments in front of an audience at a conference.


Did I say it is a magic phrase? It's what you say when you want something not to be reported. You should definitely get familiar with journalism terminology.


> Did I say it is a magic phrase?

Pretty much, yeah. You did. Pretending that declaring "off the record" during a presentation like that falls anywhere near normal journalistic practice is essentially the same thing as imputing magical properties to that phrase. Confidentiality needs to be negotiated ahead of time, usually with an NDA or embargo agreement before journalists are given any detail. When presenting at a conference that lets journalists attend for free, it's unreasonable to expect any confidentiality for the content of your presentation unless your presentation is part of a closed session that the free media passes don't grant access to.


What is the current status of this? The last time I heard, the OS fixes would impact CPU performance by 30%. Is this still the case? Will new iterations of Intel CPUs be immune to this, or is this an ongoing issue going forward because it's inherent in the architecture?


Disclaimer: I work on Linux at Intel.

Future hardware will include mitigations for these side-channel issues. There's a nice table showing how things are mitigated on future processors here:

https://www.tomshardware.com/news/intel-cascade-lake-details...

But, not everything is mitigated in the silicon or microcode. The mitigation for Spectre / Variant 1 / Bounds Check Bypass, for instance, will continue to be in software.


Based on my own experience and seeing other benchmarks, it looks like there are some synthetic benchmarks and very specific loads that will see up to 30% losses, but most things are really around 10-15% if they have any significant losses at all. That's still a big impact, but not nearly as "sky is falling" as it seemed it might be. No idea how this'll go forward with hardware changes, since those haven't happened yet, and even when they do, how will we benchmark them properly?


Meanwhile, on HN frontpage: https://news.ycombinator.com/item?id=17876476


Everything I've read has said real workload differences can be as little as 3% or as much as 30%, but mostly somewhere in between (closer to the 3% side). Depends on what you're doing I guess.


> While there have been many patches made in Linux, he strongly advised users to update with Intel's microcode fixes as well, as they provide an additional layer of protection beyond what an operating system can provide.

I found this page explaining how to do it: https://www.cyberciti.biz/faq/install-update-intel-microcode...

I checked and my microcode is from 2018-01-21. Since this is a 2014 laptop, that means it got updated by the package manager.
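
If you want to check from a script rather than following the guide by hand, here is a minimal sketch (Linux on x86 only) that reads the microcode field from /proc/cpuinfo. Note that it reports a revision number rather than a date; mapping the revision to a release date still requires dmesg or your distro's intel-microcode package changelog.

    # Minimal sketch: report the microcode revision per logical CPU on Linux/x86.
    # /proc/cpuinfo exposes a "microcode" field (a revision number, not a date).
    from collections import Counter

    revisions = Counter()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("microcode"):
                revisions[line.split(":", 1)[1].strip()] += 1

    if not revisions:
        print("no 'microcode' field found (non-x86 kernel?)")
    for rev, count in revisions.items():
        print(f"microcode revision {rev} on {count} logical CPU(s)")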


Any videos of the presentation?


No video as far as I know, but he did post the slides with some backup materials on his GitHub:

https://github.com/gregkh/presentation-spectre

One note from his talk though, his slides say that Foreshadow was fixed in April -- but it's clearly supposed to be August. He mentioned he was fixing the updates even on the flight out to OSSNA.


Does anyone honestly buy into the idea that this is something other than a back door, carefully plotted by certain nation-state actors?

For real.


The creators aren't always aware of the monsters they create. I think Intel was caught with their trousers down - AMD and ARM in many respects too. Hopefully this will lead to automated testing procedures for Intel, other processor manufacturers and security researchers.

The reason these flaws were found in the first place were because of security researchers, hopefully they can now make their cases to get large research grants and bring about a more secure future.

On the other hand, if you want to talk about tin-foil-hat "CPUs working against users", look no further than Intel ME and its equivalents on other processors. Closed-source and highly privileged code running all the time inside the CPU - it's not unthinkable that Intel would bend the knee to some state actors, especially if that leads to higher profits (e.g. a tax break or entry into a new market (cough, China)). It's enough of a concern that a lot of reverse-engineering effort has gone into preventing it from running.


> Hopefully this will lead to automated testing procedures for Intel, other processor manufacturers and security researchers.

They have tons of testing already. The problem is that this is the kind of problem which is easy to miss with tests, especially automated ones, because everything worked correctly and no real code would ever have been affected by the side effects.


This is what I meant and didn't say - it should read:

Hopefully this will lead to better automated testing procedures for Intel, other processor manufacturers and security researchers.

The emphasis on "better" as there is obviously automated testing in existence already.


I’m sure it will but I wouldn’t underestimate the difficulty of finding significant unintended state changes in something on the order of complexity of a modern CPU.


Plenty more security vulnerabilities in Intel ME to be found. It’s now a well known attack surface full of goodies to plunder. I’m sure AMD and ARM have similar excitement in store. This is in addition to what has already been found.


Applying Occam’s razor to Intel ME and similar security issues: conspiracy with some evil agency is possible. Maybe. But not the only explanation.

Following the money suggests Intel was chasing an enterprise market and didn’t bother to think through the consequences (unlikely) or made a car-manufacturing style decision where the costs of failure were deemed to be overshadowed by the profits on their balance sheets.

For seven years the Ford Motor Company sold Pinto cars in which it knew hundreds of people would needlessly burn to death. It had a better design, but implementing it would have meant production delays. It was cheaper to let people die than to fix the issue when it was noticed in testing.

While I’m not equating Intel’s actions to Ford’s in direct comparison, I would say the bean counter thought process is far more likely than conspiracy.


Why would Intel be complicit in it?

Suppose they did this. Now, that the vulnerabilities are public, Intel takes the fall. Oopsie, huh?


Yes, literally every rational person.


Yes. But then, you're asking for an opinion. If you were asking the reader for evidence, you wouldn't have started out with a premise depending on paranoia and conspiracy.



