“Intel Core 2 bugs will assuredly be exploitable from userland code” (2007) (marc.info)
325 points by pixelmonkey on Jan 5, 2018 | 118 comments



This image is linked in the e-mail thread, with (some of?) the errata: https://www.geek.com/images/geeknews/2006Jan/core_duo_errata...

I'm just surprised that the URL is still valid after 12 years!


Is it just me or do AE4 and AE11 have the same description (REP MOVS crossing pages of different types) yet completely different classification/impact?

IIRC this behaviour is now documented in the official manuals as "by design", because it has been like that since the P3.


Same, and don't bother reading too much; the best bugs came after ...

Incredible how I (we) ran on buggy hardware for so long.

We should fund a tiny group for sane cpu design.


The Mill computing people seem to have figured out quite a lot of elegant yet fast security solutions[0][1].

One simple example is the zero-bit. I don't have time to look up the exact details now, but IIRC, it's a simple hardware bit-flag that is part of the metadata of any value that can be read. If set, reading out that value returns zero. If not, it returns the actual value.

Data defaults to having the zero-bit set, and the only way to unset a zero bit is writing new data into whatever you want to read from. The exact mechanism is probably a bit different, but that's how I remember it.

Regardless of the precise implementation, the point is that by adding smart hardware-based metadata like this, in the form of bit-wise flags, you don't have to manually wipe data in memory once you are done with it (and you get the speed benefit of not having to), yet you still get the safety benefit of never having to worry about buffer overflows and whatnot again.

[0] https://millcomputing.com/

[1] https://millcomputing.com/docs/security/
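A toy software model of that idea (my own illustrative sketch, not the Mill's actual mechanism, and the names are made up) might look like this:

    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t value;
        int      zeroed;   /* 1 = never written; reads return 0 */
    } tagged_word;

    /* A slot that has never been written reads as zero, regardless of any
       stale contents left behind in memory. */
    static uint64_t read_word(const tagged_word *w) {
        return w->zeroed ? 0 : w->value;
    }

    /* Only an explicit write clears the flag and exposes the stored value. */
    static void write_word(tagged_word *w, uint64_t v) {
        w->value  = v;
        w->zeroed = 0;
    }

    int main(void) {
        tagged_word w = { .value = 0xdeadbeef, .zeroed = 1 };  /* leftover data, still tagged */
        printf("%llu\n", (unsigned long long)read_word(&w));   /* prints 0 */
        write_word(&w, 42);
        printf("%llu\n", (unsigned long long)read_word(&w));   /* prints 42 */
        return 0;
    }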


> We should fund a tiny group for sane cpu design.

It's happened, again and again. The ARM CPUs powering your phones are a good example. Maybe with more of a market, POWER9 prices might come down.


Half of the recent application-class ARM cores are affected.

They are saner, but the problem is still here.


I was more trying to point to RISC in general ("RISC is good" https://www.youtube.com/watch?v=wPrUmViN_5c). What I really wanted to say is that we've tried, but RISC hasn't really been successful in the (mass consumer) market, which, because of ARM, is only partially true.

In truth you're correct, and ARM is sort of an unruly mess at this point, but that's a much larger discussion.


Order your Raptor Engineering Talos™ II now! Improved with PCI Express 4.0, CAPI 2.0, DDR4 RAM, & POWER9 CPU. https://raptorcs.com/TALOSII/


But POWER is not immune from this, right?


According to Raptor Engineering on Jan 5: "#POWER8 and prerelease variants of #POWER9 vulnerable to #Meltdown (CVE-2017-5754) and #Spectre (CVE-2017-5753 / CVE-2017-5715). #POWER9 is being patched and will not be vulnerable at ship, and there will be no performance loss versus current #POWER9 samples. Patches coming soon."[0]

[0] https://twitter.com/RaptorCompSys/status/949368929507520517


That would be great.

I know nothing about low level programming, and I'm wondering if WebAssembly couldn't be a good place to start designing CPU instruction sets?

Can someone more competent on the matter tell me if this idea is crazy?


> Can someone more competent on the matter tell me if this idea is crazy?

Yes, it's crazy. :)

But seriously, the ISA (instruction set architecture) is not the problem; the optimizations (deep pipelines, speculative execution, etc.) are.
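To make that concrete, here is the widely published Spectre v1 (bounds-check bypass) pattern as a minimal sketch; array1, array2 and victim are illustrative names. Nothing here misuses the ISA: the leak comes from the branch predictor speculatively running the body with an out-of-bounds x and leaving a secret-dependent footprint in the cache.

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    unsigned int array1_size = 16;
    volatile uint8_t array2[256 * 64];      /* one cache line per possible byte value */

    void victim(size_t x) {
        if (x < array1_size) {              /* predictor can be trained to assume "taken" */
            uint8_t secret = array1[x];     /* speculatively reads out of bounds */
            (void)array2[secret * 64];      /* which line gets cached depends on the secret */
        }
    }

The attacker then recovers the byte by timing which line of array2 is cached, even though the out-of-bounds read never retires architecturally.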


The nature of the ISA can influence the kinds of optimizations undertaken.


thanks!


See also, Kris Kaspersky claimed he was able to exploit some of them in browsers back in 2008: https://news.ycombinator.com/item?id=16076941


Prescient money quote:

>As I said before, hiding in this list are 20-30 bugs that cannot be worked around by operating systems, and will be potentially exploitable. I would bet a lot of money that at least 2-3 of them are.


Wow. This is bad for Intel. Industry experts have been expressing concerns about this for ten years. Does this open Intel up to possible repercussions?


> Industry experts have been expressing concerns about this for ten years.

AFAICT, de Raadt was concerned about Intel in general, but not the recent exploits in particular. We can find endless criticisms of every major company from the last 10 years (including on HN!); picking this one mailing list posting is a bit arbitrary in the context of these exploits, even if de Raadt makes some good general points.


I work with a bunch of hardware engineers who used to work at Intel, and the really knowledgeable ex-Intel guy I know said speculative execution has always been known to be risky, but actually exploiting it was presumed to be very difficult or impossible.

Looking back, I should have asked him how Intel evaluates security risks, considering how much of modern-day computing uses it (which makes it an extremely valuable black-hat exploit, since it can work on practically every computer).


In practice, if you cannot demonstrate an exploit, your concerns will be heavily discounted, and this is a case in point. Even if you do show an exploit, attempts will often be made to dismiss it as infeasible in practice (this is for security issues in general, not specifically Intel.)


They were more than general points. Please don't poo-poo this as some sort of horoscope doom-and-gloom post. He specifically pointed out the issues addressed in the recent disclosures:

> It is not just buggy, but Intel has gone further and defined "new ways to handle page tables" (see page 58).

The MMU, page tables, and TLB are all directly related.


> Please don't poo-poo this as some sort of horoscope doom and gloom post

That wasn't my intent at all. de Raadt makes good, generally applicable points.


> AFAICT, de Raadt was concerned about Intel in general, but not the recent exploits in particular.

Really?

How do you explain the following:

> These processors are buggy as hell, and some of these bugs don't just cause development/debugging problems, but will ASSUREDLY be exploitable from userland code.

> Note that some errata like AI65, AI79, AI43, AI39, AI90, AI99 scare the hell out of us.

> AI90 is exploitable on some operating systems (but not OpenBSD running default binaries).

Plus the huge development effort in stack/address randomization, W^X pages, dynamic kernel and C library relinking, etc., as simply 'concern about Intel in general' but not any details 'in particular'?

Even if the stated position is not true, it would be logically inconsistent to assume that someone whose operating system project focuses on 'security and correctness' is simply making general statements and not concerned with actual low-level details...


What forapurpose said: > but not the recent exploits in particular

What you said: > but not any details 'in particular'?

Looking at the list of items Theo gave, most of them were fixed previously, and the others, at least to my largely uneducated eye, do not appear to be related to this specific issue.

Unless I am mistaken (and that's certainly possible!), it was not at all related to Spectre/Meltdown.


The linked Intel errata PDF file is 404'ing, but I think this [0] matches if you are curious about this line:

> Note that some errata like AI65, AI79, AI43, AI39, AI90, AI99 scare the hell out of us.

[0]: http://download.intel.com/design/processor/specupdt/313279.p...


Ugh. I checked some of them, and most of them involve accessing protected memory.

I believe that Intel didn't think these bugs pointed to a fundamental issue that could be exploited, until Google Project Zero created a working exploit.


I know I will sound like a conspiracy theorist, but could those bugs be intentional? I mean, if you are a security agency, would it be possible to push for the introduction of such bugs?


Occam's Razor suggests to me that intelligence agencies have not had to push for processor bugs, on the grounds that A: we can adequately explain the initial existence of these bugs by normal engineering, marketing, and management considerations such as almost everyone here has experienced personally and B: the field of the bugs that come from my first point is so ripe that the intelligence agencies are better served by examining them carefully and finding their own exploits. The primary reason for this is that the best hidden paper trail is the non-existent paper trail; by finding bugs independently and holding all knowledge within the agency, there is zero risk of it ever being revealed that they deliberately inserted bugs into the CPU, which would be a PR disaster for all concerned (and to a non-trivial extent, an act of war).

Given that we know that intelligence agencies have asked for backdoors before, in both hardware and encryption standards, do not interpret my post here as a claim that they would never ever even think of asking for plausibly-deniable CPU bugs to be inserted into hardware. I am just saying that Occam's Razor says we should prefer the perfectly plausible scenario I gave above where they do not have direct involvement in these bugs, not because of their pristine ethics, but because given the circumstances on the ground, actively intervening is not their best choice from their point of view using their valuation function, when they can attain all their goals without active intervention.

(I'm willing to believe in things that might be labelled "conspiracy theories"; history is rife with proof that they have existed in the past, such as the aforementioned cases where we know backdoors have been inserted into crypto standards, as well as other things such as the way in which the Soviet Union was created which essentially involved what was initially a small conspiracy, and I see no reason to believe they have ceased in the modern times. But I want to see how the conspiracy theory passes Occam's Razor; many of the conspiracy-minded, in my opinion, underestimate the randomness and everything-is-always-correlated-a-bit-ness of the real world.)


We now know there are many conspiracies in our industry with respect to national intelligence, but I agree with you: this is probably not one of them.

Why ask for a bug when you can just sit back and wait for them?


Who's Occam, and why is he an authority on guessing the truth again?

Yes, I'm kidding about the first part.


>there is zero risk of it ever being revealed that they deliberately inserted bugs into the CPU, which would be a PR disaster

Since neither the threat nor the reality of PR disasters has ever given the NSA pause before, Occam's razor strongly suggests this theory of yours is wrong.


Since the threat of PR disaster is just an ancillary bit of evidence, rather than core to my logic, even if I accept your point entirely (which I do not), it is not sufficient to destroy my argument. Of the motivations I gave, the one I would expect to be most powerful is the general desire to keep things internal without involving any external resources, which is just a general operational security principle.

I would also disagree that your assessment is correct anyhow; they are insensitive to certain kinds of bad PR, but not generally immune to it, nor do they act generally immune to it. The people in charge of funding the intelligence community may not come after them for getting caught putting backdoors in things; indeed, this may even prove to the people controlling the purse that they are doing their job. But they are certainly sensitive to ensuring that the people controlling the purse do not come off looking bad, and as a part of that, sensitive to not getting caught conducting open, unambiguous acts of war against every nation in the world. We all "know" that they do, everybody else "knows" that they do, and everybody also "knows" that you shouldn't trust Chinese-manufactured electronics either for the same reason, but it's still not openly known. The distinction between open secrets and openly-known facts may greatly confuse Spock and he may shake his head about how illogical it is for everyone to know a thing and for everybody to know everybody else knows but still act as if nobody knows, but is an integral element of understanding human politics, in which appearances matter a lot.


Right, the NSA may not give pause, because it's not really their asses on the line. It would be Intel that would take the beating in PR and revenue, so you'd kind of expect them to consider that before doing any old thing asked of them.


Does the NSA use Intel chips? If so, it would seem unlikely that they made their own compute infrastructure insecure.

Their mission isn't to make everything less secure, it's to make the other team's stuff less secure (where "other team" includes the citizens of the US apparently).

Furthermore, given the number of bugs in just about every piece of software (including kernels), I don't think these bugs are even an anomaly that needs an explanation. Bugs exist in every complex system, and CPUs are very complex these days.

That said, to address your direct question of "would it be possible", I'm fairly sure it would; we have hints from Linus that Three Letter Agencies have asked him for backdoors, so I'm sure large hardware companies have open channels with the NSA et al. as well.


If they run patched OS versions that don't use the affected hardware features, they achieve this goal.


> (where "other team" includes the citizens of the US apparently).

Not really. It's not like stuff like Dual_EC_DRBG was export-only, and IIRC they did hold on to zero-days that would affect American systems, leaving those systems vulnerable while they exploited them elsewhere.


Bugs happen.

What is likely is that security agencies with their larger staffs and budgets do discover many of them before the private sector. It's certainly plausible that the NSA knew about this new class of attacks before we did.

BTW -- forget about CPUs. My biggest concern is with the chips that get less security attention: GPUs, network chips, USB controllers, etc. Those are likely just as bug-ridden or more.


I think it's not unreasonable to think about the possibility of that. My understanding is that these bugs could have been exploited for a very very long time without being noticed?

I also wonder if one or more security agencies find a lot of their efficacy in just a small handful of exploits. And if those were to be patched, they'd find themselves severely hamstrung. So seen as a threat to national security, they have a strong need to ensure the availability of exploits.


Possible, but I think it's more probable that security agencies took advantage of those bugs with help from Intel than that they intentionally put them there in the first place.


At least one of the recent exploits needs to be mitigated at the OS level (I haven't looked at the details carefully, but I know Microsoft and Linux are working on it). Is OpenBSD affected and if so, what are they doing to mitigate it?


Affected and probably doing what everybody is doing for mitigation.


> probably doing what everybody is doing for mitigation

Is there any discussion? I thought OpenBSD development mostly took place on public mailing lists?


Some issues tend to attract a lot of lookie-loos. The patch probably won't be improved by a dozen replies about the incompetence of Intel.


I understand their need and priority, but sometimes those discussions are a great way to learn about the vulnerability - open development teaches others.


I don't disagree. I prefer more open development and try to encourage it, but everyone has their own preferences for what to share.


It looks like none of the BSDs were in on the embargo, so given the complexity of the KPTI patches for Linux, I'd guess it'll take a while to develop the equivalents.


There's nothing prescient about saying that computer system X or Y can be exploited in ways yet to be discovered.

The entire Internet is wide open, the holes just aren't known yet.


You miss the point. Theo wasn't just hand-waving; he had identified specific issues that he believed would eventually get exploited.


"he had identified specific issues that he believed would eventually get exploited"

But those aren't the issues that were exploited. His post has absolutely nothing to do with the current issues. It's just uninformed dogpiling (the ignorance of the crowd).

All chips have errata. This post is not particularly informative or relevant to anything.


This is not the ignorance of the crowd, it is the insight of one informed individual. Dismissing as irrelevant all concerns except those that have already been exploited would be closer to being "the [self-inflicted] ignorance of the crowd" in these matters.


Everyone fishing around for anyone saying anything about Intel chips ever, and then trying to shoehorn it into a current narrative, is the ignorance of the crowd. Every single chip has errata. In this case Theo is pointing at Intel's own list of defects in a revision/chip, which is profoundly unilluminating and completely and absolutely irrelevant.


The issue is not that the errors existed, but the risk some of them presented. Theo correctly identified an area where the risks were greater than generally recognized.


No, he didn't. Intel specifically states that the errata can allow unauthorized access to protected memory. Theo then says "oooh, that sounds scary!". Yes, of course it is, which is why it appears in the errata list.


You are misrepresenting Theo's message, which looks beyond individual errata, saying that collectively they amount to a significant change in how the MMU operates, and from that predicts a high likelihood of a then-undiscovered flaw that would lead to problems much greater than were then being acknowledged by Intel (and he said AMD was heading the same way.)

He was less prescient when he implied that it would take Intel a year or two to get beyond this.


care to enlighten us about this statement, wise one?

> For instance, AI90 is exploitable on some operating systems (but not OpenBSD running default binaries).

If you know how the processor and OS behave at a low level, and read specific hardware errata which you know will cause problems that logically cannot be worked around, developing a proof of concept (aka 'discovering') is simply a waste of time and effort...


From the email:

"(While here, I would like to say that AMD is becoming less helpful day by day towards open source operating systems too, perhaps because their serious errata lists are growing rapidly too)."

Glad to see that in 2018 AMD's reputation has generally improved since then.


Has HN begun to collect suggestions for Intel on how to handle the situation and what to change regarding community interaction to reduce the impact of such flaws? Instead of just bashing, maybe it's time to offer them a hand while they're down on the ground.


I am sure Intel will be fine. It effectively has a monopoly in the desktop and server market and has enjoyed that position and those profits for years. They can handle a bit of criticism from a bunch of nerds on HN.

Maybe loading data speculatively across a protection boundary was careless. It seems that, besides the latest ARM CPUs, no other vendor went that route. But not owning up to it and issuing PR statements saying "This works as designed, not a bug" is a bit hard to stomach.
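For reference, the core transient sequence described in the Meltdown write-ups looks roughly like this (a heavily simplified sketch; kernel_addr and probe are illustrative names, and a real PoC also has to suppress or catch the fault and then recover the byte by timing the probe pages):

    #include <stddef.h>
    #include <stdint.h>

    volatile uint8_t probe[256 * 4096];     /* one page per possible byte value */

    void transient_read(const uint8_t *kernel_addr) {
        /* The load below faults (user code reading kernel memory), but on
           affected CPUs its value is transiently forwarded and indexes the
           probe array before the fault is delivered. */
        uint8_t secret = *kernel_addr;
        (void)probe[(size_t)secret * 4096];
    }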

But if it needs help drafting a better PR release, someone is welcome to point them to HN's comments section.


>They can handle a bit of criticism from a bunch of nerds on HN.

What a reductive and shortsighted evaluation of the situation.

Can they handle the loss of faith from big companies? Can they handle the loss of faith from the entire tech community? Seems to me that AMD et al. have now got the perfect opportunity to erode Intel's market share and build up a large market base amongst cloud providers etc. (not to mention security-minded users) that require hardware that is both resistant to Meltdown and not underperforming.

It's silly to act like this is a storm in a teacup because the HN community is up in arms over it. Monopolies fall, and the loss of trust and key clients tends to precipitate that fall.

>But if it needs help drafting a better PR release, someone is welcome to point them to HN's comments section.

Their PR was shocking, but of all the things people are upset about over this incident, it is literally at the bottom of the list.


> They can handle a bit of criticism from a bunch of nerds on HN.

It was a tongue-in-cheek response to OP's statement that we should feel bad for Intel and offer it help. I suggested that it needs help drafting a better PR release that's a bit more honest and straightforward.

> Can they handle the loss of faith from big companies?

With a $200B capitalization they certainly can.

> Seems to me that AMD et al have now got the perfect opportunity to erode

Agreed. The next step is to see if any of the large cloud providers or PC manufacturers will announce they are buying AMD CPUs. I hope so, because I'd like to be able to buy cheaper CPUs and have more competitors in the market. But realistically I kind of doubt it. At the end of the day INTC's stock hasn't moved that much, and the performance hit as reported by Google didn't seem to be that big.


> At the end of the day INTC's stock hasn't moved that much.

Nor should it... it's a huge stock (so fewer speculators), they have many products besides x86 PC processors, and their reaction/fix to this and the subsequent impact on actual earnings hasn't shaken out yet.

Not pro or anti Intel, just mentioning this as far as the market side is concerned.


AMD’s x86/x86-64 Cross licensing agreement terminates if they ever have more than X (I think 30% or 50%) of the desktop and server market share, so I don’t think so. The agreement purposefully gimps AMD as a minority player.


> AMD’s x86/x86-64 Cross licensing agreement terminates if they ever have more than X (I think 30% or 50%)

Wow I'm surprised this is even legal because it sounds like an implemented monopoly.


Well if it means that AMD couldn't ship any x86 processors and Intel couldn't ship x86-64 (i.e. AMD64) processors, that effectively means neither of them could ship any modern x86 processors at all, so that's effectively mutually-assured destruction. If that were in any danger of happening I'd bet dollars to donuts it'd be renegotiated immediately since they both stand to lose everything.


This is strange; I’d never heard of this agreement. As I understand it, Intel's x86-64 was actually based on AMD's AMD64 design originally. So in reality they are still two separate architectures, in a sense?


Intel's biggest threat is not AMD but ARM.


Their shockingly bad PR and general response are directly tied to the loss of trust. Sure, the downfall of a monopoly is economically disruptive but also creates opportunities for progress.


PR is always shocking.

Let's wait to see how much performance is lost and what vulnerabilities get used in the wild.


The Register did a great job of translating the Intel press release into English:

"We translated Intel's attempt to spin its way out of CPU security bug PR nightmare as Linus Torvalds lets rip on Chipzilla"

https://www.theregister.co.uk/2018/01/04/intel_meltdown_spec...


If they need help, they should look at Google's release. Despite effectively saying the same thing, Intel's is disgusting and defensive, like a guilty man in a police interview yelling "I didn't do it!" Google's is facts, no bullshit language, and effective.


> It is effectively a monopoly in the desktop and server market

But it couldn't be better timing for ARM. AMD isn't the competition (though this helps them a little bit); it really is all about ARM, and it is going to get a lot more attention with this. Windows runs on ARM now.

CPUs can't get much smaller. It is now about how many cores and how much thermal control you can put on a wafer. ARM has the advantage in both of those. We just have to learn how to utilize multiple cores better than we do now.


Which ARM? Have you ever even looked at ARM implementation errata? At the errata for people doing semi-custom ARM like Cavium? Do you think that those companies are as diligent as Intel?

I can’t say anything about ARM vendors, but I’m pretty familiar with MIPS and PowerPC errata from chips in the mid-2000s and they generally made Intel look 10x as professional and careful.



A friend of mine "bought the dip" and profited about $100 in the first 20 seconds and it only got better as the day progressed.


Saying "it is not a flaw, it is working as designed", when that design has led to a demonstrated exploit, marks one as either clueless or duplicitous. Why would a company as large as Intel choose to present itself as such? I guess it thinks we are too dumb to notice (Intel did say it is not a flaw in its press release; I don't know whether it explicitly tried the "working as designed" excuse, but the no-flaw claim by itself is nonsense, regardless.)


My guess is that the memo, besides going through the marketing channels, was also filtered through the legal department, and they advised not to admit guilt as they probably expect to be sued at some point. A clear admission on their part would then be a slam dunk.


That is probably so, though if it came to being sued, I guess the plaintiffs' counsel would be ready to point out the flaws in that line of thinking.

On the other hand, Intel's stock price did recover in response, reversing the somewhat panicked or speculative drop earlier in the day, so perhaps this was mainly for the market.


(playing devil's advocate here, to be clear)

What leads you to believe that Intel has any reason to think that there's an issue that needs changing? Or that "the community" knows anything about their business processes or what Intel should do? They have their highly-paid C-levels to figure that out.

From their perspective, there's no problem. Nothing needs fixin'. You'll keep buying their CPUs, anyways -- you don't really have much of a choice, do you? [0]

Just go install those updates from your vendor(s) and go about your business, you'll be fine. No big deal, nothing to worry about. Carry on. Just like you did with that recent little ME/AMT issue. There'll be another issue to deal with in a few days and everyone will forget all about this one.

[0]: Oh, you're gonna replace all your infrastructure with AMD's CPUs, huh? Yeah, sure you are. They're no different.


Basically, buying a new Intel processor to replace an old one will yield a 5-60% performance gain in I/O-heavy workloads even with no change other than the fixed processor bug, unless you're ready to tweak your OS settings and are fine with the potential vulnerability. With proper marketing they can make huge profits from this situation. Sure, you can buy AMD, but Intel is still faster in many benchmarks. Given that they knew about the bug for 7 months, I think new processors without the Meltdown bug will come soon.


You are right in your assessment. This is how it will go down. The vendors simply need Intel. There is no way they will make enemies of them. The problems are simply passed on to the customers, who have no option but to accept the reduced performance.


IMO the first step would be disclosing all the tricks they implement outside of the specs they publish. If researchers had adequate documentation of all the side effects that these tricks introduce, then the hardware could be properly audited.


Looking at how they behave, the only thing I would expect from them is not to hand out free shovels when people are trying to dig. They will keep turtling until there's a new product they can push, rushing everybody to ditch the "insecure predecessors".


Something like adding "Implicit caching occurs when a memory element is made potentially cacheable, although the element may never have been accessed in the normal von Neumann sequence. Implicit caching occurs on the P6 and more recent processor families due to aggressive prefetching, branch prediction, and TLB miss handling." to the developer's manual.


After all the history of shady things with Intel ME/AMT, hindering the coreboot project's efforts, etc., I highly doubt there will be people who want to do that. Hopefully this story will start a big change in Intel's policies (more likely it won't, though).


They had $4.5 billion in profit last quarter. If they want help, they can pay for it.


Maybe we could develop in more efficient languages with more efficient frameworks, so that all the pressure to improve performance doesn't land on the hardware side? Or we could say developer time is more important and keep pumping out Electron apps, leaving Intel to continue pushing the boundaries of physics.


If it's possible to get an eventual legal judgement against them, then instead of paying x billion dollars in fines, maybe they should be forced to make their future work open source.


1) They aren't going to have to pay a fine. 2) This would just screw the shareholders, making the stock worth less. 3) Open source doesn't have the people, skills, or finances to utilize Intel's internal design data; the only beneficiaries would be other chip manufacturers and nation-states. I'm not sure who this would benefit.


I was thinking the benefit would be the ability to audit it, but yeah, not going to happen.


Since C2Ds are still highly regarded by the FSF RYF crowd, I wonder how much of this has been mitigated and how much of this is still an issue with LibreBoot Trisquel laptops.



Yes, but as the mods didn't catch it, the conversation is now here.


This is a good reminder for everyone to stay away from those engineering-sample Xeon processors on eBay/Taobao.


If only he had come up with a catchy name and a logo, we would have listened.


Remember that naming is one of the hardest things in software engineering.


That and off by one errors ;)


Concurrency

You forgot 2) Cache invalidation and 3)


Haha, you are correct, but there are some other hard problems that I now remember after some quick googling, since I forgot all the related jokes :)

There are only two hard problems in distributed systems: 2. Exactly-once delivery 1. Guaranteed order of messages 2. Exactly-once delivery

There's two hard problems in computer science: we only have one joke and it's not funny.

Source: https://martinfowler.com/bliki/TwoHardThings.html


The two most difficult things in software development.

    * naming things
    * cache invalidation
    * off by one errors


It's always worth having it stated in the canonical form. :)


The fifth hardest thing is figuring out the canonical form :)


I’d risk saying we’re all intensely aware of how hard cache invalidation is right now.


well, we can only speculate...


Yes. This is why it kind of frustrates me to see all the "experts" hate on the naming of bugs. This is how you raise awareness about them, which in some cases could be as important as discovering the bugs (especially if the vendors are unwilling to fix them otherwise - think airline industry, and so on).


The movie was released in 2015; you're asking for the impossible.


"Spectre" is a pretty cool and spooky name all by itself.


Spectre (2015), from IMDB:

> A cryptic message from Bond's past, sends him on a trail to uncover a sinister organization. While M battles political forces to keep the Secret Service alive, Bond peels back the layers of deceit to reveal the terrible truth behind S.P.E.C.T.R.E.

Catchy, on point pop-culturally and ominous.


[flagged]


Except that these bugs were all open on open source operating systems, as well... Do you think Linux wasn't affected by Spectre and Meltdown? That the BSDs aren't? That Xen isn't?

Your comment is disingenuous and dangerous.


I think their point is that the exploit code would be obvious in open source applications and you could choose not to run them. Relying on that violates the defense-in-depth precaution, though.


You (and most likely all the downvoters) are completely missing the point.

I never said that opensource software is not affected; I said "not being affected as a user", because opensource software, being peer-reviewed, will never try to exploit a CPU bug. Opensource software is, by default, non-malicious.


> Opensource software is, by default, non-malicious.

So is closed source software.

You seem to be deeply confused about the scenarios people are worried about.

The main ones are 1. untrusted users being hosted 2. javascript off the web.

All the open source in the world doesn't help in either scenario.

You're focused on someone deliberately running a malicious program. But if they do that, these exploits aren't even necessary to do severe harm. It's a marginal scenario at best.


> > Opensource software is, by default, non-malicious.

> So is closed source software.

Sorry what? 99.999999% of all the malware that exists and has existed today, is/was closed source. Compare that to the other 0.0000001% that got once or twice into opensource and was removed as soon as it got detected.

> The main ones are 1. untrusted users being hosted 2. javascript off the web.

I've never talked about (1); of course my comment was not targeted at server owners but at normal workstation owners that only have one user. WRT (2): that's precisely why I mentioned NoScript. Nowadays the only untrusted software that could be run on your computer, if you were using an opensource OS and opensource apps exclusively, is JavaScript from the web (which is by default closed source).


> Sorry what? 99.999999% of all the malware that exists and has existed today, is/was closed source. Compare that to the other 0.0000001% that got once or twice into opensource and was removed as soon as it got detected.

There is plenty of open-source malware, and most closed source software is not malware.

You seem to be conflating "is in a trusted open source distribution, listed as non-malware" with mere "open source". Code that is openly malicious, or has never been peer-reviewed, is still perfectly capable of being open source.

> I've never talked about (1), of course my comment was not targetted to server owners but by normal workstation owners that only have one user.

That's cool but the workstation use case is not why people are freaking out. The workstation is the place where you don't even need this bug to take over and ruin everything, because it's all under one account anyway.

Saying it's a "way to not be affected by these bugs" is pretty myopic.


Your stats need a citation; otherwise they mean nothing. CVE lists have plenty of entries for FOSS software, so it's definitely not the '99.whatever' you're claiming.


Being open source and peer-reviewed doesn't mean software won't have bugs, exploits, or that backdoors can't be hidden in it. 'Peer-reviewed' sounds good on paper, but people can still miss things, be lazy, or not understand the code they are reviewing (but it works, so accept merge!). It's hardly the silver bullet to this problem.

As for your original post: moving to FOSS only is not viable for a lot of people. Linux has no good video-editing software, no good CAD tools, GIMP has nothing on Photoshop, and even LibreOffice is rather lackluster compared to the MS Office package. On top of that, JavaScript is used in pretty much every site these days, many of which require it to run. You can't seriously expect an average user to manually whitelist JavaScript on the sites they browse.

If the FOSS world had software that worked for everyone, people would use it. Right now, the only major group this is true for however, are developers. Until this changes there will be no 'year of the Linux Desktop' or whatever.


So what you are saying is that if you never install closed source software, you will never (intentionally) install malware because you define all malware as closed source. This is a meaningless tautology. The advice you are giving is both objectively wrong and actively dangerous.

Firstly not all security issues involve installing 'malware'. Take heartbleed; a major security vulnerability in open source software. To be hit by that you didn't need to install any non-open source software. You could have had the most pure and open stack of software and hardware ever created and it would have made no difference because the flaw was an information leak that didn't involve installing software or even unintended remote execution of code. That bug was due to the existing already installed open source code copying past the end of a buffer. Similar information leaking bugs could feasibly exist due to CPU bugs or compiler bugs or undefined behaviour in the code etc (i.e. all sorts of ways that don't involve installing malware and aren't even obvious from reading the code).

Secondly, even exploits that do involve "running malware" are very often not because the user intentionally installed malware. They happen because the user did something like download and decode an image file which took advantage of a vulnerability in their (open source!) image decoding library to execute code on their machine. Again in this scenario it doesn't matter that the user would only ever intentionally install open source software, because the malware was executed unintentionally when they performed an activity they didn't expect to lead to code execution (viewing an image).

E: These CPU issues are fundamentally timing attacks that leak information. I don't need to execute code on your computer at all to run a timing attack against you that leaks valuable information - I can do it by sending packets of pure data and don't need to be able to execute arbitrary code on your machine at all. Side channels like timing attacks don't necessarily require any kind of code execution on the target machine (although obviously having a high precision local timer makes it easier). This isn't just theoretical, it has been demonstrated in reality. For example see this paper[1] which demonstrates doing timing attacks against a browser to reveal the users browser history without the need to execute javascript at all. It's all done by sending CSS, which isn't code.

[1] https://www.nds.rub.de/media/nds/veroeffentlichungen/2014/07...
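For what it's worth, the timing primitive underlying these attacks is easy to demonstrate. Here is a minimal FLUSH+RELOAD-style probe (a sketch for x86 with GCC/Clang intrinsics; calibration and noise handling omitted) that only measures whether a memory line is currently cached:

    #include <stdio.h>
    #include <stdint.h>
    #include <x86intrin.h>

    static uint8_t probe_line[64];

    /* Time a single load with the TSC; a small result means a cache hit. */
    static uint64_t time_access(volatile uint8_t *p) {
        unsigned int aux;
        uint64_t start = __rdtscp(&aux);
        (void)*p;
        uint64_t end = __rdtscp(&aux);
        return end - start;
    }

    int main(void) {
        volatile uint8_t *p = probe_line;
        (void)*p;                              /* warm the line */
        uint64_t hit = time_access(p);
        _mm_clflush((const void *)p);          /* evict it from the cache hierarchy */
        _mm_mfence();
        uint64_t miss = time_access(p);
        printf("cached: %llu cycles, flushed: %llu cycles\n",
               (unsigned long long)hit, (unsigned long long)miss);
        return 0;
    }

An attacker who can make someone else's code touch (or not touch) a shared line turns that hit/miss difference into a bit of leaked information.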


You are totally missing the point. A side channel means that someone else on the same host can read your privileged or otherwise inaccessible data. So it is fine if you only use open source and compile everything yourself, but what about that other account on the same machine or image? What about that other VM on the same hardware?


Some users will think it "not worth their time" to offer constructive commentary along with their downvote. Also, commenting on the downvotes you receive will invite more silent downvotes :P



