This colossal security update was released as Windows XP Service Pack 2. This is the beauty of software.
Hardware vulnerabilities scare me. There are millions of Intel CPUs and DRAM modules installed in data centers and PCs around the world. When we find vulnerabilities in hardware, how are we supposed to fix them?
The past few years have been really scary for me as a software engineer. I keep thinking that the next one will pop the software bubble.
created: January 14, 2018
Are there tools for developers to use these (e.g. with VC++), or are they all Microsoft internal?
There is also this: https://github.com/mre/awesome-static-analysis
It's part of VC++.
But I do not recommend using SAL for modern code.
This is a kind of obvious variant of Rowhammer, which hasn't generally been fixed AFAIK.
Hardware in the late 2010s seems a bit like Windows in the late 1990s - a house of cards waiting to collapse, but the dominant vendors know that very few customers are willing to pay more for security and reliability vs. performance/features/new and shiny.
(So if we're lucky then we'll end up with a "more secure" version of hardware that is analogous to... Windows Vista.)
What's known to the public, you mean. I suspect even the nuttier conspiracy theorists don't have a solid grasp of just what capabilities might exist among well-funded secret institutions.
Less impressive than 6 feet, but still impressive.
Software defined radios can be had for as little as $10-20 for simple receive only types, up to several hundred for substantially nicer transceivers. They can also be $1000+ for special applications or R&D.
I think "hasn't generally been fixed" is a fair assessment. TRR is so cheap to implement and should have been made mandatory, with generous safety margins.
Yeah, but when I chase down references, all I can find is one saying that Rowhammer was possible on a Pixel phone, in turn referencing a paper that doesn't have the word 'Pixel' in it. No study about how TRR affects the difficulty, or whether TRR was set up correctly. I want a chart showing bit-flip difficulty vs. TRR aggressiveness, because you can set TRR to be very aggressive with almost no performance impact on non-malicious workloads.
In many deployments, automatic scaling and failover already provide the necessary mechanics. Then the Chaos Monkey suddenly becomes a security enforcement tool :)
You know what software's like, though. What today takes the blood of a virgin and a dozen dribbly candles will be doable next week with a toothpick and three cc's of mouse blood.
The secret data has to be duplicated, column-aligned, and have a single unallocated row between the copies. Controlling the alignment of the secret data seems like a major complication for realistic exploitation.
Edit: the strategy in the paper requires allocating a bunch of physical memory so that all the small blocks get used up and memory allocation requests from a new process land deterministically in a desired physical row.
Honest question, because I'm not a security expert: can you do this without having already pwned your target to such a degree that this would be unnecessary?
However, historically, these sorts of attacks always get better, not worse.
And while even that can sometimes be empty rhetoric, I will say that in the last 5 years I've seen a lot of security attacks that are already well beyond what even my moderately trained intuition would suggest is possible, so I have to admit I've sort of given up on trying to guess whether an attack can be made practical. I've seen too many mind-blowing presentations from security researchers to think I can safely bound their abilities. I wouldn't care to bet that they won't turn the attacks I'm already flabbergasted can exist into some even more practical attack that I'm flabbergasted can exist.
This is unrelated to the current matter, but let me give you an example: https://www.youtube.com/watch?v=_eSAF_qT_FY If you think that's trivially obvious, and you're confident you can predict how these sorts of things will play out in the future, more power to you, but I'm certainly not justified in that belief at my skill level. I'm just happy I can follow that presentation!
So many systems have unspecified, undocumented and undertested behaviors that have not been exploited only because no one has ever tried.
The paper describes strategies for manipulating a target box into doing just this.
The cost of mitigating these attacks both in terms of CPU performance and increased hardware costs may offset some of the economies of scale.
(I don't think the authors implied otherwise, I just know that I somehow got confused for a second.)
Allow me to save some time, from TFA:
> Is there a CVE number?
> Yes, see CVE-2019-0174.
> What is RAMBleed?
> Previous attacks exploited the Rowhammer effect to write (or flip) bits in the victim's memory. RAMBleed is different in that it uses Rowhammer for reading data stored inside the computer's physical memory. As the physical memory is shared among all processes in the system, this puts all processes at risk.
> What data can be read by RAMBleed?
> While the end-to-end attack we demonstrated read out OpenSSH 7.9's RSA key, RAMBleed can potentially read any data stored in memory. In practice, what can be read depends on the victim program's memory access patterns.
> What technologies are affected by RAMBleed?
> RAMBleed relies on Rowhammer-induced bit flips to read privileged memory. As such, any system that uses Rowhammer-susceptible DIMMs is vulnerable. Previous research has demonstrated bit flips on both DDR3 and DDR4 with TRR (targeted row refresh) enabled. While we demonstrated our attack on a desktop machine and an ECC enabled server machine, Rowhammer attacks have been demonstrated against both mobile devices and laptops. As such, we suspect that many classes of computers are susceptible to RAMBleed.
> How can I mitigate this issue?
> Users can mitigate their risk by upgrading their memory to DDR4 with targeted row refresh (TRR) enabled. While Rowhammer-induced bit flips have been demonstrated on TRR, it is harder to accomplish in practice.
> Memory manufacturers can help mitigate this issue by more rigorously testing for faulty DIMMs. Furthermore, publicly documenting vendor specific TRR implementations will facilitate a stronger development process as security researchers probe such implementations for weaknesses.
I think the marketing campaigns are awesome. Makes it much easier to get resources to fix.
But sometimes it seems to be stretched to the point of making you feel nauseated and causing extreme distraction (which can be the point in some cases, though). Still, I wish some of the less egregious issues were described with a little less hysteria, while still conveying their seriousness. (Maybe it's my weariness of dealing with them that's speaking here.)
But there really aren't any other good options.
AMD's SEV might help protect VMs from each other.
Seems like mitigations for Rowhammer are not enough for RAMBleed.
The paper even specifically says that SEV and other similar technologies protect against this.
tl;dr: ECC logic can't cope with "single" error events involving more than 2 flipped bits.
If these errors are actually treated as errors, then chips will be disabled or processes will get blacklisted long before they can be used in an exploit.
So this is really "ECC is often configured wrong", not "ECC isn't effective".
There does seem to have been a recent increase in HSMs being part of cloud offerings, and while that's a good idea in general, issues like this may be part of the motivation too.

One issue with this attack is that even in research conditions it seems to be pretty fundamentally slow. It took them 34 hours to do the setup phase and another 4 hours of reading, all to go after a few kilobytes (0.31 bits/s, accuracy rate of 82%). That could still matter a lot when it comes to cryptographic keys, since those are all small, but it's not clear it is or ever could be feasible for sniffing significant amounts of confidential content data. It's more that harvested credentials would be the stepping stone towards taking over.

So if keys are just plain moved to a minimal black box outside the system entirely, on which no customer code of any kind runs, that seems like a real mitigation to this particular leak? I mean, even beyond any basic improvements to DRAM (or successors to DRAM; I wonder how some of the NVDRAM research, like future gens of Optane, handles this?).
> Can RAMBleed be detected by antivirus?
> We believe that it is very unlikely that any antivirus software on the market currently detects RAMBleed.
How does one enable this feature? Is it in the BIOS?
Ivy Bridge and newer processors should support pTRR and possibly even TRR.
For those who don't understand: https://en.wikipedia.org/wiki/Feng_shui
I still want to have a reasonable computing experience (e.g. copy + paste works and the experience is kind of like using one computer the normal way).
Any ideas? I was thinking some kind of remote desktop setup with a bunch of cheap boxes, each running Linux, on a LAN.
But the computing experience is quite unreasonable - with copy&paste working though ;)
I'm going to guess that it's not actually a practical solution to this problem right now.
-- What happens when the cure is worse than the disease (some variants of Spectre), and we "have to" patch because otherwise we're "not secure against this scary sounding thing." It takes a lot of agency away from the IT department to make an informed decision about the appropriate mitigation and threat model, and turns it into a business requirement regardless of the trade-off - because brass "understands" Spectre, they don't understand CVE-1234
-- Or, theoretical exploits that aren't practical in the real world, but we spend a lot of effort or make tradeoffs to fix just because someone needs some material for their PhD.
-- What do you call it when we find another way to read RAM next year - RAMBleed2? I guess.
What happens next year when there's a new way to bleed RAM? If it's anything like Spectre, we'll call it "RAMBleed Variant 2", and assign it a new CVE.
or RAMBleed-NG, probably.
Either you have people with security experience who can tell you if it matters in your situation, or... well, you won't know what the right answer is and may as well default to patching, since your OS distribution will do it.
But I don't think it's that different for named and not named vulnerabilities. The response to management can be either "we dealt / are dealing with it according to our vulnerability management policies", or whatever you'd answer when asked about any other issue. The only difference is you may be asked in the first place.
P2) What if all wide-impacting vulnerabilities were treated like this, regardless of theory or not; the vulnerability still exists.
P3) Names are fairly easy to come up with.
Some of my views on why this approach is beneficial, https://medium.com/@9bplus/why-i-love-marketing-vulnerabilit...
... until someone does it somewhat reliably. I always assume my knowledge and imagination do not come close to the collective creativity of exploiters.
I consult on embedded systems, and at least one medical device I work on is directly concerned with this issue. Embedded systems (cars, elevators, medical devices) face a different threat model than, say, a server sitting in a data center. These poor devices sit vulnerable and physically unprotected, and often they can be acquired on eBay, etc., and "tortured in a garage" for months on end.
Scary stuff. Good for me and my business, bad for all of us though.
For most software vulnerabilities, the main takeaway message for the layperson is "yesterday's release of TheThing fixes some bad stuff. Don't use old versions, update to the new release" and the details matter only for the specialists.
For attacks like Rowhammer or Meltdown or Spectre or this one, however, there's lots of information that's still going to be relevant years after the publication - many people will have vulnerable hardware that will stay vulnerable, and you can still buy vulnerable hardware and so need to evaluate if your situation requires specific hardware choices to avoid these vulnerabilities, etc. So a long-term informative website is pretty much necessary.
Then I thought about it and was expecting to see a comment saying "it's so that normal people understand the vulnerabilities."
And then I realized it’s probably also because of a desire to be seen. (It seems all participants are from academia?)
/sarc but also maybe not