RAMBleed Attack – Reading Bits in Memory Without Accessing Them (rambleed.com)
385 points by ga-vu 14 days ago | 134 comments



Shortly after I joined Microsoft in 2004, all new feature work on Windows Longhorn (Vista) was halted. There had been yet another high-profile computer virus on Windows XP caused by a buffer overflow. It was an all-hands-on-deck type moment. Everyone spent weeks adding SAL (source-code annotation language) annotations to the entire Windows code base. SAL annotations enabled automated source code analysis to identify potential buffer overflow bugs and other common security vulnerabilities. Once the code base was annotated, we identified and fixed who knows how many hundreds of security vulnerabilities in Windows.

This colossal security update was released as Windows XP Service Pack 2. This is the beauty of software.
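Roughly, an annotated function looked something like this. A minimal sketch, not actual Windows code; the function is hypothetical, but _Out_writes_bytes_ / _In_reads_bytes_ are real SAL 2.0 macros from sal.h, which ships with Visual C++:

  #include <sal.h>        /* ships with Visual C++ */
  #include <string.h>

  /* Hypothetical helper. The annotations tell the analyzer how large each
     buffer is supposed to be, so "cl /analyze" can flag callers that pass a
     buffer smaller than dstSize, and implementations that write past it. */
  void CopyPacket(
      _Out_writes_bytes_(dstSize) char *dst, size_t dstSize,
      _In_reads_bytes_(srcLen) const char *src, size_t srcLen)
  {
      if (srcLen > dstSize)      /* the bounds check the analyzer expects */
          srcLen = dstSize;
      memcpy(dst, src, srcLen);
  }

Without the annotations, static analysis has no way to know how big dst is meant to be; with them, a missing bounds check becomes a machine-findable bug.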

Hardware vulnerabilities scare me. There are millions of Intel CPUs and DRAM modules installed in data centers and PCs around the world. When we find vulnerabilities in hardware how are we supposed to fix them?

The past few years have been really scary for me as a software engineer. I keep thinking that the next one will pop the software bubble.


Maybe hardware will have to become more programmable in the future so you can fix some of these flaws without needing to replace anything.


It can't just be turtles all the way down.


Can you make an FPGA out of an FPGA?


Yes


  user: FPGAhacker
  created: January 14, 2018
Checks out!

I'm already done making an FPGA out of my FPGA, and I never had one!

Is it a known phrase, or just your own sense of humour?



Ah, thank you. William James keeps chasing me.

> SAL annotations

Are there tools for developers to use these (e.g. with VC++), or are they all Microsoft internal?



I was specifically asking about SAL.



I think they talk about the annotation but not the analysis tools?


https://docs.microsoft.com/en-us/visualstudio/code-quality/c...

It's part of VC++.

But I do not recommend using SAL for modern code.


Oh wow. I'd used /analyze before but I hadn't realized it takes SAL into account. I thought it just does limited extra analysis without SAL. Thank you!


Yup, there are a few static code analysis tools that support using annotations, AFAIK.


SAL specifically, or other kinds?


I am wondering if/when the first class action lawsuits will fly against the hardware manufacturers.


Dependent types can prevent buffer overflow attacks without any additional runtime overhead.


Firmware and microcode updates.


Whenever you have sharing, you usually have side channels. We're going to see more and more of this until we have much better hardware isolation.

This is a kind of obvious variant of Rowhammer, which hasn't generally been fixed AFAIK.

Hardware in the late 2010s seems a bit like Windows in the late 1990s - a house of cards waiting to collapse, but the dominant vendors know that very few customers are willing to pay more for security and reliability vs. performance/features/new and shiny.

(So if we're lucky then we'll end up with a "more secure" version of hardware that is analogous to... Windows Vista.)


Real-world side channels are going to start getting more attention as the technology and techniques improve for extracting information from the EM radiation that circuits (not radios) give off, leaking their internal state.


I was unaware that there were high-speed digital circuits that weren't also radios.


Just ones that people have particularly strong feelings about being radios


>technology and techniques for extracting information from EM radiation

What's known to the public, you mean. I suspect even the nuttier conspiracy theorists don't have a solid grasp of just what capabilities might exist among well-funded secret institutions.


There have been many demos at security conferences of doing these kinds of things, "simple" things like key logging from a laptop with only a nearby radio.


How about extracting PGP keys with a smartphone microphone from 6 feet away?

https://www.tau.ac.il/~tromer/papers/acoustic-20131218.pdf


> Figure 4. A mobile phone (Samsung Note II) is placed 30 cm from a target laptop.

Less impressive than 6 feet, but still impressive.


Is that possible without special hardware to detect EM radiation?


Well "special hardware to detect EM radiation" is most often software-defined radios like HackRF or BladeRF among others down to TV reception dongles you can get for a few dollars (search for RTL-SDR)


I look forward to the day when SDRs are inside smartphones.


They probably already are, just hidden behind closed-source firmware.


You usually need an antenna, yes. Does that matter?


It makes attacks much more limited. You need physical access to whatever machine you're attacking, versus being able to distribute code to millions of machines at once over the internet. That matters a ton.


It matters because if the hardware is difficult and expensive to obtain, an attack can be costly and hence won't affect the common man. Industrial espionage or state-level espionage hardly changes for the common man, and since most of the hardware in use is owned by the common man, it's unlikely to get a real fix.


Effective antennas are reasonably easy to build from scratch (depending on the frequency band), and are commercially available in an insane number of variations. Either way is very cheap - as little as a few dollars.

Software defined radios can be had for as little as $10-20 for simple receive only types, up to several hundred for substantially nicer transceivers. They can also be $1000+ for special applications or R&D.


There are demos of things being done with RTL-SDR dongles which cost less than a good lunch.


> This is a kind of obvious variant of Rowhammer, which hasn't generally been fixed AFAIK.

https://en.wikipedia.org/wiki/Row_hammer#Mitigation


"optional hardware support" "some manufacturers"

I think "hasn't generally been fixed" is a fair assessment. TRR is so cheap to implement and should have been made mandatory, with generous safety margins.


“RAMBleed has been demonstrated on devices with DDR3 memory chips, and Rowhammer's bit flipping on DDR4 components. DDR4 supports a defensive technique called Targeted Row Refresh, but its efficacy is uncertain. "Given the closed-source nature by which TRR is implemented, it is difficult for the security community to evaluate its effectiveness," said Kwong. "While bit flips have been demonstrated on TRR before, the extent to which TRR mitigates RAMBleed remains an open question."”

https://www.theregister.co.uk/2019/06/11/rambleed_rowhammer_...


> While bit flips have been demonstrated on TRR before

Yeah, but when I chase down references all I can find is one saying that Rowhammer was possible on a Pixel phone, in turn referencing a paper that doesn't have the word 'pixel' in it. No study about how TRR affects the difficulty, or whether TRR was set up correctly. I want a chart showing bit flip difficulty vs. TRR aggressiveness. Because you can set TRR to be very aggressive with almost no performance impact on non-malicious workloads.


From the speed figures (many hours of setup and runtime, very low read rate), I'd suggest that restarting your important service often, and moving it between VMs, would prevent an attack like that.

In many deployments, automatic scaling and failover already provide the necessary mechanics. Then the Chaos Monkey suddenly becomes a security enforcement tool :)


> From the speed figures ...

You know what software's like, though. What today takes the blood of a virgin and a dozen dribbly candles will be doable next week with a toothpick and three cc's of mouse blood.


That was ... an interesting quote.

If I understood correctly, this is orders of magnitude worse than RowHammer, as it is kind of passive: you set some program to do the RowHammer side, within its own memory space, and then use RAMBleed to read changes from memory cells that are not exactly the ones that were RowHammered. Is that it?


> a bit is more likely to flip when the bits above and below it have the opposite charge. ... To exploit this effect, we developed novel memory massaging techniques to carefully place the victim's secret data in the rows above and below the attacker's memory row.

The secret data has to be duplicated, column-aligned, and have a single unallocated row between the two copies. Controlling the alignment of the secret data seems like a major complication for realistic exploitation.

Edit: The strategy in the paper requires allocating a bunch of physical memory from all the small blocks so that memory allocation requests from a new process are allocated deterministically to a desired physical row.
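For reference, the underlying hammering primitive itself is simple; the placement is the hard part. A rough sketch of double-sided hammering, assuming `above` and `below` already map to the DRAM rows adjacent to the victim row (which is exactly the memory massaging step described above):

  #include <stdint.h>
  #include <emmintrin.h>   /* _mm_clflush (x86 SSE2 intrinsic) */

  /* Repeatedly activate the two aggressor rows around a victim row,
     flushing the cache each time so every access actually reaches DRAM. */
  static void hammer(volatile const uint8_t *above,
                     volatile const uint8_t *below, long rounds)
  {
      for (long i = 0; i < rounds; i++) {
          (void)*above;                       /* activate aggressor row 1 */
          (void)*below;                       /* activate aggressor row 2 */
          _mm_clflush((const void *)above);   /* evict so the next read   */
          _mm_clflush((const void *)below);   /* goes to DRAM again       */
      }
  }

RAMBleed's twist is that instead of corrupting the victim, the attacker hammers and then inspects which of its own bits flipped; the flip probability depends on the values stored in the adjacent victim rows, so the secret is inferred without ever being accessed.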


> Edit: The strategy in the paper requires allocating a bunch of physical memory from all the small blocks so that memory allocation requests from a new process are allocated deterministically to a desired physical row.

Honest question, because I'm not a security expert: can you do this without having already pwned your target to such a degree that this would be unnecessary?


Probably not.

However, historically, these sorts of attacks always get better, not worse.

And while even that can sometimes be empty rhetoric, I will say that in the last 5 years I've seen a lot of security attacks that are already well beyond what even my moderately trained intuition would suggest is possible, so I have to admit I've sort of given up on trying to guess whether or not an attack can be made practical. I've seen too many mind-blowing presentations from security researchers to think I can bound their abilities safely. I wouldn't bet against them turning the attacks I'm already flabbergasted can exist into even more practical ones.

This is unrelated to the current matter, but let me give you an example: https://www.youtube.com/watch?v=_eSAF_qT_FY If you think that's trivially obvious, and you're confident you can predict how these sorts of things will play out in the future, more power to you, but I'm certainly not justified in that belief at my skill level. I'm just happy I can follow that presentation!


Yeah, it seems that whenever an exploit "doesn't seem practical for actual use" it is just one more exploit-in-the-chain away from being operationalized.

So many systems have unspecified, undocumented and undertested behaviors that have not been exploited only because no one has ever tried.


Presumably there would be value in releasing the "impractical for actual use" version to the public after you have already operationalized it, and not before.


Yes. Multi-user unix systems. Potentially even from inside a browser depending on GC details.


Yes, you can do it from an unprivileged user account, potentially allowing you to read some data from a privileged user or OS process.


> Honest question, because I'm not a security expert: can you do this without having already pwned your target to such a degree that this would be unnecessary?

The paper describes strategies for manipulating a target box into doing just this.


I am also not a security expert, but it seems like VMs that are shared in the cloud are the targets for a lot of these types of attacks. You don't need to pwn the other VM, but you don't exactly get to choose which VMs you cohabit with to target your attacks.


So basically we need to defrag our RAM now. Here we go again.


Seems RowHammer can flip bits and RAMBleed uses RowHammer to also read memory stored nearby.


These recent CPU and memory vulnerabilities are calling into question the economics of the public cloud. A lot of these bugs only really matter if you have multiple programs from different entities running on the same computer system. A company that has its own private cloud doesn't have to care about these attacks so much.

The cost of mitigating these attacks both in terms of CPU performance and increased hardware costs may offset some of the economies of scale.


Since the mention of OpenSSH can cause readers to jump to false conclusions: This attack still requires that the attacker's code runs on the victim machine; it is not a remote vulnerability.

(I don't think the authors implied otherwise, I just know that I somehow got confused for a second.)


Can we talk about these attack names? Everything bleeds these days. "RAMSack" is a legit missed opportunity.


“Bleed” has a functional purpose; it communicates that it is a side-channel leaking information.


"Bleed" is normally just used for communicating that the exploit leaks data. See Optionsbleed, Heartbleed and Ticketbleed which are not side-channel attacks afaik.


Perhaps I'm missing something really obvious, but doesn't hardware-provided memory encryption solve this quite handily? You might be able to read single bits of raw physical memory from another process, but this won't tell you anything substantial about what that process is up to.


The mitigations section of the paper discusses memory encryption


For those who find that sort of thing interesting, the domain was registered 109 days ago, February 23, 2019.

https://whois.domaintools.com/rambleed.com


Anyone else love how vulnerabilities these days have full on marketing-esque campaigns backing them? In any case, if it increases awareness, I'm mostly supportive.

Allow me to save some time, from TFA:

> Is there a CVE number?

> Yes, see CVE-2019-0174 [1].

[1] https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0174

---

> What is RAMBleed?

> Previous attacks exploited the Rowhammer effect to write (or flip) bits in the victim's memory. RAMBleed is different in that it uses Rowhammer for reading data stored inside the computer's physical memory. As the physical memory is shared among all processes in the system, this puts all processes at risk.

> What data can be read by RAMBleed?

> While the end-to-end attack we demonstrated read out OpenSSH 7.9's RSA key, RAMBleed can potentially read any data stored in memory. In practice, what can be read depends on the victim program's memory access patterns.

> What technologies are affected by RAMBleed?

> RAMBleed relies on Rowhammer-induced bit flips to read privileged memory. As such, any system that uses Rowhammer-susceptible DIMMs is vulnerable. Previous research has demonstrated bit flips on both DDR3 and DDR4 with TRR (targeted row refresh) enabled. While we demonstrated our attack on a desktop machine and an ECC enabled server machine, Rowhammer attacks have been demonstrated against both mobile devices and laptops. As such, we suspect that many classes of computers are susceptible to RAMBleed.

> How can I mitigate this issue?

> Users can mitigate their risk by upgrading their memory to DDR4 with targeted row refresh (TRR) enabled. While Rowhammer-induced bit flips have been demonstrated on TRR, it is harder to accomplish in practice.

> Memory manufacturers can help mitigate this issue by more rigorously testing for faulty DIMMs. Furthermore, publicly documenting vendor specific TRR implementations will facilitate a stronger development process as security researchers probe such implementations for weaknesses.


Relevant @patio11 article

https://www.kalzumeus.com/2014/04/09/what-heartbleed-can-tea...

I think the marketing campaigns are awesome. Makes it much easier to get resources to fix.


Wow, that's a really excellent read. Thanks.

Unless your CEO loves talking about CVEs all day long, I think we should all be grateful for these good-looking marketing campaigns for serious security issues.


I whole-heartedly agree with your broader point about having sensible names and descriptions that draw necessary attention to critical flaws. (And "+1" to the excellent related article from patio11.)

But sometimes it seems to be stretched to the point of making you feel nauseated and causing extreme distraction (which can be the point in some cases, though). Still, I wish some of the less egregious issues were described with a little less hysteria, while still conveying their seriousness. (Maybe it's my weariness of dealing with them that's speaking here.)


This really sucks: the fix is "move to DDR4 with TRR enabled", which for many people means buying a new computer. Or new RAM. Or new RAM, motherboard, CPU, etc., which is a new computer; i.e., spending a lot of money.

But there really aren't any other good options.


They need to step up their marketing game. They don't even have a Twitter account to promote RAMBleed. twitter.com/rambleed isn't taken yet ;-)


Does this affect public cloud providers? Can it be used to obtain data from other VMs running on the same host?


Secure Memory Encryption (SME) and Secure Encrypted Virtualization (SEV) should be a good counter measure.


If this attack is a concern then so is Rowhammer. So you can basically ask whether cloud providers have mitigated Rowhammer.

AMD's SEV might help protect VMs from each other.


>unlike Rowhammer, RAMBleed does not require persistent bit flips, and is thus effective against ECC memory commonly used by server computers.

Seems like mitigations for Rowhammer are not enough for RAMBleed.


SEV is a completely different thing than ECC. It encrypts the memory. RAMBleed reads raw physical bits which are encrypted and thus useless when SEV is enabled.

The paper even specifically says that SEV and other similar technologies protect against this.


ECC isn't effective mitigation against Rowhammer either.

https://www.vusec.net/projects/eccploit/

tl;dr ECC logic can't cope with 'single' errors involving > 2 bits.
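A toy model of why, a sketch and not real DIMM ECC (which uses wider codewords, e.g. 64 data + 8 check bits, but has the same failure mode): an (8,4) SECDED code corrects one flipped bit and detects two, yet three flips in one codeword can masquerade as a correctable error.

  #include <stdio.h>
  #include <stdint.h>

  /* cw[1..7] = Hamming(7,4) codeword, cw[0] = overall parity bit. */
  static void encode(uint8_t data, uint8_t cw[8]) {
      cw[3] = (data >> 0) & 1;  cw[5] = (data >> 1) & 1;
      cw[6] = (data >> 2) & 1;  cw[7] = (data >> 3) & 1;
      cw[1] = cw[3] ^ cw[5] ^ cw[7];   /* parity over positions with bit 0 set */
      cw[2] = cw[3] ^ cw[6] ^ cw[7];   /* ... with bit 1 set */
      cw[4] = cw[5] ^ cw[6] ^ cw[7];   /* ... with bit 2 set */
      cw[0] = 0;
      for (int i = 1; i < 8; i++) cw[0] ^= cw[i];
  }

  /* 0 = clean, 1 = "corrected" single error, 2 = detected uncorrectable error */
  static int decode(uint8_t cw[8]) {
      int syndrome = 0, parity = 0;
      for (int i = 1; i < 8; i++) if (cw[i]) syndrome ^= i;
      for (int i = 0; i < 8; i++) parity ^= cw[i];
      if (syndrome == 0 && parity == 0) return 0;
      if (parity == 1) {                    /* looks like a single-bit error */
          if (syndrome) cw[syndrome] ^= 1;  /* "fix" it (possibly wrongly)   */
          else          cw[0] ^= 1;
          return 1;
      }
      return 2;
  }

  int main(void) {
      uint8_t cw[8];
      encode(0x5, cw);
      cw[3] ^= 1; cw[5] ^= 1; cw[6] ^= 1;   /* Rowhammer-style triple flip */
      int verdict = decode(cw);
      uint8_t data = cw[3] | (cw[5] << 1) | (cw[6] << 2) | (cw[7] << 3);
      printf("verdict %d, data 0x%x (sent 0x5)\n", verdict, data);
      /* prints "verdict 1, data 0x2": the decoder believes it fixed a
         single-bit error and hands back silently corrupted data */
      return 0;
  }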


It has to cause a huge number of single-bit errors to find a spot suitable for a three-bit error.

If these errors are actually treated as errors, then chips will be disabled or processes will get blacklisted long before they can be used to exploit.

So this is really "ECC is often configured wrong", not "ECC isn't effective".


Not sure if this attack is feasible cross-VM as long as the memory is not deduped/over-provisioned. You need to make sure the victim allocates memory between the attacker-controlled memory. Edit: As long as memory is not deduped across VMs it should not be possible, since it won't be interleaved.


It sure sounds like it—have there been any responses from the major cloud providers? This doesn’t exactly seem like something that can be patched easily....


>have there been any responses from the major cloud providers

There does seem to have been a recent increase in HSMs being offered as part of cloud services, and while that's a good idea in general, issues like this may be part of the motivation too. One issue with this attack is that even in research conditions it seems to be pretty fundamentally slow. It took them 34 hours to do the setup phase and another 4 hours of reading, all to go after a few kilobits (0.31 bits/s, accuracy rate of 82%). That could still matter a lot when it comes to cryptographic keys, since those are all small, but it's not clear it is or ever could be feasible for sniffing significant amounts of confidential content data. It's more that if credentials can be harvested, that's the stepping stone towards taking over. So if keys are just plain moved to a minimal black box outside the system entirely, on which no customer code of any kind runs, that seems like a real mitigation for this particular leak? I mean, even beyond any basic improvements to DRAM (or successors to DRAM; I wonder how some of the non-volatile memory research, like future gens of Optane, handles this).


This is a tangent, but I find it (somewhat) annoying when FAQs don't answer their own questions. I see this happen occasionally and I always wonder if it's intentional: side-stepping an issue by raising it yourself and then answering something else.

> Can RAMBleed be detected by antivirus?

> We believe that it is very unlikely that any antivirus software on the market currently detects RAMBleed.


What should they say? It's theoretically possible for any AV to detect the bug (how else did they prove it exists?) but a simple "Yes" hardly conveys accurate information.


I find it annoying the other way around: talking in absolutes when one is not sure.


Seems like this would happen when many people ask the same question and you don't have an answer so you post the best response you can.


> Users can mitigate their risk by upgrading their memory to DDR4 with targeted row refresh (TRR) enabled.

How does one enable this feature? Is it in the BIOS?


I'm not sure if it is an option you can enable/disable in BIOS, but if your CPU and memory have support for pTRR/TRR it should be enabled.

Ivy Bridge and newer processors should support pTRR and possibly even TRR.


Do AMD processors support it?


I find it very amusing that they use the name "Feng Shui" for placing the victim page.

For those who don't understand: https://en.wikipedia.org/wiki/Feng_shui


It's also an idiom Alex Sotirov coined 10 years ago:

https://en.wikipedia.org/wiki/Heap_feng_shui


Imagine if human beings could be vulnerable to such attacks. Someone sends you a video link, you watch it, you see weird shapes appearing and disappearing for a few minutes, then the next thing you know, you wake up in a bathtub full of ice-cubes with one of your kidneys stolen.





Another fun one is Lexicon by Max Barry, except it exploits auditory language processing.

https://www.goodreads.com/book/show/16158596-lexicon


If you're into movies with zombies, Pontypool is a fun watch: https://en.m.wikipedia.org/wiki/Pontypool_(film)



This is more or less how the TV show Chuck works, except generally in a positive capacity.


I want to find a way to run each application I use on its own machine so that memory attacks (like spectre/meltdown, etc.) are theoretically impossible.

I still want to have a reasonable computing experience (e.g. copy + paste works and the experience is kind of like using one computer the normal way).

Any ideas? I was thinking some kind of remote desktop setup with a bunch of cheap boxes, each running Linux, on a LAN.


Plan 9 is really radical about this, but I'm not sure how secure that would be in practice: http://doc.cat-v.org/plan_9/4th_edition/papers/net/

But the computing experience is quite unreasonable - with copy&paste working though ;)


Plan 9 is so cool.

I'm going to guess that it's not actually a practical solution to this problem right now.


Run X on one box connected to your programs over SSH running on separate boxes?


I was thinking that. I wonder if it would work well in practice.


I am so torn about the whole vulnerability branding thing. On the one hand, it gives a great amount of visibility and motivation for brass to actually provide for getting prod patched when IT might not have had the political capital to do so otherwise. But on the other hand, I think about things like:

-- What happens when the cure is worse than the disease (some variants of Spectre), and we "have to" patch because otherwise we're "not secure against this scary sounding thing." It takes a lot of agency away from the IT department to make an informed decision about the appropriate mitigation and threat model, and turns it into a business requirement regardless of the trade-off - because brass "understands" Spectre, they don't understand CVE-1234

-- Or, theoretical exploits that aren't practical in the real world, but we spend a lot of effort or make tradeoffs to fix just because someone needs some material for their PhD.

-- What do you call it when we find another way to read RAM next year - RAMBleed2? I guess.


Spectre ended up being a handful of (non-contiguous) CVEs. It's much easier to talk about "Spectre" as a class of vulnerability than "these 6 CVEs and also that other variant that doesn't have a CVE yet".

What happens next year when there's a new way to bleed RAM? If it's anything like Spectre, we'll call it "RAMBleed Variant 2", and assign it a new CVE.


> -- What do you call it when we find another way to read RAM next year - RAMBleed2? I guess.

or RAMBleed-NG, probably.


It would have made more sense to call this one "RAMbo First Bleed". Next year it can be "RAMbo First Bleed Part II".


RAMBleed++


RamBleed MAX9


> we "have to" patch because otherwise we're "not secure against this scary sounding thing."

Either you have people with security experience who can tell you if it matters in your situation, or... Well, you won't know what the right answer is and may as well default to patching, since your OS distribution will do it.

But I don't think it's that different for named and not named vulnerabilities. The response to management can be either "we dealt / are dealing with it according to our vulnerability management policies", or whatever you'd answer when asked about any other issue. The only difference is you may be asked in the first place.


P1) I don't see the current naming model as any worse than the disjointed CVEs of today.

P2) What if all wide-impact vulnerabilities were treated like this, theoretical or not? The vulnerability still exists either way.

P3) Names are fairly easy to come up with.

Some of my views on why this approach is beneficial, https://medium.com/@9bplus/why-i-love-marketing-vulnerabilit...


Very cool and scary, but in reality, actually exploiting this without a cooperative target and a relatively quiet machine is quite unlikely. That doesn't mean it shouldn't be looked at, but don't lose sleep over it.


> actually exploiting this without a cooperative target, and a relatively quiet machine is quite unlikely.

... until someone does it somewhat reliably. I always assume my knowledge and imagination don't come close to the collective creativity of exploiters.


Yes, this. Also, there is a saying in the security world, "Attacks only get better over time."

I consult on embedded systems, and at least one medical device I work on is directly concerned with this issue. Embedded systems (cars, elevators, medical devices) face a different threat model than, say, a server sitting in a data center. These poor devices often sit vulnerable and physically unprotected, and they can often be acquired on eBay, etc. and "tortured in a garage" for months on end.

Scary stuff. Good for me and my business, bad for all of us though.


Why does every new attack have a homepage these days?


I forget if Heartbleed was the first one to have a homepage or the first one to have a catchy name, but AFAIK it was to help with the public relations angle of convincing CIOs, or their equivalents in an organization, to take the problem seriously. Someone else here can probably answer it better.


There were nicely branded jailbreaks for the iPhone well before Heartbleed.


On this subject, I'd love to see a full history of the "branded vulnerability" thing.


Here is a link up thread with 'patio11's take on it: https://news.ycombinator.com/item?id=20158079 .


Ever reported a security vulnerability to a company and had it severely downplayed before?


It kind of makes sense for all the fundamental hardware vulnerabilities.

For most software vulnerabilities, the main takeaway message for the layperson is "yesterday's release of TheThing fixes some bad stuff. Don't use old versions, update to the new release" and the details matter only for the specialists.

For attacks like Rowhammer or Meltdown or Spectre or this one, however, there's lots of information that's still going to be relevant years after the publication - many people will have vulnerable hardware that will stay vulnerable, and you can still buy vulnerable hardware and so need to evaluate if your situation requires specific hardware choices to avoid these vulnerabilities, etc. So a long-term informative website is pretty much necessary.


I imagine these researchers are proud of their findings and want to promote their work and their own personal brands. Name recognition is a big part of academia. You'll notice a "People" section immediately following the description of the vulnerability.


This paper was accepted at IEEE S&P, which is arguably the best security conference in the world, so I don't think these guys will be lacking when it comes to recognition!


Came here to ask the same.

Then thought about it and was expecting to see a comment saying “it’s so that normal people understand the vulnerabilities.”

And then I realized it’s probably also because of a desire to be seen. (It seems all participants are from academia?)


If you discovered a way to read privileged memory from any process, wouldn't you want people to know about it, and that you played a part in discovering it? Why not have somebody slap a logo on it too?


It's hard to talk about things without names. You could use the CVE but it's hard to remember one from the other and is about the same as using no names.


I personally find it much easier to fully understand the implications of a security vulnerability that can affect me and my side projects this way. I'm a full-time front-end dev and I don't follow security news explicitly. If severe vulnerabilities have dedicated home pages, it's easier to spread awareness, and more people end up sharing the same resource, which keeps everyone updated easily.


This one's a novel iteration of Rowhammer, and people find Rowhammer interesting.


Well, this sounds really bad. Though on a positive note, I guess we could try a similar exploit in meatspace to figure out if we're living in a simulation or something.

/sarc but also maybe not


That's actually a pretty cool thought. I've always scoffed at philosophies along the lines of, "what if it's all a dream/simulation" because if it's perfect, and you only escape it through death, what's the point of holding that view? But if there's a chink in the armor, it becomes suddenly very interesting. Not that I think that's likely to pan out since it so badly violates Occam's Razor, but it's an interesting possibility.


It might not be as unlikely as you think: https://www.newyorker.com/books/joshua-rothman/what-are-the-...


These arguments seem entirely unconvincing to me, in the same way that the Fermi Paradox is. Let's take some small numbers (which we got mostly by guesses, not experimentation), multiply them together, and then be shocked at what a tiny number it is.



