
What Spectre and Meltdown Mean for WebKit - pizlonator
https://webkit.org/blog/8048/what-spectre-and-meltdown-mean-for-webkit/
======
cm2187
I wonder if this shouldn't make us question whether we should still allow all
websites to run javascript by default.

There are websites that genuinely need to run some code, like webmails, online
trading platforms, online games, etc. But 99% of the websites have no good
reason to do so. Javascript is used to make up for the shortcomings of
html/css (different rendering for different screen sizes, lack of local
validation of forms, etc), for the mere convenience of developers and for user
hostile activities (tracking, messing with default behavior of the mouse,
keyboard, clipboard, scrolling, etc).

This might be an opportunity to revisit html and make it better. An
ecommerce site, a newspaper or a blog should have no reason to execute client
side code to render. Once we have a good enough markup alternative, executing
javascript can become an optional feature requiring user authorisation, like
accessing your camera or location.

[edit] also at the very least it should lead us to question the chain of trust
of javascript. When I visit abc.com I should only execute javascript from
abc.com or a subdomain. No script either hosted on a non abc.com domain or
appearing in an iframe should be executed.
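
A site can already approximate this first-party-only policy for itself (though it is the site's choice, not something the visitor can impose) with a Content-Security-Policy header, for example:

```http
Content-Security-Policy: script-src 'self' *.abc.com; frame-src 'none'
```

`script-src 'self' *.abc.com` limits script execution to abc.com and its subdomains; `frame-src 'none'` blocks iframes entirely. Making such a policy enforceable by the visitor by default would be the actual change being proposed.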

~~~
krapp
> I wonder if this shouldn't question whether we should still allow all
> websites to run javascript by default.

Plenty of harmful things are done with C and C++ but no one is saying we
should deactivate native apps written in unsafe languages, or not allow anyone
to program a GUI unless they can justify the use of canvas space. Yet the web,
arguably the most successful and free (as in both beer and freedom) and
accessible software platform yet devised, is the only one on which people -
erstwhile _hackers_ - say that code should be considered a privilege and
not a right, or that it doesn't really belong on the web at all, despite
javascript being on the web for ~20 years now.

What you're describing seems a lot like adding Windows UAC prompts to the web
- no one would find that better, particularly given that anyone can simply
turn javascript off in their browser already. This isn't even a javascript
problem per se - you could erase javascript from the universe and make the web
as plain as you like and Spectre and Meltdown would still exist.

> also at the very least it should lead us to question the chain of trust of
> javascript. When I visit abc.com I should only execute javascript from
> abc.com or a subdomain. No script either hosted on a non abc.com domain or
> appearing in an iframe should be executed.

That seems reasonable, and I would be willing to bet, possible to configure
with modern tools. But pretending javascript is a second-class citizen on the
web isn't the answer.

~~~
cm2187
Random, unexpected, untrusted c/c++ written code doesn't execute on my machine
every time I browse a website. The only c/c++ code is code I downloaded and
installed or that my OS vendor trusted to include in the OS. It's quite
different from javascript where the code is literally executing uninvited,
often by 3rd or 4th or 5th parties of the website I visit.

~~~
krapp
>Random, unexpected, untrusted c/c++ written code doesn't execute on my
machine every time I browse a website.

It does every time you run a native application, for the same values of
"random, unexpected and untrusted."

>It's quite different from javascript where the code is literally executing
uninvited, often by 3rd or 4th or 5th parties of the website I visit.

Fine. Turn it off then. That's an option you don't really have in any other
runtime, so take advantage of it. But just because you don't want javascript
on the web doesn't mean it doesn't belong there when authors choose to include
it as part of the content they serve.

Also, how exactly does one run "3rd, 4th or 5th party" javascript?

~~~
cm2187
> Also, how exactly does one run "3rd, 4th or 5th party" javascript?

You load a page that includes 3rd party scripts/iframes that themselves load
4th party scripts that themselves load 5th party scripts, etc. You often
notice that when you start blocking an ad in your browser and suddenly 10
others disappear at the same time.

------
Game_Ender
This is a great article on the subject. I like the clear explanation, with
code examples of the fixes.

I don’t think I will ever forget the line “we cannot trust branches”, it’s
just amazing that such a fundamental part of the CPU was broken here.

~~~
syncsynchalt
I've been explaining to friends that Spectre is like discovering that ESP
exists.

Not that Spectre attacks are going to be easy to pull off, but it really makes
you reconsider everything that you thought you could take for granted.

~~~
om2
The scary thing is that without mitigations, it is shockingly easy to pull
off. WebKit engineers made multiple fully working exploits internally and we
would be hard pressed to do that for other kinds of vulnerabilities. Building
full exploits out of your run of the mill use-after-free bug is much harder
and requires specific expertise.

Fortunately these mitigations make it way harder.

------
NelsonMinar
I really hate to see this rush to "make sure our code doesn't use branch
prediction". It may be the best fix we have available now, but it's going to
create a legacy of bizarre inefficient machine code that will last forever.

~~~
alien_at_work
I feel the opposite. I'd like nothing better than for this to be the end of
branch prediction. I can think of no technology more detrimental to
high level languages than branch prediction. How many elegant, beautiful
algorithms are replaced by bizarre data structures, etc., because avoiding
pipeline stalls trumps any other optimization you can make?

Most of what the CPU is doing, we can ignore in high level languages
because the compiler/optimizer can turn our high level code into the
appropriate machine code to take advantage of deep hardware magic. Not branch
prediction, though: no matter how high level my language, I still must
consider pipeline size, cache alignment and so on when designing a novel
container or algorithm, or risk it being inexplicably slow.

Branch prediction is, and always was a hack. Let it die.

~~~
ambulancechaser
But removing branch prediction would restore those stalls and
inefficiencies as the default when you don't use those strange data
structures, right?

The effect of this seems to be "I wish everything was slower and then the
naive implementation would be the best implementation".

~~~
alien_at_work
>restore those stalls and inefficiencies as the way things are when you don't
use those strange data structures, right

No, a stall is only a thing because branch prediction exists. Get rid of it
and I can go back to not caring about the deep underlying architecture of the
hardware I may potentially be running on. And _not_ doing branch prediction
isn't inefficient... that's a bizarre statement. Branch prediction and
speculative execution are themselves inefficiencies (i.e. spending power to do
things that will turn out to not be needed fairly often).

>the naive implementation would be the best implementation

Not the naive implementation. A well thought out, developed implementation
that is defeated by deep hardware details. The promise of high level languages
was always to avoid having to know these sorts of details. In designing an
algorithm I should be worried about things like how often I access data (e.g.
can I cut down on how many accesses with some clever structure?) and the like.
Now that's all trumped by deeply knowing the low level implementation.

~~~
zzzcpan
Nothing changes for high level languages. If they are high level enough they
can still fix Spectre and abstract away deep hardware details.

~~~
alien_at_work
That's exactly my point: you _cannot_ abstract away details of pipeline size.
You must know this to get the best performance out of your structures and
algorithms. There is no way for a compiler/optimizer to look at your container
implementation and say "oh, this will potentially overflow the pipeline, let's
split the data structure into multiple parts and change the code to handle
this new access strategy"... that would be creating entirely new code (what
would stepping through a debugger look like on such code?).

------
80x25
These mitigations feel like a half measure. To quote the Spectre paper:

"Even code that contains no conditional branches can potentially be at risk."

"long-term solutions will require that instruction set architectures be
updated to include clear guidance about the security properties of the
processor, and CPU implementations will need to be updated to conform."

It seems too early to declare Spectre class attacks mitigated by the
mechanisms presented in the OP.

~~~
WillReplyfFood
Translated, this basically means: for real security to exist, the chip has to
be open source down to the layout.

This will not happen. So basically, the interests of the one outweigh the
interests of the many, and the many suffer for what, exactly?

~~~
dingo_bat
> open source down to the layout.

I don't see what open source has got to do with any of this.

~~~
vog
More security experts would be encouraged to have a look at the design and to
find flaws early on.

Of course, we all know that this doesn't always happen, see OpenSSL. However,
once a major incident (Heartbleed) happened, they did: Many more OpenSSL
issues were found and fixed, forks with different trade-offs came into place.
For example, LibreSSL traded backwards compatibility with ancient systems for
a smaller code base and increased security.

Since CPU designs are not Open Source, and on top of that flooded with
patents, nothing like that will happen in this space. Intel and AMD are on
their own, rather than having their design checked by a motivated
international research community.

~~~
dingo_bat
But these attacks (Meltdown/Spectre) are on a fundamental design approach,
one that was conceived, developed and researched in the open. People in
colleges all over the world study it. Do you really think this would
have been caught much sooner if Intel had released all schematics and layouts
to the public?

~~~
vog
I'm just saying that in general, the incentive for a scientist to put work
into an open system is orders of magnitude higher than to put work into a
closed system.

To provide a similar example:

The crypto experts around Daniel J. Bernstein and Tanja Lange stated publicly
at 34C3 that they refused to perform crypto analysis on a certain algorithm
that was patented. But they (and others) published good crypto analysis
results (working attacks!) just a few months after the patent expired.

~~~
UncleEntity
> I'm just saying that in general, the incentive for a scientist to put work
> into an open system is orders of magnitude higher than to put work into a
> closed system.

They already do that, I'm sure you can find a multitude of papers on branch
prediction and speculative execution if you simply took the time to look.
Probably even some by the very same people who designed the Intel chips
causing all the fuss.

~~~
vog
Please refrain from strawman arguments.

Nobody said there is no research, just that openness would lead to more
research. Even "a multitude of papers" was obviously not enough to catch this
earlier.

On top of that, please refrain from personal attacks.

------
bakery2k
Mitigations for Spectre and Meltdown are also being added to the JavaScript
VMs in Chrome [1], Firefox [2] and IE/Edge [3].

Are similar mitigations also needed in the VMs for other dynamic languages,
such as CPython/PyPy, Ruby MRI and Lua/LuaJIT? What about the JVM and
Microsoft's CLR?

Or are these other VMs not susceptible to this form of attack?

[1] [https://www.chromium.org/Home/chromium-
security/ssca](https://www.chromium.org/Home/chromium-security/ssca)

[2] [https://blog.mozilla.org/security/2018/01/03/mitigations-
lan...](https://blog.mozilla.org/security/2018/01/03/mitigations-landing-new-
class-timing-attack/)

[3]
[https://blogs.windows.com/msedgedev/2018/01/03/speculative-e...](https://blogs.windows.com/msedgedev/2018/01/03/speculative-
execution-mitigations-microsoft-edge-internet-explorer/)

~~~
pizlonator
> Are similar mitigations also needed in the VMs for other dynamic languages,
> such as CPython/PyPy, Ruby MRI and Lua/LuaJIT?

Yes.

> What about the JVM and Microsoft's CLR?

Yes.

~~~
bakery2k
Thanks. I suspected as much for LuaJIT because, like a JavaScript engine, it
supports JIT compilation of untrusted code. It's interesting to hear that PUC
Lua also needs these mitigations even though it's only an interpreter.

As for implementations of Python and Ruby, they might not worry about these
attacks - because AFAIK they do not try to support secure execution of
untrusted code.

------
roca
I see how this addresses some examples of Spectre, and it's clever (hi Filip!)
and probably worth doing. However, not all possible Spectre-related
vulnerabilities involve type confusion.

E.g. imagine if we had JS code that does

    
    
      let x = bigArray[iframeElem.contentWindow.someProperty];
    

Conceivably that could get compiled to some mix of JIT code and C++ that does

    
    
      if (iframeElemOrigin == selfDocumentOrigin) {
        index = ... get someProperty ...
        x = bigArray[index];
      } else {
        ... error ...
      }
    

and speculative execution could let the property value leak into the cache.

So I'm not optimistic about the prospects for fully automating defenses
against Spectre without hardware fixes :-(.

~~~
om2
We assume that no if statement implementing a security check is safe without
additional mitigation. It's not just bounds checks and type checks.

Origin checks are a special case worth considering. I am not sure what you
describe would be exploitable in practice (reasons too long to fit in this
margin) but worth looking into.

------
TheCoreh
Restricting performance.now()'s resolution to 1ms is really bad news :( Once
the other mitigations are in place, will the WebKit team consider increasing
the resolution again? Can we get a Developer menu or Web Inspector option to
temporarily enable the old resolution again?

~~~
flohofwoe
1ms is absolutely overkill IMHO and will require rewriting a lot of code which
measures frametime this way. This is not an issue for the precision reductions
in the other browsers (Chrome's 100us is just about what's still tolerable,
Firefox's 20us is fine). It's probably better to round the measured frametime
to the next 'vsync frametime' like 16.667ms or 33.333ms anyway, but I expect a
lot of WebGL demos and games to break :/
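
That rounding could be sketched as follows (a hypothetical helper, not an API from any browser; real refresh rates vary):

```javascript
// Snap a measured frame time to the nearest vsync quantum, rather than
// trusting a deliberately coarsened performance.now(). Hypothetical
// helper for illustration only.
function snapToVsync(dtMs, refreshHz = 60) {
  const quantum = 1000 / refreshHz; // 16.667ms at 60Hz
  return Math.max(1, Math.round(dtMs / quantum)) * quantum;
}
```

With this, a measurement of 15ms or 17ms both snap to 16.667ms, and 30ms snaps to 33.333ms.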

~~~
iainmerrick
What's an example scenario where you need an ultra-high-resolution timer?

For WebGL, don't you just rely on requestAnimationFrame to call you at the
right times, and draw your frame as quickly as possible? What's the benefit of
being able to measure the frame time to the nearest microsecond, rather than
the nearest millisecond?

~~~
andrewmcwatters
Timing behavior in literally any game.

~~~
iainmerrick
Does the timer need to be much higher resolution than the frame rate? If so,
why?

Maybe you want to measure the frame rate precisely? That is certainly
something you can do, but not usually something you need to do.

------
username223
> WebKit is affected because in order to render modern web sites, any web
> JavaScript engine must allow untrusted JavaScript code to run on the user’s
> processor... WebKit is affected by both issues because WebKit allows
> untrusted code to run on users’ processors.

There's the elephant in the room, though this article doesn't go far enough.
Trust isn't binary. It's not that the code isn't formally verified, or doesn't
type check. It's that the people running it don't consciously install it, the
people distributing it don't know or care what it is, and the people writing
it are often adversaries far more sophisticated than the people running it.

~~~
EgoIncarnate
Unfortunately the genie is out of the bottle already. The web by and large
requires javascript, and it's not likely to change anytime soon. So we are
stuck with the situation where the defenders (the hardware and software
designers) have to be correct 100% of the time on a platform that is
constantly changing. The attackers only need to be right once. Maybe someday
everything will just be streaming video or something, but that is a long way
off.

As for relying on users' ability to determine what code is safe to download...
It's not like users can be relied on to do the right thing all the time even
without javascript. People respond to phishing attacks, download executables
sent to their email, reuse passwords... I think a sandbox is a better solution
than relying on users to understand what code is trustworthy and what isn't.

~~~
username223
> The web by and large requires javascript, and it's not likely to change.

While I am mostly a pessimist, this is one place where I harbor a tiny sliver
of optimism. Once users recognize that something is awful, and have the means
to get rid of it, they eventually do so. Client side Java and Flash are dead,
because everyday users figured out that they sucked, and were given the option
not to use them. We're getting close to a situation where average people
recognize that millisecond auctions to run arbitrary code on their machines
are a bad idea, and those people also have the tools to say "no." Hope springs
eternal...

~~~
pjmlp
> Client side Java and Flash are dead

Not paying attention to the news? Just wait until WebAssembly gets more
mature.

~~~
EgoIncarnate
I don't think WebAssembly is going to bring back Flash or Java Applets in any
meaningful way. Maybe someone will hack something together and use it for
niche old Flash game sites, but it's hard to see a reason for widespread
adoption of anything new. People have moved on.

Flash and Java Applets may not be "dead" forever, but they also are not likely
to ever be more than undead zombies.

~~~
pjmlp
Again, not paying attention to the news.

[http://teavm.org/](http://teavm.org/)

[https://forums.adobe.com/thread/2432179](https://forums.adobe.com/thread/2432179)

[http://www.mono-project.com/news/2017/08/09/hello-
webassembl...](http://www.mono-project.com/news/2017/08/09/hello-webassembly/)

[https://www.hanselman.com/blog/NETAndWebAssemblyIsThisTheFut...](https://www.hanselman.com/blog/NETAndWebAssemblyIsThisTheFutureOfTheFrontend.aspx)

[https://github.com/Microsoft/xaml-
standard/issues/197](https://github.com/Microsoft/xaml-standard/issues/197)

~~~
EgoIncarnate
I think we are talking about different things. Half your links have nothing to
do with Flash or Java Applets (.net, XAML?).

You seem to be talking about new platforms that may derive some part from the
old. I'm not saying WebAssembly won't be used for new platforms as that's sort
of the whole point of it.

What I'm saying is that WebAssembly won't bring back people making Flash .swfs
or writing classes derived from java.applet.Applet in any mainstream way. In
that sense Flash and Java Applets are dead. Maybe someone will hack something
together that allows you to run them, but that won't bring the developers back
to the old platforms.

If Oracle or Adobe announce something, maybe the industry will jump on board,
but currently I don't see anything in your list that makes me change my mind
about Flash and Java Applets being dead and not coming back.

~~~
pjmlp
TeaVM allows any Java application to be ported into WebAssembly, be it an
applet or not.

As3-WebAssembly is a compiler for porting ActionScript 3, Flash's programming
language, to WebAssembly. Already integrated into FlashDevelop.

Microsoft's and Xamarin's efforts to port Mono to WebAssembly will allow
making Silverlight apps again.

You are free to believe this won't turn into anything; I rather think we
will end up with WebAssembly + Canvas/WebGL in a couple of years.

~~~
EgoIncarnate
As I said, we are talking about different things. I am talking about running
binary executables from old platforms and people making new binaries in the
old way again, you seem to be talking about porting old source code to new
platforms.

The new platforms may use the old languages, but the runtime libraries are not
the same. The new platforms have different APIs and functionalities (no
threads in TeaVM for instance).

To simplify my point: ActionScript is just a language. Adobe Flash was much
more than that. Maybe companies really will find some compelling reason to
start using ActionScript again... but Adobe Flash and its SWFs will still be
dead.

------
bradgessler
One of the most brilliant features in the latest versions of Safari is per-
website settings for ad blockers, notifications, location, etc. Between these
two vulnerabilities and the generally obnoxious usage of JS on websites, I’d
love to see the addition of a per-website setting for JS. I would personally
turn it off by default and only whitelist a handful of websites.

~~~
DamonHD
This is what I already get with FF + NoScript.

And yes, I am bored by sites that I visit for the first time that will not
show me any meaningful content until I enable layer upon layer of JS. Just not
necessary, nor is it safe.

~~~
pmontra
Sometimes you can open the developer tools and hack the CSS, because some of
those sites hide content with a display: none unless some JavaScript removes
it. If it's a site you often navigate to, there are addons like Stylus that
can make those changes permanent.

------
eridius
I'm really curious how SharedArrayBuffer can be used to create high-precision
timers. I'm also wondering what's now broken due to SharedArrayBuffer being
removed.

~~~
indexerror
From page 10-11 of [1]:

> A shared resource provides a way to build a real counting thread with a
> negligible overhead compared to a message passing approach. This already
> raised concerns with respect to the creation of a high-resolution clock
> [19]. In this method, one worker continuously increments the value of the
> buffer without checking for any events on the event queue. The main thread
> simply reads the current value from the shared buffer and uses it as a high-
> resolution timestamp.

[1]
[https://gruss.cc/files/fantastictimers.pdf](https://gruss.cc/files/fantastictimers.pdf)
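
A minimal sketch of that counting thread (names are illustrative; modern browsers additionally gate SharedArrayBuffer behind cross-origin isolation):

```javascript
// Sketch of the counting-thread timer described in the quoted paper.
// In a browser, the counting loop runs in a Web Worker that shares a
// buffer with the main thread:
//
//   // worker.js: spin forever, never yielding to the event queue
//   onmessage = (e) => {
//     const ticks = new Uint32Array(e.data);
//     for (;;) ticks[0]++;
//   };
//
// The main thread posts the SharedArrayBuffer to the worker once, then
// reads the counter whenever it needs a timestamp:
const sab = new SharedArrayBuffer(4);
const ticks = new Uint32Array(sab);

function timestamp() {
  return ticks[0]; // resolution limited only by the worker's loop speed
}

// Simulated increments stand in for the worker here so the sketch is
// self-contained:
for (let i = 0; i < 100000; i++) ticks[0]++;
const t0 = timestamp(); // 100000 in this simulation
```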

------
Abishek_Muthian
Good work by the WebKit team in putting this together; it's just as good as
Raspberry Pi's note on why it isn't affected by Spectre & Meltdown -
[https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-to-spectre-or-meltdown/](https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-to-spectre-or-meltdown/)

Of course here, the mitigations are detailed as well.

------
ikeboy
I'm starting to think naming multiple similar vulnerabilities with different
names was a mistake. Every article I read, I have to go back and remind myself
which one was which.

~~~
brianberns
Giving them similar names certainly wouldn't make it any easier to remember
which one is which.

------
tasty_freeze
With current CPU design there is only so much that can be done, but the
architectural fix seems obvious to me, one that doesn't give up the
performance benefits of speculative execution and branch prediction.

The data in caches, the TLB, and the state tracked by branch prediction must
be segregated by the protected mode bit. That isn't sufficient to solve all
the problems, but seems like it solves a number of them.

~~~
pizlonator
The problem that JSC is dealing with is that Spectre allows untrusted JS code
to read things from the memory of the process that runs it. We don't want to
allow that, but I don't think your proposed solution would fix it.

~~~
gpderetta
If browsers are careful not to put any sensitive information in the same
process that is executing JS code, sandboxing could work.

This seems a sensible thing anyway, as it would mitigate other attacks as
well. Allocating one process per JS-using webpage is expensive though.

~~~
om2
Chrome is working towards something like this with Site Isolation, and it’s a
good idea. Unfortunately it’s not a complete defense.

First, web pages can load cross origin resources, and that may be enough to
get data or a cookie into the attacker’s web process. Second, some risks of
this attack (e.g. ASLR bypass) don’t require any data from another origin to
be in process to be dangerous.

~~~
gpderetta
> web pages can load cross origin resources

I know nothing about web technologies, but maybe this is something we should
stop doing, at least for any executable resource? This would prevent JS ads I
guess, so win/win?

> Second, some risks of this attack (e.g. ASLR bypass) don’t require any data
> from another origin to be in process to be dangerous.

yes, ASLR seems to be busted.

~~~
om2
> > web pages can load cross origin resources
>
> I know nothing about web technologies, but maybe this is something we should
> stop doing, at least for any executable resource? This would prevent JS ads I
> guess, so win/win?

It's arguably a flaw in the design of the web that loading cross-origin
resources is allowed by default. Unfortunately, there isn't a great path to
changing this. We may be able to allow websites to opt out of having their
resources loaded cross-origin, maybe (similar to X-Frame-Options but for
resource types other than frames).

------
ENOTTY
Aren't Intel and AMD shipping new fence instructions that prevent speculative
execution from progressing beyond a certain point? Why doesn't WebKit use
those?

~~~
om2
The fence instruction that Intel recommends (lfence) is way slower than the
techniques described here. We measured a 5x slowdown on Web Assembly trying to
use it.

Also we have been working on these mitigations since well before Intel made
their suggestion.

~~~
br1
Have you considered arr[max(i, arr.len)] instead of AND?

~~~
tetromino_
Max() is not a CPU instruction. It's an abstraction, a function that could be
implemented either using a branch (which defeats the whole purpose) or, on
some architectures, with something like cmovXX.

Perhaps they wanted a solution that works on Arm, which I think doesn't have
cmovXX. Or maybe Intel does speculation with cmovXX used on array index.

~~~
olliej
Cmov can speculate as well (it makes sense from a performance standpoint but
that isn’t the same as secure ;) )

~~~
om2
On some CPUs, cmov is known not to speculate, but in those cases it is
apparently super slow.

------
0x7f
I don't understand why they say Spectre can control branching in WebKit.
Spectre is an information leak attack; it doesn't allow modifying memory. It
could allow finding x in an `x == valueToCheck` check. But if this is
possible, it was a security issue even before Spectre, only harder to
exploit, and Javascript code should not be allowed to control `x`.

~~~
0x7f
I think this part is misleading =>

"Spectre means that an attacker can control branches, so branches alone are no
longer adequate for enforcing security properties."

I think they meant "Spectre means that an attacker can ABUSE branches", and in
that they are right.

~~~
pizlonator
This is clarified later: “Spectre means that branches are no longer sufficient
for enforcing the security properties of read operations in WebKit.“

It’s totally true that Spectre allows attackers to control reads, but when
they do this, they enter a non-destructive execution mode. They can read but
anything they write is thrown away. (To our knowledge, lol.)

~~~
0x7f
Thank you for clarifying, that's what I thought. Still, I find that the
article is not clear enough on this point; I feel that some people will read
this as "OMG they can control execution remotely, this is the apocalypse". I
mean, to an extent yes, but the results are dropped like you said and the main
execution path shouldn't be affected. It only facilitates information leaks
AFAIK.

------
bangonkeyboard
What is the reasoning behind reusing version numbers?

~~~
js2
Apple does this sometimes for macOS. It's probably because it's an application
update, not an OS update (so OS patch version does not change), and doesn't
require a reboot.

~~~
hamandcheese
This doesn’t explain why they kept Safari at the same version number.

~~~
js2
Oh you're right. Specifically, they kept the CFBundleShortVersionString the
same, but updated the CFBundleVersion. My guess is that updating
CFBundleShortVersionString is a marketing decision. Maybe Apple is waiting
till all the WebKit fixes related to Spectre/Meltdown are available before
updating it?

~~~
jurip
Also various Apple systems define CFBundleShortVersionString to be of the
format A.B.C, no fourth element allowed. macOS versioning is still stuck with
10 as the A, B identifies the major release and C is the point release that is
effectively also marketing driven. With iOS the first number is used for the
major releases so C is available for bug fixes.

------
fpoling
From the patches it seems that the write operation to the array is also
protected with masking. What is the reason for that? If, due to a bug
unrelated to Spectre, JS code could trigger a write beyond the allocated
memory, restricting writes to the next power of two only slightly complicates
the attack. This is very different from the reading situation.

~~~
olliej
The masking is done in addition to bounds checks, not instead of, so no it
isn’t adding a mechanism to write out of bounds - the masking is purely to
limit the upper bounds of speculative load distance.

As far as the attack: writing to memory pulls the affected page into the cache
just as a read does

------
fpoling
It is not clear from the article how WebKit avoids changing semantics with
array index masking. In JS out-of-bounds access should return undefined, not a
random element of the array. To preserve that a branch still has to be made.

~~~
bennofs
If you combine index masking with a branch, that should still be OK. For
example, if you do `if(idx >= arrayLength) return undefined else array[idx &
mask]` then the CPU can only predict "return undefined" or "array[idx &
mask]", neither of which can cause any harm.
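
That combination can be sketched as follows (illustrative only; `maskedRead` and `nextPowerOfTwoMask` are made-up names, and real engines emit this logic in JIT-compiled code rather than JavaScript):

```javascript
// Bounds check preserves JS semantics; the mask bounds how far a
// mispredicted (speculative) read can reach past the allocation.
function nextPowerOfTwoMask(length) {
  let m = 1;
  while (m < length) m <<= 1;
  return m - 1;
}

function maskedRead(array, idx) {
  if (idx >= array.length) return undefined; // architectural behavior
  // Even if the branch above is speculated past, the masked index stays
  // within the allocation's next power-of-two region.
  return array[idx & nextPowerOfTwoMask(array.length)];
}
```

Even a wildly out-of-range index masks down to below the next power of two above the length, so speculation cannot reach arbitrary memory.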

~~~
fpoling
Does that imply that WebKit always allocates in powers of two, so that a
script cannot read the memory of unrelated allocations between the array
length and the nearest 2^n?

~~~
iainmerrick
No, it means attacks are still possible in the memory region just after the
buffer, when it's smaller than the next power of two. Not ideal but it's
better than nothing.

------
ams6110
> All type checks are also vulnerable. For example, if some type contains an
> integer at offset 8 while another type contains a pointer at offset 8, then
> an attacker could use Spectre to bypass the type check that is supposed to
> ensure that you can’t use the integer to craft an arbitrary pointer.

So, even if WebKit were entirely written in Rust, we'd still be fucked. I'm
really starting to think it's about time for me to get out of this business.

~~~
josephg
Something I keep coming back to in all this is that maybe we should be
surprised that it’s even _possible_ to share a CPU between mutually untrusted
programs, let alone do it in so many contexts.

How do we stop this entire class of bugs? What is the Rust for CPU design?

~~~
6nf
CPUs tend to have multiple cores these days. Would it be possible to assign
(in software) all kernel space work to one or more dedicated cores to mitigate
some of the risks?

~~~
politician
Yes, well, close.

> Since the 2010 Westmere microarchitecture Intel 64 processors also support
> 12-bit "process-context identifiers" (PCIDs), which allow retaining TLB
> entries for multiple linear-address spaces, with only those that match the
> current PCID being used for address translation.[19][20]

[https://en.wikipedia.org/wiki/Translation_lookaside_buffer#P...](https://en.wikipedia.org/wiki/Translation_lookaside_buffer#PCID)

HN discussion:
[https://news.ycombinator.com/item?id=16094349](https://news.ycombinator.com/item?id=16094349)

~~~
vardump
That's just Meltdown workaround.

Spectre was more relevant to context.

------
stcredzero
A lot of game-related media has reported that a lot of games are largely
unaffected by Spectre and Meltdown. My impression is that the most common
result is "about 3%." (With some notable exceptions, however.)

Does this bode well for Apple code, which seems to tend to pull in innovations
from games into the general UI?

~~~
alkonaut
The performance issues come from security mitigations when entering/exiting
the kernel during syscalls. So I/O heavy workloads such as databases will
suffer, whereas heavy compute-oriented tasks such as games should be mostly
OK. This is expected. There is nothing any company can do to "take advantage"
of the fact that games are cpu/gpu bound in order to mitigate new performance
issues in e.g. databases.

~~~
SquareWheel
Exactly. Though it may encourage different performance best practices now that
syscalls are considered more expensive.

