
Rendered Insecure: GPU Side Channel Attacks Are Practical [pdf] - pedro84
http://www.cs.ucr.edu/~zhiyunq/pub/ccs18_gpu_side_channel.pdf
======
kibwen
The browser password capture attack is awesome. To clarify though, that
requires an external attacker-controlled process to monitor the GPU, yes? If
you've already got a process running on the victim's computer, aren't there
easier ways of installing a keylogger? Alternatively, are there legitimate
programs that grant the ability to run arbitrary untrusted GPU code (WebGL
springs to mind, but you'd think that if this attack were possible entirely
from within the browser that they'd have been eager to show it off)? Or is the
novelty here the ability to more easily guess which keystrokes represent
passwords due to the website fingerprinting step? In any case, it suggests
that websites could slightly frustrate the attack by making all the textboxes
on their login page the same size, and that anyone using any password manager,
including the one built into their browser, is largely protected. Very cool
nonetheless.

~~~
EthanHeilman
>If you've already got a process running on the victim's computer, aren't
there easier ways of installing a keylogger?

Now system designers have to consider this risk very seriously when building
sandboxes or other forms of process isolation. Say the attacker-controlled
process is in a sandbox and would normally be assumed safe.

~~~
loeg
If your sandbox gives unfettered access to the GPU, that's not a great
sandbox, is it?

~~~
IshKebab
Erm... Many sandboxes allow access to the GPU, e.g. WebGL or Android apps.

Why would you think otherwise?

------
dahart
> By probing the GPU memory allocation repeatedly, we can detect the pattern
> of user typing (which typically causes re-rendering of the textbox or
> similar structure animating the characters).

I guess I’m a tiny bit surprised that the browsers are de-allocating and re-
allocating memory for the re-render of the text box elements, since they don’t
change size. Presumably this would be texture memory? Does anyone here know
precisely what’s happening there, and whether a render-to-texture won’t work
for some reason? Is it normal for browsers to allocate memory for all element
repaints, or is this something unique to text typing or font handling?

The paper mentioned some mitigation avenues from the GPU’s perspective, but
there are easy mitigations from the browser side here too. Given their
interest in security, it seems reasonable for browsers to move on easy
mitigations without waiting for changes to the GPU API.

Rate limiting password fields in the browser, and avoiding GPU memory
allocations while the user is only typing into a small text box element would
remove the ability to do timing attacks on passwords. Individual websites can
even do these things if they’re worried about it.
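To make the timing side channel concrete: here's a toy Python sketch (simulated data only; the actual memory-probe API, e.g. an OpenGL free-memory query, is hand-waved away) of how an attacker could turn repeated free-memory readings into inter-keystroke timestamps:

```python
def infer_keystrokes(readings):
    """Given (timestamp, free_gpu_memory) samples, return the timestamps
    where free memory dropped, i.e. where the victim allocated a buffer,
    which this attack uses as a proxy for a keystroke."""
    events = []
    for (t_prev, m_prev), (t, m) in zip(readings, readings[1:]):
        if m < m_prev:  # an allocation happened between these two samples
            events.append(t)
    return events

# Invented trace: free memory (in KB) dips each time a character is typed.
trace = [(0.00, 1000), (0.05, 1000), (0.10, 996),   # keystroke
         (0.15, 996),  (0.20, 992),                  # keystroke
         (0.25, 992),  (0.30, 988)]                  # keystroke
print(infer_keystrokes(trace))   # -> [0.1, 0.2, 0.3]
```

The inter-event gaps are what feed the keystroke-timing analysis; coalescing or pre-reserving allocations (as suggested above) flattens the trace and destroys the signal.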

~~~
Chabs
Speculation: GPU-based text rendering is typically going to be done using two
triangles per glyph. So what they are seeing is probably not the allocation of
the underlying texture storage, but the transient vertex buffer that contains
the information of which glyphs to render and where to render them.
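For illustration, a minimal Python sketch of that quad-per-glyph layout (the atlas texture coordinates are dummies; the point is only that the transient buffer's size grows with every character typed, which is what would leak):

```python
FLOATS_PER_VERTEX = 4      # x, y, u, v
VERTICES_PER_GLYPH = 6     # two triangles per glyph

def glyph_quad(x, y, w, h, u0, v0, u1, v1):
    """Two triangles covering one glyph's bounding box."""
    return [
        (x,     y,     u0, v0),
        (x + w, y,     u1, v0),
        (x + w, y + h, u1, v1),
        (x,     y,     u0, v0),
        (x + w, y + h, u1, v1),
        (x,     y + h, u0, v1),
    ]

def build_vertex_buffer(text, advance=10, size=16):
    verts = []
    for i, ch in enumerate(text):
        # Real atlas coordinates would come from the font rasterizer;
        # dummies here, since only the buffer *size* matters for the leak.
        verts += glyph_quad(i * advance, 0, advance, size, 0, 0, 1, 1)
    return verts

# 7 glyphs * 6 vertices * 4 floats * 4 bytes = 672 bytes
print(len(build_vertex_buffer("hunter2")) * FLOATS_PER_VERTEX * 4)
```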

~~~
pcwalton
Well, most GPU-based painting backends would try to recycle those VBOs to
avoid reallocating them every frame. But it's hard to get all the heuristics
right; cache invalidation is one of the two hard problems of computer science,
after all…
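One common recycling heuristic is a bucketed buffer pool; a rough Python sketch (bucket policy and sizes invented, the "buffer" is a stand-in tuple) showing how rounding requests up to power-of-two buckets lets text that grows one glyph per frame reuse a single allocation instead of triggering a fresh, observable one each frame:

```python
class BufferPool:
    """Recycle buffers by rounding requested sizes up to a
    power-of-two bucket, so successive frames reuse an allocation."""

    def __init__(self):
        self.free = {}          # bucket size -> list of released buffers
        self.allocations = 0    # how many "real" GPU allocations happened

    def _bucket(self, size):
        b = 1
        while b < size:
            b *= 2
        return b

    def acquire(self, size):
        b = self._bucket(size)
        if self.free.get(b):
            return self.free[b].pop()   # recycled: no new allocation
        self.allocations += 1
        return (b, self.allocations)    # stand-in for a GPU buffer handle

    def release(self, buf):
        self.free.setdefault(buf[0], []).append(buf)

pool = BufferPool()
for n_glyphs in (5, 6, 7, 8):              # text growing one glyph per frame
    buf = pool.acquire(n_glyphs * 6 * 16)  # 6 verts/glyph, 16 bytes/vert
    pool.release(buf)
print(pool.allocations)   # -> 2: four frames, only two real allocations
```

The flip side pcwalton points at: deciding when to shrink or evict buckets is exactly the cache-invalidation problem, and a pool that never frees anything trades the leak for memory bloat.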

------
tomglynch
I'm really into this trend of formal publications having clever puns in their
titles.

~~~
izietto
Can you explain the pun? I'm not a native English speaker and I'm struggling
to get it.

EDIT: ohh is it maybe that one meaning of "render" is "make"? So it's like
"made insecure"?

~~~
cosarara
> ohh is it maybe that one meaning of "render" is "make"? So it's like "made
> insecure"?

It's exactly that, yes.

------
nickpsecurity
Repeat after me the lesson from the founders of information security: every
system, from individual components to their integration, is insecure until
proven trustworthy by sufficient analysis. You also have to have a precise
statement of what secure means to compare the system against. You then apply
methods proven to work for various classes of problems. By 1992, they had
found everything from kernel compromises to cache-based timing channels using
such methods. On this topic, every hardware and software component in every
centralized or decentralized system has covert channels leaking your secrets.
Now, if you're concerned or want to be famous, there's something you can do:

Shared Resource Matrix for Storage Channels (1983)
[http://www.cs.ucsb.edu/~sherwood/cs290/papers/covert-kemmerer.pdf](http://www.cs.ucsb.edu/~sherwood/cs290/papers/covert-kemmerer.pdf)

Wray's Extension for Timing Channels (1991)
[https://pdfs.semanticscholar.org/3166/161c3cbb5f8cd150d133a3...](https://pdfs.semanticscholar.org/3166/161c3cbb5f8cd150d133a3746987da2d264d.pdf)

Using such methods was mandatory under the first security regulations, called
TCSEC. They found a lot of leaks. High-assurance security researchers keep
improving on this, with some trying to build automated tools to find leaks in
software and hardware. Buzzwords include "covert channels," "side channels,"
"non-interference proof or analysis," "information flow analysis (or
control)," and "information leaks." There are even programming languages
designed to prevent accidental leaks in the app or do constant-time
implementations for stuff like crypto. Go forth and apply covert-channel
analysis and mitigation on all the FOSS things! :)

Here's an old Pastebin I did with examples of that:

[https://pastebin.com/ajqxDJ3J](https://pastebin.com/ajqxDJ3J)
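For anyone curious what Kemmerer's shared resource matrix boils down to, here's a toy Python sketch (the resources and access sets are invented, and a real analysis covers every operation on every resource): any resource that a higher security level can modify and a lower level can observe gets flagged as a potential storage channel for manual review:

```python
# Toy shared-resource-matrix analysis in the spirit of Kemmerer (1983).
RESOURCES = {
    #  resource           (modified_by,       observed_by)
    "file_contents":     ({"high"},          {"high"}),
    "free_gpu_memory":   ({"high", "low"},   {"low"}),   # this paper's channel
    "disk_quota":        ({"high"},          {"low"}),
}

def find_channels(matrix, sender="high", receiver="low"):
    """Flag resources the sender can modify and the receiver can observe."""
    return sorted(r for r, (mod, obs) in matrix.items()
                  if sender in mod and receiver in obs)

print(find_channels(RESOURCES))   # -> ['disk_quota', 'free_gpu_memory']
```

Wray's extension adds the timing dimension: not just *whether* a resource is observable, but whether the *timing* of operations on it is.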

------
ccnafr
"several vulnerabilities have already been demonstrated"

Nothing in this paper is new, except a few tweaks here and there.

This is worrisome. More and more academic research consists of the author
taking previous research, adding a spin to it, and claiming it's a new attack.

~~~
stcredzero
_More and more academic scientific research is the author taking previous
research, adding a spin to it, and claiming it's a new attack._

This is nothing new. There was a period of time when it seemed like all
CompSci papers had to do with some variation on hashing. There was a period of
time at OOPSLA where it seemed like some whopping big fraction of papers was
just a Java port of something that had been done in Smalltalk several years
before. "Publish or perish" is just academia's version of YouTubers posting 3
videos a week.

------
jotm
Real world Spectre/Meltdown when?

