Now system designers have to seriously consider this risk when building sandboxes or other forms of process isolation. Say the attacker-controlled process is in a sandbox and would normally be assumed safe.
Why would you think otherwise?
Yes, they do it with high-speed probing of the amount of allocated GPU memory. This requires access to native APIs (specific OpenGL extensions, in this case). As you said, there seem to be simpler ways to achieve this, but the demonstration of this side channel is still very interesting.
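The probing described above can be sketched as a toy simulation. This is purely illustrative, not the paper's actual code: in the real attack the probe reads free GPU memory through a native API, while here a shared counter stands in for that global statistic, and all names are made up.

```python
# Toy simulation of the memory-probing side channel. A shared counter
# stands in for the global "free GPU memory" statistic the real attack
# reads via native APIs. All names here are illustrative.

class FakeGPU:
    """Global GPU memory counter visible to every 'process'."""
    def __init__(self, total=1000):
        self.free = total

    def alloc(self, n):
        self.free -= n

def victim_type(gpu, keystrokes, per_key_alloc=4):
    """Victim renderer that (naively) allocates on every keystroke."""
    for _ in range(keystrokes):
        gpu.alloc(per_key_alloc)
        yield  # give the prober a chance to sample

def probe(gpu, victim):
    """Attacker: sample free memory around each victim step and count
    the downward jumps, recovering the number of keystrokes."""
    events = 0
    last = gpu.free
    for _ in victim:
        if gpu.free < last:
            events += 1
        last = gpu.free
    return events

gpu = FakeGPU()
observed = probe(gpu, victim_type(gpu, keystrokes=7))
print(observed)  # the prober recovers the keystroke count: 7
```

The point of the sketch is only that a per-keystroke allocation visible through a shared global counter leaks keystroke timing and count, which is the essence of the attack.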
Varying the size of the text boxes might not help, because a browser's rendering engine may, for example, allocate memory per letter.
Is that still the case, and would that unintentionally avoid this side channel since the global GPU perf stats wouldn't necessarily see a difference on small allocations?
If the renderer uses a glyph atlas, then there won't be any changes in GPU memory used as you type in a text box. There consequently won't be any time used to update glyphs, as they are already in the atlas (ignoring CJK locales for a minute).
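The atlas behavior described above can be shown with a minimal sketch. This is not any browser's actual code, just a dict-based stand-in: a glyph costs one upload the first time it is seen and nothing afterward, so typing repeated characters causes no new GPU allocations.

```python
# Minimal glyph-atlas sketch (illustrative, not any browser's real code).
# Rendering a character already in the atlas costs no new GPU upload;
# only a first-seen glyph triggers one.

class GlyphAtlas:
    def __init__(self):
        self.slots = {}        # char -> slot index in the atlas texture
        self.uploads = 0       # stand-in for GPU allocations/uploads

    def get_slot(self, ch):
        if ch not in self.slots:
            self.slots[ch] = len(self.slots)
            self.uploads += 1  # rasterize + upload the glyph once
        return self.slots[ch]

atlas = GlyphAtlas()
for ch in "password":
    atlas.get_slot(ch)
print(atlas.uploads)  # 7: 'password' has 7 distinct letters, 's' is reused
```

This is why an atlas would sidestep the side channel for small Latin text: after the warm-up uploads, the global GPU memory statistics stay flat no matter what is typed. A CJK locale, with tens of thousands of glyphs, would keep missing the atlas and keep allocating, which is why the comment sets it aside.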
I guess I’m a tiny bit surprised that the browsers are de-allocating and re-allocating memory for the re-render of the text box elements, since they don’t change size. Presumably this would be texture memory? Does anyone here know precisely what’s happening there, and whether a render-to-texture won’t work for some reason? Is it normal for browsers to allocate memory for all element repaints, or is this something unique to text typing or font handling?
The paper mentioned some mitigation avenues from the GPU’s perspective, but there are easy mitigations from the browser side here too. Given their interest in security, it seems reasonable for browsers to move on easy mitigations without waiting for changes to the GPU API.
Rate limiting password fields in the browser, and avoiding GPU memory allocations while the user is only typing into a small text box element, would remove the ability to do timing attacks on passwords. Individual websites could even do these things themselves if they're worried about it.
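One way to realize the "avoid allocations for small text boxes" mitigation is to quantize buffer sizes, so per-keystroke growth never crosses an allocation boundary. A sketch, with an entirely made-up bucket size and bytes-per-glyph figure:

```python
# Sketch of a browser-side mitigation: round text-box buffer sizes up to
# a coarse bucket so typing within a bucket reuses the existing buffer
# and no new GPU allocation is visible. Bucket size is invented.

BUCKET = 256  # bytes; any text length within a bucket reuses the buffer

def buffer_size(text_len, bytes_per_glyph=4):
    needed = text_len * bytes_per_glyph
    # round up to the next bucket boundary
    return ((needed + BUCKET - 1) // BUCKET) * BUCKET

# Typing any of the first 64 characters maps to the same allocation:
sizes = {buffer_size(n) for n in range(1, 65)}
print(sizes)  # {256}: one allocation covers all of them
```

From the attacker's vantage point (global allocation stats), every password up to the bucket boundary then looks identical.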
It highly depends on the browser and the specific painting backend in use. With a traditional CPU painting backend, browsers typically render at tile granularity, and tiles shuffle in and out of a global pool for repaints. That is, when the browser goes to repaint content, the painter grabs a tile from a shared pool, renders the new content into it, and then hands it off to the compositor, which swaps the tile buffer and places the old buffer back into the pool. This means that the GPU memory access patterns are somewhat irregular.
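The tile-pool flow described above can be sketched in a few lines. This is a toy model, not any browser's implementation; the names and pool size are invented.

```python
# Toy version of the tile-pool repaint flow: the painter grabs a free
# tile, paints into it, the compositor swaps it onscreen, and the
# displaced tile returns to the shared pool. Illustrative only.

class TilePool:
    def __init__(self, n):
        self.free = [f"tile{i}" for i in range(n)]

    def acquire(self):
        return self.free.pop()          # grab any free tile

    def release(self, tile):
        self.free.append(tile)          # recycle the displaced tile

def repaint(pool, onscreen, index, content):
    new_tile = pool.acquire()
    painted = (new_tile, content)       # "paint" content into the tile
    old_tile, _ = onscreen[index]
    onscreen[index] = painted           # compositor swaps buffers
    pool.release(old_tile)              # old buffer goes back to the pool

pool = TilePool(4)
onscreen = [(pool.acquire(), "blank"), (pool.acquire(), "blank")]
repaint(pool, onscreen, 0, "hello")
repaint(pool, onscreen, 0, "hello!")
print(len(pool.free))  # still 2 free tiles: reuse, no net allocation
```

Note that *which* tile backs a given region shifts on every repaint even though no net allocation happens, which is what makes the access pattern irregular from the outside.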
If the GPU is being used for painting, as for example in Skia-GL/Ganesh, then there are all sorts of caches and allocators in use by the backend. Typically, these systems try to avoid allocation as much as possible by aggressively reusing buffers. Allocating every frame is generally bad. But it can be hard to avoid, especially with immediate-mode APIs like Skia. (I would expect that WebRender could be better here long-term, though we probably have work to do as well.)
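The "aggressively reusing buffers" strategy mentioned above usually boils down to some form of size-keyed free list. A sketch under that assumption; real backends such as Skia/Ganesh are far more involved, and everything here is illustrative:

```python
# Sketch of GPU buffer reuse: a size-keyed free list recycles buffers
# instead of allocating every frame. Illustrative only; real resource
# caches also handle budgets, eviction, formats, etc.

class BufferCache:
    def __init__(self):
        self.free_by_size = {}   # size -> list of recycled buffer ids
        self.allocations = 0     # count of "real" GPU allocations

    def acquire(self, size):
        bucket = self.free_by_size.get(size)
        if bucket:
            return bucket.pop()              # reuse: no GPU allocation
        self.allocations += 1
        return f"buf{self.allocations}"      # fresh allocation

    def release(self, size, buf):
        self.free_by_size.setdefault(size, []).append(buf)

cache = BufferCache()
for frame in range(60):          # 60 frames, same buffer size each frame
    buf = cache.acquire(1024)
    # ... draw the frame into buf ...
    cache.release(1024, buf)
print(cache.allocations)  # 1: every frame after the first reuses the buffer
```

The side-channel relevance is the same as before: the steadier the reuse, the less the global allocation statistics move, and the less there is to observe.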
EDIT: ohh is it maybe that one meaning of "render" is "make"? So it's like "made insecure"?
It's exactly that, yes.
"_______ is All You Need"
Shared Resource Matrix for Storage Channels (1983)
Wray's Extension for Timing Channels (1991)
Using such methods was mandatory under the first security regulations, TCSEC. They found a lot of leaks. High-assurance security researchers keep improving on this, with some trying to build automated tools to find leaks in software and hardware. Buzzwords include "covert channels," "side channels," "non-interference proof or analysis," "information flow analysis (or control)," and "information leaks." There are even programming languages designed to prevent accidental leaks in the app, or to produce constant-time implementations for things like crypto. Go forth and apply covert-channel analysis and mitigation to all the FOSS things! :)
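The shared-resource-matrix method listed above can be illustrated in miniature: tabulate shared resources against operations, mark whether each operation can Read or Modify the resource, and flag any resource that one domain can modify while another can read. The resources and operations below are invented for illustration, but the GPU memory counter from this thread makes a fitting example.

```python
# Tiny sketch of the shared-resource-matrix idea: a resource that one
# party can Modify while another can Read is a potential storage
# channel. Resources and operations here are invented for illustration.

matrix = {
    # resource: {operation: set of effects ("R" read, "M" modify)}
    "gpu_free_mem":  {"victim_alloc": {"M"}, "attacker_query": {"R"}},
    "file_contents": {"victim_alloc": set(), "attacker_query": set()},
}

def potential_channels(matrix, sender_op, receiver_op):
    """Resources the sender can modify and the receiver can read."""
    return [res for res, ops in matrix.items()
            if "M" in ops.get(sender_op, set())
            and "R" in ops.get(receiver_op, set())]

print(potential_channels(matrix, "victim_alloc", "attacker_query"))
# ['gpu_free_mem'] -- the global memory counter is a shared channel
```

Run by hand, this flags exactly the channel the paper exploits: a globally readable memory statistic that a victim's allocations modify.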
Here's an old Pastebin I did with examples of that:
Nothing in this paper is new, except a few tweaks here and there.
This is worrisome. More and more academic research consists of the author taking previous research, adding a spin to it, and claiming it's a new attack.
This is nothing new. There was a period of time when it seemed like all CompSci papers had to do with some variation on hashing. There was a period at OOPSLA when it seemed like some whopping big fraction of papers was just a Java port of something that had been done in Smalltalk several years before. "Publish or perish" is just academia's version of YouTubers posting three videos a week.
Go to infosec meetups like Empire Hacking (the Trail of Bits meetup) and this will be almost every talk without exception.
It's an industry that runs on social capital and people have to do anything to get their name out there. Fortunately there is still good work being done.
This describes the vast majority of academic publications both today and a few decades ago. It might not be worthy of spotlight, but there is certainly a place for it in academia.