OffscreenCanvas appears to be a special canvas object that you can transfer to a web worker, draw to there off the DOM thread, and then "commit" the updates to the associated DOM canvas.
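If I've understood the writeup correctly, the flow would look roughly like this (a sketch only; commit() was part of the early proposal and may change, and 'render-worker.js' is a name I made up):

```javascript
// main.js -- hand control of a DOM canvas off to a worker
// (sketch of the experimental API; details may differ from what ships).
const canvas = document.querySelector('canvas');
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker('render-worker.js');
// Listing `offscreen` in the transfer list moves ownership instead of copying.
worker.postMessage({ canvas: offscreen }, [offscreen]);

// render-worker.js -- draw off the DOM thread.
self.onmessage = (e) => {
  const gl = e.data.canvas.getContext('webgl');
  (function frame() {
    gl.clearColor(0, 0, 0, 1);
    gl.clear(gl.COLOR_BUFFER_BIT);
    gl.commit(); // experimental: pushes the frame to the placeholder canvas
    setTimeout(frame, 16); // workers have no requestAnimationFrame here
  })();
};
```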
Is the DOM canvas neutered after calling transferControlToOffscreen? Meaning, can you get a context and draw to the canvas concurrent with the Web Worker, or is the DOM Canvas inaccessible after control is transferred?
If it is the latter, then is this OffscreenCanvas just a WebGL-exclusive version of CanvasProxy? What are the characteristics of WebGL versus the 2D-canvas API that make it conducive to Web Workerage? Not that I'm complaining; I just don't understand what the difference is.
I'm curious about how bad the synchronization issues are with OffscreenCanvas in this first implementation.
In any case, this may have incredible implications for mobile web game development if the experiment pans out, not to mention desktop games! Super exciting; I've been dreaming of being unshackled from the DOM thread for years. Thanks AGAIN, Mozilla!!
I don't know enough about our graphics stack or the CanvasProxy spec to answer that. I just try to show why people should be excited about the hard work done by Mozilla employees, contributors, and volunteers.
Also, specs are not laws until they're candidate recommendations, and a lot will never be implemented. I should do a "how a bill becomes a law" post about web standards bodies...
> Is the DOM canvas neutered after calling transferControlToOffscreen? Meaning, can you get a context and draw to the canvas concurrent with the Web Worker, or is the DOM Canvas inaccessible after control is transferred?
Open your dev tools and find out. Fewer chars than your question typed out:
a = document.createElement('canvas'), a.transferControlToOffscreen(), a.getContext('webgl')
> Thanks AGAIN, Mozilla!!
We've got your back, son.
To be pedantic: web specs are (almost) never laws (occasionally some specs, e.g. for a11y, are incorporated by reference into actual laws, but that typically doesn't affect implementors). This is more important than it sounds, because people are occasionally confused into thinking that if they can get something written down in a spec and drive the spec through the W3C process, people will somehow be forced to implement it.
Even for specs that are implemented, the phases of the W3C process are actually mostly meaningless formalisms, and other bodies (e.g. the WHATWG) work well without them. In practice a spec becomes "law", i.e. immune from backwards-incompatible changes, when there is enough content depending on the existing behaviour that vendors won't unship it. For most cases this is more or less when there are two compatible implementations shipping, although of course the details vary on a case-by-case basis.
This technique would be really powerful if the worker thread could pass a buffer of the rendered results off to the main thread efficiently for use by the main thread. For example, a game has a worker thread to refine lightmaps while the main thread uses those lightmaps for rendering the game (similar to http://madebyevan.com/shaders/lightmap/). Does anyone know if this is possible?
Also, what does this mean for GPU use on the worker? Can heavy GPU use in the worker still lock up the GPU and impact the GPU use of the main thread and the windowing system in the OS?
Threads won't be able to share FBOs or VBOs, but we're working on SharedArrayBuffer, which will allow multiple threads to share data (more like C-style arrays than JS-style lists; they would still have to be glBufferData'd from RAM to VRAM). See also: https://www.youtube.com/watch?v=XvoBR9K3ZmE
> Also, what does this mean for GPU use on the worker? Can heavy GPU use in the worker still lock up the GPU and impact the GPU use of the main thread and the windowing system in the OS?
Isn't that the case for all applications?
It would be amazing to have fast GPU-GPU transfer. Is it the case in Firefox that using texImage2D/texSubImage2D on a main thread context with a worker <canvas> will result in a guaranteed GPU-GPU transfer? Or does that download the pixels to the CPU and re-upload them?
> Isn't that the case for all applications?
Yes, but I haven't been following GPU tech recently so I wasn't sure if that still applies universally. Maybe some day we'll get preemptive multitasking on GPUs :)
I didn't write the patch, but let me ask who did.
Because of the shape of the API, though (it supports arbitrary unpack format/type requests), a gpu->gpu blit isn't guaranteed; it depends on the underlying pixel formats.
We're looking into ways to provide a reliable fast-path.
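For reference, the call in question is the TexImageSource overload of texImage2D (a sketch; the element ids are made up, and per the above there's no guarantee this stays on the GPU):

```javascript
// Sketch: sourcing a texture from another canvas element.
const gl = document.getElementById('main-canvas').getContext('webgl');
const source = document.getElementById('other-canvas');

const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
// RGBA/UNSIGNED_BYTE matches a canvas's own backing store, so it seems the
// likeliest candidate for a gpu->gpu path; requesting a different unpack
// format/type (say, LUMINANCE) presumably forces a readback and conversion.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, source);
```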
On OpenGL, yes. Since Windows 8, MS has had preemptive multitasking in its driver scheduling model for GPU usage.
Folks that spend big bucks on dedicated hardware might disagree.
> If a game isn't fullscreen, I'm not prioritizing its framerate; it should (be forced to) play nice with my desktop.
That's fair. That was also Microsoft's reason for not implementing WebGL for a long time, IIRC: "DoS." Thing is, either you want a thread to take everything (like when you play a game fullscreen), or you can just close a demanding application (or a tab on a site with misbehaving/straining WebGL).
Sorry, I misspoke. Having a watchdog timer reset the GPU is indeed a useful feature.
I meant to say "you can just avoid such a site in the future." See greggman's point #1 on DoS: http://games.greggman.com/game/webgl-security-and-microsoft-...
However, being able to push the text rendering I do off to a thread, specifically to keep the main thread responsive, would be huge for me: https://bugzilla.mozilla.org/show_bug.cgi?id=801176
I'm already working on physics in a web worker, and have already got picking working there. The text rendering would be the biggest piece of low-hanging fruit in my rendering latency.
EDIT: whoa, I just had a vision of having the text rendering canvas running in a new tab, using HTMLCanvasElement.toBlob() to serialize it, and transmitting from one tab to another through a SharedWorker or even localStorage. It's a huge hack, and it will depend on whether the comms and deserialization end up cheaper than the rendering itself, but it could potentially solve my problem today.
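A sketch of what that hack might look like (untested; 'relay.js' would be a hypothetical SharedWorker that forwards messages between its connected ports, and the element ids are mine):

```javascript
// Renderer tab: rasterize text, serialize, and ship it out.
const outPort = new SharedWorker('relay.js').port;
const textCanvas = document.querySelector('#text-canvas');
textCanvas.toBlob((blob) => outPort.postMessage(blob)); // Blobs structured-clone

// Consumer tab: deserialize and draw into the real scene.
const mainCtx = document.querySelector('#main-canvas').getContext('2d');
const inPort = new SharedWorker('relay.js').port;
inPort.onmessage = (e) => {
  const img = new Image();
  img.onload = () => mainCtx.drawImage(img, 0, 0);
  img.src = URL.createObjectURL(e.data); // Blob -> object URL -> <img>
};
```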
Halt and Catch Fire. That's a direct contradiction. If you're doing work on WebVR you should absolutely care about WebGL in workers. See: https://www.youtube.com/watch?v=XvoBR9K3ZmE
Consider running physics in one thread, rendering in another, and sampling the VR positional sensor and HMD in the main thread. GC pauses in one thread should not affect the others.
Well, I get what you're saying, that you could at least still render a supersampled image and update view over a subset of that image in the main thread. But that's Asynchronous Time Warp, and Chrome already has it (Firefox, however, does not yet have it).
In most implementations, WebGL is triple buffered on Windows (since Firefox and Chrome go through ANGLE) or double buffered elsewhere. No one ever renders directly to the front buffer.
I highly recommend reading chapters 1 and 2 of WebGL Insights edited by P. Cozzi. (shameless plug, I wrote chapter 5).