From an earlier discussion of it:
"The OpenGL calls are sent over the network and transformed into WebGL calls in the browser. It’s not video-streaming. The browser is executing WebGL in a canvas. Textures, buffers, glyphs, etc. are sent as well.
You need a browser with WebGL support. Tested on IE11, Firefox, Chrome, Edge, Chromium."
In this case, they are building the OpenGL ES 2.0 version of Qt, and WebGL is basically fully compatible with OpenGL ES 2.0. It's accelerating the toolkit itself. If you wanted to use GL directly I think you would have the same restrictions.
> Some time ago I published a couple of blog posts talking about Qt WebGL Streaming plugin
I had to search to find out for sure what he was talking about. A Qt application runs a small, local web server, streaming UI calls to a WebGL context on a page you view through your browser of choice. http://blog.qt.io/blog/2017/02/22/qt-quick-webgl-streaming/
This differs from Electron in that your backend is C++ instead of JS, your UI is Qt instead of HTML, and the runtime doesn't embed the browser. I'm just guessing, but I suspect you also don't write any code that runs in the browser client context.
I've also personally found value in running Electron from a page in a local HTTP server process, so I don't find that part too different, but YMMV.
Maybe it's more appropriate to say this is like per-app RDP via WebGL.
If I'm understanding you correctly, I don't believe that's the intended use for this technology. This is about streaming an application over the Internet, from a remote server, not from a localhost server. I can't think of any advantage to building a locally hosted Qt app that way. If you want to run a Quick/QML app locally, you just run it — there would be no gain and numerous penalties in running it through a web browser.
(It also doesn't seem useful to compare it to Electron-based apps. The Qt comparison to an Electron app would be a Qt/QML app that uses Qt's WebEngineView, a control that embeds the Chromium browser.)
Qt is commonly used for control UIs (also in industrial devices not running on a standard control system), so this would make remote control far easier to develop and deploy [1]. Essentially, yes, "per-app RDP".
[1] It removes the need to develop a remote API/RPC interface, the need to make the app so modular that its UI can be detached completely, the need to build, test, and distribute desktop releases, etc. It is also easier for the customers ("set password on device, put IP address in browser, done") and likely requires less support. It probably also works much better on tablets (which are becoming very common in many industries, at least where the work environments aren't too dirty); industrial control UIs have used touch input for a long time, and tend to use screens comparable in size to a tablet.
Three years back I wanted to do something similar with a different approach. Instead of sending OpenGL commands, I chose to encode the Qt UI into video streams (also with a platform plugin, so no code change on the Qt developer's side) and use either WebRTC or the MediaSource API. I prototyped this, but I was unsure about its business value.
At the time, what I wanted to build was an app store that worked across browsers and platforms.
I'm curious to see the latency of this WebGL approach.
I've thought about the same approach a few times, but I suspected it would be slower than sending video frames.
The method I tried was a mixture of VNC and H.264: for interactive surfaces, such as a QOpenGLWidget, I sent H.264 frames, but for static content like a button, I used a VNC-like approach.
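A rough sketch of that hybrid idea, assuming the frame is split into fixed-size tiles and each tile is routed by how often it changes (tile size, history window, and threshold are illustrative values, not details from the original prototype):

```python
TILE = 64  # tile edge in pixels (illustrative; tests below pass a tiny tile)

def changed_tiles(prev, curr, tile=TILE):
    """Return top-left coordinates of tiles that differ between two frames.
    Frames are lists of rows of pixel values."""
    h, w = len(curr), len(curr[0])
    out = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            if any(prev[y + dy][x + dx] != curr[y + dy][x + dx]
                   for dy in range(tile) for dx in range(tile)):
                out.append((x, y))
    return out

class TileRouter:
    """Route a dirty tile to the 'video' path if it changed in most recent
    frames (an interactive surface), else to the 'lossless' VNC-like path."""
    def __init__(self, window=4, hot=0.5):
        self.window, self.hot = window, hot
        self.history = {}  # tile coord -> recent 0/1 change flags

    def route(self, prev, curr, tile=TILE):
        dirty = set(changed_tiles(prev, curr, tile))
        plan = {}
        h, w = len(curr), len(curr[0])
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                hist = self.history.setdefault((x, y), [])
                hist.append(1 if (x, y) in dirty else 0)
                del hist[:-self.window]  # keep only the recent window
                if (x, y) in dirty:
                    rate = sum(hist) / len(hist)
                    plan[(x, y)] = "video" if rate >= self.hot else "lossless"
        return plan
```

A real implementation would diff GPU-side and feed the "video" tiles to a hardware H.264 encoder, but the routing decision itself can stay this simple.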
This seems like a bad design if you care about latency.
All UI interaction requires a round trip from the browser to the server, with a whole new set of GL commands sent back to the browser. It's a fancier version of VNC ultimately.
It would never work well over the Internet; it might be tolerable on a LAN.
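A back-of-envelope way to see the LAN/Internet difference: every interaction pays one full network round trip plus render time on both ends. The per-stage millisecond figures below are illustrative assumptions, not measurements:

```python
def interaction_latency_ms(rtt_ms, server_update_ms=5, client_render_ms=5):
    """One input -> redraw cycle: event up, GL commands back, repaint.
    The 5 ms server/client figures are illustrative assumptions."""
    return rtt_ms + server_update_ms + client_render_ms

lan = interaction_latency_ms(rtt_ms=1)    # LAN round trip: ~11 ms total
wan = interaction_latency_ms(rtt_ms=80)   # cross-country WAN: ~90 ms total
```

Roughly speaking, interactions under ~100 ms feel immediate, so a LAN stays comfortable while a WAN round trip eats most of the budget on every single click or keystroke.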
Can someone answer a question for me: why? What is the use case for this? Is it mostly for when you have an existing app that you want to access remotely?
I'm not giving it a hard time, I legitimately can't see the use case and want to know.
I wanted to do the same thing before; the idea was a platform for software as a service.
You see, not all software is well suited to pure web technology, although that limitation is quickly going away.
One example is expensive 3D modelling tools such as Maya and 3D Studio Max, which are still desktop apps. They suffer from high piracy rates due to their prices, and it's a negative feedback loop: the higher you price them, the more people pirate them.
So this streaming-Qt approach is a way to quickly turn a conventional desktop app into a web service that you can charge a subscription for.
This is a combination of the power of a server machine and the convenience of the web; the web is not as capable as a desktop app at 3D rendering, threading, and direct access to GPU hardware.
Streaming the software UI is already a viable approach for SaaS; as far as I know, both Office 365 and the online version of LibreOffice do something along these lines.
What I wanted to do was make a platform that makes this easy. But I eventually gave up, because I think the biggest developers who would adopt it are heavy 3D graphics companies, and that narrows down to Adobe and Autodesk, since computer graphics is very labor-intensive and monopolized. Both are big enough to develop their own solutions.
> This is a combination of the power of a server machine and the convenience of the web.
Just to clarify, Qt WebGL Streaming is not a video streaming solution, so the rendering still happens in the browser via WebGL. The hardware requirements to run a graphics app with this feature would be no lower than they would be to just run the Qt app natively, so the only benefit is the portability and accessibility of the web.
Though with applications not already designed to act as thin clients, it could potentially be useful.
It doesn't take much horsepower to just open an application like Blender, but it might be useful to run it on a powerful server and access it from a lighter more power-efficient laptop.
To elaborate a bit more: imagine a tool that renders things in a custom way (its own engine) but can still expose the rendering commands to WebGL.
That way you can present, preview, annotate, and comment on a model, texture, or game level for, say, a third-party contractor, and create a channel for communicating changes, visual diffs, notes, etc.
Two or three years ago I had to retrofit an existing game engine's tools into a small package (I got it down to 1 GB) and ship that package to each contractor.
Yes, and this enables thin clients too. You could buy very cheap hardware, a Chromebook for example, to use professional 3D software or other power-hungry software that needs performant hardware.
I don't think this is true, as the browser is still executing the OpenGL calls via WebGL. This isn't a video streaming solution, it just allows WebGL rendering via the network, so you still need hardware capable of natively rendering the graphics.
What you said is not necessarily correct: they could render the OpenGL scene into a texture and send it over to WebGL.
They never said this WebGL backend doesn't support OpenGL 4.5, so my guess is they achieve it by rendering the OpenGL 4.5 context into a texture and sending that texture's contents to the web client.
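Worth noting how expensive that texture-shipping route would be without compression. The figures below are straightforward arithmetic for a 1080p60 stream, not measurements of any actual implementation:

```python
def raw_stream_mbps(width, height, fps, bytes_per_pixel=4):
    """Bandwidth needed to ship uncompressed RGBA frames, in megabits/s."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

raw = raw_stream_mbps(1920, 1080, 60)  # ~3981 Mbit/s uncompressed
# versus a typical 1080p60 H.264 bitrate on the order of 8 Mbit/s,
# so the texture contents would need aggressive compression in practice.
```

That gap is exactly why the pure-video approaches discussed elsewhere in this thread reach for H.264 or WebRTC rather than raw pixel transfer.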
So legitimate users who paid good money for their licenses will now be forced to use a blocky, slow streaming-video solution instead of a fast, responsive interface on their Xeon/64 GB RAM workstations... AND pay a perpetual monthly subscription for something they owned before?
I see that going down well with your actually paying customers :)
Remember, you're not gaining money when you're screwing over paying customers to hunt people who aren't going to pay anyway.
Qt is apparently used a lot in company-internal apps and other specialized tools. Now these can be offered remotely for mobile devices, or laptops not supported by the tool (e.g. Linux or Mac in a Windows shop). Less clunky than a remote desktop setup.
If I understand correctly, it works a little bit like how X11 used to run over the network.
The program UI state and program data is all server side, but the actual rendering happens client side. Instead of sending the client a stream of X11 commands, it sends OpenGL/WebGL drawing instructions, and the client sends input events back to the server.
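A minimal sketch of that split, with an invented JSON wire format (the command names and message shapes here are illustrative, not the plugin's actual protocol):

```python
import json

def encode_commands(cmds):
    """Server side: serialize drawing commands into one wire message."""
    return json.dumps({"type": "draw", "commands": cmds})

def apply_commands(message, canvas):
    """Client side: replay drawing commands against a local 'canvas'.
    Appending to a list stands in for issuing real WebGL calls."""
    msg = json.loads(message)
    assert msg["type"] == "draw"
    for cmd in msg["commands"]:
        canvas.append(cmd)
    return canvas

def encode_input_event(kind, x, y):
    """Client side: input events travel in the opposite direction."""
    return json.dumps({"type": "input", "kind": kind, "x": x, "y": y})

# One cycle: server draws, client replays, client reports a click.
wire = encode_commands([{"op": "clear"}, {"op": "rect", "x": 10, "y": 10}])
canvas = apply_commands(wire, [])
click = encode_input_event("press", 12, 14)
```

The point of the analogy is that, as with X11, only the display list and input events cross the network; all application state stays on the server.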
It doesn’t look like that to me. It looks like normal Qt/QML (which does use JavaScript, sure) but with a browser-based front end. Possibly streamed over a network. Kinda like X forwarding.
I assume only a subset of OpenGL calls can be sent, and things like 3D volume textures would eventually require WebGL 2?