Qt WebGL Streaming (qt.io)
58 points by jsfdez on July 7, 2017 | 46 comments



From an earlier discussion of it: "The OpenGL calls are sent over the network and transformed into WebGL calls in the browser. It’s not video-streaming. The browser is executing WebGL in a canvas. Textures, buffers, glyphs, etc. are sent as well.

You need a browser with WebGL support. Tested on IE11, Firefox, Chrome, Edge, Chromium."

http://blog.qt.io/blog/2017/02/22/qt-quick-webgl-streaming/

I assume only a subset of OpenGL calls can be sent, and things like 3D volume textures would eventually require WebGL2?


In this case, they are building the OpenGL ES 2.0 version of Qt, and WebGL is basically fully compatible with OpenGL ES 2.0. It's accelerating the toolkit itself. If you wanted to use GL directly I think you would have the same restrictions.



> You need a browser with WebGL support. Tested on IE11, Firefox, Chrome, Edge, Chromium.

Or the WebGL polyfill created with Mesa and Emscripten.


Oh, I never got to use networked OpenGL (whose existence I only learned of years after I learned OpenGL); that's very nice.


I wonder if this is helped by the OpenGL X11 heritage - GLX used to work relatively well over the network.


If I understood correctly, it's a technology similar to GTK+'s Broadway: https://www.google.com/search?q=gtk+Broadway


Here is a Docker image I made, if anyone wants to quickly try out Broadway: https://github.com/moondev/gtk3-docker


Yeah, seems largely the same except Broadway was based mainly on canvas commands.


> Some time ago I published a couple of blog posts talking about Qt WebGL Streaming plugin

I had to search to find out for sure what he was talking about. A Qt application runs a small, local web server, streaming UI calls to a WebGL context on a page you view through your browser of choice. http://blog.qt.io/blog/2017/02/22/qt-quick-webgl-streaming/
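
If I'm reading the post right, the application code itself doesn't change at all; the streaming backend is picked at launch time through a platform plugin. Roughly like this (the port-option syntax is from memory, so treat it as a sketch rather than gospel):

    // A perfectly ordinary Qt Quick app; nothing here knows about WebGL
    // streaming. The backend would be selected at launch, something like:
    //   ./myapp -platform webgl:port=8998   (option syntax from memory)
    #include <QGuiApplication>
    #include <QQmlApplicationEngine>
    #include <QUrl>

    int main(int argc, char *argv[])
    {
        QGuiApplication app(argc, argv);
        QQmlApplicationEngine engine;
        engine.load(QUrl(QStringLiteral("qrc:/main.qml"))); // the usual QML UI
        return app.exec();
    }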

This differs from Electron in that your backend is C instead of JS, your UI is Qt instead of HTML, and the runtime doesn't embed the browser. I'm just guessing, but I suspect you also don't write any code that runs in the browser client context.

I've also personally found value in running Electron from a page in a local HTTP server process, so I don't find that part too different, but YMMV.

Maybe it's more appropriate to say this is like per-app RDP via WebGL.


If I'm understanding you correctly, I don't believe that's the intended use for this technology. This is about streaming an application over the Internet, from a remote server, not from a localhost server. I can't think of any advantage to building a locally hosted Qt app that way. If you want to run a Quick/QML app locally, you just run it — there would be no gain and numerous penalties in running it through a web browser.

(It also doesn't seem useful to compare it to Electron-based apps. The Qt comparison to an Electron app would be a Qt/QML app that uses Qt's WebEngineView, a control that embeds the Chromium browser.)


That's a good point. I could also see this being useful on IoT devices to provide better UI fidelity for configuration tools across the LAN.


Qt is commonly used for control UIs (also in industrial devices not running on a standard control system), so this would make remote control far easier to develop and deploy [1]. Essentially, yes, "per-app RDP".

[1] It removes the need to develop a remote API/RPC interface, the need to make the app so modular that you can detach its UI completely, the need to build, test and distribute desktop releases, etc. It is also easier for the customers ("set password on device, put IP address in browser, done") and likely requires less support. It probably also works much better for use on tablets (which is becoming very common in many industries, well, at least where the work environments aren't too dirty); industrial control UIs have used touch input for a long time, and tend to use screens with sizes comparable to a tablet.


Nitpick: C++ instead of C


3 years back I wanted to do something similar with a different approach. Instead of sending OpenGL commands, I chose to encode the Qt UI into video streams (also with a platform plugin, so no code change on the Qt developer side) and use either WebRTC or the MediaSource API. I prototyped this, but I was unsure about the business value of it.

At the time, what I wanted to build was an app store that worked across browsers and platforms.


I'm curious about the latency of this WebGL approach. I have thought about the same approach a few times; however, I thought it would be slower than sending video frames.

The method I tried was a mixture of VNC and H.264 frames: for interactive surfaces, such as a QOpenGLWidget, I sent H.264, but for static content like a button, I used a VNC-like approach.


This seems like a bad design if you care about latency.

All UI interaction requires a round trip from the browser to the server, with a whole new set of GL commands sent back to the browser. It's a fancier version of VNC ultimately.

It would never work well over the Internet; it might be tolerable on a LAN.


The video also shows a bit of stuttering. Fluid 60 fps should be seen as the minimum requirement when it comes to such interfaces.


Many applications that do a round trip per UI interaction work over the Internet, even games.


Can someone answer the question for me: why? What is the use case for this? Is it mostly for when you have an existing app that you want to access remotely?

I'm not giving it a hard time; I legitimately can't see the use case and want to know.


I wanted to do the same thing before. The idea was a platform for software as a service.

You see, not all software is ideal to implement with pure web technology, although that limitation is quickly going away.

One example is expensive 3D modelling tools, such as Maya and 3D Studio Max. They are still desktop apps. They suffer from high piracy rates due to their prices; this is a negative feedback loop: the higher you price them, the more people will pirate them.

So this streaming Qt thing is a solution for quickly making a conventional desktop app into a web service that you can charge people a subscription for.

This is a combination of the power of a server machine and the convenience of the web. The web is not as capable as a desktop app at 3D rendering, threading, and direct access to GPU hardware.

Streaming the software UI is already a viable solution for SaaS; for example, as far as I know, both Office 365 and the online version of LibreOffice use this approach.

What I wanted to do was build a platform to make this easy. But I eventually gave up, because I think the biggest developers who would adopt this are heavy 3D graphics companies, and that narrows down to Adobe and Autodesk, as computer graphics is very labor-intensive and monopolized. They are both big enough to develop their own solutions.


> This is a combination of the power of a server machine and the convenience of the web.

Just to clarify, Qt WebGL Streaming is not a video streaming solution, so the rendering still happens in the browser via WebGL. The hardware requirements to run a graphics app with this feature would be no lower than they would be to just run the Qt app natively, so the only benefit is the portability and accessibility of the web.


Though with applications not already designed to act as thin clients, it could potentially be useful.

It doesn't take much horsepower to just open an application like Blender, but it might be useful to run it on a powerful server and access it from a lighter, more power-efficient laptop.


I was talking about the software streaming use case, not implementation details.

If you search this page, you will find my other threads mentioning my approach, which was based on video streaming.


To elaborate a bit more here: imagine a tool (an engine) which renders things in a custom way, but can still expose the rendering commands to WebGL.

This way you can present, preview, annotate, comment on, etc. a model, texture or game level for, say, a 3rd-party contractor, and create a channel for communicating changes, visual diffs, notes, etc.

2-3 years ago I had to retrofit an existing game engine's tools into a small package (I was able to get it down to 1 GB) and had to ship this to each contractor.

This might be better.


Thank you. You just helped me see a use case I didn't see before.

To paraphrase: hiding the bulk of the business logic from the client, since with JavaScript all your source is exposed.


Yes. This also enables thin clients. You can buy very cheap hardware, a Chromebook for example, to use professional 3D software or other power-hungry software that needs performant hardware.


I don't think this is true, as the browser is still executing the OpenGL calls via WebGL. This isn't a video streaming solution; it just allows WebGL rendering over the network, so you still need hardware capable of natively rendering the graphics.


What you said is not necessarily correct. They could render the OpenGL scene into a texture and send it over to WebGL.

They never said that they don't support OpenGL 4.5 with this WebGL backend, so I guess they could achieve it by rendering the OpenGL 4.5 context into a texture and sending that texture's contents to the web client.

But this is just a guess.
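
(To illustrate what that guess would look like in Qt terms: render the desktop-GL scene into an offscreen framebuffer object and read the pixels back, which could then be sent to the client as a single texture. This is only an illustration of the idea, not anything the plugin is documented to do, and it assumes an OpenGL context is already current.)

    #include <QImage>
    #include <QOpenGLFramebufferObject>
    #include <functional>

    // Illustration only: draw the (e.g. GL 4.5) scene into an FBO, then read
    // the rendered pixels back so they could be shipped as one texture.
    // Assumes an OpenGL context is current on this thread.
    QImage captureFrame(int w, int h, const std::function<void()> &renderScene)
    {
        QOpenGLFramebufferObject fbo(w, h, QOpenGLFramebufferObject::CombinedDepthStencil);
        fbo.bind();           // subsequent GL draws target the FBO's texture
        renderScene();        // the engine's own desktop-GL rendering
        fbo.release();
        return fbo.toImage(); // read back the frame
    }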


So legitimate users that paid good money for their licenses will now be forced to use a blocky, slow streaming-video solution instead of a fast, responsive interface on their Xeon/64 GB RAM workstations... AND pay a perpetual monthly subscription for something they owned before?

I see that going down well with your actually paying customers :)

Remember, you're not gaining money when you're screwing over paying customers to hunt people who aren't going to pay anyway.


It's not video streaming.


It doesn't have to be the extreme opposite you described.

Plenty of apps benefit from some work being done on the client and some on the server.


The experience isn't as bad as you imagine. For example, you can try https://www.paperspace.com/gaming first and then judge.


> So this streaming Qt thing is a solution for quickly making a conventional desktop app into a web service...

With really shitty interaction latency.


> But I eventually gave up, because I think the biggest developers who would adopt this are heavy 3D graphics companies

Is lack of viable adoption by users the only reason you gave the project up?


Qt is apparently used a lot in company-internal apps and other specialized tools. Now these can be offered remotely to mobile devices, or to laptops not supported by the tool (e.g. Linux or Mac in a Windows shop). Less clunky than a remote desktop setup.


I don't understand what "streaming" means here.

Is this a video stream of a Qt UI being rendered on the server ad hoc for the client?

Is this a Qt UI being converted to WebGL (planes, meshes, etc.) on the fly?

Is this a Qt UI compiled for WebGL?


If I understand correctly, it works a little bit like how X11 used to run over the network.

The program UI state and program data are all server-side, but the actual rendering happens client-side. Instead of sending the client a stream of X11 commands, it sends OpenGL/WebGL drawing instructions, and the client sends input events back to the server.
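
Just to make the idea concrete, here's a hypothetical sketch of the server-to-browser half (this is not Qt's actual wire format or transport, just an illustration of "serialize the GL call instead of the pixels"; QWebSocket stands in for whatever the plugin really uses):

    #include <QByteArray>
    #include <QDataStream>
    #include <QIODevice>
    #include <QWebSocket>

    // Hypothetical record for one GL call: an id for the function plus its
    // packed arguments (texture/buffer data would travel the same way).
    struct GlCommand {
        quint32 function;
        QByteArray arguments;
    };

    // Serialize the call and push it to the browser, which decodes it and
    // replays it on a WebGL context; input events flow back the other way.
    void streamCall(QWebSocket &socket, const GlCommand &cmd)
    {
        QByteArray frame;
        QDataStream out(&frame, QIODevice::WriteOnly);
        out << cmd.function << cmd.arguments;
        socket.sendBinaryMessage(frame);
    }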


To me it looks like "a way for JavaScript programmers to make a UI without having to learn Python or C++".


Couldn't be more wrong. The GUI is written in C++ and is running on the server. It streams its OpenGL calls to a web browser.

It's basically a way to remotely view a Qt app. Like VNC but faster and better integrated into browsers.


But VNC is video streaming, right?

What are the instructions being sent to the browser? "Paint this text" here, "paint this surface" here, etc.?


It doesn’t look like that to me. It looks like normal Qt/QML (which does use JavaScript, sure) but with a browser-based front end. Possibly streamed over a network. Kinda like X forwarding.


This would definitely have its applications, like gamedev, etc. - but I'm not sure about mainstream use.


They should really have a demo available on the web to try out.


I'm guessing it's pretty hard to scale, given that it is effectively a remote session like RDP.

It would be interesting to see a local demo that pairs with WebAssembly, though.


For one user at a time...



