For twenty years no Qt developer has complained about the speed of Qt signals. Even in PyQt the speed of Qt signals is not an issue - it's the code responding to the signal that can be a problem.
This is a non-issue, and anyone who is worried about it has a severe case of premature optimization.
This post did raise an interesting question for me though. With Qt being used on microcontrollers these days, it made me wonder about the overhead of signals and slots in single-core/single-threaded use cases and how that might affect things like power consumption/CPU usage on those devices. I would love to see if someone did a comparison on that.
As for the crazy contrived example they have provided: "Assume I receive 100,000 trades per second from some crypto exchange and if one trade triggers 100 signals then I have 10,000,000 signals per second that is something comparable with the maximum." - They'd have to process the incoming messages on a separate thread (or process) anyway, so they won't drop packets because of the event loop that also processes user input. Then update the GUI once every event loop tick, because that's how frequently the UI gets updated anyway.
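The batching approach described above can be sketched in plain C++. The names `Trade` and `TradeBatch` are illustrative, not from any real API; in a real Qt app the drain would be driven by a timer or a queued connection on the GUI thread.

```cpp
#include <mutex>
#include <vector>

// Sketch of the batching idea: the network thread appends trades under a
// lock, and the GUI thread drains the whole batch once per event-loop tick,
// emitting one "update" instead of 100,000 signals per second.
// Trade and TradeBatch are illustrative names, not from any real API.
struct Trade {
    double price;
    int quantity;
};

class TradeBatch {
public:
    // Called from the network thread for every incoming trade.
    void push(Trade t) {
        std::lock_guard<std::mutex> lock(mutex_);
        pending_.push_back(t);
    }

    // Called once per GUI tick; returns everything received since the
    // last drain and leaves the pending list empty.
    std::vector<Trade> drain() {
        std::lock_guard<std::mutex> lock(mutex_);
        std::vector<Trade> out;
        out.swap(pending_);
        return out;
    }

private:
    std::mutex mutex_;
    std::vector<Trade> pending_;
};
```

The GUI thread then repaints from one drained batch per tick, so the signal count per second equals the refresh rate, not the trade rate.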
As per my old boss, the price difference between these microcontrollers and "less powerful" Linux SoCs is so negligible that it wasn't worth trying to target them. Not sure if/how the economics have changed over the last 2-3 years either.
"Qt for MCUs" isn't the full Qt library as I understand it, but an entirely separate development.
> Qt Quick Ultralite is designed to serve as a rendering engine for the application's graphical user interface (UI). Its implementation is different from the standard Qt, and it does not depend on any Qt libraries such as Qt Core or Qt Gui. Hence Qt Quick Ultralite applications need to use standard C++ containers and classes instead of those from Qt. For example, instead of using QObject or QAbstractItemModel, Qt Quick Ultralite provides a simple C++ API to expose objects and models.
> It does not include the following from the Qt world:
> The Qt C++ APIs.
> The non-graphical modules such as Qt Core and Qt Network.
> The Add-on modules such as Qt Multimedia, Qt Bluetooth, and other Qt Add-on modules.
> The non-MCU embedded platforms such as embedded Linux or the mobile platforms.
These days "microcontroller" refers to a part that has self-contained flash and SRAM, and the MMU/Linux-capable SoCs need external mass storage and DRAM to run. So there's a cost adder on the Linux parts. You might also need a denser PCB to handle DRAM signals (although Jay Carlson might be right on this).
I used Qt quite extensively about 10 years ago and we constantly complained about signals. I was working in real-time medical simulations, and interfacing with the many IO devices we had was painfully slow. Also (at the time) keeping an OpenGL frame current (we had multiple at a time) required signal communication with high overhead. We ended up replacing all of the signals besides those for setting up and tearing down the application and handling system events. Signals are a bad design and I’ll stand on that hill forever.
There are many many things you should never use if you need ultra low latency and/or ultra high bandwidth. That doesn't make them badly designed, simply not suitable for every purpose.
Qt signals are absolutely fine for like 99% of their usage.
Absolutely not, signals and slots make a spaghetti mess of messaging. Often causing a many to one to many dependency hell. In simple cases signals and slots work fine, but quickly become overwhelming as the project grows. I’ve not worked in any large Qt project that wasn’t a dependency mess and a chore to do anything. Signals and slots are a crutch of poor design.
> anyone who is worried about it is having a severe case of premature optimization.
I think the "This is a premature optimization" mindset is what leads to slow pieces of software that I like to avoid (React, Qt, electron). But I guess it's fine, as most users don't care as much as I do.
Fusion 360 is kind of a mess that's gotten way out of control. I enjoy how the navigation cube can literally draw outside the boundaries of the window as you move the window around. I don't think that's a Qt problem, I think they glued ten different UI libraries together to keep their legacy code working.
Fusion is probably the newest software product that Autodesk rents. In fact it was Carl Bass' passion project. Unfortunately Autodesk's software development process is… broken.
The fact that it's made by Autodesk is probably the reason that it's slow, not because it uses Qt. Various Autodesk products have been clunky, buggy, crashy, slow pieces of crap (I'm looking at you, Maya) since before they were migrated to Qt.
The UI is totally performant for me. The only time it bogged down was when I was modelling something with > 100k mesh triangles, or when I did a long UI-blocking processing operation (which isn't in the Qt codepath).
Have you used SolidWorks or other similar applications? I use both and the Fusion UI is very laggy and unresponsive in comparison. If that's not your experience, maybe there's something wrong and I should reinstall it...
I have not used SolidWorks. It costs far more money and is aimed at a totally different market.
What I'm trying to understand: are you talking about when you ask Fusion 360 to do something expensive the UI stops responding for a bit? I certainly see that, but it's not Qt that is the source of problems.
No, I'm talking more about how quickly the UI draws and how quickly it responds when you interact with it. When typing and tabbing around the UI, does it react instantly or after a delay? SolidWorks is decently responsive, Fusion 360 feels slow.
Nope, I'm not seeing any responsiveness problems. Note: I have an 8-core AMD w/ 64GB RAM and a high end graphics card. Clicking on any UI element (such as Create to open the Create menu) seems to lead to a response in under 100ms, and if I do something that requires the net, it shows me a spinner while it does the loading.
SolidWorks actually had/has a product aimed at the hobbyist crowd just like Fusion. Used to be you could get a free copy with membership in some aviation related professional org.
I had SolidWorks sales people contact me and we chatted. No matter what I tried, they wanted to charge me multiple thousands a year. Any free copy would eventually stop working, for example when I updated to a new version. I'm happy with Fusion 360: it solves a collection of problems for me well enough that, unless I see a similar program for less money (I already became proficient in FreeCAD and OpenSCAD before switching to Fusion 360), I don't plan to switch, since I've already invested a lot of effort into mastering the fairly complicated user interface and feature set provided.
Experimental Aircraft Association used to toss in a free copy of the SolidWorks "education premium" product or whatever it was called. EAA membership is on the order of $40/yr, so there was some migration after Autodesk decided to gut the hobbyist version of Fusion. SW realized they could do the same, so the offer is now half off the cost of a castrated social-media-ified version of SolidWorks (3DEXPERIENCE).
For hobbyist stuff at least Fusion is by far the least painful option, but the bar is set really, really low. FreeCAD is a gigantic clusterfuck to put it charitably (akin to using an awl to carve a drawing out of cardboard versus pencil and paper). OpenSCAD is neat, but it really suffers because it's basically developed and maintained by a single person.
Not sure why you believe that, I built some (small) Qt applications so I know it's not a web framework, I was just referring to slow software in general.
I think it's less true now, but I remember KDE (Qt-based) being slower and buggier than GNOME (GTK-based) a couple of years ago, just to cite one thing. It just felt like Qt-based stuff was in general more of a pain to use & heavier than GTK-based stuff. It's really a matter of personal preference here and Qt is a nice project, just that I have some criticism here regarding performance choices. I feel like bad performance decisions tend to snowball and get multiplied when people make library choices and add their own performance issues on top.
Because you included it in a list that otherwise only contained web tech frameworks. It’s not clear to me why you think those are peers of Qt and implied to me that you think they are near equivalents.
KDE apps tend to use KDE frameworks, which is quite a bit of stuff on top of what pure Qt offers. Pure Qt apps were always comparable to Gtk in performance.
A function call in JS is usually just a few machine instructions. It's an example of a case where a conventional C++ compiler can do less optimisation than a JIT JS compiler.
I would find it extremely surprising if a regular function call in C++ took any more instructions than in JS.
Now, for a virtual function call, a JIT compiler can devirtualize more often than an AOT compiler can. But the vast majority of function calls in your typical C++ app are not virtual.
Then you should be comparing them with the direct JavaScript equivalent, which would be something like an array of functions and a foreach/invoke over that.
It should also be noted that Qt signals are far from the optimal way this can be implemented in C++. On top of that, C++ itself makes it more complicated than it needs to be by not providing a bound (to receiver) member function pointer as a primitive; but even then, this can be done in two indirections. A compiler for a language that supports such a facility directly - say, Delphi - can compile it down to a single indirection.
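To make the indirection counting concrete, here is a minimal single-threaded signal sketch in plain C++ — illustrative only, not Qt's actual implementation, which adds thread affinity, queued connections, disconnection handling, and more:

```cpp
#include <functional>
#include <vector>

// Minimal single-threaded signal: a list of callbacks invoked in order.
// This is a sketch of the idea, not Qt's real implementation.
template <typename... Args>
class Signal {
public:
    // Bind a member function to a receiver object. Since C++ has no
    // bound-member-function-pointer primitive, the lambda supplies the
    // receiver; std::function's type erasure is one indirection, the
    // stored member call the second.
    template <typename T>
    void connect(T* receiver, void (T::*method)(Args...)) {
        slots_.push_back([receiver, method](Args... args) {
            (receiver->*method)(args...);
        });
    }

    // Also accept free functions and lambdas directly.
    void connect(std::function<void(Args...)> fn) {
        slots_.push_back(std::move(fn));
    }

    // Invoke every connected slot in connection order.
    void emit(Args... args) const {
        for (const auto& slot : slots_) slot(args...);
    }

private:
    std::vector<std::function<void(Args...)>> slots_;
};
```

Each `emit` goes through `std::function`'s type-erased dispatch and then the stored member call — the two indirections mentioned above; a language with bound method pointers as a primitive (Delphi-style) can collapse that to one.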
> Then you should be comparing them with the direct JavaScript equivalent, which would be something like an array of functions and a foreach/invoke over that.
But again... a JS JIT could unroll that loop, leaving just a couple of instructions per call and no loop.
Does it, or could it? I hear a lot about the miracles of JS JITs, but in my experience C++ code will run circles around JS code in real-world scenarios.
How is Qt slow? Do you have benchmarks of Qt vs JavaFX, UWP, SwiftUI or Electron showing that it's slow, let alone comparable to Electron or React?
10x the latency of a virtual function call for a signal is very very small beans compared to where you're actually spending CPU cycles, for any reasonable software.
Qt is heavyweight when compared to, say, GTK. But that being said, I agree that Qt perf is much better than Electron / JavaFX. I mean, everything is a tradeoff. GTK is more complex to develop with than Qt; Electron is super easy, but super memory-intensive. My main point is that we should not always think "this is premature optimization". This mindset makes it very difficult to care about performance in modern organizations.
Qt is generally as fast as GTK. It's a bigger library that does a lot more but it is still as fast or faster.
I agree that we shouldn't always think "this is premature optimization". However, we should focus our optimization efforts where they matter, and I really struggle to think of a place where signal latency is really crucial. In any well architected software that's going to be really rare and it makes sense to focus your efforts on optimizing other parts of the software, which seems to be what the Qt devs did.
I think you're thinking of the "any optimization will be too difficult, take too long, and cost too much" mindset. Which is a huge problem.
Being mindful enough to identify when you're feeling the urge to optimize something too soon will let you step back and optimize what will have the biggest impact once it's finished.
I have seen developers spend hours optimizing some functionality and pick the fastest technique, and it turned out that, by designing everything around their earlier optimizations, they made the overall system much slower.
> the speed of Qt signals is not an issue - it's the code responding to the signal that can be a problem.
Even if the callback were infinitely fast, if it's called 10e6 times per second, the work already can't take longer than 100 ns on average (10e6 * 100ns = 1e9ns = 1 second). So a more useful way to frame this would be in terms of the ratio of the callback overhead to the work done by the callback (both measured in time), but there's no mention of the latter.
In this particular case, this is a pretty obvious case of 'this is the wrong tool for the job', at least with the stated requirements. Also, at the point where you need to do something 10e6 times per second it's usually appropriate to think about how you might distribute that work across multiple cores.
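The budget arithmetic above can be written out directly (the 8-core figure below is illustrative):

```cpp
#include <cstdint>

// Average per-event time budget in nanoseconds when handling
// events_per_second events on a single core: there are 1e9 ns in a
// second to share among them, dispatch overhead included.
constexpr std::uint64_t per_event_budget_ns(std::uint64_t events_per_second) {
    return 1'000'000'000ull / events_per_second;
}

// The comment's 10e6 signals per second leave only 100 ns per signal.
static_assert(per_event_budget_ns(10'000'000) == 100, "100 ns per event");

// Distributing the stream across cores scales the budget linearly:
// with 8 cores (an illustrative count), ~800 ns of work fits per event.
static_assert(per_event_budget_ns(10'000'000) * 8 == 800, "800 ns across 8 cores");
```

At a 100 ns budget even a cheap callback mechanism consumes a large fraction of the available time, which is why batching or parallelizing matters more than shaving the dispatch itself.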
Null erasure: you can't know a negative existence. You know what I do when I encounter things that don't work the way I want? I don't go to Qt forums and complain, because nothing happens. I code something else and move on. The issue is still there.