Qt signal is ten times slower than a virtual function (developernote.com)
101 points by todsacerdoti on Aug 11, 2022 | 141 comments



For twenty years no Qt developer has complained about the speed of Qt signals. Even in PyQt the speed of Qt signals is not an issue - it's the code responding to the signal that can be a problem.

This is a non-issue, and anyone who is worried about it is suffering from a severe case of premature optimization.


This post did raise an interesting question for me though. With Qt being used on microcontrollers these days, it made me wonder about the overhead of signals and slots in single-core/single-threaded use cases and how that might affect things like power consumption/CPU usage on those devices. I would love to see if someone did a comparison on that.

As for the crazy contrived example they have provided: "Assume I receive 100,000 trades per second from some crypo exchange and if one trade triggers 100 signals than I have 10.000.000 signals per second that is something comparable with the maximum." - They'd have to process the incoming messages on a separate thread (or process) anyway, so they won't drop packets because of the event loop that also processes user input. Then update the GUI once every event loop tick, because that's how frequently the UI gets updated anyway.
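A minimal sketch of that batching pattern, assuming a hypothetical Trade struct and TradeView widget (none of these names come from the article): the network thread only appends to a locked buffer, and a GUI-thread timer drains it once per tick.

```
// Hypothetical batching sketch: a worker thread ingests trades, the GUI
// thread drains them ~60 times per second. Trade/TradeView are invented names.
#include <QMutex>
#include <QTimer>
#include <QWidget>
#include <vector>

struct Trade { double price; double qty; };

class TradeView : public QWidget {
public:
    TradeView() {
        auto *timer = new QTimer(this);
        connect(timer, &QTimer::timeout, this, &TradeView::drainAndRepaint);
        timer->start(16);  // ~60 Hz: one GUI update per tick, not per trade
    }
    // Called from the network thread; no signal is emitted per trade.
    void pushTrade(const Trade &t) {
        QMutexLocker lock(&m_mutex);
        m_pending.push_back(t);
    }
private:
    void drainAndRepaint() {
        std::vector<Trade> batch;
        {
            QMutexLocker lock(&m_mutex);
            batch.swap(m_pending);  // grab everything accumulated so far
        }
        if (!batch.empty())
            update();               // one repaint covers the whole batch
    }
    QMutex m_mutex;
    std::vector<Trade> m_pending;
};
```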


The more slowly you can interact with a "crypo exchange" the better, right?


Hahah, touché.


> With Qt being used on microcontrollers these days

Is it really though? It's used in embedded systems. I wouldn't call those that can run Qt UIs microcontrollers.


Well, they did show it off a lot recently, on STM32F7s and STM32H7s. Not sure how many "real companies" use it though.

https://www.qt.io/microcontrollers-st

As per my old boss, the difference in price between these microcontrollers and "less powerful" Linux SoCs is so negligible that it wasn't worth trying to target them. Not sure if/how the economics have changed over the last 2-3 years either.


"Qt for MCUs" isn't the full Qt library as I understand it, but an entirely separate development

> Qt Quick Ultralite is designed to serve as a rendering engine for the application's graphical user interface (UI). Its implementation is different from the standard Qt, and it does not depend on any Qt libraries such as Qt Core or Qt Gui. Hence Qt Quick Ultralite applications need to use standard C++ containers and classes instead of those from Qt. For example, instead of using QObject or QAbstractItemModel, Qt Quick Ultralite provides a simple C++ API to expose objects and models.

> It does not include the following from the Qt world:

> The Qt C++ APIs.
> The non-graphical modules such as Qt Core and Qt Network.
> The Add-on modules such as Qt Multimedia, Qt Bluetooth, and other Qt Add-on Modules.
> The non-MCU embedded platforms such as embedded Linux or the mobile platforms.


Interesting. Yeah I wasn't sure what was going on there, but it seems to use a pure C++ QProperty, Qul::Signal and Qul::Object.

https://doc.qt.io/QtForMCUs-2.1/qtul-integratecppqml.html

Which is kind of interesting ...


These days "microcontroller" refers to a part that has self-contained flash and SRAM, and the MMU/Linux-capable SoCs need external mass storage and DRAM to run. So there's a cost adder on the Linux parts. You might also need a denser PCB to handle DRAM signals (although Jay Carlson might be right on this).


I used Qt quite extensively about 10 years ago and we constantly complained about signals. I was working in real-time medical simulations, and interfacing with the many IO devices we had was painfully slow. Also (at the time) keeping up an OpenGL frame (we had multiple at a time) required signal communication that had high overhead. We ended up replacing all of the signals besides those for setting up and tearing down the application and handling system events. Signals are a bad design and I’ll stand on that hill forever.


There are many many things you should never use if you need ultra low latency and/or ultra high bandwidth. That doesn't make them badly designed, simply not suitable for every purpose.

Qt signals are absolutely fine for like 99% of their usage.


Absolutely not - signals and slots make a spaghetti mess of messaging, often causing a many-to-one-to-many dependency hell. In simple cases signals and slots work fine, but they quickly become overwhelming as the project grows. I’ve not worked in any large Qt project that wasn’t a dependency mess and a chore to do anything in. Signals and slots are a crutch of poor design.


> anyone who is worried about it is having a severe case of premature optimization.

I think the "This is a premature optimization" mindset is what leads to slow pieces of software that I like to avoid (React, Qt, electron). But I guess it's fine, as most users don't care as much as I do.


Qt is slow now? All the Qt-based applications I've used are as fast as anything else can be.


Everything is slow when the goal is to just complain about things you don't understand.


AFAIK, Autodesk Fusion 360 is a Qt app and it's very slow compared to similar tools (like SolidWorks).


Fusion 360 is kind of a mess that's gotten way out of control. I enjoy how the navigation cube can literally draw outside the boundaries of the window as you move the window around. I don't think that's a Qt problem, I think they glued ten different UI libraries together to keep their legacy code working.


Fusion is probably the newest software product that Autodesk rents. In fact it was Carl Bass' passion project. Unfortunately Autodesk's software development process is… broken.


The fact that it's made by Autodesk is probably the reason that it's slow, not because it uses Qt. Various Autodesk products have been clunky, buggy, crashy, slow pieces of crap (I'm looking at you, Maya) since before they were migrated to Qt.


The UI is totally performant for me. The only time it bogged down was when I was modelling something with > 100k mesh triangles, or when I did a long UI-blocking processing operation (which isn't in the Qt codepath).


Have you used SolidWorks or other similar applications? I use both and the Fusion UI is very laggy and unresponsive in comparison. If that's not your experience, maybe there's something wrong and I should reinstall it...


I have not used SolidWorks. It costs far more money and is aimed at a totally different market.

What I'm trying to understand: are you talking about when you ask Fusion 360 to do something expensive the UI stops responding for a bit? I certainly see that, but it's not Qt that is the source of problems.


No, I'm talking more about how quickly the UI draws and how quickly it responds when you interact with it. When typing and tabbing around the UI, does it react instantly or after a delay? SolidWorks is decently responsive, Fusion 360 feels slow.


Nope, I'm not seeing any responsiveness problems. Note: I have an 8-core AMD w/ 64GB RAM and a high-end graphics card. Clicking on any UI element (such as Create to open the Create menu) seems to lead to a response in under 100ms and if I do something that requires the net, it shows me a spinner while it does the loading.

Do you work with huge models?


SolidWorks actually had/has a product aimed at the hobbyist crowd just like Fusion. Used to be you could get a free copy with membership in some aviation related professional org.


I had SolidWorks salespeople contact me and we chatted. No matter what I tried, they wanted to charge me multiple thousands a year. Any free copy would eventually stop working, for example when I updated to a new version. I'm happy with Fusion 360; it solves a collection of problems for me well enough that unless I see a similar program for less money (I already became proficient in FreeCAD and OpenSCAD before switching to Fusion 360), I don't plan to switch, since I've already invested a lot of effort into mastering the fairly complicated user interface and feature set provided.


Experimental Aircraft Association used to toss in a free copy of the SolidWorks "education premium" product or whatever it was called. EAA membership is on the order of $40/yr so there was some migration after Autodesk decided to gut the hobbyist version of Fusion. SW realized they could do the same, so the offer is now half off the cost of a castrated social-media-ified version of SolidWorks (3DEXPERIENCE).

For hobbyist stuff at least Fusion is by far the least painful option, but the bar is set really, really low. FreeCAD is a gigantic clusterfuck to put it charitably (akin to using an awl to carve a drawing out of cardboard versus pencil and paper). OpenSCAD is neat but it really suffers because it is basically developed and maintained by a single person.


The actual processing is slow, not the UI


Fusion is slow in large part because it makes a ton of HTTP requests. The Qt part is fairly performant.


From context it seems poster may believe Qt is some form of web framework, or something.


Not sure why you believe that, I built some (small) Qt applications so I know it's not a web framework, I was just referring to slow software in general.

I think it's less true now, but I remember KDE (Qt-based) being slower and buggier than gnome (Gtk-based) a couple of years ago, just to cite one thing. It just felt like Qt-based stuff was in general more of a pain to use & heavier than GTK based stuff. It's really a matter of personal preference here and Qt is a nice project, just that I have some criticism here regarding performance choices. I feel like bad performance decisions tend to snowball and get multiplied when people make library choices and add their own performance issues on top.


With all due respect, you made quite a strong claim based on very little proof/experience.


Yeah I agree, I don't have actual data to back this snowballing effect idea.


> Not sure why you believe that…

Because you included it in a list that otherwise only contained web tech frameworks. It’s not clear to me why you think those are peers of Qt and implied to me that you think they are near equivalents.


KDE apps tend to use KDE frameworks, which is quite a bit of stuff on top of what pure Qt offers. Pure Qt apps were always comparable to Gtk in performance.


Comparing Qt overhead to the insane overhead of JavaScript frameworks is a little far fetched.


Qt has QML which is a js framework :)


Optional isn't it?


You can call js functions in qml, but that does not mean qml is a js framework. Normally you call your c++ functions from qml.


I don't want to disparage qt+qml, I used it commercially and it's much better than other gui frameworks I worked with.

I just dislike people bashing technology for the wrong reason (like it being a "js framework").

In qt+qml usually you connect slots to signals in js, even if they are implemented in c++.


A function call in JS is usually just a few machine instructions. It's an example of a case where a conventional C++ compiler can do less optimisation than a JIT JS compiler.


I would find it extremely surprising if a regular function call in C++ took any more instructions than in JS.

Now a virtual function call, a JIT compiler can devirtualize more often than an AOT compiler. But the vast majority of function calls in your typical C++ app are not virtual.


> I would find it extremely surprising if a regular function call in C++ took any more instructions than in JS.

We're talking about Qt 'signals' though - they're a sort of heavyweight virtual call, reimplemented in C++, not regular function calls.


Then you should be comparing them with the direct JavaScript equivalent, which would be something like an array of functions and a foreach/invoke over that.

It should also be noted that Qt signals are far from the optimal way this can be implemented in C++. On top of that, C++ itself makes it more complicated than it needs to be by not providing a bound (to receiver) member function pointer as a primitive; but even then, this can be done in two indirections. A compiler for a language that supports such a facility directly - say, Delphi - can compile it down to a single indirection.
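As a rough illustration of that "two indirections" point, here is a minimal signal in plain C++ (all names invented; this is not how Qt implements it): each delegate stores the receiver pointer plus a binding thunk, so an emit is one indirect call per slot.

```
#include <cstdio>
#include <vector>

// A delegate is just a receiver pointer plus a function that binds it.
struct Delegate {
    void *obj;
    void (*fn)(void *obj, int arg);
};

// A "signal" is then a list of delegates; emitting walks the list.
struct MiniSignal {
    std::vector<Delegate> slots;
    void emitSignal(int arg) {
        for (auto &d : slots)
            d.fn(d.obj, arg);  // one indirect call per connected slot
    }
};

struct Receiver {
    int count = 0;
    static void onValueChanged(void *self, int v) {
        static_cast<Receiver *>(self)->count += v;
    }
};

int main() {
    Receiver r;
    MiniSignal sig;
    sig.slots.push_back({&r, &Receiver::onValueChanged});
    for (int i = 0; i < 1000; ++i)
        sig.emitSignal(1);
    std::printf("%d\n", r.count);  // prints 1000
}
```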


> Then you should be comparing them with the direct JavaScript equivalent, which would be something like an array of functions and a foreach/invoke over that.

But again... a JS JIT could unroll that loop, leaving just a couple of instructions per call and no loop.


Does it, or maybe could it? I hear a lot about the miracles of JS JITs, but in my experience C++ code will run circles around JS code in real-world scenarios.


How is Qt slow? Do you have benchmarks of Qt vs JavaFX, UWP, SwiftUI or Electron showing that it's slow, let alone comparable to Electron or React?

10x the latency of a virtual function call for a signal is very very small beans compared to where you're actually spending CPU cycles, for any reasonable software.


Qt is heavyweight when compared to, say, Gtk. But that being said, I agree that Qt perf is much better than Electron/JavaFX. I mean, everything is a tradeoff: Gtk is more complex to develop with than Qt; Electron is super easy, but super memory-intensive. My main point is that we should not always think "this is premature optimization". This mindset makes it very difficult to care about performance in modern organizations.


Qt is generally as fast as GTK. It's a bigger library that does a lot more but it is still as fast or faster.

I agree that we shouldn't always think "this is premature optimization". However, we should focus our optimization efforts where they matter, and I really struggle to think of a place where signal latency is really crucial. In any well architected software that's going to be really rare and it makes sense to focus your efforts on optimizing other parts of the software, which seems to be what the Qt devs did.


I think you're thinking of the "any optimization will be too difficult, take too long, and cost too much" mindset. Which is a huge problem.

Being mindful enough to identify when you're feeling the urge to optimize something too soon will let you step back and optimize what will have the biggest impact once it's finished.

I have seen developers spend hours optimizing some functionality, pick the fastest technique, and it turned out by designing everything to work with their earlier optimizations they made the overall system much slower.


> the speed of Qt signals is not issue - it's the code responding to the signal that can be a problem.

Even if the callback were infinitely fast, if it's called 10^7 times per second, the work per call already can't take longer than 100 ns on average (10^7 × 100 ns = 10^9 ns = 1 second). So a more useful way to frame this would be in terms of the ratio of the callback overhead to the work done by the callback (both measured in time), but there's no mention of the latter.

In this particular case, this is a pretty obvious case of 'this is the wrong tool for the job', at least with the stated requirements. Also, at the point where you need to do something 10^7 times per second, it's usually appropriate to think about how you might distribute that work across multiple cores.


Null erasure - you can't know a negative existence. You know what I do when I encounter things that don't work the way I want? I don't go to Qt forums and complain, because nothing happens. I code something else and move on. The issue is still there.


Alternatively people avoid using the feature because it is too slow, so it appears not to matter.


premature optimization has become a meme at this point


This has been documented for eons.

https://doc.qt.io/qt-5/signalsandslots.html

"Compared to callbacks, signals and slots are slightly slower because of the increased flexibility they provide, although the difference for real applications is insignificant. In general, emitting a signal that is connected to some slots, is approximately ten times slower than calling the receivers directly, with non-virtual function calls."


"A month in the laboratory can often save an hour in the library."


My criticisms of Big Tech fall through the cracks, but hey, failure to read the fucking manual makes it to the HN front page.

The documentation literally says ten times slower.


Given that signals are primarily for interaction with the outside world (e.g. user action), 10x slower than the likely optimum seems perfectly fine, no? How many times per microsecond do you expect a user to click that button?

I guess what I'm saying is, the example feels contrived.


"onMouseMove" is a user signal that you would want to be fast. also signals should be used for more than just user input interactions, like "download progress" etc..


Objective-C was doing sufficiently-fast UI updates for it to run well on iPhones 10 years ago, while relying on objc_msgsend, which is _much_ slower than a virtual function call or even a Qt signal.

You wouldn't want to use ObjC's message sending OR Qt's signalling mechanism in a tight inner loop – hell, you probably don't want to deal with the indirection of the vtable incurred by a virtual function in a tight inner loop. But all of these are more than fast enough for interactive UI work.


> objc_msgsend, which is _much_ slower than a virtual function call or even a Qt signal.

objc_msgsend is slower than a virtual function, but like 1.1x-2x, not 10x. (In rare cases it can even be faster due to it being a little more predictor-friendly.)

https://www.mikeash.com/pyblog/friday-qa-2016-04-15-performa...


There's a chart of various timings here [1] and objc_msgSend is actually pretty efficient (it's received a lot of optimization over the years for obvious reasons).

A cached IMP is faster than a C++ virtual method call (both of which are slower than a non-virtual method call, inline code, or a C function call of course).

[1] https://www.mikeash.com/pyblog/performance-comparisons-of-co...


Also, 30 years ago, on NeXT hardware.


Memory latency hasn't improved that much since NeXT, and these kinds of virtual-like things cause lots of dependent reads.


objc_msgsend, on the other hand, has been optimized continuously.

https://www.mikeash.com/pyblog/objc_msgsends-new-prototype.h...


In a tight loop you’d resolve the dispatch only once and call the resulting function repeatedly.


One tenth as fast as a virtual function call is incredibly fast.


I like to bang on the drum that as a programmer, you need to understand the sheer number of orders of magnitude you're spanning more than the average programmer does. We so often deal in "10x slower" and "100x" slower that we can forget that it just doesn't matter if we're doing it even a mere 60 times a second. 10x slower on a process that takes 100ms is a problem. 10x slower on a process that takes 10ns requires a lot of looping to become a human-scale problem. There are things that can attain that level of looping, certainly, but it's not everything.

A good programmer ought to have read that sentence and instinctively observed that between 100ms and 10ns is a full seven orders of magnitude. For two numbers that at the human level may seem not terribly far away from "zero", there's a lot of space between them.


onMouseMove is normally delivered at the video frame rate, 60 fps. The OP's benchmark shows it can deliver around 60M signals per second, so it uses about 1/1000000 of the CPU time. Seems tolerable.


Even if it was delivered at the polling rate, that should never be higher than 1kHz (otherwise you deserve whatever performance issues you get). A virtual function call is 15ns conservatively, so say a signal is 150ns. 1000x is <150us of wasted time, well below observable overhead in any human-centric application.


I think onMouseMove is a QML/QtQuick specific thing. In C++/QWidgets I remember having to use mouseMoveEvent https://doc.qt.io/qt-5/qwidget.html#mouseMoveEvent for that.

As for download progress etc.. I don't think I have ever had to worry about speed of a function call ever - as long as I was leaving it to the event loop take care of it.


The slowest number mentioned in the post ("32,562,683 signals [per second] with sender") works out to about 31 nanoseconds per signal. That's around half a dozen orders of magnitude less than an amount of latency that would be noticeable to a human.


The little 50MHz Arm Cortex I'm using does one tick in 20 ns. Gate delay through old 74LSXX logic was like 10ns.


10 times slower than a virtual call is still ridiculously fast on any general-purpose CPU made this century. We often forget how insanely fast they are (and at the same time, I have no idea how we manage to add so much bloat to certain stacks that they make the processor struggle).

Like, I’m fairly sure you could have decent latency if your callback function for onMouseMove made a local network call to another process.

Also, how fast do you think download progress should update? Animating it to the next keyframe every second is much more than enough.


In Qt, those sorts of input interactions are mostly handled through virtual function calls, not signals. You're basically referring to QWidget::mouseMoveEvent. https://doc.qt.io/qt-6/qwidget.html#mouseMoveEvent
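For reference, a sketch of that QWidget path (MyCanvas is an invented name): mouse movement arrives via a virtual event handler, with no signal/slot machinery involved.

```
#include <QMouseEvent>
#include <QWidget>

class MyCanvas : public QWidget {
public:
    MyCanvas() { setMouseTracking(true); }  // deliver moves without a button held
protected:
    void mouseMoveEvent(QMouseEvent *event) override {
        m_lastPos = event->pos();  // Qt 5 style; Qt 6 prefers position()
        update();                  // schedule a repaint for the next frame
    }
private:
    QPoint m_lastPos;
};
```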


If signals were only used for onMouseMove and such things, this wouldn't be big deal.

It's when you start to have to use signals thousands of times per event that it becomes a problem.

Signals for raw user events won't make a difference, but signals as the main mechanism of API interfacing is a problem.

I would say it's a serious problem.

I avoid using signals in general.


What are you doing with onMouseMove that would be perceptible to the user at higher than 60fps, or even 120fps?


Drawing perhaps?


Drawing only happens (at most) once every 4ms. I’m not aware of any modern display technology that allows you to manipulate a frame buffer during the display interval (unlike CRTs which could be manipulated during a scan).

The retro gaming community is obsessive about input and display latency and even there anywhere between 5-16ms (16ms being one frame of 240p content) is considered acceptable for even the most hardcore twitch response games.

That’s not saying that other processes aren’t happening faster than that, just that human input and subsequent visual feedback maxes out somewhere between 200-300 times per second and for the vast majority of humans, it is far, far lower.


Humans are even slower than that.

If you measure the response of individual photoreceptors, it takes 25-50 ms to peak after a flash of appropriately-colored light; the precise number depends on the color and intensity of the light. After that, the signal still needs to propagate through a bunch of visual brain areas, and then even more needs to happen to somehow influence behavior. With everything tuned just so, you can complete that whole process in 100 ms or so, but the conditions have to be perfect; otherwise, 300+ ms between (simple) stimulus and (simple) response is more typical.

Obviously, a lot of this is happening asynchronously, and high refresh rates can help in other ways (e.g., by smoothing out movement), but it's astonishing how laggy our visual system is.


Those random input generators that take every move and feed it to the mixer. Not that they're actually useful.


Nothing, but if you want to do something, say even at 1 fps, you need to enable the signal to fire in the first place.


I wrote an interactive QtPy program: it receives video frames over the net and allows the user to interact (steering a microscope) in real time. I use millisecond timers (which generate signals delivered to slots) all the time.

After doing a bit of tuning I was able to steer the microscope with no visible latency, which means I'm handling user events at ~25 FPS or higher and not seeing any high variance. The only problems I have are when the handler that receives a signal takes longer than I have budgeted (i.e., more than 1000/25 = 40 ms).
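Translated to C++/Qt, that timer-driven pattern could look something like this sketch (Steerer is an invented name; the 40 ms figure is just 1000/25):

```
#include <QObject>
#include <QTimer>

class Steerer : public QObject {
    Q_OBJECT  // needs moc, as usual
public:
    Steerer() {
        connect(&m_timer, &QTimer::timeout, this, &Steerer::onTick);
        m_timer.start(1000 / 25);  // 40 ms budget per tick (~25 FPS)
    }
private slots:
    void onTick() {
        // Read the latest frame and apply steering here. This must finish
        // well under 40 ms, or ticks start queueing behind the slow handler.
    }
private:
    QTimer m_timer;
};
```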


onMouseMove is going to be called at most like 3000 times a second, assuming a 240 Hz screen and the signal being sent through 12 objects.

You're going to be using up, what, 300 microseconds?

But even then, you're missing the point, because you don't have to use signals to detect the mouse moving if you don't want to.


Not that fast - it's still a user thing, which is incredibly slow compared to computation.


I wrote Tcl/Tk code in the '90s that was perfectly adequate in terms of user interaction, and that is a scripting language...


I haven't used Qt enough to speak meaningfully about its idioms but in GLib there's a signal for I/O ready on an fd (i.e. effectively a way to plug a select(2) into your UI event loop). I can easily imagine someone, accidentally or ignorantly, using this in a very stupid way.


The equivalent is QSocketNotifier (which uses select under the hood) and QFileSystemWatcher (which does complex stuff). It's been a long time since I've spent quality time with Qt but I don't remember anyone complaining about the performance of QSN.
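For the curious, wiring an fd into the Qt event loop looks roughly like this (the fd and handler body are placeholders; the activated(int) signature shown is the Qt 5 one, Qt 6 changed it):

```
#include <QObject>
#include <QSocketNotifier>

void watchFd(int fd, QObject *parent) {
    auto *notifier = new QSocketNotifier(fd, QSocketNotifier::Read, parent);
    QObject::connect(notifier, &QSocketNotifier::activated, [fd](int) {
        // read() whatever is available on fd here: one signal per readiness
        // event, not one per byte
    });
}
```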


Sure, if you use it right it's not an issue. Not everyone uses everything right, and unlike e.g. mouse move or key press, this one could conceivably fire a million times a second if you use it wrong. Personally, I'm in the "I'm surprised it's only 10x" camp - I wonder who thought signals were fast to begin with?


Semantically, a signal is basically just a collection of method pointers + receivers for the same, so one could reasonably expect the same performance as iterating over a bunch of function pointers and calling them in turn. Which shouldn't be 10x slower than calling a single pointer.

(I believe the reason why it doesn't quite work that way in practice is because slot dispatch is ID-based, not pointer-based.)
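A sketch contrasting the two connect syntaxes makes that ID-based flavor visible (Sender/Receiver are invented names; Q_OBJECT classes need moc, as usual):

```
#include <QObject>

class Sender : public QObject {
    Q_OBJECT
signals:
    void valueChanged(int v);
};

class Receiver : public QObject {
    Q_OBJECT
public slots:
    void onValueChanged(int v) { m_last = v; }
private:
    int m_last = 0;
};

void wire(Sender *s, Receiver *r) {
    // Old-style: signal and slot are looked up by their string signatures
    // in the moc-generated tables.
    QObject::connect(s, SIGNAL(valueChanged(int)), r, SLOT(onValueChanged(int)));

    // New-style (Qt 5+): checked at compile time, but an emit still routes
    // through QMetaObject::activate and the connection list rather than
    // being a direct call.
    QObject::connect(s, &Sender::valueChanged, r, &Receiver::onValueChanged);
}
```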


Me? The impression I had (granted it's been decades at this point) is that Qt/KDE apps were painfully slow to compile but were usually performant. Look at something like KHTML.


It sounds like you're not familiar with what signals are used for. I brought up the select(2) example because I wouldn't want to fire a signal for every byte (I probably wouldn't want to call a virtual function call, even).

But for clicking the mouse, scrolling a page, kicking off a network request - you could go up to a full ms and you'd still feel that the app is (in a vague general sense) "performant".


lolwut

Having written a few kioslaves decades ago, I feel pretty qualified to comment on Qt's signal/slot performance. Similar to select and poll, QSocketNotifier fires off a signal when there's data ready on an fd. Typically you won't see an event per byte with either interface (unless you're deliberately writing obtuse code).

I think that a lot of the "Qt is slow" nonsense came from C++'s awful reputation especially when it came to templates. The reality was that signals/slots were always a slick and performant wrapper around callbacks. It's worth noting that Qt in all its glory is available on QNX (a well known RTOS) and has been since 4.6. Performance is a non-issue.


Context is key, I hope you feel better tomorrow.


What context am I missing? You're talking about network I/O as being a bad use case for signals/slots and I've pointed out that I (along with other KDE folks) wrote code using signals/slots to handle incoming network data. That's what Trolltech recommended and almost certainly what the current crop of ioslaves do today (okay I lied, it looks like there are dedicated socket classes that provide their own signals these days). It's actually pretty difficult to structure something such that signals/slots become a performance bottleneck.

Insofar as other overhead sensitive environments go, Trolltech ported Qt (including the signal/slot paradigm) to a realtime OS (QNX) and features it prominently in safety critical contexts (e.g. automotive dashboards).

   I wonder who thought signals were fast to begin with?
You're surprised signals are fairly low overhead (and that's fine) and wondered if anyone actually thought they were performant. Having dabbled with Qt off and on for decades, I wasn't surprised. The beauty of moc is that it front-loads a lot of the magic. Compiling it sucks (less now); running it typically doesn't.

https://doc.qt.io/qt-5/qsocketnotifier.html


How is 10 times a literal virtual call not performant? That’s 150 NANOseconds.


"Assume I receive 100,000 trades per second from some crypo exchange and if one trade triggers 100 signals than I have 10.000.000 signals per second that is something comparable with the maximum."

Computers work faster than people who click manually.


I strongly doubt the signals, even at 10x a virtual function call, are a bottleneck when handling 100k network requests in a second. When you're optimizing that deep, you can do that extra bit of work and replace the Qt signal library with a real vtable. It's not what signals were designed for.


If you're using signals for anything that may reach "10.000.000 signals per second" then you're doing it wrong.

Sounds like the guy who wanted an event to be fired for each audio sample and required 44100 samples per second. Interestingly, Qt signals may even be fast enough for just that.


Why would you use Qt signals for that? The point is you do not have to interact with the UI at that high a frequency.


Wait until he finds out the display refreshes at most 240 times per second…


1000Hz polling on a mouse seems acceptable for gamers... (Google shows some mice supporting 8000Hz, but that seems like overkill)


This really sounds like a naive expectation that naive use of a feature will be perfectly optimized. I.e., don't use that number of signals, or treat signals like a broadcasting message bus. 10x the cost of a virtual function call is pretty optimized, btw.


At some point you have to consider whether your way of doing things is at all sensible.

Signals are good for rare, unpredictable events. If you have a firehose worth of data coming in, you're not sitting around waiting very often. Your data probably also has some commonality to it, and can be handled in bulk. If you have 100K events coming in, they're probably events of the same type.

Just like if you're writing gigabytes to a file, a program written with the most minimum amount of intelligence isn't going to do it one character at a time.


Even if you are getting 100,000 events per second, your GUI only updates at 60 or 120 fps, so you'd want to batch those events and update the GUI only once every frame.

Also, processing important incoming messages on the same thread as the main event loop - where other things like user input and application drawing are handled - is kind of a bad idea.


Qt signals are a one-to-many synchronous/asynchronous intra/inter-thread intra/inter-language dynamic/static message passing mechanism, which figures all that stuff out by itself by default. 10x slower than a (presumably hot) virtual function call is actually very good for what you're getting.
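Concretely, the synchronous/asynchronous and intra/inter-thread choices show up as connection types. A sketch (Sender/Receiver are invented stand-ins; the repeated connects are purely illustrative):

```
#include <QObject>

class Sender : public QObject {
    Q_OBJECT
signals:
    void valueChanged(int v);
};

class Receiver : public QObject {
    Q_OBJECT
public slots:
    void onValueChanged(int v) { Q_UNUSED(v); }
};

void wire(Sender *s, Receiver *r) {
    // Synchronous: the slot runs inside emit, on the emitting thread.
    QObject::connect(s, &Sender::valueChanged, r, &Receiver::onValueChanged,
                     Qt::DirectConnection);

    // Asynchronous: arguments are copied into an event and the slot runs
    // later in the receiver's own event loop (the safe cross-thread path).
    QObject::connect(s, &Sender::valueChanged, r, &Receiver::onValueChanged,
                     Qt::QueuedConnection);

    // The default, Qt::AutoConnection, picks Direct or Queued per emit,
    // depending on whether the receiver lives in the emitting thread.
}
```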


Only if you need those things, and in most cases, you don't need most of them.


From the Qt manual:

>In general, emitting a signal that is connected to some slots, is approximately ten times slower than calling the receivers directly, with non-virtual function calls.

https://doc.qt.io/qt-6/signalsandslots.html


Not that surprising, because a virtual function call is really fucking fast.


The claimed 16x slowdown apparently is caused by a call to sender(); if the object reference is just passed to onValueChangedSignal instead of calling sender(), there is likely no slowdown at all.
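A sketch of the two shapes (names invented): QObject::sender() does extra per-call bookkeeping inside QObject, while a pointer passed through the signal's arguments arrives like any other argument (the emitter would declare e.g. valueChanged(QObject *who, int v) and emit valueChanged(this, v)).

```
#include <QObject>

class Probe : public QObject {
    Q_OBJECT
public slots:
    // Slower shape: look the emitter up via QObject::sender() on every call.
    void slotUsingSender(int v) {
        QObject *who = sender();
        Q_UNUSED(who); Q_UNUSED(v);
    }
    // Faster shape: the emitter passes itself through the signal's arguments.
    void slotWithSenderArg(QObject *who, int v) {
        Q_UNUSED(who); Q_UNUSED(v);
    }
};
```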


The 16x slowdown is from when they added the call to sender().

But before that, their benchmark was regular signal vs. observer pattern, no?

```
void onValueChanged() override { ++m_count; }

void onValueChangedSignal()
{
    ++m_count;
}
```
Qt signal: 00:00:01.841
Virtual function: 00:00:00.179
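For anyone wanting to reproduce this, a micro-benchmark harness in the same spirit could look like this sketch (Counter is an invented name, not the article's class; build it inside a Qt project so moc runs):

```
#include <QCoreApplication>
#include <QElapsedTimer>
#include <QObject>
#include <cstdio>

class Counter : public QObject {
    Q_OBJECT
public:
    int count = 0;
signals:
    void valueChanged();
public slots:
    void onValueChanged() { ++count; }
};

int main(int argc, char **argv) {
    QCoreApplication app(argc, argv);
    Counter sender, receiver;
    QObject::connect(&sender, &Counter::valueChanged,
                     &receiver, &Counter::onValueChanged);

    QElapsedTimer t;
    t.start();
    for (int i = 0; i < 10'000'000; ++i)
        emit sender.valueChanged();  // direct connection, same thread
    std::printf("10M emits: %lld ms, count=%d\n",
                static_cast<long long>(t.elapsed()), receiver.count);
    return 0;
}
```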


Ten times slower than a C++ virtual function call is still pretty damn fast. Good for them.


> Assume I receive 100,000 trades per second from some crypo exchange and if one trade triggers 100 signals than I have 10.000.000 signals per second that is something comparable with the maximum.

I dunno how someone is skilled enough to do this benchmark but still thinks that this is how you are supposed to use signals and slots...


For events that happen that often, signals are not the right tool/implementation for the job. Deciding on a communication mechanism is a game of balances and trade-offs depending on your needs. HTTP is much slower than some more complex RPC mechanisms, but it's fast enough for most applications. If somebody were blasting enormous amounts of data through it, you wouldn't criticize HTTP as too slow; you'd change to a more appropriate protocol that forces more complexity on the application but is more efficient with time and space. I think you should do the same here and use a different communication mechanism than signals.

FWIW, I did some pretty intense real time animation in Qt about 10 years ago, and even on the crap hardware at the time I had no issue using signals to hook into mouse move events (which happen often and quickly sometimes).


I'm actually surprised they are that fast.


Qt signal speed isn't the problem. Focus on making your app fast. Then, if that's not enough, make it appear fast. If your report generator takes 5 seconds to generate 400 pages, don't put up a meter to show percentage done. Instead, start showing the user the pages that have already been generated. They won't care that there's 2 more seconds to wait, because they can already start seeing pages.


The number is about right. I did a benchmark a couple of years ago while developing realtime 3D frame processing, and using Qt signals was about 7-8x slower.

But for most of the use-cases like UI interaction handling, the difference is negligible.


So, 100x faster than ReactJS?


I'm never doing another QT contract. I think the last one was my last one. I'll stick to native UI, not Electron or anything like that.

The hypocrite in me thinks flutter is my next choice.


Great story, well-supported. I'm going to adopt this philosophy from now on, solely on the strength of your arguments.


Out of curiosity, why?


What is native UI?


So what?


nitpick: It's Qt (pronounced "cute"), not QT.


Marketing. I used to work with Qt before 2000. In the beginning it was "Quasar Toolkit". Nobody cared, and most people I knew - including some of the library's developers - were fine with kju-tee.


I tried pronouncing it as "cute" for a few months before the inertia of everyone I work with calling it "que-tee" won out. I figure if you work on Qt rather than with Qt, saving the syllable is a big win.

Descriptivism vs prescriptivism, or something


My TA in college said it stood for Quite Terrible.

I've used it once since then, on an embedded vehicle guidance system, and it seemed fine, once you learn all its quirks.


For me it will always be "cutie". It feels better to use as a noun but is still somehow cute.


Ugh, what the heck does “ten times slower” mean? “Slow” isn’t a measure, speed is a measure. So, it’s “one tenth as fast”, or “10% as fast”. Even “takes 10 times as long” is good. “Ten times slower” is the only one that doesn’t make sense.

English is stupid, but there are rules…


> English is stupid, but there are rules…

…and that isn’t one of them.

“Twice as slow” is perfectly acceptable, “slow as fuck” is a standard measure of time and “ten times slower” means exactly what it says.


Ten times slower than what? What is the unit of slow? What is the measure of “slows” that this is ten times greater than?


If something mission critical is happening in a GUI dependent on Qt, it seems like even the most rudimentary timing analysis of the requirements could have revealed that Qt is not a good solution for that.

I fail to see any relevant insights here.


Some version of Qt is safety-critical certified; for example, you can use it to display GUIs in medical devices:

* road vehicles functional safety

* railway software applications

* electrical / electronic safety-related systems

* medical device software - software life-cycle processes

https://www.qt.io/product/functional-safety-and-qt


> mission critical is happening in a GUI

Example, please? That combination of "mission critical" and "in GUI" is kind of strange... But if you're just talking about "human reaction is needed" time, then 10x a virtual function call's time doesn't look so bad...



So, Qt Company clearly states Qt is up to the task. They even hope to make money from it :)

I just hope they (Qt Co.) have heard about that accident on the ship with touch interfaces. But still, "GUI _reaction_ time" is not a problem when interacting with humans. I would worry more about replacing mechanical/hydraulic subsystems with electronics - just in case of an EMP bomb in the neighborhood. Or last year's favourite - overriding the pilot's decisions... Because it's the captain/pilot/driver who should have the final decision to make - nothing better has been invented so far, and I would not recommend changing that. Some limited Skynet-like incidents are not hard to imagine...


  I just hope they (Qt Co.) have heard about that accident on the ship with touch interfaces.
Yeah I'm in camp mechanical user interfaces, but…

  But still, "GUI _reaction_ time" is not a problem when interacting with humans.
Sure it is. People interact with this crap while driving all the time. They shouldn't but they do. A sluggish interface could absolutely be a major distraction.


  it seems like even the most rudimentary timing analysis of the requirements could have revealed that Qt is not a good solution for that.

How do you deduce this without studying it? Or someone else studying it for you?


Graphics and graphical processing take orders of magnitude longer than other fundamental machine instructions.

It's just inherently slower than doing something equivalent that requires zero graphical processing or monitoring of user input. It is common sense to anyone who knows the tech stack that underlies the pixels on the screen.

This is no fault of Qt by any means. It's just the nature of the tech.


But he is testing signals and slots, which are not tied to the GUI.


Why would Qt not be suitable?



