
The thing I see glossed over a lot in multithreading discussions is the actual implementation of the messaging people are talking about. What is actually used to just 'send' messages to the main thread while the sender thread continues on? I know Qt has signals and slots, but I haven't found a good native method that does something like that if I wanted to, for example, use a std::future and also receive messages back that can update a GUI.



If you're using a GUI toolkit like Qt or wxWidgets, then you have to use whatever event system the toolkit provides, since it controls the main thread's event loop. For wxWidgets, for instance, the docs are here: http://docs.wxwidgets.org/trunk/overview_thread.html
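
A minimal sketch of what that looks like in wxWidgets 3.x (MyFrame and OnWorkerUpdate are names I made up for illustration; wxThreadEvent and wxQueueEvent are the toolkit pieces doing the work):

    // Sketch: a worker thread posts a wxThreadEvent back to the GUI thread.
    #include <wx/wx.h>
    #include <thread>

    class MyFrame : public wxFrame {
    public:
        MyFrame() : wxFrame(nullptr, wxID_ANY, "worker demo") {
            CreateStatusBar();
            // Deliver wxEVT_THREAD events to OnWorkerUpdate on the main thread.
            Bind(wxEVT_THREAD, &MyFrame::OnWorkerUpdate, this);
        }

        void StartWorker() {
            std::thread([this] {
                // ... long-running work off the GUI thread ...
                auto* evt = new wxThreadEvent();   // heap-allocated; the event queue takes ownership
                evt->SetString("work finished");
                wxQueueEvent(this, evt);           // thread-safe hand-off to the GUI event loop
            }).detach();                            // detach only because this is a sketch
        }

    private:
        void OnWorkerUpdate(wxThreadEvent& evt) {
            SetStatusText(evt.GetString());         // safe: we're back on the main thread
        }
    };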

I worked on a C++ trading app in the past and our approach was "services" (actors) talking via message passing. Basically a bunch of long-lived threads, each with a queue (nowadays that could be something like boost::lockfree::queue [1]), running an event loop. Message passing worked by getting a pointer to the target service's queue via a service registry (singleton initialized at startup, you could also keep a copy per service though), then pushing a message to it. If you needed to do something with the result, then that would come back via another message sent via the same system.

It does mean that if there's back-and-forth between services, the code is harder to follow, as you have to read through a few event handlers to get the whole flow, but with well-designed service boundaries it worked quite well. I don't think futures would have been useful in our context, as blocking any service for an extended time would have been very bad.
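
Roughly, the pattern looked like this (a sketch, not the original code: Service, the message type, and the example in main are all made up, and it uses a mutex-protected std::deque instead of boost::lockfree::queue to keep it self-contained):

    // Each service owns a queue and a long-lived thread running an event loop.
    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <thread>

    class Service {
    public:
        using Message = std::function<void()>;    // a message is "something to run on this service"

        Service() : worker_([this] { Run(); }) {}
        ~Service() {
            Post([this] { done_ = true; });        // shut down via the same mechanism
            worker_.join();
        }

        // Called from any thread: enqueue and return immediately.
        void Post(Message msg) {
            { std::lock_guard<std::mutex> lock(mutex_); queue_.push_back(std::move(msg)); }
            cv_.notify_one();
        }

    private:
        void Run() {                               // the service's event loop
            while (!done_) {
                Message msg;
                {
                    std::unique_lock<std::mutex> lock(mutex_);
                    cv_.wait(lock, [this] { return !queue_.empty(); });
                    msg = std::move(queue_.front());
                    queue_.pop_front();
                }
                msg();                             // handle one message at a time, no locks held
            }
        }

        std::mutex mutex_;
        std::condition_variable cv_;
        std::deque<Message> queue_;
        bool done_ = false;
        std::thread worker_;
    };

    // Toy example: one service does the work, the result comes back to another
    // service as a second message rather than via a blocking future.
    int main() {
        Service gui, pricing;
        pricing.Post([&gui] {
            double price = 42.0;                   // pretend this took real work
            gui.Post([price] { (void)price; /* update the display with the price */ });
        });
    }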

[1] If you don't want Boost, you can wrap std::queue yourself; here's a simple example where you can replace the Boost mutex etc. with the std ones: https://www.justsoftwaresolutions.co.uk/threading/implementi...


There isn't a good native method. I always begin by implementing a thread-safe producer/consumer queue with a non-blocking peek, which checks whether there's something at the head of the queue. You can use something like this for other threads to block on, or to poll, etc. It's very useful.
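
Something along these lines (a sketch with names of my choosing; just std::mutex and std::condition_variable around std::queue, where the non-blocking try_pop doubles as the "is there anything at the head?" check):

    #include <condition_variable>
    #include <mutex>
    #include <optional>
    #include <queue>

    template <typename T>
    class ConcurrentQueue {
    public:
        void push(T value) {
            { std::lock_guard<std::mutex> lock(mutex_); queue_.push(std::move(value)); }
            cv_.notify_one();
        }

        // Blocking: wait until an item is available.
        T wait_and_pop() {
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return !queue_.empty(); });
            T value = std::move(queue_.front());
            queue_.pop();
            return value;
        }

        // Non-blocking: take the head if there is one, otherwise std::nullopt.
        std::optional<T> try_pop() {
            std::lock_guard<std::mutex> lock(mutex_);
            if (queue_.empty()) return std::nullopt;
            T value = std::move(queue_.front());
            queue_.pop();
            return value;
        }

    private:
        std::mutex mutex_;
        std::condition_variable cv_;
        std::queue<T> queue_;
    };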

Futures are somewhat useful if you spawn off a thread to do a single task and wait for it to finish. However, on many platforms the thread-creation overhead is high enough that you don't want to do that; you want existing worker threads being handed work to do.
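
For the one-off case it's just std::async/std::future (a sketch; note the standard leaves it to the implementation how the async task is actually scheduled, which is part of why it doesn't replace a worker pool):

    #include <future>
    #include <iostream>

    int expensive_computation() {
        return 42;   // stand-in for real work
    }

    int main() {
        std::future<int> result = std::async(std::launch::async, expensive_computation);
        // ... do other work on this thread ...
        std::cout << "result: " << result.get() << "\n";   // blocks until the task finishes
    }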

Once again, this is something Go does really well. Goroutines are almost free, so you can use them with a futures model very easily.


I'm not 100% sure what's meant by "native" method in this context, but if you're programming for macOS/iOS, try Grand Central Dispatch. It's a very nice task parallelism library with some extremely smart design choices.
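
GCD is plain C underneath, so you can call it from C++ as well. A minimal sketch (the function names are mine; it uses the function-pointer variant dispatch_async_f so it compiles without blocks):

    // Run work on a background queue, then hop back to the main queue.
    #include <dispatch/dispatch.h>
    #include <cstdio>

    static void update_ui(void*) {
        std::printf("back on the main queue\n");   // UI updates would go here
    }

    static void do_work(void*) {
        // ... heavy lifting on a background queue ...
        dispatch_async_f(dispatch_get_main_queue(), nullptr, update_ui);
    }

    int main() {
        dispatch_async_f(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                         nullptr, do_work);
        dispatch_main();   // park the main thread and serve the main queue
    }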


If you use DPDK, it has one of the fastest (if not the fastest) ring implementations available.


In case anyone else is wondering, a "ring" is a queue data structure that provides a different set of trade-offs. The documentation specifically compares it to a linked list:

    The advantages of this data structure over a linked list queue are as follows:

    * Faster; only requires a single Compare-And-Swap instruction of sizeof(void *) instead of several double-Compare-And-Swap instructions.
    * Simpler than a full lockless queue.
    * Adapted to bulk enqueue/dequeue operations. As pointers  are stored in a table, a dequeue of several objects will not produce as many cache misses as in a linked queue. Also, a bulk dequeue of many objects does not cost more than a dequeue of a simple object.

    The disadvantages:

    * Size is fixed
    * Having many rings costs more in terms of memory than a linked list queue. An empty ring contains at least N pointers.

http://dpdk.org/doc/guides/prog_guide/ring_lib.html
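
Using it looks roughly like this (a C-style sketch; EAL setup is abbreviated and error handling mostly omitted, so treat it as illustration rather than a working app):

    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_ring.h>

    int main(int argc, char **argv) {
        if (rte_eal_init(argc, argv) < 0)
            return -1;

        // Single-producer/single-consumer ring holding up to 1024 pointers.
        struct rte_ring *ring = rte_ring_create("msg_ring", 1024, rte_socket_id(),
                                                RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (ring == NULL)
            return -1;

        static int payload = 42;
        rte_ring_enqueue(ring, &payload);           // producer side: returns 0 on success

        void *obj = NULL;
        if (rte_ring_dequeue(ring, &obj) == 0) {    // consumer side
            int value = *(int *)obj;
            (void)value;
        }

        rte_ring_free(ring);
        return 0;
    }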



