Slightly off-topic, but as far as Rust goes, I've completely given up on it after realizing (after much study) that it is totally unsuitable for event-driven (callback-based) design. In short, its idea of ownership, borrowing and references is incompatible with "lower" objects knowing about their "parents" and invoking callbacks on them.
I know there are ways to work around this, but I don't see any that is sufficiently good for me (e.g. low boilerplate, scalable/composable, no compromise on performance). Some details are in my question on StackOverflow [1].
One particular example that C and C++ allow is to have an object "self-destruct" in a callback (e.g. after a TCP disconnect, the EventLoop calls back into Socket, which calls back into MyClient, which directly destroys MyClient, including the Socket). Yes, it is indeed valid to call a (virtual) function on a class and for that function to proceed to delete the object [2]. This is very nice because it removes the need for special cleanup code all over the place and/or half-dead states. In Rust, this seems impossible by design (i.e. it would need unsafe, and it's unclear whether the compiler even allows it then).
Rust is very, very explicit. Whereas every event-driven framework is (usually) a full-throttle magical land of monkey patching or wrapping everything, plus the not-so-well-advertised but very much forbidden dark marshes full of nasty blocking I/O goblins. Usually this means there's no proper API to use the async parts without the magic of the event loop.
In Rust you'd have to either make a few global structures (e.g. with lazy-static: https://github.com/rust-lang-nursery/lazy-static.rs ) to keep track of the event-driven state, or pass them into every callback, or make every callback somehow derive from (or be derived from) a common structure that does this bookkeeping.
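For instance, a minimal sketch of the lazy-static route (the struct and function names here are made up):

    #[macro_use]
    extern crate lazy_static;

    use std::sync::Mutex;

    // hypothetical global bookkeeping for the event-driven state
    struct EventState {
        open_connections: Vec<u32>,
    }

    lazy_static! {
        static ref STATE: Mutex<EventState> =
            Mutex::new(EventState { open_connections: Vec::new() });
    }

    fn on_connect(id: u32) {
        // no unsafe needed: lazy_static plus a Mutex gives safe global mutable state
        STATE.lock().unwrap().open_connections.push(id);
    }

    fn main() {
        on_connect(1);
    }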
So far the language doesn't have syntactic sugar for this, but I think it'll be there in a few years. The compiler is up to the task (as you can already see a few 3rd party macro-driven solutions for similar "magic" things - such as serde's custom macros).
But since every event-driven thing can be implemented as a queue and a consumer thread pool (which is how it's implemented under the event-driven hood anyway), I don't think Rust is a non-starter for event-driven solutions. Though I'm inclined to agree that the extra care needed to satisfy the type system to get easy callbacks is annoying.
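Something like this is the basic queue-plus-consumer shape I mean (a single consumer thread for brevity; a pool would just add more workers, and all names are made up):

    use std::sync::mpsc;
    use std::thread;

    // hypothetical events produced by the I/O layer
    enum Event {
        Connected(u32),
        Data(u32, Vec<u8>),
        Disconnected(u32),
    }

    fn main() {
        let (tx, rx) = mpsc::channel::<Event>();

        // the consumer owns all per-connection state, so nothing is shared
        let consumer = thread::spawn(move || {
            for event in rx {
                match event {
                    Event::Connected(id) => println!("client {} connected", id),
                    Event::Data(id, bytes) => println!("client {}: {} bytes", id, bytes.len()),
                    Event::Disconnected(id) => println!("client {} gone", id),
                }
            }
        });

        tx.send(Event::Connected(1)).unwrap();
        tx.send(Event::Data(1, vec![1, 2, 3])).unwrap();
        tx.send(Event::Disconnected(1)).unwrap();
        drop(tx); // closing the sender ends the consumer's loop
        consumer.join().unwrap();
    }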
I have to disagree about the magical monkey patching. Event-driven systems can be made very simple and easy to understand, and in such a way that much boilerplate is avoided. It's just that most programmers are not capable of doing that, and the frameworks that become popular usually aren't subsequently "fixed".
A few hints on how to do it:
- no shared_ptr / ref counting, make object ownership clear
- design to allow destroying an object at any time even from a callback
- don't make callbacks harder than they need to be (use virtual functions or simple macro hackery to reduce boilerplate of using function pointers; don't introduce "signals and slots")
Signals and slots seem like a way of introducing an extra layer of type safety and explicitness - you cannot accidentally pass the wrong function pointer to a handler, and it's easier to extract your wiring graph. They don't make things all that much harder either; they add a slight bit of ceremony, but it's worth it. (Mostly based on my experience using Qt.)
At least in the Qt implementation, signals and slots make it easy to forget to connect essential signals, and hard to understand which signals are essential in the first place. I also feel like they tend to encourage making the interface more complex than it needs to be.
On the other hand, with function pointers or virtual functions, you can easily ensure a required callback is provided, by requiring the user to pass it in the constructor.
I don't see any difference related to "accidentally passing the wrong function". In either case you need to specify which function to call and on which object; with virtual function callbacks it's even easier to do and harder to get wrong. Type checking can also be done by the compiler in either case (even for function pointers; at least in C++ it is possible to make a type-safe wrapper, see my implementation of lightweight callbacks [1]).
I'm just now completing a rather large event-driven system in Rust. It's also callback-based and has to interoperate with C.
I do struggle with the ownership rules sometimes, but it's pretty much always because I'm clashing with the bad habits I learned coding in C/C++, or I'm shooting my future self in the foot because my limited cognitive capabilities can't keep all code paths in memory perfectly all the time.
The borrow checker came with an upfront cognitive cost that ultimately saved me from multithreaded async I/O madness later on. I can sleep at night knowing my multithreaded async system is more sound and easier to safely extend/maintain than it would have been had I created it in C/C++.
I can see why some people might give up on Rust. In C++ you can take the easy way out (use mutable aliasing etc.) and quickly get things 'working'. You don't have it so easy in Rust because you have to carefully think about end-to-end ownership.
Also in Rust I found that when I did make bad design decisions it was sometimes a lot more work (redesigning structs, shifting ownership responsibility) than it would have been in C/C++ (because I probably would have laid land-mines for my future self by mutably aliasing things etc.).
It would probably help if you described how you build a large, event-driven system with callbacks in Rust. Then the parent and anyone else doubting it would see specifically how to go about it without the problems they describe. Also, any links you have to posts specifically on good style or structure for these in Rust would be helpful, if you can't give details on your own work for whatever reason.
You can do what you describe as long as you use Rust's reference counting support (Rc and Arc), which includes weak references that can be used for parent pointers, plus RefCell and Mutex for mutation.
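For the parent-pointer part, a minimal sketch (hypothetical names; the Weak reference is what avoids a strong reference cycle between parent and child):

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct Parent {
        children: RefCell<Vec<Rc<Child>>>,
    }

    struct Child {
        // a weak parent pointer: it does not keep the parent alive
        parent: Weak<Parent>,
    }

    fn main() {
        let parent = Rc::new(Parent { children: RefCell::new(Vec::new()) });
        let child = Rc::new(Child { parent: Rc::downgrade(&parent) });
        parent.children.borrow_mut().push(child.clone());

        // the child can call "up" into its parent only while the parent is alive
        if let Some(p) = child.parent.upgrade() {
            println!("parent has {} children", p.children.borrow().len());
        }
    }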
To have an object "self-destruct", have it remove all reference counted pointers to itself.
You can pass an &Rc<Self> (or &Rc<RefCell<Self>> or &Arc<Mutex<Self>>) as the self parameter if you want to let an object create references to itself. If you want an object to be able to drop itself immediately in that case, use an Rc<Self> instead of an &Rc<Self>, and call drop(self) in the method (this also works if you are not using reference counting and just pass Self, of course).
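A rough sketch of the Rc<Self> variant (hypothetical names; the event loop holds the only long-lived strong reference, so removing it and then dropping self destroys the object):

    use std::cell::RefCell;
    use std::rc::Rc;

    struct EventLoop {
        clients: RefCell<Vec<Rc<Client>>>,
    }

    struct Client {
        id: u32,
    }

    impl Client {
        // taking self by value as Rc<Self> means this method owns one strong
        // reference; once the event loop forgets us, dropping self frees the Client
        // (assuming nothing else still holds an Rc to it)
        fn on_disconnect(self: Rc<Self>, event_loop: &EventLoop) {
            event_loop.clients.borrow_mut().retain(|c| c.id != self.id);
            drop(self);
        }
    }

    fn main() {
        let event_loop = EventLoop { clients: RefCell::new(Vec::new()) };
        let client = Rc::new(Client { id: 1 });
        event_loop.clients.borrow_mut().push(client.clone());
        client.on_disconnect(&event_loop); // the Client is freed here
    }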
You should try not to do this kind of thing though, because it adds overhead and the compiler cannot statically check that you are not leaking reference-counted cycles, deadlocking on mutexes, or hitting conflicting borrows on RefCells (which is not a Rust limitation; it's just impossible without having the programmer write a machine-checkable proof).
If you do it in C++ the compiler also cannot check whether you are referencing freed memory or incorrectly concurrently modifying the same object.
Rc<RefCell<T>> is not high boilerplate at all, and it's a super common Rust pattern. If you think this is too much boilerplate then I'm really not sure what you were expecting. It's just composition of reference counting memory management and mutability, a great example of modular design. What more did you want?
Rc means dynamic memory allocation, which means every such element of your application ends up in its own dynamically allocated block. This is inefficient and highly undesirable for resource-restricted / embedded applications.
Right, you shouldn't reflexively use Rc everywhere. That seems like a completely different topic, though: what I was responding to was the assertion that composition of Rc and RefCell was "boilerplate".
Using an Rc<RefCell<T>> comes down to one method call, .borrow_mut() - it's not exactly magic, but it's far from significant boilerplate either. Additionally, it's not a "workaround", it's an inherent part of most useful Rust code. Servo contains 79 uses of borrow_mut.
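For anyone unfamiliar, the whole pattern boils down to something like this:

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        let counter = Rc::new(RefCell::new(0));
        let alias = counter.clone();   // a second owner of the same cell
        *alias.borrow_mut() += 1;      // runtime-checked mutable borrow
        assert_eq!(*counter.borrow(), 1);
    }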
Note that Servo has a GC (the spidermonkey one) since it deals with a managed DOM, so it is an atypical example. Most Rust code I've seen has far less RefCell usage; but yes, RefCell is pretty idiomatic and the boilerplate is minimal.
I have a protocol implementation in development which looks somewhat like this (minus the "delete this" part: I just keep my Peers in a bunch of vectors and remove them from those; Rc then drops the peer when the function ends and nobody is capable of accessing it anymore). Turns out it's relatively easy - it's what the Rc<RefCell<T>> type is for.
I'm coming from Python, so the odd simple integer/boolean check (which is what Rc and RefCell come down to) isn't an issue for me - it might be for you depending on what you're writing, I suppose, although 99% of the time it's not going to be what you need to optimise.
However - Rust is not an object-oriented language, and this sort of design isn't necessarily what you want to be using. In your particular case, what you'd probably want is the EventLoop owning the Socket and MyClient, calling back into MyClient directly, and allowing MyClient's callback to return a value describing whether to destroy it or not. Libraries like rotor[0] do exactly this.
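Roughly this shape, that is (not rotor's actual API, just the general idea with made-up names):

    // the event loop owns the client and destroys it based on the return value
    enum Action {
        KeepGoing,
        Destroy,
    }

    struct MyClient;

    impl MyClient {
        fn on_readable(&mut self, data: &[u8]) -> Action {
            if data.is_empty() {
                Action::Destroy // peer closed the connection
            } else {
                Action::KeepGoing
            }
        }
    }

    struct EventLoop {
        clients: Vec<MyClient>,
    }

    impl EventLoop {
        fn dispatch(&mut self, index: usize, data: &[u8]) {
            if let Action::Destroy = self.clients[index].on_readable(data) {
                self.clients.swap_remove(index); // the owner drops the client here
            }
        }
    }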
Well yes, I do want to avoid any unnecessary overhead, and specifically I want to avoid any dynamic memory allocation. Consider a design for a hard real-time system and/or microcontrollers where you want to have all memory allocated statically. In C++ this is pretty easy to do if you want it (hint: don't use shared_ptr).
I'd agree that Rust doesn't manage to entirely avoid dynamic memory allocation unless you use unsafe code... but as you say, you can't use the safer parts of C++ without running into the same issues.
There's also a chance that you might be able to encapsulate your unsafe code behind safe abstractions, and Rust can help prove the rest is memory-safe.
Right now I'm working on something where the choice was between Nim and Rust. Since I required integration with lots of C/C++ code and there are callbacks from C that execute my code, Nim was the clear choice.
The main app is C that links all my libs together, but the actual C code just calls into Nim to execute the code. I talk to all the C libs using Nim.
I'm very happy with that decision. It was a breeze to setup and is great to work with.
Have you considered that maybe event-driven programming is problematic?
As I see it, it's basically letting you ignore the context your code is called in, but the issue with that is, well, IME failing to understand that context properly is where nearly all bugs come from.
It's true that Rust's rules about aliasing are frequently, well, annoying, but it's hard for me to think that poor support for event-driven programming is that bad of a thing.
... FWIW, you should possibly take a look at Servo. Browsers have to support some level of eventing, so it seems like they probably have a system for this.
No, I long ago came to the conclusion that event-driven programming is the right way to program large-scale network applications. The problem is that it is often poorly supported in languages/libraries, and for some reason it seems to be looked upon as "elitist". I'm not sure if it's just the culture or if it really is harder for people to grasp.
I think the "totally unsuitable" is a quite hard statement, but I also came to the conclusion that Rust plus callbacks is not a match made in heaven while trying to implement a networking library (although that was pre 1.0).
Wrapping everything into Rc<Refcell<>> and lots of dereferencing was one turn down (yes - C++ also needs shared_ptr<>) from the syntactic and ergonomic point of view. That might now be better due to some automatic derefing. The biggest issue that I had was that reference counted 'interfaces' (trait objects in rust) were not working in the required way at all (no required up and downcasting of boxed trait objects was possible). Don't know how this changed since then.
If I needed to perform the task again, I would probably try to model everything with synchronous/blocking API calls, as this seems to fit much better into Rust's ownership model, even though it would sometimes require lots of threads.
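For what it's worth, the blocking version is pleasantly boring in Rust, because each connection's state is simply owned by its thread. A rough sketch:

    use std::io::Read;
    use std::net::TcpListener;
    use std::thread;

    fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:4000")?;
        for stream in listener.incoming() {
            let mut stream = stream?;
            // the socket is moved into the thread, so there is no shared
            // mutable state and no fight with the borrow checker
            thread::spawn(move || {
                let mut buf = [0u8; 1024];
                while let Ok(n) = stream.read(&mut buf) {
                    if n == 0 {
                        break;
                    }
                    // handle &buf[..n] here
                }
            });
        }
        Ok(())
    }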
You may be able to prevent some specific issues using shared_ptr, but in my experience it is better long-term to design the application so that its use is unnecessary. One issue that pops up is reference cycles, and then you also need the weak_ptr boilerplate.
Specifically, I've found that it is possible to prevent any callback-related use-after-free and similar hazards by following these simple rules (a rough sketch follows the list):
- Callbacks should always originate from the lower layers (the event loop), and should never be invoked synchronously as part of a call from the upper layers. If you need to be called back soon, use a facility of the event loop that makes the event loop call you soon.
- When invoking a callback, "return" immediately afterward. If you want to do something after invoking a callback, do so by first queuing up a call to yourself from the event loop, using the same facility mentioned above. (Callbacks typically return void; I usually call them by "returning" them, i.e. return callback();)
- Make good use of destructors or a related facility (if your language doesn't literally have destructors) to ensure that when an object is destroyed, it will not receive any more callbacks from the objects that it used to own.
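Here's a rough sketch of rules 1 and 2 (no particular framework, just the general shape with made-up names):

    use std::collections::VecDeque;

    type Callback = Box<dyn FnOnce(&mut EventLoop)>;

    struct EventLoop {
        deferred: VecDeque<Callback>,
    }

    impl EventLoop {
        // "call me soon": queue the call instead of invoking it synchronously
        fn call_soon(&mut self, cb: Callback) {
            self.deferred.push_back(cb);
        }

        fn run_once(&mut self) {
            while let Some(cb) = self.deferred.pop_front() {
                // invoke the callback and "return" right after it: nothing in
                // this loop touches state the callback may have destroyed
                cb(&mut *self);
            }
        }
    }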
I think that a programming language could even enforce these rules at compile time (and the logic for this would be much simpler than Rust's ownership system).
Why do you want to use object semantics for resource management? It sounds like you are dealing with a dependency graph - wouldn't it be much simpler just to use a graph data structure?
I don't understand what you mean by "use a graph datastructure". I want to have a framework / set of patterns for building large-scale event-driven applications using composable components.
`Unsafe` doesn't mean you shouldn't do it, it just means you have to tell the compiler "I know what I'm doing, trust me" - and be careful, because it gives you fewer guarantees since it trusts you.
Indeed, you beat me to the punch. You can totally use unsafe. It just means that the compiler is essentially unable to _prove_ that your code is memory safe. That doesn't mean the author can't. Documenting the unsafe points in your code also tells you where to look if you do start seeing memory-related problems, which can be helpful for debugging.
[1] http://stackoverflow.com/questions/36952894/event-driven-des...
[2] http://stackoverflow.com/questions/3150942/c-delete-this