We are clearly working with very different kinds of event loops and asynchronous programming then.
I think you use "in general" to mean "in a specific subset" here...
Outside a particular subset of async programming styles, it is not true that every step in async programming is sequentially consistent.
The concept of taking an async mutex is not that unusual. Consider taking a lock on a file in a filesystem, in order to modify other files consistently as seen by other processes.
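To make the async-mutex point concrete, here is a minimal sketch in Python's asyncio (the names and the `asyncio.Lock` choice are mine, just to illustrate the concept): tasks interleave at every await, so a multi-step update needs the lock to stay consistent.

```python
import asyncio

results = []

async def consistent_update(lock, name):
    # Take the async mutex: other tasks interleave at every await, so
    # without the lock the two appends could be separated by another task.
    async with lock:
        results.append(f"{name} start")
        await asyncio.sleep(0)   # an event-loop turn happens mid-critical-section
        results.append(f"{name} end")

async def main():
    lock = asyncio.Lock()        # the async mutex
    await asyncio.gather(consistent_update(lock, "a"),
                         consistent_update(lock, "b"))

asyncio.run(main())
print(results)  # each task's start/end pair stays adjacent
```

Without the `async with lock`, the `sleep(0)` would let task "b" run between "a start" and "a end".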
In your model where everything is fully consistent between events, assuming you don't freeze the event loop waiting for filesystem operations, you've ruled out this sort of consistent file updating entirely! That's quite an extreme limitation.
In actual generality, where things like async I/O take place, you must deal with consistency cleanup when destroying event-driven tasks.
For an example that I would think fits what you consider a reasonable model:
You open a connection to a database (requiring an event because it has a time delay), submit your transaction's reads and writes (more events, because of the time to read or to stream large writes), then commit and close (a third event). If you kill the task between steps 2 and 3 by simply deleting the pending callback, what happens?
What should happen when you kill this task is the transaction is aborted.
But in garbage-collected environments, immediate RAII is not available, and the transaction will linger, taking resources until it's collected: a lingering connection still holding transaction data. This is a common problem with database connections.
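The lingering-transaction scenario can be sketched in asyncio with a toy stand-in for a database driver (all the names here are hypothetical, not a real API). asyncio delivers cancellation as an exception rather than literally deleting the callback, but with no handler in the task the effect is the same: the commit never runs.

```python
import asyncio

# Toy stand-in for an async database driver; the names are hypothetical.
class FakeTxn:
    def __init__(self):
        self.state = "open"          # open -> committed; lingers if the task is killed

    async def execute(self, sql):
        await asyncio.sleep(0)       # event 2: time to read / stream writes

    async def commit(self):
        await asyncio.sleep(0)       # event 3: commit round-trip
        self.state = "committed"

async def worker(txn):
    await asyncio.sleep(0)           # event 1: connection delay
    await txn.execute("UPDATE ...")  # step 2
    await asyncio.sleep(3600)        # more in-flight work
    await txn.commit()               # step 3: never reached if killed above

async def main():
    txn = FakeTxn()
    task = asyncio.create_task(worker(txn))
    await asyncio.sleep(0.01)        # let it get past step 2
    task.cancel()                    # kill the task mid-transaction
    try:
        await task
    except asyncio.CancelledError:
        pass
    return txn.state

state = asyncio.run(main())
print(state)  # prints "open": never committed, never aborted, just lingering
```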
In a less data-laden version, you simply opened, read, and closed a file. This time, it's a file handle that lingers until collected.
You can call the more general style "broken" if you like, but it doesn't make problems like this go away.
These problems are typically solved by having a cancellation-cleanup handler run when the task is killed, either inline in the task (its callback is called with an error meaning it has been cancelled), or registered separately.
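The inline variant is a sketch away in asyncio, where the cancelled task sees the cancellation as an exception and can abort the transaction before going away (`FakeTxn` and its `rollback` are again hypothetical stand-ins):

```python
import asyncio

# Hypothetical stand-in for a driver's transaction object.
class FakeTxn:
    def __init__(self):
        self.state = "open"

    async def rollback(self):
        self.state = "aborted"       # the cleanup the task owes on cancellation

async def worker(txn):
    try:
        await asyncio.sleep(3600)    # mid-transaction work; commit would follow
    except asyncio.CancelledError:
        await txn.rollback()         # inline handler: the task sees the cancel
        raise                        # re-raise so the task still counts as cancelled

async def main():
    txn = FakeTxn()
    task = asyncio.create_task(worker(txn))
    await asyncio.sleep(0.01)
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return txn.state

state = asyncio.run(main())
print(state)  # prints "aborted": the cleanup handler ran before the task died
```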
They can also be solved by keeping track of all resources to clean up, including database and file handles, and anything else. That is just another kind of cleanup handler, but it's a nice model to work with; Erlang does this, as do unix processes. C++ does it via RAII.
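The resource-tracking variant can be sketched as a tiny registry (entirely illustrative, not any particular library's API): every acquired resource registers a cleanup callback, and killing the task just runs whatever is registered, in reverse acquisition order as RAII would.

```python
# Minimal resource-tracking sketch: acquiring a resource registers its
# cleanup; killing the task runs all registered cleanups.
class TrackedTask:
    def __init__(self):
        self.cleanups = []

    def track(self, close):
        self.cleanups.append(close)

    def kill(self):
        # Release in reverse acquisition order, RAII-style.
        for close in reversed(self.cleanups):
            close()
        self.cleanups.clear()

closed = []
t = TrackedTask()
t.track(lambda: closed.append("db handle"))    # acquired first, released last
t.track(lambda: closed.append("file handle"))  # acquired last, released first
t.kill()
print(closed)  # prints ['file handle', 'db handle']
```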
In any case, all of them have to do something to handle the cancellation, in addition to just deleting the task's event handlers.