Author here. The book has just gone to production and will be available at the end of the year in dead-tree and electronic formats.
The book happened because I got annoyed at the fact that there is no good reference that shows how the condition system works, what foundations it requires, what basic building blocks it is composed of, and what its uses are, both in the domain of handling exceptional situations and, more importantly and less obviously, outside that domain.
The book also contains a description of a full implementation of a condition system that can (and hopefully will!) be used outside Common Lisp. There's hope that some languages will pick up these parts of Common Lisp that fully decouple the acts and means of signaling conditions from the acts and means of handling them, therefore allowing for means of dynamically controlled control flow that are currently very hard to achieve in other languages.
This may seem absurdly naive, but what is the CL condition system? Based on what I read, it seems like you need to know what it is before reading the book.
It is essentially an exception system, but more general and flexible. When you "throw an error", so to speak, the system does not unwind the stack until a matching handler is found and then execute it; instead, it looks through a list of handlers until it finds a match, and then calls it right then and there, on top of the signaling code. The handler can then run whatever code it wants, for example to unwind the stack, or it can return and let the search for a matching handler continue.
It could also choose to invoke a restart. Restarts are basically code to handle a problem, but they don't catch any conditions. Instead, they can be invoked, usually from a condition handler. So you would have an error being signaled at the lowest level of your code, various recovery strategies defined at appropriate intermediate levels of code, and at the highest level of code you could have a condition handler defined which picks whichever restart is best for the high level situation. So a library author can provide many different restarts, signal errors very deep in the library, and then let the library user choose the recovery strategy that fits their requirements.
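Sketched in Common Lisp, the shape of this is roughly as follows (the condition type, the function names, and the USE-DEFAULT restart are all invented for illustration):

```lisp
(define-condition parse-failure (error)
  ((input :initarg :input :reader parse-failure-input)))

(defun low-level-parse (string)
  ;; Signal an error, but also announce a recovery strategy
  ;; to whoever is above us on the stack.
  (restart-case (error 'parse-failure :input string)
    (use-default (value)
      :report "Supply a default value instead."
      value)))

(defun top-level ()
  ;; The handler runs *before* any unwinding happens;
  ;; here it picks the USE-DEFAULT restart.
  (handler-bind ((parse-failure
                   (lambda (condition)
                     (declare (ignore condition))
                     (invoke-restart 'use-default 42))))
    (low-level-parse "not-a-number")))

(top-level) ; => 42
```

If the handler simply returned instead of invoking a restart, the search for another matching handler would continue, and failing that, the debugger would be entered.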
If no matching handler is found, typically the debugger is invoked, and when running with a suitable IDE like Emacs with Slime or Sly, a window with a stack trace will pop up. There, any available restarts can be invoked directly by the programmer while debugging.
Finally, the condition system can be used to signal non-erroneous conditions that can optionally be handled.
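A minimal sketch of such a non-error condition (the PROGRESS condition type and the function names are invented for illustration):

```lisp
;; A condition that is deliberately NOT a subtype of ERROR.
(define-condition progress (condition)
  ((percent :initarg :percent :reader progress-percent)))

(defun long-computation ()
  (dotimes (i 5)
    ;; SIGNAL simply returns NIL if nobody handles the condition;
    ;; unhandled non-error conditions are silently ignored.
    (signal 'progress :percent (* i 25))))

;; A caller may opt in to observing progress:
(handler-bind ((progress (lambda (c)
                           (format t "~&~D%~%" (progress-percent c)))))
  (long-computation))
```

The computation neither knows nor cares whether anyone is listening; the handler is an optional observer established from outside.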
Let me answer by posting an introduction to the condition system by Kent M. Pitman. It is the first subchapter of the book.
------
There have been many attempts to declare the Lisp family of languages dead, and yet it continues on in many forms. There are many explanations for this, but an obvious one is that it still contains ideas and features that aren't fully appreciated outside the Lisp community, and so it continues as both a refuge and an idea factory.
Gradually, other languages see the light and these important features migrate to other languages. For example, the Lisp community used to be unusual for standing steadfastly by automatic memory management and garbage collection when many said it couldn't be trusted to be efficient or responsive. In the modern world, however, many languages now presume that automatic memory management is normal and natural, as if this had never been a controversy. So times change.
But proper condition handling is something which other languages still have not figured out that they need. Java's try/catch and Python's try/except have indeed shown that these languages appreciate the importance of representing exceptional situations as objects. However, in adopting these concepts, they have left out restarts --- a key piece of the puzzle.
When you raise an exception in Python, or throw one in Java, you are still just performing an immediate and blind transfer of control to the innermost available handler. This leaves out the rich experience that Common Lisp offers to perform actual reasoning about where to return to.
The Common Lisp condition system disconnects the ability to return to a particular place in the program from the necessity to do so, and adds the ability to "look before you leap." In other languages, if you create a possible place to return to, that is what will get used. There is no ability to say "If a certain kind of error happens, this might be a good place to return to, but I don't have a strong opinion ahead of time on whether or not it is definitely the right place."
The Common Lisp condition system separates out three different activities: describing a problem, describing a possible solution, and selecting the right solution for the right problem. In other languages, describing a possible solution is the same as selecting that solution, so the set of things you can describe is necessarily less expansive.
This matters, because in other languages such as Python or Java, by the time your program first notices a problem, it already will have "recovered" from it. The "except" or "catch" part of your "try" statement will have received control. There will have been no intervening time. To invoke the error handling process IS to transfer control. By the time any further application code is running, a stack unwind already will have happened. The dynamic context of the problem will be gone, and with it, any potential intervening options to resume operation at other points on the stack between the raising of the condition and the handling of an error. Any such opportunities to resume operation will have lost their chance to exist.
"Well, too bad", these languages would say. "If they wanted a chance, they could have handled the error." But the thing is, a lot of the business of signaling and handling conditions is about the fact that you only have partial knowledge. The more uncertain information you are forced to supply, the more your system will make bad decisions. For best results, you want to be able to defer decisions until all information is available. Simple-minded exception systems are great if you know exactly how you want to handle things ahead of time. But if you don't know, then what are you to do? Common Lisp provides much better mechanisms for navigating this uncertain space than other languages do.
So in Common Lisp you can say "I got an argument of the wrong type. Moreover, I know what I would do with an argument of the right type, I just don't happen to have one or know how to make one." Or you can say "Not only do I know what to do if I'm given an argument of the right type (even at runtime), but I even know how to store such a value so they won't hit this error over and over again." In other languages, if the program doesn't know this correctly-typed value, even if you (the user) do know it at runtime, you're simply stuck.
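This scenario maps onto the standard USE-VALUE and STORE-VALUE restarts. A small sketch (CAREFUL-SQRT is an invented name; CHECK-TYPE, STORE-VALUE, and TYPE-ERROR-DATUM are standard Common Lisp):

```lisp
(defun careful-sqrt (x)
  ;; CHECK-TYPE establishes a STORE-VALUE restart: if a handler
  ;; supplies a value of the right type, it is stored into X and
  ;; the type check is retried.
  (check-type x (real 0))
  (sqrt x))

;; A handler that knows how to repair the argument:
(handler-bind ((type-error
                 (lambda (c)
                   ;; STORE-VALUE invokes the restart of the same name.
                   (store-value (abs (type-error-datum c)) c))))
  (careful-sqrt -4)) ; => 2.0
```

The function signaling the error knows how to store a corrected value; it is the handler, established arbitrarily far up the stack, that decides what that value should be.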
In Common Lisp, you can specify the restart mechanism separately from the mechanism of choosing among possible restarts. Having this ability means that an outer part of the program can make the choice, or the choice can fall through to a human user to make. Of course, the human user might get tired of answering, but in such a case, they can wrap the program with advice that will save them from the need to answer. This is a much more flexible division of responsibility than other languages offer.
Am I allowed to think the described use case less than compelling?
It's especially not compelling that the caller passed the wrong type, the called function would like a different type, and somehow a third piece of code would know both ends of the situation and fix it instead of the caller or callee.
It all seems like an artificial and convoluted use case.
Software designs tend to fix statically whether they'd like to abort (throw) or report (callback). Conditions may be general, but to me they seem to mostly enable unnecessarily open-ended, complex designs.
If the supposed gain is that a single system can do it all, I again don't find it a convincing argument. In fact, I tend to prefer a one-goal system, where different use cases are easily differentiable because they are different. IOW, that throwing an exception, calling a callback, sending a signal, or converting a value should all look different is a plus.
This is something I've noticed: one starts with a rigid design, then adds abstractions. But at some point of over-abstraction, the design becomes incomprehensible because it is so generic that it becomes meaningless.
Sure. The type-error example shown here is easily fixable by languages which utilize static typing and therefore make invalid code uncompilable, but it has the advantage of being easily understood by almost all programmers.
A more contrived example would be a situation in which some piece of data (e.g. a worker's monthly timesheet report) is passed between modules of a programming system, but the receiving module, upon performing validation, discovers e.g. that the employee was working during a holiday.
Handling that situation is hard, since there are multiple ways of handling it, each of them valid in its own specific context. If the employee is in another country where that day is not a holiday, we should proceed without any other actions. If the employee has an agreement with their manager that they are having crunch time, then the system should proceed after applying overtime payment. If the employee is on a flexible time schedule, we should proceed and log this somewhere else; if the employee has no justification for that overtime, we should abort and signal an error; more examples follow.
In other words, when we signal the condition, we do not have full information about what should happen to it. We have the question "What should we do with this timesheet?" and the answer to that question is "It depends."
It is possible to model this situation by a condition type named e.g. EMPLOYEE-WORKING-DURING-HOLIDAY, and instances of that condition type being signaled inside the programming module. We create a dynamic environment where the proper handler routines for EMPLOYEE-WORKING-DURING-HOLIDAY are established, and we call the module's validator function inside that environment.
This process fully decouples the act of signaling a condition from the act of choosing whether to handle that situation and also from the act of choosing how to handle that situation.
One can (and should) document the condition type in the design specifications, and also describe functions that are allowed to signal it.
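A rough sketch of how this could look in code (all names and the simplified validator body are illustrative, not from any real system):

```lisp
(define-condition employee-working-during-holiday (error)
  ((timesheet :initarg :timesheet :reader offending-timesheet)))

(defun validate-timesheet (timesheet)
  ;; For the sketch, pretend validation always detects the problem.
  ;; The validator announces the problem and the locally meaningful
  ;; recoveries, without deciding which one applies.
  (restart-case (error 'employee-working-during-holiday
                       :timesheet timesheet)
    (continue ()
      :report "Not a holiday for this employee; accept the timesheet."
      :accepted)
    (add-overtime ()
      :report "Apply overtime payment, then accept."
      (list :accepted-with-overtime timesheet))))

;; The calling module decides the policy, from outside:
(handler-bind ((employee-working-during-holiday
                 (lambda (c)
                   (declare (ignore c))
                   (invoke-restart 'add-overtime))))
  (validate-timesheet '(:employee "jan" :hours 8)))
;; => (:ACCEPTED-WITH-OVERTIME (:EMPLOYEE "jan" :HOURS 8))
```

Swapping the policy (e.g. invoking CONTINUE, or declining and letting the error reach the debugger) requires touching only the handler, never the validator.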
I'm afraid of sounding negative, but isn't this example simply a callback?
The described EMPLOYEE-WORKING-DURING-HOLIDAY is akin to a system that has a slot for registering a callback (or a signal which can be connected to a receiver, to use a Qt-like design) which can handle the situation or decide to raise an exception.
Again, I understand that it might be attractive to have a single unified system to handle the different possible situations instead of separate systems. I find it hard to imagine a case where the decision to be exception-like, callback-like, or restart-like is made dynamically at run-time by any or all participants. And like I said, I'd be afraid that if such a case came up, it would make understanding the design harder, not simpler.
For example, as much as Qt's signal/slot mechanism is powerful and flexible, I've found that when used fully, it makes understanding the code hard, because it becomes impossible to know what will happen when a signal is raised - the handling is so thoroughly decoupled.
Yes, you are correct! (And you don't sound negative just yet; skeptical, if anything.) The mechanism of handlers is literally a mechanism of dynamically provided callbacks. I draw the parallel between condition handlers and callbacks/hooks in the body of the book, too.
The difference is the fact that Common Lisp has facilities that allow two things: choices of what and how to proceed and flexible non-local returns. The callback is allowed to list all recovery choices that are present in a given dynamic environment; they are established dynamically, just like handlers, and therefore can be provided fully from outside.
For instance, a handler/callback can invoke a choice named CONTINUE if it decides that absolutely nothing needs to be done and the validation is safe to proceed. The validating code doesn't need to know why exactly that choice was taken.
Or, if it notices that e.g. it needs to convert a timesheet from version 3.0 to version 4.0, it can call a conversion function on the timesheet object and invoke a choice named REVALIDATE, passing the converted timesheet as an argument. Note that the validating code doesn't need to know about any details of the conversion routine! It only needs to provide a means of restarting the validation by establishing a REVALIDATE choice.
Or, if it decides that the situation is hopeless, it can signal an error of its own and defer the responsibility of handling that situation higher up - all the way to the system debugger, if necessary. The validating code doesn't need to know about any details of why an error was signaled! That's a ton of modularity that we've just given there.
These choices are actually named restarts in Common Lisp. I purposefully name them choices, though, as I introduce them in the book.
Condition handling is an evolution of callback-based handling methods. A condition handler is a kind of callback. The way it is registered and invoked is different. It doesn't have to be passed down as an argument. It is tied to an exception type, which is linked into an inheritance hierarchy that allows for flexibility in searching for a handler. A handler can decline to handle an exception simply by returning; then the exception search can find another handler.
They are callbacks, but when they return they don't return to you. Instead they return back up the stack to whoever called the function that signalled the condition.
> Instead they return back up the stack to whoever called the function that signalled the condition.
Not necessarily. They may return to any point on the stack that has been "announced" as a valid return point. One can compute a list of these return points, choose which one to utilize, and under which conditions.
That's what constitutes the power of restarts (and non-local exits) in Common Lisp.
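For instance, a handler can enumerate the announced return points with the standard COMPUTE-RESTARTS function and pick one programmatically (the SKIP and BAIL restart names here are made up for the sketch):

```lisp
(restart-case
    (handler-bind ((error
                     (lambda (c)
                       ;; List every return point currently announced
                       ;; on the stack...
                       (format t "~&Available: ~{~A~^, ~}~%"
                               (mapcar #'restart-name (compute-restarts c)))
                       ;; ...and choose one programmatically.
                       (invoke-restart (find-restart 'skip c)))))
      (restart-case (error "Something broke")
        (skip () :inner-skipped)))
  (bail () :outer-bailed))
;; => :INNER-SKIPPED, after printing the available restart names
;;    (SKIP, BAIL, plus any implementation-provided ones like ABORT)
```

The handler sees both SKIP and BAIL as candidates; nothing forces it to pick the innermost one, as a classic exception system would.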
Another disconnect people often have is that you can actually wait for user input before deciding which restart to use. You can't quite stash the condition in a database and wait until everyone returns from the holiday to resume progress, but that's merely a technical limitation of the normal lisp compilers that are currently available.
> You can't quite stash the condition in a database and wait until everyone returns from the holiday to resume progress, but that's merely a technical limitation of the normal lisp compilers that are currently available.
I think that in a multithreaded environment you can try to hang a particular thread by forcing it into the debugger in order to preserve all stack information inside it, but I don't know if that is feasible in the long run on production systems since it opens a gate to possibly DoSing the machine.
Or just have it wait for an RPC message. A thread waiting on IO doesn't cost much, and you can always spin up more to do the work it was supposed to be doing.
But yea, conditions weren't designed from the start with distributed systems in mind, so you almost need a runtime that can save a whole thread to disk and recover it later. Back in the day though you could just open a window and ask the user what to do directly.
I think it depends on your OS. IIRC, on Windows, it was fairly easy to destabilize the OS by spawning threads in a loop and starving other processes on the system. I haven't used Windows in a long while though, so I cannot say for sure.
In my (admittedly limited) experience with the CL condition system, I've seen it used, usefully, in 3 main ways.
Firstly, for pretty much what the parent describes. Essentially, to help you continue debugging in the presence of incompatible callers/callees, typos, etc. Even with very disciplined developers, these are pretty common in image-based, dynamic environments (Smalltalk has similar issues in my experience). Restarting from scratch would be exceptionally painful, so a "patch and continue" approach is very useful. I fully concur that this isn't a compelling use case even in weakly typed languages.
Secondly, for exception throwing and catching of similar ilk to Java/C# etc. In other words, in cases where a less general condition system would have worked as well.
Thirdly, where there are 2 or 3 reasonable options to recover from an error condition, generated in library code, and these are coded with the error. The policy regarding how to handle the error is then made at the application level.
You could do something similar without the CL condition system, say by passing flags around (or using globals) and switching on them, but this is super cumbersome/ugly and I don't think I've ever seen it used much.
But honestly, there were maybe 2 or 3 cases where the third use wasn't better coded as "try this one thing and, if that doesn't work either, bail".
So my assessment is that it's not crossed over to other languages mostly because of its limited usefulness in those environments. I contrast this with garbage collection and compile time coding which have more general utility and have successfully crossed over.
Multiple dispatch hasn't crossed over much either (the obvious exception being Julia via Dylan). Now, I did happen to find that very useful so I'm a little more surprised at that than I am at the generalised condition system.
Let me tell you about a use case I had not too long ago.
I'm trying to read in some Lisp code, from a wide variety of sources, as grist for a mutational fuzz tester. I don't want to have to define all the packages used in this code. So, when I come across FOO::BAR I want to be able to gracefully handle this rather than causing a reader error if the FOO package is undefined.
Restarts let me do this. In SBCL, there's a restart that lets the handler use a different package in this case. In my case, I just stuck the symbol in some other default package.
One could also imagine this being used as a load-on-demand system. The handler could look up FOO in some list of packages provided by various libraries and automatically load the right library, then return control to the reader to try now that the package was defined.
The reader itself doesn't have to know about any of these various ways of dealing with reader errors; it just has to provide the restarts so the various handlers can tell it how to proceed.
I don't intend to steal the thunder of the announcement of the new book, but I thought Peter Seibel explained it quite well and gave a convincing example in http://www.gigamonkeys.com/book/beyond-exception-handling-co.... It might be worth expanding on that though, as well as presenting some discussion of when and how to use it. And yes, I agree, I think restarts are one of the features I'd like to be available elsewhere too.
The PCL description is a good one. My book expands on it two ways.
First, I describe how the condition system can be used to work with conditions that are not errors and therefore do not require handling; Peter Seibel calls SIGNAL a "primitive" function, thereby completely skipping the scenario in which a condition may not be handled. And, by that, he skips non-error, non-warning conditions altogether!
Second, the PCL chapter does not describe the internals of a condition system, how the handlers and restarts are constructed. My book describes that in detail twice: once, building a sample condition-like system piecewise, and then again, describing the implementation of a complete, ANSI-compliant condition system.
The first part uses an example that you'd probably find not all that convincing (though it's a reworded example from Dan Abramov's post). The second one is not about error handling at all; instead, it shows how you can use non-error conditions to bolt a pseudo-UI on top of an operation that signals appropriately, using both restarts (to abort the operation) and signal handlers (as a sort of observer pattern).
I'd say the magic of CL's condition system is in restarts, and the ability to choose them programmatically. It's a powerful tool in API design, that allows you to cheaply expose out-of-band interface for monitoring and error recovery (and I mean actual recovery).
Suppose you're writing a module that extracts data from a bunch of files (possibly accessed over a network). In the high-level view, you have two layers there: one that loops over files, and another that goes over the data in a given file and extracts values of interest.
Writing this in Common Lisp, you could define a restart in the loop layer, allowing to retry a download, or substitute a different file URL. Then, in the file processing layer, you could define a restart around reading an invalid value, allowing to skip it or substitute it. These error conditions and restarts are now part of your module API. The users of that module could then choose different strategies for error recovery. In one case, if file access fails, you need to abort everything. In another, you need to skip it. In yet another, the caller knows of a backup data file, and can pass that information to the restart. Similarly, invalid values could be replaced by an appropriate default value, again provided by the caller in the context.
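A condensed sketch of that design, with toy stand-ins for downloading and processing (all names are invented, and a URL containing "bad" simulates a failure):

```lisp
(defun process-file (url)
  (restart-case
      ;; Stand-in for real download + extraction logic.
      (if (search "bad" url)
          (error "Cannot download ~A" url)
          (list :processed url))
    (skip-file ()
      :report "Skip this file."
      nil)
    (use-url (new-url)
      :report "Try a different URL."
      (process-file new-url))))

(defun process-all (urls)
  (mapcar #'process-file urls))

;; One caller's policy: substitute a known backup for broken files.
(handler-bind ((error
                 (lambda (c)
                   (declare (ignore c))
                   (invoke-restart 'use-url "http://backup/data"))))
  (process-all '("http://a/data" "http://bad/data")))
;; => ((:PROCESSED "http://a/data") (:PROCESSED "http://backup/data"))
```

Another caller could bind a handler that invokes SKIP-FILE instead, or bind no handler at all and let failures land in the debugger - all without touching the module.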
No, thank you for actually spreading the word about the condition system and stressing the parallel between algebraic effects and conditions. It's an important one.
Tentatively agree. It's a very abstract description with no concrete examples I can relate to. I've never found problem with try/except, which may be blindness to something genuinely useful (like the 'wow' moment when I saw FP using functions as 1st class objects), or not. I'd like to understand more.
If it's the timesheet example, it's much more useful in its concreteness, which allows me to pick it apart - which was the point.
Picked apart, I just can't see the value. You're hitting an unexpected condition and then trying to handle the resultant exception intelligently, whereas you should have considered that possibility up front and handled it non-exceptionally.
The only interesting part is that the error handling can be delegated to a human, but in a way that suggests to me you've not done your analysis/testing/speccing properly. In a normal system I'd have something like a dead-letter endpoint for such an unrecoverable delegate-to-the-human condition. I can see the extra flexibility that would bring, I admit, but at what cost in gaping holes?
> You're hitting an unexpected condition then trying to handle the resultant exception intelligently, whereas you should have considered that possibility up-front and handled it non-exceptionally.
That is the inverse of the idea. The only possibility we consider up front is that we might need to restart the process of validating the timesheet from scratch. Why we might need to restart it, and with what data we should restart the validation, we don't really know - we let the caller specify that. If they decide to make use of that restart strategy, we have successfully delegated the responsibility of making that choice and providing data for that restart elsewhere - to the caller.
We are therefore capable of decoupling the handlers and restarts from the real validation code. This means that the caller of the validation function can specify their recovery strategies at the calling site of the function or higher in the stack above it, and also that they can decide to choose to make only some of these strategies available in a context where other strategies simply do not apply.
> The only possibility we consider up front is that we might need to restart the process of validating the timesheet from scratch
This makes no sense. Unless you either a) alter the code or b) delegate to a human somehow[0], restarting the process is pointless - you'll get the same outcome.
> If they decide that they want to make use of that restart strategy, we have successfully delegated the responsibility of making that choice and providing data for that restart elsewhere - to the caller.
So get the caller to give that same info the first time, and the exception won't happen, because the exceptional condition has already been accounted for by "provid[ing] data" (that necessary data) the first time round.
[0] which you mentioned and I did find interesting
> This makes no sense. (...) restarting the process is pointless - you'll get the same outcome.
To quote a famous person, “insanity is doing the same thing, over and over again, but expecting different results.” Hence, when we restart in this scenario, we are not restarting blindly in the hope that something changes; we use the restart logic to alter the input to the validator function.
* We call the validator, which in S-expressions looks like (VALIDATE-TIMESHEET TIMESHEET). This signals an error and invokes condition handlers matching that error.
* Some condition handler notices that this is a timesheet in an old format, computes an updated version of the timesheet that we will call TIMESHEET2, and invokes a restart named REVALIDATE with TIMESHEET2 passed as an argument.
* This restart sets the TIMESHEET variable to the modified value of TIMESHEET2 and transfers control to a point just before the validator is invoked.
* And so, (VALIDATE-TIMESHEET TIMESHEET) is called again, but this time TIMESHEET is in a new format, so no error is signaled and validation progresses.
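Those four steps can be condensed into a sketch (the validator and the format check are toy stand-ins):

```lisp
(defun validate-timesheet (timesheet)
  (if (eq (getf timesheet :format) :new)
      (list :valid timesheet)
      (error "Old timesheet format: ~S" timesheet)))

(defun validate-with-revalidation (timesheet)
  ;; The loop is the "point just before the validator is invoked";
  ;; the REVALIDATE restart replaces TIMESHEET and transfers control there.
  (loop
    (restart-case (return (validate-timesheet timesheet))
      (revalidate (new-timesheet)
        (setf timesheet new-timesheet)))))

;; A handler that knows how to upgrade the format:
(handler-bind ((error
                 (lambda (c)
                   (declare (ignore c))
                   (invoke-restart 'revalidate '(:format :new :hours 8)))))
  (validate-with-revalidation '(:format :old :hours 8)))
;; => (:VALID (:FORMAT :NEW :HOURS 8))
```

Note that VALIDATE-WITH-REVALIDATION contains no conversion logic at all; the handler that supplies the new timesheet is established entirely from outside.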
All of the above can obviously be done statically, but the additional value is in decoupling the concrete means of handling the condition from the point of signaling it. The code that calls VALIDATE-TIMESHEET has no idea that any conversion took place.
If you consider that to be a good idea, I can write an article that gives a concrete code example of this behavior - that should be even more clear than the description that we have here. Should I?
I appreciate the offer of an article but let's leave that until we are both clearer. In your example
> Some condition handler notices that this is a timesheet in an old format, computes an updated version of the timesheet that we will call TIMESHEET2, and invokes a restart named REVALIDATE with TIMESHEET2 passed as an argument.
So why not just say
if newformat(ts) do processItTheNewWay(ts)
else if oldformat(ts) do
    newts = compute_updated_version_of_timesheet(ts)
    processItTheNewWay(newts)
(edit: fixed silly mistake)
you have the code for compute_updated_version_of_timesheet because you said "Some condition handler [..] computes an updated version of the timesheet that we will call TIMESHEET2", so the code must already exist.
So why expect exceptions when you can anticipate some inputs are reasonably going to cause exceptions and simply handle them upfront?
The only new thing I can think of now is that you can dynamically load a newly-written exception handler (compute_updated_version_of_timesheet, which you've rush-written because you only just found out about the old-style timesheets) into an already running program to handle this unexpected condition.
The value of the restart subsystem comes into play when there is not enough knowledge at the call site to make a good decision (and this is also likely where I've failed at coming up with an example that is both simple enough to understand by reading and complex enough to properly demonstrate the value of the decoupling).
The approach using restarts and the IF-based approach you've listed are fully equivalent in their functioning. I'll claim, though, that if one is capable of computing these situations up front and preparing the recovery strategies ahead of time, then we're no longer doing exception handling whatsoever. If you are able to compute all situations up front, then you don't have exceptional situations, so you don't have exceptions and therefore don't need exception handling! If anything, one needs proper flow control; these two terms have wrongly become synonymous in some contexts. That's also how I understand your phrase "why expect exceptions".
The timesheet example is very easy to understand because we know everything about the situation: there's a new version that can be an input, an old version that also can be an input, and only one of them does not signal an error. This is why one may also decide to solve this example with a strong type system - by defining different types (e.g. in Haskell) or classes (e.g. in Lisp) for distinct versions of the timesheet. If you can successfully use the "parse, don't validate" pattern commonly found in strictly typed, strictly functional code, then you don't need as many exceptions, because many exceptional situations become simply unrepresentable in code.
Your example assumes that one can easily couple the means of signaling a condition (conditionalizing on newFormat(ts)) and the means of handling it (calling compute-updated-version-of-timesheet(ts)) inside a singular block of code. That doesn't need to always be the case. Let's suppose that we've just run into the error and need to fix it - the situation that you described, an interactive hotfix of a running system. Fixing this properly requires modification of that piece of code that signaled an error, which requires digging into that module which we might not even have sources for, or we have the source but it's completely foreign to us and we can't afford debugging it on the spot.
We likely might need to develop a patch for that module where we now know that it has a fault and we may also need to live with it for X days before it is fixed upstream. When we have error-handling information stored in the dynamic environment and restart points sprayed across our stack, we have a situation where the error can be handled both without recompiling that faulting module and without destroying the stack - computation can continue when we've provided a proper recovery routine. We aren't getting crashes, stacks are not getting completely unwound.
So you can think of the condition system, or its combo of handlers and restarts, as an intricate system of callbacks and restart points. Every place that calls SIGNAL is a point that can execute arbitrary hooks/callbacks/code; every place on the stack that accepts non-local returns is a possible place that computation can return to in order to resume. This, along with the fact that Lisp is an interactive system (a lot of Lisp's power simply disappears when it's used like Java or Rust or any other batch language), allows for a lot of possible choices of how to handle This New Situation™ that just landed us in the debugger. We can fix it interactively, and possibly store this fix in a new condition handler that we then patch on top of our running program.
Of course, this hotfix might then end up getting factored into the code properly, and what was once a toplevel condition handler that processes some data and invokes some restarts becomes a proper part of the code: that situation is no longer unexpected or exceptional, because we are already prepared for it. (Or rather: we think that we are, and reality will validate that.)
But the condition system has served its purpose by then! It helped us keep the system alive and fix the issue on the spot.
EDIT: The main issue that makes writing examples for this troublesome is the curse of knowledge. If I write that the condition system allows one to recover from a type error, it's trivial to say "just add a typecheck for that with a conversion routine"; if I write that it allows one to recover from a format mismatch, it's trivial to say "just stick an ifIsOldFormat(...) in the code to solve it". These solutions are no longer about exception handling; they are modifications to the standard control flow of the program. They aren't trying to fix the situation from outside; they are trying to fix it from the inside. And fixing it from the inside isn't always viable, or even possible at all.
While control flow deals with known knowns and known unknowns, exception handling is much more about situations where we deal with unknown unknowns: something explodes, we don't know what it is, none of our programmed recovery mechanisms have taken care of that situation, and we need means of gracefully recovering.
This is achieved first and foremost by allowing human intervention: the debugger is started and the stack information is preserved, so a programmer has all the information needed to debug the situation and let the program continue execution. Some of these fixes can then be easily added by means of defining new programmatic condition handlers (hooks) and restarts (choices) in code.
(EDIT2: I wholeheartedly apologize for the wall of text. I'll just add it to the book instead - thank you for the good questions.)
This is a good answer and gets to the heart of what's been throwing me - cultural and technical differences.
> I'll claim, though, that if one is capable of computing these situations up front and preparing the recovery strategies ahead of time, then we're no longer doing exception handling whatsoever. If you are able to compute all situations up front, then you don't have exceptional situations, so you don't have exceptions and therefore don't need exception handling!
Exactly! This is a cultural difference; in my world you should only use exceptions for things that should never happen, but I suppose 'never' is a matter of expectation, and of code that follows that expectation.
> This is why one may also decide to solve this example by using a strong type system
Agreed, and to me this is the 'correct' solution. But that may be a counsel of perfection (and an overbearing cultural 'arrogance' on my part).
> That doesn't need to always be the case. Let's suppose that we've just run into the error and need to fix it - the situation that you described, an interactive hotfix of a running system.
This is both a cultural and a technical difference that I've been struggling with. This makes it clear what you're getting at.
> If you can successfully use the "parse, don't validate" pattern[...]
> We might need to develop a patch for that module, now that we know it has a fault, and we may also need to live with it for X days before it is fixed upstream. When we have error-handling information stored in the dynamic environment and restart points sprayed across our stack, the error can be handled both without recompiling the faulting module and without destroying the stack - computation can continue once we've provided a proper recovery routine.
That is a compelling use case.
> We aren't getting crashes, stacks are not getting completely unwound.
I came across Lisper's enthusiasm for such exception handling ability (based on having the stack not-unwound) when I was trying to learn Dylan, a long time ago. I'm starting to see what they were getting at. That doesn't mean I'm comfortable with it, but I do understand it rather better now.
> along with the fact that Lisp is an interactive system (a lot of Lisp's power simply disappears when it's used like Java or Rust or any other batch language) [...] We can fix it interactively, and possibly store this fix in a new condition handler that we then patch on top of our running program [...] But the condition system has served its purpose by then! It helped us keep the system alive and fix the issue on the spot [...] These solutions are no longer about exception handling; they are modifications to the standard control flow of the program. They aren't trying to fix the situation from outside; they are trying to fix the situation from the inside. And fixing it from inside isn't always viable or even possible at all [...] This is achieved first and foremost by allowing human intervention: the debugger is started, the stack information is preserved, so a programmer has all the information to try and debug the situation to let the program continue execution
OK, these explicit statements are very important and revealing.
> ...apologize for the wall of text
No need, it's an excellent answer, it's just a perspective that does not come from 'my side of the tracks'.
Glad I persisted, I feel I've learnt something important today. Good luck with the book!
An exception handling system in which throwing an exception does not unwind the stack can do things similar to generators or continuations, but without any of the implementation complexities and overhead. Plus there's the flexibility of the exception type system: handlers found by subclass matching, and the potential for a handler to examine and decline an exception so the search can go on to match another handler.
In TXR, I simplified the concept by eliminating the duality of conditions and restarts. There are only exceptions. Catch frames take an exception with unwinding; handle frames take an exception without unwinding (they can potentially decline it after looking at it). In the above example, keep-it? is an exception, and so are yes, no and substitute.
So, why would you do the above instead of just, say:
You almost certainly wouldn't; it's just an illustration. There is a relationship between the two in that the mappend is an incomplete specification of a behavior which requires a function to be specified by the caller.
In the exception example, that is happening also: a function is required from the caller to complete the behavior. But that function isn't passed as a parameter; it is dynamically bound as a handler which is labeled by an exception symbol. The function doesn't simply return a value, but chooses from a menu of restart points.
That points to a whole different set of ways of organizing the code, and options for extending and maintaining it.
Sometimes a single operation has multiple ways of failing, each with their own ideal recovery / retry plan.
You want to catch the error, examine context (app state, error type and metadata at crash point) and determine what the best next step is.
This example linked below is from a web crawler service I run.
When an error occurs while logging in (multiple page loads and sometimes a redirect), there is a single handler to determine if retrying the login, crashing or alerting a human is the best course of action.
To me that sounds like it would make it more difficult to build abstractions/encapsulation, if the caller needs to be aware of the internal state of the callee in order to resume execution there? Or am I misunderstanding how it works?
The caller needs to know the protocol. The API writer has to document something like: widget-frob may raise a widget-error condition, in which case the foo and bar restarts are available, which proceed in such and such ways.
The behavior is also discoverable interactively. You play with widget-frob at the REPL, and hit a widget-error condition. This recurses into a debug prompt where you are told about the available restarts, which have docstring-like descriptions.
There's no internal state of the callee leaking out. The act of communication between the signaling site and the handling function is done via condition objects - which are just like standard Lisp classes whose slots can be read by the handling function.
The slots and/or accessor methods via which one can access the condition objects can (and should!) be documented as a part of the contract of any piece of code that is expected to signal conditions.
Resuming execution is possible since the original stack is not destroyed by unwinding it. The handling code can simply perform a transfer of control to a point on that stack established earlier and continue from there.
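In Python terms, a hedged sketch of that protocol (all names here are hypothetical): a restart is a labeled transfer point established earlier on the stack, and a handler, chosen dynamically, picks one. Python can't continue after a raise, so this models only the transfer-of-control half; in CL the handler runs with the full stack intact and unwinding is its explicit choice:

```python
# Sketch of restarts as labeled transfer points (hypothetical names).

class Transfer(Exception):
    """Carries control to a named restart point established earlier."""
    def __init__(self, tag, value):
        self.tag, self.value = tag, value

restarts = []  # dynamically established restart tags
handlers = []  # dynamically bound handler functions

def signal(condition):
    for handler in handlers:
        handler(condition)  # runs before any unwinding has happened

def invoke_restart(tag, value=None):
    if tag not in restarts:
        raise LookupError("no restart named %r" % tag)
    raise Transfer(tag, value)  # unwind to the chosen point

def with_restart(tag, thunk, recover):
    restarts.append(tag)
    try:
        return thunk()
    except Transfer as t:
        if t.tag != tag:
            raise  # not our restart: keep unwinding
        return recover(t.value)
    finally:
        restarts.remove(tag)

def low_level():
    signal("bad-record")  # the library signals deep down...
    return "unreached"

# ...and the top level decides which recovery strategy to take.
handlers.append(lambda condition: invoke_restart("skip"))
result = with_restart("skip", low_level, lambda _value: "skipped-record")
print(result)  # skipped-record
```

Note how the handler and the restart are established at different stack depths: the recovery code lives near the loop that can use it, while the policy choosing it lives at the top.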
Trivia: Kent M. Pitman (also known as kmp) is the father of the condition system as it exists nowadays. The code in the book is based on Portable Condition System - a library that I have created, and Portable Condition System itself is in turn based on the original implementation of the condition system available at http://www.nhplace.com/kent/CL/Revision-18.lisp
After a wee bit of massaging that code to compile on modern Lisp implementations (the above code is CLtL1, an old version of Common Lisp from 1984), it still works!
The book explicitly contains a tutorial section that describes dynamic variables, non-local transfers of control, and lexical closures - the three building blocks that allow for construction of a condition system.
The part that comes afterwards constructs the condition system piece by piece, function after function - and only after we've constructed something from scratch do we draw parallels to relevant parts of the standard condition system. There's also a short tutorial on macro writing as an appendix, since we will write some macros as a part of the condition system.
Overall, this book should be able to introduce a programmer to using the Common Lisp condition system. For the very basics, I will still want the reader to start with Practical Common Lisp, but once the reader knows the basics of syntax, evaluation rules, and basic data types in Lisp, they should be able to follow the book from the beginning.
Where would you put it in relation to htdp and sicp? I personally am working my way through htdp at this time, and plan on doing sicp afterwards, and would be very interested in maybe slotting your book in there somewhere.
I'd put TCLCS after SICP and HTDP; these are introductory books to programming in general and therefore are prerequisites to working with a condition system.
Or, in other words, one shouldn't read a book about handling exceptional situations if one doesn't understand what exceptional situations are or why one would need to handle them.
Thanks. Hope that it turns out insightful for you.
(Also, Common Lisp isn't a functional language. It's a multiparadigm one that often gets the label of being a functional one because Scheme and Clojure are really functionally oriented.)
Sorry if it came across like I believe Lisp is purely functional. I meant to say that I have never had any experience with functional programming, and so want to use Lisp to learn it.
> The book happened because I got annoyed at the fact that there is no good reference that shows how the condition system works, what are the foundations required for it to work, what are the basic building blocks that it is composed of, and what are its uses, both in the domain of handling exceptional situations and, more importantly and less obviously, outside that domain.
Except for lacking deeper coverage of applications, I think Practical Common Lisp covers all of that.
Which isn't a vote against a book focussed on the subject, though.
As mentioned in one of the comments, PCL does good overall coverage of the condition system but does not go in depth enough for my taste; it is a book for teaching CL, not the condition system, after all.
It does not describe any applications or use cases of the condition system that do not strictly serve exception handling, and it does not show the internal construction of the condition system, especially from the approach of starting with nothing but the most basic language tools: dynamic variables, means of transferring control, closures, functions, classes, and variables.
> The book also contains a description of a full implementation of a condition system that can (and hopefully will!) be used outside Common Lisp
Do you foresee any issues with other languages only having dynamic (i.e. THROW) non-local transfers of control rather than lexical ones as well (i.e. GO)? I find it convenient for when I want to conditionally unwind the stack to have lexical transfers, and obviously it's been demonstrated you can implement GO with just THROW, so there's no loss of generality, but I wonder what the ergonomic impact will be.
None, other than sheer performance. (THROW will perform worse than GO and RETURN, since the list of catch tags needs to be traversed on each throw.)
EDIT: These were several wild hours of an AMA - thank you, everyone, who participated in the discussion. Feel free to post more comments, I'll reply to them in due time. Or just poke me via mail or on Freenode.
It would be very nice if you could show a big list of condition use cases, hypothetical and real-world, besides the implementation details. That would help popularize the concept for non-CL audiences.
The main part of the book outlines multiple ways of dealing with exceptional situations, including the handler and restart subsystems of the condition system. It also includes a section with some use cases of the condition system outside standard error handling. I hope that the information stored there is, all in all, rich enough to satisfy your curiosity, and the curiosity about concrete use cases for the condition system in general.
Glad to hear that! The book starts off with a tutorial about dynamic variables, transfers of control, and lexical closures, and then uses these (along with a few functions and variables) to build a condition system from scratch.
Thanks. Hope that it indeed is interesting, and that it doesn't contain sizeable errors anymore. The Lisp community has been a great help when it came to reviewing and editing the hell out of the manuscript, both language- and content-wise.
Looks cool! Is there a TOC and/or sample section available? Couldn't see one through the site but would like to get a sense of contents and style before buying
Samples should be available later; the book has only gone to production several days ago, so there's nothing properly formatted for a sample section yet.
As for ToC, hold on, give me a few minutes - I'll extract it for you from the manuscript.
Curious to your logic in determining that? As far as I understand it, the freely available digital version of PCL aided in the selling of the dead-tree book?
The online version of Practical Common Lisp was published fifteen years ago, well before the ebook market was as developed as it is now. Given the current market situation, I've made the decision to go the same way that Common Lisp Recipes went - selling ebooks + dead-tree books.
If you are just going to use `TAGBODY` and `GO` to implement this ... then in a goto-based language like C I don't think you would need a condition system. In C (as a GCC extension), a nested function can also jump to a parent's goto label. No one probably uses it, but just saying!
Also, how similar is this to promises in JavaScript?
[8]> (tagbody
(funcall (lambda () (go out)))
(print 'skipped)
out
(print 'out))
OUT
NIL
Another notable language that can do this is PL/I. To tie this a bit more to the topic, PL/I is incidentally where conditions come from, including the "condition" terminology.
C's longjmp doesn't cleanly unwind the stack. Common Lisp's GO, just like all CL control flow operators, cleanly unwinds the stack, allowing cleanup forms established by UNWIND-PROTECT to be executed. That's the key difference.
The book contains an implementation of dynamic variables in C, though, which uses a GCC cleanup extension; the example code repository contains contributed examples showing various means of implementing dynamic variables in C.
Promises are asynchronous, while signaling and condition handling are fully synchronous; I don't know what kind of parallel one can draw here. If anything, a promise's .catch(...) method may act like CL's HANDLER-CASE: the promise becomes settled, just with a different code block executed in case of a failure.
The TXR Lisp exception handling is based in C setjmp/longjmp. It has the same expressive power as Lisp conditions.
Well, once upon a time it used to be setjmp/longjmp; currently it's based on a re-implementation of a mechanism that is almost setjmp/longjmp in assembly language.
To use setjmp/longjmp for implementing sophisticated exception handling, you have to maintain your own unwind chain on the stack and unwind that.
Before invoking longjmp, you have to walk your own frames that are chained through the stack, and do all the clean-up yourself, up to that frame that holds the jmp_buf where you want to jump.
TXR correctly maintains the chain connectivity even under delimited continuation support, which works by copying sections of the C stack to and from a heap object.
When a delimited continuation is restored (involving copying it out of a heap into a new location on the stack), the frame linkage in the restored continuation is fixed up and hooked up. The continuation can throw an exception and unwind out through the caller that invoked it.
Here is the implementation of unwind-protect operator in the interpreter:
static val op_unwind_protect(val form, val env)
{
  val prot_form = second(form);         /* the protected form */
  val cleanup_forms = rest(rest(form)); /* forms to run on any exit */
  val result = nil;
  uw_simple_catch_begin;                /* establish an unwind frame */
  result = eval(prot_form, env, prot_form);
  uw_unwind {                           /* runs on normal return and on unwind */
    eval_progn(cleanup_forms, env, cleanup_forms);
  }
  uw_catch_end;
  return result;
}
The grotty jump saving and restoring stuff is hidden behind friendly-looking macros. The simple catch begin/end macros will create a frame with a particular type field. That type field tells the unwinder that it's supposed to stop there to do clean-up code. How that works is that the frame contains the saved jump buffer: a longjmp-like operation (that used to be longjmp once upon a time) restores control here, then the forms in the uw_unwind { } are executed and then the unwinding is resumed.
In the virtual machine, there is a uwprot instruction instead:
1> (disassemble (compile-toplevel '(unwind-protect (foo) (bar))))
** warning: (expr-1:1) unbound function foo
** warning: (expr-1:1) unbound function bar
data:
syms:
0: foo
1: bar
code:
0: 58000004 uwprot 4
1: 24000002 gcall t2 0
2: 00000000
3: 10000000 end nil
4: 24000003 gcall t3 1
5: 00000001
6: 10000000 end nil
7: 10000002 end t2
instruction count:
6
#<sys:vm-desc: 9432d90>
uwprot 4 means that the cleanup code is found at instruction address 4. uwprot registers a frame which references that code. What immediately follows the uwprot instruction is the protected code. This code is terminated by an end instruction. When the code hits the end instruction, control returns to the uwprot instruction, which then transfers control to instruction address 4. The cleanup code is also terminated by an end instruction. In the non-unwinding case, that just falls through to the next end instruction for ending this whole block and returning the value of register t2, which is the result of the (foo) call produced in gcall t2 0. In the unwinding case the end nil at 6 will allow control to return to the unwinder.
The function that the vm interpreter dispatches for uwprot is simplicity itself:
The VM context (frame level and instruction pointer) is saved very simply into local variables on the C stack. Well, what is saved is not the current instruction pointer but that of the cleanup code, pulled from the instruction's operand. Then there is a simple catch frame which re-enters the VM, continuing with the next instruction. vm_execute will return when it hits that end instruction, passing control back here. If an exception is thrown, the uw_unwind block restores the VM context from the two variables and runs the cleanup code through to its end instruction, which also happens in the normal case.
That's not enough. The condition system decouples the act of signaling a condition from the act of deciding how to handle it. This means that the promise would need to reach out to its dynamic environment to figure out what it would need to do in case of errors and restarts.
And that also only handles the case of "help I'm ded get me out of here". What about signals that do not expect to be handled, and instead are used to transfer information higher up the stack by invoking a handler function specified in the dynamic environment? That's a valid use of the condition system, as outlined in the book.
It's notable that the Worlogog::Incident and Worlogog::Restart perl modules on CPAN provide a condition system whose unwinding is implemented by Return::MultiLevel which contains a fallback pure perl implementation that does use goto-to-outer-function's-label (with gensym-ed label names for uniqueness).
Works for perl because, while we don't (yet; somebody's working on one) have an unwind-protect-like primitive, perl's refcounting system provides timely destruction, so you can use a scope guard object stuffed into a lexical on the stack whose DESTROY method does whatever unwind cleanup you need.
Ironically, the main reason I'm not using this so much at the moment is that it isn't compatible with the suspend/resume code around promises provided by Future::AsyncAwait and I'm heavily using that in my current code, but at the point where I need both I'll probably attempt to nerd snipe one of the relevant authors into helping me figure it out.
(EDIT: Aaaaactually, I think I might already know how to make them work together, naturally an idea popped into my head just after I hit the post button ... using Syntax::Keyword::Dynamically and capturing a relevant future higher up the chain of calls should allow me to return-to-there cleanly, then I "just" need to cancel the intermediate futures to simulate unwinding ... but I'll have to try it to find out)
Not quite an equivalent in perl because there are limitations like exceptions being discarded if you're unwinding because of an exception being thrown, but it's entirely possible to make things go if you know what you're doing ;)
"Matt S. Trout (mst)" is good - I'm probably better known by my IRC nick than my full name in geek circles ;)
The author of Future::AsyncAwait is working on a patch to core to provide LEAVE {} blocks, which will be a more complete unwind-protect solution. Note of course that the destructor limitation doesn't affect Worlogog use so much, since you're -not- throwing exceptions, but once you're into mixed condition and exception based code, things start to get more "fun".
> once you're into mixed condition and exception based code, things start to get more "fun".
I imagine that is the reality of everyone who tries to implement a condition system in a language where the default behavior is to immediately unwind the stack by throwing some sort of an exception.
For those unaware, Common Lisp exception handling is head and shoulders above almost any other programming language's: Imagine having your program crash in the middle of a function, being able to inspect the call stack to find the bug, fixing it, then completing the function call from the middle of the function, at the exact point where the exception happened, with the fix in place.
Expanding on that: the most impressive feature of CL's condition handling is that it doesn't unwind the stack for you automatically.
In languages with "exceptions", when an exception gets thrown, you have a chance to catch it, but at that point the stack has already been unwound and there is no way to resume execution at the point where the exception was thrown.
In CL, you can let someone else (the caller) make the decision on what to do in case of an exception, providing options. For example, if a value is a NaN, the caller could request to continue the computation using the caller-supplied value instead.
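As a hedged illustration of that NaN case in Python (hypothetical names; a handler-supplies-a-value protocol stands in for CL's USE-VALUE restart):

```python
# Sketch: the callee signals on a NaN; if a handler supplies a replacement,
# computation continues at the exact point of the problem (hypothetical names).

import math

handlers = []  # caller-installed recovery strategies

def signal_nan(x):
    for handler in handlers:
        replacement = handler(x)
        if replacement is not None:
            return replacement  # resume with the caller-supplied value
    raise ValueError("unhandled NaN")

def mean(xs):
    total = 0.0
    for x in xs:
        if math.isnan(x):
            x = signal_nan(x)  # give the caller a say, then carry on
        total += x
    return total / len(xs)

handlers.append(lambda x: 0.0)  # caller's policy: treat NaN as zero
print(mean([1.0, float("nan"), 3.0]))  # 1.3333333333333333
```

The loop in mean never unwinds; the caller's policy is consulted right at the bad datum and the sum simply continues.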
In practical terms, this system, while very impressive, isn't widely understood or used in CL.
It's one of the very few things I miss after moving from CL to Clojure. Rich Hickey wrote that he did consider it, but it was a lot of complexity for fairly little gain (all platforms that Clojure is hosted on support thrown exceptions only), which I would agree with.
Additionally, one of the most tasty facts about it is that the Common Lisp condition system is written in Common Lisp itself. It's literally a Lisp plugin that only needs to be integrated with the rest of the system at its end points: where the conditions are signaled and where the debugger is entered.
Thanks. It was you who directed me towards this implementation and made it possible for me to mention it in the book, though, so I only consider it fair if I also add you to the book's Hall of Fame.
I haven't explored Clojure. If it allows one to customize the means in which errors are signaled (read: force the language to not immediately throw a Java exception on an error, and instead execute some code), then one should be able to implement both signaling and a debugger.
It might not be easy to tell Clojure or the JVM to not be trigger happy with throwing exceptions and therefore destroying the stack. Perhaps one will find a way through!
Something else that sets it apart is that the decision on how to handle something is decoupled from the implementation of that thing.
With traditional exceptions, the stack unwinds all the way to the "catch" block, which has full responsibility for handling the exception. With CL conditions, you can implement multiple handlers that do not catch the exception, and then a handler way up the stack can decide which of the low-level handlers to run.
It sounds bonkers, but when you've tried it you'll see that "normal" exceptions are a very crippled version of it.
> With traditional exceptions, the stack unwinds all the way to the "catch" block, which has full responsibility for handling the exception. With CL conditions, you can implement multiple handlers that do not catch the exception, and then a handler way up the stack can decide which of the low-level handlers to run.
I'd say that this is the main difference that sets the CL condition system apart from mainstream programming languages. In these other languages, when an error happens, the stack is automatically unwound; in Lisp, when an error happens, the stack is automatically wound further, and the code executed in such way has the full choice of whether to unwind the stack and where exactly to unwind it to; this information is provided in the dynamic environment in which a given piece of code is running.
Not at all. It is a consequence of the design of those other languages.
When you divide by zero in Java, there is nothing set in stone that prevents the language from winding the stack further and executing some code that analyzes the dynamic environment in which the error happened, calling all error handlers found in that environment, and only then - if no handler transferred control outside the error site - either entering an interactive debugger of some sort or giving up and crashing.
But instead Java goes another way and immediately destroys the stack by throwing an exception. That's a design choice made by Java creators. Lisp's homoiconicity has nothing to do with this.
You can also analyze the Python implementation of the condition system posted elsewhere in the comments - that's an example of a condition system in a non-homoiconic language.
This seems really valuable for development, but is there a way to use this in production systems? Say a user hits a bug in a CL web app. Could I write an exception handler that (1) immediately returns a 500 error page and (2) persists an image somewhere so that I could fire it up later and have the debugger ready at the offending point?
I think that it's possible. Immediately returning a HTTP 500 is easily doable, and you should be able to fork the Lisp process, continue serving requests in the parent, and save stack information (such as function parameters, etc..) and dump the Lisp core for further analysis in the child. (Do note, however, that this process breaks all FDs, which includes open files and network sockets.)
Depending on how the HTTP library works, one might be able to return an error to the user, but preserve all the other call state in an open thread, and later on join it from an interactive debugger … perhaps even a Lisp debugger running in a Web browser!
That's doable. Common Lisp has a programmable debugger hook which allows for implementing one's own debugger - also, possibly, as a web application. And for situations where the standard debugger hook is not enough (e.g. BREAK or INVOKE-DEBUGGER), one can use https://github.com/phoe/trivial-custom-debugger to completely override the system debugger with a custom one.
If the server decides to escalate the issue to the programmer, e.g. by invoking the debugger, then yes, it would most likely hang until the programmer interactively resolves the problem (e.g. by issuing a proper restart), and only then execution would proceed.
However, that doesn't need to be the case; it is still possible to e.g. do the industry standard, which is to log all available information about the request and program state to some sort of log file or crash dumps, return a HTTP 500, and start waiting for another request. The possibility of interactive resolution of exceptional situations is a possibility, not a requirement.
It's also possible to block and invoke the debugger only in some cases, e.g. when some global debug variable is set to true. In all other cases, the system can behave automatically.
>log all available information about the request and program state to some sort of log file
I've only dabbled with some Schemes, but given Lisp's homoiconic nature, I'm guessing it would also be possible to generate and persist a unit test based on the failing call too, right?
As long as you can persist all the state (which is easy if you are programming in a functional manner), you will know exactly what sort of input caused a crash in your program. And once you have the data, you can write it out in a form that is later readable by the Lisp reader.
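A hedged Python analogy of that round trip (repr and ast.literal_eval standing in for the Lisp printer and reader; process is a made-up name for the function under test):

```python
# Sketch: round-trip the failing input through "print readably" / "read",
# then generate a regression test from it. repr/ast.literal_eval stand in
# for the Lisp printer/reader; process() is a hypothetical function.

import ast

failing_input = {"user": "alice", "values": [1, 2, 3]}

serialized = repr(failing_input)          # print it readably
restored = ast.literal_eval(serialized)   # read it back
assert restored == failing_input          # faithful round trip

test_source = (
    "def test_crash_case():\n"
    "    assert process(%s) is not None\n" % serialized
)
print(test_source)
```

The same idea in Lisp is more pleasant, since PRINT and READ round-trip most standard data types by design.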
For this example, it would be during debugging. You could intentionally write a bug, and then write a condition handler that would fix the bug without unwinding the stack further than necessary, but that would be rather pointless.
However think of all the things that can go wrong with your environment that don't involve a bug in your program. A network connection goes down, the disk gets full, &c. All of those can be handled without unwinding the stack to the point where the code for handling them was defined.
For a really trivial example, consider a performance watchdog. If you want to completely abort the attempt and log a call stack, you can do that in just about any modern, dynamic language, because the exception object includes the call stack. But what if you want to log the call stack and continue for the first N times the watchdog rolls over? This is, of course, doable in other languages by passing callbacks and such, but in Lisp it can be done by signaling a condition in the exact same manner that an exceptional condition would be signaled.
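A hedged Python sketch of that watchdog (hypothetical names): the handler logs the call stack and simply returns, so execution continues past the signal point, until the limit is reached:

```python
# Sketch of a watchdog condition handler (hypothetical names): log the
# call stack and continue in place for the first N rollovers, then abort.

import traceback

handlers = []

def signal(condition):
    for handler in handlers:
        handler(condition)  # returning means "noted, carry on"

class Watchdog:
    def __init__(self, limit):
        self.limit = limit
        self.logs = []

    def __call__(self, condition):
        if condition != "watchdog-rollover":
            return  # decline conditions we don't care about
        if len(self.logs) < self.limit:
            self.logs.append(traceback.format_stack())  # log, then continue
        else:
            raise RuntimeError("watchdog limit exceeded")  # now really abort

watchdog = Watchdog(limit=2)
handlers.append(watchdog)

completed = 0
for _step in range(3):
    try:
        signal("watchdog-rollover")
        completed += 1  # execution continued past the signal point
    except RuntimeError:
        break
print(completed, len(watchdog.logs))  # 2 2
```

The decision to continue or abort lives in the handler, not at the signaling site, which is the decoupling the comment describes.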
I'm excited about this book, because that particular feature is underutilized in Lisp (by myself included), probably because it's one of the last dynamic things that Lisp does that more well-known dynamic languages do not, so programmers coming from other languages just aren't going to use it.
> All of those can be handled without unwinding the stack to the point where the code for handling them was defined.
Nitpick: the only thing that cannot be handled by winding the stack is when you run out of stack to wind.
But, luckily, if your program overflows the stack, you usually have bigger issues to deal with and/or you usually have all the stack information you need to diagnose the problem.
I've noticed quite a few Smalltalk programmers who actually coded in the debugger. They would put a call in the program, hit it as an error, and then write new code. Does this happen with Common Lisp programmers?
I know of several lispers that swear by top-down "programming inside debugger", where they continuously fill in the gaps starting from top level function :-D
Do you have any videos of such programming style performed in Lisp, or could you try to convince some of those programmers to record some of their programming sessions?
Smalltalk and Common Lisp are both image-based programming languages where the compiler and debugger are always available. I think debugger-oriented programming is much more common in Smalltalk than in Lisp (at least I prefer to leave the debugger back to REPL pretty often), but I think it's certainly possible to adopt. The Slime/Sly environments for emacs might accept a few patches to facilitate that programming style better.
The condition system is often hard for non-Lisp people to understand, as the best they've had in the past is a core dump or a C++-style exception system. The main problem for me with those is that by the time you see the exception the state has already been destroyed. This is the problem with core dumps as well: all your files and network connections have been closed. The only alternative is to use a debugger which is a high overhead situation.
The roots of the condition system lie in two related hacker environments: the ITS system at the MIT AI Lab and the Lisp machines (both the CADR and its descendants, and the PARC D-machines), as well as influence from the Smalltalk folks.
In ITS, what is called the "shell" these days was the debugger. If your program signaled an error you were immediately in the debugger and could debug, just exit, or force a core dump. This was the routine way to interact, even for non-programmers. The Lisp machines did the same. Debugging was a lightweight operation.
So it was natural in the 80s to develop a system that formalized this mode of operation, because an exceptional situation by definition requires greater knowledge than is available in situ. When you catch an error in Java or C++, it's too late to look at the state of the object that caused the error without standing on your head, possibly redesigning the try block to, say, allocate something on the heap that can be passed along with the error (which means no more RAII, etc. etc.).
In addition, try/catch typically puts the handler too close to the site of the problem for it to be useful; if not, by the time you get the exception there's not much you can do but terminate the calculation politely.
> The main problem for me with those is that by the time you see the exception the state has already been destroyed. This is the problem with core dumps as well: all your files and network connections have been closed. The only alternative is to use a debugger which is a high overhead situation.
This seems like two core issues with understanding the condition system. First, the currently popular exception systems always destroy the stack by unwinding it, allowing the programmer to debug only dead remains instead of a living program; second, debuggers are bolted on top of programs instead of being their internal parts, which gives many people an impression that using them as an integral part of the whole programming lifecycle is inconvenient and unnatural.
The condition system in Common Lisp solves both of these issues, which IMO, paradoxically, contributes to it not being well understood outside Lisp circles.
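To make the restart idea from the thread concrete, here is a toy Python sketch (the names `error`, `restart_case`, and `invoke_restart` loosely mimic CL's operators but are invented for illustration, not a real library): handlers run at the signal site, before any unwinding, and a handler can transfer control to a named restart established further up the stack.

```python
_handlers = []  # stack of (condition_type, handler_fn), innermost last

class _InvokeRestart(Exception):
    # Internal control-flow exception carrying the chosen restart's name.
    def __init__(self, name, value):
        self.name, self.value = name, value

def error(condition):
    # Run applicable handlers at the signal site, before any unwinding.
    for ctype, fn in reversed(_handlers):
        if isinstance(condition, ctype):
            fn(condition)
    raise RuntimeError("unhandled: %r" % (condition,))

def invoke_restart(name, value=None):
    # Unwind the stack only now, up to the frame that offers the restart.
    raise _InvokeRestart(name, value)

def restart_case(body, **restarts):
    # Establish named recovery points around body().
    try:
        return body()
    except _InvokeRestart as inv:
        if inv.name in restarts:
            return restarts[inv.name](inv.value)
        raise

class ParseError(Exception):
    pass

def parse_field(text):
    # "Library" code: offers recovery strategies but chooses none of them.
    return restart_case(
        lambda: error(ParseError(text)),
        use_value=lambda v: v,
        skip=lambda _: None,
    )

# "Application" code: the policy ("substitute 0") lives at the top level,
# far away from the error site.
_handlers.append((ParseError, lambda c: invoke_restart("use_value", 0)))
print(parse_field("not-a-number"))  # 0
```

Note the division of labor: the error is signaled deep in library code, the restarts are established at an intermediate level, and the handler holding the actual recovery policy sits at the top.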
I tried to implement this in Ruby with a gem about 6 years ago and got really close. If anyone wants to know more about this, but is more fluent in Ruby, you might find this a good jumping-off point. There’s a good explanation in the README.
https://github.com/michaeljbishop/mulligan
Why did it not work in the end? Because even though I was able to implement an error-recovery mechanism that wouldn’t pop the stack, it still triggered all the `ensure` statements that were up the stack. It was still a valuable exercise. I'd welcome this addition in any exception-based language.
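The `ensure` problem generalizes to any unwinding exception system: cleanup blocks between the raise site and the handler run before the handler ever sees the exception, so the state you would need in order to resume is already gone. A minimal Python sketch, with `finally` standing in for Ruby's `ensure`:

```python
events = []

def inner():
    f = "open-file"  # stand-in for a live resource
    try:
        events.append("raise")
        raise ValueError("boom")
    finally:
        # Runs during unwinding, before any except clause sees the error.
        events.append("finally: closed " + f)

try:
    inner()
except ValueError:
    events.append("except")  # the resource is already gone here

print(events)  # ['raise', 'finally: closed open-file', 'except']
```

A condition system avoids this ordering problem by running handlers first, at the signal site, and only unwinding (and thus running cleanups) once a handler decides to transfer control.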
A problem I don’t have a good answer to is how to extend conditions (or even exceptions) to async programming models.
Two problems are:
1. Some code may fail multiple times (with non-async code, once an exception is caught, the program can't throw another one from the same dynamic extent of the try block; similarly, once a restart is invoked and control is transferred, a condition won't be signaled from the same dynamic extent where the restart was bound; for the purpose of this clause, we consider a retry restart to mint a new dynamic extent to bind the restart in again after retrying).
2. Some computation may happen after you stop caring about its result because a parallel dependency has failed. E.g., suppose you have a rate limiter: you pass it an async function, get a promise for its result, and the rate limiter queues your function to be called soon, with the constraint that only, say, 5 calls may be in flight at once. Now you have a resource-downloading function which puts many separate download requests into the rate limiter in parallel and combines the results in such a way that the whole download fails if any request fails. But if an early request fails, your function will keep trying to download all the other resources even though it no longer cares about their results. You could imagine having a way to cancel promises, and maybe you would be clever and write really solid programs and never have issues, but to me this seems to invite very difficult-to-debug problems (just as expecting any thread to be killed at any random point, or at least at a bunch of different points, would make correct multithreaded programming harder).
Perhaps a more succinct description of problem 2 is that unwinding the "stack" isn't well-defined in an async programming model, because "stack" doesn't really mean the call stack from the scheduler to whatever bit of async computation you happen to be doing, but rather the combined dynamic scope of everything that started the computation and is currently awaiting its completion.
A related and, I think, very useful thing to have is a way to represent computations which may signal conditions or be restarted. I think CL doesn’t have a great story for this (threading isn’t really a part of the language though). An example use case would be some program which has to process some stream of events. If it gets stuck on one or some, you may want to continue processing the rest of the events while deciding what restart to invoke for the problematic ones. Perhaps an effect system could help with this.
I'll be frank: my book does not touch this topic in the slightest, and this is an open question. Common Lisp does not have any asynchronous features built in and, as I've said in some earlier comment, the condition system is fully synchronous.
This topic is still waiting to be explored, either with an extension of Common Lisp or a completely different language that permits asynchronous programming with an adaptation of the condition system to this programming paradigm.
Amazon generally has a "pre-order" option; if there were one for your book, I would order it now, because by the time it is in print I will have forgotten about it. Fascinating topic; nice to see good coverage. I always felt that callbacks and exception handling left a little something to be desired. It looks like you've addressed that.
When your book is in print and available for sale I hope you post again here as a reminder.
The accompanying code is available at https://github.com/phoe/tclcs-code and https://github.com/phoe/portable-condition-system - it will be copied over to the Apress repository soon.
Linked Reddit thread: https://www.reddit.com/r/lisp/comments/hrjzs8/
AMA, I guess.