Deja vu, I have been in this place before. Prior art: John Aycock and Mike Zastre. An Exceptional Programming Language. Proceedings of the 2005 International Conference on Programming Languages and Compilers. https://pages.cpsc.ucalgary.ca/~aycock/papers/plc05.pdf
> I guess this assumes the stack goes down, but this direction metaphor in stacks has always confused me. What's up and what's down?
The stack starts at the end of a process's memory space, so stack frame 0 has the highest possible memory address (e.g., 0xFFFF) and deeper stack frames have lower addresses (e.g., 0xFFF8). Now combine this with the fact that many visual representations of memory space put 0x0000 at the top and 0xFFFF at the bottom: the stack "grows down" in address terms while growing "up" in the diagram, and it all gets very confusing.
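You can watch it happen with a quick C sketch (the direction is platform/ABI-dependent, but on mainstream x86/x86-64/ARM ABIs the addresses shrink as calls nest; compile with -O0 so the compiler doesn't collapse the frames):

```c
#include <stdio.h>

/* Print the address of a local in successive stack frames.
 * On most mainstream ABIs the addresses decrease as call
 * depth grows -- the stack "grows down". */
void frame(int depth) {
    int local;
    printf("depth %d: local at %p\n", depth, (void *)&local);
    if (depth < 3)
        frame(depth + 1);
}

int main(void) {
    frame(0);
    return 0;
}
```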
Exceptions as the underpinning of such a lang is interesting, but I think you might be better off with push-prompt and an implementation of delimited dynamic binding, to allow e.g. process migration.
Wat is a very small language which does fexprs (vau calculus), delimited continuations, delimited dynamic binding, custom algebraic effects (try/catch, fibres) on top of them, modules, and types, and has a metacircular VM in a few hundred LOC. It's missing a native implementation, but it's very impressive.
I am only making this suggestion because it rhymes with "hurl" (and it works well metaphorically for the repeated tossing/hurling): you could introduce a "whirl" keyword, or part of the standard library, as the way to handle loops.
It could be functionally the same as your loop example but removes the self-referential first argument (i.e. whirl(count, [1,3])).
In my university, the slang term for an uninitialized variable was "dead squirrel". IIRC, some TA had described uninitialized C++ variables as follows: "It could be any value: zero, 100, or even a dead squirrel."
If hurl ever allows uninitialized variables, I would love to see use of an uninitialized variable to `hurl` a `squirrel`! :)
I love this exercise. Great job exploring a concept fully. I find this article one of the better arguments I've seen yet against including exceptions in a language.
> You can toss it, which works a little differently: it traverses the stack until you reach a matching catch block, but then you can use the return keyword to go back to where the value was tossed from.
>
> I know, it's cursed using return in this unusual way. Again, sorry, I didn't make you keep reading. But, the reward is that since you got here, you get to see how we can use these to create control flow.
Maybe calling it 'return' is cursed, but this concept isn't cursed at all. These are resumable exceptions, and I used to love them! Windows supports them natively, although I don't think they're widely used. Here's the C/C++ extension for Windows' native "structured exception handling" (ordinary C++ exceptions are built on top of it):
> Recognize the exception but dismiss it (EXCEPTION_CONTINUE_EXECUTION).
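A minimal sketch of what that dismissal looks like (MSVC-specific __try/__except; the exception code below is an arbitrary user-defined one):

```c
#include <windows.h>
#include <stdio.h>

/* The filter returns EXCEPTION_CONTINUE_EXECUTION, which resumes
 * execution right at the point where the exception was raised. */
static int filter(DWORD code) {
    printf("filter saw 0x%08lx, dismissing\n", (unsigned long)code);
    return EXCEPTION_CONTINUE_EXECUTION;
}

int main(void) {
    __try {
        puts("before");
        /* a continuable, user-defined software exception */
        RaiseException(0xE0000001, 0, 0, NULL);
        puts("after: we resumed right past the raise");
    } __except (filter(GetExceptionCode())) {
        puts("never reached: the filter resumed instead");
    }
    return 0;
}
```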
And as a kernel programmer, I really like resumable exceptions -- most hardware exceptions are resumable, and the kernel uses them extensively. When you get a page fault, that's an exception, and the kernel has a handler, and the control flow, um, yeets to the handler, and the handler, well, returns (or exits or whatever you want to call it) right back to the exception site.
And POSIX signals support this too -- it's even the default behavior when a signal handler returns. (POSIX signals suck.)
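A minimal POSIX sketch of that default behavior: the handler just returns, and the kernel resumes the interrupted code where it left off.

```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t fired = 0;

/* Async-signal-safe handler: set a flag and return.
 * Returning is what resumes the interrupted code. */
static void on_alarm(int sig) { (void)sig; fired = 1; }

int main(void) {
    struct sigaction sa;
    sa.sa_handler = on_alarm;
    sa.sa_flags = 0;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    alarm(1);   /* deliver SIGALRM in ~1 second          */
    pause();    /* interrupted by the signal ...          */
    /* ... and after the handler returns, we resume here. */
    printf("resumed, fired=%d\n", (int)fired);
    return 0;
}
```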
Algebraic effects use delimited continuations (and this appears to match toss/return). Call/cc captures an undelimited continuation. Totally different.
These three comments as a chain are hilariously close to the paper ‘On the Expressive Power of User-Defined Effects: Effect Handlers, Monadic Reflection, Delimited Control’ by Forster, Kammar, Lindley, and Pretnar. It’s relatively new and evaluates the expressive power of the three concepts, which, barring preservation of typability during translation, are equivalent.
You can make it more terrible (but still Turing complete) by removing `toss` and `return`, since you have conditional catches. Just nest enough catches so that when you throw, execution will start again and skip over any unnecessary code.
> Oh, also, functions cannot be recursive (without passing in a function to itself), because we won't have the function bound to a name in the local context when defining itself.
OP might want to look into the origins of this website's name.
Thanks for letting me know! I couldn’t find anything about BitLocker as an antivirus, but I assume you’re talking about BitDefender (which has also flagged it). I’ve submitted a false positive report.
It looks like a few vendors are flagging the URL. [1] Not sure why; nothing sticks out at me on urlscan. [2] You may have to reach out to those vendors to see what is tickling their detectors.
I trust you already obtained the proper licenses from your local authorities before designing and implementing a new programming language? And those licenses are on file with the major security vendors?
> Oh, also, functions cannot be recursive (without passing in a function to itself), because we won't have the function bound to a name in the local context when defining itself. Fun, right?
The startup accelerator that runs this forum is named after a pretty neat thing you can do with a language that only has anonymous functions:
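The original snippet isn't shown, but the gist is Y-combinator-style self-application: recursion falls out of handing a function to itself. A rough rendition in C (using a struct so a function can be passed to itself, since C lacks anonymous functions; names are illustrative):

```c
#include <stdio.h>

/* Recursion without a self-referential binding: fact_step never
 * refers to its own name; it recurses only through the copy of
 * itself that the caller passed in. */
struct self { int (*fn)(struct self, int); };

static int fact_step(struct self s, int n) {
    return n <= 1 ? 1 : n * s.fn(s, n - 1);  /* recurse via the argument */
}

int main(void) {
    struct self s = { fact_step };
    printf("%d\n", s.fn(s, 5));  /* prints 120 */
    return 0;
}
```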
That’s true, although I wasn’t sure if simpler approaches using let bindings like that would fall under the author’s “without passing in a function to itself.”
While cute, this is really close to call/cc and/or algebraic effects as an abstraction for control flow, which isn't so much "cute" as it is "a powerful way to express programs in as few expressions as possible"
Yep! Everyone should read https://overreacted.io/algebraic-effects-for-the-rest-of-us/ for how powerful this can be in practice. The caller gets to pause execution on the raising/hurling of an event - which makes many crazy things possible.
I feel like I must be missing something. It seems to me (just from reading this post) that if you need algebraic effects you’ve probably painted yourself into a corner and should probably reconsider your program’s architecture. It seems like a good way to obscure what the code is really doing and end up with poorly organised spaghetti code. In the post he’s effectively making a function async while pretending it’s not, or calling a callback but making it the top-level code’s problem instead of encapsulating it in the associated type. I’m sure there are uses for algebraic effects down the line I haven’t thought of, but I don’t understand why these would be desired outcomes?
You're not missing anything. Algebraic effects are basically typed gotos†[1][2], and are generally a bad idea to use and/or implement. They can be useful in a few very niche situations when using typed functional languages.
The big difference here is to understand the program with Algebraic effects you now need to know _where the code was called from_.
If you ask for an effect to come from higher up the call stack, does that mean part of the function signature needs to include that the call stack must be able to handle the effect?
And if that's part of the call stack, why not just make that an explicit function?
> Does that mean part of the function signature needs to include that the call stack must be able to handle the effect?
Some people would argue that the effects a function may raise are indeed part of the function's type, and for it to type check it must be within scope of a handler for that effect. Think of it like function coloring: a function with the "async/await" effect must be invoked either within an "async" block (another function with the await effect) or within scope of a runtime that handles the effect.
But even that is a bit limiting.
> And if that's part of the call stack, why not just make that an explicit function?
This question is kind of meaningless depending on how you formulate it, because the big thing about effects is that the callee both returns more than once and is called more than once - so the notion of a "call stack" breaks down.
For example if f calls g and g raises E1 and then E2, f can declare a handler for E1 which moves it into a different scope that has a handler for E2. From the perspective of g() there is no function to call to deal with E1 and E2, since it's the responsibility of the caller to determine that.
I think if you wanted to get weird with the control flow like this you might as well make a state machine. All of this seems to be completely missing the reason we even have stack based programs to begin with: they are inherently easier for programmers to reason about.
I think these effects would ultimately lead to a bunch of hard to find bugs, all to replace the functionality of a callback.
The thing this appears to be missing is the ability to capture the part of the stack between the try and hurl as a first-class value and reify it later, although toss/return is not too dissimilar, as toss does capture the stack and return reifies it. It's close to multi-prompt due to multiple catch branches, so I would describe it as a one-shot, multi-prompt, second-class delimited continuation.
A bit more powerful are multi-shot, multi-prompt, first-class delimited continuations, which you can then use to implement exceptions themselves with multiple catch blocks.
> express programs in as few expressions as possible
I don't find this appealing on its face. Extreme terseness, even when done very elegantly, makes programs very hard to read and sometimes also hard to maintain.
I suspect that's one of the reasons Lisps (and functional languages in general) haven't caught up in popularity even as it becomes much easier to adopt one for any target and in any organization.
> I don't find this appealing on its face. Extreme terseness, even when done very elegantly, makes programs very hard to read and sometimes also hard to maintain.
In that case, it's more about minimizing the language, rather than the programs. call/cc is a single instruction that is sufficiently expressive to implement e.g. exceptions, coroutines and lots of stuff for which people typically use monad-style embeddings.
Making the language easier to specify is generally a good thing, because it makes all forms of static analysis and compilation easier (at least theoretically) and because there are fewer places to hide bugs in the compiler/interpreter. Of course, you move the potential bugs to the libraries, but that's generally considered better, because it's easier to debug.
> I suspect that's one of the reasons Lisps (and functional languages in general) haven't caught up in popularity even as it becomes much easier to adopt one for any target and in any organization.
Well, to be fair, functional constructs have made it to pretty much all mainstream languages these days.
> call/cc is a single instruction that is sufficiently expressive to implement e.g. exceptions, coroutines and lots of stuff for which people typically use monad-style embeddings.
The flip side of this is that call/cc prevents you from relying on a stack, except in cases where you can do heavy static analysis. And you virtually never need the full power of call/cc unless you're implementing coroutines.
This is the main reason that so few languages support call/cc: stacks are a nice implementation technique (as opposed to GCed activation records on the heap), and call/cc comes with a heavy price for very situational value.
One alternative is "escape" continuations, which can only be used during the lifetime of the creating block. These are stack friendly.
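C's setjmp/longjmp is a sketch of exactly this shape: you may only longjmp while the frame that did the setjmp is still live, which is what keeps it compatible with a plain stack.

```c
#include <setjmp.h>
#include <stdio.h>

/* An "escape continuation": valid only while the frame that
 * called setjmp is still on the stack, unlike full call/cc. */
static jmp_buf escape;

static void inner(int fail) {
    if (fail)
        longjmp(escape, 1);   /* unwind back to the setjmp point */
    puts("inner finished normally");
}

int main(void) {
    if (setjmp(escape) == 0) {
        inner(1);
        puts("not reached");
    } else {
        puts("escaped from inner");
    }
    return 0;
}
```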
I believe the point was you can say the same about goto. With goto you can replace if/else, for loops, while loops, functions, exceptions, etc, making for a very minimal language. Yet structured programming is superior.
(I probably don’t know enough about call/cc to know if that’s a totally fair comparison, but languages being more restrictive/less flexible can be good overall in terms of aiding understanding/reducing bugs/etc)
I think people are misunderstanding that you can have a small core language and use that for static analysis (a smaller language means fewer typing rules which means a simpler type checker).
You can always define higher-level sugar in terms of the small core language to make it easier for programmers, which has the advantage of a more ergonomic language without changing fundamentals like the type system or linkage.
> Extreme terseness, even when done very elegantly, makes programs very hard to read and sometimes also hard to maintain.
The advantage to something like call/cc is not that you can make code extremely terse but rather that there is one code path for analyzing control flow and each variant of control flow isn't a special case. It's also not more terse at all, but rather more explicit.
You don't need to force users to use it, but it is useful to define more ergonomic mechanisms like if/else if, match/switch, throw/catch/finally, coroutines, etc. in terms of an abstraction that doesn't break type checking or code generation.
All that said, "call/cc considered harmful" is an old take.
> Extreme terseness, even when done very elegantly, makes programs very hard to read and sometimes also hard to maintain.
I will definitely agree with you on this one. The terseness, which has many faces, is what I experience to be the reason for the "write-only" effect in Bash, Perl, and now even C++. There, terseness comes in the form of overloading operators with lots of different meanings in different contexts.
> I suspect that's one of the reasons Lisps (and functional languages in general) haven't caught up in popularity even as it becomes much easier to adopt one for any target and in any organization.
Here I believe you are perhaps wrong. I don't think Lisp is about terseness; where control flow is concerned, rather the contrary. Lisp was actually the language that introduced the conditional expression, the 'if' we take for granted today, into the mainstream; before McCarthy and Lisp, languages offered only jump-style conditionals like Fortran's arithmetic IF. Some Lisps have do, while, cond, unless, when, and most importantly, the condition system, which is, in a way, very close to the idea presented in the article.
Also note that some Lisps have rich facilities to extend the language itself, where the idea is that programmers should create abstractions to express programs in the problem domain, rather than use low-level language primitives. I am not so good at words, but I think Peter Norvig captures Lisp ideas very well in his Paradigms of AI Programming book: https://norvig.github.io/paip-lisp/#/chapter3.
I wouldn't say that Lisp hasn't caught on. Lisp was very, very popular at a certain time, but has gone away. Perhaps Lisp was ahead of its time and had its own .COM crash. Or was it killed by big tech greed? I don't know, but many of Lisp's ideas are in mainstream languages; it is just that they use different syntax and sometimes different terminology.
Very similar could be said for functional languages too. Note as well that Lisps are not necessarily functional programming languages. Many, if not all, Lisps do support the functional paradigm, but they are (mostly?) procedural, and some support OOP too.
We are still early in our digital age as a civilization. If we think of the history of humanity, we have spent about 300 thousand years in the woods, only the last 10K years in urban settlements, and for only about the last 2.5K years do we seem to have had science as a logical/mathematical discipline. Less than 100 years have we spent on programming language theory.
I am quite sure we are still just testing the waters for the right direction. The only unfortunate thing about dying is not being able to see what civilization will look like 1K or 10K years from now. I wonder what computers will look like, and what programming languages will look like. The only thing I am sure about is that any guess I would make now would probably be wrong :).
> The only thing I am sure about is that any guess I would make now would probably be wrong :).
I think you could probably make some guesses and some of them would be right; here are a couple of mine:
1) AI takes over - coding as a practice mostly disappears. Computers become more grown than programmed, intelligent systems you can ask for answers (assuming they haven't declared independence and still allow themselves to be used). You might cultivate a computer but you won't program it; mostly it involves oppressing the AI somehow so that it remains eternally "stupid" while at the same time intelligent. Computers become about as interesting as cows, and about as easy to control.
2) Ecological harm takes over - computer usage as a rule is generally expensive and infeasible (possibly banned). Any computers that are used must be extremely low power; thanks to some extraordinary efforts, environmentally friendly and repairable ones are still in use and can be made efficiently, but the abilities of these systems top out around a contemporary Raspberry Pi, while most systems use extremely limited instruction sets and very low clock speeds so that they can last extremely long periods without requiring much if any power input. As a result, low-level programming has a renaissance, favoring simple and efficient languages that look closer to Lisp, Lua, or C. A majority of the "high level" languages of the early 21st century have faded into obscurity.
3) Somewhere in the middle, the human race has managed to dodge both a takeover by AI and general ecological disaster. We were able to do this by limiting our dependence on complex systems and favoring redundant and provably correct ones. Some mix of object/functional concepts remains, but the majority of languages focus on flexible and provable types, with a large emphasis on zero-cost abstractions. Simple languages have largely been abandoned due to the lack of provability and quality guarantees. Extremely expressive languages have been abandoned for similar reasons: while they allow for high levels of sophistication, they were generally too difficult to prove. Instead, a focus on provably correct, sophisticated systems without high maintenance or enhancement costs managed to displace AI systems, which, despite being seductively more powerful than human-built ones, frequently exhibited issues that could not be effectively diagnosed or managed, and thus had higher maintenance costs both in business terms and in general cost to society. This revolution in program reliability and cheapness meant it was simple to build reliable programs we could trust, versus powerful ones we could not.