Hurl, the Exceptional Language (hurl.wtf)
315 points by todsacerdoti on May 26, 2024 | 132 comments



For anyone designing a programming language: enforce namespacing for includes/imports! And if possible, don't allow top-level side effects.

    let foo = include "lib/foo.hurl"
    foo.init()
it's much easier to reason about than, for example:

    include "lib/foo.hurl" // side effects
    baz(buz) // function and variable that I have no idea if they are in the standard library, or included *somewhere*

That way it's much easier to tell where every name comes from.


Preferably enforce that the namespace matches the include/import statement, if the statement doesn't use an explicit name binding...

import "foo/bar" should make foo.* OR bar.* available, not bazz.*. I'm looking at you, Go.


Python has this in a slightly different spot. Most PyPI packages have the package name and module name aligned, but it's only a matter of convention, and there are some common deviations, like the pyyaml package providing the yaml module.
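
For instance, nothing ties the installed name to the imported name. A small sketch, assuming pyyaml is installed:

    # installed with: pip install pyyaml
    import yaml  # ...but the module it provides is called 'yaml'

    # nothing in the import line reveals the PyPI name
    config = yaml.safe_load("key: value")
    print(config["key"])  # value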


Ah yes, Python's package manager has it wrong also. But at least Python the language is clear, so you know "import foo" or "from bar import foo" creates a name "foo" in your file. Go has no such limitation. Imagine doing "import pyyaml" and "yaml" is the name defined...
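
To illustrate: in Python the bound name is always visible in the import line itself, whatever the distribution is called. A quick stdlib-only sketch:

    import json              # binds the name 'json' in this file
    from os import path      # binds the name 'path' in this file
    import collections as c  # binds the name 'c' in this file

    # every name used below is traceable to one of the lines above
    print(json.dumps({"a": 1}))
    print(path.join("a", "b"))
    print(c.Counter("aab"))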


Wait I’m so confused by this - this is the opposite of how I thought it worked? Go import creates exactly the symbol you mention in the import statement, like “import fmt” creates a symbol “fmt”?

Can you give an example of what you mean?


Yea, this is correct.

    import (
        "net/http"
        "os"

        "github.com/anacrolix/torrent"
        tstor "github.com/anacrolix/torrent/storage"
    )

would import as: http, os, torrent and tstor. That guy is mixed up.


I don't know what this proves. See the example above for exactly what I've said.


The only example I have in mind is modules, which can have a different name from their root package.

For example you can have github.com/user/go-module as module name, and "module" as package name, so "import github.com/user/go-module" will be imported as "module.*".

There’s a linter that automatically aliases this kind of import with its actual module name so that it’s non-ambiguous when reading the import list.


Here's an example, a package bar under a totally different name:

    package main

    // imports 'bar', you couldn't possibly know from reading this line
    import "github.com/remram44/foo20240527"

    func main() {
        // ??? where does this function come from ???
        bar.SomeFunc()
    }


> I'm looking at you, Go.

I’m so confused by this. Unless you use a dot import, isn’t this just bar.*?


I'm not sure what you mean. The name binding created is whatever `package <...>` is in the target package's file, which can be anything.


Ah got it. I think there’s such a strong convention there that it would be exceedingly rare to see in practice. It’s probably allowed just for compat reasons at this point — but I think your point is don’t make that mistake in the first place, not that Go is filled with flagrant violations of the convention.


This is normally how it works, no?


What does that mean?


it means: this is normally how imports in Go work, is it not?


What is how it works? How is it normal?

I posted a criticism of Go, I don't know what you mean by "this is how it normally works".


I'm saying that when I import something, "foo/bar", "bar/baz", etc., the way I access the exposed functions and types in that package usually corresponds to the basename, e.g. bar.* for the first example and baz.* for the second. Is your criticism of Go that it is technically not required for this to be the case?


I do not disagree, but I use IntelliJ for work and it shows clearly where some reference is imported from and lets you navigate to it with a shortcut. VSCode does similar things with plugins and LSP, just much worse. I cannot work in VSCode because navigating code is so slow. Is this suggestion only useful when you don’t have such tools? It seems impossible to me that people can live without them, at least in a professional setting.


Let's not write software and especially programming languages which assume or depend on users having access to advanced tools that require a monthly subscription.


> I use IntelliJ for work and it shows clearly where some reference

Useful when writing the code but not much use when reading the code.


This also allows one to pass parameters to `foo.init()`, something you cannot do with naked imports.


    import foo(42, FULL_OF_EELS) as foo

    -:1:8: E0012: Initializer of module "foo" has 3 arguments, 2 were provided.


In case we're still in the design phase of the language, named arguments would help around that (coupled with a good IDE)... but yeah, I probably agree with how I interpret your comment.

Imports shouldn't work the way the parent proposed, though. It would lead to too much pain.


I mean, it’s a language based around exceptions for flow control, I think the “easy to reason about” ship has sailed.

(Don’t confuse this with me thinking this project is worthless, I think it’s art.)


The imported files should really hurl their exported functions, and the importer needs to catch them into a variable.


That’s a great idea. I’ll have to do that for the next version.


Oh, dependency injection?


Maybe even dependency ejection.


Ha!


> I mean, it’s a language based around exceptions for flow control, I think the “easy to reason about” ship has sailed.

Sometimes I wonder if I'm exceptionally (haha) talented as I personally find the impact of exceptions on flow control pretty easy to grasp. But based on my understanding of other advanced computer language concepts, which is pretty lacking in some regards, I come to the conclusion that it can't be too hard, and people make a lot of fuss about it for no particular reason.


It’s largely difficult because either:

- you’re working in a language that doesn’t have checked exceptions, so the set of potential errors and the set of potentially error-raising calls are unbounded and unknowable

- you’re working in a language that has checked exceptions, and you hate that it makes you do work, so you catch-rethrow runtime errors that recreate the first scenario

- you’re working in a language that has checked exceptions, but someone else did the second scenario so you’re in the first scenario anyway


Other programmers tend to be bad at reliably cleaning up resources such as file handles, locks etc, so I need to inspect the whole invocation tree anyways to have an understanding of what runtime implications I've summoned by invoking other people's code.

As for myself, I've lived through the hell that are checked exceptions in Java. You learn that compositionality and checked exceptions are at odds when you try to insert a remoting layer into an application that has grown without IOExceptions. Then you learn that it's actually not necessary to know the set of possible errors, just make sure that you're not a bad programmer as in my first paragraph, and everyone will be fine. This is also something that you can learn from Exceptional C++.


Yeah, I’ve never understood the complaints about exceptions either: most of the time you want the exceptions to just bubble up anyway because, in that case, you only have to think about the contracts of the functions you interact with and not about the unusual states you might be in. Return-type or return-value based error handling has always seemed to me to be significantly worse.


The unchecked exception example doesn't seem any different than using a dynamically typed language and reading return values, and exceptions seem to get significantly more hate than those.


Because even in a dynamically typed language you can generally go look at what the function returns. You can’t look at what it throws without walking the entire call stack and inspecting the source code of the runtime.


Flow control involving recursion is already well into the weeds. Recursion and exceptions is probably a nightmare for someone not fond of ML or Lisp.


But why? I don't get it. You call something. It can break. It will break. Treat it as such with respect to the resources you've allocated. You can ignore error details here.

At the highest level of your application (and at a few critical places, executors, retrying strategies etc) handle all the exceptions you know of, and implement a sane default for everything you don't know.

Done.
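
A minimal Python sketch of that discipline; `parse`, `run`, and the exact handler policy are hypothetical, just to show the shape:

    import logging

    def load_config(path):
        # allocate, use, release -- no need to know the error details here
        with open(path) as f:        # closed even if parsing blows up
            return parse(f.read())   # 'parse' is a hypothetical helper

    def main():
        try:
            run(load_config("app.conf"))   # 'run' is hypothetical too
        except FileNotFoundError:
            run(None)                      # an error we know: use defaults
        except Exception:
            # sane default for everything we don't know
            logging.exception("unexpected error")
            raise SystemExit(1)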


Not really, what’s happened is that everyone’s so scared of exceptions they only use them in extremely confined ways. These ways are relatively easy to reason about. But the full fat arbitrary goto around your call stack is avoided.


I agree, side-effecting imports add to the spooky action at a distance aesthetic.


I think you meant to say "has sunk"


Agreed 100%.

I forked Ruby to have require that didn't clobber the symbol table but then lost interest in Ruby itself because the ecosystem seems unhinged on shared global mutable state.


Agree!

For that very reason in Elixir `import` is discouraged in favor of `alias`


I enjoy Elixir, but this situation is quite unfortunate: there are alias, import, require and use for referencing or pulling in code external to a file in some way or another, plus the possibility of calling a function from another module directly by name, without any import statement, using the full module name. And the most annoying part is that none of these give an indication of which file the other module is from, which is like 50% of the utility of an import statement for me.

Instead there is this pattern of naming modules based on the file path they are located in, which is not enforced.


Doesn't Elixir (and Erlang for that matter) specifically require that a module not live in any specific place in order to support hot reloading of modules? Though I suppose you could still have hot reloading and require a module to be in a specific place.


Not sure, could be that Elixir stuck close to Erlang's module system yeah.

But for example Gleam [1] is another language on the BEAM (it compiles to Erlang) that has a much nicer approach: all imports must be declared explicitly, and the import path in your local project is based on the file structure, so you always know where something is imported from [2].

[1] https://gleam.run/

[2] https://tour.gleam.run/basics/modules/


Nope. Elixir doesn't care where your module lives. (You can even "nest" modules, though it's not a great practice)


This is precisely why I stopped using Nim. I was going crazy trying to remember what functions were called, etc. I could do `from x import nil` but it felt like fighting the language.


Requiring namespaces would break dot call syntax, which is one of Nim’s main features.


It could be preserved if specifying the namespace used a different syntax.

Right now, if I have the `foo` module and `bar` function, I can call `arg1.bar()`, or `foo.bar(arg1)`. But if the namespace didn't also use `.`, then it wouldn't be an issue.

For the sake of argument, let's choose `/` like Clojure. Then we'd get `arg1.foo/bar()`, so we can specify the namespace and uniform function call syntax is preserved.


True. That said, would you find, say,

    "one,two".std/strutils/split(',')
more readable than this?

    "one,two".split(',')


Maybe. First, you need the module name primarily for disambiguation for the compiler, or possibly for readability. I wouldn't recommend requiring it for everything; eg one of the arguments against requiring naming all imported symbols explicitly is for procs that use `[ ]`.

But I think the std/strutils/split counter example isn't strong for two reasons:

1. When you disambiguate in Nim, you use just the module name, not its full path. So it would still just be `strutils/split`.

2. If we were to introduce such a syntax, we could also introduce an `import foobarbazqux as foo` alias syntax, which is present in many languages (eg JavaScript, Clojure, etc). This would also be useful if we ever had module name collisions, which has never happened to me in Nim, but doesn't seem impossible.


This is the chief problem with Python and most Lisps


I mean, for anyone designing a programming language, don't use exceptions as the chief means of control flow.

Critiquing a joke design is of dubious usefulness, at best :)


I've always kinda hated exceptions as it makes the contract between a caller and a callee hard to determine, and makes your code highly coupled. I prefer the Go or Rust style of handling it through return values. Briefly skimming the language, I'm not sure if there is anything that fixes that?

I think this kind of model could be cool if your IDE could dynamically determine all uncaught exceptions for a function, and lets you jump to each possible place where the exception could be thrown. Not sure how you handle coupling though. This seems like it would result in an insanely volatile control flow graph.


This is what IntelliJ does for Java. A problem is reported whenever you have a function that throws exceptions and isn’t caught in a caller anywhere in the project, and you can jump to implementation or calls easily.

However, exceptions that a function can throw are part of the function signature in Java unless they extend RuntimeException (and your program won’t compile if you throw a checked exception without adding it to the signature). While the circumstances in Java make it much easier for IDEs to report uncaught exceptions, it’s a solvable problem for non-runtime exceptions using static analysis.

On the other hand, returning standardised Ok/Err-wrapped values seems like a simpler approach, both in terms of tooling support and developer convenience.


I think once Java has finished up exception switch-case it will be a model followed by other languages. Being able to catch exceptions at both method and transaction boundaries will be a boon for readable control flow.


> and transaction boundaries

What are transaction boundaries? Is Java getting transactions?


Algebraic effects is going in completely the opposite direction.


> ...as it makes the contract between a caller and a callee hard to determine, and makes your code highly coupled. I prefer the Go or Rust style of handling it through return values.

There is literally (literally!) no difference at all between throwing an exception and returning it as a variable. Except for the fact that in the exception-passing style you have to write the boilerplate by hand, instead of letting the compiler do it for you.

Why anybody with a sane and functioning brain would want to do that by hand in 2024 I will never understand.


I think people like it because the control flow of a given program is more obvious when you write it that way. No one can "throw Foo" three libraries down from their caller as an "API". See https://go.dev/blog/errors-are-values


You're just trading one type of control flow visibility for another. With even the most basic amount of error-return handling the actual control flow of your program is quite obscured.


I dislike Go, but I understand why people like most of its decisions.

I cannot fathom why people think it does error handling well, though. The codebase I work in has _so_ many errors that are completely ignored, and errors are a lot harder to track down.

These problems can be solved by writing better code, but the problem is that it's, of course, hard to write good code.

Java's exception handling has problems, but at least it gives you nice stack traces and you can't forget to propagate an exception.


> I cannot fathom why people think it does error handling well, though.

It doesn't. But it lets me do error handling well if I'm up to it. Which is the worst of all possible worlds except for the one where error handling is done badly.

> Java's exception handling has problems, but at least it gives you nice stack traces and you can't forget to propagate an exception.

Y'know, I actually had the chance to use raw java (and even a modern version of java to boot) for something semi-recentish, and it was pretty great. I was surprised how well it worked out. Unfortunately, this wasn't the experience I got on any sort of enterprise java project though. The stack traces usually truncated before it got out of the framework being used.

I'm not really working in go because I'm rejecting java itself. I have certain disagreements with the design philosophy, especially in its early days, but it's reasonably possible to write decent code in it. I refuse to work in java because of its ecosystem. I know what writing java is like in an enterprise setting, and I'll take 1000 `if err != nil`s over that experience.


Enterprise Java should really be called something else because it ruins the name of vanilla Java.

Vanilla Java is excellent. I miss some language features, but recent updates have really made the language competitive.

On the other hand, enterprise Java continues to be what everyone thinks of when "Java" is mentioned. It's also terrible.


As an implementation detail, exceptions are usually much more expensive than just popping the stack, as computation is needed for each frame you traverse.

Having both throw and return is like regex: now you have two problems.


Exceptions are less expensive in the unexceptional case. Consuming a Result value requires a branch to see whether it holds Error. That is not necessary if the function returns a value directly (and throws an exception on error).
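
A Python sketch of that mandatory branch, with a hypothetical tagged-tuple result:

    def parse_int(s):
        # result style: success and failure share one return channel
        try:
            return ("ok", int(s))
        except ValueError as e:
            return ("err", e)

    tag, value = parse_int("42")
    if tag == "ok":           # this branch runs on every single call,
        print(value * 2)      # whether or not an error ever happens
    else:
        print("failed:", value)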


However, exceptions require a larger runtime and can make optimizations harder to reason about.


I have heard this before but never really understood it.

Take this:

    int bar(int arg); // May throw

    int foo(int arg) {
        int b = bar(arg);
        return b * 5;
    }
VS:

    Error<int> bar(int arg); // Uses error type

    Error<int> foo(int arg) {
        auto b = bar(arg);
        if (b.ok()) { // Happy Path
            return b.unwrap() * 5;
        }
        return b; // Exceptional Path: propagate the error value
    }

In the happy case (no exception) how can the first version be harder to optimize than the second one? In the exceptional case exceptions will probably be slower. You trade fast common case for slow exceptional case so it makes sense to me. Is the slower one the one that is hard to optimize? Is this what we are talking about when optimizations are harder to reason about? Or are there some other things that become harder because of exception? Things like RAII, defer, goto stuff?


> In the happy case (no exception) how can the first version be harder to optimize than the second one?

If your runtime has to do extra work to support stack unwinding, it could easily be slower. You're doing work in the happy path in that case, just in case the sad path happens. My guess is adding function metadata to some sort of data structure (linked list?) so that it can figure out what's happening when it long-jumps to the handler. I'd bet that allocation has at least one conditional in it.

The alternative would be putting the metadata on the stack, letting the function complete normally, and then check a flag to see which path you're on (same overhead).

To be clear, I have no idea how various runtimes implement this, just that the extra magic could easily have the same, or more, overhead than a conditional check.


Admittedly, I'm not an expert. I believe it's due to the extra control flow points (assume all functions can throw, not just specially marked ones) and not having just a local value, which precludes analyzing the operations done on the error path.


Unless you're using exclusively Java-style checked exceptions, there literally (literally!) is a huge difference. That is:

    def main():
        try:
            my_a = a()
        except (ExA, ExB):
            my_a = None
        ...

    def a() -> A:
        ...
        my_b = b()
        ...

    def b() -> B:
        ...
If b changes its signature to return type C, then it is a type error in a. main doesn't need to worry about what b returns; only a does.

BUT if b begins raising ExC instead of ExB, then that will break main at runtime. That is, main needs to be aware of what b could raise, even though it doesn't directly call it.


This is an unrealistic example and has error handling completely backwards. The exception handler in main() only knows how to handle specific types of errors -- it doesn't know or care where they are thrown; that's not its problem. Let's say, for example, it knows it can handle network errors by retrying the operation.

If b() changes to throw a different type of exception that main() doesn't know how to handle, then main will break. And breaking is entirely the correct behaviour.

Maybe b() is an interface and the actual implementation might not even have existed when main() was written. Maybe yesterday it was implemented with a file, tomorrow with an HTTP call, and maybe next week something else. But tomorrow, at least, it can retry the operation when it fails.


That's all great in theory, but in practice, I see except clauses mostly used to handle particular exception classes that callees are known to throw.


My applications have very few exception handlers and most don't do any "handling" at all except logging.

My most robust application is a desktop application with a single exception handler at the event loop that merely prints the exception message in a message box. If you try to save a file to a bad network location or something, just click OK and try again.


Literally the exact same thing happens if you use exception passing style and encapsulate all errors as a generic `Error` type. (Which is what everyone does in practice.)


I prefer to build result-passing, no-exception-throwing systems out of an exception-throwing core, where the language itself may throw exceptions but the thing I build from the language always returns results.

Elm is an extreme case where indexing into an array returns a result type instead of an exception when the index is out of bounds, unlike most languages including Haskell.

Maybe my program logic is intended to always access only valid indices in an array, but here I'm given a result type where I have nothing to do in the error case since my code is never intended to reach that case. I would rather let the language throw an out-of-bounds exception here to tell me that my implementation is incorrect while I'm testing.

Same with libraries in a language. It really depends on the use cases of the library you're writing whether results or exceptions are better. The most convenient thing for the user of a library would be to provide both exception-throwing and result-passing alternatives. This is what the OCaml standard library does as well.
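
Python's str API happens to ship the same dual convention: `index` throws, `find` returns a sentinel.

    s = "hello"

    # exception-throwing variant: absence means my logic is wrong
    try:
        print(s.index("z"))
    except ValueError:
        print("not found (a bug worth hearing about)")

    # result-passing variant of the same operation
    print(s.find("z"))  # -1, a sentinel value instead of an exception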


If Go didn't naturally eat error context, I'd like it more. But in the time I've used it, it makes errors much, much more painful to root-cause without a debugger.


> Now let's see an example with toss. This is used mostly for passing multiple values out of the function. You don't really need it, but it's cute.

Not useful? You've implemented resumable generators! Of course, getting them to do anything except resume immediately might be... exciting. :P Just need to structure your entire codebase as an inside-out stack of `toss`es...


Not quite, since with resumable generators you can resume at any later point in the program, while here "return" must be lexically scoped to the handler (whereas in e.g. Python you can call next() wherever).

This is really more like passing a callback through a side channel. "toss" is invoking said callback, and "return" is, well, returning from it.
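
For contrast, a Python generator hands you the suspended frame as a first-class value you can resume wherever and whenever you like:

    def gen():
        x = yield 1   # suspend here; 'x' is whatever the caller sends back
        yield x + 1

    g = gen()
    print(next(g))     # 1 -- runs up to the first yield
    # ...arbitrary other work can happen in between...
    print(g.send(41))  # 42 -- resumes the suspended frame later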


Exactly my first thought after reading that.

Part of me wonders if it's a bit of a joke, panning a genuinely useful feature (in any other language) as disposable and silly (in this one).


It's more like stack based event propagation.


>You've implemented resumable generators!

Like yield in C#?


That was my first thought, too. Nice, tiny language with generators? Neat!


Right. Lots of fun ahead debugging a larger codebase :-)


Yeah that made me chuckle. I wish the languages I use had resumable exception handling. This is a ridiculously good, and extremely useful feature. Great for API callbacks, among other things.


OCaml 5 essentially has it, as "effect handlers". It's how lightweight concurrency is implemented.


Hurl seems very close to having a conditions system (à la Smalltalk / CL): unwinding and resuming are just two possible restarts (https://gigamonkeys.com/book/beyond-exception-handling-condi...).


Not related to the project as such, but I am firmly of the opinion that the world would be a better place if more things used the .wtf extension for their domain :)


This sounds like a weaker form of algebraic effects, but it is still cool to see such a language and see what you can do with it.


I never understood "algebraic effects". But I understand the Hurl docs. Are "algebraic effects" basically the "toss" keyword from Hurl? If so, how are "algebraic effects" stronger than the "toss" keyword?


More general than the toss keyword, even. If you're interested in concrete examples, you might have a look at the Koka documentation.


As mentioned in another comment here, the unwind is expensive. That is, searching back through the stack finding the original call site that triggered the exception. How do algebraic effects handle that performance hit?


You can think of algebraic effects kind of like:

From callsite Foo, I call out to a function with a known name, Bar, with one or more parameters. The runtime (or sufficiently smart compiler) searches upwards in scope, for a handler Baz that provides the function with that name. Baz's Bar is then called, with both the provided parameters, and, crucially, a function that is "the rest of Foo."

So, an implementation of Exception with effects would ignore the resume, and look something like:

    define_effect Exception {
      // only one function provided by Exception, but could have more
      throw(String)
    }
    define_handler printExceptions {
      throw(msg, resume_func): { println(msg) }
    }
    define_func getPage(url) {
      request = http.get(url)
      if not request.ok { throw("Could not download") }
      return request
    }
    // main entrypoint
    withHandlers printExceptions {
      page = getPage("https://cheese.com")
      println(page.text)
    }
But you could also write an "on error resume next" handler for Exception. Exceptions thrown with this handler would be equivalent to toss. (in real life you'd probably write a different effect, rather than re-using the Exception/throw effect):

    define_handler onErrorResumeNext {
      throw(msg, resume_func): { 
        println(msg)
        // YOLO, call it anyways
        withHandlers onErrorResumeNext {
          resume_func()
        }
      }
    }
    withHandlers onErrorResumeNext {
      page = getPage("https://cheese.com")
      println(page.text) // Probably uninitialized :P
    }
Breaking away from the Exception example, two cool examples are:

    // Stream - potentially infinite and/or asynchronous iterables
    // basically just like python generators
    define_effect Stream {
      emit(item)
    }
    define_handler toList(accumulator) {
      emit(item, resume_func): {
        if item == nil {
          return accumulator
        } else {
          accumulator.append(item)
          withHandlers toList(accumulator) { resume_func() }
        }
      }
    }
    define_handler take(how_many) {
      emit(item, resume_func): {
        if how_many == 0 {
          emit nil
        } else {
          emit item  // forward the item to the outer handler
          withHandlers take(how_many-1) { resume_func() }
        }
      }
    }
    define_func fib_stream() {
      a, b = 0, 1
      loop {
        emit a
        a, b = b, a+b
      }
    }
    // Usage
    first_five = withHandlers toList([]) {
      withHandlers take(5) {
        fib_stream()
      }
    }
    println("The first five fibonacci numbers are", first_five)
And this one I'm just going to lift from the Unison documentation [1]:

    Each.toList do
      a = Each.range 0 5
      -- beginning of resume_func f_A
      b = each [1, 2, 3]
      -- beginning of resume_func f_B
      guard (a < b)
      -- beginning of resume_func f_C
      (a, b)
    -- yields
    [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
Note that this has the same semantics as:

    results = []
    for (a = 0; a < 5 ; a++) {  // for each a, call f_A(a)
      foreach b in [1, 2, 3] {  // for each b, call f_B(b)
        if not a<b { continue } // if a<b, call f_C(a, b)
        results.append((a, b))
      }
    }
And it is possible to do this, because these resumable functions can be resumed an arbitrary number of times - it's up to the handler. They also can pass parameters back to the resume_func.

    Exceptions/hurl = exactly 0 resumptions
    toss = exactly 1 resumption
    effects = 0..many resumptions

[1]: https://share.unison-lang.org/@unison/base/code/releases/3.5...


I see. Many thanks for this! Sounds a bit like a mixture of coroutines and dynamic scoping. Very interesting!
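
You can fake both halves in Python with a handler stack supplying the dynamic scoping; a toy sketch, not how real effect systems are implemented:

    _handlers = []  # dynamically scoped stack of handlers

    def toss(msg):
        # ask the nearest enclosing handler, then resume with its answer
        return _handlers[-1](msg)

    def with_handler(handler, body):
        _handlers.append(handler)
        try:
            return body()
        finally:
            _handlers.pop()

    def work():
        n = toss("need a number")  # resumes with the handler's return value
        return n * 2

    print(with_handler(lambda msg: 21, work))  # 42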


Nice thought experiment. I absolutely hate exceptions and I'd like a language without exceptions.

They're the goto of our time.

When we have Maybe/Option and Effect/Result, there is really no reason to throw exceptions and having to mentally track where that is being handled.

I'm a bit worried about algebraic effects becoming more popular (and influencing frontend JS - which is already way too complex) because it's promoting throwing "exceptions" to control the flow. All of this to avoid the coloring problem in async/sync? Absolutely not worth it imho.


Wow, I hate this. But it’s … oddly almost kinda elegant? In a very hard to mentally model way, but even so.

Speaking more seriously than is perhaps warranted, I’d slightly prefer it if there were syntactically different “catch” constructs for resumable and nonresumable exceptions, which would remove syntactic ambiguity around whether “return” was sending control flow back to the thrower of the nearest immediate exception or not.

Also, the stdlib shouldn’t have chickened out with regular value-returning functions. Just because the dogfood gives you heartburn doesn’t mean you shouldn’t eat it :)


> Also, the stdlib shouldn’t have chickened out with regular value-returning functions. Just because the dogfood gives you heartburn doesn’t mean you shouldn’t eat it :)

Yes, although an alternative would be to make it so that if you call the function where a value is expected, it will automatically catch.


Another alternative would be to use macros instead.


Did I get that right that you can catch what was hurled but not what was tossed? That will take some getting used to.

Also, I am concerned about how much Hurl I can write before people start calling me a tosser?


'Toss' sounds like an interesting language construct: it walks the stack to find an exception handler and then walks back to where it was to resume execution as if nothing happened. It looks like you can inject additional behavior at runtime using this construct. Usually in object-oriented code you do dependency injection through a service's constructor, but 'toss' allows you to do it using "toss handlers"?


Check out Koka, a “legit” language with an algebraic effect system.

https://koka-lang.github.io/koka/doc/book.html#why-handlers


This is quite similar to the Common Lisp conditions system… which I actually don’t know much about, but I do know that it lets you inject behaviour at runtime like this does.


It's a very restricted form of conditions, since it only allows resuming.

A conditions system would subsume both `toss` and `hurl` (because you can have an unwinding restart), as well as provide more flexibility: the condition can provide multiple user-defined restarts and the handler can pick between them (statically or dynamically), as well as feed content back into the signalling function.

For instance a classic CL restart is `use-value`, which is the conventional name of restarts feeding a default value in case of a missing or invalid item.

So let's say that you have a hashmap type: on lookup failure it could signal a condition offering a `use-value` restart to feed it a default value (possibly dynamically constructed), or abort/unwind; it could also have a restart to insert-and-return the value (so it would behave as a `defaultdict`/`putIfAbsent`)
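
Python's dict family bakes those restarts in as fixed, separate entry points rather than letting the caller choose at signal time; roughly:

    from collections import defaultdict

    d = {}
    print(d.get("k", 42))  # ~ use-value: supply a default, store nothing

    dd = defaultdict(int)
    print(dd["k"])         # ~ insert-and-return: stores 0 under "k", returns it

    try:
        print(d["k"])      # ~ abort/unwind: no restart taken
    except KeyError:
        print("unwound")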


This is similar to `Resume Next` in VB



Is it implemented using CPS conversion?


It certainly looks like manual continuation passing everywhere.


Shame that you guys don't actually believe in this, as it's actually the way.


I knew I recognized that name, it’s from hurl.dev


Top-tier wordplay


You have reached the maximum number of changes allowed. Please subscribe to make more changes.


Don't just throw it, hurl it


Naming conflict with Hurl [0], a command-line tool that runs HTTP requests defined in a simple plain-text format.

[0] https://hurl.dev/docs/manual.html


Multiple products sharing a name is unavoidable. GitHub had 90,000 unique repositories in its first year. If each had to have a unique name, it would almost have exhausted the English dictionary. In 2018, it reached 100 million repositories.


I have a proposal: let's accept as a given that all single words have been taken, and require all new languages/tools to be named with either multiple words, or by including a special character.

Note that C# (aka C Sharp) was way ahead of its time by including both options.


If I ever create a language, I'm calling it clang for maximum naming conflict. Am I talking about the c language? The compiler? or this newfangled language written by a completely unqualified web dev? ¯\_(ツ)_/¯


How about clang: a markup language to describe things which could be made by blacksmiths.


Everyone understands that, but when the project name you're copying is popular enough (12k stars meets this threshold, personally) and is still actively used and developed, a name change should be more seriously considered to avoid confusion (which I exhibited, fwiw).


Well it's a totally different category of thing (in multiple ways) so I can't imagine people getting confused about this.


Note that the specific naming of hurl.dev is not random: hurl is a wrapper for curl, where the requests are stored in a plaintext file.

So hurl.dev didn't roll dice and get hurl, but consciously chose a close-sounding word.

In that sense, that makes one claim over the other a little more valid, in my mind, though you're right that clashes will have to occur.

I did get confused extra hard by the URL hurl.wtf, when hurl.dev is so close yet about a different topic.


Why do names have to come from the English dictionary? Google can't (couldn't) be found in one.


The English dictionary was an example. English names are also rather popular among devs. But there aren't over 100M unique words and names in the world, and most of them would, simply put, suck as names for projects.


100,000,000 is 10^8. If you build words by choosing out of 10 viable following letters of an alphabet, you'll hit that in 8 characters. That's not an especially long word. (I don't actually know if that's a good heuristic, but at least it gives an idea.)

Maybe a lot of them suck as project names, but name collisions also suck. I'd rather have unique names than beautiful but hard-to-find ones. Collisions are fine if they happen across geographical and cultural boundaries, but the software culture, for better or worse, is pretty global.


lol

This industry pumps out new model languages every week, but is unable to generate new names with simple Markov chains?

Anyone giving a project a name that is already taken is a clown


Programming languages live in a separate namespace invoked by passing the 'lang' attribute to your search engine of choice.


Gay Agenda License 1.0 had me laughing.



Unfortunately, it's not open source, but it's still hilarious :)


Although it is not open source, this program can also be used with AGPL3, which is an open source license, so it is good enough.


This is fantastic


Just rebrand this as “algebraic effects” and suddenly every academic will pretend it’s revolutionary.


Also my thought. It's very interesting how the designers of this language, presumably unaware of algebraic effects, write about it as if it were a terrible joke when this is actually one of the trendiest ideas in PL.



