An Object-Oriented Language for the '20s (nels.onl)
179 points by ar-nelson 3 months ago | 329 comments



> But, if we made a new statically-typed OO language from scratch in 2021, something in the vein of Java or C#, taking everything we've learned from functional programming and a decade-plus of scathing OO criticism, could we fix this?

Modern C# is already a statically-typed OO language that takes lots of lessons from functional programming and decades of scathing OO criticism (after all it's basically "Java done right"). And of course there's also Scala and F# if you want something more functional than OO.

I think the author summarizes it best at the end: "I realize there's not much need for a language like this—the space is crowded enough as it is—but it's fun to dream.".


One of the best OO languages out there is Eiffel: value types, DbC, MI, generics, lambdas, non-nullable references, a very nice graphical editor, a JIT for development, and an AOT toolchain that hooks into the C and C++ compilers of the host platform.

Java and .NET are still catching up to what it offered in 2000.

Sadly it never got a big name sponsor to push it.


OCaml also has a long-standing claim to being an OO-plus-functional language for the future that hasn't seen a lot of uptake. It would have been heartening and reassuring for the author to explore more candidates than Scala.


The complaints I remember about OCaml the last time I checked it out were related to multithreading. No idea if that's changed. But I do see OCaml very often used to write a compiler, especially for languages that aren't yet self-compiling, most recently when looking into Haxe.


Node.js, Python, and Ruby are single-threaded. They all have ways to achieve concurrency and multi-process usage right now, and so does OCaml. I get it, people really want OCaml multicore–and it's coming–but that by itself is not stopping OCaml from being an industrial-strength, production-ready language right now.


For generations, the only way to use multiple CPUs was multiple processes and asynchronous syscalls.

So for C that was OK, but for OCaml it was a reason not to use the language.



OCaml is awesome. The syntax presents a paradox, though.

It makes it slower to approach than curly brace languages or even lisps, but once you learn it, it’s a secret weapon of conciseness.


It's pretty close to Lisp without parentheses.


In my brief usage of OCaml, it seemed like the consensus was that the OO features were better off forgotten.


It's not so much that, but rather that OCaml's dominant paradigm of modules and functions is just so powerful that you almost never need OOP. But when you do, it's super useful to have.


That feels like the consensus for every language that mixes OO with FP...


I disagree. OO features are very heavily used in Scala and the languages it influenced. TypeScript and Swift particularly come to mind. I think the key here is that Scala paved the way for type systems capable of modeling OOP and FP idioms, cleaning up what OCaml attempted. Although one could argue that none of the other post-Scala languages inherit the full toolkit of typed FP features as much as Scala (pattern matching, everything is an expression, true ADTs).


Every user of Scala I've ever talked to has said something to the effect that the more they became comfortable with Scala, the more functional their code became. That includes people who have never used a functional language before taking up Scala. I've never met any who said their code became more object oriented. In fact, most "best practices" in OO eschew some of the key OO features, e.g., composition over inheritance; of those two, the latter is a distinctly OO concept, and the former isn't.


Like Lisp and Smalltalk?


> Sadly it never got a big name sponsor to push it.

Has Eiffel had a high-quality open source implementation? My recollection is that Eiffel Studio's software is proprietary and expensive.


You can download Eiffel Studio and use it for free. I'm not sure what restrictions there are or capabilities that may be removed. It's included in Homebrew (mac) but it's an older version. Macports seems to have the current version available. Not sure about various Linux distros.

https://dev.eiffel.com/Main_Page


Here's how Eiffel Software describes the licensing for EiffelStudio:

> If you wish to earn a commercial benefit from your application and not release its source code, you must purchase the number of licenses you need for your development from Eiffel Software. After you purchase licenses, you are free to use and distribute your application the way you want (see the end user license agreement).

> If you select the Open Source license, you must release your development under an Open Source license for the benefit of the community at large.

So I'm still confused about the licensing. For the open source license, is a GPL runtime embedded in the compiled program or something?


It's not free software, because it limits commercial use. It's more like a Creative Commons Share-Alike license. I remember that back in the day, it was more restrictive. (The same applied to Plan 9, which never enjoyed the same popularity as Unix, despite being technically much superior.)

Back in the day, MySQL had a license like that.

It still should be fine for free software development.


Why would the GPL get involved? Copyright is flexible because by default you don't have any rights to use someone else's creative works. They can attach whatever requirements they like to it. Here, they've said you can use it for free if you release your own software as open source. Doesn't seem like a problem.


Sounds like a fundamentally proprietary license with a benevolent twist.

As much as I like the open source encouragement, this is no way to inspire industry-wide adoption.


We are speaking about languages, not which implementations are available.

Are there free tools for becoming a mechanic? No, they have to be paid for, either new or used in some form.

And yes, nowadays there is a community version for the free beer crowd.


> We are speaking about languages, not which implementations are available.

When it comes to adoption, an area where Eiffel has clearly lagged, the availability of implementations can't be an afterthought; not when you have good, free implementations for Python, Ruby, Go, Rust, Julia, and many more.

Objective-C and Swift are interesting languages that are held back by the fact that the best environment is Xcode on a Mac. Similarly, Qt has been plagued by license anxiety for years (first the QPL, and now the delayed LGPL releases). Therefore, I would say that one of the "obvious, non-negotiable things any new OO language should have," to quote the article, is a good open source implementation.


Language adoption was already a thing from the 1950s until the late 90s, before the rise of free-beer culture.

Even UNIX's license had to be paid for in some form, although a symbolic one.


>We are speaking about languages, not which implementations are available.

We are speaking about the adoption of languages, though ("Sadly it never got a big name sponsor to push it."), not languages in abstracto, and for that available implementations matter.

>Are there free tools for becoming a mechanic? No, they have to be paid for, either new or used in some form

Which is neither here nor there. Things are different for programmers, where there are free and/or libre tools, and people have come to expect them and even prefer them.


Language adoption was already a thing before the rise of GNU and BSD in the late 90's.

I would love to see the world where such programmers get paid 100% the same way they are willing to pay for their tooling.

Exactly the same way.


>Language adoption was already a thing before the rise of GNU and BSD in the late 90's.

Yeah, but not for Eiffel :-)

>I would love to see the world where such programmers get paid 100% the same way they are willing to pay for their tooling.

Aside from those language devs getting paid (which the core dev teams of otherwise free/FOSS languages like Java, C#, Swift, Go, etc. also are), what else would change?

Paid/proprietary is orthogonal to better languages/tooling. If popular compilers/APIs were paid, we could just as well get the same commercial crap, lack of interest, bloat, needless novelty, etc. that we get with most paid proprietary software and hardware.

And even worse, since they would be both out of reach for amateurs/students/the developing world/etc., and crappy "enterprise level" for the rest, driven by what sells to pointy-haired bosses, not programmers.


> Are there free tools for becoming a mechanic? No, they have to be paid for, either new or used in some form.

But if there were widely available free tools for mechanics, the people making paid ones would have a lot of difficulty getting mechanics to pay for them.


So why should mechanics be paid at all?

They should repair my car for free as well.

Feeling happy that someone left the garage with a working car is a good enough reward.


Yeah, but it didn't have braces, so it never stood a chance /s

Meyer's book was pretty highly ranked in the canon of OOD, so at least a few people knew the language from there. But similar things could be said about Smalltalk. Eiffel was in a bit of a weird place, though: C++ and later Java ate a lot of its customers, it didn't settle into a smaller niche like Delphi, and on the safe & secure side you had Ada.

Doesn't Meyer now have the professorship that Wirth had before? Apparently the place for the Cassandras of programming language design.


On the other hand Go is doing very well with only minimal OOP features. I think for general purpose languages, a robust and big standard library is more important than any specific feature.


IMO Java has learned from the mistakes of C# as well. See Task<T> vs Loom's virtual threads. Brian Goetz (the Java language architect) himself even said they're waiting to see how C#'s nullable types play out before considering them in Java.


Yeah. It's kinda funny. Due to extensive F# -> C# -> Java feature bleed you can consider them beta channels of the same language: Java being LTS, C# main, and F# dev builds.


C# keeps getting features from Java as well, e.g. default interface methods.

And you are forgetting the ML -> F# / Scala part.


Would you choose Java or C# if you had to pick one?


C#, because of better support for value types and low level coding.

Also despite all its high and lows, the .NET story regarding desktop frameworks is much better than Swing/JavaFX, and whatever Google has butchered Java with.

However, I have never been an "X developer"; I have always used both at my employers, for as long as they have existed, alongside C++, depending on the project and customer requirements.


Async/await and implicit thread scheduling solve different problems. Loom is more about catching up to Go and Kotlin, no?


They mostly solve the same problems, albeit in very different ways. Loom is primarily about making threads cheaper, async await is primarily working around threads being too expensive.


Implicit scheduling like Loom doesn't solve the problem of working with thread-based synchronization such as what you often see in UI. You want a paradigm that lets the user easily define what runs on the main/UI thread and what runs elsewhere. Loom's goal is implicit suspension without thinking about threads. It's a different use case.

Backend engineers seem unaware of the problems async/await solves.


Using coroutines on a UI thread is a recipe for horrible bugs. I've done it, I wouldn't do it again unless I had to (i.e. all JS in the browser is forced to work this way so there it's pointless to resist).

The right approach is what JavaFX or Swing do: make it easy to schedule a lambda onto the UI thread from any background thread. Whilst that lambda executes, events are not dispatched. Where things get painful is where you try to make a single chunk of code look blocking whilst it's actually not. The user can insert arbitrary events into the middle of your code the moment you do an 'await' and you have to plan for that.

In practice, I've not found the typical UI toolkit paradigm to ever be a problem. If your UI starts to stutter, it's time to move something to a background thread. It forces you to admit that the UI and work are genuinely running in parallel and consider cancellation semantics, work/results handoff, etc. And of course you properly use multi-core systems.


To some degree it's a style choice, but most (all?) of the languages used for UI use some kind of UI-thread paradigm with a way to run code on and off of that thread. Not everything called a coroutine is designed for UI interaction; Kotlin's are better suited than goroutines. As you admit yourself, JS and others have settled on this style, and one has to assume it's for a good reason.

Async/await solves the problem of posting execution to another thread, getting a result, and easily and efficiently waiting for that result on exactly the thread you need.

It'll be interesting when or if someone uses Loom in the UI space.
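To make the async/await point concrete in C# terms, here's a sketch of a hypothetical WinForms/WPF-style handler (refreshButton, resultLabel, and LoadData are made-up members): the await captures the UI synchronization context, so the continuation resumes on the UI thread.

    private async void OnRefreshClicked(object sender, EventArgs e)
    {
        refreshButton.Enabled = false;                   // runs on the UI thread

        string data = await Task.Run(() => LoadData());  // the work runs on a thread-pool thread

        // Back on the UI thread here, because await captured the UI synchronization context.
        resultLabel.Text = data;
        refreshButton.Enabled = true;
    }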


With JS the reason isn't really all that good - it's because it's expensive and difficult to make JavaScript and browser engines thread safe (or any dynamic scripting language). If it's hard to make the language genuinely multi-threaded then you end up stuck with callbacks and a standard library that uses a totally broken design to try and be as single-threaded as possible. It's basically Windows 3.1 all over again, but in 2021 instead of 1991. Quite sad :(

Async/await in the browser at least is unrelated to threading (unless you mean hypothetical helper threads inside the browser). You don't get any choice about where to handle the callback. It's always on the single rendering thread.


Not Kotlin, AFAIK Loom is about the JVM.


> Modern C# is already a statically-typed OO language that takes lots of lessons from functional programming and decades of scathing OO criticism (after all it's basically "Java done right").

C# 1.0 was based on Java 1.3 and modern C# has inherited a lot of its warts and added some of its own, like delegates and events.


And Java is based on Objective-C, except without the good ideas like messages and with new bad ideas like checked exceptions.


The problem is that way too much of stdlib declared no checked exceptions, which forced any useful code to smuggle them out wrapped, or just give up and stop declaring any. Generic types in particular need exception lists; I should be able to take a Function<T, R, X1, X2, ...> arg and imply that I throw whatever it throws.

Though nowadays I think maybe language support for monadic Result<T, X> is the way to go.
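For illustration (the parent is talking about Java, but C# is the language under discussion elsewhere in this thread), a minimal sketch of such a Result type, with made-up names, might look like this:

    using System;

    // Minimal sketch: a result is either a value or an error, never both.
    public readonly struct Result<T, TError>
    {
        public T Value { get; }
        public TError Error { get; }
        public bool IsOk { get; }

        private Result(T value, TError error, bool isOk) { Value = value; Error = error; IsOk = isOk; }

        public static Result<T, TError> Ok(T value) => new Result<T, TError>(value, default, true);
        public static Result<T, TError> Fail(TError error) => new Result<T, TError>(default, error, false);

        // Monadic bind: the next step only runs if the previous one succeeded.
        public Result<TNext, TError> Then<TNext>(Func<T, Result<TNext, TError>> next) =>
            IsOk ? next(Value) : Result<TNext, TError>.Fail(Error);
    }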


Result/Either types are fashionable but strictly worse than exceptions, in my view. Even checked exceptions should be improved rather than junked.

Really, error types are only a small step above what C uses. That approach was tossed in favour of exceptions for good reasons that the industry now seems to be collectively forgetting, often with poorly reasoned arguments. FP languages like Haskell use them as if they were C because of laziness and insistence that everything be modelled as a mathematical function, not because they result in a better developer experience. Exceptions have many usability benefits that I'd be loathe to give up on.


> Result/Either types are fashionable but strictly worse than exceptions, in my view.

Why?

> Really, error types are only a small step above what C uses. That approach was tossed in favour of exceptions for good reasons that the industry now seems to be collectively forgetting, often with poorly reasoned arguments.

Your assertion makes no sense at all. Exceptions were designed with the explicit goal of brushing potentially unrecoverable exceptional errors under the rug. This is by no means the only use for return values. Take, for instance, HTTP requests, the whole classes of different results that can be returned, and how they are completely orthogonal to the data being returned. It would make no sense to handle these use cases with exceptions.

Another thing that it seems you're entirely missing is that using a Result monad to express results allows you to explicitly discretize and define the codomain of your function, along dimensions that help you reason about, and thus test, the input and output of any function.

Using monad semantics to handle return types also provides a convenient way to implement what is often referred to as railway-oriented programming:

https://fsharpforfunandprofit.com/posts/recipe-part2/


> Exceptions were designed with the explicit goal of brushing potentially unrecoverable exceptional errors under the rug

That's what I meant by poorly reasoned arguments. Do you really think the designers of C++, Java, C#, Delphi etc all sat around and explicitly said to themselves "how can we brush errors under the rug"? Of course they didn't. They didn't even imply it, let alone state it explicitly, so that whole argument is just nonsense. They all argued, quite explicitly, that exceptions are a better way to handle errors than C style error codes. And their arguments were correct.

> Using monad semantics to handle return types also provides a convenient way to implement what is often referred to as railway-oriented programming

Again, this is what I mean by poorly reasoned arguments. "Railway oriented programming" is something FP languages made up to try and get closer to how imperative languages work, it's not something that's useful if you're already in an imperative language, exactly because exceptions already give you that and a lot more in addition.

Exceptions give the imperative programmer a lot of highly useful features:

1. Errors can be handled at the appropriate layer of the program, instead of all pieces of code having to be polluted with boilerplate propagation logic. Yes you have that problem even in FP languages. The right place to handle most errors is often at the outermost loop of your application or server, by logging it and discarding that unit of work or returning a description of the error to the user.

2. Exceptions come with stack traces and structured data about the error. I've often been able to fix bugs by just looking at the stack trace and nothing else. Error types usually don't have this information.

3. Exceptions can be abstracted without losing information, when chaining is used (as in Java). You can catch an error, rethrow it as a type appropriate for the abstraction level of your library, and do so without losing the original exception. Error types almost never have this kind of functionality.

4. Because exceptions are by definition exceptional, the compiler can safely assume those paths will rarely be taken and optimise accordingly. Error handling code can be moved entirely off the hot paths and out of the i-cache. Error types on the other hand are opaque to the compiler and result in branching everywhere, which hits performance.

5. When writing code that doesn't actually need to programmatically handle all possible errors e.g. prototypes, quick one-off tools you're writing for yourself, you can just let errors propagate to the top of your program and be handled automatically. You don't need to be constantly unwrapping Result/Either types, or propagating them, or handling them. You can just forget about it and still have useful diagnostics if something goes wrong. With error types, you either do the work by hand or you lose all insight into errors that doesn't make it into logs.

By all means, try and convince people that giving up all these features lets them "discretize and define the codomain of your function" and that this somehow substitutes for the pain of only using error types. It'll just make them slower than I am.


Right about Objective-C, wrong about checked exceptions.

Checked exceptions were introduced by CLU, adopted by Modula-3 and C++, two big influences in Java design.


Let's see how C# compares to the author's wishlist.

"obvious, non-negotiable things any new OO language should have":

> No nulls

C# has null. It looks like since C# 8.0 there is a way to require that nullable types be annotated. But it isn't as elegant as a true Option type, and it looks like there may still be some gotchas.
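For reference, here's roughly what the C# 8 feature looks like once nullable annotations are switched on; note that the violations below are warnings, not errors, and the `!` operator can silence them entirely, which is part of the gotchas:

    #nullable enable

    string name = null;                  // warning: assigning null to a non-nullable reference
    string? maybeName = null;            // fine: declared as nullable

    int safe  = maybeName?.Length ?? 0;  // no warning: the null case is handled
    int risky = maybeName!.Length;       // '!' silences the warning, but this still throws at runtime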

> No unsafe cast

It isn't entirely clear what is meant by "unsafe", but casts in C# can throw an exception if the type doesn't match at runtime, which I suspect fails to meet the criteria here.

> Optional named arguments

C# has this

> Generics

C# has generics, and while I don't have a whole lot of experience with it, it seems pretty good.

> Immutability by default

Fields and variables in C# are mutable by default (but can be marked as immutable with `readonly`).
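A quick sketch of the default and the opt-in keywords (readonly fields, init-only properties, and records):

    class Account
    {
        public readonly string Id = "a-1";        // readonly field: cannot be reassigned after construction
        public string Owner { get; init; } = "";  // init-only property (C# 9): set only during object initialization
        public int Balance;                       // plain field: mutable by default
    }

    // Records (C# 9) get closer to immutable-by-default, with value equality thrown in:
    record Point(int X, int Y);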

"less obvious choices":

> Class based discoverability

I think C# has this

> Multiple inheritance

C# does not have multiple inheritance. However default interface methods give some of the benefits.
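For example, with C# 8 default interface members (a sketch):

    interface ILogger
    {
        void Log(string message);

        // Default implementation: implementers get this without needing a base class,
        // which recovers some of what mixins / multiple inheritance are used for.
        void LogError(string message) => Log("ERROR: " + message);
    }

    class ConsoleLogger : ILogger
    {
        public void Log(string message) => System.Console.WriteLine(message);
        // LogError is picked up from the interface.
    }
One wrinkle: the default member is only reachable through the interface type (ILogger l = new ConsoleLogger(); l.LogError("boom");), not through a ConsoleLogger-typed variable.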

> Minimal syntax

I'm not entirely sure what is meant by this, but from the comparison with Scala, I think C# satisfies this.

> Higher kinded types

AFAICT, C# does not currently have higher kinded types.

> No Exceptions

C# definitely has exceptions

> Unified classes and Typeclasses

C# doesn't have an equivalent of typeclasses. In part because of the absence of higher kinded types (see above).

> Pattern matching without destructuring

C# switch statements do this.
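For example, with switch expressions and type patterns (C# 8/9, a sketch):

    static string Describe(object shape) => shape switch
    {
        int n when n < 0 => "a negative number",
        int n            => $"the number {n}",
        string s         => $"a string of length {s.Length}",
        null             => "nothing at all",
        _                => "something else",
    };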

So, C# has some of what the author is looking for, but is also missing a lot. Yes, C# did take some inspiration from functional languages. Yes, it did learn from some of the mistakes of earlier OO languages (notably Java). But C# is itself over a decade old and constrained by backwards compatibility. If it had been created today, it would probably be substantially different.

Disclaimer: I haven't used C# very much, so this is mostly from what I remember when I learned C# years ago, and what I was able to find with simple searches online.


The biggest problem for me is the age old „null“.

C# with monads would be great.


Something which I've used in the past (edited because I spotted some mistakes while re-reading the code [grin]):

    using System; // needed for Func<,>

    public readonly struct Option<T>
    {
        public static readonly Option<T> NONE = new Option<T>();

        //

        public T Value { get; }
        public bool HasValue { get; }

        public Option(T value)
        {
            Value = value;
            HasValue = value is { }; // "is { }" matches any non-null value
        }

        public Option<TR> Select<TR>(Func<T, TR> selector) => HasValue ? new Option<TR>(selector(Value)) : Option<TR>.NONE;
        public Option<TR> SelectMany<TR>(Func<T, Option<TR>> selector) => HasValue ? selector(Value) : Option<TR>.NONE;

        public T OrElse(T defValue = default) => HasValue ? Value : defValue;
    }
You can of course add more utility functions besides these, eventually as extension methods, but it's a starting point.
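A usage sketch, assuming the struct above:

    var greeting = new Option<string>("hello");
    var nothing  = Option<string>.NONE;

    int len      = greeting.Select(s => s.Length).OrElse();   // 5
    int fallback = nothing.Select(s => s.Length).OrElse(-1);  // -1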


You can also use a library like LanguageExt. In practice I'm torn between how neat and logical it looks, and wasting 30 minutes figuring out that I used the wrong match() in a mixed sync/async scenario after a method signature changed, which is why my database context was being disposed of randomly in the middle of operations.


You can set references to default to non-nullable now with compiler flags or some such thing.


<Nullable>enable</Nullable> in your .csproj


Highly recommend Functional Programming in C#: https://www.manning.com/books/functional-programming-in-c-sh... if you want to learn how to do some functional programming in C#.

If you just need some ready-made monads, language-ext is the way to go: https://github.com/louthy/language-ext

We tried to bring some functional ideas into our Unity3D codebase with help of these resources, it's hard but doable.


Lack of null and monads are kinda orthogonal. Yes, you can use monads to model "early return", but you can use them for so many other things. F# already offers computation expressions, which you can use for monads (but with no typing). But even Haskell provides no enforcement of the monad laws. Surely you must know about F#, so your comment makes little sense to me. If you want HKT's in F#, why not just say so?


I love how rust does interfaces (traits) and allows implementing outside the type. It feels much more natural to create a subsystem on the side without tangling it into other subsystems. E.g. to create a separate rendering subsystem for a game entity you simply

    impl Drawable for MyEntity {
        fn draw(&self, canvas: &mut Canvas) { ... }
    }
Which feels quite natural compared to anemic-entity ECS or naive Composition-over-inheritance like so:

    class MyEntity {
      EntityDrawingComponent drawer;
      EntityMovingComponent mover;
      ...
and of course, over the standard Java/C# OO way

    class MyEntity : Drawable, Movable, ...
      void Move(Vec..)
      void Draw(Canvas..)
With this type of declaration, I'm forced to mix the code for my different subsystems making it not just likely but inevitable that someone eventually ties these subsystems together.

In general, if I were to design a "better" OO language now, I'd make sure to make it almost not OO at all. I'd allow subclass polymorphism, but no virtual methods or classes. Only interfaces/traits and abstract classes. I'd very clearly separate data types from identity types at the language level (which aren't just a difference between stack and heap as with C# structs and classes).

I'd want the bare minimum of functional niceness: enumerations and pattern matching which check for exhaustiveness. Having that in a language easily lets the developer use different implementations for two completely different cases: open and closed polymorphism sets. OO is great when you don't know what variants might exist, but it's useless when you DO want to control it. If I as a developer know that the set of payment options is Credit/Cash/Invoice, then I want it exhaustively checked. I deliberately do not want to leave that open. I want to be able to switch on the closed set of 3 cases, and be warned if I fail to check a case. I want to be able to do this without inverting the logic and calling into the unknown like paymentMethod.HandlePayment(order).
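For what it's worth, C# gets part of the way there today: a switch expression over an enum produces a compiler warning (not an error) when a named case isn't handled. A rough sketch of the payment example:

    enum PaymentMethod { Credit, Cash, Invoice }

    static string Handle(PaymentMethod method) => method switch
    {
        PaymentMethod.Credit  => "charge the card",
        PaymentMethod.Cash    => "open the till",
        // Dropping the Invoice arm makes the compiler warn that the switch is not exhaustive.
        PaymentMethod.Invoice => "send an invoice",
    };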


> stack and heap as with C# structs and classes

Nitpick: C# struct can be on the heap (when it's "boxed").

The fundamental difference between C# class and struct is how they treat assignment operation. The class has "reference semantics" (assignment creates a new reference to the same object) and struct has "value semantics" (assignment creates a new object).

C# uses terms "reference types" (of which "class" is one) and "value types" (of which "struct" is one) to make the difference clear.

Structs being on the stack (most of the time) is just a consequence of their value semantics, and should be treated mostly as an implementation detail.
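A small, self-contained example of the assignment difference:

    struct PointV { public int X; }   // value type
    class  PointR { public int X; }   // reference type

    class Demo
    {
        static void Main()
        {
            var v1 = new PointV { X = 1 };
            var v2 = v1;    // copies the whole value
            v2.X = 99;      // v1.X is still 1

            var r1 = new PointR { X = 1 };
            var r2 = r1;    // copies the reference; same object
            r2.X = 99;      // r1.X is now 99

            System.Console.WriteLine($"{v1.X} {r1.X}");  // prints "1 99"
        }
    }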


I’m aware of the differences between structs and classes, boxing etc. (.NET dev since ‘02). My point is “class” can sometimes be a faceless “value” with no identity, and sometimes have an identity. The “records” feature is a way of addressing this difference, but like everything else it’s tacked on a bit late. A value class, like a value type (struct, enum), should/could be immutable, have structural equality etc.

I’d like my language to be extremely explicit about whether an object has identity or not. It ties into things like ownership, move vs. copy, disposal etc too.


I think you misunderstand the ECS pattern. You want:

    class DrawSystem{
        Draw(DrawComponent)
    }
With some mapping of DrawComponent to MyEntity. That mapping would most likely NOT be in the MyEntity class definition.

Or you can put the DrawComponents on the Draw system and simply track mapping back to the MyEntity instance they correspond to. New code lives in the systems.

In a game you'd have some kind of scene system that could track what entity has what components and then you don't even need a MyEntity class at all. You wouldn't track that composition in a class.

The sugar that Rust is bringing is that it feels like the implementation is being added to the instance instead of an external system outside it, but you can make that work in C# with some extension methods. It would be nicer if/when C# gets extension interfaces.
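Something like this, say (all names are made up): the drawing code lives in the rendering subsystem rather than inside MyEntity, though unlike Rust traits, extension methods aren't virtual and can't be grouped and abstracted over.

    public class MyEntity
    {
        public float X, Y;
    }

    public class Canvas
    {
        public void DrawSprite(string sprite, float x, float y) { /* render here */ }
    }

    // Lives next to the draw system, not inside MyEntity:
    public static class DrawingExtensions
    {
        public static void Draw(this MyEntity entity, Canvas canvas)
            => canvas.DrawSprite("entity-sprite", entity.X, entity.Y);
    }
Usage is then just entity.Draw(canvas) on any MyEntity in scope of the extension class.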


That's exactly how I understand an EC system, but my issue with it is the overly soft coupling with just IDs.


I guess you want to know that every MyEntity can be drawn and can move so you don't have to try-get the component.

The disconnect I think is that part of the ECS design is that you have the freedom to break that guarantee. Entities are no longer types but collections of components. The soft coupling is the goal.


Yes exactly, I think it’s an elegant system for some contexts but I wouldn’t want every system to be forced to use it, so I wouldn’t want that ECS design to be a fundamental part of the language. Soft coupling is basically just giving up typing at some interface and relying on soft information. Which can be fine, but not always.


I wonder if you could implement this all within a linter on top of Rust or some other language? Some of your asks already exist at that level (for example, TypeScript autocomplete in VS Code knows which enum cases you haven't checked in a switch, and it also knows when you've narrowed a type union by checking a property), so perhaps you could extend this to enforce coverage of switch statements?


The Rust compiler already enforces that you match all variants of an enum. And Rust Enum is basically a C union with an integer tag.


I've only briefly looked at Rust. How would you handle your Drawable if it needs to store some extra data?

Few of the things I use inheritance or composition for can be implemented without storing some extra data.

As an example, say a buffered Stream, that takes a Stream instance and adds buffered access. This BufferedStream would need to store the buffer data somewhere.


Drawable is just a trait, a definition of methods that can be abstracted over. If you wanted to store data, you'd do it in the type that Drawable is actually being implemented for, i.e. MyEntity.

    struct MyEntity {
        cache: RenderingCache,
    }
    impl Drawable for MyEntity {
        fn draw(&self, canvas: &mut Canvas) {
            self.cache.draw_cached(canvas, |cache_canvas| {
                cache_canvas.render(ASSET);
            });
        }
    }
    const ASSET: Asset = load_asset("foo.png");


Thanks, that is very clear.

Think Rust differs enough that it's a bit hard for me to draw direct comparisons with my primary languages, but with the emphasis on interfaces like that it does seem like a nicer way to implement extension methods.


If you are used to Java: it's like making a class implement an additional interface from the outside.

    class MyEntity {
      RenderingCache cache;
    }

    now MyEntity implements Drawable {
      void draw(Canvas canvas) {
        this.cache.draw_cached(canvas ...);
      }
    }


Here is some nuance with Rust's OOP relative to java/go/c++: https://stevedonovan.github.io/rust-gentle-intro/object-orie...

If you know haskell, Rust's traits are very similar to type classes, except it also has c++-like generics (templates) and is primarily expression-based like ocaml.

traits vs haskell type classes:

trait -> class; struct -> data; instance -> impl


Rust has BufWriter/BufReader and I'm not an expert but I think you'd just view it as extension through composition


That does indeed seem[1] to use regular, naive (as per OP) composition.

I was just curious what OP was using these traits for, as in my non-Rust experience I seldom have the need to compose stuff without storing additional data.

[1]: https://doc.rust-lang.org/src/std/io/buffered/bufwriter.rs.h...


> I love how rust does interfaces (traits) and allows implementing outside the type.

Then it might be interesting for you that Rust copied this (like a few other things) from Haskell, where they are (somewhat confusingly) called type classes.


> i'd make sure to make it almost not OO at all.

I think you mean "class oriented"


Never heard the term, but that could be what it is.


"First, a few obvious, non-negotiable things any new OO language should have: ..."

Well, that's like your opinion, man ...


That's the problem with language design: too much opinion, too little decision making based on actual data.

The author says there should be no Exceptions and there should be multiple inheritance... I mean, these seem like pretty extreme positions to have when talking about an OO language... if I remember correctly, multiple inheritance, in particular, was one of the features the languages from the late 90's had actually stayed clear of as it had been shown by the previous generation of languages that it was a mostly bad idea. I think even inheritance as done in Java is not the best of ideas, and there seems to be evidence extension methods or, similarly, type classes, are better suited to add behaviour to data structures.

IMO languages like Kotlin and Swift are much more like what modern OOP languages should be than what the author proposes.


Bah, MI was given a bad reputation by Java fans, but their arguments weren't very persuasive. The diamond issue? As shown in the article, it isn't really an issue; Eiffel actually uses this solution.


The author is explicitly using Kotlin as one of their inspirations, and IMO has a pretty reasonable approach to multiple inheritance (and I suspect they would agree that extension methods are preferred when applicable). What problems does MI cause that aren't fixed by OP's solution to diamond inheritance?


Kotlin extension methods can't be grouped and abstracted over like traits can. And moreover they are not virtual methods. Virtual methods are useful for all sorts of reasons.

In Kotlin extension methods are best used as a nice way to add utility methods to code you don't directly control, and as part of the 'extension lambda' system that lets you make DSLs. The way Kotlin defines language integration features using methods with magic names makes it easy to add a thin layer on top of a Java API that makes it much nicer to use, without needing to fork or patch the actual library itself.


If you want your language to be called “Object Oriented”, it needs to support implementation inheritance. (Encapsulation and interfaces predate OO).

My personal take is that implementation inheritance is a bad idea, and, by extension, Object Oriented programming is a dead end.


Lol, can't ignore the Lebowski reference :) good one! I actually think it is a very subjective list :)


Personally I would drop the static type system and reinvent Self.


After working in Python and Ruby I don't think I could ever choose a dynamically typed language for a project with more than a few thousand lines. There's just a lot of safety, and documentation, you get cheaply when using static types.


Ruby has conventions and a big testing culture which helps mitigate a lot of those issues. I also work with Typescript and I see whole classes of bugs that could be avoided if everyone just stuck with conventions and best practices.

If you like Ruby and wish it had types, check out Crystal. It's basically Ruby with Go's concurrency model, and it's as fast as Go.

{Edit: repetition}


There’s languages like Python and Ruby and then there’s languages like Smalltalk, Common Lisp, Self or Clojure: if dev-time is runtime, you miss compile-time checks a lot less


Cue the "Dynamic Languages Fail At N Lines of Code" bikeshedding as if Facebook, AirBnB and Shopify never happened.


...and steal the Soups from NewtonScript on the way.


What are the soups?

Both Google and DuckDuckGo give me no results so an example would be appreciated.

(Hope I'm not caught up in a prototype joke :)



My impression of systems like soups/OpenDoc/component systems/etc is that they're doomed to failure because they don't respect Conway's law. The app model works because each app has its own development team and teams aren't directly editing other teams' data.

They don't have the proper expertise and will break something, because it's not possible to enforce data consistency well enough. (Apple's modern OO database is called CoreData, has a pedigree from WebObjects, and its consistency features are pretty hard to use in practice.)


Soups and OpenDoc have no actual relation. Soups is an object file system and OpenDoc was a component system. CoreData is a poor successor to NeXT's EOF. I so miss EOF.

The only connection to editing someone else's data was the ability to add fields to soups for things like Contacts which allowed an easy expansion. Given how many programs on UNIX can alter each other's data, I don't agree with Conway in the context of the file system or soup.


"Data in Newton is stored in object-oriented databases known as soups."

https://en.wikipedia.org/wiki/Apple_Newton#Data_storage


time to dust it off and sell it as new ?


Yeah - I think the kids are ready for that.


rebrand it `selfie`


I'm fine with implementing another static Object-Oriented language, but please learn from some of the best non-static Object Oriented languages out there like Ruby and to a lesser extent Erlang/Elixir.


So basically be like Crystal [1]?

[1] https://crystal-lang.org/


Erlang/Elixir are two of the least OO languages I can think of. Point still taken though. Elixir is an absolute joy, and the more new languages take examples from it, the better.


Actually, Erlang and Elixir are among the most OO languages out there. The original definition of OO, by Alan Kay, was to have a lot of miniature computers, with state and computation ability, called objects, which communicate by sending messages to each other. Yes, calling a method in Smalltalk or Self is called a "message send". Now, take a look at Erlang's processes - doesn't it all sound very similar?

Edited to add: Kay also frequently says he's a proponent of object oriented, not class oriented programming. Classes are just a means of organizing the code, objects and their relationships is what matters. In that way, modules in Elixir are not inferior to classes in single-inheritance OO languages: they offer the same level of sharing and composing code (via macros, defoverridable, and defdelegate). Add to it protocols, which can extend arbitrary types (modules) and you get quite nice OO toolbox to use in Elixir.


My understanding is that Alan Kay cannot reasonably claim to have "the original definition of OO", only "a very good early definition". Additionally, it seems like he's either been a bit inconsistent or unclear about the essential ideas at different times.

One source: https://hillelwayne.com/post/alan-kay/


Alan Kay provided the first definition of OOP. That is not the same as using the word “object” for the first time.

As that article clearly shows, at least once you’re past its clickbait title, Kay acknowledges his influences. That isn’t a lack of clarity, it’s an awareness that we all build on the effort of our peers and predecessors, that ideas do not spring forth fully formed from the void.

This does not discredit anyone’s work, far from it; context is an aid to understanding.


I could probably have been more specific.

What I mean by lack of clarity is that Kay in 1998 thinks that messaging is the key idea, and suggests that was always the key idea. But none of the historical sources I've seen suggest he was clearly and consistently saying that in the 70s.

That's really a minor point, it does nothing to invalidate Kay's contributions. You can design a system that has value, and only later find the best way to talk about its value. That happens constantly: people create something and then end up saying "in hindsight, I got X right and Y wrong" or "back then, I knew this worked, but it was only later that I knew why".

It doesn't disprove the idea that focusing on messaging gives you the best version of OOP. It doesn't even disprove that "real" OOP is about messaging (though I think that argument has a high bar to clear--most terms are messy).

What it does disprove is that messaging has a right to be the core of OOP because Alan Kay defined it that way back in the 70s.

P.S. Maybe the title isn't ideal, but Hillel was literally reacting to someone who said Alan Kay invented objects: https://lobste.rs/s/8yohqt/alan_kay_oo_programming#c_5xo7on. Actually, that might be a better source than what I linked to, because it marshals more evidence that Kay was not consistently saying "messaging is what matters" during the 70s. He may have believed it, he may have said it sometimes, but he wasn't consistent about it.

Again, that's not a criticism of Kay, except the super mild one that his 90s/2000s memory of what he said 20 years earlier wasn't perfect. But I forget shit I said last week, so I'm not gonna judge.


> disprove ... that messaging has a right

Well, I think "messaging" is an abstract concept. It doesn't have rights, so there was nothing to prove.

Nevertheless, credit matters, because people matter, and do have rights. One of those rights is to have an opinion, including the opinion that messaging is the core of OOP.

> clearly and consistently saying that in the 70s

This is indeed a preposterous and unhealthy standard to expect of anyone, on any topic. It's the kind of thing that two-bit politicians like to dig up on each other, to nobody's edification.


> "messaging" is an abstract concept. It doesn't have rights

We agree. No idea how I screwed up that sentence. The intent was "that messaging is the right core of OOP because", but it was awkward, and I spliced something together.

> It's the kind of thing that two-bit politicians like to dig up on each other, to nobody's edification.

This is a discussion of what a word means, not an attack on someone's character.


Good read, thanks. I knew about Simula, but never analysed it deep enough to know how much of an effect it had on early Smalltalk.

Still, the definition Kay gave as late as in 1998 resonates with me the most:

> OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.

And this is almost exactly what Erlang implements: just add "asynchronous" before "messaging" and it fits perfectly.


Alan has commented a number of times on a number of threads on HN on this....

you can find his various comments here: https://news.ycombinator.com/threads?id=alankay


I've seen his account. I'm not sure I've read every comment, however. Is there one in particular that you think says something not covered in the two pieces I've linked in this thread?


Simula(67) definitely was the original concept/implementation of OO. Smalltalk, C++, Java, etc all derived different aspects of it.

- Main abstraction: classes (of objects)

- Concept of self-initializing data/procedure objects

- Internal ("concrete") view of an object vs an external ("abstract") one

- Differentiated between object instances and the class

- Class/subclass facility made it possible to define generalized object classes, which could be specialized by defining subclasses containing additional declared properties

- Different subclasses could contain different virtual procedure declarations

- Domain specific language dialects


Note Java is actually explicitly based on Objective-C, not on Simula or C++. They just removed most of the Smalltalk bits to make it faster and less scary-looking to C++ programmers.


Hey smt1, looks like your account is dead and has been for a while.


it's clearly not dead.


Actually, it might be. The comment was dead, I vouched for it, so it appeared.


"Object oriented as Alan Kay intended" has been a strapline of Erlang proponents for some time. I think even Joe Armstrong said it once. And I seem to recall Kay speaking approvingly of the quip, and of Erlang itself.


On the contrary, at least if we talk about Alan Kay's vision then Erlang/Elixir are as close as we have today. Hint: the processes are objects.


> at least if we talk about Alan Kay's vision

Big if there. Most people do not mean Kay's OO when they say OO.


No, most people think of some ill-defined idea shaped largely by the concrete design of early versions of, and practice in, C++ and Java, reflecting a set of compromises between early OO visions (including, but not limited to, Kay’s), C’s particular style of statically-typed procedural programming, and various practices that emerged to deal with the warts of those compromises.


Which is why they should be educated, given a chance. No?


I agree w/ your sentiments here, but it is worth keeping in mind that Simula predates Smalltalk and could be viewed as both the first OO language and the progenitor of what became mainstream OOP.


You're right. I'm constantly surprised how much of the history of programming I'm still missing, despite trying to learn as much as I can about it. The feature set of Algol blew my mind when I first learned about it, for example - and now I see I need to take a closer look at Simula, which was developed as a superset/extension of Algol. Learning about that is fascinating and really puts the "language wars" of today into perspective.


BTW I saw you mention class oriented vs object oriented elsewhere and I do think that captures the difference pretty well, thus why I agreed with your sentiments. The actor model basically is object oriented programming in that context, whereas what most people mean winds up being class oriented.

There's (at least) a third model IMO from CLOS & friends, such as Dylan, S4, and I'd argue Haskell Typeclasses. Not sure what I'd call it but it's the idea where you define data classes, generic functions, and then provide linkages between those data classes and the generic functions to give you methods on the state


Yes, generic functions and multimethods are one solution to the expression problem, and they're pretty different from Smalltalk-like OOP. It's a shame they aren't more widely adopted - I think only Julia, Nim, and Clojure have them as a feature.

Actually, if not for some historical accidents, we could have Dylan in place of Java today... I sometimes dream about such a world: functional core, CLOS-like generic functions and multimethods, sane multiple inheritance, syntax-case-like macros, native AOT compiler, sane module system... OpenDylan - the implementation I played with - is slow as molasses, but with even a tiny fraction of the work put into Java it could be made fast enough to rival C++. It's one of the most striking wasted chances in the history of programming.


I read "eradicated." But sure, let's educate them first.


Kay was a visionary, but not every vision of his was the final word on reality. It's possible that the "mainstream" version of OO is better than Kay's.


The mainstream version of OO is actually quite bad though. You can think of it as just being functions where the first parameter is to the left of the function name. It has a few more features than that of course, but all of these increase complexity and therefore bugs, and it doesn't have any features that reduce bugs.

An example would be non-fragile ABIs (ObjC has this, C++ doesn't) or typestate/built-in state machines (so you can't call methods if the object is in an invalid state for them). There's encapsulation at least, it's a start.


It's pretty clearly not bad, given how many successful languages use it. Java, C#, C++, Kotlin. Windows, macOS, KDE, GNOME. They're all mainstream OO and for good reasons.

Fragile ABI is basically a C++ problem. Java has had a stable OO ABI for 25 years now. The .NET ABI has gone through a few iterations, but is stable now. Of course Microsoft also had COM, which is/was a stable OO ABI accessible to many languages including C++ and was arguably just a subset of the C++ ABI without templates.

In the AOT-compiled world, Swift now has a stable OO ABI (sort of). Rust is interested in getting one. Lack of a stable ABI is not an OO problem but more a problem of languages that are designed to compile ahead of time to native code and use native formats as their primary artifact format.

Most features OO has reduce complexity and bugs, that's why people use them. There's no comparison between a clean OO API and the de-facto standard state before that, e.g. POSIX, Win32, Carbon.


> It's pretty clearly not bad, given how many successful languages use it. Java, C#, C++, Kotlin. Windows, macOS, KDE, GNOME. They're all mainstream OO and for good reasons.

C++ is more of a multi-paradigm language where the designers (e.g. Alexandrescu) actually publish books telling you to focus on algorithms and generic programming rather than object trees. KDE/Qt reinvent messaging on top of the default OO, don’t they? There’s something about slots.

> There's no comparison between a clean OO API and the de-facto standard state before that, e.g. POSIX, Win32, Carbon.

Don’t know about Windows but POSIX and Carbon were as OO as they needed to be - a function where the first parameter is a “file” or “context” is being object oriented. There were actually several attempts to replace Carbon with more OO APIs, that used heavy implementation inheritance, like Taligent and MacApp and they failed because they were too complicated. Cocoa succeeded by using messaging and composition instead like Alan Kay wanted.


Slots isn't a messaging system in the way Objective-C or Smalltalk define it. Slots are what GTK calls signals, if I recall correctly, and are more often called event handlers. They're ways to register a set of callbacks on a single emission point.

> Don’t know about Windows but POSIX and Carbon were as OO as they needed to be - a function where the first parameter is a “file” or “context” is being object oriented.

Honestly I don't know of anyone, including the people who defined those APIs, who would call them object oriented. POSIX functions aren't even namespaced and routinely don't even take a struct as the first parameter. Implicit state is everywhere in POSIX, Win32 and I think also Carbon.

> Cocoa succeeded by using messaging and composition instead like Alan Kay wanted.

Cocoa "succeeded" by being the official API of the first non-failed attempt at resurrecting macOS. On Windows, the API that "succeeded" was .NET which is a classical C++ type of OO. In practice, Objective-C caused a lot of problems for Apple. They would have junked it years earlier but Steve Jobs was wedded to it and couldn't understand why anyone wanted anything else. The moment Jobs died Swift was started as a project, and Swift does not have objc_msgSend at its centre, at least not if I understood the Swift ABI correctly.

The "messaging" concept caused problems in the following ways:

* Performance, despite objc message sending being heavily optimised.

* Bizarre semantics that led directly to horrible bugs, like sending a message to null being "valid" but returning junk if the return value was meant to be a struct.

* Difficult to optimise due to the dynamic language features, which are nonetheless hardly used (e.g. ability to redefine methods on the fly).

The latter problem is shared with languages like Ruby.


> Of course Microsoft also had COM, which is/was a stable OO ABI accessible to many languages including C++ and was arguably just a subset of the C++ ABI without templates.

Why the past tense, when COM is the main way Windows APIs have been introduced since Windows Vista, as COM took over Longhorn OO ABI ideas?

Nowadays we just call it WinRT.


Right, Microsoft don't talk about COM anymore.

I haven't kept up much with Windows programming so lost track at some point of what their current-gen tech is and how it works. WinRT isn't just COM, is it? It's custom languages, new app metadata formats and stuff. I don't see IUnknown in any modern Windows code samples.


It's possible that weatherlight is a teapot and inopinatus is a coconut!


You can make a teapot from a coconut.


It doesn't really matter if its better or worse. As a term OO means what people understand it to mean. Human languages (unlike programming ones) are descriptive, not prescriptive.


> As a term OO means what people understand it to mean.

I would suggest that if you asked people to define the term, Kay’s definition would likely be the single most common (even after normalizing variations in wording).

It would still only be a plurality, sure, but the remaining majority doesn't share a common understanding, either.


Yes, though I believe knowing about him and his vision can be helpful in understanding the evolution of OOP and its current mainstream incarnations. Basically, "those who don't know history are doomed to repeat it", "history repeats itself as a farce", and all that... :)


It’s possible that I’m a teapot.


It’s possible that I’m a coconut.


Erlang is more OO than any other so-called OO language out there.


Also, C++ has solved many of the issues mentioned in the article (especially post C++11).

It has not solved the “minimum set of primitives that make OO tenable” problem.


A Turing Machine is the minimum (probably) set of primitives that make computation possible. That doesn't mean we want to program that way. I'm not sure we want to do OO programming in the minimum set of primitives that make OO possible, either.


Minimizing the number of OO primitives the language provides is one of the goals of the article. It’s not a goal of C++.

I think there’s room for kitchen sink languages and also opinionated, minimalist languages.


Which problems from the article are you referring to?

C++ is likely the worst OO language in common usage and C++ developers rightfully avoid using its native OO functionality as much as possible. They will literally jump through hoops and write tons of additional boilerplate code to avoid exposing the language's OO functionality (doing what C++ developers call type-erasure).

C++11 didn't improve OO in C++, it gave programmers many features to avoid having to use it altogether.


I have literally never seen someone program C++ that way. I mean, it's probably true that some people somewhere are doing that. But saying "C++ developers rightfully avoid using its OO functionality as much as possible" seems like a complete distortion of the actual situation.


Those some people would be the authors of the standard library who use type erasure for std::function, std::string_view, std::span, std::any, std::ranges, std::fmt, the new class of pmr memory allocators use type erasure, the upcoming coroutines are built on type erasure.

Common C++ libraries like boost have an entire library dedicated to type erasure:

https://www.boost.org/doc/libs/1_75_0/doc/html/boost_typeera...

Facebook's Folly library also provides type erasure functionality:

https://github.com/facebook/folly/blob/master/folly/docs/Pol...

Google's Abseil is full of type erasure:

https://abseil.io/

Here is Adobe's C++ library for type erasure:

https://stlab.adobe.com/group__poly__related.html

And here's a talk by Sean Parent about how Adobe uses type erasure to emulate runtime polymorphism without using C++'s native OO features:

https://www.youtube.com/watch?v=QGcVXgEVMJg

I suppose these are just some people somewhere...


And all of them use some kind of OOP ideas, while trying to sell themselves as not being OOP.

classes, polymorphism, some level of inheritance, delegation, composition, builders, method dispatch, ....


None of them are selling themselves as not being OOP.

What they are doing is implementing an ad-hoc version of OOP that forgoes using C++'s native OO feature-set. Native C++ OO clashes with modern C++, often creating a dialect that does not play well with resource management and common C++ idioms.

The type erasure libraries provide many of the traditional benefits of OO but allow you to retain value semantics (which native C++ OO does not).

My argument was never about OO as a general concept, it was strictly that C++'s native OO feature set is very poor and modern C++ allows developers to move away from that feature set.


How does native C++ OO not allow you to retain value semantics? Doesn't the Rule of 3 exactly give you value semantics? And it's quite standard to do that with what you're calling "native" C++ OO.


The rule of 3 doesn't apply to C++ native polymorphic types and your statement is so far out of left field I'm guessing you don't do much C++ development.

To humor you... if I have a base class of type Animal and a derived class of type Cat, then making a copy of a Cat through a pointer or reference to an Animal results in object slicing which is not what you expect when you make a copy of a polymorphic object. Basically you only end up copying the Animal instead of the entire Cat.

https://en.wikipedia.org/wiki/Object_slicing
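To make it concrete, here's a minimal sketch of the slicing (Animal and Cat are just the hypothetical types from above):

  #include <iostream>
  #include <vector>

  struct Animal {
      virtual ~Animal() = default;
      virtual const char* speak() const { return "<generic animal noise>"; }
  };

  struct Cat : Animal {
      const char* speak() const override { return "meow"; }
  };

  int main() {
      Cat cat;
      Animal& ref = cat;

      Animal copy = ref;                    // slicing: only the Animal subobject is copied
      std::cout << copy.speak() << '\n';    // generic noise, not "meow"

      std::vector<Animal> zoo;
      zoo.push_back(cat);                   // slices again: the vector stores plain Animals
      std::cout << zoo[0].speak() << '\n';  // generic noise again
  }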

One of the motivating reasons to use type erasure is precisely to solve this issue, so that one can copy an Animal or assign Animals and treat an Animal as a value and copies work as expected.

When one copies a std::function or any of the type erased classes provided by the standard library, it just works as expected, without needing to consider lifetime issues, object slicing, C++'s complex rules about inheritance vs. virtual inheritance, and a host of other problems that don't mix well with other parts of the language.

The cost of type erasure is that it requires a lot more work on the part of the author. The libraries I linked to help alleviate most of the boilerplate, but as the examples demonstrate, there's still a lot of work that needs to be done to get it right.
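For a rough idea of what the hand-rolled version looks like (a minimal sketch, not how any of the libraries above actually spell it; AnyAnimal, Concept and Model are made-up names): the wrapper owns a clonable implementation internally, so copying the wrapper deep-copies the right derived type and callers just see an ordinary value.

  #include <iostream>
  #include <memory>
  #include <string>
  #include <vector>

  // Type-erased "AnyAnimal" with value semantics. Anything with a speak()
  // member can be stored; no common base class is required.
  class AnyAnimal {
      struct Concept {                                 // the erased interface
          virtual ~Concept() = default;
          virtual std::string speak() const = 0;
          virtual std::unique_ptr<Concept> clone() const = 0;
      };
      template <class T>
      struct Model final : Concept {                   // wraps a concrete T by value
          T value;
          explicit Model(T v) : value(std::move(v)) {}
          std::string speak() const override { return value.speak(); }
          std::unique_ptr<Concept> clone() const override {
              return std::make_unique<Model>(*this);   // copies the whole T, no slicing
          }
      };
      std::unique_ptr<Concept> self_;

  public:
      template <class T>
      AnyAnimal(T animal) : self_(std::make_unique<Model<T>>(std::move(animal))) {}
      AnyAnimal(const AnyAnimal& other) : self_(other.self_->clone()) {}
      AnyAnimal& operator=(const AnyAnimal& other) { self_ = other.self_->clone(); return *this; }
      AnyAnimal(AnyAnimal&&) noexcept = default;
      AnyAnimal& operator=(AnyAnimal&&) noexcept = default;

      std::string speak() const { return self_->speak(); }
  };

  struct Cat { std::string speak() const { return "meow"; } };
  struct Dog { std::string speak() const { return "woof"; } };

  int main() {
      std::vector<AnyAnimal> zoo{Cat{}, Dog{}};
      auto copy = zoo;                                 // copies behave like ordinary values
      for (const auto& a : copy) std::cout << a.speak() << '\n';
  }

Compare that with the slicing example above: here copying a container of animals copies the full derived objects.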


> The rule of 3 doesn't apply to C++ native polymorphic types and your statement is so far out of left field I'm guessing you don't do much C++ development.

First: I've done professional C++ development since 1996. Full-time, with just a smidgen of Java in a couple of places. So your guess is completely mistaken, as is your opinion of where I'm coming from.

Second: The post I replied to never mentioned "polymorphic", so I interpreted "value" as your claiming that you couldn't treat a pre-C++11 class as a value type, which is clearly a bogus claim.

I do mostly embedded systems. I do use polymorphism, but most of my objects cannot be cloned (private, unimplemented copy constructor and assignment operator), and the only collection they live in is a fixed-sized array.

If I had to do your Cat example, I'd probably have Animal's copy constructor call a protected pure virtual clone() method. I understand the slicing problem, and that a slice is not what you want.

I confess that I don't understand how type erasure helps with the copying problem, nor how std::function uses type erasure.


So when I mentioned the traditional benefits of OO, it didn't occur to you that polymorphism was one of them? Okay...

>private, unimplemented copy constructor and assignment operator

If you're using C++11 or better, then avoid doing that. In modern C++ you declare those as deleted.

>If I had to do your Cat example, I'd probably have Animal's copy constructor call a protected pure virtual clone() method.

This would do absolutely nothing and once again, despite you claiming to have 25 years of professional experience in C++, I must shake my head to even hear such a thing.

All I get from this conversation is that you have a very antiquated idea of how to use C++ and have not kept up to date with modern standards. You mention the rule of 3, which is a pre-C++11 idiom superseded by the rule of 5 with the advent of move constructors (or the rule of 0). Yet despite this you felt it appropriate to claim that, since you personally have never encountered any recent advancement in C++ in the past decade, it must be me who is distorting the situation.

It could not possibly be the case that the C++ of today is much different from the C++ back in 1996 when you started using it, nor could it possibly be that using C++ for embedded development represents only a tiny subset of how C++ is used, or that maybe you just aren't familiar with something because no one is an expert in everything and we have to prioritize what we spend our time learning. No... the conclusion you chose to go with is that I'm distorting reality because you happen to lack any modern knowledge about a technology that you claim to have 25 years of experience with.

Hate to break it to you... but it's not the 90s anymore.


Erlang is perhaps the best OO language out there that doesn't look like one, but its principles are the heart of OO.

PS: I admit I lost the author at "scala is my favourite language" and so am biased.


Scala can be one of the most productive, elegant, reasonably performant languages to work in, or it can be the opposite of all those. When done right it's amazing, but the language is complex enough to permit some questionable choices.


Life is too short to be forever tied to Java. This will be Scala’s downfall.

Elixir looks like the more promising approach. I love its pipe forwarding operator.


Life is too short to reimplement libraries and have them battle-tested. As a Scala dev I find this a very weak argument.


Except that BEAM/Erlang OTP (and by extension Elixir) is older than the JVM, and arguably as battle-tested, if not more so...


Do you know the difference between a library and a runtime?


Do you know that a lot of these libraries already exist as part of the OTP platform? And that Erlang has been around for 35 years, and because those libraries exist as part of the platform, all the languages that run on the runtime have them too?

The JVM runtime is a runtime. BEAM/OTP/ERTS is a full-fledged collection of middleware, libraries, and tools written into/as part of the Erlang programming language. You cannot separate one from the other.


Life is too short to reinvent the JVM and .NET runtimes, and IDEs, poorly, on FOSS budgets.


scala-native exists?



It seems to me that you could do almost everything you want (except for the immutable thing) in Free Pascal.

Pascal doesn't have the kingdom of nouns problem. It supports good old fashioned procedures and functions.

Pascal can deal with nulls in strings, because we always know how long they are.

Or perhaps you meant null pointers. Pascal avoids unnecessary pointers, but supports them in a sane manner when you need them.

Pascal can deal with multiple inheritance in a sane manner, by composition. Objects can have other objects as properties, and it just works... none of the restructure-everything horror stories I've heard about in C++, etc.

Free Pascal has generics; there are libraries that let you do dictionaries, lists, and trees of your type <T>.

I disagree about exceptions... proper handling of exceptions is a good thing. A function should always return the type you expect, not some weird value you have to test for.

As far as "pattern matching" I assumed you were talking about regex (which can be addressed in a library)... but you're talking about RTTI, or "reflection" where you can get type information at runtime, which is a thing in Pascal and many other languages.

I don't understand the obsession with immutable variables.

[Edit: I think I understand... in Pascal, parameters are passed by value as the default, but can be passed by reference; it depends on how you declare the function or procedure. It has nothing to do with variables that you can't assign new values to, if I'm correct]

Did I miss anything?


Have to disagree on exceptions. The return type is part of the function signature; it's part of its contract. If a function can fail I want to see it right there on the return type and be forced to handle it on the spot.

Exceptions are an invisible, out-of-band mechanism that will crash your program because you forgot yet another try...catch. Sum types with pattern matching are a much better approach for writing safe and robust code.

The “proper handling of exceptions” panacea I keep hearing about sounds a lot like “OOP done right”... it’s supposed to exist, yet no one seems to know what it actually is.


> Exceptions are an invisible, out-of-band mechanism that will crash your program because you forgot yet another try...catch.

Out-of-band is the point. It lets you write your code for the correct path, without having to explicitly string a bunch of error types along (or do something annoying like mandating all errors are of type Error, which is a string).

The problem is with invisibility, which is the case with most exception-using languages. Java had it solved right, though, with checked exceptions. Of course back in the 2000s we were a bunch of lazy children and said "checked exceptions are bad because they make us declare stuff up front", which was actually a good thing, but we couldn't understand that, so they became optional in Java.

(On the other hand, exception handling is just a poor shard of proper condition system, like in Common Lisp.)


If you squint hard enough, checked exceptions are sum types through other means:

  try
    foo()
  catch ExceptionX
    ...handle X...
  catch ExceptionY
    ...handle Y...
  catch ExceptionZ
    ...handle Z...
compare that to (borrowing some Rust-ish syntax):

  match foo() {
    ExceptionX => ...handle X...
    ExceptionY => ...handle Y...
    ExceptionZ => ...handle Z...
  }
with the added benefits that sum types can be used outside of error handling.


Yep, they are essentially sum types, but out-of-band, which is important.

In particular, you can have:

  def quux():
    xkcd = frob(foo(), zomg())
    ...lots of code...

  def bar():
    try
      quux()
    catch ExceptionX
      ...handle X...

  def baz():
    try
      bar()
    catch ExceptionY
      ...handle Y...
I assume pattern-matching can be partial without explicit default value, but (in the languages I've dealt with) it gets tricky with expressions. For instance, in C++:

  auto a = foo();
  auto b = bar(a, quux());
  return frob(xkcd(a), b);
if foo(), bar() and quux() can throw exceptions, and you wanted to replace them with sum types - say, std::variant<F, Err1, Err2, Err3>, or for better semantic separation (more on this below), tl::expected<F, std::variant<Err1, Err2, Err3>> - you're going to pray you have enough "monadic" methods in those classes (spoiler alert: you don't, as of C++17/current tl::expected) to rescue the spaghetti code you'll have to write to model this flow.

Recently, I've been dealing with a C++ codebase that uses tl::expected in lieu of exceptions, and while the "main flow" becomes more readable - return doThis(x, y).and_then(do_that).map(sth).map_error(logErrors); - the side effect is that the codebase is littered with trivial functions and closures to make that interface work. It's much less readable than a bunch of catch blocks every now and then. Maybe it's a limitation of C++? I feel like a syntax for partial application would remove half of the boilerplate I have to write when chaining tl::expected-returning functions.

On semantic separation, I alluded to using a "nested" sum type like (Result1 + Result2 + ...) + (Err1 + Err2 + ...) instead of Result1 + Result2 + ... + Err1 + Err2 + ... - while mathematically equivalent, they aren't syntactically equivalent in code. Grouping possible return value types separately from possible error types improves reasoning about code - the groups tend to change independently from function to function. And if you squint, this is what exceptions give you: they separate the error sum type from the return sum type, and give the former a completely different control path. This means you can think about them separately, and don't have to force-fit the success path into a shape that satisfies the needs of the error path.
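A minimal sketch of the difference (Result1/Result2 and Err1..Err3 are placeholder types, and Expected here is just a stand-in for tl::expected / std::expected):

  #include <variant>

  struct Result1 {}; struct Result2 {};
  struct Err1 {}; struct Err2 {}; struct Err3 {};

  // Stand-in for tl::expected<T, E>: either a success T or a failure E.
  template <class T, class E>
  using Expected = std::variant<T, E>;

  // Flat: callers match over successes and failures in one big pile.
  using Flat = std::variant<Result1, Result2, Err1, Err2, Err3>;

  // Grouped: success and error alternatives are separate sum types, so each
  // group can change independently, and "did this fail at all?" only needs
  // to inspect the outer level.
  using Grouped = Expected<std::variant<Result1, Result2>,
                           std::variant<Err1, Err2, Err3>>;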


Typed checked exceptions are a bad thing because they require libraries to redeclare things that are their implementation details, in particular the API of other libraries they use internally.

See Swift's error handling design: https://github.com/apple/swift/blob/main/docs/ErrorHandlingR...

(Also, exceptions/nonlocal returns are a pain because the invisible control flow is hard to reason about and hard to implement in a compiler.)


> they require libraries to redeclare things that are its implementation details

Exceptions that can fly out of a library are just as much a part of its public interface as the error types it uses for return values. Adding a checked exception is equivalent to turning your Foo return value into Result<Foo> - the code will no longer work if it doesn't handle it.

> in particular the API of other libraries they use internally.

The library can make a choice between handling the internal exceptions at the interface boundary, rewrapping them, or exposing them as a part of its public interface. That's no different than a library considering returning a type that's defined in another library it uses internally.

> (Also, exceptions/nonlocal returns are a pain because the invisible control flow is hard to reason about and hard to implement in a compiler.)

I'm not a compiler guy so I'm not going to dispute the issues with implementing exceptions, but on the readability front, I strongly disagree. There's nothing hard in exception-based control flow. The rule is simple: a given function either succeeds, returning what it declares, or fails - a failure causes the caller to enter the appropriate exception handling block if it has one, or exit immediately, propagating the exception up the stack.


I think the whole "exceptions vs. error types" debate is stupid, because both are designed for wildly different scenarios and as such are both non-optional.

An error type is for any deviation from the happy path that can be reasonably expected to occur during normal operation, and which my program must be able to handle. For a HTTP request, this would be something like "connection reset by peer" or "expected 200, got 404".

An exception is for any deviation from the happy path that a reasonable program cannot be expected to handle. For a HTTP request, this would be something like "malloc for send buffer failed".

Error types should be very visible in your function signature and their handling should be enforced or at least linted by the tooling, since failure to handle them very likely is a bug.

Exceptions, on the other hand, should be mostly invisible since most code cannot do anything when an exception occurs. In fact, if I had to design a language, I'd probably try to get by with reducing the try-catch-finally triad to try-finally or even just finally. Sometimes it's really useful to clean up after yourself before continuing to die, but if you can recover from an exception, you probably should have had an error value in the first place.


This difference is covered in Swift's design doc above, but its result is to technically not have exceptions (nonlocal returns), have fake exceptions called errors for "recoverable" errors (user can fix it), and just straight up crash for "universal" errors (vague things the user can't fix).


There have been many studies of exception handling correctness. They’ve shown that no one can correctly write exception handlers.

In my experience, proper exception handling is more difficult than advanced topics like lock free data structures, with the disadvantage that junior programmers have been taught to use exceptions and avoid lock free primitives.


> If a function can fail I want to see it right there on the return type and be forced to handle it on the spot.

Checked exceptions solve this problem. Checked exceptions vs. result sum types aren’t a black and white dichotomy, however. They are rather two points on a design spectrum, where one design axis is how you want to syntactically handle error escalation up the call chain, another axis is how you want to handle destructuring/restructuring of the results, etc.

Unchecked exceptions are still useful for notifying bugs (e.g. precondition violations) instead of aborting the OS process on failed assertions.


So how do you want to handle something like sqrt(-1.0) for real numbers? A runtime exception seems the sane way to do it for me. Are you suggesting that sqrt(-1.0) or any other invalid value should return a sentinel value like -999, or should return NUL instead of a real?

Any way of dealing with errors has to be out of band, we just disagree on the mechanism.


sqrt(-1.0) is NaN; sqrt is typically used in numerical code where you can't stop and check for exceptions because performance is key.

But just for the sake of argument, the type of 'sqrt' isn't:

    sqrt : R -> R
it is:

    sqrt : R+ -> R+
So you have two options here. You can have sqrt return "Either Error Double" and force the programmer to deal with the possibility of an error (Rust does this) or you define a "nonnegative real" type that the compiler can guarantee will be an acceptable input for sqrt. The latter is much harder to do.

But either way no, you only need an "out of band" mechanism when the type system isn't strong enough or isn't being properly leveraged.
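Roughly, the two options could look like this in C++ (just a sketch; checked_sqrt and NonNegative are made-up names, and std::optional stands in for an Either/Result type):

  #include <cmath>
  #include <iostream>
  #include <optional>

  // Option 1: encode the possibility of failure in the return type.
  std::optional<double> checked_sqrt(double x) {
      if (x < 0.0) return std::nullopt;       // the caller is forced to deal with this
      return std::sqrt(x);
  }

  // Option 2: encode the precondition in the argument type.
  class NonNegative {
  public:
      // The only way to obtain a NonNegative is through this checked factory,
      // so sqrt_nn below can never be handed a negative number.
      static std::optional<NonNegative> make(double x) {
          if (x < 0.0) return std::nullopt;
          return NonNegative{x};
      }
      double value() const { return value_; }
  private:
      explicit NonNegative(double x) : value_(x) {}
      double value_;
  };

  double sqrt_nn(NonNegative x) { return std::sqrt(x.value()); }

  int main() {
      if (auto r = checked_sqrt(-1.0)) std::cout << *r << '\n';
      else std::cout << "no real square root\n";

      if (auto x = NonNegative::make(2.0)) std::cout << sqrt_nn(*x) << '\n';
  }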


I used square root because it is commonly known that negative numbers are a problem for them. What about tangent(x) which periodically blows up? A type system can't fix that.

In my opinion it is far better to throw an exception that can be explicitly handled, rather than cause some math error at some random point later because NaN or some infinity snuck by as a number.


Math.tan will never throw exceptions because, for the exact same reason as above, it won't check its input.

But again for the sake of argument, the return type of tan() could be "Either<Error, double>".


I guess I would get used to that, but it seems broken to me. The way to deal with that is out of band, checking for the type to change to "Error".

In Pascal tan(pi/2) or tan(3pi/2) causes an overflow. The way to deal with that is out of band, to handle an exception of type E_Overflow.


Again no, it isn't. You are well entitled to believe Pascal is the best thing since sliced bread, but the two ideas are fundamentally different.

There is no type change. The function "tan" would return a value of type "Either<Error, double>". An expression such as:

   double x = Math.tan(Math.PI / 8)
would fail the compile-time type check, because the type of the variable "x" is "double" and the type of the expression "Math.tan(Math.PI / 8)" is "Either<Error, double>".

It would be the responsibility of the caller to prove to the type system that the error has been handled, with either a built-in language facility a la ML, or with ad-hoc code such as a Bohm-Berarducci construction.

Let's just agree to disagree.


I know Pascal isn't perfect, and lots has been learned since then... this is genuine confusion on my part. So tan in your system returns a variant type (could be Error, could be Double), right? (I didn't get that)

Compilers can find all sorts of things they couldn't in the past, which is amazing.

What happens at run-time if an overflow happens?

Again... I'm sorry if I didn't express myself clearly enough. I'm coming at this from a Pascal programmer's perspective... I've missed the past 20 years of compiler improvements.


Like you said, this is a variant type that represents either a value, or an error. Here's the definition of Rust's Result type:

  pub enum Result<T, E> {
      Ok(T),
      Err(E),
  }
I threw together some examples of how you can use a Result value, although I used arcsin instead of tan as the out-of-domain values are much easier to specify.

https://play.rust-lang.org/?version=stable&mode=debug&editio...

I'm not entirely sure what you're asking about regarding runtime behaviour, but here's two guesses.

As a user of a function that returns a Result type, at runtime the function returns a result that has a flag and a payload, where the flag specifies whether the payload is your value or an error. The only ways to get a value out of the result are either to check the flag and handle both possibilities, or to check it and crash the program if it's an error.

As someone writing a function that returns a result type, you write whatever logic you need to determine whether you've successfully produced a value, or which error you've produced, and then you either return(Ok(my_value)); or return(Err(my_error));. If you're doing floating point math, this might be checking an overflow flag, or looking at your inputs, or checking for NaN after operating, or whatever makes sense for your task.

Using a variant/sum type/discriminated union like this is orthogonal to the details of floating point operations at runtime. What it lets you do is have the compiler enforce that users of your function must check for errors, as it's a type error to assign a Result<f64,E> to a variable of type f64.

I hope that helps! Do you have any more questions about this, or things I didn't address, or topics where you'd like more explanation or examples?


In Rust in particular (and many other languages) tan operates on IEEE 754 double-precision floating points ("double" for Free Pascal, "f64" for Rust). These can contain infinity and NAN values, not just numbers. So if tan blows up, you get NAN back. It's your job as the programmer to check that the result wasn't NAN (or INF) before proceeding. "Double" is already such a variant type. So with a Result-returning function you check for the error variant, just like you already have to check your floating-point results.
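Concretely, that manual check looks something like this (a sketch):

  #include <cmath>
  #include <iostream>

  int main() {
      double y = std::sqrt(-1.0);              // NaN: -1.0 is outside sqrt's domain

      // The manual "check the flag" step described above:
      if (std::isnan(y) || std::isinf(y)) {
          std::cout << "no usable result, handle the error here\n";
      } else {
          std::cout << y << '\n';
      }
  }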


Rust chooses to panic, I believe, but you could always have applied the same Error strategy — it’d just be really annoying. The stdlib also provides a bunch of methods to pick-your-strategy for overflow behavior, but you need to use them instead of the +/* operation


Sqrt(-1.0) should return NaN or raise a signal. Let the programmer choose. Throwing an exception doesn’t help anyone.


I'm not the author, but:

> Or perhaps you meant null pointers.

A major problem in languages like Java is that objects can always be null. A better solution would be to force strict null checks a la Typescript, where if you want to take or return an optionally null reference, you have to explicitly encode that property in the type of the variable.

NullPointerExceptions are frankly a really stupid thing we have to deal with. A proper type system could check for that at compile time.

> but you're talking about RTTI, or "reflection"

OP is talking about ML-style type destructuring, which is kind of the opposite of this.

> I don't understand the obsession with immutable variables.

Mutating variables leads to all kinds of invisible problems and makes the code a lot harder to reason about. When I write Java, most of my variables are final. The author is right that we should take the opposite convention a la Rust: variables should be immutable by default, with mutability explicitly annotated.


>Mutating variables leads to all kinds of invisible problems and makes the code a lot harder to reason about.

Am I correct in interpreting that as when you pass a variable to a function by reference it causes problems? Pascal defaults to passing them by value.


Sure, but passing by value isn't always realistic. If you have a hashtable with a million entries, you can't copy it just to update a value.

There's also the problem of internal mutation of state, with hashtables/collections being again primary examples. It's not always avoidable to have internal mutation, but it shouldn't be the default; in most cases it's just wrong.


> Sure, but passing by value isn't always realistic. If you have a hashtable with a million entries, you can't copy it just to update a value.

What else are you going to do? If you want to insert a value but also have access to the hashtable without the change, then you need to copy it. (Or use a data structure with history tracking, like RCU or finger trees.)

If you don't need the old value, then you can optimize out the copy - this isn't too hard.


That's a weird corner case... I wouldn't expect the compiler to keep me from shooting myself in the foot like that. I'm surprised that some people do expect it.


Ever seen any of the top 5 functional languages?

And "weird corner case"? Really?


What do you honestly expect me to answer? That's kind of a moot point: the entire reason we have type systems in the first place is that they are partial, machine-checkable proofs of correctness. The entire discussion is how to strike the best power/practicality balance while keeping the thing decidable. Keeping you from shooting yourself in the foot is exactly the type system's job.

Why do you even bother with static types if you seriously believe that?


I believe in type systems as a way of knowing how data is encoded in memory. I don't expect them to somehow prevent me from writing that memory location more than once.


Have you ever used

   procedure Foo(constref Something: SomeType);
instead of

   procedure Foo(var Something: SomeType);
in Free Pascal? Both will pass the same stuff in memory (stack), but one has different semantics.

Similarly

   const NiceFeature = $01; NeatFeature = $02; GoodFeature = $04;
   var Features: Byte;
and

   {$packset 1}
   type Feature = (Nice, Neat, Good);
   var Features: set of Feature;
would be stored the same way in memory but again, different semantics with the compiler helping you in the second case to avoid logic bugs (e.g. setting Features to an invalid value).

These are cases where Free Pascal helps you from shooting yourself in the foot.


I've never used ConstRef, nor $packset

I can see the value of making sure you don't end up with invalid values.


But it's not a hot sexy * new * silver bullet for all of your problems.


Can we get rid of 'sealed'? I really hate that keyword. Sometimes there's a good reason for it, sure, but sometimes it's used gratuitously, and then I have to work around it for no good reason.

Eg, in C#, SqlDataReader is sealed. Which means I have to do this:

    reader.GetString(reader.GetOrdinal("FirstName"));
When I just want to skip the verbosity and be able to do this:

    reader.GetString("FirstName");
So the logical thought there would be just to inherit from it, add an extra overload to those functions, and life gets more comfortable, plus you can still pass it to anything that wants a SqlDataReader. But oops, you can't do that, because somebody at Microsoft decided the interface for this thing was perfect and is not to be messed with.

If there's something that really gets me annoyed, it's when I run into a limitation that seems completely arbitrary but intentional. It's not there because of the technical limits of the hardware, or because the compiler isn't clever enough, but purely because somebody intentionally decided 'nope, you don't get to do this particular thing you can do all day otherwise'.


>Can we get rid of 'sealed'? I really hate that keyword. Sometimes there's a good reason for it, sure, but sometimes it's used gratuitously, and then I have to work around it for no good reason.

Fyi... this stackoverflow Q&A has opinions on why 'sealed' is a rational default for the person who designed the class: https://stackoverflow.com/questions/268251/why-seal-a-class/

It doesn't mean the programmer thinks the base class is "perfect". Instead, the author has not deliberately designed the particular class for inheritance and the "sealed" keyword expresses that.


Yes, I know there are good reasons to do it sometimes, but I still don't get what about SqlDataReader benefits from it, other than "I don't know why anyone would want to inherit from it".

Looks like my issue with that got fixed with extension methods, which should let me tack on such improvements whether the original designer of the class thought that would be a good idea or not.


You could do this with an extension method, no inheritance required.


Oh, that's very nice, and just the kind of functionality I've been wanting for some time.

That was something I was recalling from a very old project though. I'm not sure right now when that was exactly. Maybe it wasn't in the language yet at the time, or it was still a new feature and I didn't find out about it, or it wasn't in Mono yet.

I've not done C# in a long time, so no doubt I missed a lot of developments.


> If there's something that really gets me annoyed, it's when I run into a limitation that seems completely arbitrary but intentional. It's not there because of the technical limits of the hardware, or because the compiler isn't clever enough, but purely because somebody intentionally decided 'nope, you don't get to do this particular thing you can do all day otherwise'.

Just... exactly this. Hate that so much. I cannot articulate how much I agree with what you have said here.


I second removing sealed (and final) from all languages (except maybe for embedded/microcontroller work where implementation details matter).

This issue might not seem relevant in your own projects, where maybe you want to seal a class so a junior developer doesn't break something.

But it's disastrous for library and framework development. Here's an open question of mine related to this:

https://stackoverflow.com/questions/65852139/listen-for-chan...

In that case, I'm trying to write a script that people can attach to a game object to rebuild metadata after a mesh is changed. I need the metadata for my shader, since it needs info about neighboring vertices, which would only be available in geometry shaders (which have fallen out of fashion since they run at the wrong stage of the rendering pipeline).

That way my shader would "just work" without making the user have to remember to set a flag or send an event when something changes (which is not future-proof).

Normally I would just inherit from the Mesh class, override/extend methods like Mesh.SetVertices(), and then explain how to use my class in the readme. Better yet, I would use something like inversion of control (IOC) to replace the project's global Mesh class with my own (this is how frameworks like Laravel work, or try to work). In fact, there are half a dozen ways of accomplishing this, perhaps more.

But with sealed and final, I'm just done. There is no workaround, by design. Because C# inherited this kind of opinionated thinking (anti-pattern) from Java. And Unity uses C#, so unwittingly fell into this trap since so many of their classes are sealed. Which all reduces languages like C# and Java to being toy languages, at least for someone like me working from first principles under some approximation of computer science.

Maybe I should write to Unity and explain that all of this happened during the very first interesting thing I tried to do. Or maybe we could get rid of this concept altogether and avoid the problem in the first place.

Even the gospel of Martin Fowler tends to agree with my stance on this:

https://martinfowler.com/bliki/Seal.html


Can't speak for all languages and use cases, but at least in Scala its primary use (afaik) is to enable the compiler to reason about the correctness of an application (i.e., it knows all the possible subclasses of a class and can tell you if you forgot to handle some case). I think this is also useful in other languages if you have some logic somewhere that depends on knowing all possible subclasses of a class (independent of the compiler generating warnings or not).


Regarding the "Kingdom of Nouns", I always thought Lobster[0][1] had a really cute idea: x(a, b, c) and a.x(b,c) are equivalent. Don't know if that is original to the language, but it's where I saw it first. It only really makes sense with the other features of the language, though.

(Also, I just discovered that it now targets WASM. Maybe I should give it another go!)

[0] http://strlen.com/lobster/

[1] https://github.com/aardappel/lobster


In D, this is called "uniform function call syntax." [1] It's really handy, not just for OOP but also for pipeline-style functional programming, since it lets you compose functions from left to right.

[1] https://tour.dlang.org/tour/en/gems/uniform-function-call-sy...


Ada supports that syntax for tagged types:

https://learn.adacore.com/courses/intro-to-ada/chapters/obje...


Nim also has this. You can also leave out the parens from calls, which adds to the syntax flexibility.


This is called unified function call syntax


You know, now that you mention it, I think every time I mention this feature of Lobster someone explains this to me, and for some weird reason I keep forgetting. Thanks for enlightening me again, hopefully it'll stick this time!


Would be kind of neat if the language supported returning tuples or anonymous types. ie x(a,b,c) is equivalent to {a,b}.x(c)


Author here: unified function call syntax is awesome but sadly out of scope for what I was trying to do with this language. I mention this in the "Class-based discoverability" section: for the sake of uniformity and IDE discoverability (arguably Java's killer feature that its successors have diluted), I'd want every value and function to belong to a class, which means every nonlocal method call should come after a dot.


This already assumes that unified call syntax would inherently get in the way of that - do you have any reason to believe that?

I mean, Lobster has classes, or at least something that it calls classes:

https://aardappel.github.io/lobster/language_reference.html#...


Take Java as an example. In Java, there are no free functions. Every method belongs to an object or a class.

So you couldn't write `f(a, b, c)` as `a.f(b, c)`, because `f` as a free function couldn't exist to begin with. Either it would already be `a.f`, or `f` would be a method of the current class or a static method imported via `import static`, in which case using the `a.f` syntax could deceive readers about what class `f` is a member of.

That's what I'm going for with this language, for uniformity and discoverability. If you want to add a new method that takes type `A` as its first argument, you should add it as an extension method on type `A`.


Could you elaborate on what "discoverability" means to you? I don't see any reason in this explanation why an IDE would not be able to do autocomplete on `a.f(b, c)`, with automatic documentation on top, which is all that is needed for discoverability in my book.

(also, to be clear, I'm not the one who downvoted you)


Am I wrong, or is OP describing most of what Swift currently offers? Optionals, extension where syntax, Result<> monads, exceptions collapsed to a single line with try? syntax, safe casting with as?, etc...


Same with Kotlin. It appears that most of these advances are really, really great and emerged in a wave in the '10s. We are a decade ahead of the curve ;) I think doing away with checked exceptions was a definite benefit in that direction too.

Having higher kinded types and type classes like Haskell would be nice though. I think there are definitely things in the article which could be even better, but we've definitely moved that way, and the next generation of language designers will probably have even more insight into how to elegantly combine those features as well.


> Having higher kinded types and type classes like Haskell would be nice though.

Don't have to leave the JVM, already there in Scala. And Kotlin Arrow may get merged into the compiler, so Kotlin may have first class support for these language features one day...


Scala uses type erasure to preserve compatibility with the JVM. No thanks. I want the compiler to statically verify my types, and then optimize based on that.

At some point, I realized Java code tends to have many more runtime type errors (in particular, NullPointerExceptions and ClassCastExceptions) than similar C++ code (segfaults). Things like Rust should be even better than C++, but that's practical largely thanks to performance optimizations from LLVM. Other than Java compatibility, I don't see any advantage to the JVM at this point.


Or OCaml, Eiffel, Sather,...


Most of what he mentions as features needed in this new language has existed in Common Lisp for a long time: CLOS.


F# is functional-first, but also fully supports OO. It’s a good balance for the 20’s, or any other decade.


It seems like a great language; I'm going to look into it this year - maybe for advent of code or some data science/ML stuff. For some annoyances there is F#+: https://fsprojects.github.io/FSharpPlus/


Agreed. F# lacks the HKTs that the OP calls for, but it is possible to write plenty of pragmatic functional code without them. They might land soon though.


What's your source for F# getting HKT's soon?


C# has a HKT proposal now. If C# gets it, F# will soon follow.


Hard disagree on exceptions. Passing sum types along with every call is annoying - in a good codebase, probably upward of 50% of your code might be returning some kind of Result/Expect type. Exceptions can be done right, and there are solutions to the main objections (implicit, hard to recover from) in various languages - but somehow nobody seems to have managed to put all of them in the same language.

Here's what I want from a new OO language, exception-wise:

- Checked exceptions only. Every function that can possibly throw, must declare what it throws. Supertypes may be used in declarations to cover a whole family of exception types. Every function must either handle or explicitly declare exceptions that can be thrown from its callees. This is to be baked into type system and enforced at compile-time.

- If the language is driven with IDE use in mind, allow "auto" for checked exception declarations. Will cut down on line noise, at the expense of readability (as type deduction always does). But since exception declarations are resolved at compile time, you won't be able to make an actual mistake here.

- A condition system, not exception system. Extend the out-of-band signalling mechanism to be used for arbitrary things, not just "exceptional situations". I.e. just as I can say `throw SomeError{someData, someMessage}`, I want to also be able to do e.g. `throw Progress{percentage, total}` to feed a progress bar that's declared few layers above in the stack (which would just execute its code and return control to the thrower; no stack unwinding). This is what you have in Common Lisp.

- Stack winding, not just stack unwinding. Also from Common Lisp, I want the exception (condition!) handler to happen prior to stack unwinding, in a way that would allow me to truly recover from the problem and resume execution where the condition was thrown, or somewhere in the middle of the call stack, between the handler and the thrower.

- Separating signalling, handling and recovery, both conceptually and in code. Again, from CL's condition system. A thrower throws an exception (condition), a handler decides what to do (or rethrows), and one of the things it can do is pick a "restart" declared down the call stack - then stack is unwound only to the point of that restart, and control resumes from there. Note the programmatic ability to choose a restart. Not 100% sure how to handle it in a type-safe way, but I believe it could be done.

- All the other stuff from Common Lisp's condition system.

So basically, a statically typed blend of C++, Java and Common Lisp, mashing together their best features into a coherent and powerful system.


So more like an algebraic effect system than what most people think of as exceptions? I definitely think that's a promising area, but unfortunately it's currently still only really a thing in research experiments a-la Koka (https://www.microsoft.com/en-us/research/project/koka/).


Personally I'd keep exceptions. Result<T> is boilerplate hell.


Having used Go a lot recently, the err != nil boilerplate is only marginally worse than the monad approach, which ends up with a lot of match statements. Where monads work better is chaining calls together, but at the cost of the choice of how to handle the errors.

Exceptions end up being an elegant way to represent this, you can choose how much you handle and where and match against the types of errors. Everything else I have tried has been worse, forcing me to handle errors I don't care about at that point in the code with a boilerplate response to send it upwards, something exceptions do automatically. I don't get more robust code with explicit error handling I get more verbose code for errors I can't do anything about. In practice it also changes the interface of my methods far more with maintenance and propagates error handling code throughout.


Yeah, the problem is call/return: you have to return something. If you have nothing to return (or not yet → async), things get tricky.

What I found really, really nice for error handling is dataflow-style programming, because the main flow only has to deal with the happy path. If you don't have a result, you simply don't send anything to the next filter in the pipeline.

And you report errors using a stderr style mechanism, so error handling can be centralised and at a high-level, where you actually know what to do with the errors.


The key is to collect the errors in the subsystem where they originated and to handle them at an appropriate time.

Try not to bubble up errors. Many subsystem interactions can be unidirectional. If an error occurs, it is stored right there. The caller might not even care. The situation can be handled later.


Subsystem is too big of a boundary. GP and GGP are likely talking about function level - which is where I also experience this.

Imagine a situation as simple as:

  foo = ComputeSomething()
Where ComputeSomething() is a complex operation, spread into multiple functions, with many points that can fail for various reasons. If a function four layers down the stack from ComputeSomething() reports an error, this means all three functions above it need to also pass it back as Result<T>, and if you try to avoid the if-else hell, it means the entire call graph under ComputeSomething() will need to be wired with a monadic interface that smartly doesn't execute functions when Result<T> is of Error type. All it does is make you zoom through the call graph doing nothing, just to simulate the default behavior of an exception bubbling up the call stack.
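A sketch of what that wiring looks like (everything here is made up; a tiny variant-based Result stands in for tl::expected or similar):

  #include <iostream>
  #include <string>
  #include <variant>

  struct Error { std::string message; };

  // Minimal stand-in for Result<T> / tl::expected<T, Error>.
  template <class T>
  using Result = std::variant<T, Error>;

  // The only layer that can actually fail...
  Result<int> layer4() { return Error{"disk on fire"}; }

  // ...but every layer above it still has to unwrap, check, and re-wrap,
  // even though none of them can do anything about the error.
  Result<int> layer3() {
      auto r = layer4();
      if (std::holds_alternative<Error>(r)) return std::get<Error>(r);
      return std::get<int>(r) + 1;
  }

  Result<int> layer2() {
      auto r = layer3();
      if (std::holds_alternative<Error>(r)) return std::get<Error>(r);
      return std::get<int>(r) * 2;
  }

  Result<int> ComputeSomething() {
      auto r = layer2();
      if (std::holds_alternative<Error>(r)) return std::get<Error>(r);
      return std::get<int>(r);
  }

  int main() {
      auto foo = ComputeSomething();
      if (std::holds_alternative<Error>(foo))
          std::cout << "failed: " << std::get<Error>(foo).message << '\n';
      else
          std::cout << std::get<int>(foo) << '\n';
  }

With exceptions, only main and layer4 would need to mention the error at all.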


Do you have an actual example where you do something 4 layers down the call stack from a subsystem entry point and (only) the lowest level can fail, and furthermore fail in a way that all the layers up the call chain need to bubble up the error? Maybe that can be rewritten so the error occurs / is handled at a more suitable place?

The most obvious idea is: instead of retrieving something 4 layers down, get the information first at the top level, and if there was no error push it down.

I've stopped thinking that syntactic features can solve structural problems. That applies to algebraic types just as it applies to exceptions.


The appropriate time is quite often at the end of the unit of work. Let's say your microservice is doing some interactions with a database and querying other services. If an unlikely I/O error happens, the DB will take care of the uncommitted transaction and your service is just fine returning HTTP 5xx. The probability of this happening, and the impact of not implementing error handlers all over your stack, are very low. Since the majority of developers are not writing perfect programs, they will not spend the money/time to avoid this risk. Exceptions may not help you write a better program, but they are practical enough to be supported in the language.


This absolutely works for a HTTP request handler, where all the error handling is implemented in the DB / garbage collector, and where we're dealing with almost perfectly isolated processes.

In more complex systems, this won't work out. Imagine a software that controls sensors and motors, for example. What if one of these devices fails? There is no way any automatic system (like exceptions) could come up with the right way to handle this situation.


There's no silver bullet that would solve error handling in such cases purely by means of the programming language. Just like it's possible to have memory leaks in Java, it is possible to write broken code with Either<Result, Error> - imagine accidentally ignoring an error from a sensor that results in overheating and fire. If you want to absolutely make sure that this won't happen, then you build your system in a way where prevention of the exceptional situations is implied by design, rather than requiring additional effort from the developer.


Yup. To continue with this example, an approach that often works is to simply not have any error. For sensor readouts, request new samples to be reported asynchronously. (Coincidentally this is better for performance as well).

In this way, the client code is basically required to think -- oh, wait, what happens if I'm not getting updates for some time beyond what is acceptable for me? Also, in this way, the situation can be handled in a single central location.


Exceptions are great when you want errors to terminate a (sub)program with an error message without having to type any code at all.

That's suitable for batch programs and other short running things where you can just scrap the incomplete computation and re-try with a blank slate. I'm sure that can be productive for webapp code that sits between a database and the frontend.

For longer running processes and more complex systems, exceptions are not so great. It's almost a given that you will have a hard time tracking down all the error scenarios in your system.


The last desktop app I wrote had a single error handler at the event loop that displayed an error message and continued.

Because the stack unwinding cleanup is the same when an exception occurs or when an operation completes normally, this app could recover from anything.

> It's almost a given that you will have a hard time tracking down all the error scenarios in your system.

You don't need to track down all the error scenarios. The only thing you need to worry about is things that you can recover from to retry and things you can't.

Passing errors around places too much emphasis on where errors occur. For recovery you only need to know what you can recover from and where you can do that recovery. This is usually nowhere near where the error occurs.


> Passing errors around places too much emphasis on where errors occur. For recovery you only need to know what you can recover from and where you can do that recovery. This is usually nowhere near where the error occurs.

This is actually the best argument why exceptions do not "scale": Because in larger systems, the place where you handle the error is almost certainly not up the call stack, but in a different subsystem.

> Passing errors around places too much emphasis on where errors occur. For recovery you only need to know what you can recover from and where you can do that recovery. This is usually nowhere near where the error occurs.

In a way, errors are just data, like everything else. There is not much sense in making them something special - as I said, with the exception of smaller script-like programs, where you usually want to jump out of a larger subprogram immediately, and rely on the garbage collector / stack unwinding to clean up (hopefully, everything) for you.


> There is not much sense in making them something special

The reason you treat them as special is because at the point an error/exception occurs you are saying that your function (or program) can no longer do any more meaningful work, forward progress is now impossible, and you can't return anything meaningful.

We've gone backwards in making error returns explicit because the most common and intelligent thing to do is pass the error back up the chain or right out of the subsystem entirely. The lazy approach is the correct one with exceptions.

> Because in larger systems, the place where you handle the error is almost certainly not up the call stack, but in a different subsystem.

If your subsystem is network-connected, then the most likely cause of a restartable failure is a minor network issue. There's no need to inform another subsystem immediately. I don't really see how it's an argument that exceptions don't scale. If you need to inform another subsystem, you can do that.


Why can't I do any more meaningful work? Why can't I return anything meaningful? Why do I have to return anything at all?


It's the definition of an error or exception.


> For longer running processes and more complex systems, exceptions are not so great. It's almost a given that you will have a hard time tracking down all the error scenarios in your system.

I disagree. I absolutely love throwing exceptions from lower levels of my web apps and letting them bubble to be handled by exception mappers that will formulate the correct http response from them.


I think the parent poster would classify that kind of web app as a "short running thing". Sure, the application process might live for a long time but the requests are short-lived and independent of each other.


Every program is a collection of a lot of "short running things". Exceptions work well with this. There's a piece of code that calls some function and expects a result or an error; there's a piece of code much deeper in the stack that can fail. Everything in the middle doesn't care about the failure, except to make sure it doesn't get executed if the failure occurs. Without an exception-like mechanism, you have to make the middle care about possible errors, just to prevent code past the failure point from executing. Your calls like foo(a) get turned into if(a != error) { foo(a); }, even if it's hidden behind a nice monadic syntax.


parent: I'm sure that can be productive for webapp code that sits between a database and the frontend.


Not if you have language support for monads / do-notation / computation expressions.

Many "problems" that people have with functional programming occur because they are trying to use FP techniques in a language that isn't really equipped for functional programming (JavaScript, C#, Java, Go...).


What the author is proposing sure sounds an awful lot like Swift to me.


multiple inheritance.

  c++ : both sides can bring state. has diamond issues.
  Java : one side can bring state. other just signatures. too restrictive.
What you want is: both sides should be able to bring behaviour, but only one side can bring state. (IIRC, Ruby does it like that...)

