How Rust Gets Polymorphism Right [video] (youtube.com)
153 points by adamnemecek 31 days ago | 49 comments

There is a rumor[1] that even Rust has yet to get polymorphism right:

>I'm using nalgebra for math. I'll write a bit about it here, because I think it's relevant to Rust as whole. I'm not sure if I actually like it. It's so heavily templated that most of the error messages are impossible to understand, so I usually just double-check what I wrote and try to guess what's wrong. It's also where rust's documentation generator gets in trouble. Signal to noise ratio there is <10%. Not sure how to fix that. Anyway, my point is that maybe it's not always the best idea to write code as generic as possible.

[1]: https://www.reddit.com/r/rust/comments/795dg4/i_spent_the_la...

Much of nalgebra's trouble comes from it trying to be n-dimensional without support for constants in generics yet (this is in the works). To do this it performs some crazy wizardry, using traits to model numbers at compile time. It's impressive for sure, but the compile errors are equally impressive, which is why work is being done to add support for constants in generic parameters.
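To make the trick concrete, here is a minimal sketch of modeling dimensions at the type level with traits. This is a hypothetical illustration, not nalgebra's actual API; the names `Dim`, `U2`, `U3`, and `VectorN` are made up for the example.

```rust
// Hypothetical sketch: each dimension is a zero-sized type, and a
// trait recovers its numeric value when needed at runtime.
trait Dim {
    const VALUE: usize;
}

struct U2;
struct U3;

impl Dim for U2 { const VALUE: usize = 2; }
impl Dim for U3 { const VALUE: usize = 3; }

// A vector whose dimension lives in the type; storage is a plain
// Vec here for simplicity.
struct VectorN<D: Dim> {
    data: Vec<f64>,
    _dim: std::marker::PhantomData<D>,
}

impl<D: Dim> VectorN<D> {
    fn zeros() -> Self {
        VectorN { data: vec![0.0; D::VALUE], _dim: std::marker::PhantomData }
    }
}

fn main() {
    let v = VectorN::<U3>::zeros();
    println!("{}", v.data.len()); // 3
}
```

With associated consts this particular case is tame; the errors get hairy once dimensions have to be added or compared at the type level, which is what const generics would make direct.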

But this is the case for one library. Another library that suffers from poor errors is the futures lib, but that will be solved by the `impl Trait` syntax that is in the works.

Definitely agree it's a good idea to have a balance when it comes to generics, but likewise I'm glad folks are pushing the boundaries and finding room where things could be improved. It does take time for the language to catch up, but they're working on it. Lots of good stuff is coming out of the current impl period.

Using classes or traits to model dimension numbers is something I did in C# when working on Bling 10 or so years ago. If you only have a few dimensions to handle anyway, it works out, and you can embed predecessor and successor dimensions in the types. Good days, though I guess I’m not that reckless anymore.

Yeah, same in the C++ world with templates. They're just so handy, especially when you want performance! Thankfully C++'s `constexpr` has made this kind of thing much more tenable from a library consumer's perspective, and Rust will again be following in its footsteps, albeit with a less ad hoc version based on dependent types.

For what it's worth, my cgmath library just has a bunch of hardcoded `Vector{2,3,4}`, `Point{2,3}`, and `Matrix{2,3,4}` types.

Ya, that looks familiar. In C#, you can declare class D4 : Dim<D3,D4,D5> ...., and then use extension methods to access the D3 and D5 type parameters as the previous and next dimensions. It was all nice and f-bounded as well. C# is an underappreciated language for these kinds of things.
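A rough Rust analog of that C# pattern, using associated types to give each dimension its neighbours. This is a hypothetical sketch; the names `Dim`, `D3`, `D4`, `D5` are invented for the example.

```rust
// Hypothetical Rust analog of C#'s Dim<Prev, This, Next>: each
// dimension type names its predecessor and successor through
// associated types.
trait Dim {
    type Prev;
    type Next;
    const VALUE: usize;
}

struct D3;
struct D4;
struct D5;

impl Dim for D4 {
    type Prev = D3;
    type Next = D5;
    const VALUE: usize = 4;
}

fn main() {
    // The compiler knows D4's neighbours statically.
    let _prev: <D4 as Dim>::Prev = D3;
    let _next: <D4 as Dim>::Next = D5;
    println!("{}", <D4 as Dim>::VALUE); // 4
}
```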

Those criticisms are mostly of the way rustc outputs errors (it is definitely open to improvement, and notable improvements are made every few months) and how rustdoc shows generics (automatic generation of such things can’t really work well; there are ways that it could probably be improved, but they’re hard enough that no one has tried any of them that I know of).

Of course, the UI is a part of Rust, so these things do matter, but as far as the language implementation is concerned, it’s not as bleak as all that. Rust, being a comparatively complex language (unlike Go, for example, where linguistic simplicity is a feature) is taking a long time to mature, but it is steadily maturing.

To be clear, the paragraph you're quoting is criticizing one specific library (the nalgebra library) which overuses traits. In context, that quote is about the temptation of writing super-polymorphic code in Rust, and not strictly a complaint about Rust's specific implementation of polymorphism. I'm not at all convinced that the argument made by the video is somehow contradicted by the fact that Rust still lets you write unnecessarily polymorphic code.

Such heavy use of traits can allow you to do things that would otherwise have been impossible or less efficient—there are advantages to it. But they definitely come at a cost.

I’d love to see a comparison between Rust and Haskell/ML w.r.t. polymorphism. I wonder whether “gets right” would appear there, and on which side.

Rust's traits are basically Haskell typeclasses, with a few extensions enabled, like multiple parameters.
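For instance, a trait with an extra type parameter gives you roughly what Haskell's MultiParamTypeClasses extension gives a typeclass. The `ConvertTo` trait here is a made-up example, not a standard library item.

```rust
// A trait parameterized over a second type, like a two-parameter
// typeclass: the pair (i32, f64) has one instance, (i32, String)
// another.
trait ConvertTo<T> {
    fn convert(&self) -> T;
}

impl ConvertTo<f64> for i32 {
    fn convert(&self) -> f64 { *self as f64 }
}

impl ConvertTo<String> for i32 {
    fn convert(&self) -> String { self.to_string() }
}

fn main() {
    // The expected type selects which impl is used.
    let x: f64 = 7i32.convert();
    let s: String = 7i32.convert();
    println!("{} {}", x, s); // 7 7
}
```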

Trait objects are basically existential types, with a few more restrictions on them. But in essence polymorphism in Haskell and Rust are very similar.
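The two flavours side by side, as a small sketch: a generic bound monomorphizes per type (like a typeclass constraint), while a trait object erases the concrete type, much like an existential ("some type implementing Display").

```rust
use std::fmt::Display;

// Static dispatch: a separate copy is compiled for each T.
fn show_generic<T: Display>(x: T) -> String {
    format!("{}", x)
}

// Dynamic dispatch: the concrete type is hidden behind a vtable.
fn show_dyn(x: &dyn Display) -> String {
    format!("{}", x)
}

fn main() {
    // A heterogeneous list is only possible with trait objects.
    let items: Vec<Box<dyn Display>> = vec![Box::new(1), Box::new("two")];
    for item in &items {
        println!("{}", show_dyn(item.as_ref()));
    }
    println!("{}", show_generic(3.5));
}
```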

The big thing Rust is missing is higher kinded polymorphism, ie the ability to abstract over type constructors. This makes it easy, for example, to abstract over a container type while putting different things in that container.

This was approved in the Associated Type Constructors RFC, but the implementation has yet to land.
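A sketch of the kind of abstraction that RFC enables, written with generic associated types (so it needs a toolchain where that feature is available). The trait and type names (`Pointed`, `VecK`, `OptionK`) are invented for the example.

```rust
// Abstracting over a type constructor: `Of<T>` is a type-level
// function, so one trait can cover Vec, Option, etc.
trait Pointed {
    type Of<T>;
    fn wrap<T>(x: T) -> Self::Of<T>;
}

struct VecK;
struct OptionK;

impl Pointed for VecK {
    type Of<T> = Vec<T>;
    fn wrap<T>(x: T) -> Vec<T> { vec![x] }
}

impl Pointed for OptionK {
    type Of<T> = Option<T>;
    fn wrap<T>(x: T) -> Option<T> { Some(x) }
}

fn main() {
    println!("{:?}", VecK::wrap(1));    // [1]
    println!("{:?}", OptionK::wrap(1)); // Some(1)
}
```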

One thing no one else has mentioned yet: in Haskell you can’t provide a specialised implementation of a generic function. In Rust you can.

Whilst this is obviously useful, it makes it impossible to perform type-based parametricity reasoning.

Of course you can. The good old {-# LANGUAGE OverlappingInstances #-}.

In Haskell you have to supply the entire instance, versus just specializing one method as you can do in Rust. Also in Haskell this extension is not recommended for general use and you won’t often find specialization like this in the wild.

The typical way to do multiple instances for a given type is with trivial newtype wrappers for each instance. See the "Sum" and "Product" monoid instances (although these wrappers are also necessary for instance decidability). This is fine "in the wild."

Also, I'm not sure why you would ever want a "partial instance," as it would mean that your program could be unsafe but still typecheck (I haven't checked out Rust's implementation or motivation). Just use more specific classes, and you can even use a "generic" "instances of the components imply an instance of the whole" class and instance.

When you specialize an instance in Rust, by definition it already has a valid instance that covers that type. So implementing a single method is fine - the other ones are provided by the instance you are specializing.
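The stable, everyday version of "implement a single method" in Rust is overriding one default method of a trait; the rest come from the trait's defaults. (Full impl-vs-impl specialization is a separate, unstable feature.) The `Greet` trait here is a made-up example.

```rust
// A trait supplying a default method; an impl writes only what it
// wants to change.
trait Greet {
    fn name(&self) -> String;
    fn greet(&self) -> String {
        format!("Hello, {}!", self.name())
    }
}

struct Alice;
impl Greet for Alice {
    fn name(&self) -> String { "Alice".into() }
    // `greet` comes from the trait's default.
}

struct Robot;
impl Greet for Robot {
    fn name(&self) -> String { "R2".into() }
    // Overrides just this one method; `name` above is still used.
    fn greet(&self) -> String {
        format!("BEEP {} BOOP", self.name())
    }
}

fn main() {
    println!("{}", Alice.greet()); // Hello, Alice!
    println!("{}", Robot.greet()); // BEEP R2 BOOP
}
```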

Newtypes also exist in Rust but this is an alternative that is not the same as the overlapping instances extension.
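For comparison, a sketch of the newtype trick in Rust, mirroring Haskell's Sum/Product monoid wrappers: two zero-cost wrappers over the same type, each choosing a different instance of the same operation. The `Combine` trait is a stand-in invented for the example.

```rust
// Two newtypes over i64, each with its own way of combining values.
trait Combine {
    fn combine(self, other: Self) -> Self;
}

#[derive(Debug, PartialEq)]
struct Sum(i64);

#[derive(Debug, PartialEq)]
struct Product(i64);

impl Combine for Sum {
    fn combine(self, other: Self) -> Self { Sum(self.0 + other.0) }
}

impl Combine for Product {
    fn combine(self, other: Self) -> Self { Product(self.0 * other.0) }
}

fn main() {
    // The wrapper type, not the inner value, selects the behaviour.
    println!("{:?}", Sum(2).combine(Sum(3)));         // Sum(5)
    println!("{:?}", Product(2).combine(Product(3))); // Product(6)
}
```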

Rust will find out why as soon as this gets widespread.

You can either have overlapping instances or you can have your class declarations independent of the type declarations. You cannot have both.

I think you can using GHC Pragmas.

Here's a post that compares the two pretty well, though it assumes you know a bit of some ML-style language:


Apples & Oranges.

Although Rust heavily borrows from the ML family, the fact that it has mutability and imperative reasoning makes them hard to compare.

But if you want a short one:

- Sum Types EDIT: AND Product Types (silly me)

- function-objects & closures

- typeclasses (as far as I can tell), but no higher kinded types

- no monads, functors nor monoids built-in

- trait objects (e.g. runtime polymorphism, think "abstract classes")

There are other imperative languages in the ML family too, right? OCaml has an imperative mode.

I don't think Rust is that much different from other functional languages. The main critical difference is the manual memory management feature - the way it's acknowledged and used throughout the language and the ecosystem means that the functional patterns commonly used in garbage collected languages don't look nearly as natural in Rust.

> - Sum Types but no Product Types

Maybe I'm missing something, but aren't tuples and structs/records examples of product types? Which both rust and haskell have?

Yes I always thought tuples were the archetypal product type, which is why in some ML dialects they're notated as "Type1 * Type2".

Rust most certainly has them, as does Haskell. Records/structs of course are just the same thing with names for the components.

Thus, even Java has product types.

> Thus, even Java has product types.

Even C has product types. Which is why I'm always bothered when people claim language X lacks algebraic data types.

They probably lack sum types, but are very unlikely to lack product types.

Product types without sum types and pattern matching are basically useless for expressing ADTs. That's what people mean.
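To illustrate the pairing the comment is describing, here is a minimal Rust sketch of a product type and a sum type working together with pattern matching; the `Point`/`Shape` names are invented for the example.

```rust
// A struct is a product type: it holds an x AND a y.
struct Point { x: f64, y: f64 }

// An enum is a sum type: a value is a Circle OR a Rect.
enum Shape {
    Circle { center: Point, radius: f64 },
    Rect { corner: Point, w: f64, h: f64 },
}

// `match` consumes the sum type exhaustively; forgetting a
// variant is a compile error.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius, .. } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h, .. } => w * h,
    }
}

fn main() {
    let origin = Point { x: 0.0, y: 0.0 };
    let r = Shape::Rect { corner: origin, w: 3.0, h: 4.0 };
    println!("{}", area(&r)); // 12
}
```

Without the enum and `match`, the struct alone gives you no safe way to say "one of these alternatives," which is the point being made.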

> Product types without sum types and pattern matching are basically useless for expressing ADTs.

That makes no sense. Algebraic data types are not expressed, they're a concrete thing, a classification of composite types (by multiplication (hence product types) or addition (hence sum types)).

Sure it does, languages allow you to express things.

Some shitty languages don't have a concise way for you to express particular things (like sum types), so you have to rely on idioms or "patterns".

C also has sum types with unions (though without any indication of which of the alternatives is actually present).

They are mix types then.

And in SML tuples are just records with numeric fields in a special order. Enter this in the repl:

    { 1 = "hi", 2 = "howdy" };
And get:

    val it = ("hi","howdy") : string * string

Oh wow, never realized the connection between product types and the ML type notation. Thanks for the insight!

Thanks for the insight on this, I've edited my response. Are there any extra features in Haskell for product types? I've always understood that there was better composition support for product types in Haskell.

Not really. Haskell has support for higher kinded types that allow it to support lenses, which are a powerful way of declaratively traversing through data structures, so perhaps that's what you mean. On the other hand, Haskell's record support is pretty woeful in comparison to languages like OCaml, Elm, and PureScript that have support for row polymorphism. Depending on the sophistication of their implementation, row polymorphism will allow you to add and remove fields from records in a statically type checked way, and some may even allow you to pass labels around at the type level. They can even be used to model extensible variants and extensible effect types. There are creative ways to encode this in Haskell, but they all have their drawbacks, can be painful to work with, and really don't feel like a natural part of the language.

Well, there is per-field laziness (with the ability to opt out). So product types can let each consumer choose which parts of the result to actually evaluate and discard the rest.
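Rust is strict, but the same effect can be simulated by making each field a closure that the consumer only calls if it wants that part. A hedged sketch, with the `LazyResult` name invented for the example:

```rust
// Each field is a thunk; nothing is computed until it is called.
struct LazyResult<F1: FnOnce() -> u64, F2: FnOnce() -> u64> {
    cheap: F1,
    expensive: F2,
}

fn main() {
    let r = LazyResult {
        cheap: || 42,
        expensive: || (0..1_000_000u64).sum::<u64>(),
    };
    // Only the field we ask for is ever evaluated; the expensive
    // closure is dropped without running.
    println!("{}", (r.cheap)()); // 42
}
```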

It makes me smile that this type of programming is starting to influence so many things. You see it even in Swift with 'protocols' and 'extensions'. Typeclasses are a fantastic method of abstraction. I remember watching a video on Rust's history, and they said that some time during Rust's development a bunch of Haskellers joined, and the current traits and ownership system was born (I'm sure I'm oversimplifying here).

I'm glad that it's happening, but at the same time I'm frustrated that it's taken 20+ years to adopt a limited subset of what Haskell had.

Watching this recording I really wished they would invest more in audio equipment and in people who know how to record it. Reminds me of a lecturer at uni who had a bunch of recorded guest lectures that he would show every year. The audio on those was terrible, constantly clipped, but when we complained he said he couldn't hear anything wrong.

This is why text articles are superior to videos - or at least an accompanying transcript. Text is very hard to mess up.


(Interpretation of that raised eyebrow: text isn’t hard to mess up. It is nonetheless more accessible in general, I just couldn’t leave the “text is very hard to mess up” unremarked on.)

Your counterpoint example to the proposition that text is hard to mess up is... a picture?

Edit: or have I completely missed the point of what you're saying!

It's a picture as much as any particular character in a font is, which is to say, it is. We're just very used to recognizing particular variations and combinations. There's one problem here that can be expressed in two ways: one is a proliferation of many new characters which people may not know the idiomatic usage of until they've encountered them in more context; the other is new combinations of known characters and words into a new idiomatic usage that has the same problem.

Consider that there was a time not so long ago where the vast majority of people didn't know what LOL meant and would be confused when encountering it. On the other end you could probably copy some heavy British slang in here right now and I would be very confused.

My response was simply a raised eyebrow. It wasn’t a counterpoint.

The fact that it was open to several additional interpretations merely suited my sense of humour.

I would disagree for research - text is usually in the form of papers. When you're writing a paper there's a strong incentive to make it sound as complicated as possible, so very few papers are easy to follow.

Giving a presentation, though, there isn't such an incentive, and the goal is more explicitly to explain your idea rather than to prove how clever it is. So I find presentations easier to follow in general (unless the speaker is bad, of which there are unfortunately many).

That always makes me crazy; how a "simple" thing like sound makes a talk useful or unbearable.

I always fantasize about a collaborative app that would blend a few smartphone audio recordings to cancel noise and room reverb into a nice vocal signal.

The app seems pretty feasible. There'd probably be some issues with having to periodically resync the timing due to moving microphones or other skews coming in and out of phase but I don't imagine that'd be insurmountable.

The bigger trick would be getting various parties at any given event to help out by first recording and then submitting their data to the same collaboration service. I've seen enough rooms full of techies struggling to get a projector or conference call working to imagine all kinds of headaches with connectivity, app installation, finding the right event reference, etc. If you allow post-event submission, most people would probably forget or not care by the time they got back to their home networks only to be annoyed weeks or months later that they still have a large audio file taking up space on their device.

As more people are posting various captures (just for personal-reasons/vanity, without necessarily going out of their way to prescribe any particular remix), maybe something like PhotoSynth where you crawl the web for media having similar GPS and timestamps could work. Maybe people could be bothered to tag their posts with hashtag #YetAnotherCon2017; then you just have to monitor a couple of the more popular social channels.

With accelerometer data you may have enough to recalibrate.

As for cooperation, I'm sure it's not hard to find 10 people willing to contribute. You can even count on the speakers or the organizers.

Pretty sure that already exists. Maybe not in a production app but I bet there are lots of PoCs.

Here's one for example: https://twei7.com/papers/Sur_Wei_MobiSys14_Dia.pdf

Honestly, I would be happy to see a podcast / Youtube series come out that finds quality tech talk speakers, and polishes them up to TED level: concise 20 minute videos, with top notch audio, well rehearsed presenters and informative content.

Have a look at InfoQ website. I've watched lots of great tech presentations on there. Also Gary Bernhardt from Destroy All Software.

InfoQ looks like they just recorded your typical conference presentation. Most of the videos I'm seeing are 2x the length I'm talking about, and are half vendor pitch.

Destroy All Software is closer, but it's kind of like one guy. Great for him! And for me on my flight here in a few minutes. But what I envision is more like training speakers to be concise and engaging yet informative. Kinda like how an editor helps a writer cut things out to make a better novel, but for tech talks. A tall order, I know.
