
I can imagine a Rust-like language where we have move-constructors (as in TFA), and every generated Future subtype is opaque and also heap allocated for us.

I think the need for Pin could then disappear, because the user would have no way to destroy it, since it's opaque and elsewhere on the heap, and therefore no way to move it (because having move-constructors implies that moving is conceptually destroying then recreating things).


Pin is a state rather than a property of the data itself.

This has the very nice effect of allowing Futures to be merged and inlined before they're executed.

It's similar to how Rust does immutability — there's no immutable memory, only immutable references.


You don't need move constructors if every future is heap allocated. But then every call to an async function is a separate allocation, which is terrible for memory locality. Some kind of virtual stack would be much better than that (but then you need garbage collection if you want the stacks to be memory-optimized to be small by default).


You could use an actual stack. As I understand it this was not done for questionable reasons relating to borrows of thread-locals. You could also allocate a top-level async function and all of its transitive async callees all at once, if you force the user to put all this information in 1 translation unit. Or you could use a bump allocator specifically for futures used in a certain part of the program, if you're willing to give up using a global allocator for everything. So it seems like there are a lot of options.
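
For instance, here's a rough sketch of that last option, assuming the bumpalo crate with its `boxed` feature enabled (the executor side is elided):

    use bumpalo::Bump;
    use bumpalo::boxed::Box as BumpBox;

    async fn step(x: u32) -> u32 { x + 1 }

    fn main() {
        let arena = Bump::new();
        // The future lives in the arena instead of the global heap;
        // the whole arena is freed in one shot when it goes out of scope.
        let fut = BumpBox::new_in(step(41), &arena);
        // ...hand `fut` to an executor that can poll arena-boxed futures...
        drop(fut); // runs the future's destructor; memory stays in the arena
    }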


All this is fine and dandy for async web apps but would make things like async-as-state-machine, which powers a lot of rust embedded development, basically a non-starter.


In embedded development it's impossible to use any allocation strategy other than the language constantly calling a global malloc on your behalf?


Normal async doesn’t require any malloc.


Great! I certainly didn't intend to say that stack-allocated or statically-allocated futures should somehow stop working?


If you're allocating a call stack per thread you might as well just use actual threads.


This obviously doesn't work. Maybe you can try refactoring a console emulator that uses its stack switching primitive millions of times per second to use threads instead and let us know how well it runs.


But you lose the async/await syntax (which I'm addicted to because it makes blocking points explicit, like `?` makes errors explicit).


Yeah. I can guess how disruptive it would be, but I really wish rust had bitten the bullet and added a Move trait to std, baked into the language at a similar level to Copy. Move defines a function which moves a value from one address in memory to another. Structs without impl Move cannot be moved.

Almost all types would #[derive(Move)], which implements a trivial move function that copies the bytes. But this opens the door to self-referential types, futures, and lots of other things that need more complex move behaviour. (Actually, it might make more sense to define two traits, mirroring the difference between Copy and Clone. One is a marker trait which tells the compiler that the bytes can just be moved. The other allows custom "move constructor" implementation.)
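
A sketch of what those two traits might look like (hypothetical names; today's rustc never calls a user hook on move, so this is purely illustrative):

    /// Marker trait, mirroring Copy: moving is just copying the bytes.
    unsafe trait MoveBits {}

    /// Custom move constructor, mirroring Clone: called after the bytes
    /// have been relocated, so internal pointers can be fixed up.
    trait Move {
        fn post_move(&mut self, old: *const Self);
    }

    /// A self-referential type could then repair itself when moved:
    struct SelfRef {
        data: u32,
        ptr_to_data: *const u32, // points at `data` above
    }

    impl Move for SelfRef {
        fn post_move(&mut self, _old: *const Self) {
            self.ptr_to_data = &self.data; // re-point at the new location
        }
    }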

I want move because pin is so hard to understand. It's a complex idea wrapped in double or sometimes triple negatives. fn<X: !Unpin>(...). Wat? I drop off at unsafe pin-projecting. When is that safe? When is it not? Blah, I'm out.

Moving from rust-without-move to rust-with-move would be inconvenient, because basically every struct anyone has written in rust to date would need #[derive(Move)] added. Including in std. And all types in existing editions that aren't pinned would need the compiler to infer a Move trait implementation. This should be possible to do mechanically. It would just be a lot of work.

Async rust is horrible. Especially compared to futures / promises in almost any other language. At some point, someone will make a new rust-like systems language which has an improved version of rust's memory safety model, a Move trait, and better futures. I'd personally also love comptime instead of rust's macro system.

I love rust. I love all the work the team has put into it over the years. But the language I'm really looking forward to is the language that comes after rust. Same idea, but something that has learned from rust's mistakes. And it's increasingly becoming clear what that better rust-like language might look like. I can't wait.


I very much disagree that Pin is as hard as everyone makes it out to be. Use the pin! macro, the pin-project crate, and enough as_mut() calls to get it to compile, and it's not hard at all to get a Future impl working. It would be good to get this native (which is what boats wants) so it's easier to discover, but it's not at all hard by any means.
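
For anyone curious, here's roughly what that looks like with pin-project: a minimal sketch of a wrapper future that counts its polls.

    use pin_project::pin_project;
    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    #[pin_project]
    struct Counted<F> {
        #[pin]
        inner: F,   // structurally pinned: projects to Pin<&mut F>
        polls: u32, // not pinned: projects to &mut u32
    }

    impl<F: Future> Future for Counted<F> {
        type Output = F::Output;

        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<F::Output> {
            let this = self.project(); // the macro-generated safe projection
            *this.polls += 1;
            this.inner.poll(cx)
        }
    }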

I think a lot of people think pin is confusing but don't actually try to learn it. When I've sat with people and helped them they understand pretty quickly what pin solves and how it works.

I very strongly think move constructors would be even more complex than pin.


I can only speak from my experience, but I really struggled with it. Ok, I understand moving. Pin is ... not moving. But the struct can still move, just ... not when you have a pointer to the struct. Ok, weird, but ok. Pin has a weird, limited set of functions to access the data fields of the struct. Some are unsafe. And then there's Unpin, which sounds like it's not-not-move, so, something can move? No. From std:

> Implementing the Unpin trait for T expresses the fact that T is pinning-agnostic: it shall not expose nor rely on any pinning guarantees.

So, ??. Then there are macros for pin-project, which most projects in the wild use, but some are unsafe. Why? Which of my fields can I safely expose using pin-project?

I tried to implement a custom SSE-style streaming protocol over HTTP, to make a rust server implementation of Braid. I spent about a week trying, including poring over the implementations of server-sent events and websockets in one of the HTTP libraries. Ultimately I failed to get it to work. (This was before TAIT and some other recent features, so things are probably better now.)

I picked up javascript to write my server instead, and I had the protocol implemented in about 20 minutes, in just 20 or so lines of code.

I adore rust, and I'd much rather a rust implementation than one based on nodejs. But I ran into skill issues here. Pin and futures in rust are hard. Or at least, I found them hard. I'm sure if I took another crack at it I'd be able to figure it out. But I don't want to spend so many of my brain cells on the language. I want to spend my attention thinking about my problem domain. Like I can in javascript.

Rust is an amazing language. But yeah, I really think that pin doesn't meet the standard that the rest of the language sets. I think it could use a rethink.


I'm curious why you needed to deal with `Pin` instead of using async functions. What led you to a path in which you needed to implement poll methods yourself?

For what it's worth, all of the practical problems you encountered with using Pin are exactly what my next post will show how to solve.


I look forward to your next post on the topic then!

> I'm curious why you needed to deal with `Pin` instead of using async functions.

The protocol I was trying to implement streams messages over time over a single HTTP request that's kept alive for a long time. This is how Server-Sent Events (SSE) works, and it's how Google Chat in gmail was first implemented in a way that supported IE 5.5 (!!!).

This was a couple years ago now, so the details are a bit fuzzy. And I was relatively new to rust at the time. I was, at the time, still sometimes surprised by the borrow checker.

My goal was to make a writable async stream that I could push messages into from other parts of my program. And it also needed backpressure. When you sent messages into the stream, the protocol implementation encoded them and streamed them into the body of my HTTP response object. I was (I think) using hyper.

This was before TAIT was in rust, and for one reason or another I needed to store / reference the future object I was making. (If you use an async fn(), you don't get a name for the Future type the function returns. So I couldn't put the return type in my struct, because I couldn't name it.)
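
The standard workaround was to box and type-erase the future; something like this sketch (names made up):

    use std::future::Future;
    use std::pin::Pin;

    async fn next_message() -> String {
        "hello".to_string()
    }

    struct Connection {
        // fut: ???  <- the future returned by next_message() has no
        // nameable type, so pre-TAIT it had to be boxed and type-erased:
        fut: Pin<Box<dyn Future<Output = String> + Send>>,
    }

    fn connect() -> Connection {
        Connection { fut: Box::pin(next_message()) }
    }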

So I ended up writing a custom struct that implemented Future, so I could reference the future elsewhere in my code. Hence implementing poll myself. I can't honestly remember how Pin came into it all. I think hyper's API for doing this sort of thing stored a Pin<T> or something.

I remember at some point trying to write a where clause using higher ranked type bounds to describe the lifetime of a future object that was, or wasn't, associated with the lifetime of the corresponding HTTP request. And that may or may not have been Pinned, and I gave up.

It might be fun to revisit this at some point now that rust's async support has matured a little. And now that I've matured a lot in how I understand rust. I certainly don't imagine that everyone using async will run into the sort of quagmire that I hit. But this was the first thing I ever really wanted to do with async rust, and it felt horrible to fall on my face trying.


Thanks for your write up. This sounds like a perfect use case for async generators (which yield many times and compile to Stream instead of Future), a feature I hope Rust will gain in the next year or two. To receive messages from other parts of the program, I would have the async generator hold the receiving end of a channel.
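
In the meantime you can approximate that shape with the async-stream crate; a rough sketch (the crate and channel choices here are my assumptions, and backpressure falls out of the bounded channel):

    use async_stream::stream;
    use futures_core::stream::Stream;
    use tokio::sync::mpsc::Receiver;

    fn sse_events(mut rx: Receiver<String>) -> impl Stream<Item = String> {
        stream! {
            while let Some(msg) = rx.recv().await {
                yield format!("data: {msg}\n\n"); // SSE wire format
            }
        }
    }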


Yeah, I would love generators (and async generators) to make it into stable rust. They would be a godsend for my work.


When I worked on Fuchsia and we had a heavily async set of system interfaces, users went through this learning path very regularly, and it was very painful for many. Folks who reached out for help early in their first engagement with this kind of path got help and, learning by doing, started to understand what was going on after a few iterations of the same challenge. Those who struggled trying to figure it out all on their own had a really awful time, and in one example even went back to c++ for a sizable project because of the wall they ran into. There's a big gap here for folks who want to self-help their way through this. TAIT reduces the number of cases that come up, but there are still plenty.

Reflecting on a point from the article, it's possible that requiring ?Move in every declaration might have been better in this respect. The point here about not being able to remember where the requirement to deal with Pin comes from is an indicator: the virality of the key traits involved, along with implicit implementation, is a particularly tricky mix. It leads to action at a distance, which is also why first-time engagements are so hard for users. Mix in some misunderstandings and you're in nope territory.


Thanks for saying so. It's nice to know I'm not alone in struggling with this stuff. Now that some time has passed, I might loop back and take another stab at it. It'd feel great to clear this hurdle.


> Async rust is horrible. Especially compared to futures / promises in almost any other language.

Having written a bunch of async code over 20 years (first exposure was twisted in the early 2000s, and all sorts of stuff since, including manual function/stack wrangling on my own reactor), async in rust is like many other things in rust: it forces you to deal with the problems up-front rather than 6-12 months later once something is in prod. It helps to stop and understand why the compiler is yelling at you. For me, anyway, once I do grok that, I'm glad I didn't have a repeat of $BUG_THAT_TOOK_DOWN_PROD_FOR_A_WEEK. That was a bad week.


Good to see some more uses of linear types! Very few languages can use linear types, and their potential is huge IMO.

OP shows the benefits of a linear RC that tracks references dynamically, and one can even imagine a version where the compiler tracks the references instead.

For example (for those who are more familiar with Rust), imagine that it had linear types and that we used them for matthieum's static-rc [0] (as suggested by frank-king at [1] to prevent leaks), and you'd have a pretty good idea of linear static RC.
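
For reference, here's roughly what static-rc looks like today (adapted from its README); a linear variant would additionally force the two halves to be rejoined rather than leaked:

    use static_rc::StaticRc;

    type Full<T> = StaticRc<T, 2, 2>;
    type Half<T> = StaticRc<T, 1, 2>;

    fn main() {
        let full: Full<String> = Full::new("shared".to_string());
        let (left, right) = Full::split::<1, 1>(full); // two Half<String>
        assert_eq!(&*left, "shared"); // both halves can read
        let full = Full::join(left, right); // 1/2 + 1/2 = 2/2: full ownership again
        drop(full); // only a full owner may drop the value
    }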

I also talked about a possible alternative linear static RC at [2] and [3] (I didn't explain it well at all, happy to elaborate), though lately I suspect that one would need to add a GhostToken-ish [4] mechanism to allow safely reading/writing the shared value, or perhaps a RefCell.

I suspect that something like GhostToken could also solve OP's restriction that types be opaque, if Haskell has something like GhostToken... I could be wrong on that though.

[0] https://github.com/matthieu-m/static-rc

[1] https://github.com/matthieu-m/static-rc/issues/7

[2] "Linear reference counting" in https://verdagon.dev/grimoire/grimoire#the-list

[3] https://www.reddit.com/r/rust/comments/1d06gjo/comment/l61qm...

[4] https://www.reddit.com/r/rust/comments/p5f78s/ghostcell_sepa...


It's a ways out, but an extension to -XLinearTypes, "linear constraints", has been shown to be usable to implement Rust-like ownership. See Section 4 of https://richarde.dev/papers/2021/linear-constraints/linear-c...

Linear constraints are technically just sugar, but without them it's too manual with bare -XLinearTypes.


Austral uses linear types as a main feature https://austral-lang.org/


I've been looking into this, and I suspect that one actually needs surprisingly little to interoperate safely with Rust.

TL;DR: The lowest common denominator between Rust and any other memory-safe language is a borrow-less affine type.

The key insight is that Rust is actually several different mechanisms stacked on top of each other.

To illustrate, imagine a program in a Rust-like language.

Now, refactor it so you don't have any & references, only &mut. It actually works, if you're willing to refactor a bit: you'll be storing a lot of things in collections and referring to them by index, and cloning even more, but nothing too bad.

Now, go even further and refactor the program to not have any &mut either. This requires some acrobatics: you'll be temporarily removing things from those collections and moving things into and out of functions like in [2], but it's still possible.
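
A tiny example of that second step: with no &mut, you move the element out of the collection, push it through functions by value, and move it back in:

    struct Player { name: String }

    fn set_name(mut p: Player, name: String) -> Player {
        p.name = name;
        p
    }

    fn rename(mut players: Vec<Player>, idx: usize, name: String) -> Vec<Player> {
        let p = players.swap_remove(idx); // temporarily move the element out
        let p = set_name(p, name);        // move it through the function
        players.push(p);                  // move it back in (position changes)
        players
    }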

You're left with something I refer to as "borrowless affine style" in [1] or "move-only programming" in [0].

I believe that's the bare minimum needed to interoperate with Rust in a memory safe way: unreference-able moveable types.

The big question then becomes: if our language has only these moveable types, and we want to call a Rust function that accepts a reference, what then?

I'd say: make the language move the type in as an argument, take a temporary reference just for Rust, and then move-return the type back to the caller. The rest of our language doesn't need to know about borrowing, it's just a private implementation detail of the FFI.
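
In (hypothetical) Rust-side terms, the shim would look something like this: ownership flows in and back out, and the borrow only exists inside the call.

    // `rust_checksum` stands in for any Rust function taking a reference.
    fn rust_checksum(buf: &[u8]) -> u32 {
        buf.iter().map(|&b| b as u32).sum()
    }

    // The FFI shim: take the value by move, borrow it only for the call,
    // then move it back to the (borrow-less) caller alongside the result.
    fn checksum_shim(buf: Vec<u8>) -> (u32, Vec<u8>) {
        let sum = rust_checksum(&buf);
        (sum, buf)
    }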

These weird moveable types are, of course, extremely unergonomic, but they serve as a foundation. A language could use these only for Rust interop, or it could go further: it could add other mechanisms on top, such as & (hard), or &mut (easy), or both (like Rust), or a lot of cloning (like [3]), or generational references (like Vale), or some sort of RefCell/Rc blend, or linear types + garbage collection (like Haskell), and so on.

(This is actually the topic of the next post, you can tell I've been thinking about it a lot, lol)

[0] "Move-only programming" in https://verdagon.dev/grimoire/grimoire#the-list

[1] "Borrowless affine style" in https://verdagon.dev/blog/vale-memory-safe-cpp

[2] https://verdagon.dev/blog/linear-types-borrowing

[3] https://web.archive.org/web/20230617045201/https://degaz.io/...


Have you taken a look at the paper "Foreign Function Typing: Semantic Type Soundness for FFIs" [0]?

> We wish to establish type soundness in such a setting, where there are two languages making foreign calls to one another. In particular, we want a notion of convertibility, that a type τA from language A is convertible to a type τB from language B, which we will write τA ∼ τB , such that conversions between these types maintain type soundness (dynamically or statically) of the overall system

> ...the languages will be translated to a common target. We do this using a realizability model, that is, by [setting] up a logical relation indexed by source types but inhabited by target terms that behave as dictated by source types. The conversions τA ∼ τB that should be allowed, are the ones implemented by target-level translations that convert terms that semantically behave like τA to terms that semantically behave like τB (and vice versa)

I've toyed with this approach to formalize the FFI for TypeScript and Pyret and it seemed to work pretty well. It might get messier with Rust because you would probably need to integrate the Stacked/Tree Borrows model into the common target.

But if you can restrict the exposed FFI as a Rust-sublanguage without borrows, maybe you wouldn't need to.

[0] (PDF Warning): https://wgt20.irif.fr/wgt20-final23-acmpaginated.pdf


Thanks for the write-up. My biggest fear is not references, overloads or memory management, but rather just the layout of their structures.

We have this:

    sizeof(String) == 24
    sizeof(Option<String>) == 24
Which is cool. But Option<T> is defined like this:

    enum Option<T> {
       Some(T),
       None,
    }
I didn't find any "template specialization" tricks like you would see in C++; as far as I can see, the compiler figures out some trick to squeeze Option<String> into 24 bytes. Whatever those tricks are, unless rustc has an option to export the layout of a type, you will need to implement them yourself.


You don’t need to determine the internal representation as long as you’re dealing with opaque types and invoking rust functions on them.

As for the tricks used to make both 24 bytes: it’s the NonNull within String that Option then detects and knows it can represent transparently, without any enum tags. For what it’s worth you can do similar tricks in c++ using zero-sized types and tags to declare nullable state (in fact std::optional already knows to do this for pointer types, if I recall correctly).
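
You can see the niche in action with a couple of asserts (sizes assume a 64-bit target):

    use std::mem::size_of;
    use std::ptr::NonNull;

    fn main() {
        // NonNull promises the all-zero bit pattern never occurs, so Option
        // reuses it as the None tag and adds zero bytes:
        assert_eq!(size_of::<NonNull<u8>>(), size_of::<Option<NonNull<u8>>>());
        assert_eq!(size_of::<String>(), 24);
        assert_eq!(size_of::<Option<String>>(), 24);
    }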


Yeah, currently "niche optimization" is performed when the compiler can infer that some values of the structure are illegal.

This can currently be done when a type declares that an integer's range of valid values is incomplete, via the rustc_layout_scalar_valid_range_start or _end attributes (which require #![feature(rustc_attrs)]).

In your example it works for String, because String contains a Vec<u8>, which inside contains a capacity field of type struct Cap(usize), but the usize is effectively constrained to values in 0..=isize::MAX.

The only way for you to know that is to effectively be the rustc compiler, or be able to consume its output.
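
For the curious, a nightly-only sketch of that attribute (range value shown for a 64-bit target):

    #![feature(rustc_attrs)]

    // Values above isize::MAX are declared invalid, so Option can use
    // the leftover bit patterns as its tag:
    #[rustc_layout_scalar_valid_range_end(0x7fff_ffff_ffff_ffff)]
    struct Cap(usize);

    fn main() {
        assert_eq!(
            std::mem::size_of::<Cap>(),
            std::mem::size_of::<Option<Cap>>(),
        );
    }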


Just did, thanks for the suggestion!


It could, but then that List would have to be linear itself, and then the compiler would make sure that you eventually drain the list and destroy each element.

(One caveat is for linears in globals, which aren't implemented in Vale yet. We haven't decided which strategy we're going with there, but TFA talks about some)


A really good question. In short, a linear List (or array, or hash map, etc) will only have two available destroyer functions:

1. drop_into(list, func): It consumes the list, calling the given `func` for each element.

2. expect_empty(list): Consumes the list, panicking if the list isn't empty.


I bet expect_empty could be defined roughly as:

  fn expect_empty(list) {
      drop_into(list, () => panic());
  }
(pardon my made up syntax)

If you want to discourage runtime checks, you could even make the programmer do the above themselves since it's a one-liner anyway.


I see, so in that second case it’s a runtime check rather than a static check. That makes sense.


Depends on the rest of your type system. If your language is capable of tracking whether a list is empty then it can allow dropping empty lists without a runtime check.
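
A sketch of that idea in Rust-ish terms (hypothetical API; real Rust lets you drop anything anyway, so this only shows the shape of the types):

    use std::marker::PhantomData;

    struct Empty;
    struct NonEmpty;

    struct LinearList<T, State> {
        items: Vec<T>,
        _state: PhantomData<State>,
    }

    impl<T> LinearList<T, Empty> {
        fn new() -> Self {
            LinearList { items: Vec::new(), _state: PhantomData }
        }
        // Destruction is only offered on the statically-empty list,
        // so no runtime emptiness check is needed:
        fn dispose(self) {}
    }

    impl<T, S> LinearList<T, S> {
        fn push(mut self, item: T) -> LinearList<T, NonEmpty> {
            self.items.push(item);
            LinearList { items: self.items, _state: PhantomData }
        }
    }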


Almost, but see the replies to IcyWindows' comment for why this unfortunately doesn't work in C++. Also, I would love it if C++ had a way to actually move. That would unlock all sorts of interesting possibilities.


They almost work like this. std::move doesn't actually move an object; it leaves an intact instance... which we still don't know how to destroy (because its destructor is private). This also means that if we have an e.g. Outer struct whose constructor takes some kind of linear type, that Outer has no way to ever relinquish it again, which is a prerequisite of all those desirable linear type / higher RAII patterns.

This godbolt hopefully illustrates: https://godbolt.org/z/qqMnnKbnd though apologies if it's not too clear.

If C++ didn't have its quirks, you'd fundamentally be correct. Higher RAII can almost completely be implemented by this one sentence: make destruction private. (In fact, that's why Vale's `destruct` keyword is file-private)
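
In today's Rust the closest approximation is dynamic rather than static: keep the struct's field private and make Drop panic, so the consuming function is the only sanctioned way out. A sketch:

    mod obligation {
        pub struct Obligation(());

        pub fn make() -> Obligation {
            Obligation(())
        }

        /// The one sanctioned "final destination" for an Obligation.
        pub fn fulfill(o: Obligation) {
            std::mem::forget(o); // skip the panicking Drop below
            println!("fulfilled!");
        }

        impl Drop for Obligation {
            fn drop(&mut self) {
                panic!("Obligation discarded without being fulfilled");
            }
        }
    }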


There are a couple technical reasons:

1. In RC and tracing GC languages, a linear object would need exactly one special "linear" reference to it, which would be responsible for eventually "consuming" (though not deleting) the object by handing it to its final destination function. (You can tell, I'm trying real hard to not say destructor here)

2. In single-ownership languages like C++, Rust, etc. we've been conflating the destructor to handle two things: fulfilling the object's final purpose, and handling exceptions/panics. It seemed convenient at the time, but it also prevented linear types.

But honestly, I think we were just stuck in a local maximum. There was no reason to change those points because we didn't know about higher RAII, and we didn't know about higher RAII because those two points prevented linear types.

But with more of us exploring this new area (Vale, Austral, maybe even Mojo!), I daresay we'll one day see linear types and higher RAII entering the mainstream. Especially if someone can come up with a better name!


> In single-ownership languages like C++, Rust, etc. we've been conflating the destructor to handle two things: fulfilling the object's final purpose, and handling exceptions/panics.

Well, from a transactional point of view, a good design for C++ classes is to have implicit destructors (i.e. a conventional C++ destructor) roll back any change, while explicit ones would commit. This works very well with exceptions.

What C++ lacks is the ability to statically prevent you from calling an explicit destructor twice, but that should be doable, for example, in rust.
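
A sketch of that split in rust: Drop rolls back, and an explicit consuming commit() statically can't be called twice because it takes self by value.

    struct Transaction {
        committed: bool,
    }

    impl Transaction {
        fn begin() -> Self {
            Transaction { committed: false }
        }

        // Consumes self, so a second call won't compile.
        fn commit(mut self) {
            // ... write the changes ...
            self.committed = true;
            // Drop still runs here, but sees committed == true.
        }
    }

    impl Drop for Transaction {
        fn drop(&mut self) {
            if !self.committed {
                // ... roll back the changes ...
            }
        }
    }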


Thanks for the answer! Not gonna lie, I don't have the background to understand all your articles, just enough to see that you're pushing the boundary on what's possible and it's really cool!


Thanks! And yeah I get that a lot >_> You're always welcome to swing by the discord where I can give a less arcane explanation! (https://discord.gg/SNB8yGH)


How about COO - Close Object Obligation? And you could sound like a pigeon saying "coo" during talks explaining it.


Hah I like this one!

PS. Congrats! https://verdagon.dev/blog/easter-egg-notes


I'm glad you brought inheritance up! This rule has solved it nicely in my experience: if a struct is linear, then it can only inherit linear interfaces. And implementation inheritance works because it's really just a combination of interface inheritance + composition + method forwarding, none of which seem to have any particular trouble with linear types.

Even if a language really wanted to have linear types in reference-counted objects, I don't think we'd see cycles in practice. Generally, for a reference-counted object to contain a linear type, we'd need exactly one reference to be the special "linear reference" (think a linear flavor of unique_ptr) which has ultimate responsibility for "consuming" (albeit not deleting) the contents. Making a unique_ptr cycle is much harder than making a regular RC cycle, so I'm not too worried, even if it is theoretically possible.

Re: trivial moves, I'm not sure what you mean. The system works through the entire program, not just local variables. Also, the `onException` method would handle panics too. Hope that helps!


As for inheritance - I'm increasingly convinced that it is a useful idea to have an explicit `Object` (or `BaseObject`, to deal with languages where primitives are special) class at the root of the hierarchy, rather than relying on some nebulous type-checker-only `Top` type (sometimes called `Any`, though that usually has additional footguns). One thing I do think is a mistake in languages that have an `Object` class is that it is often not abstract. In particular it is often used for sentinel-like objects (with no contents - the lack of a name is annoying), to say nothing of Javascript's mess. Is "top-most interface" really a sufficient concept? As much as I've come to love late-defined traits (among others, no more PIMPL), I'm not convinced we can really afford to stop thinking in terms of an OO tree, or even to give up on multiple inheritance.

You don't need RC to have a cycle - it already occurs with a pair of structs that refer to each other using `mut Option[Unique[T]]`. Rust seems to have settled for "document it aggressively, and otherwise ignore it" ... yet code very similar to this is actually very useful, so (with a few extra layers of nesting) it often does appear spontaneously in the wild. But the whole point of linear types is that you can no longer just say "leaks are okay I guess".

Moving even a small 64KB buffer isn't actually trivial even if your type system thinks it is by composition, but most linear-ish type systems seem to aggressively use destructuring (possibly via pattern-matching) which relies on moving the members. Not to mention all the other actually-useful things C++ move constructors can do (update peer objects, for example). While "move-by-default" is a definite winner, "trivial-move-only" is not. The question of "what exactly happens so I can move out of a language-defined Option[T], leaving None ... vs how can a user-defined variant type choose to do something similar" is significantly complicated by nontrivial moves.


> Inheritance is no longer a simple part of the language. And trust me, you do want inheritance.

I don't miss it.

> languages where primitives are special

Isn't the object-primitive separation a design flaw? (e.g. c++ can have lists of int, but Java cannot. In Java, sometimes a boolean has two values and sometimes 3, etc.)

> root of the hierarchy

Why a hierarchy? All beginner Java devs at some point attempt to put their domain objects into some kind of hierarchy before realising the futility of it. Especially in game dev.

Sometimes I want to operate on objects which can quack(), and sometimes I want to operate on objects which can toBytes(), and sometimes I want to operate on objects which can do both those things. But why must they have a common ancestor? Why can't they simply exist in disjoint families?

> I'm increasingly convinced that it is a useful idea to have an explicit `Object` (or `BaseObject`, to deal with languages where primitives are special) class at the root

This is one of those things I dislike about Java. Object has a bunch of included behaviours that I do not want. equals() and hashCode() have the wrong default behaviour, and toString() isn't much better. I would much prefer the compiler to block me if I try to equate two things which haven't defined equality, rather than slip in its own implementation.

