A final proposal for Rust await syntax (boats.gitlab.io)
352 points by gaogao on May 6, 2019 | hide | past | favorite | 261 comments

I figure the reason the proposed syntax looks gross is because other languages have been using a prefixed await for many years. The Rust decision seems well thought out though, enough to make me wonder if perhaps other languages have been doing it wrong.

My first experience with futures was in the form of QFuture (https://doc.qt.io/qt-5/qfuture.html), and there you call .result() to block and wait for the result. This postfixing is intuitive.

Rust's "?" operator is neat. I don't know if I've seen a postfix operator anywhere else yet. It's certainly possible Rust got this language aspect wrong, but as a user it feels pretty right.

Given the existence of the "?" operator, and the fact that futures resolution via postfix is intuitive (and not a new thing either), IMHO the best course of action probably is postfix. I'm not sure if literally ".await" is the best, but it's in the ballpark.
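The chaining argument is easy to see with `?` alone. A minimal sketch (the function name is made up):

```rust
use std::num::ParseIntError;

// Postfix `?` keeps a pipeline reading left to right, in evaluation order.
// A prefix error operator would force nesting around the parse call instead.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    Ok(s.trim().parse::<i32>()? * 2)
}

fn main() {
    assert_eq!(parse_and_double(" 21 "), Ok(42));
    assert!(parse_and_double("oops").is_err());
    println!("ok");
}
```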

IMO it's awkward for the following reasons:

(1) It looks like a field access, and field accesses are "cheap", method invocations aren't. This breaks the 'conceptual weight' model when you look at a line of code. Field accesses are simple and understood. It becomes impossible to gauge the 'cost' of a line by looking at it unless you're intimately familiar with the workings of 'await' and there's no visual cue.

(2) If we create new such postfix operators in the future we'll have to break yet more source code by reserving yet more field names in all structs in all existing code. This whole postfix thing is proving itself nice to work with so I'd prefer a path of less destruction if we do lean more into it.

(3) It's a one-off that's different from everything else in the language that's being shoehorned into existing syntax.

IMO, it's either break existing syntax by introducing something new or break semantics of existing syntax. I'll always lean towards the former over the latter. I like it being postfix, it's very much in the '?' family of ergonomics which has shown itself quite nice to work with. I'd prefer postfix "@await" or "!await".

> (2) If we create new such postfix operators in the future we'll have to break yet more source code by reserving yet more field names in all structs in all existing code.

This is already the case; all keywords in Rust are reserved keywords, not contextual keywords, so you already can't have a field named, say, `return`, despite the fact that a hypothetical `foo.return` would otherwise be unambiguous. Further, with the Edition system and rustfmt, reserving keywords in this manner shouldn't be disruptive.
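As a side note, the disruption is further softened by raw identifiers: since the 2018 edition you can spell a reserved keyword as a name with an `r#` prefix. A quick sketch:

```rust
// `return` is a reserved keyword, but `r#return` is a legal identifier,
// so reserving a keyword doesn't permanently remove the field name.
struct Response {
    r#return: i32,
}

fn main() {
    let r = Response { r#return: 7 };
    assert_eq!(r.r#return, 7);
    println!("ok");
}
```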

> (3) It's a one-off that's different from everything else in the language that's being shoehorned into existing syntax.

The paragraph at the end of the OP suggests that the language team is seeking to propose allowing certain other keywords (specifically those that can return useful values) to be used in postfix position (which implies that `await` could then be used in prefix position as well), which could e.g. (I'm spitballing here) replace the `.for_each()` method on iterators with just `.for`.

> This is already the case; all keywords in Rust are reserved keywords, not contextual keywords, so you already can't have a field named, say, `return`, despite the fact that a hypothetical `foo.return` would otherwise be unambiguous. Further, with the Edition system and rustfmt, reserving keywords in this manner shouldn't be disruptive.

Well, the keyword could instead be "@await" instead of "await" meaning that no regressions would be introduced with field names, either now or in the future, right? Unless I'm mistaken, the '@' character isn't a valid prefix for an identifier.

> The paragraph at the end of the OP suggests that the language team is seeking to propose allowing certain other keywords (specifically those that can return useful values) to be used in postfix position (which implies that `await` could then be used in prefix position as well), which could e.g. (I'm spitballing here) replace the `.for_each()` method on iterators with just `.for`.

That's fine, I guess, but then we'd have yet more keywords we can't use. However, "@for" would not break anything. FWIW the way I read that section, the proposal enjoys little support.

Sure, the Rust developers could implement it such that `@await` worked, but `await` is already a reserved keyword as of the 2018 edition, so in this particular case it no longer makes any difference. For that matter, they could also have made it such that `.await` or `.anyotherkeyword` become legal in Rust 2015 (since keywords don't currently exist in that position, it wouldn't break any code). However this misses the point that the Rust developers have made a conscious philosophical decision to make their keywords reserved and not contextual. Reserving a keyword vs. contextualizing a keyword is a philosophical choice, not a technical one.

I'm not sure that the idea of contextual vs non-contextual keywords is relevant here unless I've misunderstood you. I assumed that the 'async' keyword was reserved in 2018 edition to ensure that no matter how they chose to implement it (prefix, postfix, whatever) they had the flexibility to pick after the fact. I don't think they closed the door on a new, non-breaking non-contextual operator, although I wasn't a party to any of those conversations.

Releasing 'async' and capturing '@async' would be a 2015- and 2018-edition-safe change, unless there's something I've overlooked. It shouldn't break any 2018-edition code to make the 'async' keyword available again, only to capture a new, currently-invalid one, and I assume that's part of why they took this approach. I feel like the ball's still in the air until someone writes valid 2018-edition code using the keyword as defined in this proposal. This would also set a precedent for not reserving additional currently-valid keywords for postfix operations.

I'm proposing releasing 'async' and capturing '@async' as a new non-contextual composite keyword. In the state the language is in right now, this change should keep the grammar context-free (except for raw string literals, as is the case now, afaik).

> there's no visual cue.

There is. Your editor is going to show the await keyword in a different color. (Unless you happen to be Rob Pike.)

There's nothing else in Rust that requires me to use an editor for a semantic cue, is there? Why start here? You are correct, of course; it's more that Rust is just as usable right now with or without syntax coloring, and this small change breaks that.

There's still a visual cue: you'll be in an async function, and it's highly likely there'll be a question mark after it for control flow (most futures return Result).

Postfix in itself is fine, although "await" as a verb reads best in a prefix position (cf. "try", "match", "yield", "loop", "return"). The reason the "dot await" notation looks gross is first and foremost because it coopts the field access syntax—familiar from almost every language in the C family—for a purpose entirely orthogonal to field access.

Good point. Maybe the best approach would be postfix but with a more fitting keyword than await. I wonder if alternative keywords were considered rather than only operators?

> it coopts the field access syntax [...] for a purpose entirely orthogonal to field access

You mean like C++'s a.this_method_is_a_function_call(...) did? Not to be confused with a.function_pointer_field(...), despite having literally identical syntax.

Personally, I think they should have introduced a dot-keyword instead of postfix `?`; it's too easy to miss, especially for something that hides an implicit return.

Meh, method calling is a fairly straightforward extension to field access, not something entirely different. Incidentally, in Rust you do have to disambiguate between calling a method and invoking a function through a function pointer field.
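The disambiguation mentioned above looks like this (the `Widget` type is made up for illustration):

```rust
struct Widget {
    // A field holding a function pointer...
    handler: fn() -> i32,
}

impl Widget {
    // ...and a method with the same name.
    fn handler(&self) -> i32 {
        1
    }
}

fn free_fn() -> i32 {
    2
}

fn main() {
    let w = Widget { handler: free_fn };
    // `w.handler()` resolves to the method...
    assert_eq!(w.handler(), 1);
    // ...while calling the field requires explicit parentheses.
    assert_eq!((w.handler)(), 2);
    println!("ok");
}
```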

> I figure the reason the proposed syntax looks gross is because other languages have been using a prefixed await for many years. The Rust decision seems well thought out though, enough to make me wonder if perhaps other languages have been doing it wrong.

Lua[1] and C++/Boost[2] also use "yield"; it's not that uncommon, and there's nothing new here.

[1] https://www.lua.org/pil/9.1.html

[2] https://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/ov...

It's not the question of keyword chosen, but rather where it should go.

I wrote a lot of async/await code in C#, where it's a prefix operator, and I think what they did makes more sense. C# syntax works great when all you're doing is awaiting on a single call per line, but as soon as you start doing e.g. method chaining, you have to put parentheses everywhere, and it gets real messy real fast. This looks cleaner.
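A hedged sketch of the difference (the `Client`/`Response` types and the busy-polling `block_on` are made up for illustration; real code would use an executor from a runtime crate):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Toy executor: busy-polls with a no-op waker, which is enough for
// futures that complete immediately. Real code would use a runtime.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    unsafe fn vt_clone(_: *const ()) -> RawWaker {
        raw()
    }
    unsafe fn vt_noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(vt_clone, vt_noop, vt_noop, vt_noop);

    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

struct Client;
struct Response(i32);

impl Client {
    async fn get(&self) -> Response {
        Response(40)
    }
}

impl Response {
    async fn body(self) -> i32 {
        self.0 + 2
    }
}

async fn demo() -> i32 {
    let client = Client;
    // Postfix chains in reading order, with no extra parentheses:
    client.get().await.body().await
    // A C#-style prefix form would nest: `await (await client.get()).body()`.
}

fn main() {
    assert_eq!(block_on(demo()), 42);
    println!("ok");
}
```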

At first I was firmly in the prefix camp, until I read one of the (massive!) GitHub issues on the subject and came around to postfix.

I'd _much_ rather use something like `?` than `.await`, though. Using a keyword just feels wrong in a few different ways.

Can you share said issue?

I don't know about GitHub, but https://paper.dropbox.com/doc/Await-Syntax-Write-Up--AcIbhZ1... summarizes the different tradeoffs pretty well.

It seems they've settled on a postfix approach. I can't help but feel this is a mistake. Rust is already a weird language to come to from the likes of Python, Java, or JavaScript, and I fear this relatively unknown approach of putting the operator at the end will add another confusing aspect to the language.

I feel like the committee has worried about the wrong things when considering postfix vs. prefix. Their main concern seems to have been the extra syntax required to make prefix work with `.` and `?` (i.e., `(await foo).bar()`), but while avoiding this makes for a more streamlined language, I'm concerned they've missed the impact this will have on new people.

Or maybe I'm just a Luddite :)

> Rust is already a weird language to come to from the likes of python, Java, or Javascript

I think that's a great reason to evaluate new syntax/features on how they will be used, as opposed to how they will attract (or not) new users. New users are already going to be expecting things to be different, and so making this one thing more familiar probably won't make much difference. On the other hand, code lives for a very long time once it's been written, so making the right choice in terms of readability/maintainability seems like the right move.

While that's true on an individual basis, you can't invoke it for every new difference, because at some point someone coming to Rust and seeing a heap of weirdness is going to be less likely to adopt the language. I'm mostly a bystander, since I've written some Rust but not very much, but I think Rust is neat and I don't want it to become an esoteric language.

If it's an effective solution it will only seem weird until it's proven to be more useful than the status quo approach.

I read this post and thought 'await as a prefix is intuitive!', but that's only because I saw it first in C#.

Rust already has an immense learning curve despite its ergonomics, I say it should continue to experiment. Same way Haskell does.

I do wonder if `.await` could have a sigil equivalent, the same way `try!` got `?`. `…` could work if it isn't already taken for something like spread args.

> 'await as a prefix is intuitive!', but that's only because I saw it first in C#

The analogy made in the post, with the possibility of “expression-oriented” keywords like .match, suggests that `match expr {..}` is equivalent to `expr.match {..}`. So by analogy `await expr` and `expr.await` are also similar.

So why was the "prefix await" syntax dismissed right out of the gate? It's the option whose rejection needs the strongest argument, because 1) it is familiar and 2) `match` is currently a prefix keyword, so `await` could presumably be one too.

Edit: only the prefix variant clearly signals a break in control flow; the dot-await syntax hides it.

> If it's an effective solution it will only seem weird until it's proven to be more useful than the status quo approach.

Perl was a very effective solution. Still lost. And I say that as a long time perl5 nut.

Rust is, indeed, just syntactically insane to a new learner (honestly, worse than perl ever was). And it's getting rapidly worse.

I've used perl5 for a long time (although these days I've replaced it with python in my toolchain, although I have my fair share of complaints about python as well) and I think your comparison is unfair.

Rust's sometimes complicated syntax is made necessary by the very concept of the language. You need to be able to express lifetimes, you need to be able to express generics, you need to be able to strongly type lambdas etc... Of course the syntax is going to be a bit more complicated than, say, Common Lisp where everything is dynamic and garbage collected and you can let the runtime figure things out.

Perl 5 was syntactically insane because that's how the authors decided to do it. Effectively it didn't let you express things that were impossible in Lisp, Python, Ruby or JS. It's just a design choice to lean heavily on sigils and magic variables to favor short code even if it can be cryptic to people not intimately familiar with the language. I doubt anybody who hasn't used perl before will be able to figure out exactly what this does (and I'd argue that it's a fairly simple and idiomatic script, not some heinously cryptic code golf):

    while (<>) { chomp(); s/f.o/bar/; print; }
Rust is noisy because it has to represent concepts that simply don't exist in other programming languages.

I'd go even further: Rust does a better job than most languages at letting you only explicitly mention things that need to be explicitly mentioned. Thanks to the very powerful type inference you can write very tidy code, IMO much more so than in C where I have to explicitly type every intermediary variable that I would use for instance.
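A small example of the inference in question; everything flows from the one annotation on `squares`:

```rust
fn main() {
    // The closure's argument type, the iterator adapter's item type, and
    // the concrete collection are all inferred from the single `Vec<i32>`.
    let squares: Vec<i32> = (1..=5).map(|x| x * x).collect();
    assert_eq!(squares, vec![1, 4, 9, 16, 25]);
    println!("ok");
}
```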

> Rust is noisy because it has to represent concepts that simply don't exist in other programming languages.

In a few cases having to do with lifetime management and the borrow checker, that's certainly true.

But the example at hand is fundamentally just cloning ideas from Javascript.

The point is not that particular feature. It's how this new feature interacts with a bunch of existing language features.

Actually JS just cloned those ideas from C# and Python.

... and when you get down into the details, there are significant differences in what Rust is doing as opposed to these languages, even if the high level idea is roughly the same.

> Rust is, indeed, just syntactically insane to a new learner (honestly, worse than perl ever was).

Can you elaborate?

I'll bite. For context, I'm very much a rust beginner with probably a couple dozen hours at most. I find that anything having to do with lifetime parameters is hard. I think this is even worse than C++ templating in some sense since it behaves differently than type templating.

Otherwise, I think the syntax is fairly reasonable.

That's legitimate; lifetime parameters are a new concept that aren't found in any other language. They naturally require a learning curve. Lifetime elision—making it so that you don't have to see lifetimes in most cases—is the main way we tried to make them easier.
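For readers following along, elision means the common cases need no lifetime annotations at all. A sketch:

```rust
// Elided: the compiler infers that the returned &str borrows from `s`.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

// The fully explicit form the elision rules expand to:
fn first_word_explicit<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    assert_eq!(first_word("hello world"), "hello");
    assert_eq!(first_word_explicit("hello world"), "hello");
    println!("ok");
}
```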

My advice would be to completely forgo lifetime handling until much later. It's a difficult part of the language to master, but it gets much easier with practice. Get well acquainted with the rest of the language and you'll pick lifetimes up with less trouble.

Not even if you paid me to have that fight, no.

Edit: OK, one tiny crumb: https://doc.rust-lang.org/book/ch06-01-defining-an-enum.html -- Read through that thinking about how other languages express the same fundamentally simple concepts, and consider how Rust invents new syntax for basically everything, and then sticks it all together by re-using syntax from other areas. Why does a parametrized enumerant look like a function call and not a struct? Why is "" not a String? Why do parametrized enumerants look in rvalues like C++ constructor arguments, which you promised earlier Rust didn't have?

Don't tell me the answers to those questions. I know the answers. Just recognize that those are questions that every new reader to that very early page in your docs is going to have. And... did they even need to be questions in the first place?

> Why does a parametrized enumerant look like a function call and not a struct?

Because tuples are considered good things to have in a modern language. Structs and enum variants are just named versions of tuples (denoted with parentheses) or records (denoted with braces—unnamed records don't exist in Rust). To follow languages like Python, they're written "(1, 2, 3)". (It would be weird if tuples were written "{1, 2, 3}", right?) Therefore, a tuple named Foo is written "Foo(1, 2, 3)".

Removing tuples from the language was actually considered at one point, but the community (including me) really wanted them to stay, so they stayed.
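Concretely (a made-up enum, just to show the two variant shapes):

```rust
// A tuple variant reuses tuple syntax; a struct variant reuses record syntax.
enum Shape {
    Circle(f64),
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    // Constructing a tuple variant looks like a call, as discussed above.
    let c = Shape::Circle(1.0);
    let r = Shape::Rect { w: 2.0, h: 3.0 };
    assert_eq!(area(&r), 6.0);
    assert!((area(&c) - std::f64::consts::PI).abs() < 1e-9);
    println!("ok");
}
```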

> Why is "" not a String?

Because it's a str, and there is a difference between str and String. Not having a string-view type would be a big mistake in a systems language like Rust. Note that "" is not a std::string in C++ either (though there is an implicit constructor that can make it one).

In fact, the reason why we have both "str" and "String" is precisely to address your criticism: if we named "str" "string_view" or something, then people would be really surprised that a literal like "" isn't a string.
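The distinction in code (a minimal sketch):

```rust
// A &str is a borrowed view; a String is an owned, growable buffer.
fn len_of(view: &str) -> usize {
    view.len()
}

fn main() {
    let literal: &str = "hello"; // &'static str: a view, no allocation
    let mut owned: String = literal.to_string(); // heap-allocated, growable
    owned.push_str(", world");
    // &String coerces to &str, so view-taking APIs accept both.
    assert_eq!(len_of(literal), 5);
    assert_eq!(len_of(&owned), 12);
    println!("ok");
}
```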

> Why do parametrized enumerants look in rvalues like C++ constructor arguments, which you promised earlier Rust didn't have?

Because saying Rust has C++ constructors would just make things more confusing: people would then expect them to behave like C++ constructors. It's less confusing to say that Rust doesn't have constructors the way C++ does.

> And... did they even need to be questions in the first place?

I don't see any alternatives to the above that would make the language better instead of worse.

> (It would be weird if tuples were written "{1, 2, 3}", right?)

That is the case in Crystal. Doesn't feel weird - if anything it is consistent in that ()s always denote something to do with execution. Both ()s and {}s have multiple different other syntactic usages though, so it is not as if either is optimal.

Most languages that offer tuples either use () or just treat the comma as a "tuple operator" anywhere it doesn't already have another meaning. And this goes back a long time.

Function calls can then be treated as functions from a single tuple argument, as is idiomatic in SML.

Like I said the first time, I know all that stuff. I'm trying to express problems a new user is going to have with this syntax, and "but our reasons are really good!" does nothing for them.

It's not inconsistent to hold the simultaneous beliefs that "trying to fix Rust's syntax would make it worse" AND "Rust's syntax is insane".

And given those beliefs, maybe adding a postfix operator to express what everyone else does with a function call was a poor call.

It is inconsistent to hold those beliefs. Because "insane" implies that the Rust team made wrong decisions with the syntax, in fact so obviously wrong that no "sane" person would make them. If you acknowledge that the syntax decisions make sense, then they aren't "insane".

Meh. That's just playing semantic games. I'm using "insane" in IMHO the more common colloquial sense of "inscrutable; hard to understand; inconsistent with existing paradigms". I'm absolutely not questioning your mental health.

Nothing about the soundness of your design means that Rust isn't hard to learn, is all I'm saying. And per the freakshow all around us, it's getting harder, not easier.

I've never understood the use of the word "semantic" in the dismissive sense. Meaning is everything when parties are trying to communicate. This thread finally got to the heart of the miscommunication, and instead of celebrating you say "meh." I don't get it.

You need to tell us which other language you compare it to, then. AFAIK most mainstream languages (the ones these hypothetical new readers are most likely to come from) don't have anything like Rust's enums. C++ does have the difference between C-style "const char *" and "std::string", which is a bit similar to &str and String, although &str is obviously massively safer and more ergonomic.

If they come from Scala or Haskell then yeah, sure, but I'd argue that most programmers these days are likely to come from JS, Python, or Java/C#/C++, and copying Scala or Haskell syntax would look even more foreign to them. Rust instead preferred a syntax inspired by C++.

>Don't tell me the answers to those questions. [...] And... did they even need to be questions in the first place?

So you know the answers but you still think it shouldn't be done that way?

Having things like &str instead of forcing everything into a dynamic String is Rust's killer feature. It means that you can write safe code while not being forced into adding any overhead. You pay for what you use, to quote the C++ motto. That means that as Rust matures it becomes a good competitor for replacing C and C++.

That is exactly the way Haskell does "parametrized enumerants", and in C terms, str is like a pointer to string data, while String is like an owned, heap-allocated string. What's "new" about any of this? C++ has the same thing going on with string and string_view.

If anything, Rust's managed to be a whole lot simpler than other languages. Python, JavaScript and C++ have some of the worst string encoding stories I've ever seen. And most popular languages don't even have sum types, so it's like they have an "and" and no "or".

> Why does a parametrized enumerant look like a function call and not a struct?

I don't think this is an "invented new syntax"; plenty of languages with built-in support for sum types do it this way. Of course, most mainstream languages don't have built-in support for sum types, but that's an entirely unrelated point to Rust's syntax for them.

> Just recognize that those are questions that every new reader to that very early page in your docs is going to have.

I doubt all new readers will have those questions. It depends on their background. For example, plenty of languages use similar syntax for sum types (Scala, Swift, ML, Haskell, etc.).

Perl was designed to be familiar to C programmers.

No, Perl was designed to be familiar to Unix shell (in its various variants), sed, and AWK programmers.

> Perl was a very effective solution. Still lost.

For some version of "lost." It lost to $LANGUAGE_OF_THE_MOMENT for web development, but Perl is still a great text-munging and glue language, and I'll bet I'm not the only one using it for that. It didn't take over the world, but I'll bet that most people interact with a bit of Perl every day.

I agree that Rust is syntactically insane. Worse, while a Shell-and-C programmer would have some intuition about Perl syntax, a C++ programmer will almost start from scratch in Rust.

I feel like unusual language syntax isn't an issue, as long as the syntax is understandable/readable once the programmer learns the syntax.

It doesn't take too long to pick up and become comfortable with a new syntax in a programming language.

Imo, the reason some esoteric languages fail is not purely that their syntax is obscure but that, even once you learn the syntax, the code is still hard to understand (example: Brainfuck).

With Rust, once you learn the new syntax, you can understand Rust code with relatively little effort.

> Imo, the reason perl failed was not because the syntax was obscure but because even once you learned the syntax, it was still hard to read perl code.

IMO, the reason Perl failed is that Perl 6 (which is a fairly awesome language in its own right) sucked up all the energy in moving Perl forward for too long and delivered too late, while competing languages kept providing usable progress (and avoided, as a consequence, looking dead, which may be as important as the actual usable progress itself.)

New users expect things to be a bit different, and assume there are good reasons for any change of convention.

EDIT: this might very well be the case here though, especially future extension for similar operators and chain-ability.

Right, which is why I mentioned that focusing on readability/maintainability is the right move in this case. I think we're in agreement.

> Rust is already a weird language

Some folks came up with the idea of a "weirdness" budget - arguing that Rust had come close to using it all up.

>I feel like the committee has worried about the wrong things

There is no "committee" really, and all the worries have been expressed and weighed. The debate over this syntax has been going on for months and had huge community involvement.

It's all been summed up in this document:


I was definitely in the prefix camp before I read that document.

Thanks, I'd read the intro but missed that.

I don't really see how they refute this argument, though. Clearly they state arguments both in favor of postfix and prefix, but it feels like some framework for weighing the arguments is missing; otherwise it just comes down to how individuals weigh them.

That document isn't about arguing one way or the other; it lists all the different options and their pros/cons. The post in this thread is the current stance of the lang team, letting everyone know they're going to make the final decision later this month.

Boats' keynote at rust-latam goes into some more detail as well.


I'm just a casual observer here, but it seems like just about every argument and position on the subject has been iterated across Github, internals.rust-lang.org, the IRC/Discourse/Zulip chats, Reddit, and now HN.

This post by boats is just the next stepping stone alerting the community to the direction the team is leading and that it's going to be over soon.

Thanks for the link. As an outsider I also thought it seemed weird given so many other languages use prefix, but it makes sense with this framing that the main concern is the interaction with the ? operator.

> Rust is already a weird language to come to from the likes of python, Java, or Javascript

Every time I see a statement like this, I remember a (paraphrased) statement from Rich Hickey: "[musical] instruments are made for people who can play them!".

I think unless you are specifically designing a beginner language (like Scratch), you should not take into consideration "ease of use" or "familiarity" arguments.

Counterpoint: APL and perl vs. python. Python did take usability into account and familiarity. UX is important. Developers are users. As a language (or in general tool) designer you have a responsibility to make that tool easy to use, and difficult to misuse.

Familiarity is a big part of that, although ease of use is bigger (which is probably why python got the traction it did despite being unfamiliar to people who came from braces-land).

Different languages for different purposes. AFAIK, python is made for being easy to use and write, Clojure (Rich Hickey) is a pragmatic language for getting things done. Different languages will focus on different things and I think that's a good thing.

If I got to choose between the Clojure I know today and a Clojure designed for ease of use and familiarity, I'm pretty sure I would choose the former.

Just like APL is much better for some tasks compared to Python, and vice versa.

> is a pragmatic language for getting things done

What does this mean though? Empirically speaking, a lot more "gets done" today using python than clojure. So, perhaps being easy to use is more pragmatic than whatever Rich means.

> Clojure designed for ease of use and being familiar

I.e. Lisp, basically.

Counter to your counterpoint: APL did take usability into account. Dr Iverson got annoyed with how inconsistent and hard to read normal math notation was, and how many problems that caused in its usability, and invented Iverson notation to fix that - a tool to be usable by people writing on blackboards to show other people mathematical ideas.

Years later, it was used at IBM to describe what the IBM 360 computer would do. After that, it got turned into APL\360 in the mid-1960s (i.e., there weren't all that many programming languages to be familiar with back then). The book "APL\360: An Interactive Approach" by L. Gilman, 1970, has a foreword which says:

APL is clearly gaining acceptance at this time as a computer programming language. This acceptance is not hard to understand. APL is one of the most concise, consistent, and powerful programming languages ever devised. (UX is important!)


From a pedagogical standpoint APL has a number of advantages. The material can be taught and used in small pieces. A student can be trying his hand on simple operations after five minutes of instruction. What he doesn't know won't hurt him (a statement that cannot be made about most other languages). If he tries something illegal such as division by zero or adding a number and a letter, he gets an understandable error message and is free to try something else. Nothing the user can do will cause the system to crash. (Usability!)


It is indubitably true that a "clever" programmer can use these advanced operators in such a way as to produce an "opaque" program, that is, one so compact and concise as to be nearly impossible for anyone else to understand. Whatever else may be said about such programs, which are questionable in many contexts anyway, they should not be used in demonstrations of APL. Experienced programmers who have seen APL demonstrated in terms of the fantastic cleverness angle sometimes criticize the language as being hard to understand, when their criticism more properly should have been directed at the demonstrator. Such misplaced cleverness is not to be found in this book. All operators are thoroughly covered, but there is no attempt to show off the ingenuity of the authors in writing ingeniously condensed programs.

> APL did take usability into account.

Perhaps usability for a certain, very specific subset of people (namely those who are writing code on a whiteboard?). But math notation is not programming; they have different needs.

> UX is important!

Note that concise and powerful also apply to perl. Concise, in programming does not equal good UX. Often they're antithetical. (This also doesn't mean that verbose is "good UX" either. Programming language design, like programming, is about finding the right abstractions and providing them.)

Compare ABC (a language Python was heavily inspired by) using `:` to declare a function (i.e. `def my_f():`) despite it being unnecessary; the language was parsable without it. But they did user studies, which found that the colon helped readers understand the blocks better.

> Usability

Granted, this may have been an improvement in usability. APL, being a higher-level language, avoids many of the problems that C and co. had, and I've never used BASIC, Fortran, or any of the other early languages that were popular at the same time.

That said, I have to strongly disagree with the last paragraph. A good tool discourages misuse. Inscrutable programs are misuse ("readability counts"), and a language that encourages, or doesn't discourage, inscrutable programs isn't as good as one that does discourage them (at least if you consider inscrutable programs to be misuse, and it appears that you, I, and the APL authors all agree on that).

> Programming language design, like programming, is about finding the right abstractions and providing them.

Which APL tries to do with neat and composable functions.

Sum a list of "values": APL: +/values Python: sum(values).

Product of a list: APL: ×/values Python: import operator, functools, then reduce with a lambda function, or write a loop.

Add constant to a list: APL: 1+values Python: [1+x for x in values]

Reverse a list: APL: ⌽values Python: list(reversed(values)) but you have to care if you want an iterator or a list.

Add two lists elementwise: APL: values1+values2 Python: [x+y for x,y in zip(values1, values2)]

Take five from the front: APL: 5↑values Python: values[:5]

In APL this syntax generalises to multidimensional arrays. In Python none of these wildly different syntaxes do, nor do the magic sum/max/min functions.

In APL these are simple instructions to the user, and can be implemented fast by the runtime. In Python these are more complex patterns to learn and run slower, if you want them to work fast you have to switch again to NumPy.
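For reference, a stdlib-only Python sketch of the one-liners being compared above (no NumPy; `math.prod` needs Python 3.8+, and the sample lists are illustrative):

```python
import math

values = [3, 1, 4, 1, 5]
values2 = [9, 2, 6, 5, 3]

total = sum(values)                                  # APL: +/values
product = math.prod(values)                          # APL: ×/values (math.prod, 3.8+)
plus_one = [1 + x for x in values]                   # APL: 1+values
rev = list(reversed(values))                         # APL: ⌽values
pairwise = [x + y for x, y in zip(values, values2)]  # APL: values1+values2
head5 = values[:5]                                   # APL: 5↑values
```

As the comparison above points out, these spellings don't share a common pattern, whereas the APL forms are all built from the same few primitives.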

I am not all-in on APL, but this is so much more powerful, concise, consistent and composable than Python - the shining example of a beginner-friendly language - for basic data munging, it's like seeing Python after using Java and wondering why Java has to be so wordy and verbose to get anything done. Why does Python have to be so wordy and verbose and limited to get basic things done? Why is this large mess of inconsistent symbols and calling conventions and library functions and magic functions and sigils considered clean and usable compared to APL's very simple repeatable patterns and careful attentive use of composable symbols?

Your examples, with (and some without) numpy arrays:

values.sum(), values.prod(), 1 + values, values[::-1], values1 + values2, values[:5]

Yes, comparing the "array programming language" with a general purpose programming language when all your examples are array operations is going to make the general purpose language look funny. But I don't want a DSL for array operations I want a programming language. And when you aren't doing array-math, APL isn't so great.

Show me how you present those arrays as a graph in APL. In "wordy, verbose" python it's

    import matplotlib.pyplot as plt
    plt.plot(values)
    plt.show()
Is visualizing the data you're manipulating so basic?

Python chooses extensibility (numpy) over domain specificity, and you get the same power. (Really, the numpy examples aren't any worse than yours; in fact, to someone unfamiliar, `sum` and `product` are likely clearer than `+/` and `×/`.)

Extensibility and composability is a better abstraction.

> Extensibility and composability is a better abstraction.

Say that our modern day programmer is comfortable with + × > < and can learn in a few moments that max 3⌈5 and min 3⌊5 and reverse ⌽values have their own symbols. By learning the three patterns of / as replicate, reduce and n-wise reduce, they get a large amount of composable patterns to play with, covering sum(), min(), max(), reverse(), [::-1], functools.reduce(), functools.filter(), Numpy overloads of +, Numpy .sum() and .prod(), and more.

In Python, that is many standalone disconnected patterns which do not compose, In APL the "and more" is because composability means there are lots of ways of putting these operations together. values.prod() is what you have to do without composability, you can't re-use the builtin multiply without making a separate wrapper or overload. The result is visually different to sum(values) and conceptually different because it is a method call on an object, and the resulting prod() does not compose with anything else in the language.
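A hedged Python sketch of the three `/` patterns named above (reduce, n-wise reduce, and replicate), with illustrative data:

```python
from functools import reduce

values = [3, 1, 4, 1, 5]

# reduce: APL +/values and ⌈/values
total = reduce(lambda a, b: a + b, values)   # sum
biggest = reduce(max, values)                # max

# n-wise reduce: APL 2+/values, sums over a sliding window of width 2
window2 = [values[i] + values[i + 1] for i in range(len(values) - 1)]

# replicate: APL mask/values keeps items where the mask is 1
mask = [1, 0, 1, 0, 1]
kept = [v for m, v in zip(mask, values) if m]
```

In Python each of these is a distinct construct (a higher-order function, a comprehension over indices, a zip-filter), whereas in APL they are three readings of the same `/` operator.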

Show me how you present those arrays as a graph in APL. In "wordy, verbose" python it's [three lines, another third party install, an import and rename, a two stage operation and a bizarre array show() call passing itself as a parameter(?)]. Is visualizing the data you're manipulating so basic?

Yes, in Dyalog APL it's:

    ]chart array
And that doesn't just show a bar graph, it also loads a GUI for customizing the look of the chart, and the chart library is SharpPlot for .Net, which ships with it. (This is not in ISO standard APL, like matplotlib is not part of pure CPython. APL is not giving up extensibility, you can write your own functions which hide things behind names, or in different implementations of APL call out to OS/.Net/library features).

when all your examples are array operations is going to make the general purpose language look funny.

Yes, true. But is it not the everyday task of programming to process chunks of data in collections?

I don't want a DSL for array operations I want a programming language. And when you aren't doing array-math, APL isn't so great.

Strings are character vectors, like Python lets you treat strings as iterables and slice them, so you can do many array transforms on text. APL's array operations aren't limited to numeric math like +5 or A×B, they also work on something I don't know what to call it - geometric patterns, maybe? Like, indicate where 5 is less than integers to ten:

        5 < ⍳10
    0 0 0 0 0 1 1 1 1 1 
Visually patterned half and half. Or this:

        ¯3 ⌽ 3 < ⍳9
    1 1 1 0 0 0 1 1 1 
Visually, spatially, patterned into thirds. You can feed this into filter() to make combinations of things more complex than "items greater than five", but "items in this pattern", the filter is not a single lambda function which takes an element and decides whether to keep or remove it, the filter-reduce is more powerful and composable than that, and building the patterns from composing the same basic primitives.
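For comparison, a plain-Python sketch of those two masks (assuming `¯3 ⌽` rotates right by three, done here with slicing):

```python
# APL: 5 < ⍳10, compare 5 against 1..10
mask = [int(5 < i) for i in range(1, 11)]
# half and half: [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

# APL: ¯3 ⌽ 3 < ⍳9, build the mask over 1..9, then rotate right by 3
m = [int(3 < i) for i in range(1, 10)]
thirds = m[-3:] + m[:-3]
# thirds: [1, 1, 1, 0, 0, 0, 1, 1, 1]
```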

And yes you could pull in Numpy and fill an array with values and rotate it, but you wouldn't think to do that to apply it to a string, because it's /so much work/ and so far away and distant from the provided black-box string methods.

That is, APL is so great for things more than array-math. Albeit not everything more than array math. I sure have my own skepticism and questions about how well it scales up to larger programs and where its practical and pragmatic limits are.

But, take some imaginary pixels in one array and brightnesses in another and (50<brightness)/pixels will get you the pixels brighter than 50. Try that in Python and you get something like [p for i,p in enumerate(pixels) if i in [i for i,b in enumerate(brightness) if b > 50]]. The APL is "dense and unreadable" and the Python is "clear and composable". "Oh you wouldn't do that in Python", no indeed you wouldn't, you'd have to put stuff in a tuple or object to work around the fact that Python won't let you keep simple things simple.
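A plain-Python version of the brightness filter, for comparison; zip is the pair-things-into-tuples workaround alluded to above (sample data is hypothetical):

```python
pixels = [10, 200, 35, 90]      # hypothetical sample data
brightness = [60, 40, 80, 55]

# APL: (50<brightness)/pixels, keep each pixel whose brightness exceeds 50
bright_pixels = [p for p, b in zip(pixels, brightness) if b > 50]
```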

That's still just pixels[brightness >50] with numpy. Which brings me back to what I said before: if I want a powerful array manipulation dsl, I still have it, but I'm not limited by it.

To your statement "you can't use the built-in multiply without adding an overload": yes, but that's because apl forces everything to be an array. If I want to work with something with non-array syntax (as an example, where addition uses an l2 norm), if my objects support that, that's just x+y, while in apl you have to do something more complex.

Note also that you're being unfair to Python. List comprehensions, array index notation, and numpy methods cover pretty much everything you've mentioned in apl. And they don't require matching weird syntax to operations.

> Note also that you're being unfair to Python. List comprehensions, array index notation, and numpy methods cover pretty much everything you've mentioned in apl.

All of that to cover ~ten symbols. Is it unfair to Python to point out that this is a huge difference in complexity that someone needs to learn to be able to do those things from scratch?

> Which brings me back to what I said before: if I want a powerful array manipulation dsl, I still have it, but I'm not limited by it.

Which is fine. It's just that the few operations of array manipulation DSL go so much farther than I expected they would. I do know that APL is not going to be the language to implement WireShark or Halo vNext.

The big catch in paragraph two is "if my objects support that"; yes you would have to do something more complex in APL. But not /much/ more complex. In Python you'd need to understand classes and magic methods and overloading before you could write that overloaded addition - and understand CPython internals, C and NumPy to add it to NumPy objects, I imagine. I doubt you can do that all in APL, but then again if I'm correctly reading what l2 norm addition is, Pythagorean square-root-of-sum-of-squares including complex numbers, it's not a lot of code to write that anyway:

    values ← 4J3 ¯2J8 10 20
    abs ← |values
    squared ← abs * 2
    sum ← +/squared
    sqrt ← sum * 0.5
or without the temporary variables which don't add much clarity:

    ( +/ (|values) * 2 ) * 0.5
⍨ is a cool operator which lets you swap arguments around, so instead of having to read from inside nested parens out, you can remove parens and read serial code left/right instead:

    0.5 *⍨ +/ 2*⍨ |values
Name that with a lambda/anonymous function/dfn:

    l2NormAdd ← {0.5 *⍨ +/ 2*⍨ | ⍵}
    l2NormAdd values
Which .. isn't so bad that you'd wish for overloading, if the cost of writing the overloading was so much higher, is it?
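The same computation in plain Python, for comparison; `abs()` already handles complex magnitudes, and the values match the APL sketch above:

```python
values = [4 + 3j, -2 + 8j, 10, 20]   # APL: 4J3 ¯2J8 10 20

# square root of the sum of squared magnitudes
l2_norm = sum(abs(x) ** 2 for x in values) ** 0.5
```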

> That's still just pixels[brightness >50] with numpy.

That is cool, I didn't know you could do it. But it is completely separate from the normal Python list comprehension style, apparently a different use of > (?), won't compose with the normal Python sort(key=) to sort the pixels by brightness. At what point does learning one-off skills for every task start to get annoying? (From my personal experience, it never does get annoying, and that seems weird to me now).


But then a tiny amount more APL and here's a depth first recursive tree traversal with accumulator function, projecting a tree structure onto a nested array:

    ⊃∇⍨/⌽(⊂⍺ ⍺⍺ ⍵),⍺ ⍵⍵ ⍵
     │└┬┘  └─┬──┘  └─┬──┘
     │ │     │       └──── (possibly empty) vector of sub-trees.
     │ │     └──────────── new value of accumulator.
     │ └────────────────── left-to-right reduction →foldl←
     └──────────────────── recursive call on (⍺⍺ trav ⍵⍵).

 - https://dfns.dyalog.com/n_trav.htm
I sure could bash out a depth-first tree traversal in Python, with dictionaries or a dedicated tree-node class, and it would take me way less time than understanding this will take me. Yes this may be 20 characters, but it seems a shame to make "few characters" the main focus of why this is interesting. Each of these primitives in the line is almost trivial to learn on its own, none of them are complex magic not even omega-omega. But an expert combining them together carefully makes them do something way more than the sum of their parts, and way more than the shortness suggests they will do. (Here's John Scholes, founder of Dyalog APL, building on this to solve the N-Queens problem: https://www.youtube.com/watch?v=DsZdfnlh_d0 the commonly linked Conway's Game of Life in APL is more approachable, but this is more amazing because of what it's doing to treat arrays as trees, but way harder to follow and more "magic")

> I sure could bash out a depth-first tree traversal in Python, with dictionaries or a dedicated tree-node class, and it would take me way less time than understanding this will take me.

That's my point. APL is interesting, but its enforced structure doesn't fit things intuitively (perhaps there's an implied "for most people" here). Yes, omega combinators or whatever it is that's doing is neat and perhaps pedagogically useful. But

> That is cool, I didn't know you could do it. But it is completely separate from the normal Python list comprehension style

That's because you're not using list comprehensions, you're using ndarrays, which do powerful things to Python's already powerful slice notation, and as a result get all of the nice broadcasting things that you get in APL. It's why a + b and a * b just do what you expect in numpy-land.

Slice notation in Python is already powerful: a[:], a[5:], a[:5], a[::2], and a[::-1] are things I'd expect someone relatively new to understand intuitively (those are copy, everything-after-the-first-five, the-first-five, every-other, and reversed).
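Those basic slices, sketched on a concrete list:

```python
a = [0, 1, 2, 3, 4, 5, 6, 7]

copy_all = a[:]          # shallow copy of the whole list
rest = a[5:]             # everything after the first five
first5 = a[:5]           # the first five
every_other = a[::2]     # stride of 2
rev = a[::-1]            # reversed
```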

Adding the ability to customize it: `a[:,:,::-1,:]`, for example, inverts the 3rd axis of a 4d array; similarly you can pull out a subarray, strided subarray, etc. very declaratively. And numpy further extends that by allowing the argument to be a mask (which is what I showed you in the last comment), so array[boolean_mask] does the kind of thing you'd expect.

>At what point does learning one-off skills for every task start to get annoying?

When the one-off skills are better abstractions for the task than the "consistent" thing, never, as it seems you're realizing.

You make very good points, but unfortunately most people won't seriously consider APL as it's not a general-purpose language. It's just too alien to put much effort into.

I do believe in the benefits of powerful notation. I also find some concepts useful outside array-oriented languages, eg. verb rank [0]. My qualms with the APL family are the difficulty of choosing a language and implementation. J or K seem strictly better than APL (eg. forks and hooks) except they use line-noise ASCII notation. Implementations tend to be proprietary and require a license.

[0]: https://www.jsoftware.com/help/jforc/loopless_code_i_verbs_h...

I see "ease of use" and "familiarity" as two different concerns. I think I would agree with you on familiarity, but even if I'm an advanced user of the language, I want it to be easy to use wherever that doesn't conflict with other more important goals. (And I think some things are less important than ease of use.)

That statement from Rich Hickey makes no sense at all, because we aren't born knowing how to play instruments. And if your brand new instrument is weird, then not many people are going to bother learning how to play it.

Thinking that "ease of use" should not be considered because [favourite reasons] is probably the number one misunderstanding software engineers have about humans. :) Please read Norman's "The design of everyday things", before accidentally making the life of someone miserable through software.

> And if your brand new instrument is weird, then not many people are going to bother learning how to play it.

Which musical instruments are _not_ weird?

> [musical] instruments are made for people who can play them!

And yet, for the most part, trumpets have the same valve configurations, pianos are in the same key, and violins have the same string arrangement.

Unlike different musical instruments, all programming languages make the same sound (more or less). The goal of using a programming language is not to use that programming language, but to program a computer.

> you should not take into consideration "ease of use" or "familiarity" arguments.

Especially if you really don't want it to be wildly successful.

I wouldn't use a programming language whose designer had this attitude.

This has been debated endlessly with regards to Clojure, sometimes with what seems like deliberate misunderstandings stemming from categorical thinking or entrenched positions.

It's not a binary choice for the most part.

- Ease of use will not be promoted over long term growth or power (I.e., the instrument should be designed such that it can ideally facilitate indefinite growth of skill. It shouldn't be a toy that only works up to a certain point.)

- Very often, ease of use can be optimised in such a way that it does not conflict with the first point, rendering the entire discussion moot.

- Familiarity is a non-goal unless the familiar design has the same properties as the optimal design, with regards to the first point.

This is not to say that easier, beginner-friendly designs that _do_ sacrifice long-term power or growth aren't valuable. It's a different design goal, though.

And I wouldn't use a programming language whose designer didn't have this attitude.

I also wouldn't use a programming language that is optimized for increasing the number of users at the expense of performance, clarity, stability, consistency, or power. I don't want a language that appeals to most people, I want one that disproportionately appeals to careful and competent people.

Would you also refuse to play a musical instrument, because the designers had this attitude? Pretty much all musical instruments are "hard to play" and you have to learn them, sometimes for many years.

Professional grade, expensive instruments are highly playable compared to the junk.

Erik Naggum, Comp.Lang.Lisp, 1997:

"what makes _me_ sad is the focus on "most folks" and "Joe Sixpack".

why are we even _thinking_ about home computer equipment when we wish to attract professional programmers?

in _every_ field I know, the difference between the professional and the mass market is so large that Joe Blow wouldn't believe the two could coexist. more often than not, you can't even get the professional quality unless you sign a major agreement with the vendor -- such is the investment on both sides of the table. the commitment for over-the-counter sales to some anonymous customer is _negligible_. consumers are protected by laws because of this, while professionals are protected by signed agreements they are expected to understand. the software industry should surely be no different. (except, of course, that software consumers are denied every consumer right they have had recognized in any other field.)

Microsoft and its ilk has done a marvelous job at marketing their software in the mass market so that non-professional programmers pick them up and non-programmers who decide where the money should be wasted will get a warm fuzzy feeling from certain brand names. I mean, they _must_ recognize that nothing else they buy for their company is advertised in the newspapers that morning and they aren't swayed by consumer ads when they buy office or plant equipment, are they? so _why_ do they swallow this nonsense from the mass-marketing guys hook, line, and sinker?

they don't make poles long enough for me want to touch Microsoft products, and I don't want any mass-marketed game-playing device or Windows appliance _near_ my desk or on my network. this is my _workbench_, dammit, it's not a pretty box to impress people with graphics and sounds. when I work at this system up to 12 hours a day, I'm profoundly uninterested in what user interface a novice user would prefer.

I'm reminded of the response to how people of little or no imagination were complaining about science fiction and incredibly expensive space programs: "the meek can _have_ the earth -- we have other plans".

no, this is not elitist, like some would like to believe in order to avoid thinking about the issues. this is just calling attention to the line between amateurs and professionals, between consumers and producers, that is already there in _every_ field. I want people to wake up to this difference and _reject_ the consumer ads when they look for professional tools. if it's marketed to tens of millions of people, it is _not_ for the professional programmer, and not for you. if its main selling point is novice-friendliness, ignore it unless you _are_ a novice. (and if you are a novice trying to sell your services in a professional market, get the hell out of the way.)


Rust has some existing postfix syntax (shouldn't it be called suffix?). And to be honest it didn't throw me off in the slightest when I started with Rust. Using a question mark at the end to let an error bubble up seems quite intuitive.

The thing that threw me off the most was lifetimes (which, for beginners' sake, gladly aren't needed all that often).

I have trust in the Rust developers and the community that they will choose something very well thought out, because for now the whole language feels incredibly well considered.

For what it's worth, it seems that Rust isn't alone here. C# 8.0 introduced postfix `switch` expressions (https://devblogs.microsoft.com/dotnet/do-more-with-patterns-...), which are along the same lines of what's suggested for `match` in Rust further along in the OP.

I agree that it looks strange at first. But I think I'll get used to it.

Thanks for pointing this out, I had no idea they implemented this for C# 8.0

Also, is it just me or is the switch pattern really inspired by Rust's match?

Pattern matching syntax across many languages is very similar, most notably inspired by ML I'd say.

Pattern matching as is in Rust has many notable precedents that IIRC the C# folks considered as prior art, not least of which being F#.

Literally everyone else uses prefix await, so this will be a problem. But,

- this is a lot easier to chain, which is very useful, and

- the "future expansion" might bring a prefix await anyway (although introducing two syntaxes for the same thing might be even worse).

I'd prefer "f await" to "f.await" because it feels a lot less magical and lets me stick to my intuition that "." is just for stuff implemented by the library, but maybe that's just me.

My personal preference is for a happy medium between "f await" and "f.await", which is some other character that indicates "special postfix". Sort of like how func() is clearly different from macro!(), we can have "expr->keyword" or "expr.keyword!" to differentiate from "expr.property"

> Literally everyone else uses prefix await, so this will be a problem.

Kotlin uses `.await()`.

If you’re able to figure out how to write working rust code, you can learn how to google where ‘await’ goes.

I tend to agree with you. I love what Rust is doing, but can't force myself to use it because it's, to me, not easy to read. As I've grown, I much prefer simpler languages. Go, Python, and to a lesser extent C(while it can be simple, people abuse it in weird ways) all fit the bill. I wonder how many other folks have this simple aversion...

"It has also devolved into a situation in which many commenters propose to reverse previous decisions about the design of async/await on which we have already established firm consensus, or otherwise introduced possibilities we have considered and ruled out of scope for now. This has been a major learning experience for us: one of the major goals of the “meta” working group will be to improve the way groups working on long-term design projects communicate the status of the design with everyone else."

I'll be pretty intrigued to see what they come up with for that. I've been up closer to the Go developer's attempt to connect with the community, and while I'm not going to say they've done everything perfectly on their end, I've also been sort of frustrated by the way that they'll put up a request for comments about something in particular, and a good chunk of the community replies basically throw away everything they're talking about and rewrite arbitrary amounts of the language from scratch. By "everything", I don't just mean the direct proposal under discussion, but the goals of the proposal, the discussion of alternatives from other languages and how they are and are not relevant, as alluded to above the things already proposed and rejected for solid reasons, the context around the runtime and how it interacts with that... that is, not just the proposal itself, but all the thought and reasoning around it too. That wasn't really the question, so to speak. This frustrates everyone, on all sides, in various ways.

A lot of the problem is structural to what is being done; there's a lot of impedance mismatch between the core designers and the community at large for any language. It would be interesting to see some explicit thought around how to address that, from a community with Rust's experiences.

It just means there is a lack of trust in the core designers. They haven't proven to users that they are capable of addressing their problems and aligning with their interests. Obviously core devs of a Google-backed language cannot do better. They can only gain that trust from users within Google, not outside of Google. Not sure about Rust, though.

It does not necessarily mean a lack of trust. It could be different incentives or priorities.

For example, some users might desire major redesigns of past work but not bear the cost of the rework. As such, they might perceive some small improvement without any (direct) cost.

Personally, I think some groups have moved so far in the direction of community involvement that they forget the practical implications of diversity. Leadership is hard for many reasons — one big reason is that leading sometimes means making some people unhappy. Still, this is much better than inaction.

Could you please give some examples? I think I follow Rust pretty well, but have no idea what/which-improvements/who/which-groups you could mean.

I don't want to name any particular people or groups, because everyone makes mistakes and has limitations.

I will say this: in my personal experience, I've been a part of groups that struggle in dealing with complex decisions. Many times they get bogged down when they don't find a clear answer that satisfies everyone or all criteria. In many cases, such groups don't have a clear leader or the leader lacks the skills, experience, and character to do what is necessary; namely, choose (and communicate) the least-worst decision that keeps the ball moving forward.

In such cases, it is not necessary (and unrealistic to expect) that everyone agree with every aspect of every decision. A leader needs confidence and persistence to make tough decisions, as opposed to abdicating leadership. Some examples of the latter include (1) ignoring a choice until some default decision is made implicitly or (2) simply choosing the idea from the most vocal person.

Put more broadly, in this context, leaders must balance four aspects: (a) scoping and framing a decision; (b) gathering diverse points of view; (c) building some degree of consensus or buy-in; and (d) making a decision. It appears to me that the Rust language team handled all four comprehensively.

> Obviously core devs of Google backed language cannot do better. They can only gain that trust from users within Google, not outside of Google.

That's a non sequitur. Trust is not mandated by company boundaries. I for one trust in the competence of the Go maintainers, while at the same time having absolutely zero trust in Google as a company.

> I for one trust in the competence of the Go maintainers

That's fine. But ultimately Go employees do not work for you and cannot put much effort into proving they are good at solving problems for you, since the company doesn't pay them to do it. There will not be a track record in users' eyes of their design decisions, and not much trust from users that they are even capable of doing it well.

> the company doesn't pay them to do it

It absolutely does. Google has a vested interest in making Go a popular language outside of Google: It improves sentiments towards Google by association, and it means that Google has an easier time onboarding new hires since they probably already know Go.

If they're paying people to develop tools for solving general purpose computing problems and presenting those tools out in the open, then they are doing something that is good for everyone.

"It's just means there is lack of trust to core designers."

I don't think that can explain the observations I've made. Either that or the sort of trust you describe is simply impossible, because I've never seen it in any language community I've been in.

I'd lean more towards a lack of clear description to the community of what they're looking for, and the community buying in enough that when communicating amongst themselves they also do some lightweight enforcement of the expectations.

I wouldn't go that far. To me it looks like the usual bikeshed effect is at full force.

From what I've seen of Scheme, they don't have this problem. Everyone implements the feature they want or forks an implementation. And no one argues about syntax.

"And no one argues about syntax."

No one argues about the syntax, because everybody "wins" and gets their own syntax as a result, which is often held up as a key part of the explanation for why the Lisp family languages are wonderful and fun and mind-expanding and just awesome in every way... yet rarely escape from "niche" status and are yet to even threaten to break into the really top-tier languages.

Because languages like Rust and Go, and Python and Java and honestly almost every other non-Lisp family language, want more cohesion in their codebases, they need a different solution to the problem.

That's just a meme from that Lisp Curse article and wherevernot; it's not spread by people who have actually worked in a Lisp code base.

Fragmentation cannot really be an issue for adoption, as it happens after adoption.

So the Linux desktop will only become fragmented after the mythical "year of Linux on the desktop"?

I'd say fragmentation hurts retention, which only has secondary effect on adoption.

"I'd say fragmentation hurts retention, which only has secondary effect on adoption."

I have no idea how you come to that conclusion. Any conceivable model for adoption I can imagine is a differential equation which contains "current adoption" as a term going into the first derivative. If people are leaving a language faster than people are coming in (and I speak in general, not claiming this about any particular language), retention is going to have major effects on adoption and the lack thereof.

Yeah, but any language more popular than brainfuck has some level of adoption. Your language doesn't have to be super popular to start having adoptions issues.

There is some hearty discussion in the Scheme community about syntax, but less than in some other communities. Part of the reason might be that most new syntax looks like this, with no changes to the language grammar:

    (my-new-syntax-name arbitrary other stuff goes here)
Related to this, some things that would have to be primitives in many other languages can be implemented nicely by anyone as ordinary libraries in Scheme.

Wasn't there a bunch of disagreement about R6RS?

> It's very easy to build a mental model of the period operator as simply the introduction of all of these various postfix syntaxes: field accesses, methods, and certain keyword operators like the await operation. (In this model, even the ? operator can be thought of as a modification on the period construct.)

I like this explanation.

But this whole post could really use some more concrete examples of, at the very least, the final style being used in various situations, not just `expression.await`.

Keep up the work on this, I'm excited to use this in my projects.

My dislike for the ”dot keyword” syntax has been strong, but if it is selected, hopefully it’s at least generalized to other keywords in the future, so that `.await` doesn’t stay a lone awkward exception to what dot notation means. In particular, I wouldn’t mind if in the future `expr?` could be spelled `expr.try` for consistency (even at the expense of introducing redundancy).

I like that idea for try! Near the bottom of the blog post, they mention that `match` could become a "dot keyword" expression, giving the following example:

  foo.bar(..).baz(..).match {
      Variant1 => { ... }
      Variant2(quux) => { ... }
  }

The post says as much:

> In particular, some members of the language team are excited about a potential future extensions in which some “expression-oriented” keywords (that is, those that evaluate to something other than ! or ()) can all be called in a “method-like” fashion. In this world, the dot await operation would be generalized so that await were a “normal” prefix keyword, but the dot combination applied to several such keywords, most importantly match:

Yeah, that’s what I referred to, just wasn’t very clear.

I wonder if I would ever find something like

    my_iter.return
more appealing than

    return my_iter
I could see it happening, but I'll definitely need some time to get used to it.

See, now this is exactly my issue with this syntax, I generally expect to be able to "chain" things with the "." notation.

The post does mention that the hypothetical either-postfix-or-prefix-keyword idea would only apply to certain keywords, e.g. keywords that evaluated to non-useless values (which wouldn't include keywords like return, continue, fn, and so on).

You can chain .await, though.

    let n = future_of_future_of_int.await.await;
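
Here's that chaining as a self-contained sketch (`block_on` below is a hand-rolled, poll-in-a-loop toy executor written only for illustration, not a std API; `inner`/`outer` are made-up functions):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Toy executor: poll a single future to completion with a no-op waker.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn raw_waker() -> RawWaker {
        fn no_op(_: *const ()) {}
        fn clone(_: *const ()) -> RawWaker {
            raw_waker()
        }
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is shadowed by its pinned reference and never moved again.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

async fn inner() -> i32 {
    42
}

// A future of a future: awaiting it once yields another future.
async fn outer() -> impl Future<Output = i32> {
    inner()
}

fn main() {
    // The two awaits chain left to right, just like method calls.
    let n = block_on(async { outer().await.await });
    assert_eq!(n, 42);
}
```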

Curiously, will there be limitations to where it can be used? Eg, imagine a `foos.iter().map(|f| f.await).collect::<Vec<_>>()`?

That seems crazy, think it'll be possible?

Not as things are currently designed. This is similar to how you can’t use things like the `?` operator, break, return, etc. inside a lambda and expect it to affect the outer function: it doesn’t work because lambdas are treated as their own functions. Personally I think it would be cool to pursue an extension to lambdas that would allow some of those things to work, but I’m not a Rust team member or anything, just an interested observer.

Technically, `?` can work in lambdas if the return type is a `Result` - though it won't work like it's being called as part of a normal loop or whatever. That's largely mitigated by the combinators available on an iterator of `Result`s.
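
A minimal sketch of both halves of that (the `double` closure is hypothetical):

```rust
fn main() {
    // `?` inside the closure propagates to the *closure's* return value,
    // not to the enclosing function, because the closure returns a Result.
    let double = |s: &str| -> Result<i32, std::num::ParseIntError> {
        let n: i32 = s.parse()?;
        Ok(n * 2)
    };
    assert_eq!(double("21"), Ok(42));
    assert!(double("not a number").is_err());

    // And the Result-aware combinators on iterators mop up the rest:
    let sum: Result<i32, _> = ["1", "2", "3"].iter().map(|s| s.parse::<i32>()).sum();
    assert_eq!(sum, Ok(6));
}
```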

So I think we'll just need similar tooling for lambdas - perhaps an `async` modifier for them? That (I think) would lift await stuff up to `?` in terms of lambda support.

Yep, and in fact async lambdas are already a thing on nightly. But, while I may just be nitpicking, `.map(async |f| f.await)` wouldn't do anything useful. Applied to an Iterator of Futures, it would be a no-op, kind of like (since you mentioned `?`) `.map(|x| Ok(x?))`. Instead you'd probably want some combinator to turn it into whatever "Iterator but async" trait Rust eventually standardizes on – futures-rs has Stream for this purpose, but std doesn't have anything yet. That trait would have its own collect() method which would return a Future<Output=Vec<T>>, and you'd then await on that.

Yeah, I guess I'm nitpicking.

Thanks for the nitpick! I hadn't thought it through all the way. I was mostly thinking about doing async stuff with the inner stuff, but you're definitely right. I'm glad you commented because I've had to rethink it.

True, but you can't chain returns, making this suggested use of a special postfix operation a bad idea imo.

You could chain returns in that case, in principle, but it would just be dead code. The type of a return expression is `!`, the never type.
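
A small illustration of `!` coercing into whatever type the context needs (`first_or_zero` is a made-up function):

```rust
fn first_or_zero(v: &[i32]) -> i32 {
    // `return 0` has type `!` (never), so it unifies with the `i32` of the
    // other arm; anything chained after it, like `(return 0).foo`, would be
    // dead code since control has already left the function.
    let first = match v.first() {
        Some(x) => *x,
        None => return 0,
    };
    first + 1
}

fn main() {
    assert_eq!(first_or_zero(&[41]), 42);
    assert_eq!(first_or_zero(&[]), 0);
}
```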

The problem with that is return doesn't evaluate to anything, so there's no (return foo).bar expressions to benefit from that change. Same problem with break, continue, and goto, for that matter.

I think yield would be a candidate for this syntax because yield can return.

Super excited to see async/await finally get close to MVP and landing. I am not a huge fan of ".await", but there isn't much more to be said -- my personal preference of "#await" or "@await" does look like line-noise and I think there's no perfect answer to this one (await{} was too messy and no better than prefix-await, and (await foo)? had too many brackets).

I also appreciate that this proposal was insanely bike-shedded and so really, any decision is better than no decision. I would've been happy with "await << f()" if it meant we could get this feature (lots of Rust projects I'm interested in are waiting on async/await before focusing on further development).

Wouldn't "await expr" or "await@expr" (if you hate whitespace) make more sense compared to "expr await"?

    let db_conn = await pool.connect()
(Which is what e.g. Python uses, with the downside that you tend to need to parenthesize more because await has very low precedence).


    let db_conn = pool.connect() await
.await is not soo bad, kinda method-y

    let db_conn = pool.connect().await

Did you see the previous post on this topic? They discussed this extensively:


I don't think "expr await" would be a good choice but that isn't the decision that was made, and I wasn't arguing for it in the first place. TFA says that using alternative punctuation was decided against because it would lead to line-noise and I don't think there's much more to discuss -- I understand that position and respect it.

My main issue with .await is that it does look method-y rather than keyword-y. But while I like Python's "await expr" syntax in Python the existence of ? and chaining of methods in Rust justifies having a different syntax for it.

Python also chains methods. And having to write (await (await (await foo).bar()).baz()) is annoying.

That doesn't look like well written async code at all... You're not supposed to use await literally everywhere and especially not multiple times in a single line.

JavaScript already has an answer to this (hint: it was inspired by monads, but it isn't one):

await foo.then(f => f.bar()).then(b => b.baz())

What happens if f.bar() throws an exception asynchronously?

@await does look like line noise, but if they're considering giving other constructs a postfix syntax, I think the noise is justified. Overloading the dot operator creates confusion: it gives the impression that .await is somehow related to data access instead of control flow.

Nice insight into the amount of thought going into this design proposal. It's always tricky to introduce new syntax to a language and re-using the field access notation here isn't as icky an approach as it first looks.

I don't know enough about rust to understand how this would operate during compile time w.r.t if a user tries to define a field named 'async'. Is that no longer allowed or would the compiler be able to disambiguate?

Disambiguation is easy when `await` is a keyword. [0] ;)

    error[E0721]: `await` is a keyword in the 2018 edition
    --> src/lib.rs:2:5
    2 |     await: i32
    |     ^^^^^ help: you can use a raw identifier to stay compatible: `r#await`

[0] https://play.rust-lang.org/?version=stable&mode=debug&editio...

Programming languages didn't always have reserved words:


> Not all languages have the same numbers of reserved words. For example, Java (and other C derivatives) has a rather sparse complement of reserved words—approximately 50 – whereas COBOL has approximately 400. At the other end of the spectrum, pure Prolog and PL/I have none at all.

I don't really know why modern programming languages bother with reserved words. Yes, it would be confusing to have a variable named 'if', but compared to all of the other ways to write confusing code in, say, C, that's barely a drop in the bucket. Plus, it's something good tooling (highlighting, for example) could obviate, as it's entirely possible to use a grammar to show exactly what role each token is playing in a statement.

>I don't really know why modern programming languages bother with reserved words

Preventing potential footguns prevents a large number of bugs. In fact, preventing a specific kind of footgun is one of the primary reasons that Rust exists.

Fewer footguns means easier to work with code. Easy to work with code means fewer bugs. If having reserved words removes more footguns than it creates, then they are a good thing to have - and IMO - they do indeed prevent confusion and footguns.

As long as these footguns can be removed without giving up much in exchange, I say remove all footguns!

I don't know about PL/I but Prolog kind of cheats, in my opinion, about the reserved words. It is very strict about naming conventions, which is (IMO) worse than having reserved words.

PL/I is syntactically like Algol, or Pascal, or C with more words and fewer brackets. At heart, it's a "normal" block-structured procedural programming language, which proves that a C clone could adopt the same basic idea.

XQuery is probably a more recent example of that. No reserved words there, just syntax.


`await` was changed in rust 2018 to a keyword [1] and will now get a warning if misused IIUC. I'm guessing `async` keyword only shows up in contexts where identifiers aren't expected, so there's no ambiguity?

[1] https://github.com/rust-lang/rust/pull/54411

If you want to use async or await as a fieldname you have to write r#await and r#async because they are reserved in rust 2018.

> Is that no longer allowed or would the compiler be able to disambiguate?

I would speculate that both will be true. The compiler could disambiguate, but it will not be allowed.

My guess is the raw identifiers syntax introduced in Rust 2018 (`r#` prefix) will handle this.

afaik, as of Edition 2018, await is a keyword and one of the few (2?) reasons that the 2018 edition is not backwards-compatible with 2015.

If I'm not mistaken you couldn't have ever had 'await' as a field name because it's a reserved keyword. You can still use it via the raw identifier syntax r#, and that disambiguates access too, because the field would be accessed like: expression.r#await
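
A quick sketch of how that looks in practice (`Timer` is just a made-up struct):

```rust
// `await` is reserved as a keyword, but a raw identifier opts back in.
struct Timer {
    r#await: u32, // plain `await: u32` is rejected in the 2018+ editions
}

fn main() {
    let t = Timer { r#await: 5 };
    // Field access uses the raw form too, so it can't collide with `.await`.
    assert_eq!(t.r#await, 5);
}
```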

I'm anxiously awaiting this.

I just finished overhauling a rather involved Tokio/Future based project, and having await/async in the language spec will be a big step forward (and I'm hoping the compiler errors improve as well).

As always, kudos to the Rust team for making decisions that are well-reasoned and documented and in a way that looks to the future of the language (and not just meeting a feature deadline)!

> I'm anxiously awaiting this.

Pun intended?

If the way async/await has fundamentally changed how you write JavaScript code is any example, this will be a tectonic shift, especially for event-driven code like you use with Tokio.

I agree that it's an important addition to the language, but I think it will be less impactful for Rust than for JavaScript because server-side JS is almost always in a domain that benefits from async IO, whereas Rust exists in many domains where async IO isn't a performance priority.

It's an important concurrency model. Perhaps Rust doesn't have a lot of async code because it's been really annoying to do it. This could change that dramatically.

While there might be a better syntax if you only care about this one feature, async/await needs to fit into an existing language, and this syntax makes sense with the rest of Rust.

This syntax makes it clear that it’s not a function or macro invocation, works the same way as ‘?’, and allows for clear and concise chaining.

Reading this feels like a person with no good choices trying to convince themselves there's no other way than their best bad one.

"This is the best proposal, except the syntax doesn't make sense, we don't know if we can implement it, and conversation has broken down to the point where we are running in circles and we don't expect to have any more ideas."

I am curious what the current way to do non-blocking code in Rust is and why it's so bad that they'd introduce this much confusion to the language design to fix it.

I am unfazed by this observation. When it comes to design, sometimes ‘perfection’ isn’t available.

By the way, the idea you mention may also be true for democracy: the quote goes something like “Democracy is the worst form of government except for all the others.”

If you have a design option that has not already been considered, I’m all ears.

The best thing they could have done was to have arbitrarily picked a syntax months ago - but of course, at the time nobody knew it would be so difficult to determine the "right way" that choosing arbitrarily would be the best option.

It raises my esteem of the Rust lang leadership that they eventually realized what they had to do and picked something.

If every choice was obvious, there would be no need for a designer in the first place.

Who said any choice was obvious?

(Also would love your take on the question in the last line of my comment. My gut as an outsider is that the best solution may be cultural not technical, except Rust is struggling to implement cultural projects now that the community is scaling fast. But you know what I don't.)

Here's the most straightforward answer I can give: http://aturon.github.io/2018/04/24/async-borrowing/

At this point I don't really care how the syntax looks. I just want to use it, been waiting soo long for this.

Problems with differentiating .await from .member can easily be solved by using syntax highlighting. Also, Rust is already strange to write, this is just one more quirk one will get used to.

The designers of Kotlin also had an interesting point: await (synchronous call) should be the default; non-awaited (asynchronous) code should have a keyword indicating that it is async.

This would have been my favorite approach, and there was a discussion about doing it in Rust: https://internals.rust-lang.org/t/explicit-future-constructi...

There were two main objections:

First, a Rust async function like this (using explicit lifetime annotations for exposition purposes only, normally they would be elided)...

    async fn f<'a>(r: &'a i32) -> i32 { ... *r ... }
...desugars to a sync function whose return value closes over its arguments:

    fn f<'a>(r: &'a i32) -> impl Future<Output = i32> + 'a { ... }
Sometimes you want the future to live longer than the arguments, so you write that desugaring yourself:

    fn f<'a>(r: &'a i32) -> impl Future<Output = i32> {
        let i = *r; // Deal with the reference *now* before constructing the future.
        async { ... i ... }
    }
Ideally, functions could switch back and forth between these two forms without changing their API, for backwards compatibility reasons. This means you can't just auto-await calls to `async fn`s like Kotlin does- it would need to be a more complex rule.

Second, a lot of users want suspension points to show up in the program text the same way `?` does. This is nice for managing invariants- you know when you might be interrupted. (Personally I don't think this is a good reason; the borrow checker and async-aware synchronization primitives would solve the problem with less noise, but it is what it is.)

I wonder about an alternate timeline where Rust kept its lightweight threads. Marking async calls explicitly instead of marking synchronous calls with await is a step in that direction, because that's also the syntax you have with lightweight threads. What would problem 1 look like in that alternate timeline?

In that case the concept of async functions disappears and your first function becomes a normal function. The second function remains a Future building function. So I'm tempted to conclude that this problem might be a non-problem, caused by a confusion between async functions and Future building functions. Even though an async function desugars to a Future building function, they are conceptually distinct in the lightweight threads model. With lightweight threads, all functions are async functions. A future building function explicitly builds a delayed computation. The types should be different.

An async function is just like a normal function, except that it may call async APIs (i.e. other async functions). Calling an async function from a normal function is an error; it does not return a future. The programmer never sees that async functions are implemented with Futures under the hood. In particular, an async function is not syntactic sugar for wrapping its body in an async block. We rename the async { ... } keyword to future { ... }, which constructs a future out of the block. You may call async functions in a future block. So if you want to call an async function inside a normal function, you must do future { foo() }, making it syntactically clear that the call is delayed even when the call is made from inside a normal function. The programmer no longer needs to think about how async functions work at all. Don't tell them that future{ foo() } actually will just call foo(), and foo() returns the future, they don't need to know that. The only thing they need to remember is that async functions can only be called from within async functions or future blocks. In all other respects they behave the same as normal functions. All delaying of computation and running computation in parallel is explicit.

IMO, problem 1 only occurs to programmers that have been told that async fn = Future returning function. That's a leaky abstraction; it's syntactic sugar. If you prevent them from developing this notion, the problem simply doesn't occur. To understand the main proposal for async/await you basically have to understand what desugaring the compiler is doing. With the "Explicit future construction, implicit await" you can use async functions and futures without understanding how they work under the hood. It's a non-leaky abstraction.

IMO, problem 2 is a problem for the IDE. The IDE can easily show which function calls are async and which are not.

I tried to lean pretty hard into "this syntax is just like threads" in that internals.r-l.org post, when I wrote it, proposing almost exactly what you describe here. Unfortunately problem #1 is not a result of confusion or unnecessary conflation, but a fundamental question of lifetimes- the exact same problem already exists with normal OS threads just as it would with lightweight threads.

That is, a function is always allowed to hold onto its arguments until it returns. If its execution is deferred (e.g. `|| the_function(and, its, arguments)`) for whatever reason (e.g. spawning a lightweight thread or async task), the borrow checker has to consider that those arguments may stick around indefinitely.

Of course, it is 100% doable to force people to work around this just by giving future-building functions a different type. But as I described, this means callers have to add or remove an extra `.run()`/`.await()`/etc. if the API ever switches between the two. This is accepted in the world of threads, but not in the world of futures, because we already have a solution which is "just switch to a future-building function, everyone's already awaiting it."

(Personally, while I certainly see it as a real problem, I would rather we just live with it. It's not hard to work around, and we already do it in the world of threads when necessary, which is rarely.)

I still don't understand why #1 is a problem.

> But as I described, this means callers have to add or remove an extra `.run()`/`.await()`/etc. if the API ever switches between the two.

Switches between what though? When you want to do something asynchronously, you indeed build a future and later .await() it. Suppose you then want to build that future in a different way, for example by transforming future { foo(x) } to making foo(x) itself return the future (i.e. moving the future{} block inside foo), possibly because you want to dereference x before building it. Well, the .await() was already there, and doesn't need to be changed. The future{} ... await() pair gets introduced when you want to making things asynchronous, which is exactly as it should be?

Furthermore, isn't that the same with the main async/await proposal? It is indeed true that when you make things async you only have to mark a function as async, and then all the calls to it automatically become async. However, at the end of the day you still need to await those futures or else they won't do anything. So when you switch from sync to async you still need to add those awaits.

The difference seems to me the other way around: with the main proposal you need more awaits (namely, at all points where you want to stay synchronous). With your proposal you need more async/future blocks (namely, at all points where you want to switch to asynchronous).

I think that using the same keyword for async fn and async{} block is a source of confusion, because it makes it seem like async fn is basically like wrapping the body in an async{} block. It's what makes people think that an async fn is like an automatically awaited future, which is a confusing way to think about it and makes it hard to see why this proposal is a good idea (even if it's actually implemented like that under the hood). I think it becomes a lot clearer if you use a different word for these two concepts (like async fn and future{} block), and remove the ::new() and only use future{} syntax.

This proposal does raise another question: why not just green threads, and remove the concept of async functions entirely?

> This proposal does raise another question: why not just green threads, and remove the concept of async functions entirely?

Making another reply because this is completely unrelated...

Rust already tried that. The problem is that Rust has a hard requirement as a systems language to support, at least, native I/O APIs, and the green threads implementation added a pervasive cost to that support because all standard library I/O had to go through the same machinery just in case it was happening in a green thread.

That overhead made green threads themselves basically no faster than native threads, so they were dropped before 1.0 to make room for a new solution to come along eventually. Futures and async is that solution, and it turns out to be much lighter weight than green threads ever could have been anyway- no allocating stacks, no switching stacks, no interfering with normal I/O.

The syntax could have been different, but the implementation is far better this way.

Couldn't green threads in principle be implemented the same way as your async proposal? The compiler could infer which functions need to be marked async. To support separate compilation it might need to compile two versions of each function, an async one and a normal one. You'd have exactly what you have in your proposal, except you never have to write async fn. You could still have blocking & non-blocking IO. It wouldn't totally unify green threads with OS threads, but Futures/async/await don't do that either.

Yes, though you probably wouldn't call them green threads anymore at that point. (I mean, Rust async/await is implemented that way modulo syntax and it's not called green threads. But that's beside the point.)

In fact Rust has already thrown out separate compilation with its monomorphization-based generics, so making functions "async polymorphic" in the same way wouldn't be anything new.

And while that's somewhat unlikely from what I can tell, Rust is getting a little bit of that "effect polymorphism" somewhere else- generic `const fn`s can become runtime functions when their type arguments are non-const. So maybe someday we'll be able to re-use generic functions in both sync and async contexts depending on their type arguments.

Switches between keeping the args for the function's full duration and returning a closure (async or not) that doesn't hold onto them.

Here's the problem in terms of normal OS threads:

    fn f<'a>(r: &'a i32) -> i32 { ... *r ... }

    // oh no, I can't do this:
    let i = 42;
    thread::spawn(|| f(&i));
Here's the workaround:

    fn f<'a>(r: &'a i32) -> impl FnOnce() -> i32 {
        let i = *r;
        || ... i ...
    }

    // now I can do this:
    let i = 42;
    thread::spawn(f(&i));
In this case, and the analogous lightweight threads case you're describing, and the "implicit await" post I originally linked, the workaround forces the caller to change its syntax. From `|| f(&i)` to `f(&i)`, or from `async { f(&i) }` to `f(&i)`, or from `future { f(&i) }` to `f(&i)`.
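
The OS-thread workaround, written out as compiling code (`make_task` is a name invented for this sketch):

```rust
use std::thread;

// Returns a closure that owns a *copy* of the data, so it can outlive `r`.
fn make_task(r: &i32) -> impl 'static + Send + FnOnce() -> i32 {
    let i = *r; // deal with the reference *now*, before deferring
    move || i + 1
}

fn main() {
    let i = 41;
    // Fine: the spawned closure no longer borrows `i`.
    let handle = thread::spawn(make_task(&i));
    assert_eq!(handle.join().unwrap(), 42);
}
```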

But in async/await as currently proposed and implemented, the transformation goes from this...

    async fn f<'a>(r: &'a i32) -> i32 { ... *r ... }

    // oh no, I can't do this:
    let i = 42;
    thread::spawn(f(&i));
...to this:

    fn f<'a>(r: &'a i32) -> impl Future<Output = i32> {
        let i = *r;
        async { ... i ... }
    }

    // now I can do this:
    let i = 42;
    thread::spawn(f(&i));
You can imagine someone originally writing the first version, when all their callers just immediately `await` so it's okay if the reference sticks around. But then another caller wants to write something like the above, so they make the transformation above.

Under today's futures, all the other call sites keep working (`f(&i).await`) and the new use case starts working. Under our proposals, that transformation would break everyone just using the `f(&i)` syntax, so it probably wouldn't happen, and instead the new caller would have to write this:

    thread::spawn(async move {
        // move `i` in here, or worse, stuff it in an Arc, even though it's only needed for setup!
        let my_i = i;
        // ...
    });

I see. Would that be such a disaster under your proposal? Original code is:

    async fn f<'a>(r: &'a i32) -> i32 { ... *r ... }
Having some callers future{ f(&i) }.await().

Now the new caller comes in, so we add a function f_future:

    fn f_future<'a>(r: &'a i32) -> impl Future<Output = i32> {
        let i = *r;
        async { ... i ... }
    }
The new caller uses f_future and the old callers keep using f. To prevent duplication we can factor out ... i ... into a function g(i) and do g(*r) in the async fn f. The other callers can migrate from future{ f(&i) }.await() to f_future(&i).await() over time.

It's not as ideal as not having to change the signature at all, but signature changes can be dealt with. Or is this a big problem with OS threads?

I agree, there's plenty of ways to work around it, and I'd prefer any of them to the syntactic mess we're in now. I'm not the one making the decisions, though. :)

The Kotlin people came up with a few useful notions around co-routines:

- functions that can be called asynchronously must be marked with suspend.

- functions marked as suspend can only be called from within a co-routine context. This is an abstraction that gives you a handle on resources consumed by your co-routines and some level of control over that.

- You can create/get a co-routine context in several ways and there's a global context (i.e. the main thread). Useful other contexts could be some web request or some thread pool. A context has a dispatcher, a scope, and a few other things.

- suspend functions calling other suspend functions implicitly await each other, i.e. preserve before/after semantics. It looks like normal code and there are no special keywords needed. IMHO this is genius compared to promise chaining and error handling you deal with in javascript which can become quite messy. Even with async await in js, you still need to return a Promise. In Kotlin all this is implied by using the suspend keyword on the function.

- await is indeed something you do explicitly in a synchronous function only and it blocks the thread it is happening on. So, it's also something you should mostly avoid.

- Co-routines can be terminated. This ensures that any still running async calls stop wasting cpu time. This is a problem in e.g. javascript where once you are awaiting something, you have no way to interrupt whatever it is you are awaiting.

- pre-existing other asynchronous stuff in Java and its various frameworks can be adapted to co-routines quite easily.

This is a complicated topic and I'm sure there are a thing or two here that don't quite map to Rust that easily, which they've probably debated at length. I imagine tight memory and resource control is important for Rust. But it's a nice design for Kotlin at least. Compared to Rust, the development process was interesting as well. There was a long experimental feature cycle (1.1 and 1.2) during which you could opt into using it but during which there were also major changes based on the usage and feedback. I think they learned a lot during this phase. There are still new things coming that are still experimental (e.g. channels).

The problem with this approach is merely adding the `async` keyword to a function would then significantly alter the way code is interpreted inside of the function. It seems like a non-starter to me to have adding the `async` keyword cause a function to stop compiling.

In Kotlin this is not an issue, because you simply can't call an async function at all until you've added the async keyword (actually `suspend` in Kotlin). If you want to call an async function from a sync context, you just wrap it in an async block.

What does an async block evaluate to? Does that wrap up the contents in Kotlin's equivalent of a Future?

In any case, I'm a big fan of the idea that async/await is just syntax around something that can be done without the syntax (i.e. wrapping Futures that can otherwise be manipulated without the syntax).

Yes, an async block (or `suspend` closure in Kotlin) just wraps up its contents in a future, applying the same state machine transform as would be applied to a top-level async function.

It's worth pointing out that `await` is only needed in functions explicitly marked as async; `await` is the default for non-async functions (which is the default mode for a function to operate). I think the rationale behind this is that it's fairly common to need to group a bunch of async operations together, and this is easier to do if you can mark an entire function as async rather than needing to mark individual statements.

After reviewing the syntax writeup, I've reached my own decision on what color the bikeshed should be.

I like the "Unusual other syntaxes: Use a more unusual syntax which no other construction in Rust uses, such future!await." [1] (or future@await). It makes sense because it is in fact something different than what exists anywhere else in Rust so it should receive a commensurate syntactic treatment. This was written off pretty quickly in [1] but it appears the most consistent with language philosophy -- specifically because it's inconsistent with any other language features it should get inconsistent syntax. I think that's what's throwing people off in this process: an attempt to impose 'consistency' on something that just isn't. As such, it'll never feel natural to co-opt existing syntax. Let's embrace its inconsistency and introduce new syntax.

It supports '?' and '.' natively, and offers a path forward for new similar kinds of postfix operators without making more field names off-limits -- even the expr@match { .. } postfix expression thrown around would fit well. The @-prefixed postfix operator would return a value like all other expressions.

tl;dr: Either the syntax is inconsistent or the semantics are, and IMO, the former is preferable to the latter.

Either way, cool to see this feature moving along! Looking forward to using it no matter how it's spelled out haha.

[1] https://paper.dropbox.com/doc/Await-Syntax-Write-Up--AcIbhZ1...

I'd tend to agree with you, except for the fact that Rust reserved words are reserved everywhere. Even if the syntax were changed, you still couldn't name a field "async" or "return". Since it's no extra burden on the namespace, I'm less opposed to it.

I'm on the fence about disambiguation via unique punctuation, vs avoiding Perl-like line noise.

The syntax feels a bit Ruby-ish, doesn't it? Here's an example from the article of how they could expand the syntax in the future:

  foo.bar(..).baz(..).match {
    Variant1 => { ... }
    Variant2(quux) => { ... }
  }
Compared to some Ruby:

  5.times {
    puts "Hello world!"
  }
Note I'm by no means an expert in Ruby or Rust, this is just what I thought of when I saw the syntax.

A .match "method" just opens the dam to "why stop there?"

I prefer Kotlin's general approach of .let and similar:

    number = foo.bar().baz().let {
      match it {
        A => 1
        B => 2
Now anyone has the general tool for chaining without needing library authors or language designers to create the API for them.

Neat! When working with Options/Results/Iterators, you can use `.map` [1] for exactly this purpose. It would sometimes be convenient to have something like `.map` / `.let` on unwrapped values as well.

[1] https://doc.rust-lang.org/std/option/enum.Option.html#method..., https://doc.rust-lang.org/std/result/enum.Result.html#method..., https://doc.rust-lang.org/std/iter/trait.Iterator.html#metho...

Note that in Rust, `.match` would itself provide the general way:

    let number = foo.bar().baz().match { it =>
        it + 1
    }

Re: your second example, once upon a time you could write something very similar in Rust:

  do 5.times {
    // ...
  }
Where do was just sugar for calling functions that took a closure as the last argument:

  fn foo(a: A, ..., z: Z, cl: ||) { ... }

  foo(a, ..., z, || {
    // ...
  })

  do foo(a, ..., z) {
    // ...
  }

In fact, before the current for ... in ... syntax, this is how foreach looping used to be done:

  do some_array.each |item| {
    // ...
  }

Here's a question that may sound a little naive but I'm behind on where the Rust async stuff is shaping up and I don't know what to search for to answer it.

You can of course already do async things in Rust the same way you do in C: by passing around function pointers and either passing around a handle to the event loop or, more commonly, using a single global handle. But you theoretically could run multiple event loops (one per core, say) and could pass around a pointer to which event loop you wanted to work with and a lot of libevent examples do this. (IMO it's the only correct way to do it, but I have sympathy for wanting to have the global variable.)

This is a little different from JavaScript, where the event loop is part of the language/runtime. JavaScript's "the event loop" model means you can't run more than one event loop per runtime, so it's kind of nonsense to want to pass it around explicitly.

Rust is more like C in this regard. So if I wanted to mix threads and events, or run an event loop per core, or have an event loop that my Fancy HTTP Library manages itself but exposes a blocking interface to, do I lose access to the new fancy `.await` syntax? There's nowhere in this syntax to say which event loop I'm registering myself with. Is there just one global one? Does it get globally registered on runtime boot, and everyone references it? Or is there a thread-local variable for the "current" event loop that this thread knows about?

Even more concerning is how to make _other_ libraries do this than the one I'm writing. If I'm calling into an async library how can I tell that library that I want it to use "an" event loop instead of "the" event loop?

What should I be googling for to answer this? :)

You do not lose access to it.

A short version of the answer is that at its core, you create a chain of futures, and then place them on an executor. The executor is responsible for executing them.

Async is sugar for creating a chain of futures. Doing that stuff is a property of the executor. Completely different axis.

(And tokio today has a multi-threaded, work-stealing executor, for example.)

I see, so the "root" Future in the chain is bound to an executor and that's where the "which event loop" part is specified

Wow, this is the only suggestion I really didn't like. I'm really happy that it's progressing forward, but I can't help but feel that staying with the macro and postponing the decision would've been a better choice.

A language should, in my opinion, not depend on syntax highlighting.

I think it was a tricky situation where the option with the "fewest/weakest haters" wasn't the same as the option with the "most/strongest likers". I think restricting ourselves to options that don't generate too much hate is one of the reasons why "design by committee" tends to produce disappointing results.

What good will postponing do? The blog post implies that it's already been debated for a long time and all the arguments seem to have already been brought forth. You can't just keep punting it forever; it has to get stabilized at some point.

It's been debated, but there's hardly any code using it. After seeing how it's being used by most people, it'd be easier to make a choice.

Because it’s been clear that what exists is not stable, many people are waiting until it’s stable to try.

There were a number of people who converted code to the various proposals, so that helped.

I think concerns that it depends on syntax highlighting are exaggerated. Compare `.await` to the `?` operator, both of which indicate a change in control flow. `?` does have the advantage of being a symbol which looks visually distinct even without syntax highlighting – but `.await` is much longer and thus harder to overlook entirely. Also, in practice `.await` will often be immediately followed by `?`, since async operations tend to be fallible. Since using `?` directly on a field of something is very uncommon, `.await?` acts sort of like its own unique syntactic construct with a unique visual shape, helping it stand out further.

That's assuming that you know the syntax already. If you're learning the syntax for the first time without the benefit of syntax highlighting, `.await` might be more confusing than it would otherwise be – but I think the point about `.await?`, combined with the familiarity of `await` as a special operation from other languages (as well as it just being a verb rather than a noun), would strongly hint that something different from a field access is going on. If not... well, you can always learn what it does from the documentation, since it'll be one of the first things explained in any kind of async tutorial. I'm a big believer in the concept of "strangeness budget" and avoiding unnecessary syntactic unfamiliarity, but I think the strangeness is very strongly mitigated here. In any case, most people will learn the syntax with the benefit of syntax highlighting, and IMO taking advantage of that is much more defensible than depending on syntax highlighting for readability more generally.

From a historical perspective, in the debate leading up to the switch from `try!()` to `?`, people were worried that `?` would be lost in the noise within long chains, an issue that others said would be mitigated by syntax highlighting. As it turned out, syntax highlighting does help, but `?` is also pretty distinctive without it. I think `.await` will pan out similarly.

More generally, I think that once people get used to the idea of `.await` and it stops being this new weird-looking thing, the mere-exposure effect[1] will subside and people will stop seeing confusion with field access as such a big problem. Well, people already using Rust, at any rate. Ironically, once you get past unfamiliarity, the reverse bias comes into effect, where it becomes unintuitive to think how other people first learning the language will find something weird. But from that perspective, there are a lot of weird things in Rust's syntax already. Adding more is not a good thing, but it's also not as bad as it might seem if everything in Rust seems familiar except this one thing.

[1] https://en.wikipedia.org/wiki/Mere-exposure_effect

The process for how this feature is being handled is fantastic, and a great example of what I love about Rust and its community.

I'm sold. This post did a good job of addressing my concerns with the syntax, plus a couple others. I'm now looking forward to postfix match.

I still kind of want postfix macros, though. I saw a couple other neat use cases like `file.write!(...)`

User definable postfix macros might be nice, I don't know.

But we don't need user definable postfix macros to have a single `expression.await!()` as a magical macro. There are already magical macros, and to most Rust users `foo!()` means "magic here, read the docs". It would have been syntactically consistent to have `await!()` also mean "magic here, read the docs".

100% of Rust users will recognize the inconsistency of overloading field access syntax. Only a portion of Rust users will dig deep enough into macros to recognize that `expression.await!()` is special and is not defined as a normal macro. By the time a user digs deep enough to realize that, they are ready to understand why that inconsistency was necessary.

With `expression.await` new users may ask about the inconsistency and the answer will be "because reasons you can't understand right now".

I'm curious as to why the mandatory prefix syntax wasn't chosen (`await {...}`). It's less magic than a magic field and it fits into preexisting syntax better.

I'm not sure why it wasn't chosen, but I can give you some arguments against it. The braces introduce line noise. They introduce a new scope. I believe that rustfmt will currently put a newline after the opening brace. And it has the general problem that it can't be read from left to right.

(P.s. I'm not trying to start a debate, just trying to answer the question.)

I think it has to do with "ergonomics"; the existence of the `?` operator and its interaction with await played a big part of that.

Try writing a chain of async member accesses and calls with . and ? like that.

I wonder why consistency with the existing `yield` keyword operator in the experimental generators support doesn't rate highly in the decision making here. Maybe they plan to revise generators to `expression.yield`, too? Also `expression.return`?

Control flow from a dot-access rubs me just a bit the wrong way - honestly, I'd rather the syntax be a bit cumbersome because if I saw multiple nested `await`s in a single expression, I'd think the code was suspect and review it more, while I'm probably not going to have quite the same reaction for `x.await?.await?.foo.await?`, just because it _looks_ like normal property chaining.

Really don't like the magic field access syntax.

> Really don't like the magic field access syntax.

From a language-feel and ergonomics standpoint, I agree. But the Rust team has made it pretty clear why a postfix is much more flexible, and I think we'll grow into it.

I do think it'll continue as a bit of a Rust oddity when compared to other async/await implementations (e.g., Python).

Having done most of my work in C# and TypeScript, the Rust syntax felt weird when I read about it, but on the other hand, I always felt bad being unable to readably chain awaits, lest they become

  var result = (await (await obj.DoSomething()).SomeOperation()).SomeValue;
In the end, all syntax is magic that you need to get used to, I guess.

Do notation solves that problem!

Do notation does not solve that problem. It solves the problem of chaining `.and_then` calls, but this is one layer up- do notation would still require each of those awaits to be a separate `a <- b` "statement," which couldn't even be chained in the first place!

Yes, exactly, bindings shouldn't be chained. The whole point of do-notation, i.e. syntax that looks like normal bindings, is to make them look imperative so that our monkey brains can grok concurrent code easily. Making 'await' expression-oriented is missing the whole point of 'await'. You could have just kept using 'flat_map' or whatever macro.

Well, the real benefit of `await` in an imperative language is that it works inside/across all the usual imperative control flow- loops with break/continue, early return, etc. Whether it's an expression or a binding is mostly orthogonal to that.

Came here to say that. I think do notation (and applicative idioms) are easily one of the most-underappreciated features outside the Haskell community.

Scala has had monadic do-notation (called 'for-comprehensions') for a long time and OCaml has recently gained both monadic and applicative 'do notation' (called various things, but mostly 'let+ syntax'). Edit: Oh and F# has also had computation expressions for a long time.

It’s even nicer in Haskell because not only does do-notation make code look clean, await is a function, not an operator.

Want prefix await? Go for it!

    res <- await $ foo bar
Want postfix? You can have that too!

    res <- foo bar & await
Being “just a function” means it composes with everything else in the language, something the Rust languages designers have held in high regard when designing this.

(But I also fully appreciate the design constraints that prevent Rust from using “just a method” or ”just a macro”)

I don't really like it either, but the "future use for expression-oriented operators" aspect makes it easier to swallow. I'd rather deal with some slightly odd notation that accurately represents what's going on than have a bunch of different syntactic constructs for different postfix operations.

This is way way off-topic but: I have never seen the word "postfix" used in this way. "Suffix" is the normal/common English word for this as far as I've always been aware.

A quick search has it listed as a synonym, but I can't find any usage outside tech, and it sure seems like an erroneous tech-industry/coding neologism trying to balance the seemingly logical "post-" vs "pre-".

Am I way off the mark here?

Reverse Polish notation is an entirely different usage: not the same meaning as used in this proposal.

It's also commonly used as the name of a piece of email software, but that's also an unrelated usage to this.

RPN and postfix notation refer to the same core idea. From Wikipedia: "Reverse Polish notation (RPN), also known as Polish postfix notation or simply postfix notation, is a mathematical notation in which operators follow their operands, in contrast to Polish notation (PN), in which operators precede their operands."

Can you please explain why you wrote "Reverse polish notation is an entirely different usage"? With a citation, preferably.

Reverse Polish notation is a term used in mathematical notation for an alternative operator syntax to the more common "infix" notation.

The article describes affixing the word "await" to methods in a programming language as a "postfix" and explicitly contrasts it with a "prefix", which is a linguistic term for affixing words, commonly contrasted with the term "suffix".

I don't see any reference nor any clear argument supporting your claim that RPN and postfix notation are different.

Again, RPN and postfix notation mean the same thing.

Here are some more references that show how these terms are commonly used:

1. https://en.wikipedia.org/wiki/Infix_notation

2. https://en.wikipedia.org/wiki/Polish_notation

3. https://en.wikipedia.org/wiki/Postfix_notation (which redirects to a page on RPN, which is very strong evidence that your claim is incorrect)

Ok, onto the next thing. You wrote:

> The article describes affixing the word "await" to methods in a programming language as a "postfix" and explicitly contrasts it with a "prefix", which is a linguistic term for affixing words, commonly contrasted with the term "suffix".

I'm not getting your point. I don't think you are summarizing the language accurately. Perhaps you could quote the section of the article at length.

Here is one quote from the article: "The lang team proposes to add the await operator to Rust using this syntax: `expression.await` This is what’s called the “dot await” syntax: a postfix operator formed by the combination of a period and the await keyword. We will not include any other syntax for the await operator."

Here is another quote: "Our previous summary of the discussion focused most of our attention on the prefix vs postfix question. In resolving this question, there was a strong majority in the language team that preferred postfix syntax. To be concrete: I am the only member of the language team that prefers a prefix syntax. The primary argument in favor of postfix was its better composability with methods and the ? operator."

These usages of "prefix" and "postfix" are consistent and idiomatic.

I hope this clears it up.

All of this stuff has different histories and slightly different usage, but yeah.

You seem to be presenting affix/suffix as opposites? Prefix & suffix are opposites: affix is the superset of both.

Errr yes my bad.

Even if it is tech neologism, what would make it erroneous?

I just mean that some neologisms result from literary "cleverness", some result from slang, and some result from erroneous usage being adopted.

I trust that they've made the best decision.

They dismiss the best option (choosing another symbol instead of a period) with hardly a thought, because they don't like "line noise".

Just as `?` was introduced, I don't see a good reason ("line noise" doesn't even count as a thought) why they wouldn't introduce postfix notation with `@`, `#`, `$`, or anything else similar.

If they are looking forward to expanding postfix syntax for things like `match`, then this would provide them with the most flexibility and essentially provide a new namespace for those features.

"Line noise" isn't a reason. APL had it right in that consistent syntax (Perl doesn't count as consistent) is useful.

Since `await` is a keyword reserved for this purpose, could `(await expression)` ever mean anything else? What is the cost of ripping off the bandaid and starting there, as at least an alternative syntax that is consistent with the language?

  match expression
  await expression
It feels like a future proposal, to allow these keywords to be chained more conveniently with a postfix. It's cool that this can possibly be generalised for chaining ergonomics.


I expect a proposal of this nature to pop up relatively soon. If I had to guess, I'd say the reason they're not doing this now is because they only want to choose one option for the MVP, and they've decided that they prefer this option to that option, and that the chosen option fortunately does not preclude the future option.

Also, I would expect `await` to have mandatory braces, just as `match`, `if`, `for`, etc. do.

For what it's worth, `<-` is already taken by the placement-in syntax, which has been pulled back, but rustc needs to continue parsing it to avoid breaking existing code that used it behind cfg flags. It doesn't do anything, but it has to be understood. It would be possible to resolve that problem, but it's another thing to be aware of.

As an old salty programmer, I don't understand why people are upset at this. Sure the syntax is unusual, but they had good reasons for it, and people are adaptable and get used to things.

Personally I like the fact that ".await" emphasizes that you are dealing with something fundamentally different from other language constructs, because futures _are_ fundamentally different.

To me, this syntax feels like a "magic" method call (like in languages where a field access can go through a getter method). That is, if you can think of "x.await()" as a compiler-implemented method call on x which does some stuff and returns a value (plus some magic to allow you to omit the parenthesis), this syntax looks intuitive enough.

Except that, as described in the article, `await` is fundamentally a control flow construct and too magical to ever be considered a "sort of a method call". It's not really a useful mental model. No function is supposed to be able to reach outside its definition and rewrite the control flow of its caller.

Language syntax and futures is only a half of the problem. IMO actual async IO implementation is harder.

Too many incompatible platform-specific APIs: epoll and AIO on Linux, kqueue on BSD and OSX, IOCP and new thread pool API on Windows. I’m pretty sure I forgot couple others, and each of them have non-trivial amount of quirks, bugs, and performance-ruining edge cases. Also these APIs are not directly compatible to each other.

To be usable, async IO needs to be integrated into the rest of IO. You can’t just place that on top, e.g. Java did that, IMO didn’t work particularly well.

The combination of the above makes creating good cross-platform abstraction for async IO challenging.

Not saying impossible, but it’s very hard to do.

I’ve tried once in C++, for that project I needed epoll and iocp, but I wasn’t able within reasonable time. Ended up swapping relatively large IO-related parts of my app depending on platform.

Most of the work you're talking about has been completed for a long time in the crate "mio". It abstracts all of the platform-dependent async IO operations into a single consistent interface.

Are you sure about that?

I've looked at the library, it doesn't support anything besides epoll and IOCP, i.e. no BSD or OSX support, no AIO on Linux (the kernel one, not POSIX).

You probably don't want to consume IOCP API on Windows anymore, Vista introduced higher level and easier to use StartThreadpoolIo and friends.

Also on Windows, even with IOCP, you don't want to split async IO from thread pool. The OS kernel manages them both at the same time. Work stealing or other custom scheduling is usually slower than what kernel does.

It says that it is also backed by kqueue and supports FreeBSD, NetBSD, and OS X.

I found this issue where they're discussing rewriting the Windows implementation using the library `wepoll`.

What would be the advantages of supporting AIO on Linux?

> rewriting the Windows implementation using the library `wepoll`.

Interesting idea, but I don't like it too much. While it removes the huge complexity of manually managing IOCP and the required resources, wepoll relies on an undocumented API. The new Vista+ threadpool-based IO also removes that complexity, and additionally removes the complexity of implementing thread pools at a higher level in Tokio. It's documented and supported, and Rust has issues with WinXP support anyway.

> What would be the advantages of supporting AIO on Linux?

Faster async disk IO for the apps which are OK with the limitations, i.e. which read/write complete blocks of O_DIRECT files. Databases come to mind.

BTW, io_uring feature coming into Linux kernel removes most limitations of AIO while also improving performance.

It was glossed over why `await expr` isn't in the running; I take it it's because the language team doesn't like the necessity of adding parentheses any time you want to do postfix expressions on the await results e.g. `(await foo()).bar()`?

That was discussed in a previous post, but yes that's why. And in Rust, the biggest postfix expression is the extremely common `?` operator, expected to be used with most futures.

I haven't really been following along very much. I take it from your comment that await in Rust will not automatically propagate errors upwards like it does in JavaScript? So you have to use `foo.await?` in the normal case if you only want to handle successful results?

That's correct. In a sense Javascript await isn't the thing that propagates errors, that's just the language's usual exception behavior plumbed through the Promise.

That's actually a really good point, I never thought of it that way.

Serious suggestion, and I assume you have -- have you considered ".await?" as an alternative? It won't conflict with field names and cribs on the fact that "?" changes control flow.

That conflicts with `.await` + the ? postfix operator. Namely, if `.await` returns a Result, the ? operator could then be used like it is in code today. In fact, this was referenced in the article:

> The primary argument in favor of postfix was its better composability with methods and the ? operator.

Yeah, I thought that through after, and revised my thoughts. I wrote it up top-level but I think the best way forward is embracing that this isn't consistent with anything else in Rust and introducing new syntax, specifically '!await' or '@await' postfix operator. The reason there isn't a good answer that's consistent is that the behavior is inconsistent. As such, it needs new syntax.

Either the syntax is inconsistent or the semantics are, and IMO, the former is preferable to the latter.

Yes, it was.
