Rust: A Critical Retrospective (bunniestudios.com)
577 points by sohkamyung 37 days ago | hide | past | favorite | 494 comments



> This is a superficial complaint, but I found Rust syntax to be dense, heavy, and difficult to read

I'm a huge Rust fan, but I sort of agree. First, I dislike C-style syntax in general and find it all very noisy, with lots of unnecessary symbols. Second, while I love traits, when you have a trait-heavy type all those impl blocks start adding up, giving you lots of boilerplate and often not much substance (esp. with all the where clauses on each block). Add in generics and it is often hard to see what the code is trying to achieve.

That said, I've mostly reached the conclusion that much of this is unavoidable. Systems languages need to have lots of detail you just don't need in higher level languages like Haskell or Python, and trait impls on arbitrary types after the fact is very powerful and not something I would want to give up. I've even done some prototyping of what alternative syntaxes might look like and they aren't much improvement. There is just a lot of data that is needed by the compiler.

In summary, Rust syntax is noisy and excessive, but I'm not convinced much could have been done about it.


IMHO it's at least somewhat better than "modern" C++ where you end up having to wrap virtually every single thing in some kind of template class, and that's without the benefit of much stronger memory and thread safety.

Overall I think Rust is a hands-down win over C and C++. People who want it to be like Go are probably not doing systems-level programming, which is what Rust is for, and I have severe doubts about whether a rich systems-level language could be made much simpler than Rust and still deliver what Rust delivers. If you want full control, manual memory management with safety, other safety guarantees, a rich type system, high performance, and the ability to target small embedded use cases, there is a certain floor of essential complexity that is just there and can't really be worked around. Your type system is going to be chonky because that's the only way to get the compiler to do a bunch of stuff at compile time that would otherwise have to be done at runtime with a fat runtime VM like Go, Java, C#.NET, etc. have.

Go requires a fat runtime and has a lot of limitations that really hurt when writing certain kinds of things like high-performance codecs. It's outstanding for CRUD, web apps, and normal apps, and I really wish it had a great GUI story, since Go would be a fantastic language for writing everyday desktop and mobile UI apps.


For what purpose would you need to wrap everything in a template class? In my work, I've only touched templates a couple of times in years. They're useful, but I don't see how it's always needed.


std::unique_ptr and std::shared_ptr are templated wrapper classes.


Oh. I misunderstood. I was thinking of user-made templates, not the built-in ones from the standard library. I don't see the issue though. Something like vector feels intuitive. I have a vector<int> or a vector<myType> etc. A pointer to an int, a unique_ptr<int>. It's convenient and fairly flexible. I don't really see the downside, or how it could be done better given static typing.


Something like Kotlin but with a borrow checker might be the ultimate in developer ergonomics for me. I sat down at some point to wrap my head around Rust and ended up abandoning that project due to a lack of time. And because it was hard. The syntax is a hurdle. Still, I would like to pick that up at some point but things don't look good in terms of me finding the time.

However, Rust's borrow checker is a very neat idea and one that is worthy of copying for new languages; or even some existing ones. Are there any other languages that have this at this point?

I think the issue with Rust is simply that it emerged out of the C/C++ world, and they started by staying close to its syntax and concepts (pointers and references), and it kind of went downhill from there. Adding macros to the mix allowed developers to fix a lot of issues, but at the price of having code that is not very obvious about its semantics to a reader. It works, and it's probably pretty in the eyes of some. But to me it looks like Perl and C had a baby. Depending on your background, that might be the best thing ever, of course.


The borrow checker doesn’t really work without lifetime annotations. When I see complaints about Rust, that seems to be the thing most are talking about. The issue is that the notion of an object lifetime is a hard thing to express with familiar type systems. It’s an unfamiliar concept.


Yep. And the struggle learning Rust is that you sort of need to learn the borrow checker and lifetimes all at once. You can’t use Rust at all without any of that stuff, and there’s a lot to learn before you feel productive! Even a year into Rust I can still throw things together a lot faster in JavaScript. That may always be true.

So I doubt rust will ever displace python and javascript for everyday app development. But it’s an excellent language for operating systems, game engines, databases, web browsers, IDEs and things like that where stability and performance matter more than cramming in ever more features.


The trick is to avoid long living references as much as possible. You need to unlearn the Java/JS/Python style where everything is a reference. In Rust you should default to value (fully owned) types and move semantics and use references for temporary borrows only. Think about cloning first when you need the data in more than one place, before considering references.

It is surprising how far you can get with that approach without ever writing a single lifetime annotation.
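A minimal sketch of the "owned values first" style described above; the names (Config, spawn_worker) are illustrative only. Each worker gets its own clone of the data, so nothing is borrowed long-term and no lifetime annotations are needed:

```rust
// Fully owned data; Clone is derived so copies are cheap to express.
#[derive(Clone, Debug, PartialEq)]
struct Config {
    name: String,
    retries: u32,
}

// Takes ownership of its own copy; nothing borrowed, nothing to annotate.
fn spawn_worker(cfg: Config) -> String {
    format!("{} (retries = {})", cfg.name, cfg.retries)
}

fn main() {
    let cfg = Config { name: "indexer".to_string(), retries: 3 };
    // Clone when the data is needed in more than one place...
    let a = spawn_worker(cfg.clone());
    let b = spawn_worker(cfg.clone());
    // ...and use a plain reference only for a short, local borrow.
    let label = &cfg.name;
    println!("{a} / {b} / {label}");
}
```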


I’m not sure I agree. There definitely isn’t a “one-size-fits-all” way of writing Rust, but I do think that falling back to an “error here? try .clone()” mentality ends up defeating the purpose.

Maybe what Rust needs is a “build a program with lots of allocations” and then “whittle down allocations with smart use of references” section of the book.


> Maybe what Rust needs is a “build a program with lots of allocations” and then “whittle down allocations with smart use of references” section of the book.

Rem acu tetigisti. This is what needs conveying. I think with Rust people are often speaking past one another. One person says "hey, you can't write EVERYTHING with meticulously-planned-out lifetimes, cows for every string, etc", and what they have in mind is a junior/novice programmer's first-time experience (or any combination thereof).

Whereas another person says "you can't clone everything everywhere, or wrap everything in a RefCell", and they are right about what they mean: that properly written software eventually, to be respectably efficient, needs to replace that kind of code with code that thoughtfully uses appropriate lifetimes to truly benefit from what Rust provides.

As usual, Wittgenstein is right, specifically in what he says in s.4 of his famous Buzzfeed article[0]: most so-called problems are merely confusions of language, where two people mistake an inconsistency in their use of signs for an inconsistency in their opinions. If they set it all out more fully and explicitly, I don't think there would be much disagreement at all about what's appropriate, at least among the 80% of reasonable people.

[0] https://www.buzzfeed.com/ludwigwittgenstein/fantastic-ways-t...


That was my first reaction too, but then I realized it's also value vs. reference types

i.e. Java doesn't have value types and that makes the syntax a lot cleaner vs. C++, and I'm pretty sure Kotlin must be the same

So you could still have ownership in a language with references only, and the syntax could be cleaner than Rust.

In fact I think there was a post about just that: Notes on a Smaller Rust https://without.boats/blog/notes-on-a-smaller-rust/


You very well may be able to, but any language that wants to position itself in C's niche won't get away with not having value types. When it comes to systems languages, it's a sad fact (for language designers, anyway) that elegance often loses to mechanical sympathy.


Kotlin has value classes. Here's a good overview of how they work in Kotlin: https://github.com/Kotlin/KEEP/blob/master/notes/value-class...

That document is a few years old now and was intended as a design document, but value classes shipped with Kotlin 1.5. Apparently they are compatible with the Project Valhalla value objects that will be added to the JVM at some point. So, this stuff is coming.

I had to look it up because even though I write Kotlin a lot, value classes are not something I have used at all. Looks useful but not that big of a deal and doesn't really solve a problem I have. Data classes and records (in Java) are a bigger deal IMHO.

In practice, the way you deal with immutability in Kotlin is to keep most of your data structures immutable by default unless they need to be mutable. E.g. there's a List and a MutableList interface. Most lists are immutable unless you create a MutableList. Same with val vs. var variables: val variables can't be reassigned, and you use var only by exception when you really have to. The compiler will actually warn you if you do it without good reason. A data class with only vals can't be modified. Java is a bit more sloppy when it comes to mutability semantics. It has records now (whose fields are at least final), but ordinary classes expose mutable fields and setters by default. It has var but no val (you can use final to force this, but few people do). And so on.

Semantically this is not as strong as what Rust does of course but it's good enough to make e.g. concurrency a lot easier. Mostly, if you avoid having a lot of mutable shared state, that becomes a lot easier.

You could imagine a Kotlin-like language with much stronger semantics, implementing borrow checking instead of garbage collection. It wouldn't be the same language, of course, but I don't think it needs to be very different. Using it would not be massively different.


Kotlin doesn't really have value classes; those are wrappers around primitive types.

That is the thing with guest languages: they cannot invent something that the platform doesn't support, and if they do come up with lots of boilerplate code to fake features, they risk becoming incompatible when the platform eventually provides similar features with incompatible semantics.


Swift is getting ownership/borrow checker mechanisms in Swift 6[1]. Kotlin and Swift have very similar ergonomics [2] (little outdated link).

Together with Actors types/Distributed runtimes, Optional Automatic Reference Counting and good support on Linux, Swift is developing into a killer language imo.

[1] https://forums.swift.org/t/a-roadmap-for-improving-swift-per...

[2] http://nilhcem.com/swift-is-like-kotlin/


Rust and Kotlin have very similar syntax. The main differences are semantics-related, such as Kotlin relying on GC.


Is the syntax really that much of a burden? I don't have a perfect grasp of the syntax and would have trouble if I was having to write rust code with zero reference, but as long as I have other rust code open it is pretty easy to remember what each part does.


It's just hard to parse

    fn apply<A, B, C, G>(mut f: impl FnMut(B) -> G, a: A) -> impl FnMut(&B) -> C
    // must still be `for<'r> impl FnMut(&'r B) -> C`, because that’s what filter requires
    where
        G: FnMut(A) -> C,
        B: Copy, // for dereferencing
        A: Clone,
    {
        move |b| f(*b)(a.clone()) // this must do any bridging necessary to satisfy the requirements
    }

I wrote this code myself, and it's SLOW for me to read. Each part isn't hard, it's just too much crap


The same information can be communicated in different ways, trading one form of noise for another. I have a personal preference for Pascal-like or PL/I syntax. Instead of char **x or int&& x, there's x: byte ptr ptr. It's more to type and read, sure, but sometimes having an English-like keyword really helps clarify what's going on.


The English-words preference is a cliché that has been argued back and forth, to heaven and hell, since Cobol. I'm sympathetic to your opinion in some cases, but ultimately it fails in my view. Terse notation requires an investment in the beginning but then pays off massively with increased bandwidth: you can hold more of the code in your head. You don't see math being done as (x.plus(y)).pow(3); you see it done as (x+y)^3, and it gets even worse as expressions increase in size.

Ideally, the language should have enough syntax-bending facilities so that you can still simulate what you want; this is mostly just operator overloading and not treating custom types like second-class citizens. For example, your byte ptr ptr can easily be done in C++ with a bytePtrPtr struct, or even better, a Ptr<Ptr<Byte>> instantiation of the template Ptr<T> for any T. Overloading the dereference and conversion operators will completely hide any trace of the fact that it's not a built-in type, and compiler optimization and inlining will (hopefully, fingers crossed) ensure that no extra overhead is introduced by the abstraction.

As for the 'byte ptr ptr' syntax specifically: in F#, generic instantiation can be done by whitespace concatenation of type names, in reversed C++/Java/C# order, so the above C++ type would (if translated to F# somehow) literally be written out the way you want it. So even what seems like it would require language support (whitespace between related identifiers, generally a tricky thing in PL design) can actually be accomplished with clever and open-minded syntax.
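The operator-overloading idea in the comment above can be sketched in Rust too, where the Deref trait plays the role of C++'s overloaded dereference operator. The Ptr<T> newtype here is hypothetical, mirroring the C++ Ptr<Ptr<Byte>> example:

```rust
use std::ops::Deref;

// Hypothetical Ptr<T> wrapper: heap-allocated, but reads like a raw pointer.
struct Ptr<T>(Box<T>);

impl<T> Ptr<T> {
    fn new(value: T) -> Self {
        Ptr(Box::new(value))
    }
}

// Overloading dereference hides the wrapper entirely at the use site.
impl<T> Deref for Ptr<T> {
    type Target = T;
    fn deref(&self) -> &T {
        &*self.0
    }
}

fn main() {
    // Nesting composes like the built-in type: Ptr<Ptr<u8>> ~ "byte ptr ptr".
    let p: Ptr<Ptr<u8>> = Ptr::new(Ptr::new(42));
    // Double dereference reads exactly like it would with plain pointers.
    assert_eq!(**p, 42);
    println!("{}", **p);
}
```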


That is a good point about typedefs, and I would hate to be using 'ADD 1, a, b TO x ROUNDED' instead of 1 + a + b + round(x). I'll also have to check out F#.


I agree, and my ideas for alternative syntax were effectively this. They were, in my opinion, a slight improvement, but still result in lots of syntax. My point is that while I might want a more "python-like" or "ML-like" syntax we often forget that it simply isn't possible in the same way those languages use it, and by the time we add all the extra things we need, it doesn't look that much less "noisy".


> It's more to type and read

I wouldn't even say that. Using words (or multi-letter symbols generally) may be more to type, but virtually everybody uses editors with completion features and those completion features tend to work better with words than symbols. Furthermore, despite there being more characters, I don't think it's actually more to read. People who are fluent in reading languages written with alphabet systems don't read letter-by-letter, but instead read word-by-word, using effortless word recognition.


Typing words is faster than symbols for most folks even without an IDE. Typing "&{%" takes way longer than "static long foo".


> People who are fluent in reading languages written with alphabet systems don't read letter-by-letter, but instead read word-by-word, using effortless word recognition.

It all adds up. Languages like COBOL, Pascal, or Ada (originally designed for terminals with very limited character sets, sometimes even lacking lowercase text, thus requiring case-insensitive syntax) make it a lot harder to survey larger code blocks.


> It all adds up.

If that's true, I would expect logographic writing systems to cause less reader fatigue than alphabetic writing system. But as far as I'm aware that isn't the case, and the two are roughly equivalent.


As far as I know, Chinese speakers read texts on average faster than English speakers


One of the most readable languages that I've ever seen is XQuery, which is also very verbose, and prefers keywords over punctuation.


Considering what XQuery does the syntax does a great job of being readable. Every time I look at React components using JSX I'm unfavourably comparing it to XQuery.


> I found Rust syntax to be dense, heavy, and difficult to read

Reminds me of this section of Rich Hickey talk: https://www.youtube.com/watch?v=aSEQfqNYNAc


There are similar issues in Rust to the ones Hickey talks about in Java, in terms of cognitive overload and the difficulty a human has parsing the program. However, I've found Rust largely avoids the problem from the HTTP servlet example (a bunch of different classes, each with its own specific interface full of getters and setters) because of trait reuse.


Familiarity also alleviates the issue. I can remember when I first encountered TeX in the 80s and Perl in the 90s and thought the code looked like line noise and now I no longer see that (even in Larry Wall–style use-all-the-abbreviations Perl).


The problem is that familiarity needs to be maintained or you can lose it. As someone who doesn't get to use Rust at my day job, I find that hard to keep fresh.

I only occasionally dabble in Rust in my free time, and coming back to a project of mine after months without any Rust... let's just say that line noise made me prematurely murder some of my pet projects.

Sure, it probably gets better with time, but it is still a cost that one pays.


> ...thought the code looked like line noise and now I no longer see that

"I don't even see the code anymore. I just see blonde, brunette, ..."

I myself have just started to get like that with my understanding of Rust:

"I don't even see the `impl<P: 'static, const PS: usize> IndexMut<usize> for SomeThing<P, PS>` anymore. I just see a mutable iterator."


It alleviates the issue the way the frog doesn't notice in the "boiling frog" fable. That is, not in a good way. The cognitive load to parse and understand it is still there; you're just not as aware of it distracting from other matters. Some (me) would say it distracts from more important things, like how units of code compose and what the purpose of a program is.


I find that Rust tends to have code that goes sideways more than downward. I prefer the latter, and most C code bases that I find elegant are like that.

It is like that because of all the chaining that one can do. It is also just a feeling.


There are two upcoming features, let chains and let else, to counter the sideways drift.

Sometimes it's also the formatter though that outputs intensely sideways drifting code: https://github.com/rust-lang/rust/blob/1.60.0/compiler/rustc...


I think it's because of the expression focus. It's easier to make the code flow downward like a waterfall when it's imperative, but then it's harder to reason about values and state.


I have noticed this tendency as well.

To counteract it, I write exit-early code like this:

    let foo_result = foo();
    if let Err(e) = foo_result {
        return Bar::Fail(e);
    }
    let foo_result = foo_result.unwrap();
    ...


Any reason why this wasn't preferred?

    let foo_result = foo()
        .map_err(Bar::Fail)?;


Bar::Fail is not wrapped in a Result type, so you can't use '?' with it (on stable at least).

You can write it like this:

  let foo_result = match foo() {
    Ok(v) => v,
    Err(e) => return Bar::Fail(e)
  };


The Result type is the return value from foo() -- Bar::Fail does not need to wrap a Result. foo() returns Result<T, E>, and map_err() would convert it to Result<T, Bar>. I think GP's `map_err()?` is the most straightforward way of writing this idea (and it's generally speaking how I would suggest writing Rust code).
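A runnable sketch of the `map_err(...)?` pattern under discussion, using the thread's hypothetical foo/Bar names. The key point is that `?` requires the enclosing function to return a Result, with Bar as the error side:

```rust
// Hypothetical error enum from the thread; Fail wraps the original error.
#[derive(Debug, PartialEq)]
enum Bar {
    Fail(String),
}

// Stand-in for the thread's foo(): a fallible operation.
fn foo(ok: bool) -> Result<i32, String> {
    if ok { Ok(7) } else { Err("boom".to_string()) }
}

// map_err converts Result<i32, String> into Result<i32, Bar>;
// `?` then either unwraps the Ok value or returns the Bar::Fail early.
fn run(ok: bool) -> Result<i32, Bar> {
    let value = foo(ok).map_err(Bar::Fail)?;
    Ok(value * 2)
}

fn main() {
    assert_eq!(run(true), Ok(14));
    assert_eq!(run(false), Err(Bar::Fail("boom".to_string())));
    println!("ok");
}
```

Note that Bar::Fail is used directly as a function String -> Bar, which is why it slots straight into map_err.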


GP's code will return Result<_, Bar>, the original code we are trying to fix just returns Bar.


If you are writing code to handle Results, it’s going to be a lot less painful to just return Result.


I do this too, with a different spin on it:

  let foo = match foo() {
      Ok(foo) => foo,
      Err(err) => return Err(err),
  };
I was typing it out so often I made editor aliases `lmo` and `lmr` for Option and Result


let me introduce you to the famous '?' operator.

The code above can be written as:

  let foo = foo()?;


LOL you're right! I just pasted the template here, but my defaults are mostly equivalent to plain old `?`. I don't use the match if `?` would work.


Early exit code would be easier to write if Rust supported guard let.


It's coming soon; it's already available on nightly.

   let Some(x) = foo() else { return 42 };


I'd suggest that something like that is already achievable by having foo return an Option and combining it with unwrap_or.
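A minimal sketch of that unwrap_or suggestion, with a hypothetical foo returning an Option:

```rust
// Stand-in fallible function for illustration.
fn foo(flag: bool) -> Option<i32> {
    if flag { Some(7) } else { None }
}

fn main() {
    // Instead of a guard-let early return, fall back to a default in place.
    let x = foo(false).unwrap_or(42);
    assert_eq!(x, 42);
    let y = foo(true).unwrap_or(42);
    assert_eq!(y, 7);
    println!("{x} {y}");
}
```

This covers the "default value" case; it doesn't replace guard let when the None branch needs to return from the enclosing function.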


Like this?

    if let Ok(x) = my_func() {
        // ...
    }
Or do you mean something else?


Isn’t that good general practice anyway? Exit early.


You'd be surprised. For every person who thinks exit-early is good, you'll run into another who prefers a single exit. I worked at a C++ shop that preferred "single exit", and some methods had an ungodly number of conditions just to make this possible. Ugh.


In my experience, a preference for single exit comes from C where you always need to make sure to clean up any resources, and an early exit is a great way to have to duplicate a bunch of cleanup logic or accidentally forget to clean things up.

Of course, that's what goto is actually good for.


I love Rust and use it every day, but the syntax bloat is something I will never get over. I don't accept that nothing could be done about it. There are all sorts of creative grammar paths one could take in designing a language; an infinite number, in fact. I would really like to see a transpiler that could introduce term-rewriting techniques to make some of that syntax go away.


It's a pain to write all that boilerplate, I agree. I don't think it's bloat though. I've been doing Rust for a few years now, and when I revisit old, mostly forgotten code, I love that boilerplate. I rarely have to do any puzzling to infer things from the current file; it's all right there for me.

I feel this way about all the verbosity in Rust: some of it could likely be inferred, but having it all written down right where it is relevant is great for readability.


Having done a bit of C lately (lots in the past) and quite a bit of Rust, Rust is not verbose!

The functional syntax the author of this (good) article complains about is what this old programmer (with long experience in procedural C-like languages) has come to love.


>when I revisit old mostly forgoten code, I love that boilerplate. I rarely have to do any puzzling about how to infer what from the current file, it's just all right there for me.

This is going to sound absurd, but the only other language I had this experience with was Objective-C.

Verbosity is super underrated in programming. When I need to come back to something long after the fact, yes, please give me every bit of information necessary to understand it.


Useful verbosity is fine to me. However, I never wish to see another line of COBOL, thank you very much.


This is a really good point, IMO. I've never written extensive amounts of Objective-C, but in my adventures I've had to bolt together GUIs with Objective-C++ and I learned to love the "verbose" nature of Obj-C calls whenever I had to dive back into the editor for a game engine or whatever because it meant I didn't have to rebuild so much state in my head.


What you want is for the complex things to be verbose and the simple things to be concise.

Objective-C makes everything verbose. It’s too far in the other direction. Memories of stringByAppendingString


Eh, sure, I'm willing to buy Objective-C went too far in some places. It still works for coming back to, though. :)


That's true. I noticed this writing F# with an IDE vs. reading F# in a PR without one: it really becomes easier to read if you at least have the types on the function boundary.

F# can infer almost everything. It's easier to read when you do document some of the types though.


> F# can infer almost everything. It's easier to read when you do document some of the types though.

F# is also easier to avoid breaking in materially useful ways if (like TypeScript) you annotate return types even if they can be inferred. You'll get a more useful error message saying "hey stupid, you broke this here" instead of a type error on consumption.


"Creative" grammar introduces parsing difficulties, which makes IDE tooling harder to build and less effective overall. My overall guess is that Rust made the right choices here, though one can endlessly bikeshed about specifics.


Creative grammar can introduce parsing difficulties, but it doesn't have to.

I've made a couple small languages, and it's easy to end up lost in a sea of design decisions. But there are a lot of languages that have come before yours, and you can look to them for guidance. Do you want something like automatic semicolon insertion? Well, you can compare how JavaScript, Python[1], Haskell, and Go handle it. You can even dig up messages on mailing lists where developers talk about how the feature has unexpected drawbacks or nice advantages, or see blog posts about how it's resulted in unexpected behavior from a user standpoint.

You can also take a look at some examples of languages which are easy or hard to parse, even though they have similar levels of expressivity. C++ is hard to parse... why?

You'd also have as your guiding star some goal like, "I want to create an LL(1) recursive descent parser for this language."

There's still a ton of room for creativity within constraints like these.

[1]: Python doesn't have automatic semicolon insertion, but it does have a semicolon statement separator, and it does not require you to use a semicolon at the end of statements.
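The "guiding star" above, an LL(1) recursive descent parser, can be sketched compactly. This is an illustrative toy (single-digit numbers, no whitespace), not tied to any real language in the thread; the point is that one token of lookahead decides every production:

```rust
// Grammar: expr -> term ('+' term)* ; term -> factor ('*' factor)* ;
// factor -> digit | '(' expr ')'
struct Parser<'a> {
    input: &'a [u8],
    pos: usize,
}

impl<'a> Parser<'a> {
    // The single byte of lookahead that makes the grammar LL(1).
    fn peek(&self) -> Option<u8> {
        self.input.get(self.pos).copied()
    }

    fn expr(&mut self) -> i64 {
        let mut value = self.term();
        while self.peek() == Some(b'+') {
            self.pos += 1; // consume '+'
            value += self.term();
        }
        value
    }

    fn term(&mut self) -> i64 {
        let mut value = self.factor();
        while self.peek() == Some(b'*') {
            self.pos += 1; // consume '*'
            value *= self.factor();
        }
        value
    }

    fn factor(&mut self) -> i64 {
        match self.peek() {
            Some(b'(') => {
                self.pos += 1; // consume '('
                let value = self.expr();
                self.pos += 1; // consume ')'
                value
            }
            Some(d @ b'0'..=b'9') => {
                self.pos += 1;
                (d - b'0') as i64
            }
            other => panic!("unexpected input: {:?}", other),
        }
    }
}

fn eval(src: &str) -> i64 {
    Parser { input: src.as_bytes(), pos: 0 }.expr()
}

fn main() {
    assert_eq!(eval("1+2*3"), 7);
    assert_eq!(eval("(1+2)*3"), 9);
    println!("{}", eval("(1+2)*3"));
}
```

Because each branch is decided by peek() alone, there is never any backtracking, which is exactly the property that makes tooling and error recovery easier.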


> you can look to them for guidance. Do you want something like automatic semicolon insertion? Well, you can compare how JavaScript, Python[1], Haskell, and Go handle it

You can't look at JavaScript/Python/Go (I don't know about Haskell), because Rust is a mostly-expression language (therefore, semicolons have meaning), while JavaScript/Python/Go aren't.

The conventional example is conditional assignment to a variable, which in Rust can be performed via if/else as an expression; JS/Python/Go can't do that (they require alternative syntax).
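The conditional-assignment example mentioned above looks like this in Rust, where if/else is an expression and semicolon placement carries meaning:

```rust
fn main() {
    let cond = true;
    // The whole if/else evaluates to a value; each branch's final
    // expression has no trailing semicolon, so it becomes the result.
    let x = if cond { 1 } else { 2 };
    assert_eq!(x, 1);
    println!("{x}");
}
```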


> You can't look at JavaScript/Python/Go (I don't know about Haskell), because Rust is a mostly-expression language (therefore, semicolons have meaning), while JavaScript/Python/Go aren't.

I have a hard time accepting this, because I have done exactly this, in practice, with languages that I've designed. Are you claiming that it's impossible, infeasible, or somehow impractical to learn lessons from -- uhh -- imperative languages where most (but not all) programmers tend to write a balance of statements and expressions that leans more towards statements, and apply those lessons to imperative languages where most (but not all) programmers tend to write with a balance that tips more in the other direction?

Or are you saying something else?

The fact that automatic semicolon insertion has appeared in languages which are just so incredibly different to each other suggests, to me, that there may be something you can learn from these design choices that you can apply as a language designer, even when you are designing languages which are not similar to the ones listed.

This matches my experience designing languages.

To be clear, I'm not making any statement about semicolons in Rust. If you are arguing some point about semicolon insertion in Rust, then it's just not germane.


Not the parent, but you can certainly have an expression-oriented language without explicit statement delimiters. In the context of Rust, having explicit delimiters works well. In a language more willing to trade off a little explicitness for a little convenience, some form of ASI would be nice. The lesson is just to not extrapolate Rust's decisions as being the best decision for every domain, while also keeping the inverse in mind. Case in point, I actually quite like exceptions... but in Rust, I prefer its explicit error values.


Ruby is a great example of a language that’s expression oriented, where terminators aren’t the norm, but optionally do exist.


> I have a hard time accepting this, because I have done exactly this, in practice, with languages that I've designed.

I don't know which your languages are.

Some constructs are incompatible with optional semicolons, as semicolons change the expression semantics (I've given an example); comparison with languages that don't support such constructs is an apple-to-oranges comparison.

An apple-to-apple comparison is probably with Ruby, which does have optional semicolons and is also expression-oriented at the same time. In the if/else-specific case, it solves the problem by introducing an inconsistency in the empty statement, making it semantically ambiguous.


Ruby is expression-oriented like Rust and doesn't require semicolons. Neither do most functional languages.


Have you also written tooling - e.g. code completion in an IDE - for those small languages? There are many things that might be easy to parse when you're doing streaming parsing, but a lot more complicated when you have to update the parse tree just-in-time in response to edits, and accommodate snippets that are outright invalid (because they're still being typed).


Yes, that's a good example of exactly what I'm talking about. Code completion used to be really hard, and good code completion is still hard, but we have all these different languages to learn from and you can look to the languages that came before you when building your language.

Just to give some more detail--you can find all sorts of reports from people who have implemented IDE support, talking about the issues that they've faced and what makes a language difficult to analyze syntactically or semantically. Because these discussions are available to sift through in mailing lists, and there are even talks on YouTube about this stuff, you have a wealth of information at your fingertips on how to design languages that make IDE support easier. Like, why is it so hard to make good tools for C++ or Python, but comparatively easier to make tools for Java or C#? It's an answerable question.

These days, making an LSP server for your pet language is within reach.


Tooling should not depend on the code text, but on the language's AST.


I'm not an expert as I do not work on these tools but I don't think IDEs can rely solely on ASTs because not all code is in a compilable state. Lots of times things have to be inferred from invalid code. Jetbrains tools for example do a great job at this.


In practice though, getting the AST from the text is a computational task in and of itself, and the grammar affects the runtime of that. For instance, Rust's "turbofish" syntax, `f::<T>(args)`, is used to specify the generic type for f when calling it; this is instead of the perhaps more obvious `f<T>(args)`, which is what the definition looks like. Why the extra colons? Because parsing `f<T>(args)` in an expression position would require unbounded lookahead to determine the meaning of the left angle bracket -- is it the beginning of generics or less-than? Therefore, even though Rust could be modified to accept `f<T>(args)` as a valid syntax when calling the function, the language team decided to require the colons in order to improve worst-case parser performance.
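For concreteness, here is what the turbofish described above looks like in everyday code:

```rust
fn main() {
    // Turbofish on a method call: `::<i32>` picks the parse target.
    let n = "42".parse::<i32>().unwrap();
    assert_eq!(n, 42);

    // Turbofish on a type's associated function.
    let buf = Vec::<u8>::with_capacity(16);
    assert!(buf.capacity() >= 16);

    // Without the extra colons, `f < T > (args)` in expression position
    // could just as well be two chained comparisons, hence the ambiguity.
    println!("{n}");
}
```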


This is why using the same character as a delimiter and operator is bad.


How does C# manage to handle this without the turbofish syntax? What’s different in Rust?


It's not impossible to handle the ambiguity, it's just that you may have to look arbitrarily far ahead to resolve it. Perhaps C# simply does this. Or perhaps it limits expressions to 2^(large-ish number) bytes.


Comments tending not to be part of the AST makes that harder.


That's really cool that you think Rust syntax could be significantly improved. I'd really love to hear some details.

Here's the example from the post:

  Trying::to_read::<&'a heavy>(syntax, |like| { this. can_be( maddening ) }).map(|_| ())?;
How would you prefer to write this?


That whole example feels like a strawman, from my (maybe limited) experience something that's rather the exception than the norm.

First, lifetimes are elided in most cases.

Second, the curly braces for the closure are not needed and rustfmt gets rid of them.

Finally, the "map" on result can be replaced with a return statement below.

So, in the end we get something like:

  Trying::to_read(syntax, |like| this.can_be(maddening))?; 
  Ok(())


That part is actually not so bad?

  Trying\to_read\[&'a heavy](syntax, |like| { this. can_be( maddening ) }).map(|_| ())?;
I can't improve it that much


Rust has an extremely powerful macro system, have you tried that?


Rust macros are one of the more annoying features to me. They're great at first glance but whenever I want to build more fancy ones I constantly bump into limitations. For example they seem to be parsed without any lookahead, making it difficult to push beyond the typical function call syntax without getting compiler errors due to ambiguity.


Procedural macros can peek at upcoming tokens via `syn`'s `ParseStream::peek`. macro_rules! macros can fold this into the pattern matching.

e.g.

https://turreta.com/2019/12/24/pattern-matching-declarative-...
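A sketch of how macro_rules! pattern matching can stand in for lookahead: each arm keys on its leading token(s), so the macro effectively "peeks" by matching. The macro and its arms are illustrative, not from the linked article:

```rust
// Arms are tried in order; the first two match on a literal leading
// keyword, the last catches any ordinary expression.
macro_rules! kind_of {
    (struct $name:ident) => { "struct" };
    (enum $name:ident) => { "enum" };
    ($other:expr) => { "expression" };
}

fn main() {
    assert_eq!(kind_of!(struct Foo), "struct");
    assert_eq!(kind_of!(enum Bar), "enum");
    assert_eq!(kind_of!(1 + 1), "expression");
}
```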


But proc macros are limited by requiring another crate (unless things have changed in the last year). Sure, it’s just one extra crate in the project, but why must I be forced to?


Asked and answered in an adjacent comment.


But there's the weird limitation that procedural macros have to be in a separate crate.


Why is that weird? Procedural macros are compiler plugins. They get compiled for the platform you're building on, not the one you're building for, and so they need to be a separate compilation unit. In Rust, the crate is the compilation unit.


Because you can't just throw together a simple procedural macro to use in a specific project, as you can in other languages.


Nim, which technically accomplishes all (I assume) of the Rusty things that require syntax, manages to do it with quite a lot nicer syntax.


Nim accomplishes memory safety using a garbage collector. That's pretty dissimilar to rust and more comparable to go or D.


Nim allows you to chose what memory management method you want to use in a particular piece of software. It can be one of various garbage collectors, reference counting or even no memory management. It allows you to use whatever suits your needs.


> > Nim accomplishes memory safety using a garbage collector.

No memory management in Nim equals no memory safety guarantees, no? Well, in that case the statement above is true.


You can get management and safety with one of Nim's modes, as per the peterme link in my sibling, if you would like to learn more.


I don’t understand why you all are posting tedious details and well actuallys when the original assertion was (way back):

> Nim, which technically accomplishes all (I assume) of the Rusty things that require syntax, manages to do it with quite a lot nicer syntax.

Nim does not have something which gives both memory safety and no ((tracing garbage collector) and/or (reference counting)) at the same time. End of story.

The fact that Nim has an off-switch for its automatic memory management is totally uninteresting. It hardly takes any language design chops to design a safety-off button compared to the hoops that Rust has to jump through in order to keep its lifetimes in check.


>Nim does not have something which gives

You are simply incorrect, appear unwilling to research why/appear absolutist rather than curious, and have made clear that what I think is "clarification" or "detail expansion" you deem "tedious" or "nitpicking" while simultaneously/sarcastically implicitly demanding more details. That leaves little more for me to say.


It would be helpful if you pointed out the Nim memory management method that works the same as the Rust one.



You have managed to point out that tracing garbage collection and reference counting are indeed two ways to manage memory automatically. Three cheers for your illuminating clarification.


Sorry about being an arse. I got frustrated by all the talking-past-each-other.


While tracing garbage collection is indeed one possible automatic memory management strategy in Nim, the new --mm:arc may be what darthrupert meant. See https://uploads.peterme.net/nimsafe.html

Nim is choice. :-) {EDIT: As DeathArrow also indicated! }


Reference counting is a form of garbage collection.


Terminology in the field can indeed be confusing. In my experience, people do not seem to call reference counted C++ smart pointers "garbage collection" (but sure, one/you might, personally).

"Automatic vs manual" memory management is what a casual PL user probably cares about. So, "AMM" with later clarification as to automation options/properties is, I think, the best way to express the relevant ideas. This is why I said "tracing GC" and also why Nim has recently renamed its --gc:xxx CLI flags to be --mm:xxx.

Whether a tracing collector is even a separate thread or directly inline in the allocation code pathway is another important distinction. To muddy the waters further, many programmers often mean the GC thread(s) when they say "the GC".

What runtimes are available is also not always a "fixed language property". E.g., C can have a tracing GC via https://en.wikipedia.org/wiki/Boehm_garbage_collector and you can get that simply by changing your link line (after installing a lib, if needed).


Terminology in the field is what CS books like "The Garbage Collection Handbook" (https://gchandbook.org), or "Uniprocessor garbage collection techniques" (https://link.springer.com/chapter/10.1007/BFb0017182), define.

People don't call reference counted C++ smart pointers "garbage collection", because they aren't managed by the runtime, nor optimized by the compiler, rather rely on basic C++ features.

But they call C++/CX and C++/CLI ref types, automatic memory management, exactly because they are managed by the UWP and CLR runtimes respectively,

https://docs.microsoft.com/en-us/cpp/cppcx/ref-classes-and-s...

https://docs.microsoft.com/en-us/cpp/dotnet/how-to-define-an...


I doubt you are, exactly, but I think it's really hard to argue that the terminology, as often used by working programmers, does not confuse. ("Need not" != "does not"!) All that happened here is darthrupert made vague remarks I tried to clarify (and j-james did a better job at [1] - sorry!). Most noise since has been terminology confusion, just endemic on this topic, embedded even in your reply.

I may be misreading your post as declaration rather than explanation of confusion, but on the one hand you seem to write as if "people not calling RC smart ptrs 'GC' is 'reasonable'" yet on the other both your two books include it as a form of "direct GC" - GC Handbook: The Art of AMM with a whole Chapter 5 and the other early in the abstract. darthrupert just reinforced "working programmer usage" being "not academic use" elsewhere. [2] GCHB even has a glossary - rare in CS books (maybe not in "handbooks"?) So, is your point "Academics say one thing, but 'People' another?"

C++ features you mention were intended to blur distinctions between "compiler/run-time supported features", "libraries", and "user code". Many PLs have such blurring. Such features, basic or not, are optimized by compilers. So, neither compiler support nor "The Runtime" are semantic razors the way I think you would like them to be (but might "explain people/working programmers"). If one "The" or "collection" vs. "collector" are doing a lot of semantic work, you are in confusing territory. Also, human language/terms are cooperative, not defined by MS. MS is just one more maybe confusing user here.

Between intentional blurriness, loose usage, and many choices of both algos & terms used in books, papers, documentation and discussions, and the tendency for people to just "assume context" and rush to judgements, I, for one, don't see existence of confusion as mysterious.

Given the confusion, there seems little choice other than to start with a Big Tent term like "memory management" and then qualify/clarify, though many find "not oversimplifying" tedious. I didn't think this recommendation should be contentious, but oh well.

[1] https://news.ycombinator.com/item?id=31440715

[2] https://news.ycombinator.com/item?id=31445489


I see now that the GP wrote “a garbage collector” (not the article). Oops! “A reference counting method” doesn’t roll off the tongue. So it appears that your nitpicking was indeed appropriate.


Until you get a reference cycle. Then it's a form of garbage accumulation.


See the neighboring subthread: https://news.ycombinator.com/item?id=31438134 (which has details/links to more information and is more explicit than just the 4th footnote at the end of the mentioned twice before peterme link.)


Well, it's not exactly garbage collection in the way it's commonly understood, I believe:

https://nim-lang.org/blog/2020/10/15/introduction-to-arc-orc...


That doesn't really matter for syntax. You could easily add lifetime syntax to Nim.


It matters for the question of whether Nim manages to do the same things as Rust with better syntax.


Okay, so instead of Nim, consider a hypothetical language that has Nim-like syntax but Rust semantics. What would be the problem with that?


I'm curious what that assumption is based on. Rust and Nim are pretty different, and both of them have features that the other doesn't even try to have.


This is an interesting comparison of memory semantics I stumbled upon: https://paste.sr.ht/blob/731278535144f00fb0ecfc41d6ee4851323...

Nim's modern memory management (ARC/ORC) is fairly similar to Rust. ARC works by having the compiler insert reference-count updates and destructor calls, eliding them where it can: which is broadly comparable to Rust's ownership + borrow checker.

(A big difference is that Nim's types are Copy by default: this leads to simpler code at the expense of performance. You have control over this, keeping memory safety, with `var`, `sink`, and others, as highlighted in the above link.)

https://nim-lang.org/blog/2020/10/15/introduction-to-arc-orc...

For reference cycles (the big limitation of reference counting), there's ORC: ARC + a lightweight tracing garbage collector.

As I understand it Rust also cannot handle reference cycles without manually implementing something similar.

https://nim-lang.org/blog/2020/12/08/introducing-orc.html

https://doc.rust-lang.org/book/ch15-06-reference-cycles.html
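Concretely, the Rust side of that comparison is the Weak-pointer pattern the linked book chapter describes: `Rc` alone leaks a cycle, so the back-edge is demoted to a non-owning `Weak`. (This `Node` type is just an illustration.)

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Parent links are Weak so a parent<->child cycle holds only one
// strong edge and can still be freed.
struct Node {
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let leaf = Rc::new(Node {
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![]),
    });
    let branch = Rc::new(Node {
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![Rc::clone(&leaf)]),
    });
    // Downgrading produces a Weak: it does not bump the strong count,
    // so dropping `branch` still frees the whole structure.
    *leaf.parent.borrow_mut() = Rc::downgrade(&branch);

    assert_eq!(Rc::strong_count(&leaf), 2);   // `leaf` + branch.children
    assert_eq!(Rc::strong_count(&branch), 1); // only `branch` itself
    assert_eq!(Rc::weak_count(&branch), 1);   // leaf's parent link
}
```

Nim's ORC automates the cycle handling that this pattern does by hand, at the cost of carrying a (lightweight) tracing collector.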


ARC is not "fairly similar" to idiomatic Rust. ARC is not idiomatic in Rust; it's a fallback for when you can't make do with lifetimes and borrowing.


Nim passes by value by default, which eliminates much of the complexity overhead of lifetimes and borrowing in most programs. (the compiler does optimize some of these into moves.)

But when you do want to pass by reference: that's where Nim's move semantics come in. These are what are fairly similar to Rust's lifetimes and borrowing, and what the paste.sr.ht link briefly goes over.

If you're interested, you can read more about Nim's move semantics here:

https://nim-lang.org/docs/destructors.html


Note that rust doesn’t have ARC; there is an atomic reference counted pointer, but it’s not automatic, which is what the a in ARC stands for.


Tongue in cheek: Then it's exactly like (modern) Nim, only that Nim does the fallbacking automatically as needed ;) There are lots of devils in the details, I assume.


Rust had a lot to go against so I can't blame them for somehow subpar syntax. Maybe it's gonna be revised.. maybe some guys will make a porcelain layer or a rustlite.


There's definitely a space for native languages that are not as dense, and possibly not as performant, as Rust. I will trade some readability when I need strict memory guarantees and use Rust, but most of the time I'd like to use something readable and fun to use, which Rust ain't.

I used to use Go, not much of a fan anymore, but I'm liking Crystal a lot to fill this space. Eventually Zig when it's more mature.


Just started learning Go. If I may ask, why did you lose interest in Go?


Not enough syntax sugar, not functional enough, and a few smaller papercuts.

It feels dry, like a humourless android. Not very fun to write but probably the most pragmatic choice for the problem. I prefer having fun when I program.


I tend to see that as an advantage - the language gets out of the way and lets me just have fun with the problem itself. Especially if I'm referencing code other people wrote (ie libraries) where I can pretty much instantly grok what's going on and focus on the logic. Different strokes


I wouldn't call a huge mountain of `if err != nil` “gets out of the way”.


Your summary is the thing I struggle with as well. How do you deal with the issues of density without either making it more verbose by a wide margin (which also hampers readability) or hiding information in a way that makes the code less obvious, which is, IMO, worse?

Software is becoming more and more complex and unless there are entirely different design patterns we have failed to find, managing and understanding that during both the writing and the maintenance of software is the fundamental problem of our time. Someone else in these comments mentioned leaning more heavily into IDE tooling and I do wonder if we are coming to a point where that makes sense.


> unless there are entirely different design patterns we have failed

It’s not that we’ve failed to find different design patterns, it’s that we found these patterns in the 70s and haven’t done much with them since. Since C there has been a pretty constant march toward more imperative programming, but imperative programming I feel has reached its peak for the reasons you describe.

We’re only just starting to explore the functional programming space and incorporate those learnings into our work. But what about logic programming, dataflow programming, reactive programming, and other paradigms that have been discovered but not really fully explored to the extent imperative programming has been? I think there’s a lot of room for improvement just by revisiting what we’ve already known for 50 years.


The imperative design matches the hardware too well to just dispense with. Rust's abstractions are probably the closest we've gotten to a composable design that fits closely to the hardware while mechanically preventing writing the most common bugs at that level.

That said I agree that we've barely scratched the surface of the space of good paradigms; I'm partial to logic programming but most are underexplored. Perhaps other approaches can use Rust or its IR (MIR?) as a compilation target. As an example, this strategy is being used by DDlog ( https://github.com/vmware/differential-datalog ).


> The imperative design matches the hardware too well to just dispense with.

I don't think we should dispense with it for that reason, but we also then have to admit imperative programming doesn't match the design of promised future hardware as well as it has past hardware. The future will be focused around manycore distributed heterogenous compute resources like GPGPU, neural cores, computer vision accelerators, cloud compute resources, etc.


Yeah, future computing hardware will be more diverse, but most of the things you mentioned are ultimately programmed imperatively. GPGPUs are converging to look like many tiny general-purpose CPUs, neural cores and accelerators are implemented as specialized coprocessors that accept commands from CPUs, distributed cloud compute is just farms of CPUs. All of these things have an imperative kernel, and importantly every one of them cares very much about path dependencies, memory hierarchy, and ownership semantics, the very things that Rust brings to the table.

Even languages with 'exotic' execution semantics, like Prolog's unification, are actually implemented with an imperative interpreter like the Warren Abstract Machine (WAM), and there's no obvious path to implementing such unorthodox semantics directly in hardware. Low-level imperative programs aren't going away; they're just being isolated into small kernels, and we're building higher-level abstractions on top of them.


Sure, but that's not really imperative programming. That's imperative programs finding their correct niche. It's a shift in perspective. Today we do imperative programming with a little distributed/async/parallel/concurrent programming sprinkled in. In the future distributed/async etc. will be the default with a little imperative programming sprinkled in.


I wanted to learn Go while working professionally with PHP and Python. I loved the simplicity and syntax of Go overall. I learned enough Go to build a small internal tool for our team, and it is production-ready (at least internally). Then I wanted to learn Rust, since it is so popular and always compared with Go, and the syntax made me lose interest. Rust may be amazing, and I'll be more open-minded about trying it later, but it didn't spark interest. Superficial, I know, since the real power is in the functionality, but just an anecdote from an amateur.


Another point is just about the maturity of language and libraries.

I started learning Rust recently and when searching how to do some pretty basic stuff, the answers are like "well you used to do this but now you do this and soon you'll be able to do this but it's not stabilized"

I figure I'll just check back in 5 years, I don't have the time or patience to be on the bleeding edge when I'm trying to get things done.


Good point. In software, especially the web world, I am usually wary of tech that is not at least 10 years mature/old.


> In software especially web world...

The frontend frameworks used to have really short lifecycles "Oh,you're still using FooJS? That's so last year. Everyone's on Bar.js now- it's so much better"


Between this and the syntax/learning curve I've not gotten around to learning it (and maybe some Squirrel! as well).

One of these days. In the meantime I'm rather happy with Go because it's easy for a code monkey like myself to do the needful.


Yes. Go is better unless you need the features of Rust.


Completely agree. I think of the extra syntax as us helping the compiler check our code. I have to write a few more characters here and there, but I spend way less time debugging.

Although I may have PTSD from Rust, because lately I find myself preferring Qbasic in my spare time. ¯\_(ツ)_/¯


Probably a very general response to overload of any kind.. you start to reevaluate the opposite side of the spectrum.


the dialectical method


I'd have used 'banging on both guard rails', but yours sounds better.


> That said, I've mostly reached the conclusion that much of this is unavoidable. Systems languages need to have lots of detail you just don't need in higher level languages like Haskell or Python, …

I am not convinced that there's so much more to Rust than there is to GHC Haskell to justify so much dense syntax.

There are many syntax choices made in Rust based, I assume, on its aim to appeal to C/C++ developers that add a lot of syntactic noise - parentheses and angle brackets for function and type application, double colons for namespace separation, curly braces for block delineation, etc. There are more syntax choices made to avoid being too strange, like the tons of syntax added to avoid higher kinded types in general and monads in particular (Result<> and ()?, async, "builder" APIs, etc).

Rewriting the example with more haskell-like syntax:

    Trying::to_read::<&'a heavy>(syntax, |like| { this. can_be( maddening ) }).map(|_| ())?;

    Trying.to_read @('a heavy) syntax (\like -> can_be this maddening) >> pure ()
It's a tortuous example in either language, but it still serves to show how Rust has made explicit choices that lead to denser syntax.

Making a more Haskell-like syntax perhaps would have hampered adoption of Rust by the C/C++ crowd, though, so maybe not much could have been done about it without costing Rust a lot of adoption by people used to throwing symbols throughout their code.

(And I find it a funny place to be saying _Haskell_ is less dense than another language given how Haskell rapidly turns into operator soup, particularly when using optics).


> In more plain terms, the line above does something like invoke a method called “to_read” on the object (actually `struct`) “Trying”...

In fact, this invokes an associated function, `to_read`, implemented for the `Trying` type. If `Trying` was an instance `Trying.to_read...` would be correct (though instances are typically snake_cased in Rust).

I'll rewrite the line, assuming `syntax` is the self parameter:

    syntax
        .to_read::<&'a heavy>(|like| this.can_be(maddening))
        .map(|_| ())?;
In my opinion, this is honestly not bad.


> like the tons of syntax added to avoid higher kinded types in general and monads in particular (Result<> and ()?, async, "builder" APIs, etc).

Rust is not "avoiding" HKT in any real sense. The feature is being worked on, but there are interactions with lifetime checking that might make, e.g. a monad abstraction less generally useful compared to Haskell.


> Systems languages need to have lots of detail you just don't need in higher level languages like Haskell or Python

True, but Rust is being used for a lot more than just system programming, judging from all the "{ARBITRARY_PROGRAM} written in Rust" posts here on HN.


>That said, I've mostly reached the conclusion that much of this is unavoidable. Systems languages need to have lots of detail you just don't need in higher level languages like Haskell or Python, and trait impls on arbitrary types after the fact is very powerful and not something I would want to give up.

Have you checked out C++20 concepts? It supports aliases and doesn't require explicit trait instantiations, making it possible to write such generic code with much less boilerplate.


My experience in C++ prior to 20 is that it is a lot more verbose/boilerplatey than Rust. I'd love to see that get better, but I think C++ is starting from significantly behind.


C++17:

    template<typename T, std::enable_if_t<std::is_floating_point_v<T>, int>* = nullptr>
    void func(T fp) { ... }
C++20:

    void func(std::floating_point auto fp) { ... }


There is an equivalent syntax in Rust to both of those examples, and in both cases I find it less verbose. The template variant is roughly equivalent to:

    fn func<T: FloatingPoint>(fp: T) { ... }
And the "auto" variant is similar to impl argument in Rust:

    fn func(fp: impl FloatingPoint) { ... }


I really don't see in which way the second case is less verbose, especially if you add a non-void return type, e.g. i32. The first case would also be doable like this, which is pretty much the same as your first example with the added "template" keyword:

    template<std::floating_point T> 
    void func(T t) { ... }
(also, it's not really equivalent - if I'm not mistaken with traits you can only use what the trait declares ; in C++ you can for instance do something like

    template<typename T>
    concept CanBlah = requires (T t) { 
      t.blah();
    };
and still be able to do

    void func(CanBlah auto t) { 
      log << "func:" << t;
      t.blah();
    }
instead of polluting the prototype with all possible side concerns)


> instead of polluting the prototype with all possible side concerns)

C++ Concepts are duck typed, and Rust's Traits are not, so in Rust you are expressing meaning here, and in C++ only making some claims about syntax which perhaps hint at meaning.

WG21 seems to dearly wish this wasn't so, offering Concepts which pretend to semantics they don't have, such as std::totally_ordered and I'm glad to see your "CanBlah" concept doesn't do this, to be sure all things which match this requirement can, indeed blah() although we've no idea what that can or should do.

Once you've accepted that you only have duck typing anyway, you're probably going to have to explain in your documentation the actual requirements for this parameter t, as the prototype merely says it CanBlah and that's not actually what we care about.

In contrast the Rust function we looked at actually does tell us what is required here, something which "implements FloatingPoint", and that implementation (plus the data structure itself) is all that's being exposed.


I don't understand - you seem to say that duck typing is a bad thing. In my experience, some parts of a program have to be strongly typed and some have to be "lightly" - I'd say that a good 5% of my work is to make C++ APIs that look&feel closer to dynamic languages with even less typing checks than the C++ baseline.

> WG21 seems to dearly wish this wasn't so

how so ?

> Once you've accepted that you only have duck typing anyway, you're probably going to have to explain in your documentation the actual requirements for this parameter t, as the prototype merely says it CanBlah and that's not actually what we care about.

the documentation having to state "t can be logged" would just be useless noise and a definite no-pass in code review aha

> In contrast the Rust function we looked at actually does tell us what is required here, something which "implements FloatingPoint", and that implementation (plus the data structure itself) is all that's being exposed.

my personal experience from other languages with similar subtyping implementation (ML-ish things) is that this looks good in theory but is just an improductive drag in practice


> you seem to say that duck typing is a bad thing

C++ didn't have any other practical choice here.

But this introduced another case where C++ has a "false positive for the question: is this a program?" as somebody (Chandler Carruth maybe?) has put it. If something satisfies a Concept, but does not model the Concept then the C++ program is not well formed and no diagnostic is required.

> how so ?

I provided an explanation with an example, and you elided both.

> the documentation having to state "t can be logged" would just be useless noise and a definite no-pass in code review aha

In which case it's your responsibility to ensure you can log this, which of course CanBlah didn't express.

> similar subtyping implementation

The only place Rust has subtyping is lifetimes, so that &'static Foo can substitute for any &'a Foo and I don't think that's what you're getting at.


> WG21 seems to dearly wish this wasn't so, offering Concepts which pretend to semantics they don't have, such as std::totally_ordered and I'm glad to see your "CanBlah" concept doesn't do this, to be sure all things which match this requirement can, indeed blah() although we've no idea what that can or should do.

I don't understand how random assumptions on what WG21 may or may not think counts as an example (or anything to be honest)

> If something satisfies a Concept, but does not model the Concept then the C++ program is not well formed and no diagnostic is required.

uh... no ?

I think that you are referring to this blog post: https://akrzemi1.wordpress.com/2020/10/26/semantic-requireme... for which I entirely disagree with the whole premise - the only, only thing that matters is what the compiler understands. The standard can reserve itself the right to make some cases UB, such as when trying to sort un-sortable things, just like it can state that adding two numbers can cause UB, and that's fine: it's the language's prerogative and is all to be treated as unfortunate special cases; for the 99.99999% remaining user code, only the code matters and it makes no sense to ascribe a deeper semantic meaning to what the code does.

> In which case it's your responsibility to ensure you can log this, which of course CanBlah didn't express.

A metric ton of side concerns should not be part of the spec, such as logging, exceptions, etc - everyone saw how terrible and counter-productive checked exceptions were in java for instance. Specifying logging explicitly here would be a -2 in code review as it's purely noise: the default assumption should be that everything can log.

> The only place Rust has subtyping is lifetimes, so that &'static Foo can substitute for any &'a Foo and I don't think that's what you're getting at.

I meant polymorphism, my bad


> uh... no ?

Actually, yes. I was hoping people like you were familiar, but that was actually more to ask of you than I'd assumed since C++ has a tremendous amount of such language in the standard, going in I'd figured hey maybe there's a half dozen of these and I was off by at least one order of magnitude. That's... unfortunate.

Exactly which language ended up being in the "official" ISO standard I don't know, but variations on this are in various circulating drafts through 2020 "[if] the concept is satisfied but not modeled, the program is ill-formed, no diagnostic required", if you're trying to find it in a draft you have, this is in the Libraries section in drafts I looked at, although exactly where varies. [ Yes that means in principle if you completely avoid the C++ standard library this doesn't apply to you... ]

> I think that you are referring to this blog post

It's possible that I've read Andrzej's post (I read a lot of things) but I was just reporting what the standard says and all Andrzej seems to be doing there is stating the obvious. Lots of people have come to the same conclusion because it isn't rocket science.

> only the code matters and it makes no sense to ascribe a deeper semantic meaning to what the code does.

This might be a reasonable stance if the code wasn't written by people. But it is, and so the code is (or should be) an attempt to express their intent which is in fact semantics and not syntax.

But let's come back to std::totally_ordered, although you insist it doesn't "count as an example" for some reason, it is in fact a great example. Here's a standard library concept, it's named totally_ordered, so we're asking for a totally ordered type right? Well, yes and no. Semantically this is indeed what you meant, but C++ doesn't provide the semantics, C++ just gives you the syntax check of std::equality_comparable, and if that's a problem you're referred to "No diagnostic required".


> "[if] the concept is satisfied but not modeled, the program is ill-formed, no diagnostic required",

Couldn't find anything ressembling this in the section of the standard describing concepts and constraints. The spec is very clear (C++20 7.5.7.6):

> The substitution of template arguments into a requires-expression may result in the formation of invalid types or expressions in its requirements or the violation of the semantic constraints of those requirements. In such cases, the requires-expression evaluates to false; it does not cause the program to be ill-formed.

Maybe the stdlib has different wording, but the stdlib can literally have any wording it wants and could define std::integer to yield 2+2 = 5 without this being an issue.

> [ Yes that means in principle if you completely avoid the C++ standard library this doesn't apply to you... ]

in just a small library i'm writing, there's already ten-fold the number of concepts than there are defined in the standard library, so I'd say that this does not apply in general ; the stdlib is always an irrelevant special case and not representative of the general case of the language, no matter how hard some wish it. E.g. grepping for 'concept [identifier] =' in my ~ yields 2500 results, with only a small minority of those being the std:: ones.

> This might be a reasonable stance if the code wasn't written by people. But it is, and so the code is (or should be) an attempt to express their intent which is in fact semantics and not syntax.

I think this is very misguided. I am not programming for humans to process my code, but for computers to execute it. That's what comes first.

> Semantically this is indeed what you meant,

no, if I type std::totally_ordered, I mean "whatever the language is supposed to do for std::totally_ordered", and exactly nothing else


> no, if I type std::totally_ordered, I mean "whatever the language is supposed to do for std::totally_ordered", and exactly nothing else

That's easy then, if you mean "whatever the language is supposed to do for std::totally_ordered" you could say what that is exactly, right?


System languages on the Algol/Wirth branch prove otherwise.

They can be ergonomic high level, while providing the language features to go low level when needed.


Agree, and even C got really far without traits. Traits are a lot of rope for building confusing abstractions, IMHO.


>>> I'm not convinced much could have been done about it.

Are you sure? What stops Swift with its beautiful syntax and safe optionals from becoming a systems language?


As someone who has actually tried writing a kernel in Swift, the issue is purely the runtime. While you can technically build a module without it needing to link to external Swift standard library binaries, the second you try to do anything with an array or optionals, you suddenly need to link in a 15MB behemoth that requires SIMD to even compile (at least on x64 and arm64). Porting this to bare metal is possible (and some have done it for a few microcontrollers), but it's a pain in the ass.

I do love Swift and would use it for systems stuff in a heartbeat if I could, but there are also some downsides that make it pretty awkward for systems. The performance isn't always the best but it's (generally) very clear and ergonomic.


Perhaps not all that much. Swift's arrays are refcounted, and you can't store an array on the stack. Classes are refcounted too, but you could avoid them. It also has a bit of a runtime, and you don't know when it will take locks or allocate (though there is work to tag functions so they can't do either).


Nothing, that is even Apple's official wording for Swift.


The main Rust syntax is OK, but as the author points out, macros are a mess.

The "cfg" directive is closer to the syntax used in ".toml" files than to Rust itself, because some of the same configuration info appears in both places. The author is doing something with non-portable cross platform code, and apparently needs more configuration dependencies than most.


Maybe we've reached the limits of the complexity we can handle in a simple text-based language and should develop future languages with IDEs in mind. IDEs can hide some of the complexity for us, and give access to it only when you are digging into the details.


This just plasters over the underlying problem, which in case of Rust is IMO that features that should go into the language as syntax sugar instead are implemented as generic types in the standard library (exact same problem of why modern C++ source code looks so messy). This is of course my subjective opinion, but I find Zig's syntax sugar for optional values and error handling a lot nicer than Rust's implementation of the same concepts. The difference is (mostly): language feature versus stdlib feature.


Rust developers are doing an awesome job of identifying those things and changing the language to meet it. Today's Rust is much cleaner than it was 5 years ago (or 8 if you count nightly).

But yes, there is still a lot of it.

Anyway, most of the noise comes from the fact that Rust is a low-level language that cares about things like memory management. It's amazing how one is constantly reminded of this by the compiler, which is annoying, but the reason it doesn't happen in the alternatives is that they never give you control over those details in the first place.


I’m not familiar with zig. Can you give some examples to illustrate your point?


An optional is just a '?' before the type:

For instance a function which returns an optional pointer to a 'Bla':

    fn make_bla() ?*Bla {
        // this would either return a valid *Bla, or null
    }
A null pointer can't be used accidentally; it must be unwrapped first, and in Zig this is implemented as language syntax. For instance you can unwrap with an if:

    if (make_bla()) |bla| {
        // bla is now the unwrapped valid pointer
    } else {
        // make_bla() returned null
    }
...or with an orelse:

    const bla = make_bla() orelse { return error.InvalidBla };
...or if you know for sure that bla should be valid, and otherwise want a panic:

    const bla = make_bla().?;
...error handling with error unions has similar syntax sugar.

It's probably not perfect, but I feel that for real-world code, working with optionals and errors in Zig leads to more readable code on average than Rust, while providing the same set of features.


I don't see how that is all that different from Rust.

The main difference I see is that in Rust it will also work with your own custom types, not just optional.

  fn make_bla() -> Option<Bla> {
    // this either returns a valid Bla, or None
  }

  if let Some(bla) = make_bla() {
    // bla is now the unwrapped valid type
  } else {
    // make_bla() returned None
  }
..or with the '?' operator (early return)

  let bla = make_bla().ok_or(InvalidBla)?;
..or with let_else (nightly only but should be stable Soon(tm))

  let Some(bla) = make_bla() else { return Err(InvalidBla) };
..or panic on None

  let bla = make_bla().unwrap();


How does Zig represent Option<Option<i32>>? Would it be something like this?

    ??i32


I had to try it out first, but yep, that's how it works:

    const assert = @import("std").debug.assert;

    fn get12() i32 {
        const opt_opt_val: ??i32 = 12;
        const val = opt_opt_val.?.?;
        comptime assert(val == 12);
        return val;
    }
[https://www.godbolt.org/z/oncd5shvP]
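For comparison, a sketch of the same nesting in Rust, where the unwrapping is done with method calls rather than dedicated syntax:

```rust
// Rust counterpart (a sketch): the nesting is spelled out in the type
// and unwrapped with .unwrap(), which panics on None like Zig's `.?`.
fn get12() -> i32 {
    let opt_opt_val: Option<Option<i32>> = Some(Some(12));
    let val = opt_opt_val.unwrap().unwrap();
    assert_eq!(val, 12);
    val
}

fn main() {
    assert_eq!(get12(), 12);
}
```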


The problem with this premise is that by raising the bar for any IDE that wants to support that language, you risk the creation of an IDE monoculture.


Is that as true any more with the language server model?

I'm not familiar enough with it to know how much is truly in the protocol vs. what the editor still has to do itself.


LSPs are great, I think they've proven fairly easy to integrate into many text editors. But consider something like the Scratch programming language. How many editors support Scratch? Once you stray from code-as-text, adding support to old editors often becomes infeasible and the effort needed to create new editors is a significant barrier to entry.


JetBrains is doing quite well with the IntelliJ monoculture.


Like Common Lisp, Smalltalk, SELF, Delphi, C++ Builder, Java, C#, Visual Basic,....

The problem is that most in the FOSS community tend to design languages for vi/Emacs workflows.


People seem allergic to anything that isn't superficially ALGOL-like. I still remember Facebook had to wrap OCaml in curly braces because it would apparently blow people's minds.


I think making things syntactically explicit which are core concepts is stupid:

```pub fn horror(&mut self) -> Result<&mut Self, Error> { Ok(self) }```

A function returns a Result. This concept in Rust is so ubiquitous that it should be a first class citizen. It should, under all circumstances, be syntactically implicit:

```pub fn better->self```

No matter what it takes to make the compiler smarter.


> A function returns a Result.

That is not, in fact, a core concept in Rust. Plenty of functions have no reason to return Result. (And some that do also have a reason for the inner type to be a Result.)

> This concept in Rust is so ubiquitous that it should be a first class citizen. It should, under all circumstances, be syntactically implicit:

“Implicit” is an opposed concept to “first-class citizen”. Result is first-class in Rust, and would not be if function returns were implicitly Result.


> Result is not a core concept in Rust.

If you don't see std::result::Result as a core concept in Rust, which might be fair, one can still argue that it _should_ be a core concept, given its ubiquitous usage.


You misquoted, I never said Result is not a core concept.

What I said is that “A function returns Result” in the universal sense (that is, everything that is a function returns Result) is not a core concept in Rust.

Some functions return Result<T,E> for some <T,E>. Some functions return Option<T> for some T. Some functions have no reason to use that kind of generic wrapper type (a pure function that handles any value in its range and returns a valid value in a simple type for each doesn't need either; Option/Result are typically needed with otherwise non-total functions or functions that perform side effects that can fail.)


This would break the principle that you always know how to invoke a function by looking at its signature. Option of T and Result of T are not the same type as T. You would have to look at the body of the function, or rustdoc, to know how to invoke it, which would be very annoying.

Besides, what is the error type for Result? You haven't declared it.


Others have addressed the problem with "implicit", but I might be on board with "lightweight"; maybe in a type context `T?` can mean `Result<T>` for whatever Result is in scope? That way you can still define functions with various distinct error types the same as today, but the common (idk just how common, not claiming a majority) case of using the same error across a module or package with a Result type alias will get cleaner.
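The module-wide Result type alias mentioned above is already an established pattern in stable Rust (it's the same convention std::io uses); a sketch, with hypothetical names:

```rust
// A module defines one alias, and every fallible function in it then
// writes just Result<T>. MyError and parse_positive are made-up names.
use std::fmt;

#[derive(Debug)]
pub struct MyError(String);

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "my error: {}", self.0)
    }
}

// One alias per module, shadowing std's two-parameter Result.
pub type Result<T> = std::result::Result<T, MyError>;

pub fn parse_positive(s: &str) -> Result<u32> {
    let n: u32 = s.parse().map_err(|_| MyError(format!("bad number: {s}")))?;
    if n > 0 { Ok(n) } else { Err(MyError("zero is not positive".into())) }
}

fn main() {
    assert_eq!(parse_positive("42").unwrap(), 42);
    assert!(parse_positive("0").is_err());
}
```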


That would be confusing, because T? means Option<T> in other languages (Kotlin, Typescript).


That's a good point, but I am not sure it would actually be a problem.

For one thing, we already have syntactic collisions that don't seem to cause much problem (consider `foo?.bar` in .ts vs .rs), and this one would probably be prevalent enough that it would quickly be familiar to anyone using the language.

For another, if we squint I'm not sure those other languages aren't "really" using it for the same thing. If in some module we define `type Result<T> = Option<T>` then we have the same behavior in Rust, and we can imagine that those other languages have basically done so implicitly, meaning it's a less flexible version of the same mechanism (put to slightly different purposes).


What kind of meaningful data is passed (besides lifetimes) that isn't passed in Kotlin or scala 3 extension methods?


Ah yes, "what kind of meaningful data is passed besides the most important concept in the language?"

Also, the ownership mode, a concept entirely missing from Kotlin or Scala.

As GP says, Rust's syntax is pretty noisy, but much of the noise is answers to questions other languages don't even ask.

And many complaints are requests for additional noise for things which are just regular in Rust, like additional syntactic sugar for Option and Result.
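A small sketch of those "questions other languages don't even ask": every Rust signature answers whether it borrows, mutably borrows, or takes ownership of its argument, which is exactly the annotation noise being discussed.

```rust
// Each signature encodes an ownership answer (function names are made up).
fn read(v: &[i32]) -> i32 { v.iter().sum() }     // shared borrow
fn grow(v: &mut Vec<i32>) { v.push(4); }         // exclusive borrow
fn consume(v: Vec<i32>) -> usize { v.len() }     // takes ownership; v is moved

fn main() {
    let mut v = vec![1, 2, 3];
    assert_eq!(read(&v), 6);
    grow(&mut v);
    assert_eq!(consume(v), 4); // after this call, v can no longer be used
}
```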


From the modern systems programming language set, Go does better in this respect. But admittedly it doesn't reach quite as low as Rust in fitness for low-level programming.


Oh, not even close. It does what most languages do and just elides, ignores, or hard-codes the answers to all the questions Rust asks. That's a solution, sure, a very valid one chosen by many languages over many decades, but certainly not "much better". We absolutely need at least one language that doesn't hide all that, and I think the programming language community as a whole will really benefit from the list of choices for that being expanded beyond "C++", which is getting really long in the tooth. And I'm not even sure C++ was ever really designed to be this language; I think a lot of it just sort of happened by default and it backed into this role, and frankly, it shows. Rust, being designed for this, can hardly help but be nicer, even if we completely ripped out the borrow checker aspect.


Go is not a systems programming language.

I also personally find Go syntax to be horrible, especially now with generics.


Depends if one considers compilers, linkers, networking stacks, kernel emulation, unikernels, userspace drivers, databases, GPGPU debugging systems programming or not.

I personally consider better use Go than C for such purposes, even if they aren't "systems programming".


The Go syntax is fine and easy to read; you don't need to know Go to understand what the code is doing. Can't say the same for Rust.


I think this is a matter of opinion not fact. I have worked as a Go programmer for three separate companies and it may be the least readable, least understandable language I have encountered.


Well it's hard to argue against a language with 25 keywords.


Go is a systems programming language?


Yes. The only people for whom this is controversial are message board nerds. The actual language designers don't have much trouble over the concept. Here's a link to the designers of Rust, C++, Go, and D on a panel having little problem working through the nuances:

https://news.ycombinator.com/item?id=31227986

This perpetual debate reminds me of the trouble HN used to have with the concepts of "contractors" and "consultants", where any time someone mentioned that they were doing consulting work there'd be an argument about whether it was in fact contracting. It's a message board tic, is what I'm saying, not a real semantic concern.


To be fair, that first question about 'what is a systems programming language' is answered by Rob Pike then Andrei Alexandrescu as

Pike: When we first announced Go we called it a systems programming language, and I slightly regret that because a lot of people assumed that meant it was an operating systems writing language. And what we should have called it was a 'server writing language', which is what we really thought of it as. As I said in the talk before and the questions, it's turned out to be more generally useful than that. But now what I understand is that what we have is a cloud infrastructure language, because what we used to call servers are now called cloud infrastructure. And so another definition of systems programming is stuff that runs in the cloud.

Alexandrescu: I'm really glad I let Rob speak right now because my first question was 'go introduces itself as a systems programming language' and then that disappeared from the website. What's the deal with that? So he was way ahead of me by preempting that possible question.

So it seems to me that they struggle with the nuances of the concept as much as the commenters here, particularly as it pertains to Golang.


Depends if one considers compilers, linkers, networking stacks, kernel emulation, unikernels, userspace drivers, databases, GPGPU debugging systems programming or not.

Despite my opinion on Go's design, I rather would like to see people using Go instead of C for such use cases.


To be fair, compilers and linkers can be thought of as pure functions (if we ignore stuff like including timestamps in builds e.g.). They have no special requirements language-wise. You can write them in any general purpose programming language. Even Brainfuck will do (for one-file programs fed through stdin). No argument about the others though.

Although I guess JIT compilers may be classified as systems programming, since you need to do some OS-specific tricks with virtual memory during execution of the compiler.


Yes, as it's used for that a lot. Eg many databases (CockroachDB, Prometheus, InfluxDB, dgraph etc), gVisor, Kubernetes, Fuchsia, etcd, and so on. And also in the origin story it was aiming to compete with C++ for many use cases IIRC.


That's tricky to answer, because it depends a lot on what you count as "system software". If you mean literally "the operating system", then arguably not. But if you include middleware, databases and other "infrastructure" stuff, then arguably yes.


A proper database can be implemented in python -- I've done it -- but that doesn't make it a systems language. A "systems language" comes with the strong implication that it is possible to write an implementation of most software that is competitive with the state-of-the-art in terms of performance, efficiency, and/or scalability. That is only possible in languages like C, C++, Rust, and similar, hence the "systems language" tag.

Languages that are not systems language trade-off this capability for other benefits like concise expressiveness and ease of use.


Lots of real-world systems have design tradeoffs other than aiming to be state of the art on those axes. E.g. cost, security, maintainability, adaptability to change, etc.


Go has been used to implement OS kernel code, e.g. in the Biscuit OS from MIT: https://github.com/mit-pdos/biscuit

Of course, the garbage collector did not exactly make it easier - but it's an interesting piece of software.


> Go has been used to implement OS kernel code,

> but it's an interesting piece of software.

Agreed. And I didn't mean to imply that it's impossible to use Go that way, but I think it's fair to say that it's less common and perhaps even less desirable to do that.

OTOH, people have written (at least parts of) Operating Systems in Java[1] even, so never say never...

[1]: https://github.com/jnode/jnode


There have been lots of OS written in languages that support GC. And indeed mainstream operating systems use GC quite a lot for selected objects.


No, due to its runtime.

You can write a database with it, but that makes it an application language, not a systems one.

Otherwise you could call every language a "system language" and the distinction would lose all meaning.


It can be, but I wouldn't recommend it personally:

https://golangdocs.com/system-programming-in-go-1

EDIT: formatting


Yes it is, but not a good low-level systems language, mainly due to garbage collection and runtime requirements. It is still used for writing (keyword here) systems.


It looks like I'm in the minority here, but I generally like Rust's syntax and think it's pretty readable.

Of course, when you use generics, lifetimes, closures, etc. all on the same line, it can become hard to read. But in my experience with "high level" application code, it usually isn't like that. The hardest thing to grok at first for me, coming from Python, was the :: for navigating namespaces/modules.

I also find functional style a lot easier to read than Python, because of chaining (dot notation) and the closure syntax.

Python:

    array = [1, 0, 2, 3]
    new_array = map(
        lambda x: x * 2,
        filter(
            lambda x: x != 0,
            array
        )
    )
Rust:

    let array = [1, 0, 2, 3];
    let new_vec: Vec<_> = array.into_iter()
        .filter(|&x| x != 0)
        .map(|x| x * 2)
        .collect();
I mean, I kind of agree with the criticism, especially when it comes to macros and lifetimes, but I also feel like that's more applicable to low-level code or code that uses lots of features that just aren't available in e.g. C, Python or Go.

Edit: Collected iterator into Vec


There are people who write Python code like that, but it's an extreme minority. Here's the more likely way:

    array = [1, 0, 2, 3]
    new_array = [x * 2 for x in array
                 if x != 0]
Just as a matter of style, few Python programmers will use lambda outside something like this:

    array = [...]
    array.sort(key=lambda ...)


I have always felt the backwards nature of list comprehensions makes them very hard to read.


Me too. It's one of the things that I kind of dislike in Python.


I guess you're right, list/generator comprehensions are the idiomatic way to filter and map in python, with the caveat of needing to have it all in a single expression (the same goes for lambda, actually).

I still feel like chained methods are easier to read/understand, but list comprehensions aren't that bad.


Even in Rust I don't like chains that go beyond ~4 operations. At some point it becomes clearer when expressed as a loop.


> with the caveat of needing to have it all in a single expression (the same goes for lambda, actually).

one could use multiple expressions in lambda in (modern) Python


Do you mean using the walrus operator? Because unless I missed a recent PEP, I don't know of a way to do this without something hacky like that.


yes

    x = 1
    y = 2
    q = list(map(lambda t: (tx := t*x, ty := t*y, tx + ty)[-1], [1, 2, 3]))
    print(q)  # prints [3, 6, 9]


Guido van Possum has expressed distaste for list comprehension. Take that for what it's worth.

https://news.ycombinator.com/item?id=13296280


I realize that "Guido van Possum" was almost certainly a typo here, but it _does_ make for an amusing mental image. I wonder what other forest creatures might have built programming languages? C++ built by Björn "the bear" Stroustrup? C# built by Anders Honeybadger? Ruby by Yukihiro "Catz" Cat-sumoto?


Indeed an autocorrect. But it was definitely Graydon Boar who created Rust


From the link, it sounds like GvR has expressed distaste for functional programming idioms like map/reduce, but not for list comprehensions.

At least it's not Go's "You gonna write a for loop or what?"


He was also against having any kind of lambda, and the one line version was the concession he was willing to let in.


1. I don't think your Python example is fair. I think

    new_array = [x*2 for x in array if x != 0]
is much more common.

2. In your example, `new_array` is an iterator; if you need to transform that into an actual container, your rust code becomes:

   let new_array = array.into_iter()
        .filter(|&x| x != 0)
        .map(|x| x * 2)
        .collect::<Vec<_>>();
And there your generic types rear their ugly head, compared to the one liner in python.


Oh, yeah, you're right! If you want to collect into a Vec you may need to specify the type, but usually, you can just call `.collect()` and the compiler will infer the correct type (as I suppose you're collecting it to use or return).

If it can't infer, it's idiomatic to just give it a hint (no need for turbofish):

    let new_vec: Vec<_> = array.into_iter()
        .filter(|&x| x != 0)
        .map(|x| x * 2)
        .collect();
I don't think that's ugly or unreadable.

About the Python list comprehension: I answered your sibling. I think you're both right, but it also has its limitations, and (this may be personal) I find chained methods easier to read/understand.


They maybe rear their ugly head but they also allow you to collect the iterator into any collection written by you, by the standard library or by any other crate.

While in python you have list/dict/set/generator comprehension and that's it.


I don't think it's a bad thing. In fact, one of my favorite features is that you can do `.collect::<Result<Vec<_>, _>>()` to turn an iterator of Results into a Result of just the Vec if all items succeed, or the first error otherwise. That is a feature you just can't express in Python.

But you have to admit that is a pretty noisy line that could be difficult to parse.
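A runnable sketch of that collect-into-Result behavior:

```rust
// An iterator of Results becomes Ok(Vec<_>) if every item parses,
// or the first Err encountered otherwise.
fn main() {
    let good: Result<Vec<u32>, std::num::ParseIntError> =
        ["1", "2", "3"].iter().map(|s| s.parse::<u32>()).collect();
    assert_eq!(good.unwrap(), vec![1, 2, 3]);

    let bad: Result<Vec<u32>, std::num::ParseIntError> =
        ["1", "oops", "3"].iter().map(|s| s.parse::<u32>()).collect();
    assert!(bad.is_err()); // "oops" fails to parse, so the whole collect fails
}
```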


Then don't write it that way.

   let new_array: Vec<usize> = array.into_iter()
        .filter(|&x| x != 0)
        .map(|x| x * 2)
        .collect();
Isn't so bad.


I believe I have the habit of putting it on the end because the final type might be different. Consider:

    let array = ["DE", "AD", "BE", "EF"];
    let new_array: Vec<u32> = array.into_iter()
      .map(|x| u32::from_str_radix(x, 16))
      .collect()?;
In this case the `Vec<u32>` annotation isn't enough; you need to specify the Result type in a turbofish on `collect`. This has come up for me when working with Stream combinators. Most projects probably end up needing some lifetime'd turbofish, and you have to be able to parse them. They aren't rare enough, IME, to argue that Rust isn't noisy.


I generally just write it like this:

   let new_array: Vec<_> = array.into_iter()
        .filter(|&x| x != 0)
        .map(|x| x * 2)
        .collect();
I'm actually a bit confused by the `&x` given that `into_iter()` is used, which would take ownership of the array values, but assuming that it was supposed to be just `iter()` (or that it's an array of &i32 or something I guess), you're going to be copying the integer when dereferencing, so I'd probably just use `Iterator::copied` if I was worried about too many symbols being unreadable:

   let new_array: Vec<_> = array.iter()
        .copied()
        .filter(|x| x != 0)
        .map(|x| x * 2)
        .collect();
There's also `Iterator::filter_map` to combine `filter` and `map`, although that might end up seeming less readable to some due to the need for an Option, and due to the laziness of iterators, it will be collected in a single pass either way:

   let new_array: Vec<_> = array.iter()
        .copied()
        .filter_map(|x| if x == 0 { None } else { Some(x * 2) })
        .collect();
This is definitely more verbose than Python, but that's because the syntax needs to disambiguate between owned and copied values and account for static types (e.g. needing to annotate to specify the return type of `collect`, since you could be collecting into almost any collection type you want). It's probably not possible to handle all those cases with syntax as minimal as Python, but if you are fine with not having fine-grained control over that, it's possible to define that simpler syntax with a macro! There seem to be a lot of these published so far (https://crates.io/search?q=comprehension), but to pick one that supports the exact same syntax in the Python example, https://crates.io/crates/comprende seems to do the trick:

    let array = [0, 1, 2, 3];
    let new_array = c![x * 2 for x in array if x != 0];
    println!("{:?}", new_array); // Prints [2, 4, 6]
I'm not trying to argue that Rust is a 1:1 replacement to Python or that if Python suits your needs, you shouldn't use it; I think it's worth pointing out that Rust has more complex syntax for a reason though, and that it has surprisingly good support for syntactic sugar that lets you trade some control for expressiveness that can alleviate some of the pain you might otherwise run into.


It actually needs the & in both my example and your second example, because .filter receives a reference to the item being iterated. Your second example doesn't compile: https://play.rust-lang.org/?version=stable&mode=debug&editio...


Ah, you're right! I had forgotten filter worked like that. I'll have to concede that equality with references to `Copy` is somewhat annoying when dealing with closures (and closures in general are one of the few parts of Rust I do consistently wish had better ergonomics, although I understand why it's hard).
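For completeness, a version of the disputed example that does compile: `filter`'s closure receives a reference to each item, so after `copied()` the closure parameter is `&i32` and the `|&x|` pattern (or an explicit `*x`) is still needed.

```rust
// After .copied() the items are i32, but .filter still hands the
// closure a &i32, hence the destructuring pattern |&x|.
fn main() {
    let array = [1, 0, 2, 3];
    let new_array: Vec<i32> = array
        .iter()
        .copied()
        .filter(|&x| x != 0)
        .map(|x| x * 2)
        .collect();
    assert_eq!(new_array, vec![2, 4, 6]);
}
```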


You can just specify the type along with the variable itself instead of relying on type inference in this case, which makes it look a lot better I think.

There is also the collect_vec method from Itertools that avoids this. I normally am not a big fan of pulling in crates for little things like this, but the Itertools crate is used in rustc itself, so you're already trusting it if you use Rust.

I do agree that Rust syntax can be a bit verbose sometimes, but I actually prefer it to most other languages! I would have preferred it to be less inspired by the C family syntax-wise, but that would likely have hindered adoption.


I think part of this comes down to: does your Rust code make heavy use of generics? I find myself deliberately avoiding generics and libraries that use them, due to the complexity they add. Not just syntactic noise, but complicated APIs that must be explicitly documented; rustdoc is ineffective at documenting what arguments are accepted by functions and structs that use generics.

See also: Async.


> See also: Async.

Ah, the good ole generic, nested, self-referencing, unnameable state machine generator.


I want to like Rust, but the fact that there are replies here with nine different ways to write that loop, and no consensus on which one is idiomatic, is not a good sign.


Minor point, but your Python code creates a generator here, not an array; you'd have to wrap it in `list()` to get the same data type and be able to, for example, assert its length (of course you can just iterate over the generator).


You can only do this chaining when everything happens to be a method. Compare to Nim or D, where you can chain arbitrary procedures like this.

