Rust’s choice of sigils is aesthetically displeasing. Double colons (::) and angle-bracket generics are ugly. Unmatched apostrophes are jarring to those used to seeing them delimit string or character literals. It was an unforced error to imitate C++, which was forced into those characters by backwards compatibility with C and the C preprocessor. A new language doesn’t have to use those characters. For example, the D language uses the dot as a path separator for modules, namespaces, templates, etc., as do other languages like Python.
Rust spent enough of its weirdness budget on new semantics. I may not be a fan of C++-style syntax, but if your goal is to specifically tempt C++ developers with the promise of safer/saner language semantics, then making syntactic concessions towards familiarity with C++ is the right call. Rust isn't trying to be The One Language To Rule Them All, it's trying to target a very specific domain.
> if your goal is to specifically tempt C++ developers with the promise of safer/saner language semantics, then making syntactic concessions towards familiarity with C++ is the right call. Rust isn't trying to be The One Language To Rule Them All, it's trying to target a very specific domain.
I wonder how well that's worked out for them. My impression is that there's been a lot more people moving to Rust from Python/Ruby/Javascript than from C++.
I think there's a distinction between people making side projects/posting online and professional usage of rust.
There's a lot more Python/JS/etc developers than C++ developers, and Rust meaningfully offers something those languages don't, so I suspect you see a rush of folks from that world going to learn Rust and posting about it online.
However, I generally don't see or hear about Rust being used professionally except in a context where C or C++ would have been seriously considered as well.
Rust is used in production by many high profile companies. You almost certainly are running Rust code, or interfacing with something that does, in your daily life.
Yes, I agree - I didn't mean that the "C++ is a contender" use cases are few; in fact, the opposite. Just that I don't see a ton of professional "We wrote our Postgres CRUD app in Rust for reasons". I'm sure a few have, but that's a harder choice to defend than using Rust instead of C++.
D fell into the same trap. D 1.0 started as a relatively simple language mixing OOP and basic template semantics, and attracted many people from Java/C# backgrounds. But with D 2.0 they started to appeal more to C++ programmers. Advanced template metaprogramming arrived, and language features were even added to make it easier to interop with C++, in hopes that C++ programmers would transition parts of their apps to D.
In the meantime, chasing C++ programmers introduced problems. Template-heavy code means that IDEs can't follow it well, and there are problems with compilation time. C++ programmers never were keen on GC, so D tried to introduce GC-free semantics for the language, which aren't complete.
I don't think C++ programmers in general are worth chasing. If they stuck with C++ for such a long time, it means they are used to it and accept C++ with all its flaws. It's also my impression that Rust is attracting more JS programmers than C++.
> If they stuck with C++ for such a long time, it means they are used to it and accept C++ with all its flaws.
This is also partly because there was never a language that fit the niche where GC isn't a good fit that brought enough to the table in the eyes of that space. Even today some improvements like being able to avoid UB or seemingly high level features like pattern matching are looked upon with suspicion.
> It's also my impression that Rust is attracting more JS programmers than C++.
I think this is a testament to how successful the efforts to bring high level constructs to the systems space have been, as well as the ongoing efforts to make the system level features be ergonomic and usable by as many people as possible.
If you have a theoretically perfect language that only three high priests in an ivory tower can use, then you don't have what Rust is trying to be.
> I don't think C++ programmers in general are worth chasing. If they stuck with C++ for such a long time, it means they are used to it and accept C++ with all its flaws
People don’t use C++ because it’s convenient and easy, they use it because it’s one of the best widely used options/ecosystems if performance really matters.
Any other language targeting this space has to chase C++ users because C++ is so prevalent. It would be like coming up with a new general purpose dynamic language and not appealing to Python users, or a new stats modeling language and not appealing to R users.
Recall that the initial target was browser developers. I've seen what feels like considerable adoption from among Firefox developers. Chromium developers feel very tempted, too. Feels like success!
> My impression is that there's been a lot more people moving to Rust from Python/Ruby/Javascript than from C++
Is there any reason in particular you think a lot of high-level language people are moving to Rust? That is not my impression or experience at all, and I struggle to think of any reason to do so besides a large, individual change in career focus. As a higher-level language user myself, I can't think of a single reason I would ever use Rust for anything. Sure, I think it's cool, and it's always good to learn new languages, but in practice it's just not suited to my domain - and I think that goes for the vast majority of us.
I would say golang has scooped off more of the dynamic language types myself, although I also consider it a productivity downgrade and/or cargo cult to mostly be ignored without compelling reason. Which is even more the case with Rust!
Fullstack & Wasm.
There comes a point in your career where you're dissatisfied with the capabilities of the warm and friendly upper layers of your stack, and you descend into the cold dark waters where things are hard but fast.
I used to write primarily in high-level languages, but if you want to write really interesting high-performance stuff in JavaScript, WASM+Rust is a really neat tech to use.
Similarly, with WebGL/WebGPU you need some familiarity with bare metal programming.
I still write a lot of JS, but I regularly drop down to Rust.
I'm migrating everything that's feasible (e.g. not stuff where I need Django's ecosystem) from Python to Rust for its combination of a powerful type system that can encode my invariants better than MyPy annotations, sum types, no exceptions, no null, an ecosystem that values "fearless upgrades", PyO3 so my Rust code can be reused in projects that still need Python without me having to get memory-unsafe code right, and not being something as alien as Haskell.
Golang specifically has three things that turned me off as a replacement for Python in the early days before I got hooked on what Rust offers:
1. A disgust-inducing level of boilerplate for things like error-handling, and poor support for metaprogramming.
2. A design that not only lacks things like PyO3, but is implicitly hostile to FFI.
3. Git repository URLs as dependency references and "you figure it out" for ensuring they don't go away.
As someone with a long history of c++ experience I found rust syntax fairly familiar even if some details are meaningfully different. For example, most code you read is very very similar. I found an easier time coming to Rust from c++ in terms of syntax familiarity than swift. I found swift much faster to write code once I actually got my head around it (I also had substantial experience with Objective-C/C++ prior to picking up swift).
I think the strong counter is that you only really learn a language once, but Swift is a bridge for those coming from Objective-C, not C++, and tries to hit other language goals, so it's hard to compare. Certainly this is the first real language giving C++ a run for its money in a way that nothing else has before; that's not nothing, and it's hard to say whether the syntax chosen is part of that success or in spite of it.
Because Rust is, in effect, a GC-less Ocaml derivative with a coat of cherry-picked C++ syntax as paint.
That's why the semantics are so different. It's more of the lineage that began with ML in 1973 than of the ALGOL 68 lineage like C is.
The Rust compiler was even originally written in Ocaml before it became self-hosting.
That's also where syntax elements like these come from:
- let for variable declaration and postfix type annotations separated with colons
- 'a (Ocaml's equivalent to <T> for generics, because, on an abstract/theoretical level, Rust's lifetimes are a special type of generic parameter, right down to "a" apparently being the habitual default type variable name, just like "T" is in C++)
- Option (option in Ocaml), Some, and None
- match
- fn (fun or function in Ocaml)
- -> and => (just -> in Ocaml) for function returns and match arms
<> are the worst characters for generics, except for all the others that have been tried. () leads to the type syntax being difficult to distinguish from value syntax, which is great in dependently-typed languages but bad elsewhere. [] is already used for slices, albeit in a different way; and C should be objection enough if you want to make types look like values. {} is a nonstarter because it leads to ambiguity between return type and function body, and in an everything-is-an-expression language, once again we are back at types-look-like-values. There are no more matched pairs left on an ANSI keyboard.
Personally, while I hear you that :: is ugly, I do really like being able to distinguish module paths from value paths at a glance. And when looking for an alternative, we once again run into other characters already having far more familiar conflicting uses, or just being too weird. (E.g., # or @ could have been used, but that's probably worse than ::.)
[] are already taken by the slicing syntax, but if that wasn't the case you could have something like vec@2 and vec@{end - start} for that, which would open them up for generics. I'm pretty sure the main reason to choose <> was familiarity above all else.
<> are obviously already used for less than/greater than, so they're disqualified too by this logic.
If you broaden your viewpoint, [] is the same thing for arrays and generics. A generic function is a family of functions parameterized by types; an array is a family of values parameterized by 0..n; [] selects one member of the family. In fact in math subscripting is already a common notation for the parameter of a generic.
Same for paths using dot, it's just member access on a module object. In languages like JS or Python it's literally just normal runtime member access. In Zig it's also just normal member access on a comptime object.
To be fair, arrays and slices are probably uncommon enough in current code that they shouldn't get exclusive use of the [] brackets. They could just be a generic type like any other.
Scala uses [ ] for generics, works well. I think you're right that array access is a rare case, definitely doesn't deserve its own reserved characters. And [ ] tends to mean "crash if out of bounds", crazy to have gotten its own reserved characters :-)
Angle brackets aren't that uncommon, are they? Swift is a modern, elegant language that uses angle brackets for generics. Fortunately it uses the period instead of the double colon (thank goodness for that!) but I don't see anything ugly about brackets.
It's directly responsible for Rust needing the "turbofish" syntax to pin down generic return types as in
let doubled = foo.iter().map(|x| x * 2).collect::<Vec<_>>();
If they'd used something like [T] for generics, then there'd be no collision with the < and > operators needing disambiguation in some contexts.
They even have a Book of Mozilla-esque entry in their test suite named "Bastion of the Turbofish" that pins down an example of the problem case.
I don't think that's true - .collect[Vec[u8]]() - here you might be accessing property collect and taking the value at index denoted by the item of Vec at position denoted by the variable u8. And at the end calling the resulting thing? (That's not actually valid in rust afaik, you'd have to wrap it in parentheses like this `(...collect[])()`)
The argument that generics should have used [] generally goes hand in hand with the argument that [] for panicking indexing is a wart on Rust's "pit of success" design philosophy and that panicking indexing should have been more verbose to type than the .get() for Result-returning indexing... but probably less verbose to type than the `unsafe`, potentially UB-inducing `.get_unchecked()`.
I like angled brackets for generics but hate double colons. I assume it's because I'm coming to Rust from C# though? I wonder how C++ devs learning Rust feel.
I don't know if it's just me, but I find angle brackets really hard to read. Many times I just miss the ending one, because they have the same height as normal characters, so they don't stand out.
That's a font issue. In proper type setting, angle brackets are different from normal less/greater than characters. An IDE with good in-built parsing could use variant forms for <> characters as used in generics, as opposed to relational operators.
Yes, but syntax decisions of programming languages shouldn't rely on such things. I would've preferred square brackets for generics (and something else for indexing).
«..» always seemed like a good solution to me, from a readability perspective. But I think many will object on account of it being too hard to type (which is not unreasonable; on my Linux machine it's "Compose <<" and fairly easy; I'm not sure how hard/easy it is on Windows or macOS, but even if it's easy, I think most people don't know it's easy and it will still make the language harder to learn).
I had the same initial opinion about the '.' Vs. ':' for module separation. Why add weight to make this distinction?
However, after over a year of coding Rust every day, I now really appreciate the distinction. It's an effective way to give a meaningful context point at a very low cost.
I was wrong again! Which has been a consistent pattern with my initial Rust appreciations.
I think most of what people call Rust's "ugliness" is actually just Rust's explicitness. You have to chain a lot of method calls for things that might happen implicitly in other languages; you have to annotate lifetimes and constrain generic type params; sometimes you have to manually parameterize the generics you're using; etc. But I think most of that is unavoidable without shifting some of Rust's core values.
The only piece of surface-level syntax that I'm really truly bothered by is the closure syntax:
- The fact that the | | is the same character on each side
- The lack of directionality from the args -> to the body
- The lack of any kind of overlap with the normal function syntax fn foo()
Especially compared with something like JavaScript which has a really lovely lambda syntax, this one's hard to look at. I'm sure there were good reasons around ambiguity with other syntaxes or something, but... eesh.
To my knowledge, this closure syntax was a direct Ruby influence, mainly because it was also supposed to work like Ruby blocks: `array.each |e| { ... }`. Originally, owned and shared closures had a different syntax (`fn~() { ... }` or `fn@() { ... }`; this was back when Rust had a built-in shared reference, roughly equivalent to `std::sync::Arc` in modern Rust), and it was pointed out that they should be harmonized at some point [1]. It was around that time that people realized the then-loop-only syntax could be generalized to any closure, and this semi-decision stuck even after the "internal" iterators that accept a closure were long gone.
This is pretty much the closure syntax used by Go, and it's one of its biggest issues (IMHO). Even if you had map/filter/etc. methods on slices, it would still look like:
    squares := numbers.
        filter(func(x int) bool { return x >= 0 }).
        map(func(x int) int { return x * x })
With what I understand to be your suggested Rust syntax, this would still be:
    let squares: Vec<_> = numbers.iter()
        .filter(fn(x: i64) -> bool { x >= 0 })
        .map(fn(x: i64) -> i64 { x * x })
        .collect();
which is a lot of noise compared to
let squares: Vec<_> = numbers.iter().filter(x => x >= 0).map(x => x * x).collect();
What makes the Go code so verbose isn't so much that you have to type "func()", but rather that you need to list out all the types, as well as the need for an explicit return.
    squares := numbers.
        filter(func(x) { x >= 0 }).
        map(func(x) { x * x })
    let squares: Vec<_> = numbers.iter()
        .filter(fn(x) -> { x >= 0 })
        .map(fn(x) -> { x * x })
        .collect();
You don't necessarily need new syntax for this, although it may help with clarity.
The Rust => code is still a bit shorter, but not much and I don't think it matters much.
Yeah, I tend to agree. I do think it's important that the { } be optional, for single-expression lambdas, but something along these lines would've been much more readable imo if it were possible to fit into the grammar
Yeah, but likewise I don't see `fn double(arg: u32) -> arg * 2` as an issue if you want to contract the syntax. Using multiple lines requires {}; it's got nothing to do with the "type" of function. Unless function and lambda are different beasts; I'm still learning.
`|arg| arg*2` and also `|arg| { arg * 2 }` I'm not really a fan.
> But I think most of that is unavoidable without shifting some of Rust's core values.
Well yes. Turns out beauty is not just skin deep. Semantics actually matter. You can't make a beautiful language by slapping some magical surface syntax on top of any language you like.
> I think most of what people call Rust's "ugliness" is actually just Rust's explicitness.
I think most of what people call C's ugliness is actually just C's power. Some of it is C's age: we can't blame C for being born in the very early 1970s (when sensible ISAs were just born), but when you update C, don't try to make it be something that it's not. You see the beauty of UTF-8? People who could not have invented UTF-8 can at least see the beauty of UTF-8. Unfortunately, people who could not have invented C, are updating C.
When you try to update C, not only don't wreck it, don't try to replace it with something emo, weak, and overwrought, which is what Rust is.
C is powerful, natural, and concise. It doesn't wipe your ass for you. Wiping your ass neatly is not something you want to entrust to a bulletproof syntax, not when you can always just wash your hands. Rust? Rust has the color of not washing your hands because Rust imagines that it made it unnecessary.
I strongly disagree that Rust syntax is ugly, in absolute terms, even for the worst-case scenario presented in the article. Rust syntax is derived from a few very simple rules/concepts/themes. After observing how one set of operators/keywords combine, you should be able to make strong assumptions about how another set will combine.
Once the syntax sinks in, it's significantly easier to read than most other mainstream languages because there are significantly fewer rules/concepts/themes to comprehend.
As an extremely simple example, after learning what this does:
let a = loop { break 1; };
you can bet the programmer can figure out what's going on here:
let a = if true { 1 } else { 0 };
A language serves three purposes: it must be easy to write, it must be easy for a human to read, and it must be as unambiguous as possible (so that a computer can understand it). From a purely syntactical standpoint, Rust does great on all three. Most modern languages barely manage two.
I have never written a line of Rust in my life, but I assume that a statement that terminates with a value can be used as an rvalue in an assignment or other expression.
Rust is an expression-based language, so a loop block is just an expression. Break exits the loop and returns the expression following the keyword, which is the integer 1 in this example, but it could also be nothing if the loop returns nothing (aka unit "()", aka void in some languages), or any other expression as well.
> must be as unambiguous as possible (so that a computer can understand it)
A nitpick. Unambiguity (and other language features) is extremely important not that a computer can understand the language (it can), but rather that the understanding (or interpretation) of a program by programmer and computer are the same.
Bugs like off by one, use after free and similar issues quite often stem from this misunderstanding.
Source code is mainly a communication tool for other people, and less importantly, instructions for a computer. At the same time, compilers are an error reporting tool with a code generation side-gig.
They didn't explore a Haskell (or PureScript) inspired syntax, but I think it could massively improve things without compromising on semantics. The separate function signature itself is a great improvement.
"Haskell flavored Rust" anyone?
    read
      :: forall p
       . AsRef Path p
      => p
      -> Result (Vec u8)
    read path = inner (as_ref path)
      where
        inner :: &Path -> Result (Vec u8)
        inner path = do
          mut file <- File.open path
          let mut bytes = Vec.new
          read_to_end file (&mut bytes)
          pure bytes
I didn't mean the designers of Rust. I meant the author of the article did not consider a Haskell like syntax.
I completely understand the reasons for Rust devs targeting a C like syntax. Rust had already blown its novelty budget on the borrow checker semantics.
I guess they didn’t explore a Haskell (or a Lisp, Prolog (which always surprises me by looking so clean that you don’t think anything can be going on), Python, Pascal etc) style because it scares c/c++ programmers generally. And then they would have to fight harder for adoption than they had to now.
Personally I think many styles, like the Haskell-y you show here, would be far more pleasing on the eye; luckily we live in a time where it is not too hard to create either a different compiler frontend, or, easier, a translator (transpiler, but I don’t like the word as it only translates, hence it takes too much credit being called transpiler).
Yup, this could and should be a separate preprocessor step. Changing the syntax of the Rust language itself so drastically would be nearly impossible at this stage.
It looks better, but hard to comprehend without looking into the original code. E.g. this line:
inner path = do
Generally speaking, improved syntax would have been great, but similar projects that existed 10-15 years ago failed to be adopted widely (CoffeeScript, HAML).
Why is that hard to comprehend? `do` starts a new block with sequencing semantics (so we don't need the `?` operators) in Haskell.
CoffeeScript can actually be considered very successful, since it influenced the design of a lot of other language features we take for granted. Same can be said for HAML.
It's similar to what you can do in Julia, but there you use this syntax only for one-liner functions. The problem with this function declaration is that you need to read up to the '= do' combo in order to understand that you're in a function definition. A keyword at the beginning would make it much easier.
I'd love to hear what features CoffeeScript made appear in other languages. JS did not take the syntax at all. I guess it's hard to tell whether generators and var-args came from CS or Python.
Off the top of my head, CoffeeScript introduced the function arrow syntax to JS, as well as class-based inheritance.
Also you don't need to take my word for it, Brendan Eich, the creator of JS himself has said that CoffeeScript had a significant impact on his thoughts about the future of JavaScript (https://brendaneich.com/2011/05/my-jsconf-us-presentation/).
I think it’s reasonable that std library code goes to one extreme of ugly-but-performant while in my own code I can choose something slightly more elegant. Such as avoiding a nested function.
So while I agree with some of the points of Rust having a tendency for hiding the what in all the how (and token salad), I think looking at the stdlib isn’t always a good example of idiomatic Rust.
Rust made a strange decision abbreviating keywords which are significant (pub, fn) and making things verbose where it isn't needed (separate declaration of P). Here's how my dream syntax for Rust would look like:
    public func read(path: AsRef<Path>) -> io.Result<Vector<u8>> {
        func inner(path: &Path) -> io.Result<Vector<u8>> {
            var file = File.open(path)?;
            var bytes = Vector.new();
            file.readToEnd(&mut bytes)?;
            return Ok(bytes);
        }
        return inner(path.asRef());
    }
I have replaced pub with public, fn with func, removed the separate declaration for P, replaced :: with ., Vec with Vector, let mut with var, snake_case with camelCase, and added return keywords. Note that all of these are syntax changes; the semantics haven't changed at all. One more thing would be to remove the semicolons.
Compare that to the original. Which one is more readable?
One thing you've missed here is that `let mut` is two different things: `let` is the keyword for creating a new binding, while `mut` is part of the pattern that `let` binds to. For example, you can pattern match a tuple and make one value `mut` and the other not:
    let (a, mut b) = (1, 2); // b is mutable, a is not
You can't just replace `let mut` with `var` because it changes the semantics of the code.
You complain about abbreviating omnipresent keywords, but then allow `func` only two more letters than `fn`? :D I think that proves syntax is overrated and it's all opinionated taste.
Yeah, that's my main gripe with Rust as well: the lack of human-readable words ("pub", "fn" and "mut" are abbreviations, not words, and "return" was even abbreviated to an empty string) and the high density of esoteric special characters.
"return" wasn't "abbreviated to an empty string", it's that Rust is expression-oriented, the value of a block is the value of its last expression, adding ; turns expressions into statements, and a function with no terminal expression returns (), which serves the same role as void.
    fn foo(x: u8) -> u8 {
        let y = {
            let z = something();
            x + z
        };
        y
    }
TL;DR: It's not a property of the function, it's a property of the curly braces.
I like the real Rust syntax more, save for double colons. (And I might prefer other brackets to <> with generics) But how is that syntax of yours separating types and generics with traits? Currently `AsRef<Path>` looks like a type although it's not supposed to be one.
Do they really need to be separated? Does it matter all that much when you are creating a function whether a parameter is a specific type or just any type that implements a specific trait?
* Is this function monomorphised for each type passed in, or is the argument passed in as a trait object? If the latter, does it implicitly fall back to monomorphisation if the trait is not object-safe (e.g. it takes or returns `Self` by ownership)? How do we specify which of the two to use when we need to for performance reasons?
* Let's say we have a function with two inputs: `func foo(a: SomeTrait, b: SomeTrait)` How do I specify that both inputs must be the same type?
Functions with generics are not quite first-class citizens in Rust, so there's also a difference between the type of a non-generic function and a generic function. Or rather, a generic function doesn't have a "type" as Rust understands it. (It has a higher-kinded type, but Rust doesn't support those.)
Yeah, it could be that the compiler just referred back to the definition of the name to see whether it is a type or a trait. Actually, there was a plan to implement that behaviour, but it was given up in favor of the current "explicit" style. There were fears that conflating traits and types would be more confusing in the long run.
I think using . for both field/method access and name pathing would hurt readability and make name lookup more difficult. Perhaps replace :: with a backtick instead?
    use std;
    use std`fs`File;

    pub fn read<P: AsRef<Path>>(path: P) -> io`Result<Vec<u8>> {
        fn inner(path: &Path) -> io`Result<Vec<u8>> {
            let mut file = File`open(path)?;
            let mut bytes = Vec`new();
            file.read_to_end(&mut bytes)?;
            Ok(bytes)
        }
        inner(path.as_ref())
    }
    public func read(path: URL) throws -> Array<UInt8> {
        let file = try FileHandle(forReadingFrom: path)
        let bytes = try file.readToEnd() ?? Data()
        return Array(bytes)
    }
Yes, the example above is not generic and not even idiomatic Swift, but it's a good glimpse into what Rust could look like.
Hopefully someday Swift would become more safe, less tied to ObjC legacy and actually cross-platform. It's a shame that such a nice language is basically Apple-only.
I think the reason Rust's syntax is so controversial and divisive is that its syntax works well for some areas and poorly for others.
TFA highlights it well:
> It is needed because Rust loves exposing physical layout of bytes in memory as an interface, specifically for cases where that brings performance.
Rust (and its syntax) is designed for precision and performance and explicitness. That's great for the parts of your code that are extremely performance sensitive.
However, for most of our code, we need flexibility and high-level readability a bit more... a bit of a mismatch with some of Rust's decisions.
I like C#'s approach here. By default it decouples away all of these unnecessary low-level decisions, and lets you focus on what the code is actually trying to do. Then, if you want to do performance-sensitive work, you can use things like `struct`. I wish it went a bit further, but it's pretty nice.
> I like C#'s approach here. By default it decouples away all of these unnecessary low-level decisions, and lets you focus on what the code is actually trying to do. Then, if you want to do performance-sensitive work, you can use things like `struct`. I wish it went a bit further, but it's pretty nice.
But if I read correctly what you wrote, that's not just syntactic, right?
There's always a lot going on in every line of Rust and I respect that. But a lot of it is boilerplatey. E.g. this AsRef business - the argument is that it helps performance, but I hope that sometime in the future it would be much easier and cleaner to express things like "I want two arguments of any type I can transform to an owned string" without adding a bunch of explicit generic arguments, or even "I want a hashmap with keys and values of any type I can transform to an owned string", allowing people to use your api without adding/removing .to_owned() after strings every time they change something, or all this iter-map-to_owned business.
And no, I don't mean it as having java-like semantics. Semantically such functions should stay generic over the types. There's no reason not to sugar it over, though
But this has the same flaw as the original code in OP, where you need a separate inner function to avoid excess monomorphization. What's wrong with just taking a String (or a ref to Path) and expecting user to add .into() at the calling site?
Basically you have the choice of adding a tiny bit of boilerplate in the caller, having monomorphization bloat, or adding lots of boilerplate to the callee.
In general I'm with you: just stick the .into() in the caller.
But this is pub std code, expected to be called from many many places, so they probably made the right choice to make it more convenient to call.
I do want to highlight a comment in passing in the OP: the team has plans (and some work) around accomplishing that optimization in an automatic way, without extra syntax, under the whimsical name of polymorphisation.
Is this a sarcasm piece from the author? The way the article ends (by throwing away more and more features from Rust) makes me wonder how sincere the beginning was.
I think the intent is to show first by counterexample that, no, the syntax is quite readable for what it's trying to represent. After showing that the syntax isn't the problem they then show that the language semantics are indeed what make it complicated, but you can only get away from that by throwing away the best features of Rust.
Basically they're criticizing, in a roundabout way, people who criticize Rust for its syntax by showing that such people don't actually understand Rust.
It has a sarcastic tone, but the point is to show by demonstration that the ugliness is not purely in the syntax, but because Rust actually exposes a lot of knobs to the programmer.
There's a lot to learn about Rust. It's a very different language from pretty much any mainstream language. Even C++. But that makes it very interesting to me. So far I've seen mostly either javesque-scriptish languages or type-algebraic-for-the-sake-of-type-algebra languages. There were some oddities like we-just-write-ast or Forth. But mostly languages seemed boring and only syntactically different.
Rust is high level value semantics running on bare memory kind of thing. It's meaningfully different and useful.
I've written both C++ and Rust and I've basically settled for C++ without inheritance in all my personal projects. I think Rust made me a better C++ programmer.
The problem with Rust is that even when you reach a point of understanding all the base things about the language, how traits work, generics, all the reference types, patterns like interior mutability etc, I've never gotten to a point where I can get an idea and quickly implement it in Rust. That just doesn't happen. With C++ on the other hand it is much easier to prototype, and I really need that being-able-to-quickly-prototype aspect in my workflow.
It seems obvious that bad syntax harms readability, in that it's very easy to construct an example of a programming language that is truly unreadable (use commas to indicate generics, everything must be upper case, backslash to call functions and pipe to separate arguments, the letter `H` is used to start a class declaration, etc - the Intercal of syntaxes).
But the reverse seems more difficult to argue: that there is some ideal syntax that is objectively more reasonable than all other syntaxes. I have preferences (do/end blocks feel like unnecessary noise, I like my types to be written inline in my functions, angle brackets feel right for generics), and some of them even have a veneer of logic behind them (significant indentation is a nuisance when I want to paste a block of code that has the wrong indentation). But clearly other people have different preferences, because they keep on designing languages that don't fit my ideal aesthetics.
The other side of this is that the more I write different languages, the less important syntax becomes. The places where I appreciate the syntax of a particular language are usually because the syntax fits particularly well with the language's semantics, rather than because the syntax itself is good or bad. Partial application in ML-family languages is syntactically very neat, but I don't feel the need to remove parentheses from Javascript because of it. What I really like is partial application, and using space as the function application operator works well with those semantics.
I mean, sure. I disagree, but I can see what you mean. But that's not an argument for one being better than the other, it's an argument of preference. You happen to like words over symbols.
In my experience, when I actually start getting to grips with a new language, the syntax very quickly becomes one of the least interesting things about it.
"Make it simple" is easy to throw around, but it's almost meaningless. Perception of simplicity and readability is incredibly subjective and dependent on previous experience. Beyond that it also depends on what goals the language has, because "simple" from one point of view can be oversimplified and unclear from another POV.
Rust is a systems programming language concerned with several low-level details. Programmers aren't reading Rust code for entertainment, but need to be aware of things like ownership, mutability, lifetimes, trait bounds, dynamic dispatch, and so on. Beyond being eye-pleasing, Rust also needs to be locally explicit to communicate costs, limitations, and unsafety of the code.
As the article shows, beyond just bikeshedding which ASCII sequence you use for what, there aren't many ways to simplify the syntax that don't involve hiding information or changing language semantics in ways that could worsen performance or create unpredictability.
The article only provided one small example, and the code is not really that ugly. There are many third-party projects/libs that expose far uglier APIs, especially those which heavily utilize generics and `where` clauses. These things, while small and individually improvable, wear programmers down quickly.
The syntax design of Rust adds to the situation as well. For example, Rust is probably the only language that makes me cautious about the usage of semicolons, as you might have loads of unsound problems if you misuse one.
My experience with Rust has been generally great, but I think these small details really deserve some thought from the Rust team.
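The semicolon point is real, because Rust is expression-oriented: a trailing semicolon turns an expression into a statement and discards its value. A minimal sketch of the difference:

```rust
// The final expression of a block is its value; a semicolon discards it.
fn with_tail() -> i32 {
    40 + 2 // no semicolon: this is the return value
}

fn main() {
    let x = { 40 + 2 }; // block evaluates to 42
    let y: () = { 40 + 2; }; // trailing `;` discards the value; block has type ()
    assert_eq!(x, 42);
    assert_eq!(with_tail(), 42);
    let _ = y;
}
```

Misplacing a semicolon here is usually a compile error (expected `i32`, found `()`), not silent unsoundness, but it is a uniquely Rusty thing to have to watch for.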
BTW: I also wouldn't really trust those people who have to navigate through all the quirks of something and then call their experience "great". There are people who memorized the entire `:help` book of `vim`, just so they can use the editor at hyperspeed instead of lightspeed. Those people are monsters; they can eat RAM chips and then spit out the bits cache line by cache line. You give them a handful of sand, they'll make a computer out of it and play DOOM on it. So of course those people will have a great experience with almost everything (or maybe they're just being polite, like me).
The point of the post basically demonstrates that all that syntax isn't just there to annoy you, but is driven by pragmatic needs to be explicit for performance purposes.
But the ending "much better" isn't really addressing the actual question of whether there is a way to improve syntax while keeping all things that need to be addressed instead of dropping them one by one like the post did.
Rattlesnake is actually pretty good. I also like CrabML but I wouldn't expect that to be popular though.
In general, I don't love Rust's choice of sigils (::, <>, etc), but the language reads well enough. That inner/outer function thing is legit the compiler model leaking into user code though.
It's funny how even the final example does look ugly: all those unnecessary `;`s, `::`s and `{}`s. This article has convinced me of the opposite of what it was probably trying to claim: Rust syntax is not just trying to express difficult concepts, it's genuinely bad.
I'm absolutely biased, as one of the rustc parser maintainers, but removing all redundancies of a languages syntax is actually terrible for usability. The fewer "landmarks" the parser has to work with, the harder job tooling has to provide feedback when something was written incorrectly and increases the likelihood of two unrelated concepts being a typo away, which won't be caught by the compiler. Increased verbosity doesn't have to mean reduced readability, as long as the syntax is regular and allows you to selectively skim parts of the code you don't currently care about.
Syntactic noise absolutely reduces readability. Redundancy may have value sometimes, but ignoring the information that's already there (newlines and indentation) in favour of a separate, parallel track isn't actually giving you more chances to catch an error, because indentation that doesn't match the blocks or newlines in the middle of an expression are generally a warning at best.
I can give you a straightforward example: the parser can recover from missing semicolons at the end of statements. The language could do automatic semicolon insertion a la JavaScript. But then that means that all the ambiguous cases have to be handled anyway, either with a confusing error or potentially subtly incorrect behavior. The parser uses whitespace to infer intent for this: if you encounter a statement-starting keyword at the start of a new line when you expected a semicolon, confidence is high. We also use indentation levels to infer where missing closing braces should go. That mechanism is unreliable enough that I wouldn't want to have to rely on it for the actual language, but for recovery you can get away with incomplete solutions.
Note that relying purely on whitespace significance is particularly problematic for expression oriented grammars like Rust's, as opposed to statement oriented like Python's.
It's not about removing redundancies or landmarks, it's about Rust having made some not-so-good choices.
It's slowly becoming apparent to more people (e.g. compare the screeching a few years ago about Rust's wrong choice of <>/::<> generics to today), which I guess is why we see more of these huffy articles from Rust defenders.
The problem is not the brackets, and the colons and so on, but rather the cultural habit of wanting to cram as many concepts as possible into as few lines as possible.
A Rust function has to be generic and fast, it should absolutely use references and also Result types, some functional idiom and generally encode as much as possible in the type system. When other languages would settle for another if or an exception, Rust always has one more weird trick up its sleeve to use instead.
This is the curse of the bigbrain programmer wanting to use bigbrain concepts everywhere. Crate and library developers have internalized these lessons and it has become a self-fulfilling prophecy.
It's interesting. The same post was on the Rust Reddit a few days ago, and almost everybody got the point of the article, which is the opposite of what we could infer from the title alone. Many have a good laugh as well.
But somehow, on HN, the article seems to have been taken literally with long threads about "::" v.s. "." and how < > are bad for generics.
Physics over optics, not everything is about cosmetics.
> It would be much simpler to not think about error handling at all, and let some top-level try { } catch (...) { /* intentionally empty */ } handler deal with it:
I dislike this, as even in C++, I like expected or StatusOr better than exceptions. I don't really think the Rust semantics for Result is that bad.
That said, if I were to fantasize about potential improvements, one thing I'd really like is an overhaul of error types in results. I wish there was some magical way that the error type could be inferred to be a union of all of the errors that can be returned. Most of the time, needing to explicitly do this manually doesn't add much value, and worse, it leads to much less precise error types as people reuse the same union type for functions that return disjoint errors. Unfortunately, there's no obvious way to make this work out, especially since it breaks Rust's rule about non-local inference, since the return type would be inferred by the function itself. Also, it would make life a lot worse for not breaking your API. But it would make the code a lot simpler and lower effort, that's for sure.
Perhaps another thing would be simply not needing to write out Ok(value), and just have an implicit conversion there, but while that could actually be done, it is also ugly for other reasons.
I don't see a problem at all with the ? syntax itself.
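For context, a minimal sketch of the `?` semantics under discussion (the `read_config` name and path are made up for illustration):

```rust
use std::fs;
use std::io;

// `?` unwraps the Ok value or early-returns the Err, converting it
// with From into the function's declared error type.
fn read_config() -> Result<String, io::Error> {
    let text = fs::read_to_string("/no/such/config.toml")?;
    Ok(text)
}

fn main() {
    // The missing file surfaces as an io::Error value, not a panic
    // or an invisible exception.
    assert!(read_config().is_err());
}
```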
but it hasn't been stabilized yet. The other workaround is to make an anonymous function and invoke it immediately, but sometimes you run into janky lifetime issues with that.
> I wish there was some magical way that the error type could be inferred to be a union of all of the errors that can be returned.
You could use `Result<(), Box<dyn std::error::Error>>` to use dynamic dispatch at runtime and allow the function to return any type that implements `std::error::Error`.
Otherwise people often use the anyhow[1] or the thiserror[2] crate to easily work with errors. They allow concise error propagation and quickly creating custom error types.
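A small sketch of the `Box<dyn Error>` approach (the `ParseFailed` type and `run` function are made-up examples, not from any crate):

```rust
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct ParseFailed;

impl fmt::Display for ParseFailed {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "parse failed")
    }
}

impl Error for ParseFailed {}

// Any error type implementing std::error::Error can be boxed into
// the same return type, so `?` works across unrelated error types.
fn run(fail: bool) -> Result<(), Box<dyn Error>> {
    if fail {
        return Err(Box::new(ParseFailed));
    }
    let _n: i32 = "42".parse()?; // ParseIntError also coerces via ?
    Ok(())
}

fn main() {
    assert!(run(false).is_ok());
    assert!(run(true).is_err());
}
```

The cost, as noted downthread, is that the set of possible errors is erased from the signature.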
dyn Error would be fine by me performance-wise, but I do want to exhaustively know the exact possible kinds of errors a function can return statically at compile-time, and dyn Error thwarts that. It looks like anyhow Result is basically in the same boat, just with better ergonomics.
The thing about the inner function is something I didn't know about, and while I understand the motivation and need, it definitely feels a little goofy. But if you're passing impl AsRef<Path> just to pass it as a &Path to an inner function, apart from slightly better ergonomics, there's not really any difference to just having the outer function accept &Path, no? That might be the actual wart beyond just bikeshedding.
If you only accept `&Path`, then the caller will have to turn it into a `&Path` for you; that just moves the complexity from the one function to its many callers.
It would be more elegant to move the "inner" function to the top level, and expose a separate helper function that does conversion. The current Rust code seems to be constrained by backward compatibility, where the code was written with a generic interface to start with and this had to be preserved.
I think it'd be most elegant if we avoid the inner/outer structure, write the function generically without calling an inner function, and then teach the compiler to automatically factor out the non-generic part of the function as an optimization.
I don’t think it’s so bad since conversion to &Path is pretty trivial. Conversely, impl AsRef<Path> can move the argument even though it only really needs a borrow, which is kind of funky (both PathBuf and &PathBuf are AsRef<Path>, after all).
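For reference, the inner/outer pattern this subthread is discussing looks roughly like this (a sketch of the std-style idiom, not the actual std source):

```rust
use std::io;
use std::path::Path;

// Outer generic shim: one tiny monomorphized copy per caller type.
pub fn read(path: impl AsRef<Path>) -> io::Result<Vec<u8>> {
    // Inner non-generic function: compiled once, holds the real body,
    // so the bulk of the code isn't duplicated per type P.
    fn inner(path: &Path) -> io::Result<Vec<u8>> {
        std::fs::read(path)
    }
    inner(path.as_ref())
}

fn main() {
    // Callers can pass &str, String, PathBuf, &Path, ...
    assert!(read("/no/such/file").is_err());
}
```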
To be honest, I've always been unnerved by "Why is that block ending without a } or END? Is the indentation significant? Is the number of arguments for each construct back-fed into the lexer?" languages like Ante, OCaml, and Haskell.
If when trying to use them the compiler fails to provide adequate feedback to get you on track, please file diagnostics tickets! We take them seriously.
I think they're saying that it's counter-intuitive that the specialization relationship between Fn, FnMut, and FnOnce is the reverse of foo, &mut foo, and &foo.
One of the suggestions I liked from Uncle Bob’s Clean Code was that the contents of each method should be at a single level of abstraction. In languages that require constant bookkeeping this is either impossible or requires sacrificing performance or correctness. The level of satisfaction reported amongst Rust programmers implies this isn’t an enormous issue for them, but I don’t know if we as an industry have reliable measures of productivity and correctness. Either way, nice that Rust is able to devolve into a less optimal but simpler language in the ways shown in the article. I often feel some guilt when giving up on highly templated C++ and doing things the OO way, but it’s simpler to reason about and still plenty fast.
Let's put it this way... when you absolutely must syntax highlight code in order to be able to explain Rust syntax.. it's ugly.
Let's just go with a really contrived and stupid example: "pub fn". Would it really have killed them to spell it out? Why do we need 'fn'? Functions with a return type (i.e. `fn add(a: i32, b: i32) -> i32`) are just not easy to read. Yes, I'm aware that someone here will find this 'beautifully terse and concise', as well as easy to read. But without syntax highlighting, it's just a jumble of characters to me.
We read code more than we write, so clarity of code should be the goal of any 'modern' languages these days.
Funnily, nearby on the frontpage is The Python Paradox[1], an essay which contains:
> At the mention of ugly source code, people will of course think of
> Perl. But the superficial ugliness of Perl is not the sort I mean. Real
> ugliness is not harsh-looking syntax, but having to build programs out
> of the wrong concepts. Perl may look like a cartoon character swearing,
> but there are cases where it surpasses Python conceptually.
I do think there's something to be said for the notion of "conceptually ugly" rather than "syntactically ugly", though the latter is what people tend to focus on. Perhaps we need a better word than "ugly," with its superficial connotations.
I suppose there's also a distinction between "beautiful in implementation/concepts available" vs. "beautiful/pleasant to work with."
Harping on Perl again, one could see its roots in natural language with sigils representing different kinds of "nouns", and ability to rearrange conditions to be more like spoken language ("do X if Y vs. if Y do X") as an aspiration towards "beautiful to work with."
And of course the various notions of beauty can get in each others' way.
But they're all a lot more fruitful to consider than a mere "yuck, too much punctuation."
There has to be a more objective way to rank ugliness. The example function you provided looks perfectly fine to me. I don't see how syntax highlighting makes it any better.
I did mention it will look perfectly fine to someone here [and what better example than someone whose nickname is Gigachad :)]. But if you had a sea of functions that looked just like that, it'd be hard to pick things out without syntax highlighting.
How can we actually prove this statement though? I work with TS, Ruby, and Rust daily and I honestly can't think of a single time I had any problem reading Rust once I understood the language. But saying "works for me" is just as anecdotal as "doesn't work for me". There needs to be a better way of talking about the problem to progress on this.
For me it is an issue as well, but it doesn't seem to be an issue for the others. I strongly prefer Haskell/math notation for denoting the types of functions, but all the new languages choose other syntax.
I don't think the syntax of TS is the problem. It's that people are trying to define types for massively dynamic JS that is inherently hard to express as a type.
Brevity isn't always a bug even when optimizing for reading rather than writing. Spending extra characters spelling out `function` doesn't necessarily make code more readable.
I do, in general, think keywords should not be abbreviated. We have "return" rather than "ret", "continue" rather than "cont", and so on. This wasn't always the case; ancient (pre-1.0) versions of Rust had a lot more abbreviation in the language and library than current versions.
I think there's a good reason for the function keyword to be abbreviated.
We know that when people read lines of text, they pay a lot of attention to the beginning and the end and comparatively less attention to the middle. One result is that lines with common prefixes tend to blur together. To get around this, you want the reader to encounter the unique portion of the line ASAP, which makes it desirable that any kind of necessary prefix be brief.
Upshot is you want the function signature to be what you see as you scan down the left side of the page, so it does make sense to keep the function keyword particularly concise.
In theory the return type can be inferred by the body of the function given the fixed argument types.
However rust has made the stance that the full signature is required for the reason of local static analysis. If you have a function A that calls another function B, you shouldn't need to analyse B in order to analyse A (both as a human and as a compiler)
That's another thing. I don't want to see 32 and 64 crap in code that is supposed to be high level and portable.
i32 means I'm doing FFI with C libraries.
If you check the Rust documentation, you see it has no integer types that are not a fixed size.
"All the world is x86/ARM (formerly DEC VAX / Sun SparcStation )"
C started on 18 bit machines. A program that correctly used "int argc" for the main argument count runs on an 18 bit machine or a 16 bit machine without modification.
> C started on 18 bit machines. A program that correctly used "int argc" for the main argument count runs on an 18 bit machine or a 16 bit machine without modification.
I'm guessing you're too young to remember the 32 -> 64 bit transition, because this is backwards in practice. Non-fixed integer sizes are a nightmare for portability and the reason why some C code still hasn't been ported even today, whereas programs in e.g. Java or OCaml ran without modification on the new machines.
A modern language should make it easy and first-class to use arbitrary-precision integers and decimals though.
Java programs run without modification on a new machine for basically the same reason why, say, a Nintendo Game Boy program runs without modification on an emulator, more or less. The machine has been ported, taking the programs with it, which have not been ported.
Not necessarily; in that era compiling with GCJ was a viable option too. And in the case of something like OCaml there's no VM. The point is that running the program on a different machine does not result in random changes to when arithmetic does or doesn't overflow.
MC68K MacOS software running on x86 via executor has not been "ported", even though it is transpiled to x86 code.
That's approximately the same as Java portability. There is a Java virtual machine (whether interpreted or compiled). Programs are written to that, and the virtual machine semantics is what is ported.
The programs are running on the Java machine, just like the programs running under Executor are running on a 68K Mac (as far as they know).
C programs also run on the C abstract machine, if you want to use that kind of logic. GCJ was a true native compiler, not an emulator. And Java is far from the only language that offers specific-sized integers. (Even in C, people writing portable code know to avoid the machine-sized integer types and use POSIX intN_t types instead).
I am seriously confused. Using fixed size integers is about correctness. Randomly changing the size of your integers will cause unexpected integer overflows.
i16 and i64 are more concise than short and long, and it is immediately obvious what the number range is for a beginner. The only complaint regarding portability is the unstated assumption that every computer can only store two values per bit. A hypothetical QLC-based computer might store an i64 number in 32 bits.
Which is why Rust has `usize` for indexes and memory offsets. Beyond that, a good programming language can't fix a bad programmer.
Were you aware that, because people use `int` so liberally, the LLP64 model 64-bit Windows uses and the LP64 model Linux, macOS, and the BSDs use keep `int` 32-bit on 64-bit platforms? Hello, risk of integer overflow.
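A small sketch of why fixed-width types help correctness here: the width, and therefore the overflow point, is the same on every target, and widening has to be written out.

```rust
fn main() {
    let a: i32 = 2_000_000_000;

    // 2e9 + 2e9 exceeds i32::MAX (2_147_483_647) on every platform,
    // and checked_add makes that explicit instead of wrapping silently.
    assert_eq!(a.checked_add(a), None);

    // Widening to i64 first states the intended range in the code itself.
    assert_eq!(i64::from(a) + i64::from(a), 4_000_000_000i64);
}
```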
The C standard describes an int type whose minimum range is -32767 to 32767.
Any programmer worth their salt knows this. If writing maximally portable code, they either stick to that assumption or else make some use of INT_MIN and INT_MAX to respect the limits.
The minimum range is adequate for all sorts of common situations.
Some 32 bit targets had 16 bit int! E.g. this was a configuration for the Motorola 68K family, which has 32 bit address and 32 bit data pointers.
Rust is one big "C and C++ have proven that 'hire better programmers' isn't a viable solution". You might as well use the same logic to argue against C and C++ restricting and complicating assembly language with a type system and the removal of unconstrained GOTO.
> And, so, Rust builds on that by renaming that to i32, to further entrench the poor practices.
The rationale is that a program's logic should function the same on different targets, or fail with a compile-time error if that's not feasible.
THAT is why the default is to use fixed-size variables except for the `size_t` equivalent which is intended to fit any valid memory offset, and to encourage use of correct types for various applications by making it more bothersome to convert types. (eg. Does a file format use a 16-bit integer for an image dimension? Expose that in the parser's API.)
> Of course, you will not win any popularity contest by appealing to people who know what they are doing, who are the minority.
I dunno. I know what I'm doing, but I choose Rust over every other language because it's refreshing and de-stressing to teach the compiler to watch my back as much as possible.
Everyone has bad days, and distracted days, and tired days, and, unless everything you write is closed-source hobby code, you have to deal with code other people have touched.
Personally, I struggle to see what's good about unspecified-but-fixed-width integer types as in C/C++. A lot of real-world code I see uses uint32_t (sometimes typedef'd to u32) and the like to get predictable data layout and to avoid cute gotchas like long being 64-bit on x86_64 Linux but 32-bit on Windows.
I also like how Rust makes both signed and unsigned types equally easy to type, because I feel that a lot of people use signed integers where they should be using unsigned simply because it's easier to type "int" compared to "unsigned (int)". And if you absolutely need a machine word size dependent type in Rust, you do have usize, which is the equivalent of size_t.
I on the other hand like that _int_ doesn't mean "some integer of implementation defined width". I want to know for sure how big my numbers can be and how much memory an array of such numbers consumes.
It’s still ugly! There’s nothing wrong with that. Everyone will have different opinions on these things. Bike shedding will probably never go away.
The semantics are what matter (at least in most programming languages)!
To me, syntax wise, I just can’t stand inline type definitions. CrabML comes close to what I can grok syntax-wise. I spent many years with C++ and I can read TypeScript… but to me they’re examples of bad syntax, not role models.
However if you read A Logical Approach to Discrete Math or Programming in the 1990s you’ll know just how syntax matters… most programming languages can’t even reach this level of which is why I think semantics are more important.
Code aesthetic appreciation is a temporal matter. Something that looks ugly initially might feel natural and sound later. This happened a lot to me while learning Rust.
Rust has to be learned, true, but once the initial learning phase is done, those syntaxes start making sense and become part of the value rather than the cost.
I would recommend putting personal aesthetic preferences aside while learning and reassessing them later.
Also, I hope the article's point was not lost. It took me too long to get it, but I am glad I did before I wrote my first comment on Reddit a few days ago.
Personally I'm fine with semantics but I don't like how lifetime arguments are mixed with template arguments and how types that contain and behave like borrows have no visual indication of that in their names.
> [...] but I don't like how lifetime arguments are mixed with template arguments [...]
It does make sense if you think of lifetimes as being part of a type system. Almost all lifetimes are generic (aside from 'static), so it kind of makes sense to put them in the section for declaring generic parameters.
It also works for how you constrain lifetimes: `'foo: 'bar` means that `'foo` outlives `'bar`, but could also be read as `'foo` inherits `'bar` because, like in type inheritance, a `'foo` can be used anywhere that wants a `'bar`.
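A tiny sketch of that "outlives reads like inherits" point (the `shorter` function is a made-up example):

```rust
// `'long: 'short` reads "'long outlives 'short": a &'long str can be
// supplied anywhere a &'short str is expected, much like a subtype.
fn shorter<'short, 'long: 'short>(a: &'short str, b: &'long str) -> &'short str {
    if a.len() <= b.len() { a } else { b }
}

fn main() {
    let owned = String::from("longer-lived");
    {
        let temp = String::from("short");
        // &owned (the longer-lived borrow) coerces to the shorter lifetime.
        assert_eq!(shorter(&temp, &owned), "short");
    }
}
```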
I find caring about a language being “ugly” such a weird concept. Like, imagine caring that a hammer is pretty or not… All I care about is how well it can hammer my nails and that’s literally it.
Your interface with the hammer is your hands, so you might care if the grip is rough or slippery, apart from how well it hits nails.
Your interface with code is your eyes, so it can matter a lot how it appears. We read code more than we write, so writing for legibility is important. People talk about line noise. And we use syntax highlighters and monospaced fonts. All of this is for appearances.
My interface with code is definitely not my eyes. My interface with code is the mental structures I keep in my mind while coding. Rust is perfectly legible, even better than legible, because all the structures in my mind can be simpler since the language guarantees so much on its own. Something as superficial as whether the language uses :: or whatever doesn’t even register while I’m programming.
Also, that was a nonsense metaphor for you to use. The grip being slippery is so far removed from “beauty” that not even trying to change the subject to be about “interfacing” can redeem it.
C++ is a slippery hammer, not because it’s ugly, but because it’s easy to hit yourself in the foot using it.
I care about how nice my tools are as well, but notice "nice" implies usefulness. Beauty? beauty is irrelevant, and it is irrelevant in many human endeavors where it is not treated as such.
I'd say that because of everything-is-an-expression, and items (e.g. functions, type definitions) allowed in block scope, rustc will accept exceptionally opaque syntax compared to other languages.
But that doesn't mean that you should be writing exceptionally opaque syntax. For example, in the given example, I would demand that <T: AsRef<Path>> be written with a where clause instead. rustc will accept either, but humans should demand `where` at least whenever the traits contain `<>`.
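For illustration, the two equivalent spellings (hypothetical `exists_*` function names):

```rust
use std::path::Path;

// Inline bounds: fine for simple constraints.
fn exists_inline<T: AsRef<Path>>(p: T) -> bool {
    p.as_ref().exists()
}

// `where` clause: identical meaning, but the signature stays scannable
// once the bounds themselves contain angle brackets.
fn exists_where<T>(p: T) -> bool
where
    T: AsRef<Path>,
{
    p.as_ref().exists()
}

fn main() {
    assert_eq!(
        exists_inline("/no/such/path"),
        exists_where("/no/such/path")
    );
}
```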
I mean the post itself basically says as much. Not quite joking, but sarcastic, and to demonstrate that (as far as the author is concerned, I have no strong opinion personally) the complaints about Rust's syntax are not actually due to the syntax, but the semantic features that necessitate some kind of syntax to make them work.
It's a bit of a straw-man, of course, but it's an interesting idea that made a couple things a bit clearer for myself.
I keep thinking Rust should allow you to skip the < > if there's only one generic argument. Like just say io::Result Vec u8. Much more lightweight and not ambiguous (but breaks macros people write). You can keep the angle brackets for multiple arguments like idk Vec HashMap<Option String, Result<i32, ()>>.
This would involve using whitespace as an operator. If you're going to do that (I'm not sure that it stays readable once you go beyond trivially small code snippets) I'd argue that the generic type should be postfix, as it is in the CrabML syntax.
And yes, these proposed syntaxes for Rust are not practical because Rust's support for macro definitions relies on the existing syntax. But if you're willing to reimplement any macros, it becomes viable.
That's wild actually, I've been aware that ML languages exist and had a rough idea what they look like but I never realized they do postfix generics. I'm going to have to let that bake in my brain.
The postfix notation isn't particularly essential. Haskell has usual left-to-right application but the types are still written without extra syntactic clutter, e.g. "Vector Int".
I was against because I felt that there should always be parens/brackets/braces (macros can use any of them) for better scannability when you're invoking an unclear amount of extra stuff behind the scenes.
(An appeal to the "make costs explicit" side of Rust's philosophy that also requires explicit .clone() rather than copy constructors.)
Yeah, I think that's very cool and reads nicely in a lot of cases. D is definitely always a language that comes to mind thinking about "has lots of important features" + "doesn't look crufty".
I am wondering if the author is the one who would use all of those nasty-but-beautiful UTF special maths/greek characters which were super-popular a couple of years ago.
I don't find Rust's syntax ugly. But then I don't find Modern Art ugly as opposed to what many people think so well...
The syntax is honestly what's keeping me away from Rust; it's ugly AF.
I jump into an unknown C++, or TypeScript / JavaScript or C# codebase and it's just readable and easy to understand. I read a bit of complex Rust and it's making me queasy.
The C++ sample is definitely a strawman: you allowed Rust to `use std` but C++ was not permitted `using namespace std`. Still ugly, less so. And further, I think your "deoptimized" Rust sample should be the headliner, since you point out that the Rust sample sacrificed readability for performance reasons.
> The next noisy element is the <P: AsRef<Path>> constraint.
Should be replaced with `fn read(path: impl AsRef<Path>)`
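A compilable sketch of that suggestion, next to the generic-parameter form it replaces (the function bodies are my assumption, mirroring the usual read-to-Vec pattern):

```rust
use std::fs::File;
use std::io::{self, Read};
use std::path::Path;

// The generic-parameter form from the post...
pub fn read_generic<P: AsRef<Path>>(path: P) -> io::Result<Vec<u8>> {
    let mut file = File::open(path)?;
    let mut bytes = Vec::new();
    file.read_to_end(&mut bytes)?;
    Ok(bytes)
}

// ...and the `impl Trait` form suggested above: same behavior,
// but no named type parameter in angle brackets.
pub fn read(path: impl AsRef<Path>) -> io::Result<Vec<u8>> {
    let mut file = File::open(path)?;
    let mut bytes = Vec::new();
    file.read_to_end(&mut bytes)?;
    Ok(bytes)
}

fn main() -> io::Result<()> {
    // Round-trip through a temp file to show both forms behave identically.
    let path = std::env::temp_dir().join("asref_demo.txt");
    std::fs::write(&path, b"hello")?;
    assert_eq!(read(&path)?, b"hello");
    assert_eq!(read_generic(&path)?, b"hello");
    std::fs::remove_file(&path)?;
    Ok(())
}
```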
> We’d have to use some opaque Bytes type provided by the runtime.
A less joke example might be to have `pub type Bytes = Vec<u8>` as a secondary declaration.
I guess the rest of the post is just hyperbole. Next time, take an example that has a struct with a lifetime parameter implementing traits that have lifetime parameters. That's some ugly syntax!
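For illustration, the kind of declaration meant here, as a hypothetical sketch (the trait and all names are made up): a borrowing struct implementing a trait that itself carries a lifetime parameter.

```rust
// A trait with a lifetime parameter: tokens borrow from some buffer 'buf.
trait Source<'buf> {
    fn next_token(&mut self) -> Option<&'buf str>;
}

// A struct with a lifetime parameter, borrowing its input.
struct WordSource<'buf> {
    rest: &'buf str,
}

// The lifetime shows up three times in one line: impl<'buf>, Source<'buf>,
// and WordSource<'buf>. This is the syntax being complained about.
impl<'buf> Source<'buf> for WordSource<'buf> {
    fn next_token(&mut self) -> Option<&'buf str> {
        let trimmed = self.rest.trim_start();
        if trimmed.is_empty() {
            return None;
        }
        let end = trimmed.find(' ').unwrap_or(trimmed.len());
        let (word, rest) = trimmed.split_at(end);
        self.rest = rest;
        Some(word)
    }
}

fn main() {
    let mut src = WordSource { rest: "hello rust world" };
    assert_eq!(src.next_token(), Some("hello"));
    assert_eq!(src.next_token(), Some("rust"));
    assert_eq!(src.next_token(), Some("world"));
    assert_eq!(src.next_token(), None);
}
```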
Totally tangential, but I just did a quick search to confirm my suspicion: using the word "strawman" in an HN comment these days greatly increases the chances of getting downvoted. It would be interesting to analyze why, but my guess is that it has become a cliché and suggests a lack of original perspective.
Because many people think they've won an argument by throwing about terms like that or loosely related ones. It's like a bad language framework and is often inappropriately shoehorned into conversation by people who think it makes them look smart.
Reddit is atrocious for this and I suspect people don't want to see it get in the way of real discourse.
I would actually love an ML frontend for Rust. I've always felt it was just such a nice and legible way to write code, hope CrabML someday becomes a real thing
Building a language that functions well is different from designing a language that reads well.
Just like building an app that functions well is different from designing an app that looks great.
We don’t expect every backend developer to be the best front end designer and at some point we shouldn’t expect every language developer to be the best language designer.
It might be worth treating these two tasks as separate disciplines that can be studied and practiced independently.
Otherwise, we keep getting languages that feel akin to 1990’s websites. They function great, but look as if the aesthetic is an afterthought.
Some nitpicks, since you're almost asking for them:
'read' taking 'ref P path' isn't entirely accurate. In Rust, 'path' is moved into the 'read' function, which destroys it after calling 'inner'. (If the caller doesn't want it to be destroyed, it provides a reference, i.e., a safe non-null immutable pointer; when 'read' destroys the reference, nothing happens.) As I understand it, 'ref P path' in D still gives the caller the ability to access 'path' after calling 'read' and the responsibility for destroying it.
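The move-vs-reference distinction described above can be made observable with a drop counter; a minimal sketch (the `PathLike` type and counter are made up for illustration):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// DROPS counts how many PathLike values have been destroyed.
static DROPS: AtomicUsize = AtomicUsize::new(0);

struct PathLike(String);

impl Drop for PathLike {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

// By value: the argument is moved in and destroyed when the function returns.
fn read_by_value(p: PathLike) -> usize {
    p.0.len()
}

// By reference: the caller keeps ownership; dropping the reference does nothing.
fn read_by_ref(p: &PathLike) -> usize {
    p.0.len()
}

fn main() {
    let a = PathLike(String::from("a.txt"));
    read_by_ref(&a);
    assert_eq!(DROPS.load(Ordering::SeqCst), 0); // `a` is still alive
    read_by_value(a);
    assert_eq!(DROPS.load(Ordering::SeqCst), 1); // `a` was consumed
    // Using `a` here would be a compile error: borrow of moved value.
}
```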
Also, the '?'s are important for error handling: the 'File::open()?' line basically expands to
    let mut file;
    match File::open(path) {
        Ok(value) => { file = value; }
        Err(error) => { return Err(error); }
    }
and similar for the 'read_to_end()?', except the result is discarded. And one of the reasons (apart from flexibility) that 'read_to_end()' takes an external vector as input is so that the caller can retain the result of a partial read; otherwise, the function would have to return a vector on error as well as on success.
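That desugaring can be written out as a compilable sketch (the file names are arbitrary; note that the real `?` additionally applies `From::from` to convert the error type):

```rust
use std::fs::File;
use std::io;

// The concise form, using `?` for early return on error.
fn open_short(path: &str) -> io::Result<File> {
    let file = File::open(path)?;
    Ok(file)
}

// The expanded form: what `?` roughly turns into.
fn open_long(path: &str) -> io::Result<File> {
    let file = match File::open(path) {
        Ok(value) => value,
        Err(error) => return Err(error),
    };
    Ok(file)
}

fn main() {
    // Both propagate the error to the caller the same way.
    assert!(open_short("/no/such/file").is_err());
    assert!(open_long("/no/such/file").is_err());
}
```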
I think you missed the implied wink. This post is not arguing for throwing away all performance and safety, it is showing how “ugly syntax” is actually mostly inherent complexity from trying to be safe and fast.
Like, all the syntax does something that contributes to being safe and fast, but that doesn't mean there couldn't be a completely different notation for all of that. Maybe there is nothing that would make everybody happier, but for any given person unhappy with Rust syntax, there's probably a bunch of tweaks you could do to make it nicer for them.
for me it's really unpleasant that there is so much stuff between the function name and the names of the parameters. It's a bit nicer with already-a-Rust-feature where-clauses like
    pub fn read<P>(path: P) -> io::Result<Vec<u8>> where P: AsRef<Path> {
but if nothing else I think that makes a mess of the indentation if you linebreak anywhere in that line.
C++ avoids this by just yanking all the extra bits at the very start of the declaration, so we could copy that (without going all the way into C++ caricature):
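One hypothetical rendering (made-up syntax, not valid Rust) with the bounds front-loaded the way a C++ template header does:

```text
template<P: AsRef<Path>>
pub fn read(path: P) -> io::Result<Vec<u8>> { ... }
```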
This addresses my specific complaint, but is probably worse for other people. Who knows.
As another example, the whole business with explicit Result types and the early-return question mark is basically isomorphic to checked exceptions, which may or may not be syntactically lighter/more pleasant. Would Rust enthusiasts prefer that? Probably not.
Would it make the language more approachable to people coming from some Java ecosystem? Maybe? Would it be worth it? No idea.
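The isomorphism being described is roughly this: the Rust contract puts the error channel in the return type, where a checked-exception language puts it in a `throws` clause. A minimal sketch (the Java-style signature appears only as a comment, for comparison):

```rust
use std::fs;
use std::io;

// Rust: errors travel in the return type, and `?` is the early return.
// A checked-exception rendering of the same contract would read roughly
// `byte[] read(String path) throws IOException` (hypothetical, for comparison).
fn read(path: &str) -> io::Result<Vec<u8>> {
    let bytes = fs::read(path)?; // propagates io::Error to the caller
    Ok(bytes)
}

fn main() {
    // The caller is forced to acknowledge the error, just as a checked
    // exception forces a catch or a rethrow.
    assert!(read("/no/such/file").is_err());
}
```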
Are you a simple parser? Because you read until finding the first reason to stop and bailed out.
Marginally more advanced compilers will try to recover, get to the end with as much context as possible, and then complain, including maybe some extra info that came after that might be relevant to the error seen.
Writing 'o' and 'oo' to differentiate the two words was a workaround for not being able to write the letter 'z' in script, because it was easily mistaken for the letter 'ȝ'.
Reminds me of a time I was talking to a German relative about cooking and she kept talking about making a "duff". Took me a long time to realise she was making "dough", but had only ever seen the word written.