Thoughts on what a next Rust compiler would do (matklad.github.io)
214 points by arunc on Jan 26, 2023 | 140 comments



I agree that the C-model is keeping Rust back. But it's ubiquitous and extremely successful. Dynamic linking also practically gives us an ABI (a very simplistic one) for free, which is great for FFI and debugging.

So if I could choose, I'd invest money into a new ABI/linker that reconciles polymorphic languages with separate compilation without demanding a uniform object model. Rust wouldn't be the only language to benefit from such a linker.

I have no complete solution for how such a thing might work, but I think it's possible because the actual specializations that stem from monomorphization are very few. It might simply be possible to create one function for every possible monomorphization ahead of time. Even if this turns out to be futile, the actual specialization only needs to change very few things in the code, so monomorphization could effectively be handled by a dynamic linker.


> Even if this turns out to be futile, the actual specialization only needs to change very few things in the code, so monomorphization could effectively be handled by a dynamic linker.

This is true in only the simplest case. Already if you have a polymorphic function that sums the elements of an array, you'll be making function calls to string concatenation if those elements are strings, but you'd really like the vectoriser to produce nice SIMD code when it's specialised to floats.
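For instance, a rough sketch of the polymorphic sum in question (the function name and `Sum` bound are my own):

    // After monomorphization, sum::<f32> is a good candidate for SIMD
    // vectorisation, while sum::<String> lowers to a loop of heap-allocating
    // concatenations - very different machine code from the same source.
    fn sum<T: std::iter::Sum<T> + Clone>(items: &[T]) -> T {
        items.iter().cloned().sum()
    }

    fn main() {
        println!("{}", sum(&[1.0_f32, 2.0, 3.0]));
        println!("{}", sum(&["a".to_string(), "b".to_string()]));
    }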

Specialisation is something that should happen before the optimiser.


> Specialisation is something that should happen before the optimiser.

That's true, but your example shows a very important point: these cases are limited and (in current languages) known to the compiler developers. I don't know of a language that would allow a user-defined datatype to come with such specific optimizations. So for today's languages one could probably enumerate the special optimizations before specialization and then later only dispatch to the optimized code.

But even if we say that optimization must happen after specialization, I still think that these optimizations are general enough to be put into a dynamic linker. That's what all JIT engines do, after all.


I must not be understanding your idea at all, because it seems to me that in both Rust and C++ you can give user-defined data types completely different behaviour that doesn't occur in any of the built-in types, which would be impossible with what you're suggesting.


This sounds great, but I don't think Rust has the complement of highly skilled core developers needed to tackle something this ambitious. I'm a close observer of the Rust project, and my impression is that Rust's core development team has been hollowed out over the past few years - there's been a string of quiet departures, which sadly included some of the most powerful contributors. The Rust project is quite secretive and opaque about its internal politics, so I don't know if there's any unified reason for the attrition, and I don't think it's useful to try to read the tea leaves. At this point, I'm just keen to be reassured that Rust has the momentum to complete things like async and shore up holes in the type system on a reasonable timescale.


People come and go from every project. I haven't noticed any particular drop in momentum for the Rust compiler. (Also, the Rust compiler dev community is hardly "secretive and opaque"--just look at the amount of drama that regularly spills out into the open. I can't think of any compiler that's more openly developed than Rust, in fact.)

A brand-new Rust compiler may well never happen, but if it doesn't happen it's because the business value for a complete rewrite isn't there, not because of some mass departure of compiler developers.


> I'm a close observer of the Rust project, and my impression is that Rust's core development team has been hollowed out over the past few years - there's been a string of quiet departures, which sadly included some of the most powerful contributors.

Can you give more specifics on this?


I don't want to list specific contributors here. We often have no information on why people left the project - there might be all sorts of personal factors. I will say that my broader sentiment - that Rust's momentum has slowed, that it's not delivering on its commitments in a timely way, that there are concerns about its ability to deliver in future - has been expressed publicly by high profile past core contributors:

https://github.com/rust-lang/rust/pull/96709#issuecomment-11...

I don't think Rust should tackle any ambitious project to rewrite the compiler while these basic concerns remain.


Most of the high-profile departures from the compiler happened in the wake of the 2018 edition (the first new edition since 1.0, and the edition that had to invent the very notion of "edition"), where a lot of people pushed themselves far too hard to deliver on what would be, in retrospect, far too aggressive of a release deadline. The edition just barely made it out in the 2018 calendar year, after getting delayed a handful of times, but it resulted in massive burnout among those contributing to its features, including the person who wrote the comment linked above. In practice Rust has actually been recovering quite nicely over the past few years after reaching a nadir of volunteer motivation in 2019; https://github.com/rust-lang/rust/pulse/monthly tells me that 178 discrete authors have contributed to just the rust-lang/rust repo over the past month.


Thanks for linking that thread, it's an interesting discussion. I didn't realise GATs were so polarising. I understand why, but it was my assumption that everyone was okay with the limitations, and that they would eventually be lifted (similar to how NLL and GATs lift limitations currently).


The biggest name that left was Steve Klabnik. He made some statements that made it sound like he was upset with Amazon's involvement in the language, but as I recall it he didn't really go into any specifics so I don't really know.


While there was some stink around his departure, he's not one of the contributors I'm concerned about here. I'm thinking more of people working on deep technical issues around the language and type system.


There are already at least two Rust compilers, now that there's a GCC version. That's good; it means the language becomes stronger than the implementation.

Right now, the weak points are mostly library side. Too many 0.x version crates where the API is still in flux.


It seems that among web-dev-related crates, admittedly a niche for Rust, there have been quite a few abandoned, unfinished projects and other stability issues.

There also seems to be a general trend towards sophistication and away from simplicity. That’s a tradeoff that makes me personally wary, rather than excited at this point in my life.

Much of that is to be expected, given the age and the unique value props of the language. But the stability to hype ratio seems unattractive overall, even though the language and tooling are great to a large extent.


In fairness, I don't think rust in web dev makes any sense outside of personal hobby projects. I'm not surprised there are lots of abandoned and unfinished projects there. That's not an important area.


For backend APIs it absolutely makes sense, not only for the safety guarantees but also its type system which makes data flows much easier to reason about. I recommend the book Zero To Production in Rust in particular for learning about this, I finished it recently and at the end I had a production-ready backend API.

https://zero2prod.com


Rust isn't as safe as a GCed language and you don't need systems level speed on a backend API.

Rust’s safety is better than C/C++, but memory vulnerabilities are found in cargo packages because of unsafe code.


> Rust isn't as safe as a GCed language

Safe rust should be. And C# is a notable example of a GCed language that also has 'unsafe'. Along with most of the GCed languages with FFIs.


Safe rust can still use crates with unsafe rust, so no. Unless it’s safe rust and uses no outside code or none of its dependency tree has unsafe rust.

It’s pretty crazy how many people will argue this point and downvote comments about it. It’s like a C++ dev claiming they never write memory bugs. I love rust, but the community pretending like memory safety bugs are impossible is as annoying as people claiming memory safety isn’t important.

EDIT: And the fact that GC languages "can" call unsafe code is not the same as a language that has unsafe in it. It's laughable to claim there are the same memory bugs in, say, Elixir as there are in Rust.


> Safe rust can still use crates with unsafe rust

Every language ecosystem that I have used has a good subset of its popular libraries implemented in memory-unsafe languages (usually C or C++), and the use of these libraries is typically unavoidable if you need high performance or to interact with the OS. Which is to say, most large projects in GC languages end up pulling in memory-unsafe dependencies.

Rust's advantage is that at least your dependencies are not 100% unsafe code, but have unsafe parts limited to small, easily auditable blocks.


> Unless it’s safe rust and uses no outside code or none of its dependency tree has unsafe rust.

If you already know what you want, why pretend there is no solution?

But you skipped the other half of the post, where I explained that the average GCed language either has unsafe code or can easily call unsafe code or both, just like Rust.


That's trivially true. But a web backend will preferably use:

- A std library that provides much of the stuff you need to write web servers.

- A few select well established, stable libraries to fill the gaps

- A battle hardened database like SQLite, MySQL or Postgres

- Maybe additional libraries and generators to program against specific, open protocols, standards and over the wire schemas, like json-schema/OpenAPI/GraphQL/Protobuf etc. Which are typically tested against a common test-suite.

- Maybe additional components like message queues and key value stores and object storage in order to scale.

You get the drift. It's not like these things are some random libs and tools that are maybe "unsafe" (whatever that means).

A good thing about Rust is that unsafe is always explicit. And if you're not using the keyword, then satisfying the type checker gives an "if it compiles, it runs" feel, because runtime assertions can be avoided.

But saying "anything is unsafe anyways" is missing the point. The language itself can nudge you towards a certain outcome, but I trust some of the mentioned technologies above, not because of the languages they use, but because they are well designed, stable and full of old battle scars.


If you are doing web backends, you are using crates with unsafe code. Again, I could write C code without memory violations hypothetically the same way I could use no crates, so I guess all rust rewrites are no longer necessary?

As I said in the edit, GC languages are safer than an unsafe language like rust, even if you could find and call unsafe code.

Again, I don't know why I need to make that point to someone who uses rust. If a C++ dev argued using C++ is fine cause all GC languages can call unsafe code, would you agree? Cause if you do then we might as well end rust now, it's not adding any safety. All languages are unsafe!


> you don’t need systems level speed on a backend API.

I do, actually.

> but memory vulnerabilities are found in cargo packages because of unsafe code.

Forbid unsafe in your config and you're good.
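A minimal sketch of that approach (note it only applies to your own crate, not dependencies, as pointed out below):

    // At the crate root (src/main.rs or src/lib.rs):
    #![forbid(unsafe_code)]

    fn main() {
        // Any `unsafe` block or `unsafe fn` in this crate is now a hard
        // compile error, and the lint level cannot be relaxed elsewhere.
        println!("no unsafe here");
    }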


Correct me if I am wrong, but forbidding unsafe does not cover dependencies, which can and do have unsafe rust.


You can use cargo-geiger to check whether dependencies have unsafe code too, along with your own code.


This is a good book. Very hands on and goes all the way.


All of this sounds fantastic.

I can't donate my time, but I'd be happy to donate money to anyone taking on this effort.

On that note, I'm looking for more Rust crates and maintainers to donate to. I just started donating to Bevy, and I'm looking for more areas of the Rust ecosystem that need it.

This language brings me immense business and personal value, and I'm happy to give back.


Bjorn3 for all their work on cranelift and working towards getting it to be a better debug compiler for rust.[1]

cwfitzgerald for wgpu. He took over as the head maintainer after kvark left Mozilla to go to Tesla.

winit maintainers, they're a core ecosystem crate but don't get that much support because windowing isn't flashy.

[1] https://github.com/bjorn3/rustc_codegen_cranelift


Web3 or something useful?


How much faster can the compiler be? This is the top issue with rust IMHO - compile times are real bad.


This is absolutely a pain point for us, working on a very large Rust project. In particular, incremental compile times are absolutely critical for developer comfort, and by far the most common complaint working on our codebase is that developer tooling, IDEs and running unit tests is slow. We've done everything that can reasonably be done - splitting the project into crates, using mold as a linker (the single biggest improvement), etc. Without a really big improvement to incremental compile, we will have a continuing drag on our development momentum.


I'm curious what are your actual compile times? What does "painfully long" mean for you and your workflow? 15 seconds? 2 minutes? 15 minutes?


This is a bit of a "how long is a piece of string" question. It depends on the developer workstation, which file they're currently working on, etc. That said, we're talking about incremental compile times here, not total project from-scratch compile times. A lag of a few seconds really makes a difference to developer comfort - I'd say, when it's an issue, we're talking about rust-analyzer responsiveness (mainly due to cargo check on save, which can be disabled) of 3-8 seconds, and delays in running unit tests of 5-20 seconds. I work on a super beefy ThreadRipper monster so I'm less affected, but colleagues on under-powered laptops suffer a lot.


Painfully long to me is everything over 5 seconds.



If you consider how many LLVM instructions are emitted and how much time Rust spends linking, the potential for speedups seems very significant.


Any reasons why it could not be as fast as the Go compiler? Maybe GC and the borrow checker could slow down a bit, but apart from that?


The Go compiler is barely an optimizing compiler. Rust needs massive amounts of inlining to avoid the performance cost of abstractions, and after inlining the code needs to be cleaned up. In this respect it isn't unlike C++.


Does anyone particularly care if development builds inline much? If you can get Go-like compile speeds for development iteration, and the current ones for release, it would be the best of both worlds for me.


In fact, that's exactly why cargo's default for dev builds is -O0: https://doc.rust-lang.org/book/ch14-01-release-profiles.html...


The Rust dev builds are very often (1) slower to build than the equivalent Go program, (2) slower to run than the equivalent Go program. Rust programs really need the `--release` flag to shine—and in some cases, the dev builds of a program cannot be meaningfully used, even for local testing, since they're so slow.


> The Rust dev builds are very often (1) slower to build than the equivalent Go program, (2) slower to run than the equivalent Go program.

It could definitely be better, sure.


Go production builds are quite small as far as I can tell, but I don't understand much about compilers so I will not discuss this. I have a lot of hands-on experience with both Go and Rust, and on a mid-size project, an incremental production build in Go will take less than 1 second, whereas a development (non-release) incremental build in Rust will take 10-15s (for something as simple as a `println!` update). It's subjective, but it's not much of an exaggeration to say that a Go prod build is >10x faster than a Rust dev build.

So, I get your point for production builds, that they need some special optimization that maybe doesn't happen for Go, but I don't need any optimization when doing dev testing and this could well remain totally unoptimized. I would then at least expect an "un-optimized" dev Rust build to be roughly as fast as a production Go build, or at least not >10x slower.


The reference compiler needs a lot of time to make the resulting binary fast, okay, fine. But where's the rust-lang.org group's fork of the golang.org group's `go build` tool, modified to accept source files written in Rust and to (quickly) output a binary with a performance profile no faster or slower than an equivalent program written in Golang? ~15 years of compiler research is too long, and comparisons between the two languages (however fair or unfair) are too numerous, for there to be any excuse for this not to exist, and yet it doesn't.


It could, if it stopped doing a lot of things it does which the Go compiler doesn’t.

Or of course if the Go compiler started doing them so it became a lot slower.

One example is “full” monomorphization of generics. But there are lots of such examples.

Usually though I find Rust compilation speed a non-issue; so long as “cargo check” and the IDE analysis are good, I just rarely compile to begin with.


The borrow checker is not an insignificant cost. Maybe you could imagine an incremental compilation mode that just didn't evaluate the borrow checker for faster development.

As other commenters have noted, the Go backend is extremely simplistic ("barely an optimizing compiler"). Rustc is also a lot faster in -O0 mode than any optimizing mode. But -O0 is somewhat dumber than Go's compiler. You could imagine an -O1 (or -O0.5) with compilation time vs performance tradeoff similar to Go, but it isn't there today. There's also an attempt to adopt a more simplistic backend (cranelift) into Rustc to improve codegen performance, but so far it hasn't provided much speedup.


Apart from rare pathological cases borrow checking is not a significant part of compilation time. Disabling it would be very detrimental for little gain.


As an end user of Rust, I think the borrow checker seems slow because it's one of the last phases to run during vscode's real-time feedback. More than once I've been in the midst of a long refactoring session, and finally fix what I think is the very last syntax problem. But then the borrow checker finally gets to run, and then I learn about a whole new set of problems.


Apologies, I was under the impression it was a significant cost. Do you have any good analysis of where Rust compiler time goes you could link to?


Here is a nice article that goes into detail: https://fasterthanli.me/articles/why-is-my-rust-build-so-slo...

Here is an older article that compares compilation times between multiple different libraries, where one of them does spend a significant (and majority of the) time borrow checking: https://wiki.alopex.li/WhereRustcSpendsItsTime


https://www.pingcap.com/blog/rust-compilation-model-calamity... is a good overview. In general it varies depending on the crate but we track the performance at https://perf.rust-lang.org/ - if you look at cargo, for example, over 60% of the time is spent in codegen through LLVM: https://perf.rust-lang.org/detailed-query.html?commit=222d1f...


For someone who hasn't written any larger Rust programs: is the compile time worse than C++?


The quick-lint-js project ported a part of their code from C++ to Rust. Rust's compilation time scaled less well with the size of the code base. Here is a detailed report:

https://quick-lint-js.com/blog/cpp-vs-rust-build-times/


Same order of magnitude, i.e. depends, but really bad if you actually use the language.


I say this with great respect, but the thing keeping me from Rust isn't the compiler, it's the language.


That's fine. It doesn't have to be everything to everyone. We probably don't need this kind of comment on every single article about Rust, though. (Or similar generic "I don't like this language" comments on any article related to any programming language.)


Then we equally don’t need pro rust comments on literally every other thread

Since a major thrust of the linked article is barriers to rust adoption it seemed relevant.


> Then we equally don’t need pro rust comments on literally every other thread

Correct.


Eh. Every article about Go has the same thing - where regardless of the content of the article people discuss the pros and cons of the language itself.

I think it’s a symptom of there not being enough places where adherents of different programming ecosystems can argue back and forth about our choices. I think there’s an unmet need in our community to have those conversations, and there’s probably something healthy in that.

It definitely distracts from the specifics of the article though. The corresponding discussion on reddit is much more technical:

https://reddit.com/r/rust/comments/10ld2vn/blog_post_next_ru...


I felt this way about Rust for a really long time, but after sitting down and doing a few projects with Rust I've really come to appreciate how much the language stops me from being a dumbass.


Once you're done fighting with the compiler (Rust Analyzer), it's actually very enjoyable and one can be _relatively_ productive. The language allows so much flexibility that it's very easy to produce well organized code...

But the compiler is really slow. Even an incremental build on a mid size project is never below 10s, whereas on a similar size project, Golang will take me less than 1 second to build incrementally. Also, I find cross-compilation (from/to major platforms) really challenging on Rust, as opposed to Golang where it's for my use cases super straightforward


My personal issue with Rust isn't the borrow checker. I actually find it quite intuitive. What I don't like is the extreme complexity of everything. There's just so much... stuff.

What I want is a Rust, but with almost everything stripped away. Complexity similar to Go.


It's worth pointing out that if you say "Go except with Rust's guarantees for thread safety", a _lot_ of the complexity comes back. You need to pull in lifetimes and move semantics and the no-mutable-aliasing rule. You need the Send and Sync traits. You probably want Deref-based smart pointers too. (And you'd need to make extensive use of the new generics features that are already in Go.)
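As a hedged illustration (names are my own), even a tiny concurrent helper pulls in Arc, move semantics and the Send/Sync/'static machinery:

    use std::sync::Arc;
    use std::thread;

    // The closure must be Send + 'static; Arc<Vec<u32>> satisfies both and
    // lets the worker share the data without copying it.
    fn spawn_sum(data: Arc<Vec<u32>>) -> thread::JoinHandle<u32> {
        thread::spawn(move || data.iter().sum())
    }

    fn main() {
        let data = Arc::new(vec![1, 2, 3]);
        let handle = spawn_sum(Arc::clone(&data));
        assert_eq!(handle.join().unwrap(), 6);
    }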


That's not what I'm saying. I want rust, but with the simplicity of go. I want a compiler that's super portable and it itself compiles in a few minutes. I want a straightforward build system that doesn't leave me confused. I want less language features.

I also want a more lightweight syntax but beggars can't be choosers.


You can only add so many constraints to a problem until it becomes unsolvable.

It's like saying "I would like to have a non-volatile storage device that is as long-lasting as microfilm, as fast as an SSD and as cheap as a HDD." At some point, you have to renege on at least one of your constraints.


Fair point, but I don't think my constraints are _that_ extreme. I think it's doable, but we'll never know unless someone seriously attempts it.


What language features would you like to take away from Rust?


I just use Go (or Nim) for native. It’s not comparable in goals and what Rust tries to do at all. But Go compiles super fast, is very fast, solves the glaring C safety issues, very keen on helping you with threads, and has a very fast garbage collector. If I run into performance problems, it’s not because of Go. But yeah Rust has its place in certain scenarios. I still hesitate to call it general purpose though — I think it’s best suited for stuff that underpins applications/usermode. And I don’t think it can be made much less obnoxious.


Go is fast, cgo is slow. Cgo is also annoying to set up on windows. It kills the simplicity of a simple go build.


I would be quite impressed if there was a way to have Rust with "less stuff". It's not like there is a lot of stuff that can be removed without making the language quite useless.


nim comes the closest of anything I've found, the infrastructure just isn't quite there.

ocaml has potential but I strongly dislike the numeric operators, and I'm not a huge fan of the syntax in general.

OCaml with python like syntax would be damn near perfect. I could even overlook the operator thing.


Memory management is hard. Thread safety is hard. Go (and Java, Python, …) pushes the complexity of the former to a GC, which has its own drawbacks. Few popular languages actually tackle the latter - most languages have safety problems there such as race conditions (in Go, Python, Java, etc.)

This is not to deny that Rust also has other sources of complexity. But, most of Rust’s complexity can be traced to its fundamental goals of safety and performance. If you’re willing to sacrifice one of those, there are much simpler and easier languages to use.


so you want Go then?


I don't have much experience but after writing some tiny rust scripts and similar sized things in kotlin (trying kotlinc v1.7 that was supposed to be a lot faster than previous versions), rust managed to stay ahead.

it's true golang is smoother, but the language is a lot messier (idioms and typechecking).

Can't conclude anything yet, but it makes me think back of rust more often than others.


The slow compile times I look at as a trade off. Rust does a lot more during its compile phase than most other compilers and there is a cost. Do I do all this static analysis and eat time here to hopefully reduce runtime bugs and subsequently the time to debug runtime bugs? It takes me a whole lot longer to debug something with a debugger than if the compiler can catch it.

I do 100% agree with your comments on cross compiling though. I also don't like how the target for linux has "unknown" in it.


For the vast majority of programs most time is spent in code generation/optimization. Rust's static analysis is not why compilation is slow.


Yeah … I’ve done a lot of optimization work. (I love that sort of thing). I have an instinct that there’s at least an order of magnitude compilation performance gain possible with a different architecture for the compiler. Especially if you didn’t bother implementing a lot of llvm’s optimizations.

It’s just tricky because writing a new rust compiler is a monster amount of work because of how complex Rust is. And rearchitecting the existing compiler would also be a massive undertaking because there’s so much existing code, and it’s spread over rustc (written in rust) and llvm (in C++).

A faster compiler probably wouldn’t bother with all of llvm’s intermediate representations. Refactoring across a code boundary that spans two languages (and two projects) sounds utterly exhausting.


It seems obvious to me that eventually someone is going to write a compiler for a language that delivers on rust's main value propositions (fast native code/memory safety/data race protection) but that compiles at least 10x faster than rust for most problems. Initially the generated code might be somewhat slower than rust on average, but I think developers will find they would happily trade some runtime perf for such a dramatic compile time improvement. The runtime performance gap will also narrow over time and eventually probably even supplant rust for most common problems. Rust itself is already too complex and weighed down by many competing interests that will prevent it from being the best possible version of itself. But it also has many self owns such as objectively bad syntactic decisions (like angle brackets for generics) that were made to appease intransigent c programmers.


Yeah; a simpler language would also be easier to learn. Rust is experimenting with an awful lot of new ideas. I imagine rust's successor (whenever that happens) will take the best ideas from rust - including the best design for a borrow checker - and build a much simpler language around that.

Rust is the first language in its category. It won't be the best language we ever make in that category.


Optimisation happens in release builds. I assume when people talk about compilation speed they mean debug builds because those are the ones you have to wait for.

For me the biggest hurdle to adopting Rust is that rust-analyser turns my laptop's fans up to 11. I don't know whose problem that is, but it doesn't happen with other languages, and my black box impression of rust's compilation infrastructure is that it's fucked up.


What about the language do you dislike? It's a strongly typed Ruby with generics.

Edit: trait-based OO is a breath of fresh air compared to tree-style class inheritance. Super flexible without having to overthink.

Immutable by default is reassuring, Option/Result are fantastic null/exception replacements.

Enums and match blocks are powerful and gracefully help ensure handling of all cases, have nice syntax, and work well for a systems language.
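For example (a small sketch with invented names), the exhaustiveness checking looks like this:

    enum Shape {
        Circle { radius: f64 },
        Rect { w: f64, h: f64 },
    }

    // The compiler rejects this match if any variant is left unhandled.
    fn area(s: &Shape) -> f64 {
        match s {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { w, h } => w * h,
        }
    }

    fn main() {
        let shapes = [Shape::Circle { radius: 1.0 }, Shape::Rect { w: 2.0, h: 3.0 }];
        for s in &shapes {
            println!("{}", area(s));
        }
    }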


Not OP but for me it's the syntax. I'm not even saying it's bad or could even be improved... I just don't find it intuitive. I've only sat down and walked through some of the learn rust book and tried to hack out a few small things.

I'm not sure what it is - there just always seem to be random symbols that don't seem to follow the conventions of other languages. I guess I am used to C-style (including C++, C#, JS) syntax and Python. It's possible that there are just new concepts that aren't really represented in the languages I use, too.

Again, I don't necessarily think it's bad. I just don't find it intuitive.


Interestingly very little of Rust's syntax is novel (the exception being lifetimes, which of course don't exist in other languages). The closest language syntactically is probably TypeScript (you'd be surprised how many programs are actually valid in both Rust and TypeScript!). And where syntax doesn't match TypeScript it usually matches something else common. For example the `self` parameter in methods comes from Python, and the closure syntax comes from Ruby.
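A small sketch of those two familiar pieces (the types here are invented for illustration):

    struct Counter {
        n: u32,
    }

    impl Counter {
        // An explicit `self` parameter, much like Python methods.
        fn bump(&mut self) {
            self.n += 1;
        }
    }

    fn main() {
        let mut c = Counter { n: 0 };
        c.bump();

        // `|x| ...` closures, reminiscent of Ruby blocks.
        let doubled: Vec<u32> = vec![1, 2, 3].into_iter().map(|x| x * 2).collect();
        println!("{} {:?}", c.n, doubled);
    }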


For me, it's the lifetime syntax I don't like the most. It doesn't feel "ergonomic" to type something like <`a>. It's mostly the ` I wish were different.


(Aside: it's 'a , not `a.)

One of the hopes is that it's usually not necessary to write those lifetimes explicitly in most programs, unless you're doing something unusual.

If you could go back and change it, what would you have used for the lifetime syntax?


Don't you need them anytime you have a struct with borrowed members? Even if it just has a Cow you need the lifetime in the struct definition. Sometimes, but not always, you can elide the lifetimes in a function (iirc they have to occur at the outermost layer). But I definitely wouldn't call it unusual.
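For example, a minimal sketch of that case (struct and function names invented):

    use std::borrow::Cow;

    // A struct with a borrowed member - even just a Cow - needs the
    // lifetime parameter in its definition.
    struct Label<'a> {
        text: Cow<'a, str>,
    }

    // In plain function signatures, elision usually lets you skip naming it.
    fn print_label(label: &Label<'_>) {
        println!("{}", label.text);
    }

    fn main() {
        let owned = String::from("hello");
        let label = Label { text: Cow::Borrowed(owned.as_str()) };
        print_label(&label);
    }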


Good question. Thinking about it, I would choose '@'. So when I read something like fn print_one<'a>(x: &'a i32), written with @ as fn print_one<@a>(x: &@a i32), I would see it the way I sort of semantically read it: function print_one AT lifetime of a.


Fair enough. I agree that that would have been more evocative of meaning. On the other hand, I also think that this:

&'a X

looks less like Perl / line noise than this:

&@a X

. In any case, not something we'd be likely to change at this point, but I agree that ' is effectively "arbitrary unique symbol with no evocative meaning".

(Arguably the same is true for things like & for "address of", but that has a long history in many languages which makes it more intuitive for current developers.)


One thing I wonder about, although I don't have a clear recommendation, is the preponderance of single character lifetimes.

Other than 'static, a Rust beginner is unlikely to run into any lifetime with a meaningful name for quite a while. If, when learning a new language, you only ever saw variables named a, x, t, s, v and m, it's not a big jump to guess that p, z and b would be allowed as well, but would you assume that the language in fact allows big_set and old_corp_logo as identifiers and it just forgot to mention that? I'm not sure I would.

The Rust standard library does have lifetimes with reasonable names, for example std::thread::Scope needs two lifetimes and it names them 'scope and 'env which, although brief, are clearly not single letters. But most of the library and documentation doesn't bother with meaningful names, since it doesn't have anything worth saying about the lifetime, e.g. the signature of str::ends_with:

  pub fn ends_with<'a, P>(&'a self, pat: P) -> bool
  where P: Pattern<'a>,
  <P as Pattern<'a>>::Searcher: ReverseSearcher<'a>


This is absolutely true, and I completely agree. When I'm writing code related to git, I have lots of 'repo and 'tree lifetimes, and I think that's much more readable than 'a and 'b.
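A tiny hypothetical sketch of that naming style ('repo and the types here are invented):

    struct BranchRef<'repo> {
        name: &'repo str,
    }

    // The descriptive lifetime says *what* the reference must not outlive.
    fn head_branch<'repo>(branch_names: &'repo [String]) -> Option<BranchRef<'repo>> {
        branch_names.first().map(|name| BranchRef { name: name.as_str() })
    }

    fn main() {
        let names = vec!["main".to_string(), "dev".to_string()];
        println!("{:?}", head_branch(&names).map(|b| b.name));
    }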

I think it'd be a really good idea for the standard library to change many of its lifetimes to be descriptive, to encourage others to do the same. With my libs-api team hat on, I'd happily merge PRs that do that. (In small batches, please, not the whole library at once.)


I think way way back (about 10 years ago) the syntax was

    &x\a
The current tick is much easier on the eyes.


You mean having one ' ?

It's the same syntax in Lisp, Haskell, Ocaml, Ada and VHDL.


Lifetime _syntax_ is lifted verbatim from OCaml.


Not quite used for the same thing, but Haskell also allows using 'x (for symbol names)


TIL. I haven't used OCaml yet.


New syntax is pretty easy to pick up. It's just window dressing.

What you're actually seeing is a trap a lot of programmers fall into where if something is unfamiliar then it's treated as if it's incorrect.


> What you're actually seeing is a trap a lot of programmers fall into where if something is unfamiliar then it's treated as if it's incorrect.

I said that I don't think it's bad or wrong.


The borrow checker. The syntax. It isn't a good fit for my needs. It'd be like using a forklift to move a few books. I'm not generally a fan of strong explicit typing/high ceremony in general.


> strong explicit typing/high ceremony

If you think strong typing is "ceremony" then you probably spend most of your time writing (and documenting) code and not reading, refactoring, or collaborating on it.

The time people spend writing unnecessary, buggy unit tests (that static analysis can do in better languages) is far greater than the time to just use the type system.

Most languages don't even force you to be explicit anymore. They infer the types, so you don't even write extra code. You just get better errors and speed.


For what it’s worth, I agree with the GP comment when it comes to network services and for quickly prototyping things out. I’ve been writing in rust almost exclusively for the last few years, but I still reach for typescript from time to time.

JS/TS just requires fewer decisions per line of code. As a pithy example, in javascript I don’t have to decide whether I want a String / &str / SmartString / Rc<String>, etc. It’s just string. When I pass a callback I don’t have to decide between accepting a closure or FnOnce/Fn/FnMut. Or decide whether to accept a function pointer or take a generic parameter. In javascript I never have to think about the lifetime of the callee’s stack.

I tried to implement my own server-sent events style protocol in rust a couple years ago. I spent 2 weeks trying a bunch of different approaches but I eventually gave up - I just couldn’t get it working. (Mind you, async was pretty new then - I don’t think it was ready). I moved to javascript and had the whole thing working correctly in about a day and ~50 lines of simple code.

I think the resulting program is much better when I write it in rust than what I get when I build on top of nodejs. Rust programs can be orders of magnitude faster and I have a shockingly low defect rate in shipped rust code. But there’s a trade off. Rust programs take more effort to write.

Rust is a brilliant systems programming language. I’d pick it over C or C++ any day of the week. But not all problems are systems programming problems.


Rust programs take more effort to write initially, but less effort throughout the software lifecycle. Unless you're writing code just to throw it away and not even run it once (which practically doesn't ever happen, at least not intentionally) Rust front-loading the effort is the right trade off.

Never mind that some of these choices you can just avoid by thinking in general terms and deferring optimization for later. Just passing String and FnMut everywhere will be a lot more efficient than whatever the JS translates to on the machine.


Huh? Rust is better if I ever run my program even once? Even when it takes significantly more effort to write? Hard disagree on that.

> Just passing String and FnMut everywhere will be a lot more efficient than whatever the JS translates to on the machine.

I can’t comment on the performance of FnMut, but I’ve seen rust code run slower than the equivalent javascript because the rust code in question allocates everywhere without thinking about it (via Vec, String and Box).

In my mind, javascript and friends are good languages to build things fast. Rust is a good language to build things right. For plenty of software (eg websites), good enough is good enough.


> The time people spend writing unnecessary, buggy unit tests (that static analysis can do in better languages) is far greater than the time to just use the type system.

Absolutely. Thank you for speaking truth.


Rust's type checker is very clever though. You can omit types almost anywhere, and Rust will infer them. Say you need a vector containing Foo objects. You can just do

    let mut v = Vec::new();
    v.push(m_foo);
In fact it's so good that you often end up writing the types out anyway just to keep clear what's actually happening, even though you could keep them largely implicit.

Here's another example:

    use std::collections::HashSet;
    let f: Vec<i32> = vec![1, 2];
    // i32 -> f64 is a lossless From conversion, so the annotations drive `.into()` and `.collect()`
    let f2: HashSet<f64> = f.iter().map(|&n| n.into()).collect();


You don't see the borrow checker. It manifests itself in encouraging RAII inspired writing and logical flow.

Lifetime annotations shape how you use and exchange certain data, but you can get away without them unless you need them - mainly for shared data structures (when synchronization primitives are too much) and highly performant code.

You can write high level Java, Ruby, and Python code in Rust if you want to. Writing Actix or Axum web handlers feels no different than Golang or Python/Flask.
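For what it's worth, a minimal handler sketch (assuming the axum 0.6-era API that was current around the time of this thread, plus tokio):

    use axum::{routing::get, Router};

    // A plain async fn as a handler - not far from a Flask route.
    async fn hello() -> &'static str {
        "Hello, world!"
    }

    #[tokio::main]
    async fn main() {
        let app = Router::new().route("/", get(hello));
        axum::Server::bind(&"127.0.0.1:3000".parse().unwrap())
            .serve(app.into_make_service())
            .await
            .unwrap();
    }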


Curious, are you a fan of Typescript?


Never used it. I don't even have the need to touch vanilla javascript that often.


What do you primarily program in, Java? C#? Python?


Python


Then it makes a lot of sense why you might not like types. I'd say give Type Driven Development a try, where you lay out the data flows for a program through its types, then fill in the actual code, which becomes very easy. I wrote about this in another comment before (in Rust, but you can even do it in Python with its type hinting):

https://news.ycombinator.com/item?id=34454158#34492209


Huh? Rust shares very little with Ruby??? Are you thinking Crystal?


Crystal is directly inspired by Ruby, so I don't think is too far of a bridge!

I programmed Ruby for years before learning Rust, and aspects of Rust's library and language inspiration from Ruby are obvious to me: the closure syntax closely resembles Ruby's blocks, for example, and much of the core iterator APIs match their Ruby counterparts. It's a very different language overall, of course!

(Then of course there's the part where many early Rust contributors came from the Ruby community.)


It's probably not the language syntax per se, but satisfying the compiler (borrow checker) can be tedious for beginners.


In real world software, you need lots of freedom in order to design high performance scalable systems. This includes using non-trivial data structures, and being able to access and organise memory in subtle ways. The Linux kernel is an example of this. Look at the reclaim subsystem in the kernel, for instance, and how it interacts in subtle and very complex ways with various other subsystems, the page cache, the generic block layer, the memory control groups, the VFS and the filesystems. All of them are multi-threaded and run on complex high performance CPUs performing memory re-ordering, speculative execution, branch prediction and parallel execution of both processes and instructions. You really need extremely smart data structures and organisation and very smart algorithms to manage the concurrency. C is a language where you have all that freedom to do whatever you want, leading to high performance and low latency. In Rust, you can't even safely implement a doubly linked list. Now, if you have to use unsafe every time you need high performance or scalable data structures, then Rust is as unsafe as C and it's just making your life harder for no reason, except hype.


As someone who has done kernel development work (though not Linux), not every part of a kernel is as filled with dragons as you make it out to be. Why would my e.g. filesystem driver need to punch a lot of deep unsafe holes down to the internals of memory management? Why can't I make use of a safe abstraction implemented using unsafe code to free myself from caring about those details?

With C, you can only do these kinds of safety guarantees with a lot of discipline. With Rust, you can offload much of your discipline to the compiler.


Sometimes, you can't make hard walls between subsystems. Some filesystems need very close interaction with memory management to have very good performance. Some objects are accessed and manipulated in a racy way from logically independent places for scalability reasons. I am not against Rust, modularity, etc. But building very high performance systems is just something different.


But do you need this in the 99% of cases? Probably not. You can write productive, performant, and safe code in safe Rust fine.

If you do need to do something this exotic, the fundamentals don't change: you still need to take care of memory lifetimes, you still need to ensure thread safety, etc. I really don't think it's as all-or-nothing as you believe it to be.


You would encode that into types and expose a safe API. The unsafe would still be local in scope.

For example you might build an abstraction named ReadWriteMutex that guarantees safe access at runtime.

I don't see a scenario where this would impede performance. The invariants to validate are the same. But rust would guarantee that it is memory safe.


> Rust is as unsafe as C and it's just making your life harder for no reason, except hype.

I hear what you’re saying, but that hasn’t been my experience with rust. I love C, but I’ve been writing rust for the last couple of years. And rust has stolen my heart.

You’re right that unsafe rust is a bit less ergonomic than just writing C. Sometimes I miss void pointers. I definitely miss how fast C compiles. But most rust isn’t unsafe rust. Even deep in my custom in-memory btree implementation I think more than half of my methods are safe. And safe rust is a fabulous language when you can use it. There’s all these bugs you just can’t write.

Early on with rust I had this magical experience. My program segfaulted in one of my tests because of memory corruption. It was “spooky action at a distance” where the segfault happened well after the buggy code was executed. Bugs like this are awful to track down in C because the bug could be literally anywhere. But when I looked at the trace, there was only one unsafe function which could be causing the problem. (Since I could rule out all my safe code). Sure enough, 10 minutes later I had a fix.

I have a skip list for large strings that I ported from C to rust. I still have no idea why, but the rust code ran about 20% faster out of the box than the original optimized C code. And yet, the code is significantly smaller and easier to work with. I’ve added a few more optimizations since then - it’s about 10x faster than the C code now. The new optimizations are all from tricks I never got around to adding in C because I was afraid I might break something.

And then there are the things rust does well that aren't in C at all. Cargo is incredible. Monomorphization is the right tool for a lot of problems, like custom collections. (Higher performance and types? Yes please!) Parametric enums & match statements are so much better to work with than C enums and unions. Then there's rust's std, which is fantastic. I use Option, Result, Vec, BTreeMap, BinaryHeap, the OS-agnostic filesystem API, and so on daily.

So yeah, I hear you about C being lovely. But I think rust is even better. Rust is a bit of an ordeal to learn but I'm really glad I learned it. It's a joy to use.


The key is that Rust lets you define safe interfaces, so you can separate your data races and union juggling, which need to be unsafe, from your business logic, which basically never needs to be unsafe.


"Object graph" architectures are common in C++ and sometimes necessary in Rust for business logic, when building GUI applications or emulators. But Rust doesn't allow mutating through a &/Rc, and throws a compile error if you create multiple &mut, and the workarounds are unreasonably boilerplate-heavy and RefCell carries runtime overhead, whereas C++ doesn't get in the way of making your code work.

I've put together a playground at https://play.rust-lang.org/?version=stable&mode=debug&editio.... It's not just dereferencing invalid */& that's illegal in Rust, but constructing and dereferencing valid &/&mut in ways that don't respect tree-shaped mutability. Rust's pointer aliasing rules invalidate otherwise-correct code, placing roadblocks in the way of writing correct code. There's so much creation of &mut (which invalidates aliasing pointers for the duration of the &mut, and invalidates sibling &mut and all pointers constructed from them), that's so implicit I don't know what's legal and what's not by auditing code. (Box<T> used to also invalidate aliasing pointers, but this may be changed. The current plan for enabling self-reference is Pin<&mut T>, but the exact semantics for how and when putting a &mut T in a wrapper struct makes it not invalidate self-reference and incoming pointers, is still not specified.)

(I've elaborated further at https://news.ycombinator.com/item?id=33658253.)
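A small sketch of the kind of restrictions being described (the rejected lines are left commented out):

    use std::rc::Rc;

    fn main() {
        let shared = Rc::new(5u32);
        let alias = Rc::clone(&shared);
        // *alias += 1;         // rejected: Rc only hands out shared `&` access
        drop(alias);

        let mut x = 0u32;
        let first = &mut x;
        // let second = &mut x; // rejected: cannot borrow `x` as mutable more than once at a time
        *first += 1;
        println!("{x}");
    }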


You can use UnsafeCell https://doc.rust-lang.org/std/cell/struct.UnsafeCell.html if you commit to manually enforcing RefCell's borrowing rules.
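A minimal sketch of that idea (type and method names invented): UnsafeCell hands the aliasing discipline back to you instead of RefCell's runtime checks.

    use std::cell::UnsafeCell;

    struct ManualCell<T> {
        inner: UnsafeCell<T>,
    }

    impl<T> ManualCell<T> {
        fn new(value: T) -> Self {
            Self { inner: UnsafeCell::new(value) }
        }

        /// SAFETY: the caller must ensure no other reference to the contents
        /// is alive while the returned `&mut T` is in use.
        unsafe fn get_mut_unchecked(&self) -> &mut T {
            &mut *self.inner.get()
        }
    }

    fn main() {
        let cell = ManualCell::new(1u32);
        unsafe { *cell.get_mut_unchecked() += 1 };
        unsafe { assert_eq!(*cell.get_mut_unchecked(), 2) };
    }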


The problem with "subtle and very complex" interactions is that they don't really scale to larger systems. They're inherently anti-modular, since they depend on the state of the program as a whole. This is exactly what's avoided in Safe Rust.


If you want blazing fast and very scalable performance, you have to do that. Multiple independent contexts in the kernel can have a reference on certain objects and manipulate them concurrently with relaxed consistency. It would be very impractical to hard-protect certain parts of those objects and guarantee serial consistency. For scalability and performance reasons, certain data structures, objects or certain parts of certain objects could be accessed in an inherently racy manner, but guaranteeing correctness in the presence of races is extremely hard. Any attempt at modularisation, message passing or serial consistency would make your kernel look like a joke compared to Linux. It would make the interaction between subsystems easy but would not provide the opportunity for very fast low latency execution.


> Now, if you have to use unsafe every time you need high performance or scalable data structures, then Rust is as unsafe as C

That's not true, because you can wrap unsafe code in safe interfaces and use it from safe code.
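A hedged sketch of what that looks like (function name invented; the unsafe part is std's get_unchecked):

    // Callers only ever see a safe function; the unsafe block is justified
    // by the bounds check right above it.
    pub fn first_byte(bytes: &[u8]) -> Option<u8> {
        if bytes.is_empty() {
            None
        } else {
            // SAFETY: the emptiness check guarantees index 0 is in bounds.
            Some(unsafe { *bytes.get_unchecked(0) })
        }
    }

    fn main() {
        assert_eq!(first_byte(b"hi"), Some(b'h'));
        assert_eq!(first_byte(b""), None);
    }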


> You really need extremely smart data structures and organisation and very smart algorithms to manage the concurrency. C is a language where you have all that freedom to do whatever you want leading to high performance and low latency.

But it leaves that work to the developer, thousands of hours of testing and reviewing to ensure no little corner case is missed.

I think the argument that using unsafe gives the language “holes” is somewhat misdirected. When I see a rust implementation of a doubly linked list with unsafe I know exactly where I'm on my own (a few lines) and where the compiler does the job for me (the rest of it). It's not as if that means safety or flexibility is out the window. It means “do the manual safety review on these two lines”.


It's the standard library & tooling for me. To be perfectly honest, for most stuff I use rust for I'd prefer to have garbage collection. But the standard library and tooling are just too far ahead of the competition (diplomatically avoiding naming languages here).


If one wants control over everything and manual memory management as safe as achievable (ie. systems programming), currently, this is the only (major) way.

IMO the market share for this profile is indeed small. Languages that trade off a part of that security for simplicity may have considerably more market share in the future, although on the other hand, the public may instead prefer a "fast-enough and dead-simple" language.


It would actually be such an amazing language if it wasn’t so verbose. There are so many places where more concise syntax wouldn’t even hurt safety and they forgo it anyways. But here we are, where macros are required for a hello world.


You don't technically need the macro to print hello world. However, the macro is useful due to format strings, which are also type-checked (eg to check if the args implement `Display` or `Debug`, etc).
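For example, a minimal sketch without the macro, writing bytes straight to stdout:

    use std::io::Write;

    fn main() {
        // No formatting machinery involved, just raw bytes to stdout.
        std::io::stdout().write_all(b"Hello, world!\n").unwrap();
    }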

The good thing about the default hello world example is that it introduces you to both language features without being overwhelming (IMO).

You can read more about format strings here: https://doc.rust-lang.org/std/fmt/


Right, the canonical C "Hello, World!" uses C's formatted print, even though obviously it's not formatting anything, so it seems appropriate for Rust to do the same.

It would be more impressive if Rust had an actual type checked variadic print format function like C++ 23's std::println - nobody should be under any false impression about how impressive that feature is, but the type safe macro is effective, that'll do pig.


I think Rust is probably working towards features like this. They've added partial const generics only a year ago, and I'm not sure what the stance on variadic functions is, but unless they're considered bad, they might get added too.

AFAIK C++ does format string parsing at runtime (as you need to provide a parser function), but maybe it can be made constexpr? I've not written C++ for over a year now so my fluency is fading a bit.


> There are so many places where more concise syntax wouldn’t even hurt safety and they forgo it anyways.

Can you give some examples of this? If there are ways we could simplify the language without hurting existing use cases, we should consider doing so.


No default arguments, named parameters, variadic functions and generics, no C-style arrow operator, the whole default trait thing (why can’t you shorten it to just `..`?). And a bunch of awful reasoning to defend these decisions. Named function arguments are bad but `Iterator<Item=T>` is fine? Default arguments are bad but default trait implementations are fine? Something something implicit behaviour but type inference is ok? It’s to the point where idiomatic rust means writing three different structs and enums to call a function.
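As a sketch of the workaround this tends to push you towards (names invented), the config-struct-plus-Default pattern stands in for default and named arguments:

    #[derive(Default)]
    struct RenderOptions {
        width: u32,
        antialias: bool,
    }

    fn render(opts: RenderOptions) {
        println!("width={} antialias={}", opts.width, opts.antialias);
    }

    fn main() {
        // The `..Default::default()` the parent comment wishes were just `..`.
        render(RenderOptions { width: 800, ..Default::default() });
    }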


I can't think of a better way to do compile time, zero allocation printing than with a macro. Runtime format strings are so ugly and fragile.


I recently spun up a project in Rust (a small game using Bevy) and the main issues I ran into were around smart defaults for the compiler. I was surprised how many lines I had to add to my cargo.toml to just complete a simple game example.

Some examples:

It defaulted to the fully backwards compatible edition (2015 rather than 2021), which threw errors as I went through some recent example code.

(I think) I had to add a few lines to my cargo.toml so the compiler would not rebuild bevy every time I recompiled (when I only changed 1 line in my program).


> It defaulted to the fully backwards compatible edition (2015 rather than 2021)

"cargo init" and "cargo new" default to the 2021 edition, and have ever since it was stabilized: https://github.com/rust-lang/cargo/pull/9800

Either you accidentally installed a version of cargo from before the 2021 edition was stabilized, or you ran "cargo new --edition <something>", or you started by cloning an out of date project of some sort, in which case it's not really an issue with "defaults".

> smart defaults for the compiler. I was surprised how many lines I had to add to my cargo.toml

Normally this would be a pointlessly pedantic point, but cargo is not the compiler. This thread, the linked title blog post, they are about the rust compiler, not cargo. There's a close relationship, but cargo's defaults aren't necessarily related to what the "next rust compiler" might do.


Thanks this is helpful!

I started my project without cargo at first and tried to start writing code without a cargo.toml. I was surprised that cargo didn't default to 2021 until I specified it in the .toml file. Good point that cargo init/new would have solved this!

I guess my point about the compiler was that it seems to rely on cargo.toml for many 'optimizations' that I would expect to be defaults (examples include the two I mentioned above).

But I'm new to the language and understand that most people will just use `cargo init` and google a few other common cargo.toml settings to improve compile times.


Ah, yeah, hand-writing a Cargo.toml like that does run into that issue.

However, I don't think the solution is a better default, but rather the solution is time-travel.

Defaulting to the latest edition would mean that any rust library that predates editions would likely break when you imported it (since it would default to an edition that didn't exist when it was written, and editions are allowed to make breaking changes of that sort).

The thing that would fix your issue would be time-traveling back in time to when cargo was created, and making edition a required field of all cargo.toml files that results in an error until you add one. That would have saved you from any trouble.

Rust could also do a python2 -> python3 like transition, where crates from the old "edition not required" world can't be imported anymore at all, but that seems like a very small thing to cause so much ecosystem pain over.



