Why the developers who use Rust love it so much (stackoverflow.blog)
135 points by cyber1 3 months ago | 165 comments



I love this quote from a friend of mine who was learning Rust:

"It's hard but I love it. Dealing with the compiler felt like being the novice in an old kung fu movie who spends day after day being tortured by his new master (rustc) for no apparent reason until one day it clicks and he realizes that he knows kung fu."


Sounds a lot like programming in Haskell. Applying higher-order functions or condensing a parser into a one-liner just makes me feel smart in a way that generating a new Rails controller never does.

My personal theory is that a lot of Rust and Haskell (and many other "advanced" languages) usage is caused by people chasing that feeling. The publicly expounded benefits like memory and type safety are true, but the real reason is that people want to use the language for their own reasons and "I really like it" usually does not have enough weight in a business context.


Unlike Haskell, Rust offers a unique combination of safety and runtime performance, which does provide real business value in some domains.


Algebraic data types, monads, and things based around those have tons of business value for writing correct business logic for non-performance-critical code. You get that in Haskell.

Though admittedly you can get them in other functional languages that are not quite as different, like OCaml.


Monads are so cumbersome to implement and consume in OCaml that they are not very commonly used. Jane Street Core has monadic abstractions for things like async but it’s a pain without syntax extensions.


That was true until 4.08. OCaml now supports monadic let bindings, and more generally, user defined let bindings [1].

You get the benefits of ppx_let (Jane Street) but in a more general and syntactically nicer form. I still wouldn’t call it as nice as do notation in Haskell but it makes monads highly usable in OCaml without the need for an external dependency.

[1]: https://github.com/ocaml/ocaml/pull/1947


Wow, I completely missed this! Nice.


Rust's safety and runtime performance are not unique, and these properties can be improved further from the level of Rust. Have a look at ATS:

- http://www.ats-lang.org

- https://www.youtube.com/watch?v=zt0OQb1DBko


What kind of project would be a sweet spot for ATS? Would it be comparable with F* [1]?

I would love to understand the nature of the theoretical advantage ATS has here over Rust.

ATS seems to be very much a concept/language in the R&D phase. Its ergonomics are difficult (have a look at the error messages), and its documentation and example projects are sparse.

[1] https://fstar-lang.org/


> What kind of project would be a sweet spot for ATS?

I'd say a large mission-critical C codebase that decides to grow and expand a formally verified kernel of its most important functionality - it could be a real-time system, an OS kernel, an HFT algorithm, etc.

Nowadays people seem excited about gradual typing in widespread dynamic languages like JavaScript and Python, because it allows them to adopt the best parts of static typing without having to rewrite entire codebases in new languages. Now imagine that you can introduce gradual formal verification to your battle-tested C codebase, without a loss in performance[1] and without being forced into a new tooling ecosystem (compiler, debugger, profiler, etc). Actually, you don't need to imagine it, because that's what ATS is :)

> I would love to understand the nature of the theoretical advantage ATS has here over Rust.

I'd say it's a synergy of two key features that produce miracles when put together: 1) direct and safe pointer arithmetic [2], which enables the same data structures and optimisation tricks available in C, yet in a verifiable environment, and 2) refinement types [3].

> Would it be comparable with F*

Unfortunately, I don't know much about F*; MS Research regularly releases one PL prototype after another, and I lost count somewhere around Koka[4].

> ATS seems to be very much a concept/language in the R&D phase. Its ergonomics are difficult (have a look at the error messages), and its documentation and example projects are sparse

All true, but nonetheless it's a solid implementation of the theoretical parts mentioned above, and if one dares to get through the objective lack of docs and ambiguous compiler messages, they are rewarded with an impressively powerful, production-ready system-level compiler, complementary to the standard C ecosystem and its decades of accumulated wisdom. That's a level of code reuse other ecosystems can only dream of.

[1] http://blog.vmchale.com/article/ats-performance

[2] http://blog.vmchale.com/article/ats-safe-pointers

[3] http://ats-lang.github.io/DOCUMENT/INT2PROGINATS/HTML/c2584....

[4] https://www.microsoft.com/en-us/research/project/koka/


As someone who needs parsers on a daily basis, I got hooked by your "one-liner parser". A search for parser generation in Haskell only gave me typical parser generators. Can you point me to some examples?


What you'd want to google is "parser combinators" or "parsec tutorial" I think. The idea is that parsers are "just" functions and can therefore be combined with higher order functions in various ways to produce "bigger" parsers. For example, an element in the Redis replication stream can be either an Array, a BulkString, an Integer, an Error or a SimpleString and you can express that with the `choice` function:

  parseRedisValue = choice [ parseRedisArray, parseRedisBulkString, parseRedisInteger, parseRedisError, parseRedisSimpleString]
Each of the underlying parsers (i.e. `parseRedisArray`, etc.) is also built up out of smaller parsers, all the way down to parsers that just detect a single character. It seems like it should be slow, but in my experience the results have not disappointed yet. If you're interested, I wrote a longer example on my blog, at http://www.wjwh.eu/posts/2019-01-01-parsing-infinite-streams...
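The same idea translates to Rust, for what it's worth. Here is a minimal hand-rolled sketch (all names hypothetical, not from any particular crate; real projects would usually reach for a combinator crate such as nom or combine):

    // A parser is just a function: given input, it either produces a value
    // plus the remaining input, or fails with None.
    type Parser<T> = Box<dyn Fn(&str) -> Option<(T, &str)>>;

    // Smallest building block: match one exact character.
    fn char_p(expected: char) -> Parser<char> {
        Box::new(move |input| {
            let mut chars = input.chars();
            match chars.next() {
                Some(c) if c == expected => Some((c, chars.as_str())),
                _ => None,
            }
        })
    }

    // `choice` tries each alternative in order, like the Haskell `choice` above.
    fn choice<T: 'static>(parsers: Vec<Parser<T>>) -> Parser<T> {
        Box::new(move |input| parsers.iter().find_map(|p| p(input)))
    }

    fn main() {
        // The leading byte of a Redis value marks its type:
        // Array, BulkString, Integer, Error or SimpleString.
        let type_marker = choice(vec![
            char_p('*'), char_p('$'), char_p(':'), char_p('-'), char_p('+'),
        ]);
        assert_eq!(type_marker("*3\r\n"), Some(('*', "3\r\n")));
    }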


Sometimes I wonder if a key part of the appeal of Rust is that feeling of climbing the mountain. I've noticed that coming to a solution within the particular constraints of Rust can have a eureka feeling similar to applying a new concept successfully for the first time when I was first learning to program (e.g. recursion). I'm not always convinced the solution I come to is superior to what I would have done in a more traditional programming language, but it certainly feels rewarding to overcome a challenge, and I would bet that's true of a lot of people, and that it's more motivating than the promise of performance and memory safety alone.


I've experienced that climber's high several times while working out problems with Rust during my first two years working with the language. It's a rewarding feeling. I couldn't climb without the community, either. There's so much to learn, especially when one chooses to learn while doing, but it can be done with help. I am convinced that the solutions I come up with are superior to those I would write with Python, but solving them the first time may involve a costly investment of time and effort.


I consider myself quite an expert C++ programmer, and when I program in C++, I fight with the compiler constantly.

For example: a 2000-line template error; some overload cannot be picked deep in the template instantiation stack. Probably a bug with SFINAE, or template argument deduction, or ADL, or... the error message isn't helpful. The compiler tries hard to give me a "smaller" error message by filtering out overloads that it thinks aren't relevant. One of those is the one that should get picked. Why isn't that happening? Let's figure it out. Let's start going through the template stack, trying to break the compiler in just the right way to get the right error message. Tricks like:

    template <typename T> struct NeverDefined;
    NeverDefined<U> x;  // instantiating the undefined template errors, spitting out U
    int******** k = t;  // errors spitting out the type of t
etc. It's a constant fight.

That's the exact opposite of "fighting" rustc. Using rustc feels like pair programming with a god-like developer. They see your mistakes dozens of moves ahead. They give you a nice error message, 10 lines long, with a summary and an annotated version of the relevant parts of your code, explaining how the different pieces fit together to cause a problem. If you are not sure you fully understood it, you can just ask rustc to explain the error a bit more, and it will give you a couple of self-contained examples that produce the same error, so that you can learn from those and then go back to your big program and fix the issue. It's a fight in the same way that going through university and studying for exams is a "fight".
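For the curious, `rustc --explain` is a real flag, and the classic first encounter looks something like this minimal example:

    fn main() {
        let s = String::from("hello");
        let t = s;          // ownership of the String moves to `t` here
        println!("{}", s);  // error[E0382]: borrow of moved value: `s`
        let _ = t;
    }

    // The diagnostic annotates both the move and the later use, and
    //     $ rustc --explain E0382
    // prints a self-contained write-up with small programs that trigger
    // the same error, as described above.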

People who think they know better and that there is nothing left for them to learn will see that as a real fight. They'll also see pair programming as just another fight, a waste of time. Or they'll even start a fight when somebody else reviews their PRs.

If you are that kind of person, then Rust probably isn't for you. You won't enjoy the compiler. You won't enjoy the language. The language is designed by listening to all opinions, listening to how many different people weigh them, and then making an engineering decision. That's the complete opposite of "I'm always right; the best products are designed by a single person".

Luckily for Rust, most programmers aren't like that. GitHub is proof that for many, programming is a social activity. Programming with rustc makes you feel less alone; you are always pair programming. Programming is not a fight with the compiler, but a back-and-forth conversation in which you and the compiler work together to write a bug-free program.

When you collaborate on Rust projects with other people, the social experience is also improved. The discussions about dependencies are very goal-oriented and high-level: is this a dependency that will make the product better? Things like "how do we integrate or maintain this dependency" aren't issues with cargo. You also never argue about formatting; that's handled for you automatically. You also never argue about undefined behavior, or what the C or C++ standard says, or whether dereferencing something is UB or not, or whether you need a check here or not. If it compiles, the compiler has proved your program for you. A huge refactor in a lock-free multi-threaded program? If it compiles, it has no memory-safety bugs or data races. You can focus on just discussing the underlying algorithms, the architecture, the things that are most relevant for humans.
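As a concrete illustration of the data-race half of that claim, a minimal sketch: the non-thread-safe Rc is rejected at compile time, while the equivalent Arc version builds and runs.

    use std::sync::Arc;
    use std::thread;

    fn main() {
        // This version does not compile: Rc's reference count is not atomic,
        // so std::rc::Rc<Vec<i32>> is not Send and cannot cross threads.
        //
        // let data = std::rc::Rc::new(vec![1, 2, 3]);
        // thread::spawn(move || println!("{:?}", data)); // error[E0277]

        // Swapping in the atomically counted Arc satisfies the Send bound.
        let data = Arc::new(vec![1, 2, 3]);
        let handle = thread::spawn(move || println!("{:?}", data));
        handle.join().unwrap();
    }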

Just look at github.com/rust-lang/rust itself. People have time to write issues, explain in them how beginners could fix them, and offer to mentor beginners into fixing them. You can drop a Rust beginner into the compiler codebase, and you just need to give them a high-level description of the fix. The compiler makes sure that they make no errors, and they can explore that part of the compiler on their own, just by following the types.

Because Rust is not "just memory safety". It also has very strongly typed and explicitly typed APIs and function signatures, which automatically give you great docs, and great error messages that tell you which types to use, which you can then use to find out which methods those types have, and just explore the code.

And these are just a couple of things that make Rust better than everything else I've ever used.

Is it perfect? Of course not. I've been using Rust full-time for 5 years. It has _a lot_ of issues that probably cannot be fixed due to backward compatibility. But those issues are smaller, and less annoying, than the issues of every other programming language I've ever used.

Will I use Rust forever? I really hope not. I really hope somebody manages to discover a language that's significantly better than Rust in every way.

But I doubt I'll see that in my lifetime. Given the amount of features in Rust's pipeline (*), and the amount of time the group of PL-design PhDs is taking to get them just right, such a language would need to offer a significant advance over the state of the art.

(*) specialization, higher-kinded types, generic associated types, variadics, a full operational-semantics specification, ... there are even RFCs for dependent types. It wouldn't surprise me if programming in Rust in 10 years feels closer to programming in Idris 2 than in "Rust 2018".


> has _a lot_ of issues
I value your perspective on programming languages. Would you mind sharing those issues? I am in a position where I can and do influence PL design.


There are way too many probably-unfixable issues at this point; off the top of my head:

* Lack of proper Linear and !Movable types, and being in a spot where adding them to the language would be a backward incompatible change. This results in complex abstractions, like Pin.

* Fallible destructors without an easy way to return failure. In Rust, destructors (Drop::drop) can fail, but the only way for them to fail is to unwind the stack, which doesn't play very well with the rest of Rust's error-handling story (see the sketch after this list). C++ gets this a bit better, with destructors being noexcept(true) by default and tools to query whether that's the case.

* Inconsistent handling of out-of-memory errors. Initially, Rust decided that OOM errors would be fatal, and settled on using unwinding for them. Then it turned out that some projects driving Rust design, like Servo, actually needed to handle them (e.g. imagine downloading an image that's too big to fit in memory bringing your web browser down...), so the ability to handle OOM was patched on top, by adding methods to some of the collections, like `Vec::try_push` and `Vec::try_reserve`. This essentially means that there are two methods for many operations on collections, and that it is pretty much impossible to make sure that code which must handle all OOM errors for correctness actually does so and never calls an unwinding method by accident. It's also one of the reasons why collections parametrized by allocators (Vec<T, A>) are taking so long.

* Lack of parametrized modules, probably a decision that's just too hard to fix at this point.

* Lots of lang items for singletons, like global allocator, panic runtime, etc.

* The unsafe keyword being too coarse-grained: unsafe functions implying an unsafe function body block, and the unsafe keyword being used to allow different kinds of contracts, from dereferencing a raw pointer to calling a target-feature function, ...

* const and mut raw pointers: this distinction adds pretty much zero value, and complicates certain types of code quite a bit, for no reason.

* Too monolithic a standard library: libcore adds float support, which essentially requires you to implement floats for embedded targets where they might make absolutely no sense. There is no way to prevent programs there from using floats, and no way to implement only the parts of the standard library that make sense for a platform (threading, etc.); instead, targets have to mock the standard library with "unimplemented!()" hacks that error at run time when you try to use certain APIs.

* The PartialOrd, Ord, PartialEq, and Eq traits are not very mathematically sound. They attempt to achieve what C++'s spaceship operator attempts, but fail quite hard. They assume that there is only one "ordering" that makes sense for a particular type, but that's usually not true. For example, floats have many orderings (the classic partial order, the IEEE total order, etc.).

* Operator-overloading traits like Add or PartialOrd mix half-baked semantics with operator overloading. The standard library ends up, e.g., implementing addition for Strings to mean concatenation. So essentially any program that constrains generics using the Add trait and requires, e.g., associativity to be free of logic errors kind of fails when passed strings (it will work, but produce garbage).

* Deref::deref returns &Target, which means that you cannot make Target a "proxy" reference type (no equivalent of C++'s vector<bool>::reference).

* Many of the standard-library algorithms, like `sort` and `sort_unstable`, only work on slices; there is no way to use them on linked lists or other collections, requiring everybody to re-implement them. C++ got this a bit better.

* BinaryHeap is only a max-heap; reusing it as a min-heap (wrapping everything in std::cmp::Reverse) is quite awkward. In general, these std::collections API mistakes are consequences of having bad fundamental traits (like PartialOrd and friends mentioned above) and a lack of more meaningful operation traits.

* The std library is a mixed bag, with some things quite well thought out (Result, Option, the smart-pointer types, the Cell types) and others kind of an afterthought (operator-overloading traits, collections). Methods are also often named inconsistently across the different parts of the library; it does not feel very cohesive.

* C FFI feels like an afterthought. Some things got bolted on afterwards, like va_args, unions, packed, etc. It's pretty much unclear at this point how to use these correctly, or whether they can be used correctly at all in C FFI. libc is one of the most used Rust libraries, and it's a huge mess of duct tape, ABI-incompatible depending on where you run your binaries, which results in crashes that are hard to debug. It basically assumes that no operating system Rust will ever support will ever break its platform ABI, an assumption that no widely used OS satisfies. How to pass C callbacks that panic, etc., is also still super weird.

And probably many more that I can't remember right now. These are just a small fraction of the issues I've encountered that I believe are not fixable in a backward-compatible way. None of them is, in isolation, a big deal. Whether they add up to death by a thousand paper cuts, only time will tell. Rust also has many more issues that are hopefully fixable.
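To make the destructor point concrete, here is a minimal sketch (type names hypothetical) of the usual workaround: cleanup that can fail is exposed as an explicit consuming method, because `Drop::drop` has no way to return an error:

    use std::io;

    struct Connection;

    impl Connection {
        // Fallible cleanup must be an explicit method the caller may forget
        // to call; a real implementation would also record that it ran so
        // that Drop doesn't redo the work.
        fn close(self) -> io::Result<()> {
            // ... flush buffers, send a goodbye message, etc.
            Ok(())
        }
    }

    impl Drop for Connection {
        // Drop::drop returns (); the only way to signal failure here is to
        // panic and unwind, which is exactly the complaint above.
        fn drop(&mut self) {
            // best-effort cleanup; errors are silently swallowed
        }
    }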


We should be able to write a MIR lint that gives us linear types wherever we want them. I hear they are most useful with unsafe code.


Wow, thanks for this beautiful post! I'm coincidentally about to publish a language similar to Rust which addresses #2, and I'm now seeing that perhaps I should improve a lot of these other things too!

I've been looking for someone knowledgeable about Rust's benefits and drawbacks, to learn how the language compares and how best to explain it. Would you be willing to chat?

(My email is my username @gmail.com, and I'm also on discord at Verdagon#9572)


Thanks!

The C FFI worries me because if I choose Rust for our project I'll need to interface a huge number of C / C++ libraries with the Rust core. OTOH, Mozilla seems to be doing fine mixing Rust with C++.


It's not that it doesn't work, but rather that it is not guaranteed to work, and the design is basically a pile of duct tape. For example, you can use Rust arrays in a C FFI declaration, but that won't do what you think it does:

    extern "C" { fn foo(x: [u8; 42]); }
does not declare the C function

    void foo(uint8_t x[42]);
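If I understand the C rules correctly, the mismatch is that a C array parameter decays to a pointer, so the matching declaration on the Rust side would be a raw pointer:

    // `void foo(uint8_t x[42])` in C actually receives a `uint8_t *`,
    // so the corresponding Rust declaration takes a raw pointer:
    extern "C" {
        fn foo(x: *mut u8);
    }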


I believe it is only a matter of time until standards discussions come to Rust too.

If Rust wants to be taken seriously in many domains where C, C++, Ada and Java are used nowadays, it needs to eventually get a standard specification and multiple implementations.


That's an excellent quote, and 100% true in my experience. The only language I've struggled with even more is Prolog.


Reminds me of learning Vim


Coding in Rust isn't easy, but where it is hard, it is hard in a good way. It is like being on a journey with a good friend who deeply cares about not letting you shoot yourself in the foot, and explains why without judging you.

Even if Rust were wiped off the face of the earth tomorrow, the things I learned from it have definitely made me a much better programmer.


>explains why without judging you

Yep, the compiler doesn't judge you, that role is left to the community when they discover you posted a library on Github that uses more unsafe code than they'd like.


for anyone not following Rust, the drama around the (popular) Actix web framework was a prominent instance of this. iirc people opened issues and PRs re: the project's (significant) usage of `unsafe`, but the maintainer (Nikolay Kim) wasn't receptive; then some folks got nasty about it on reddit, escalating into full-blown drama. sadly, it ended with Kim posting he's "done with open source" and quitting the project.

a more detailed (and probably more accurate) account: [A sad day for Rust](https://words.steveklabnik.com/a-sad-day-for-rust)


For the most part, the Rust community seems to value safety and correctness above extracting the last bit of performance. Using “unsafe” code is seen as a necessary evil, only allowed when its use is hard to avoid, local, and easy to understand. Mostly, you’re supposed to use unsafe code as the foundation on which safer abstractions are built.
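The standard library’s slice::split_at_mut is the textbook example of that layering; a simplified sketch of how it can be built (specialized to i32 for brevity, in the style of the official book’s example):

    use std::slice;

    // Safe API over an unsafe core: the function itself guarantees that the
    // two halves never overlap, so callers get two usable &mut borrows
    // without writing any `unsafe` of their own.
    fn split_at_mut(values: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
        let len = values.len();
        let ptr = values.as_mut_ptr();
        assert!(mid <= len);
        unsafe {
            (
                slice::from_raw_parts_mut(ptr, mid),
                slice::from_raw_parts_mut(ptr.add(mid), len - mid),
            )
        }
    }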

It’s a bit of an oversimplification to say that the Rust community complained about the amount of “unsafe” code in Actix. It was more about the how and why.

From what I gather, the author didn’t accept patches that fixed safety/soundness bugs when they had even a small impact on performance. That’s a trade-off that one is allowed to make in my opinion (if clearly communicated, so that people can make informed decisions on whether to use a piece of code).

I think this difference in values (or rather, engineering goals) explains in part why this escalated so badly. But a healthy software ecosystem should have room for libraries that make different trade-offs. Zealotry (in the form of “There’s one right way”) doesn’t encourage experimentation, and shrinks communities instead of growing them.


The soundness argument usually didn't hold water and was presented by people who were very much out of their depth (not that I was any more in my depth, but at least I stayed out of the way). The author and others did address a lot of unnecessary unsafe, though. What was left remained a sore point for those who demanded code purity.

The author was a one-man army who took flak for way too long. Repeated requests to resolve non-issues blinded him to requests revealing actual concerns. It's analogous to the story of the boy who cried wolf. He was not respectful, but also had been fighting way too many battles for too long and consequently was all out of patience. It was a bad situation waiting to boil over.


By no means is it only about Actix; that’s just one of the more notable times that something like that has happened. The eminently reasonable criticism boils down to baulking at people undermining Rust’s safety guarantees with demonstrably wrong unsafe code, while publishing said code in such a way that you’re suggesting that others use it (i.e. it’s not just private code). It is also then often taken further to a general unease at gratuitous use of unsafe code, which I consider fairly reasonable because it’s so hard to get right (it’s unsafe for a reason). Then sometimes a few people take it beyond what might be considered socially reasonable.


It is a mistake to marginalize what a small number of people achieved. An organized team successfully "cancelled" the Actix project and smeared its author after repeated attempts: three distinct episodes over 12 months. This was a campaign that spanned social media and GitHub.

It is worth noting that, to date, none of the members of the anti-Actix effort have contributed toward a viable alternative, nor have they attempted to resolve their concerns within the Actix ecosystem. It turns out that the patrons who complained about the free beer never intended to brew their own.


I never followed the matter closely, but from the parts that I did follow or investigate, your comment doesn’t feel particularly accurate.

• I don’t believe there was any organised team; it arose organically.

• In each case the complaints were of concrete problems that incorrect usage of unsafe caused. After that, people did then tend to pile on and baulk at other unsafe code that was probably OK, because the author had been proven untrustworthy in using unsafe code. (“Unsafe” says “trust me, it’s OK”, and that trust had been broken.) And there were one or two people that made it more direct personal attacks—but probably only one or two.

• Your final paragraph just seems completely wrong. Various of those that complained did offer alternatives, some of which were turned down. And people did go with viable alternatives, switching to competitors of Actix. Even apart from all that, if you can prove that by some metric a piece of software is bad, why would the burden lie with you to fix it? (Especially if your patches are rejected.) If the problem won’t be fixed, recommending that others avoid it seems perfectly reasonable. You’re presenting a common logical fallacy: that you can’t criticise something unless you can provide an alternative. I don’t need to be able to build a better bridge to point out that one is falling down.

Feel free to correct me if I’m wrong, but I intend to engage no further on this. The points have been hashed out before and there is nothing new to say; things just tend to get heated. Have an enjoyable day. :)


Have a great weekend


> [Actix] is just one of the more notable times that something like that has happened.

edited my comment to reflect this, thanks

i was trying to keep my summary relatively opinion-free – the internet probably doesn't need my take on that situation :)


Since I haven't seen anyone cover it from quite this perspective, here's how I saw what led to things blowing up like that:

What really got people riled up was that...

1. Nikolay refused to acknowledge the principle (stated in the rustonomicon, IIRC) that a function which can violate memory safety when fed incorrect arguments must be marked `unsafe`, EVEN IF IT'S AN INTERNAL-ONLY API. (Increasing the bus factor with regard to maintainability is core to how Rust competes with C++.)

2. The Actix website gave the strong impression that it was a production-ready dependency, rather than an experiment in squeezing out maximum performance, and, unless you'd been present for the previous stink-raisings, it did appear to be a mature, safe option for writing code in Rust that would then be exposed to the public Internet for an entire world of potential attackers to hammer on.

3. This was the third time Nikolay had done something like this.

So, to some people, it felt like Nikolay was actively and callously developing a PR timebomb for Rust and its ecosystem.

I won't rule on how they chose to act, but that's my take on why they chose to act.


It would be great if that feature could be moved into the compiler, though!


Yup, I got called a jackass (sic) when I was asking why the most commonly recommended Rust crypto library doesn't support some of the older decryption algorithms which I needed to process some encrypted files.

I got so many smug "you don't know what you're doing, it's insecure!" answers from people who clearly had no basic understanding of how security works.

And I only asked for help; I didn't demand anything or try to accuse anyone of anything. And that's not the only case where the Rust community was permeated with a certain arrogance which I've never really felt in my career as a C++/Python/Java developer.


I think I remember this. The problem was you were insisting on adding an insecure encryption algorithm to the crypto library.

They didn't want to because they didn't want to add an insecure algorithm to the library to encourage its use. The best bet is to use another library whose goals are different. This was explained a few times.

Tbh I'm on the side of the library developers here. If you want to be the official encryption library, you probably want to avoid broken encryption algorithms.

Leave backwards-compatibility encryption to other libraries designed for the task.


I'm not sure how you could remember this since I wasn't insisting on anything like that and I wasn't even opening a ticket on any official bug tracker - I think you mistook me for someone else. I took extra care not to demand anything and the conversation was mostly on IRC and some specialized subreddits.

Also not being able to decrypt data older than a few years with safe Rust libraries is a bit of a strange way to do security - if anything, Rust is perfect for parsing files in a secure manner. My concrete use case was processing PDFs, which tend to be signed with older certificate algorithms, and the idea that being able to verify signatures on those files encourages insecurity is just strange. The only solution offered was to use OpenSSL via C bindings, which (besides not compiling at all on Windows) is a security nightmare.

In any case, I accept that library (and core language) developers can decide that Rust won't be a language that's usable for verifying and processing data formats that are older than couple of years. What I find it harder to accept is being personally insulted for having a use case that doesn't fit into community's perfect world. That didn't happen in any other communities in my career.


> Also not being able to decrypt data older than a few years with safe Rust libraries is a bit of a strange way to do security

Are you saying there's no Rust libraries/modules that can decrypt these formats, no pure rust libraries/modules without unsafe, or no support in core? If I'm not mistaken, core is no protection from usage either, core makes use of unsafe in ways they deem safe but unable to be ensured by the language.


Couldn't all sides be satisfied if only signature-verification code were added, but not signing?


I think it's an unfortunate side effect of Rust's reputation for being difficult to understand: this creates an appeal for a certain erudite elitist fandom.

The same type of attitude was quite prevalent among fans of RX programming in the mobile space a few years ago and led to fantastical claims that it was going to usher in a new era of programming.

But it's just one unfortunate facet of the Rust community which can also be extremely positive and helpful, especially for beginners.


This is fun to observe sometimes, especially because Rust is not that difficult compared to other research languages that move PL design forward. Truly, arrogance in a community is a good indicator of completed research making its way to the broad masses, who want to feel good about achieving something, while not being aware that the mountain they are climbing has already become a tourist attraction and all that is left of the challenge is to follow the guidelines.


You see that when someone mentions something that was already safe in Ada.


Money quote: "Rust benefits here that very few people are being forced to use Rust."

Probably more people pick up C++ for the first time, in any given week, than the total who use Rust in production today.

Rust also benefits from the limited historical baggage that comes with being new and incompatible. Unlike Java, which was in the same position, Rust adopted very few old mistakes, and especially unlike Java made few new ones. But as the language approaches industrial maturity (possibly within 10 years) early mistakes will become evident, and cruft will be seen to accumulate.

Rust designers have consciously chosen to keep the language's abstraction capacity limited, which makes it more approachable but reduces what libraries can express. Libraries that are possible, even easy, in C++ cannot be coded in Rust. The language will adopt more new, powerful features as it matures, losing some of its approachability and coherence. But Rust has already passed a key milestone: there is little risk, anymore, that it could "jump the shark".

The language is gunning for C++'s seat. Whether it becomes a viable alternative, industrially, is purely a numbers game: can it pick up users and uses fast enough? The libraries being coded in C++ today will never be callable from Rust.

Go proved that the world will make room for a less capable language (in Go's case, than Java) if it is simpler. Rust is much more capable than C, Go, or Java, and the world would certainly be a better place if everybody coding those switched to Rust. So, my prediction is that Rust and C++ will coexist for decades. The most ambitious work will continue to be done in C++, but a growing number will have their first industrial coding experience in Rust instead of C, and many will find no reason to graduate to C++.


> and the world would certainly be a better place if everybody coding C or Go switched to Rust

Perhaps. Let's engage in a thought experiment. Sorry for moving slightly off-topic, but the line I quoted made me think about this.

Someone fashions a magic wand, which you can wave over C, C++, and Go programs / libraries to instantly re-materialize them as idiomatic Rust, while preserving all of the "good" output they produce, and simultaneously removing the "bad": all memory safety and data race related bugs they exhibit.

You get to use this magic wand on any program you like, instantaneously. You do so, creating linux-rs, glibc-rs, chromium-rs, etc. in the process. You cargo build all of this new software and replace the old C / C++ versions with it, in-place.

In the brave new Rust-powered software world, does your day-to-day computing experience change? Is it materially better?

Speaking for myself, the answer is "no", unfortunately. Perhaps this message is coming from a place of frustration with my own day-to-day computing experience. Most software I use is much more fundamentally broken, in a way that doesn't seem to be dictated by the programming language of choice. The brokenness has to do with poor design, way too many layers of absolutely incomprehensible complexity, incompatibility, and so forth. I don't remember the last time I saw a Linux kernel oops or data corruption on my machine, but I am sometimes waiting _seconds_ to type a character into Slack.

I like most of the ideas behind Rust (I don't like the language itself and some of the choices the authors made, but that is another discussion). However, I think there is only so much you can fix with the shiny and sharp new tools, because it seems to me that most issues have little to do with low level matters of programming language or technology, but with higher level matters of design, taste, tolerance for slowness / brokenness / incompatibility, etc.


Part of the reason your Slack is so slow is that a lot of stuff is built to protect from problems that Rust might eventually solve.

Slack builds its UI on web technology that got widespread in part because it solves awkward problems with deployment (self-contained and consistent graphics libraries, so you don’t have to worry about how your DE compiled this or that toolkit) and safety (web tech is heavily sandboxed so that crashes and code execution won’t open doors to bad actors). In the long run, Rust will definitely make the latter less cumbersome (less worrying about crashes -> simpler, lighter, faster sandboxes) and possibly help with the former a bit (desktop environments and their libraries could shed some complexity when moving to Rust and make it easier for programs to access them safely).

I think it’s a noticeable step forward. Will it solve everything? No, some of the problems with Slack-like situations are due to economic factors (browsers sticking to JS will forever continue to make JS programmers cheaper and more plentiful than basically any other type of programmer) that Rust is unlikely to affect. But perfect is the enemy of good in this sort of thing: incremental progress is better than no progress.


But I think Rust is also quite vulnerable to the layering problem the previous commenter is describing. One of the best things about Rust is how easy Cargo makes it to include third-party code in a project, but this is also one of Rust's biggest risks. It's already common for Rust projects to have massive dependency lists, and that's something which generally gets worse over time rather than better.

Rust as a language may have favorable properties with respect to speed and safety, but programs which run on top of a massive tree of third party code which has been written by god-knows-who tend not to be very fast or very secure.

NPM has already shown that dependencies can be used as an attack vector, and unless Rust can solve this problem, I don't think it's going to bring us some brave new world where we don't have to sandbox anymore.


> programs which run on top of a massive tree of third party code which has been written by god-knows-who tend not to be very fast or very secure.

You have a point about security, but not about speed. I can probably link 5 "we rewrote it in Rust and it was much faster" articles. All of these used third-party libraries. ripgrep, for example, is faster than grep despite having more dependencies. In reality, Rust just promotes better code reuse without impacting run-time speed. If anything, separating your code into crates improves incremental compilation times.

It's possible that you might pull in a large dependency with many features. Compiling all of this and removing the unused code will cause a compile time penalty and no run time penalty. In practice, Rust crates that expose multiple features have a way to opt-out/opt-in to exactly what you need. No penalty at all. In any case, most rust crates err towards being small and doing one thing well.

Examples

- https://blog.mozilla.org/nnethercote/2020/04/15/better-stack...

- https://hacks.mozilla.org/2018/01/oxidizing-source-maps-with...


I agree that Rust has very favorable characteristics when it comes to performance. My argument would be that language choice is not a panacea. It's certainly possible to write performant code which leans on dependencies, but the style of development which relies heavily on piecing together 3rd party libraries and frameworks without knowledge of their implementation details is not a recipe for optimal performance.


I see this as sort of saying "Imagine you could cure Ebola, is the world a better place? Well, for me, no, I'm much more likely to get hit by a car".

While I am unlikely to be attacked through a memory safety exploit, I also:

* Have been attacked through one in the past, when the internet was a different place

* Wonder how much time and money could be better spent if we just eliminated that entire class of problems - perhaps solving some of those poor design issues?


I think the reason such a magic wand can't exist is actually why it would be a material improvement if it did - it would fix swathes of bugs that rustc would refuse to compile, and that require understanding the application semantics to fix.

I don't know when I last saw a kernel oops or data corruption either, but I certainly routinely experience bugs that could be manifestations of memory mismanagement.

And if everything written in Java were transpiled, with no `panic`s, bells, or tracebacks :vom:'ed into the GUI, oh how I'd celebrate.


I feel for everybody obliged to be interrupted all day by Slack.

I would like to disagree with you, but I can't.


>In the brave new Rust-powered software world, does your day-to-day computing experience change? Is it materially better?

A simple example would be that Heartbleed and Dirty COW would not have existed in a Rust world.


Would Rust actually have prevented Heartbleed? Most memory-safe languages wouldn't have, because OpenSSL wasn't using regular memory management; it was using a custom memory pool with custom array types that referenced that pool.

Maybe in many other languages they would have had better alternatives than that implementation, but I'm pretty sure their implementation could have been ported to valid Rust exhibiting the same Heartbleed bug.


>Rust designers have consciously chosen to keep the language's abstraction capacity limited, which makes it more approachable but reduces what libraries can express. That means libraries that are possible, even easy, in C++ cannot be coded in Rust.

Like what? I can think of many examples where I can't express an abstraction from Rust nicely in C++: proc_macros, strong typing, macros that are part of the AST, enums with data, safe passing by reference, nice iterators (none of this begin() and end() line noise), less noisy lambda syntax, and of course memory safety.

It's especially nice abstractions where Rust shines, compared to C++.


Both C++ and Rust have nice abstractions. Since you asked for an example of a C++ one, consider a bignum library.

In C++ you can write a natural-feeling library that provides familiar arithmetic. In Rust, you will be constantly smacked in the face with lifetimes, borrowing, etc. because there's no way to do implicit clones or borrows.


Implicit clones are enabled using the `Copy` marker trait in Rust.


`Copy` implies that an object can be bitwise copied while preserving desired semantics; Rust has no exact equivalent to copy construction or move semantics in C++, which can be entirely implicit while involving custom code. (There's a 3rd-party `transfer` crate that enables a `Clone`-like marker for custom moves, however. It's used together with the Pin feature, so these moves are also explicit.)
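A minimal sketch of the distinction being drawn here (type names hypothetical):

    // `Copy` opts a plain-data type into implicit bitwise duplication.
    #[derive(Clone, Copy)]
    struct Meters(f64);

    // A heap-owning, bignum-like type can only be `Clone`: every duplicate
    // must be spelled out, which is the friction the C++ comparison is about.
    #[derive(Clone)]
    struct BigNum { digits: Vec<u64> }

    fn main() {
        let a = Meters(1.5);
        let b = a;              // implicit copy; `a` remains usable
        let x = BigNum { digits: vec![1, 2, 3] };
        let y = x.clone();      // explicit; a bare `let y = x;` would move `x`
        let _ = (a, b, x, y);
    }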


Plenty of those can be better expressed in C++20, alongside clang-tidy and the VC++ analysers.

Compile time reflection and metaclasses already exist as VC++ prototype, demoed at Visual C++ 2020 virtual conference.

While I enjoy using Rust, what I care about are libraries and IDE tooling, especially for .NET and Java integrations, so for better or worse, C++ keeps being the tool to pick.

Also I have big hopes for Rust/WinRT, but it is still very early for it to eventually reach C++/CX tooling, which even C++/WinRT is currently playing catch up and where WinUI team is placing their resources.


>Plenty of those can be better expressed in C++20, alongside clang-tidy and the VC++ analysers.

Aside from the lack of C++20 support on many targets, clang-tidy and extra analyzers require other people to take the same care as you do, an assumption that, as somebody mentioned in that article, is unreasonable to make in a non-solo project.

>Compile time reflection and metaclasses already exist as VC++ prototype, demoed at Visual C++ 2020 virtual conference.

A prototype in a Microsoft dialect of C++? That's apples and oranges.

>While I enjoy using Rust, what I care about are the libraries and IDE tooling

IDE tooling has gotten better, but of course that does still lag behind. While C++ build systems IMHO lag behind cargo, so for me that's a tie.


It isn't a Microsoft dialect, rather what might be coming in ISO C++23.

Conan and vcpkg are already establishing themselves, and they have a killer feature over cargo: binary dependencies.

Also, too many Rust libraries still reach for unsafe needlessly.


Willingness to lock oneself into corporate walled gardens never fails to mystify me.

To my understanding, the only reason Java was embraced with any enthusiasm in 1995 was as a way off of Microsoft's frameworks treadmill.


Then you will gladly refuse any contribution from Microsoft, Apple, Oracle, Google into Rust.

Better let Microsoft know about their mistake to sponsor Rust.


Whenever Microsoft can get people writing unportable software, and locked in again, they benefit; same for any corporate octopus.

We don't.


Yet the Rust community doesn't seem to think the same way as you.

I see the community quite happy with the corporate help it is receiving across the board.

Waiting to see you push a Rust fork free of any kind of corporate contribution.


You entirely miss the point.

Any resources MS provide to improve the compiler is an obvious pure win for Rust users. Likewise, any portable code they release.

But all the code other people write that only works in MS's walled garden benefits mainly Microsoft, at the wider community's expense.

I am astonished at the need to explain this, as if the last several decades had not happened.


You are the one missing the point: either you embrace corporate support or you don't. There is no picking and choosing here.

Plus what makes Microsoft bad, and Oracle, Apple and Google contributions good then?

I am astonished that FOSS religion keeps being a thing, while it has been proven in the last 25 years where most projects without corporate love end up.


Again you entirely miss the point.

Nobody but you has mentioned rejecting Microsoft contributions to the Rust compiler and library suite. Nobody but you has mentioned there being anything wrong with MS, GOOG, AAPL, ORCL or anybody else contributing to projects.

The topic is people who are not otherwise beholden to Microsoft (and, thus, not employed by Microsoft) writing code that works only in Microsoft execution environments.

I have explained this three times in this one thread. I don't know a way to make it any clearer. What is so hard to grasp about this distinction?


> That means libraries that are possible, even easy, in C++ cannot be coded in Rust.

Can you give some examples of libraries that can be made in C++ that can't in Rust and why they can't? Having never used Rust yet I'm curious what the issue is.


Imagine something like SwiftUI, including support for dragging components out of a toolbox, and having an eco-system of companies selling such components.

https://www.componentsource.com/search/products?f%5B0%5D=at%...

https://microsoft.github.io/microsoft-ui-xaml/

The borrow checker in its current state makes it very hard to build such tools.


Why? I don't understand what this has to do with the borrow checker.


Because you dynamically change the widgets graph, like in Unreal Blueprints or Shader Graph.


> Go proved that the world will make room for a less capable language (in Go's case, than Java) if it is simpler.

Is this a fact, is there consensus on this? Didn't it just prove that if you are a corporation the size of Google, you can push your libraries or languages that far and make them that popular, just by virtue of forcing some people to use them and spend a lot of resources on guides and development and marketing?


How does Google force anyone to use Golang?

If you are a platform vendor like Apple or Microsoft you can be dictatorial and only support your language of choice; In fact Apple's infamous "section 3.3.1" was going to limit iPhone apps to only use approved languages.

But I don't see how Google uses their ecosystem to push Golang. Sure, they use their resources to develop the language and people probably feel better adopting a new language that comes from an established tech company with a lot of resources than one that is purely unfunded open source.


I'm pretty sure most people using Go choose it for the simplicity and low overhead of the runtime, not the language itself. Perhaps the simplicity of the stdlib as well.


Simple ecosystem. Static binaries. Simple tools. For day-to-day code, the ecosystem sometimes matters more than the language itself.


Exactly.

Though I would say that Go's tools are anything but simple from my point of view, once you get past the very simplest things. It is much easier to understand how to configure a Maven-based project with multiple modules, where it will get those modules from, and what version numbers everything has, for example, than it is with `go mod`, which has far too much magic and barely documented behaviors still. Also, there's still no good IDE, no good language server, no good refactoring tools, and no good debuggers for Go.

Go tooling is still way behind Java, C#, and Node, to name just the ecosystems I've worked with. It's better on the deployment side than C or C++, but worse on the debugging side.


> The libraries being coded in C++ today will never be callable from Rust.

This is not true. The FFI story for Rust <=> C++ [1] is good enough that people are experimenting with it. Where I work, most internal infra client libraries are written in C++, and now they're all available in Rust.

Your prediction that Rust and C++ will coexist for decades is almost certainly true. But I foresee that they'll coexist in many of the same codebases. Code that operates on untrusted input will be written in Rust for security reasons, while the rest is in C++ or Rust. That's the approach Chromium [2] is experimenting with.

[1] - https://github.com/dtolnay/cxx/

[2] - https://chromium-review.googlesource.com/q/project:experimen...


It is absolutely true.

The language features that make C++ libraries powerful are wholly inaccessible from Rust, and will remain so. Rust will only ever be able to use parts of C++ libraries specifically cut down or bowdlerized for access by foreign languages.

Much of the power of C++ libraries is in their ability to direct the compiler's treatment of the calling code.

You cannot write an equivalent of std::tie in Rust, or call it.


Rust supports proc macros that can "direct the compiler" in arbitrary ways.


Rust macros know nothing about types, and anyway C++ libraries define no Rust macros.


> Rust adopted very few old mistakes

I'm of the opinion that Rust copy-pasted a number of C++'s mistakes, although mostly the trivial ones.

To name a few, required semicolons just feel dated and unnecessary when compared to the many modern programming languages where they're not required.

Also the use of vector to name dynamic-length arrays is something which has been an annoyance in C++ for decades for anyone who deals with linear algebra and would really like to have that name reserved for its precise mathematical meaning.

These are quibbles, and they don't really matter, but it's frustrating that the opportunity was missed to put these kinds of issues to rest and not carry them forward into a brand-new programming language.


Rust is a fantastic language, and once past the borrow-checker level, it can be quite productive. One interesting compatriot of Rust is Swift, which also ticks most of the boxes in that list. Swift also has, IMO, a better development experience, thanks to Apple's initiatives. I wonder what Rust developers think about Swift.


Fun fact: several early Rust developers, including the initial author Graydon Hoare, switched to Apple and started working on Swift.

The languages have a shared legacy and feel quite similar in certain aspects, with Swift being a higher level adaptation.

Swift is a lot easier to use, with automatic reference counting, quite a bit more convenience and syntax sugar, and a class system.

It's a lovely language.

It lacks what are the defining features of Rust (for me) though: low level control available when required but usually hidden behind nice abstractions, borrow checker, concurrency safeguards (Send/Sync in Rust), trait system over classes, and a good macro system.

The biggest conceptual downside for me is the class system with inheritance, overrides, etc: Rust has traits that are somewhat similar to Haskell type classes, and are much nicer and more coherent to use in many domains. (it's far from perfect, especially around the severely limited trait objects and dynamic dispatch, but that is a longer topic)

The biggest practical downsides are Apple's disregard for other platforms (Linux/Windows are very much second/third-class citizens) and the Objective-C compatibility baggage, which makes the language a bit messy.

But overall I think it is easy to like and appreciate both languages.


> disregard for other platforms (Linux/Windows are very much second/third-class citizens)

It's worth noting that improved and cross-platform support is a stated priority for the next Swift version. It's already quite usable on Ubuntu, and Windows and other Linux distros beyond Ubuntu have been added to the CI pipeline for Swift development.


Yes, there's a lot of cross-pollination between Rust and Swift, and Swift devs pragmatically use features from the Rust ecosystem. IMO Rust has a much better FFI than Swift, but that's primarily due to Apple. I'm really stoked for both languages to gain a larger space in the developer ecosystem.

> The biggest practical downsides are Apple's disregard for other platforms.

This is unfortunately true, but I see the Swift committee making serious efforts to overcome it. I hope it gets better with time. Swift on the backend (Vapor/Kitura) is simply ages behind Actix.

Another language that's syntactically close is Kotlin/Native, even though it has quite different goals. These 3 languages have brought much excitement to development in the past few years.


Well, for creating bindings, Rust's bindgen falls short. The fact that you need to play around and switch on flags just to properly recognise the variables, macros, and functions in a .h file, only to generate the FFI layer, makes it harder to maintain, even when simply updating the library.

Swift's ClangImporter does this automatically in the language via clang modules, unlike Rust's bindgen.


> IMO Rust has much better FFI than Swift, but that's primarily due to Apple.

How so? Swift has a stable ABI that, among other things, enables it to expose FFI bindings to many other languages. Rust has made the pragmatic choice of having no stable FFI beyond the C one, so if you want to setup FFI to a Rust library you'll have to write C-compatible wrapping code in Rust and expose that.


Swift got a stable ABI in version 5; that's a bit late IMO. Also, the community efforts to improve Rust interop (e.g. Rustler/Elixir) are way ahead compared to Swift's. That being said, I personally am happy that both languages have first-class FFI.


> that’s bit late IMO

It's pretty rare for a language to have a stable ABI, no? I mean Rust still doesn't have one.


Rust and Swift are my two go-to languages when I have free choice for a given task. I love both, and I miss things about Swift when writing code in Rust and vice versa.

What I love about Swift is that it's an incredibly expressive language which is easy to write, so it's possible to be incredibly productive in it. The type system is awesome, the syntax is super clean and consistent, and thanks to a lot of well-thought-out syntactic sugar it often feels like I'm almost working in a DSL for the problem I'm working on.

What I miss from Rust when working with Swift is the level of specificity. Swift makes some assumptions about how values and references are handled, which makes things easy to code, but can often result in unnecessary copying and reference counting which can only be avoided by bending over backwards in the implementation or resorting to the unsafe API. I like how Rust gives me precise tools for specifying exactly how memory should be handled.

To be honest, I think my perfect language would have something like Swift's front-end, and something like Rust's memory model.


From everything I've read, Swift is still slow relative to Rust because it uses ARC, and so there are increments and decrements of usage counts all over the place. To put it another way: Swift handles memory safety at runtime, Rust handles it at compile time?

There is/was talk of trying to either do escape analysis or add some hints so the compiler can get rid of those checks but AFAIK that hasn't happened yet?


It's a bit more complex than "swift is slow". As you say, ARC can lead to some performance cliffs when using reference types (class), but Swift manages memory statically for value types (struct, enum).

Modern idiomatic swift uses reference types quite sparingly, and Swift can be quite performant, but usually requires prolific use of profiling and optimization to get there.

Best case performance for Rust will probably always be better, because even for statically managed types, Swift leans more heavily toward copying values where Rust tries to leave them in place whenever possible.


"ARC can lead to performance cliffs" is quite optimistic. Obligate reference counting ala Swift has even lower performance than obligate GC, at least wrt. throughput. It's pretty much a dead end in many ways.


Yeah but it depends on the structure of the program right? Many swift programs might have a handful of reference counted objects which are not used in the performance-critical section of the program so the ARC penalty is negligible.

I agree that ARC is probably an obstacle for Swift to overcome in the long term. There have been some interesting discussions in the Swift forums around this, and there may be some solutions, like for instance the compiler being smarter about knowing when ARC is truly necessary, and replacing it with simple RC when, for instance, an object never leaves the thread where it was created, which is probably true of a majority of objects.

Probably this can be addressed when Swift's ownership model is more mature, and when the concurrency model has been formally defined.


Rust has Arc<> too, but it's optional. And it only increments and decrements wrt. Arc references that might affect when the object is dropped; it doesn't need to do so with references that are outlived by some existing Arc reference. I think this is what you're calling 'escape analysis' in your comment, but Rust does it statically and the 'hints' are ordinary Rust syntax.
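
To make that concrete, here's a minimal sketch (function and variable names are mine): cloning an Arc touches the count, but an ordinary borrow, which the compiler checks statically, doesn't.

    use std::sync::Arc;
    use std::thread;

    fn main() {
        let data = Arc::new(vec![1, 2, 3]);

        // Cloning the Arc increments the count; the spawned thread
        // decrements it when its clone is dropped.
        let data2 = Arc::clone(&data);
        let handle = thread::spawn(move || data2.iter().sum::<i32>());

        // A plain borrow does no counting at all: the compiler proves
        // statically that `data` outlives this call.
        let len = data.len();

        println!("{} {}", len, handle.join().unwrap());
    }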


The thing Swift has over Rust is a laser focus on language ergonomics. This is what really sells me on the language. It makes it extremely expressive and clear.


Related question: what's the state of Swift outside of the Apple ecosystem, in particular on Linux?

I looked at it briefly and I liked it a lot, but does it have a future as a platform-independent language?


It's decent and getting better. I'm using Swift for computer graphics projects on Linux and it's working great and it's quite pleasant to use.

Currently it's only officially supported on macOS and Ubuntu, but the next major release (5.3) will add official support for Windows and more Linux distros.

Tooling still has a ways to go. Xcode is still the best way to write Swift, but Sublime Text, Vim and VS Code are usable with Swift thanks to LSP support.


Also: almost everyone who uses rust does so by their own choice.


Which makes it seem logical that the people who consistently use it do so because they love it. Point proven.

Unlike, say, JS. If you want to make webware you just have to use it, regardless of whether you like it or not. Or C/C++ for embedded. There are efforts to bring more languages into that space, but at the moment you'd be hard pressed if you wanted to use anything else.


You don't have to use JavaScript. I don't write frontend professionally, but have used Elm with great satisfaction for toy projects - without having to know anything about JavaScript.


The same is true for Haskell, so I would have expected to see it much higher.


This is mentioned in the article.


I don't get this comment; Rust is a new language that isn't widely embraced by employers yet.

How would Rust have a significant number of users forced to use it?


It's discussed in the article, but that quality kind of 'games' the metric SO is using: what % of people using it want to keep using it? If people were forced to use it for work then you'd likely see a lower number, regardless of how good it is, just because it wouldn't be every single person's preference.

It’s not a bad thing - a new language that doesn’t reach that point probably dies - but it does make it easier to score highly on their metric.


He means: another reason Rust users love it so much is that they are not being forced to use it.


"In type checking, only the signature of functions are considered. There’s no relying on the implementation for determining if callers are correct (like you can do in Scala, or Haskell)"

What does this mean?


In modern C++, you can write:

    template<typename T, typename U>
    auto myFn(T x, U y){
      return x + y;
    }
Note the return type of the function is not specified; it could return anything, depending on what the type of x + y is. In Rust you can't do that; you must specify the return type of the function.
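
For contrast, a rough Rust sketch of the same function (my_fn is my name for it) must spell out the return type in the signature, here via Add's associated Output type:

    use std::ops::Add;

    // The body is still generic, but the signature alone tells
    // callers exactly what comes back.
    fn my_fn<T: Add<U>, U>(x: T, y: U) -> T::Output {
        x + y
    }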


That's not a good C++ example for comparison because that's using templates.

The whole point of templates is to have "code that generates other code". Therefore, omitting the types as much as possible lets the compiler deduce types so the programmer doesn't have to manually write multiple redundant versions with different types.

With normal functions one parameterizes values, but templates let you parameterize types, so explicitly writing a return type can defeat the purpose of templates.

In the context of the Rust blog, the comment about type signatures seems to be about ABI (application binary interface) backward compatibility of libraries, not about templates generating code.


It also works without templates:

    auto add(int a, int b){
      return a + b;
    }
But yes there's usually less reason to do it for a normal function.


You can know the exact parameter and return types of a function by reading its signature. You can't do that in Scala, because of type inherence. This is legal Scala:

  def twice(x: Int) = { x * 2 }
But you can't know the return type without reading the function body. That example is not so bad. I routinely used to confront things like:

  def applyComputation(x: Int) = { determineComputation().compute(x) }
And now you're off on an adventure to work out the return type.

EDIT: There's sort of an exception to this with "impl Trait" returns. There the signature says that the function returns some type which implements a certain trait, but you can't tell exactly which type.
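
For example (a minimal sketch):

    // The signature promises "some Iterator over i32" without naming
    // the concrete (and unnameable) adapter type behind it.
    fn evens(limit: i32) -> impl Iterator<Item = i32> {
        (0..limit).filter(|n| n % 2 == 0)
    }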


Why is this a problem? It's trivial for any editor to just display the return type. How is the lack of type inference a feature? Type inference makes a statically typed language so much better. For example, in OCaml it is rare to ever explicitly annotate the return type of a function, and it's never caused me any problems. If I need to see the return type, I just hit C-c C-t and it's displayed for me.


It's been a while since I've written any Scala, but if I recall correctly it's frowned upon to use type inference for return values in method signatures.


> it's frowned upon to use type inference for return values in method signatures

The language shouldn't allow it in the first place. It's a symptom of not balancing power with complexity, and it shows up elsewhere in Scala. They "were so preoccupied with whether or not they could that they didn't stop to think if they should."


> The language shouldn't allow it in the first place. It's a symptom of not balancing power with complexity, and it shows up elsewhere in Scala.

To me it rather seems you have not used some of Scala's features which are not really usable without return type inference, e.g. type-level computations where the return type depends on the inputs.

Of course Rust's type system (while maybe Turing complete) is vastly less powerful than Scala's, so it might not have the need for it.


OCaml also does global type inference like this, and it generally works out fine (unlike Haskell, it's not idiomatic to write a type signature for every single top level function in OCaml). Maybe because the type system is more principled compared to Scala.


Ocaml and the MLs have a very strong commitment to global type inference, much stronger than Haskell's. Scala has no commitment.

That said, you do see a lot of types in Ocaml code. Ocaml source files typically have an accompanying signature file (with extension ".mli" rather than ".ml"). The signature file gives explicit types for all of the definitions (fields) in the structure file. Often, you need to write these signature files because you want to hide implementation details from the user and so give narrower types than the ones inferred by the compiler.

You can and do write Ocaml without ".mli" files, and there, you are relying heavily on global type inference, and built-in Ocaml tools to tell you what your most general ".mli" file would look like. You can and do get the compiler to write them for you and then add your restrictions and documentation. As such, Ocaml programmers are very used to reading these signatures as the entry point to understanding a library.

This doesn't work so well in Haskell, because Haskell doesn't have global type inference, and annotations are sometimes mandatory. Consider the expression

   show (read "1")
Without saying what type "read" is supposed to return here, there's no way to know what this code is supposed to do.


> Ocaml and the MLs have a very strong commitment to global type inference, much stronger than Haskell's

This is hilarious to read when (+.) is a thing in OCaml

> Without saying what type "read" is supposed to return here, there's no way to know what this code is supposed to do.

It has nothing to do with global type inference, as "1" can successfully and meaningfully be parsed into several distinct types, like Int, Integer, Float, MyCustomEnum at the same time, where each of them has their own implementation of `Show`. You cannot "magically" infer that globally, unless you introduce an equivalent of (+.) for disambiguating parsing, which would serve the same purpose as the explicit type annotation passed to the parser.


>because Haskell doesn't have global type inference

I'm curious what you mean by that. Is it because Haskell's additions mean that its version of Hindley-Milner type inference is not able to infer types for all expressions?

Is this also why the OCaml Emacs mode is much better, even though Haskell has more developers?


In part, that's right. For example, there is an extension called "rank-n" types where global type inference is provably impossible. But even in Haskell 98, type classes often create ambiguities that mean the compiler doesn't know what code to generate, as in the example I gave with

   show (read "1")
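   show (read "1" :: Int)  -- fixed: the annotation picks the Read instance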
You fix these with annotations.

I couldn't say why OCaml's Emacs mode is so much better. Merlin was a game changer for me, and I've never had issues getting it to work. ghc-mod is the Haskell equivalent, but I've never managed to get that to work.


I'm curious what about OCaml's Emacs mode is better. I use dante for Haskell in Emacs, which is good, but I'm always interested in hearing about better technologies.


I'm not the parent, but thanks for the link to Dante.

One of the main things I like about Ocaml and merlin is how robustly it can tell you the types of expressions by hitting "C-t". It usually works on incomplete code, and it will tell you the type of arbitrary subexpressions (not just identifiers) in your selected region.

It will do automatic destructuring of an identifier (producing a match/case expression with the patterns in the ADTs filled in for you). It's not perfect, but I use it a lot for complex ADTs.

The autocompletion is great too. It will complete for local variables in scope, and it must be having to do some fairly complex stuff in the background, since it'll autocomplete for local modules applied to functors. For example, you can write

   let foo x =
     let open Foo(Bar) in
     ...
and when you autocomplete inside the "...", it will bring in completions from the module generated by applying Foo to Bar.

I'd be interested to hear how dante compares.


Thanks!

> I'd be interested to hear how dante compares.

> it can tell you the types of expressions

dante has flaky support for this

> It will do automatic destructuring of an identifier

dante does support this. It's a bit hokey because the code it inserts doesn't match pre-existing indentation, but it is useful.

> The autocompletion is great too. It will complete for local variables in scope

That sounds very cool. I don't think dante does that, although I've never tried it.


The keybinding, by default, is C-c C-t, btw.


> Ocaml tools to tell you what your most general ".mli" file would look like

That's a nice idea! Can those tools do automatic refactorings to use the generalized type?


Ah, sadly I haven't come across any refactoring tools for Ocaml. But in Emacs with Merlin, I regularly hit "C-t C-t" on a module identifier, and it brings up the signature in another window, whether there is an ".mli" file or not.


Yes, inference. My phone is very keen to use the word "inherence", which I didn't even know was a word.

In Scala it's frowned on. In Rust it's not possible.


> But you can't know the return type without reading the function body

Use a good IDE like IntelliJ. It has shortcuts to show it for you, and it can display the return type automatically in the editor if you want (using a different font style to distinguish it from the actual code). It can even be configured where to show it and where not.


So basically Rust is like C/C++ in this way? In C++ you also have to know the exact types of arguments and return types?


It's like pre-C++14 C++. C++14 added automatic return type deduction!


This is one thing which makes it almost unbearable for me to go back to programming in Javascript. The signature gives precisely zero information about the types being used, so if a library is not well documented, you often have to go to the source to figure out what's being expected and returned.


Rust is great, but it could gain from explaining itself in terms of memory and pointers; otherwise, why does a String have the Clone trait and a u32 the Copy trait? I tried to learn it as a first language and it was hard until I started learning C and grokking the stack, the heap, and pointers. I think all the tutorials out there are ill-suited since they hide away so much. It's harder to remember stuff if you don't understand why it is that way, at least for me. So I would love to see a Rust-for-beginners tutorial written for C beginners. Maybe I'll write it someday. I really got discouraged when it got into the ugly lifetime syntax, but I will definitely come back to it. (Unless Zig shows itself to be just as safe and less verbose.)


Fair point.

The Copy on u32 and lack of Copy on String is confusing until you've grasped that types owning heap allocations are ineligible for Copy (a bitwise copy would leave two owners of one allocation), and that Copy is intended for plain data that can be duplicated with a simple memcpy, typically values around the size of a pointer or smaller.
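
A tiny example makes the difference visible (a sketch of my own):

    fn main() {
        // u32 is Copy: a bitwise copy is a complete, valid duplicate.
        let a: u32 = 5;
        let b = a;
        println!("{} {}", a, b); // `a` is still usable after the copy

        // String owns a heap buffer; a bitwise copy would create two
        // owners of one allocation, so duplication must be explicit.
        let s = String::from("hi");
        let t = s.clone(); // deep copy; `let t = s;` would move instead
        println!("{} {}", s, t);
    }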


It feels like being part of a village that learns to love the dragon it battles.


It's the only production-ready language that is both memory safe and has zero-cost abstractions (i.e. for any C code there is Rust code that compiles to equivalent assembly, and using more abstractions in Rust does not make the assembly less efficient unless the abstraction fundamentally can't be implemented more efficiently).
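
As a rough illustration of the zero-cost claim (my example; the point is that both functions typically compile to the same machine code):

    // High-level iterator chain...
    fn sum_squares(v: &[i64]) -> i64 {
        v.iter().map(|x| x * x).sum()
    }

    // ...and the equivalent hand-written loop.
    fn sum_squares_loop(v: &[i64]) -> i64 {
        let mut total = 0;
        for x in v {
            total += x * x;
        }
        total
    }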

Also, as long as you accept not having dependent types (at least for the short and mid term) and several currently unimplemented features, Rust is close to the optimal design for a programming language, assorted minor warts aside.


Batteries are very much already included. The missing features you are referring to are likely not show stoppers.


It's a niche language that dominates its niche.

I wouldn't write a LoB application in Rust, for example. But if I wrote programs with really tight speed and memory requirements for a living, I would pick Rust for the task.

If people were forced to write their website backends in Rust (or even their frontends in Rust targeting WASM) they would hate it. Its performance is overkill for 99.9% of backends, but the means of getting this performance kill your productivity.


I've been using Rust for backend web dev and networked services for the last two years. I'm working on the greatest project of my life (until my next project) and Rust is really helping me along. The performance and resource efficiencies are great side effects. I don't understand the argument that performance benefits are overkill for web development. Obviously, no one needs to use a flamethrower to light a cigarette. However, a fully fledged web server is a very complex system that taxes performance with all that it does. I'm working on a greenfield project and had discretion over what tooling to use. I decided to invest in Rust, eating short-term productivity losses in exchange for long-term gains. Those gains are realized at many points in development, compounding over time.

As team members are brought into this project, I will accept a simple "thank you" for bringing joy to their work and renewing their aspirations for creating better products. Web dev doesn't have to be so shitty an experience, but it requires investment. That's a tough sell for managers who aren't coders, but those who rose from engineering understand its importance.


> I wouldn't write a LoB application in Rust

It's ironic, but Rust is actually easier to use for LoB than for low-level stuff (for the obvious reason that low-level stuff requires more intricate knowledge).

I'm building an ERP/ecommerce backend, ported from F#. Rust, like F#, has all the machinery to encode business requirements nicely, but I think I'm building much better logic in Rust than in F#, partly because imperative and async code is effortless, Rust's syntax doesn't depend on inference (so no guessing what a function returns), and the From/Into traits are wonderful.
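
As a small illustration of the From/Into point (the types here are hypothetical):

    // Newtype for a domain concept; From defines the one conversion.
    struct OrderId(u64);

    impl From<u64> for OrderId {
        fn from(raw: u64) -> Self {
            OrderId(raw)
        }
    }

    fn lookup(id: OrderId) -> Option<String> {
        Some(format!("order {}", id.0)) // stubbed out
    }

    fn handle(raw: u64) -> Option<String> {
        // Callers convert explicitly but cheaply at the boundary.
        lookup(raw.into())
    }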

On the other hand, I'm building a relational language on the side, and it is very hard! Somehow I hit all the hard corners of Rust (and have progressed like a snail), whereas the ERP backend is progressing as fast as I have managed in other languages (F#, Python, Swift, ...), and I'm even doing things I couldn't before (for lack of time).

My only 2 major complaints: Rust's slow compiler (Rust, like C and C++, is damn slow to compile) and the web ecosystem still lacking a Django-like experience.


My current side project has a frontend in Rust via WASM, and I love it. Way better than the huge mess that is the JS ecosystem.


Rust WASM is still very bleeding edge. Let's not give anyone a false impression of what they can manage to build today.


I've been casually playing with Rust for a few years now. Wrote a few small things in my previous job that still run a large part of their business, which is pretty satisfying, but only very recently have I found a couple of hobby projects where it just feels like the right tool for the job (for me). Web stuff I'd still much rather write in Ruby, if I'm honest, but for playing around on systems Rust is super fun. I ended up making https://git.sr.ht/~robotmay/amdgpu-fancontrol, of which there's already an equivalent in Python, but the lack of dependencies when installing a piece of Rust software makes it feel very portable and neat.

My favourite metaphor for Rust is that it's like a friendly bare-knuckle fist-fight with the compiler. It's not as user-friendly as, say, Elm, but it's streets ahead of Haskell's errors.


Can I just say, I really appreciate the Community reference

Edit: Btw, if you have to ask what I mean you’re streets behind


A newbie question.

As a seasoned C#, Python and JS programmer, what conceptual foundations in CS will help me use Rust more effectively?

Say I want to create a new database service, on top of PostgreSQL, using Rust. Would the design of Rust help me in a specific way?

I want to learn and use Rust for systems programming, the kind where I build a high-performance underlying system called by other languages, but it always feels like I need to learn quite a bit of theory to use Rust effectively.

I never felt the same with C# or python. A bit of OO stuff was usually enough to be productive with them.


Where's a good place to start with Rust? Which domains is it particularly good in?


Also the same reason why people love C++: Stockholm syndrome.


86.1% of people using Rust love it. For C++ it's 43.4%. [1] You'd have to explain the disparity in the two numbers if you think they're loved for the same reason.

Further, Rust is mostly used by people who choose to do so. There are very few people out there forced into maintaining shitty legacy codebases in Rust, because there aren't very many such codebases ... yet.

[1] - https://insights.stackoverflow.com/survey/2020#technology-mo...


"It occurs when hostages or abuse victims bond with their captors or abusers." [1] The stories speak of abuse and an unwillingness to leave. This does not detail how they came to be abused or captives. I'm not saying they were asking for it, dressed that way, but maybe these Rust victims' behavior led them to entrapment. That doesn't mean it's not real. This seems a lot like Stockholm syndrome.

[1] https://www.healthline.com/health/mental-health/stockholm-sy...


Interesting study. Of course you are right. But this study poses the question: are people forced to work in Julia?


Most of the people who use Rust do so of their own volition. I don't see anyone being held hostage to Rust due to corporate policies.


It's just a matter of age... I remember back in 1990, 99% of C++ users loved it, as it was cool.


I don't believe that Rust solves the right problems in the right ways. This is specifically with respect to the single-owner RAII/lifetime system; the rest of the language is IMO pretty nice (aside from the error messages, which are an implementation problem).

For starters, ATS[1] and F*[2] both provide much stronger safety guarantees, so if you want the strongest possible guarantees that your low-level code is correct, you can't stop at Rust.

  _____________________________________________
Beyond that, it's helpful to look at the bigger picture of what characteristics a program needs to have, and what characteristics a language can have to help facilitate that. I propose that there are broadly three program characteristics that are affected by a language's ownership/lifetime system: throughput, resource use, and ease of use/correctness. That is: how long does the code take to run, how much memory does it use, and how likely is it to do the right thing / how much work does it take to massage your code to be accepted by the compiler.

This last is admittedly rather nebulous. It depends quite a lot on an individual's experience with a given language, as well as overall experience and attention to detail. Even leaving aside specific language experience, different individuals may rank different languages differently, simply due to different approaches and thinking styles. So I hope you will forgive my speaking a little bit generally and loosely about the topic of ease-of-use/correctness.

The primary resource that programs need to manage is memory[3]. We have several strategies for managing memory:

(Note: implicit/explicit below refers to whether something is an explicit part of the type system, not an explicit part of user code.)

- implicitly managed global heap, as with malloc/free in C

- implicit stack-based RAII with automatically freed memory, as in C++, or C with alloca (note: though this is not usually a general-purpose solution, it can be[4]. But more interestingly, it can be composed with other strategies.)

- explicitly managed single-owner abstraction over the global heap and possibly the stack, as in Rust

- explicit automatic reference counting as an abstraction over the global heap and possibly the stack, as in Swift

- implicit memory pools/regions

- explicit automatic tracing garbage collector as an abstraction over the global heap, possibly the stack, possibly memory regions (as in a nursery GC), possibly a compactor (as in a compacting GC). (Java)

- custom allocators, which may have arbitrarily complicated designs, be arbitrarily composed, arbitrarily explicit, etc. It's not possible to enumerate them all here.

I mentioned before there are three attributes relevant to a memory management scheme. But there is a separate axis along which we have to consider each one: worst case vs average case. A tracing GC will usually have higher throughput than an automatic reference counter, but the automatic reference counter will usually have very consistent performance. On the other hand, an automatic reference counter is usually implemented on top of something like malloc. Garbage collectors generally need a bigger heap than malloc, but malloc has a pathological fragmentation problem which a compacting garbage collector is able to avoid.

This comment is getting very long already, and comparing all of the above systems would be out of scope. But I'll make a few specific observations and field further arguments as they come:

- Because of the fragmentation problem mentioned above, memory pools and special-purpose allocators will always outperform a malloc-based system both in resource usage and throughput (memory management is constant-time + better cache coherency)

- Additionally, implicitly managed memory pools are usually easier to use than an implicitly managed global heap, because you don't have to think about the lifetime of each individual object.

- Implicit malloc/free in C should generally perform similarly to an explicit single-owner system like Rust's, because most of the allocation time is spent in malloc, and they have little (or no) runtime performance hit on top of that. The implicit system may have a slight edge because it has more flexible data structures; then again, the explicit single-owner system may have a slight edge because it has more opportunity to allocate locally defined objects directly on the stack if their ownership is not given away. But these are marginal gains either way.

- Naïve reference counting will involve a significant performance hit compared to any of the above systems. However, there is a heavy caveat. Consider what happens if you take your single-owner verified code, remove all the lifetime annotations, and give it to a reference-counting compiler. Assuming it has access to all your source code (which is a reasonable assumption; the single-owner compiler has that), then if it performs even basic optimizations—this isn't a sufficiently smart compiler[5]-type case—it will elide all the reference counting overhead. Granted, most reference-counted code isn't written like this, but it means that reference counting isn't a performance dead end, and it's not difficult to squeeze your rc code to remove some of the rc overhead if you have to.

- It's possible to have shared mutable references, but forbid sharing them across threads.

- The flexibility gains from having shared mutable references are not trivial, and can significantly improve ease of use.

- Correctness improvements from strictly defined lifetimes are a myth. Lifetimes aren't an inherent part of any algorithm, they're an artifact of the fact that computers have limited memory and need to reuse it.

To summarize:

- When maximum performance is needed, pools or special-purpose allocators will always beat single-owner systems.

- For all other cases, the performance cap on reference counting is identical with single-owner systems, while the flexibility cap is much higher.

  _____________________________________________
1. http://www.ats-lang.org/

2. https://fstar-lang.org/

3. File handles and mutex locks also come up, but those require different strategies. Happy to talk about those too, but tl;dr file handles should be avoided where possible and refcounted where not; mutexes should also be avoided where possible, and be scoped where not.

4. https://degaz.io/blog/632020/post.html

5. https://wiki.c2.com/?SufficientlySmartCompiler


> then if it performs even basic optimizations—this isn't a sufficiently smart compiler[5]-type case—it will elide all the reference counting overhead. Granted, most reference-counted code isn't written like this, but it means that reference counting isn't a performance dead end, and it's not difficult to squeeze your rc code to remove some of the rc overhead if you have to.

This is only the case if the compiler can effectively inline all functions. When compiling a function on its own, the compiler has no idea whether the function incrementing a reference count is the first to do so or not. In Rust, the type signatures of the called functions are all that is needed to verify the type and lifetime correctness of a given function implementation.

> Correctness improvements from strictly defined lifetimes are a myth. Lifetimes aren't an inherent part of any algorithm, they're an artifact of the fact that computers have limited memory and need to reuse it.

Rust's lifetime analysis and "mutability xor shared" semantics are also useful for correctness, not only in threading (as you mention) but also in the case of unexpected mutation in the same thread: iterator invalidation is probably the most obvious example (and that's not just because "computers have finite memory"; it's intrinsic to how a lot of data structures work).
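
For example, the classic iterator-invalidation bug is rejected at compile time (a minimal sketch):

    fn main() {
        let mut v = vec![1, 2, 3];
        for x in &v {
            // v.push(*x); // error: cannot borrow `v` as mutable
            //             // because it is also borrowed as immutable
            println!("{}", x);
        }
    }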

What's more, Rust's lifetime and ownership system works neatly with pools and other special-purpose allocators, and implementing such patterns in a safe manner is frequently done in Rust (in some cases Rust lets you get away with patterns which would be so wildly unsafe in C++ as to be impractical). If Rust didn't care about allowing such control over memory allocation, it probably would not have many of the features it does have.
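
A toy sketch of what that looks like (Pool and alloc are hypothetical; real arena crates use interior mutability so multiple allocations can be live at once):

    struct Pool(Vec<String>);

    // The returned reference carries the pool's borrow, so the borrow
    // checker forbids using it after the pool's batch free (its Drop).
    fn alloc<'p>(pool: &'p mut Pool, s: &str) -> &'p str {
        pool.0.push(s.to_owned());
        pool.0.last().unwrap()
    }

    fn main() {
        let mut pool = Pool(Vec::new());
        let a = alloc(&mut pool, "hello");
        println!("{}", a);
        drop(pool);           // frees every allocation in one batch
        // println!("{}", a); // error: `a` cannot outlive the pool
    }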


> > rc elision is super trivial

> no it's not

Fair enough.

It's still not a very difficult problem, though. You don't have to inline all functions (which you don't want to do anyway); you can infer lifetime attributes for each function.

RC has another benefit: it's easier to make a naïve compiler; it'll just produce slow code, whereas a naïve single-owner compiler implementation (e.g. mrustc) will accept bad code.


> Rust's lifetime and ownership system works neatly with pools and other special-purpose allocators

How does that work? One of the primary benefits of using pools is that you can deallocate objects in batches (so the destructor for each individual object is a no-op). Can you say that the pool object outlives (or owns) all of the objects allocated from it?


> if it performs even basic optimizations—this isn't a sufficiently smart compiler[5]-type case—it will elide all the reference counting overhead

I'm skeptical. How do you know this?

Are you assuming that borrows wouldn't be reference counted? If so, how would the compiler know that no borrow outlives the value?


The borrows are reference counted. The compiler sees that a function unconditionally increments the reference count at the beginning and unconditionally decrements it at the end. It further knows the semantics of reference counting, so it knows it's ok to elide both of those.
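
In Rust-flavored terms, the shape being described is something like this (a hypothetical illustration of the pattern; rustc itself makes no such promise):

    use std::rc::Rc;

    fn use_it(v: &Rc<Vec<i32>>) -> usize {
        let tmp = Rc::clone(v); // unconditional increment on entry
        tmp.len()               // unconditional decrement when `tmp` drops
    }

Both operations are balanced and local, so an RC-aware compiler is free to elide the pair.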

I don't know enough Obj-C or Swift to show an example/disassembly, but here's a suggestive quote from the LLVM docs[1]:

> ARC may assume that non-ARC code engages in sensible balancing behavior and does not rely on exact or minimum retain count values except as guaranteed by __strong object invariants or +1 transfer conventions. For example, if an object is provably double-retained and double-released, ARC may eliminate the inner retain and release; it does not need to guard against code which performs an unbalanced release followed by a “balancing” retain.

See also from Nim[2]:

> Plain reference counting with move semantic optimizations

1. https://clang.llvm.org/docs/AutomaticReferenceCounting.html#...

2. https://nim-lang.org/docs/gc.html


Never before has a programming language received so much marketing. It's very odd.


I don't think that "marketing" is the right word for a FOSS project that is not affiliated with any for-profit entity and has no business strategy. Rust is truly loved by many who had the chance to work with it and that's why it's honestly promoted more than any other modern language.


I take it you weren't programming when Java was the new hotness?


or Ruby, or Haskell, or Elixir... Rust happens to appeal to the front-end crowd as much as the backend people, and they are leveraging those windows of opportunity much better than any other language or community. wasm-bindgen is a breath of fresh air; it even works very well with TypeScript.


People were following the hype and cargo-culting Java, XML, Visual Basic in similar ways.

Yet I really feel that the echo chamber effect is stronger now. People seem to need something to be hyped and polarized about.

Nuanced conversation becomes more difficult as the hyped crowd overwhelms any conversation.


We were discussing VB and Java in the early days of the Web, in places like CompuServe, BBSs, or plain magazine reader letters.



