Hacker News
Rust concepts I wish I learned earlier (rauljordan.com)
379 points by rck on Jan 18, 2023 | 170 comments



> With the code above, Rust does not know whether we want to clone Arc or the actual inner value, so the code above will fail.

This is incorrect. The code works fine as written; Rust will let you write .clone() and it will clone the Arc (without cloning the inner value). More generally, methods on a wrapper type are searched before methods found via auto-deref. It’s often considered better style to write Arc::clone(…), but that’s for human readability, not the compiler. There’s a Clippy lint for it (https://rust-lang.github.io/rust-clippy/master/#clone_on_ref...) but it’s turned off by default.
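
A quick check (a minimal sketch) showing that `.clone()` on an `Arc` bumps the refcount rather than copying the inner value:

```rust
use std::sync::Arc;

fn main() {
    let a = Arc::new(vec![1, 2, 3]);
    // Method resolution finds Arc::clone before Vec::clone via auto-deref,
    // so this bumps the reference count instead of copying the Vec.
    let b = a.clone();
    assert_eq!(Arc::strong_count(&a), 2);
    // The explicit form behaves identically; it's just clearer to readers.
    let c = Arc::clone(&a);
    assert_eq!(Arc::strong_count(&a), 3);
    drop(b);
    drop(c);
    // Cloning the inner value requires going through the deref explicitly.
    let inner: Vec<i32> = (*a).clone();
    assert_eq!(inner, vec![1, 2, 3]);
}
```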


The types will be different depending on which operation is used to clone, so I wouldn’t really worry about this one too much.


The author mentions the following:

    fn add(x: Option<i32>, y: Option<i32>) -> Option<i32> {
        if x.is_none() || y.is_none() {
            return None;
        }
        return Some(x.unwrap() + y.unwrap());
    }
    The above looks kind of clunky because of the none checks it needs to perform, and it also sucks that we have to extract values out of both options and construct a new option out of that. However, we can do much better than this thanks to Option’s special properties! Here’s what we could do:

    fn add(x: Option<i32>, y: Option<i32>) -> Option<i32> {
        x.zip(y).map(|(a, b)| a+b)
    }
Do folks really prefer the latter example? The first one is so clear to me and the second looks inscrutable.


You do get better at using the functional operators as you use them, and they can be incredibly powerful and convenient in certain situations, but in this case he's missing the simplest implementation of this function, using the `?` operator:

    fn add2(x: Option<i32>, y: Option<i32>) -> Option<i32> {
        Some(x? + y?)
    }


Wait! The ? operator works with the `Option<T>` type? I assumed it was only reserved for `Result<T,E>`.


To slightly elaborate on yccs27's good answer, at the moment, you can use ? for both Option<T> and Result<T, E>, but you can only use them in functions that return the same type, that is

  fn can_use_question_mark_on_option() -> Option<i32> { /* ... */ }

  fn can_use_question_mark_on_result() -> Result<i32, ()> { /* ... */ }
if you have them mixed in the body, they won't work directly. What you should/can do there depends on specifics. For example, if you have a function that returns Option and you have a Result inside, and you want any E to turn into None, you can add .ok() before the ? and then it still looks nice. (The compiler will even suggest this one!)
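
For example (a hypothetical `half` function; `str::parse` returns a `Result`, and `.ok()` converts it so `?` matches the `Option` return type):

```rust
// Hypothetical helper: parse a string and halve it, returning None on any failure.
fn half(s: &str) -> Option<i32> {
    // str::parse yields a Result; .ok() turns it into an Option so that `?`
    // fits the function's return type. The compiler suggests exactly this.
    let n: i32 = s.parse().ok()?;
    Some(n / 2)
}

fn main() {
    assert_eq!(half("42"), Some(21));
    assert_eq!(half("not a number"), None);
}
```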


And an even smaller caveat: if you use older (distro-provided) Rust versions, note that it may look like there is some partial implementation allowing mixing via NoneError, etc. Ignore it: it doesn't work, and it was removed in later versions.


But also: don't use your distro provided version of Rust. It's intended for compiling distro packages that depend on Rust, not for developing with Rust. Get Rust from https://rustup.rs


And hence for writing Rust code you would like to package for your distro of choice. I publish my software because I want others to use it, and including it in distro repos is the most friendly way to do that.


It currently works with `Result` and `Option` types, and with RFC#3058 will work with all types implementing the `Try` trait.


Monadic do notation equivalent at that point, I suppose.


Although that would be really fun, the proposed `Try` basically only allows you to call the `Result<_, E>` by a different name.


I also (having written a few tools in Rust, dabbling now and then) was under the impression that it only worked with Result. Is this recent, or have I just never encountered it?


? can be used with Option as of Rust 1.22, released in November 2017: https://blog.rust-lang.org/2017/11/22/Rust-1.22.html


Well, I guess this counts as a concept I wish I learned earlier!


TLDR; learning Rust through canonical code in tutorials often requires the student to learn bits of the language that are more advanced than the actual problem the respective tutorial is trying to teach how to solve in Rust. ;)

I prefer the latter now that I understand how all the Result/Option transformations work. As a beginner this would be hard to read but the former looks clunky.

Clippy also got pretty good lately at suggesting such transformations instead of if... blocks. I.e. I guess that means they are considered canonical.

In general I find canonical Rust often more concise than what a beginner would come up with but it does require deeper understanding. I guess this is one of the reasons why Rust is considered 'hard to learn' by many people.

You could actually teach Rust using pretty verbose code that would work but it wouldn't be canonical (and often also not efficient, e.g. the classic for... loop that pushes onto a Vec vs something that uses collect()).
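
To illustrate that last point with a hypothetical `squares` function (both versions behave identically; the `collect()` form is the canonical one):

```rust
// Verbose but working: the classic loop that pushes onto a Vec.
fn squares_loop(n: u32) -> Vec<u32> {
    let mut v = Vec::new();
    for i in 0..n {
        v.push(i * i);
    }
    v
}

// Canonical: collect() can also pre-allocate via the iterator's size hint.
fn squares_collect(n: u32) -> Vec<u32> {
    (0..n).map(|i| i * i).collect()
}

fn main() {
    assert_eq!(squares_loop(4), squares_collect(4));
    assert_eq!(squares_collect(4), vec![0, 1, 4, 9]);
}
```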


This is very true - to fully explain a "hello world" program you'd have to dive into macro syntax... When writing my Rust book I often start by showing the naive solution, and then later move to more canonical code once I've introduced more syntax that enables something more elegant. But I'm aware that I'm showing something non-optimal to start. Maybe that loses some trust, like people think they're learning something only to be told later on that it's wrong? On the other hand if you start with the optimal solution you have to teach so much syntax that it's overwhelming. I expect that some folk want every example to be perfect, but I'm going with an approach where you iterate through each chapter and as you progress through the whole book the examples get better and more idiomatic.


This is basically an eternal battle when teaching. Personally I prefer to try and stay to only idiomatic code if at all possible, but there's pros and cons to each approach.

(This is one reason why I'm glad other people are also writing books! Not everyone likes my style!)


Or when necessary, use the verbose approach to lead toward and explain the idiomatic approach!


Even then: code examples get copied. People go "oh this works for me" and don't bother reading on to the better approach.

It's tough :)


I'd probably write this

    fn add(x: Option<i32>, y: Option<i32>) -> Option<i32> {
        match (x, y) {
            (Some(x), Some(y)) => Some(x + y),
            _ => None,
        }
    }


I like this the best. It is dumb and clear. Anyone can read this even with minimal to no Rust experience but it still flows elegantly.


I'm sorry, but this perspective is absolutely bizarre to me.

We should not avoid language features that reduce boilerplate and drastically increase comprehension for people who have experience in a language in order to cater to people who have minimal or no experience in that language.

Should we not use `?` in Rust because it might be obscure to someone who's never used the language? Should we not use any of the `Iterator` functions (including `zip` and `map`) because they might be confusing to C programmers who only know `for` loops?

Functional concepts are everywhere these days. Most of them are not hard. `zip` and `map` do not require understanding of homotopy type theory to understand, they are essentially trivial functions. They are available across every type you could possibly iterate over in Rust, and if you understand what they do on one of those you essentially understand what they do on all of them.

This is a toy example, but virtually every piece of hard evidence we have in this field shows that—within reason—more concise code has fewer bugs and is easier to comprehend than longer, more verbose equivalents. Writ large across a project, doubling or tripling the amount of code to bend over backwards accommodating complete novices is lunacy.


I half agree. The question mark version seems to be good as well, in fact also more idiomatic.

If the zip/map version is more common then I take everything back. But it seems less malleable and clear than the pattern matching examples. I had to stop and think for a second to get it. I find pattern matching in general more declarative for small amounts of items.

---

In terms of preferring clearer and simpler code: Absolutely yes. I avoid unnecessary abstractions, especially if the gains are so minor or questionable. It's a matter of empathy and foresight.

Whether that's the case here: I don't know. It might very well be that this is common and clear in the Rust world.


Every abstraction is unnecessary. And the gains are almost always minor… until you apply some of those abstractions across an entire code base.

C-style `for` loops were the norm for decades. Now virtually every language gives you some ability to iterate directly over every element in a collection. Replacing a `for` loop with an iterator over each element is never necessary. The old way worked for decades. The gains are minor. Should we go back to C-style `for` loops? If not, why not?

When you understand the answer to that, you’ll understand why that same logic applies to trivial functions like `zip` and `map` that simply take the idea one minor step further.


I’m not arguing against abstraction in general. I argue that clarity comes first and that in this specific instance pattern matching seems more clear to me.


Sorry, this is just a pet peeve of mine in general.

All the time I see people argue against “unnecessary” abstractions. But this almost always comes from the perspective of “I don’t personally understand it yet” which is just not a reasonable bar for anything to have to clear. Second most frequently it’s “less clear” which often just means “I haven’t internalized it yet” which is likewise a terrible evaluation method for something like functional iteration methods that have the possibility of being used virtually everywhere. And almost equally as often, the underlying objection is that it isn’t useful because the old way is just fine thanks, which is typically a perspective that has completely forgotten about all of the sharp edges and bugs that we all just grew to accept from the preexisting approach.

All of these types of objections are knee-jerk reactions. There are good arguments against bad abstractions, leaky abstractions, infrequently-used abstractions, overly-complex abstractions, and all sorts of other failures to abstract well. But people are so used to these that they reflexively oppose any new abstraction as overly complex or unnecessary simply because it’s new to them.

And that’s a terrible perspective to have, because quite literally all of the progress that has ever been made in the practice of software engineering has been due to abstraction.


The `zip` and `map` functions used here are actually functions of the module std::option, not std::iter. While they are the same idea "in essence", they have different implementations. The std::option ones are a simple pattern match, while the std::iter ones are more complex. For example, std::iter::zip returns a std::iter::Zip, while std::option::zip returns an Option of a tuple.

I'll also add that option's zip and map are also implemented with a pattern match, like above.

One error and one deviation from the established norm for a toy example is a lot. At the scale of a codebase it would be a catastrophe.


Yes, I know.

But if you look closely you'll notice that `zip` and `map` were called directly on an array here and not actually on an iterator. That's a third implementation of the same concepts. If Rust had HKTs they could all be the exact same implementation, but not today.

The important thing, though, is that they all conceptually do the same thing. Understanding one essentially translates to understanding them all. If zip/map are called directly on two Options, you get an Option containing a tuple back out. If they're done on two arrays, you get an array containing tuples back out. If they're done on two iterators, you get an iterator containing tuples back out.

There's nothing to be confused about.


> But if you look closely you'll notice that `zip` and `map` were called directly on an array here and not actually on an iterator.

No, I don't think that's true, unless we're talking about two different things. In the article, and in the following post https://news.ycombinator.com/item?id=34428999, zip is used on an Option<i32>, takes another Option<i32>, and returns an Option<(i32, i32)> (which is a tuple, not an array), on which map is applied to extract the two values and add them.

> If zip/map are called directly on two Options, you get an Option containing a tuple back out. If they're done on two arrays, you get an array containing tuples back out. If they're done on two iterators, you get an iterator containing tuples back out.

But that's my whole point. std::option::map is not the same function as std::array::map, which is not the same function as core::iter::Iterator::map. One big difference, for example, is that core::iter::Iterator::map is lazy, while the others are not, hence the note to try to avoid chaining std::array::map, and being careful around it in performance-critical code: https://doc.rust-lang.org/src/core/array/mod.rs.html#466.

Even with HKTs, while you could share some code, that wouldn't change the fact that the "direct" map (Option::map, for example) is strict, while the iterator map (Iterator::map, reachable via Option::iter) is lazy. Especially in a language often used for performance-sensitive tasks, I can't agree that understanding one map translates to understanding them all, since that would ignore part of their ergonomics and, more importantly, their performance characteristics.
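
The eager-vs-lazy difference is easy to demonstrate (a small sketch using a counter):

```rust
use std::cell::Cell;

fn main() {
    let calls = Cell::new(0);

    // Option::map is eager: the closure runs immediately.
    let doubled = Some(2).map(|x| { calls.set(calls.get() + 1); x * 2 });
    assert_eq!(doubled, Some(4));
    assert_eq!(calls.get(), 1);

    // Iterator::map is lazy: nothing runs until the iterator is consumed.
    let iter = (0..3).map(|x| { calls.set(calls.get() + 1); x * 2 });
    assert_eq!(calls.get(), 1); // still 1: no work has happened yet
    let v: Vec<i32> = iter.collect();
    assert_eq!(v, vec![0, 2, 4]);
    assert_eq!(calls.get(), 4); // closure ran once per element on collect
}
```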


Agreed. That's way more readable.


I think the idiomatic way to write that is:

  fn add(x: Option<i32>, y: Option<i32>) -> Option<i32> {
      Some(x? + y?)
  }


> Do folks really prefer the latter example? The first one is so clear to me and the second looks inscrutable.

Literally everything in programming is inscrutable until you learn it the first time. The latter should be trivial to understand for anyone who's spent even a little amount of time in a language with functional elements.

A day-one beginner doesn't understand a `for` loop. You probably think they're trivial. Bitwise operations are the same. They might be new to you, but `zip` and `map` frankly don't take much more effort to understand than anything else you probably take for granted. `zip` walks through everything in two separate wrappers and pairs up each element inside. `map` opens up a wrapper, lets you do something with what's inside, and re-wraps the result.

For instance, you can do the exact same thing with arrays. Pair up each element inside (like a zipper on clothing), then for every element inside, add them together:

    [1, 3].zip([4, 1]).map(|(a, b)| a + b)  // [5, 4]
That said, you can write this specific function even simpler:

    Some(x? + y?)


The latter is a lot clearer and simpler. The former requires me to reason about control flow, if, and early return, a whole bunch of magic concepts. The latter is just an expression made of normal functions; I could click through and read their implementation if I was confused.


Both are rather ugly. This is much more idiomatic:

    match (x, y) {
        (Some(x), Some(y)) => Some(x + y),
        _ => None,
    }


Don’t know Rust, but wouldn’t this have to be:

    (Some(x), Some(y)) => Some(x + y)
    else => None


You're correct, except that "else" is a keyword and so cannot be used there. You'd want

  _ => None,
instead, which is the "catch all" arm.

(For those that didn't catch it, the parent code is trying to use None, but it's actually a tuple, and there's four different cases here, not two. So the catch-all arm is better than spelling each of them out in this case.)
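
Spelled out, the four arms look like this (the `_` arm above collapses the last three):

```rust
fn add(x: Option<i32>, y: Option<i32>) -> Option<i32> {
    // A tuple of two Options has four shapes; everything involving a None
    // maps to None, so the three explicit arms here are usually written `_`.
    match (x, y) {
        (Some(a), Some(b)) => Some(a + b),
        (Some(_), None) | (None, Some(_)) | (None, None) => None,
    }
}

fn main() {
    assert_eq!(add(Some(1), Some(2)), Some(3));
    assert_eq!(add(Some(1), None), None);
    assert_eq!(add(None, Some(2)), None);
    assert_eq!(add(None, None), None);
}
```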


You're right – fixed. That's what I get from writing code in a simple text area.


Your "fixed" version is also broken (at time of writing). :-)


Fixed again. I'm starting to run out of excuses... ;-)


This has been a thing since Rust 1.0. Just use the beautiful properties of match (or the "later" `if let`, of course). I prefer this and wish I could say it was idiomatic, but some tools like clippy push users over to helper methods instead of simple applications of match.


Pretty sure clippy will tell you to rewrite it as:

    if let (Some(x), Some(y)) = (x, y) {
       Some(x + y)
    } else {
       None
    }
`match` in place of `if` looks weird. IMO example with `zip` is better though.


Clippy will not complain about the parent's code. It's not really in place of an if; there's four cases there. To be honest, I find 'if let... else None' to be worse looking than the match, though I'm unsure if I truly prefer the zip version to the Some(x? + y?) version.


    Some(x? + y?)


Once you know what map does with an Option, I'd say it is mostly pretty readable. Basically, map (when run against an Option value) is a way to say "if the value passed in has a value, run this function on it; otherwise return None".


The first one is very clear, I agree. However if I wrote Rust daily, I would probably be familiar with the second one and would prefer it. Here's an article kind of related to that, in this case talking about APL, that I think explains very well the tradeoffs: https://beyondloom.com/blog/denial.html.

To try with my own words: programming is about shared understanding of a problem, but also the tools used to solve the problem. Code is text, text has a target audience. When it is experts you can use more complex words, or more domain-specific words. When it's intended for a wider audience, taking the time to explain and properly define things, sometimes multiple times, can be necessary.

According to Rust's documentation of Some:

> zip returns Some((s, o)) if self is Some(s) and the provided Option value is Some(o); otherwise, returns None

> zip_with calls the provided function f and returns Some(f(s, o)) if self is Some(s) and the provided Option value is Some(o); otherwise, returns None

Using zip_with seems more appropriate (x.zip_with(y, +) or something) but zip_with is nightly. I also don't like how object chaining makes so that x seems more "fundamental", or "in another category" than y and +, while really x and y are the same, and + is something else. The if solution shows clearly that x and y are the same, by treating them exactly the same. The second solution also introduces a and b from nowhere, doubling the number of variables used in the function. All small things, but I think it can help put words on why precisely the second isn't as readable as it may seem.

It's interesting how much can be said about a simple "add" function.


First one isn't idiomatic anyway with return. You can just do Some(x? + y?) though.


Using `zip` feels excessively clever to me. I'd probably prefer something in between, like `and_then` followed by `map`, or just matches.


Not sure if it's only me but after using `zip` for the first time in any language, I tend to overuse it too much while there are better, more idiomatic alternatives.


If you are fine with using `map` then I don’t see how this is “clever”. `zip` is basically “map2”.


What’s there to say? It works the same way as `zip` does for an iterator over a `Vec`. So if you understand `zip` as applied to an iterator over `Vec`, then you might understand `zip` applied to `Option`.

In other words:

Y is clear if you already understand X. Like how returning early is simple to understand if you understand if-blocks.

That’s the problem with replying to these kinds of questions: the response is so short and context-dependent that it can look curt.

EDIT: the first code is also inelegant in that it effectively checks if both values are `None` twice in the `Some` case: once in the if-condition and once in `unwrap()` (the check that panics if you try to unwrap a `None`).


Yea the latter code is unreadable to me and feels obfuscated, like the author is trying to force functional programming where it doesn't belong.


You might choose to believe that, but `Option` and `Result` are practically purpose-built in Rust to work extremely well with functional approaches.

And doing so greatly increases the likelihood that the compiler can produce perfectly optimal code around them.


My issue was with the `zip()` usage. For lists I know that it will stop short once one of the lists has run out of items, but I haven't seen it used this way to combine optional values. I'm assuming it only produces a result if all of the elements passed in are non-null (based on the prior code) but it still seems too clever IMO. IDK, maybe this is a common pattern I'm unaware of.


I've done a non trivial amount of functional programming and know what zip and map do but I can't off the top of my head work out how that example works.


`x.zip(y)` evaluates to `Option<(i32, i32)>`


Options can be mapped/zipped/“iterated” over (traversed might be a better word?).

So in this case it’s using that fact to form a tuple of non-null values whenever the option is not null, and then acting on that. I think it’s kinda neat, but I wouldn’t personally use zip in this case, I’d have gone with map or and_then depending on whether the output of my operation is T or Option<T>.
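
The `map` vs `and_then` distinction, sketched (with a hypothetical `checked` closure):

```rust
fn main() {
    let x = Some(4);

    // map: the closure returns a plain T, which gets rewrapped in Some.
    assert_eq!(x.map(|n| n * 2), Some(8));

    // and_then: the closure itself returns an Option, so fallible steps
    // chain without producing a nested Option<Option<T>>.
    let checked = |n: i32| if n % 2 == 0 { Some(n / 2) } else { None };
    assert_eq!(x.and_then(checked), Some(2));
    assert_eq!(Some(3).and_then(checked), None);
    assert_eq!(None.and_then(checked), None);
}
```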


I'd prefer a match approach


As someone new to Rust, I look at the latter and see `.zip()` (apparently unrelated to python zip?), and then a lambda/block which intuitively feels like a heavyweight thing to use for adding (even though I'm like 90% sure the compiler doesn't make this heavyweight).

By comparison, the first one is straightforward and obvious. It's certainly kinda "ugly" in that it treats Options as very "dumb" things. But I don't need to read any docs or think about what's actually happening when I read the ugly version.

So TLDR: This reads a bit like "clever" code or code-golfing, which isn't always a bad thing, especially if the codebase is mature and contributors are expected to mentally translate between these versions with ease.


What you find "clever" or not is really a function of what you are most used to seeing. There are likely many folks who use combinators frequently who find them easier to read, myself included.

The first example, to me, is the worst of all worlds, if you want to be explicit use `match`. Otherwise, having a condition then using `unwrap` just feels like it's needlessly complicated for no reason... Just adding my subjective assessment to the bikeshed.


100%.

In the first example—the longer, tedious one—I have to look at the condition to make sure the resulting `unwrap`s never actually happen, and if I reason about it wrong I get an application panic.

    x.zip(y).map(|(a, b)| a + b )
The above is 100% clear, can obviously never panic, trivially produces optimal code, and is how you'd write the exact same operation to add elements between sets, arrays, or anything else iterable.

People act like the above requires a Ph.D. in Haskell when it really requires about fifteen minutes of playing around with basic functional concepts that are in at least half the popular programming languages these days. At which point you realize a ton of annoyingly tedious problems can be solved in one line of code that can be easily comprehended by anyone else who's done the same thing.

It's the same thing as driving on the highway. Anyone driving slower than you is an idiot, anyone driving faster than you is a maniac.


> (even though I'm like 90% sure the compiler doesn't make this heavyweight).

Yes, as of Rust 1.66, this function compiles to

  example::add:
      xor     edi, 1
      xor     edx, 1
      xor     eax, eax
      or      edx, edi
      sete    al
      lea     edx, [rsi + rcx]
      ret


It's the same zip, just think of an Option as being a list of length at most 1. [] = None, [1] = Some(1), etc.
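
That mental model is literal in Rust: `Option` implements `IntoIterator`, so iterator-style code works on it directly (a small sketch):

```rust
fn main() {
    // Some is a one-element sequence, None is an empty one.
    let some: Vec<i32> = Some(1).into_iter().collect();
    let none: Vec<i32> = None.into_iter().collect();
    assert_eq!(some, vec![1]);
    assert_eq!(none, vec![]);

    // So iterator-style zip/map works on Options too, mirroring Option::zip.
    let sum: Option<i32> = Some(2)
        .into_iter()
        .zip(Some(3))
        .map(|(a, b)| a + b)
        .next();
    assert_eq!(sum, Some(5));
}
```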


It is kinda common pattern for some FP languages (Haskell), so it doesn’t seem too clever to me.


> So TLDR: This reads a bit like "clever" code or code-golfing, which isn't always a bad thing, especially if the codebase is mature and contributors are expected to mentally translate between these versions with ease.

You contradict yourself. You can’t deride it as “clever” (whatever the quotes mean) and then in the next breath say that it might be a practical style.

And yes, Rust code in the wild does look like this.


Nice article, but I'm not sure I like using the Deref trait just to make code "cleaner" - it does the opposite IMO, making it harder to understand what's going on.

Deref is convenient for builtin "container" types where the only thing you'll ever do is access the singular value inside it. But sprinkling it everywhere can get confusing ("why are we assigning a &str to a struct here?")


In addition to containers, it seems to be useful for 'wrapper' types adding decorators to the 'base' object without having to wrap/unwrap the new type in each operation.

Classic OOP would use inheritance to achieve this, while something like Deref allows you to use it with all the added behavior - without losing the possibility to assign to it values of the base type.
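
A minimal sketch of that wrapper pattern (hypothetical `Logged` type), which also shows why it can confuse readers:

```rust
use std::ops::Deref;

// Hypothetical wrapper type adding behavior around a String.
struct Logged(String);

impl Deref for Logged {
    type Target = str;
    fn deref(&self) -> &str {
        &self.0
    }
}

fn main() {
    let name = Logged(String::from("ferris"));
    // All of str's methods show up transparently through auto-deref...
    assert_eq!(name.len(), 6);
    assert!(name.starts_with("fer"));
    // ...which is convenient, but nothing at the call site hints that
    // `name` is not actually a str. That's the readability complaint.
}
```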


A good discussion of why using Deref to simulate inheritance is considered an "anti-pattern": https://rust-unofficial.github.io/patterns/anti_patterns/der...


In general, I don't like the term "anti-pattern" or its sibling "best practices". Those terms give off an authoritative aura instead of spurring curiosity, and often people don't seem to remember the rationale but just associate X with absolute bad and Y with absolute good.

Softer language like "guidelines" and "recommendations" or playful language like "tricks" or "hacks" seems more useful to me. "Hacks" is dirty and _interesting_ instead of normative and unquestionable.


Thanks for this; I saw it, and it made me twitch, but I didn't know why... All I had was "composition over inheritance"


Nice list!

This is a nitpick on my part, but this part on PhantomData:

> Tells the compiler that Foo owns T, despite only having a raw pointer to it. This is helpful for applications that need to deal with raw pointers and use unsafe Rust.

...isn't quite right: `_marker: marker::PhantomData<&'a T>,` doesn't tell the compiler that `Foo` owns `T`, but that instances of `Foo` can't outlive their `bar` member. `bar` is in turn borrowed, since a pointer is "just" an unchecked reference.

You can see this in the playground[1]: `foo1` and `foo2` have the same borrowed `bar` member, as evidenced by the address being identical (and the borrow checker being happy).

Edit: What I've written above isn't completely correct either, since the `PhantomData` member doesn't bind any particular member, only a lifetime.

[1]: https://play.rust-lang.org/?version=stable&mode=debug&editio...


Not quite, it doesn’t tell the compiler much about how Foo relates to its bar field unless you also constrain the public API for creating a Foo. If Foo is constructed out of a ‘new’ method with this signature:

    impl<'a, T> Foo<'a, T> {
        pub fn new(bar: &'a T) -> Self {
            Self { bar, _marker: PhantomData }
        }
    }
… AND the bar field is private, then you are ensuring the Foo container doesn’t outlive the original bar pointer. If you don’t constrain creation and access to the bar field then people can just write foo.bar = &new_shorter_borrow as *const T; and then the lifetimes are absolutely unrelated, foo.bar will become invalid soon. A classic example of doing this correctly is core::slice::IterMut, and the slice.iter_mut() method.

A short illustration (note how only one of the Foos creates a compile time error): https://play.rust-lang.org/?version=stable&mode=debug&editio...

A short explanation of what you’re telling the compiler with PhantomData (without accounting for the new method/ constraints on creation) is that Foo appears to have a &'a T field, for the purposes of the outside world doing type and borrow checking on Foo structs and their type parameters. That does two things:

(1) Due to implied lifetime bounds, Foo’s generics also automatically become struct Foo<'a, T: 'a>, so this ensures T outlives 'a. That will prevent you accidentally making Foo<'static, &'b str>.

(2) You don’t have an unused lifetime, which is an error. We always wanted Foo to have a lifetime. So this enables you to do that at all.

Edit: and (3) it can change the variance of the lifetimes and type parameters. That doesn’t happen here.
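
Putting those points together, a minimal sketch (hypothetical `Foo`, same `new` signature as above, with a safe accessor in the spirit of `IterMut::next`):

```rust
use std::marker::PhantomData;

struct Foo<'a, T> {
    bar: *const T, // private: callers can't swap in a shorter-lived pointer
    // Pretend to hold a &'a T so borrow checking treats Foo like a borrow.
    _marker: PhantomData<&'a T>,
}

impl<'a, T> Foo<'a, T> {
    fn new(bar: &'a T) -> Self {
        Foo { bar, _marker: PhantomData }
    }

    fn get(&self) -> &'a T {
        // SAFETY: `bar` came from a &'a T in `new`, and the private field
        // guarantees nobody has replaced it with a shorter-lived pointer.
        unsafe { &*self.bar }
    }
}

fn main() {
    let x = 10;
    let foo = Foo::new(&x);
    assert_eq!(*foo.get(), 10);
}
```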


Genuinely asking: what about this is different from what I said in the original comment? It's possible that I'm using the wrong words, but my understanding of Rust's ownership semantics is that "owns a reference" and "borrows the underlying value" are equivalent statements.


You can have a PhantomData without a bar field. You could have baz and qux fields as well. The compiler does not know the lifetime in the phantom data is connected to any other field whatsoever. It doesn’t look through the other fields searching for *const T. It’s just a lifetime and is connected to T, not any particular pointer to T. The bar field happens to have a T in it, so the T: 'a constraint does apply to what you store inside the pointer, but the *const T is lifetime-erased as it relates to the pointer, so you have to ensure that the lifetime of the *const T matches up with the lifetime your Foo type claims it has (in its documentation). You must do that manually by constraining the struct creation and field access. You would also typically have a safe method to turn bar back into a borrow in some way (like IterMut::next does), and when you do, you will want to be sure that the pointer is actually valid for the lifetime you say it is.


Thanks for the explanation! I can see how my nitpick wasn't right then, either. I'm going to make an edit noting that.


Having worked with Rust for a little while (about a year), none of this stuff is particularly esoteric. However, it is a great list of things that are good to know but that you don't necessarily need all that often (or learn from the official Rust book).


You're right, but for those learning Rust, there's _so much_ to learn, that having occasional reminders of the more basic things is really handy. I've been hobby programming in Rust for years and professionally for about 6 months, and I still picked up one or two simple things from this list.


I'll share an architectural pattern from Rust that is pretty ubiquitous but not really mentioned in the official books.

Skip the whole Weak/Rc nonsense and jump directly to using an allocator (like slotmap) when dealing with cyclic data structures. Wrap the allocator in a struct to allow for easy access, and all the difficulty of cyclic data structures in Rust disappears.
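
A minimal sketch of the arena idea using plain `Vec` indices (the real slotmap crate adds generational keys so stale indices can't silently alias new nodes):

```rust
// The arena owns every node; "references" between nodes are just indices,
// so cycles need no Rc/Weak and the borrow checker stays happy.
struct Graph {
    nodes: Vec<Node>,
}

struct Node {
    value: i32,
    neighbors: Vec<usize>, // indices into Graph::nodes; cycles are fine
}

impl Graph {
    fn add(&mut self, value: i32) -> usize {
        self.nodes.push(Node { value, neighbors: Vec::new() });
        self.nodes.len() - 1
    }

    fn link(&mut self, a: usize, b: usize) {
        self.nodes[a].neighbors.push(b);
        self.nodes[b].neighbors.push(a);
    }
}

fn main() {
    let mut g = Graph { nodes: Vec::new() };
    let a = g.add(1);
    let b = g.add(2);
    g.link(a, b); // a cycle: a <-> b
    assert_eq!(g.nodes[g.nodes[a].neighbors[0]].value, 2);
    assert_eq!(g.nodes[g.nodes[b].neighbors[0]].value, 1);
}
```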


    Really annoyed by the borrow checker? Use immutable data structures
    ... This is especially helpful when you need to write pure code similar to that seen in Haskell, OCaml, or other languages.
Are there any tutorials or books which take an immutable-first approach like this? Building familiarity with a functional subset of the language, before introducing borrowing and mutability, might reduce some of the learning curve.

I suspect Rust does not implement as many FP-oriented optimizations as GHC, so this approach might hit performance dropoffs earlier. But it should still be more than fast enough for learning/toy datasets.
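
Lacking such a tutorial, here's a tiny taste of the style: a persistent cons list over `Rc`, where "mutation" returns a new list sharing the old one's storage (the im crate provides production-grade versions of this idea):

```rust
use std::rc::Rc;

// A tiny persistent list: pushing returns a new list sharing the old tail.
enum List {
    Nil,
    Cons(i32, Rc<List>),
}

fn push(tail: &Rc<List>, head: i32) -> Rc<List> {
    Rc::new(List::Cons(head, Rc::clone(tail)))
}

fn sum(list: &Rc<List>) -> i32 {
    match &**list {
        List::Nil => 0,
        List::Cons(v, rest) => *v + sum(rest),
    }
}

fn main() {
    let empty = Rc::new(List::Nil);
    let a = push(&empty, 1);
    let b = push(&a, 2); // b shares a's storage; a is unchanged and still valid
    assert_eq!(sum(&a), 1);
    assert_eq!(sum(&b), 3);
}
```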


> I suspect Rust does not implement as many FP-oriented optimizations as GHC

It's more complicated than this; sometimes FPish code is super nice and easy and compiles well (see the zip example in this very thread!) and sometimes it's awkward and hard to use and slower.


Personally I’ve done a small amount of reading of the Hands On Functional Programming in Rust book. I’ve also found Scott Wlaschin’s F# resources quite transferable (though you may run into some issues with passing async functions around as arguments).


I think proper immutable data structures can be quite fast without sophisticated compiler magic if they are read and cloned often (cloning is an abstraction) and are generally long lived.

Rust makes mutations safe, but immutability has benefits outside of safety.


Author has confused the drop functions. `Drop::drop` and `mem::drop` have nothing to do with each other.


Hm? `mem::drop` is defined in terms of `Drop`[1].

Are you thinking of `mem::forget`?

[1]: https://doc.rust-lang.org/std/mem/fn.drop.html


The complete function definition is provided in the documentation there, and it isn't defined in terms of Drop. It's just a single line function with an empty body.

In fact, mem::drop will accept any value, whether it implements Drop or not.

The author of the article is definitely quite confused about Drop vs mem::drop. mem::drop is not an implementation of Drop.


Oh, I see what you mean: it does look like they've confused `mem::drop` with an implementation of `Drop`.

> In fact, mem::drop will accept any value, whether it implements Drop or not.

This doesn't mean that it isn't defined in terms of Drop, because there is no such thing as !Drop in Rust. Even things that are Copy are implicitly Drop, it's just that the copy is dropped instead of the value.


> Even things that are Copy are implicitly Drop, it's just that the copy is dropped instead of the value.

While that mental model could make sense in an abstract way, it's not literally true. Copy types are forbidden to implement Drop.

    fn takes_drop<T: Drop>(t: T) {
        todo!()
    }
    
    fn main() {
        takes_drop(5i32);
    }
gives

    error[E0277]: the trait bound `i32: Drop` is not satisfied
     --> src/main.rs:6:20
      |
    6 |         takes_drop(5i32);
      |         ---------- ^^^^ the trait `Drop` is not implemented for `i32`
      |         |
      |         required by a bound introduced by this call
      |
    note: required by a bound in `takes_drop`
     --> src/main.rs:1:22
      |
    1 |     fn takes_drop<T: Drop>(t: T) {
      |                      ^^^^ required by this bound in `takes_drop`


Thanks for the example! Yeah, I'm realizing my framing (around the Drop trait, and not "droppability" as a concept) was incorrect.

Would it be more accurate to say that Rust guarantees the droppability of owned values? I know there's a guarantee that &'static items never have their underlying value dropped, but that works because you can never actually hold the static value itself, only its reference.


> Would it be more accurate to say that Rust guarantees the droppability of owned values?

I'm not really sure, exactly, since "droppability" isn't really a thing that's talked about, because as you're sort of getting at here, it's universal, and therefore not really an interesting concept.

> I know there's a guarantee that &'static items never have their underlying value dropped,

Even this is sort of... not correct. Consider this program:

  #[derive(Debug)]
  struct Boom;
  
  impl Drop for Boom {
      fn drop(&mut self) {
          println!("BOOM");
      }
  }
  
  use std::mem;
  
  static mut FOO: Boom = Boom;
  
  fn main() {
      let mut s = Boom;
      
      unsafe {
          dbg!(&FOO);
      
          mem::swap(&mut FOO, &mut s);
      
          dbg!(&FOO);
      }
      
  }
This will print BOOM, as the initial Boom is swapped out from behind the reference and then dropped.


(Oh I just realized, for those not super familiar with Rust, mem::swap is safe but since dereferencing FOO is unsafe, I just left in one block, not two.)


I think your second paragraph is a misinterpretation of how Rust works.

Everything isn’t implicitly Drop. Drop is an explicit cleanup mechanism that types can opt into.

If it helps you to think of it conceptually as everything having an implicit no-op Drop, then I guess that’s fine, but that’s not what is happening.

There is an automatic Drop “glue code” for types that contain types that implement Drop, so that those will get called, of course. But `i32` does not have Drop, at all.

> Even things that are Copy are implicitly Drop, it's just that the copy is dropped instead of the value.

You cannot implement Drop on a Copy type, so Drop literally never gets called on Copy types. You can’t put non-Copy types inside a Copy type, so there isn’t even Drop glue code. And no, it isn’t implicitly Drop at all! And it has nothing to do with a copy being dropped instead of the original value. Drop isn’t a universal trait.

I also seem to remember in the early post-1.0 days that whether a type implemented Drop or not would significantly impact lifetime analysis, requiring some occasionally obtuse workarounds. Rust lifetime analysis accepts many more correct solutions these days, and it has been awhile since I wrote a lot of Rust code, so I don’t recall how it is these days.


> If it helps you to think of it conceptually as everything having an implicit no-op Drop, then I guess that’s fine, but that’s not what is happening in the generated code.

I understand that types that don't implement Drop do not literally have an implicit `Drop` trait implemented for them by the compiler.

What I meant is that there is no "undroppable" type in Rust: the best you can do is make the type panic in a custom Drop implementation, but any function that takes ownership of a value is effectively described as forwarding, dropping, or forgetting that value based on the lifetime/type of its return. In other words, `mem::forget` can only be defined in terms of Drop (or default drop behavior for a type) in terms of ownership semantics, because its signature admits no other possibilities.


> In other words, `mem::forget` can only be defined in terms of Drop (or default drop behavior for a type) in terms of ownership semantics, because its signature admits no other possibilities.

But again, Drop is a destructor trait. It might be confusing that this shares a name with the concept of "dropping" in Rust, which is when a value goes out of scope, but they're not the same thing. Not every value has Drop, and mem::drop doesn't just work for values that are Drop. It is not defined in terms of Drop, but just Rust's ownership semantics.

In fact, you can define a `drop` function that only accepts Drop types: https://play.rust-lang.org/?version=stable&mode=debug&editio...

Although I am disappointed that the automatically generated Drop glue doesn't "count" for this purpose, and there isn't a higher level Drop trait, so this isn't a fully general solution.

I also don't know where the concept of "undroppable" came from for this conversation. Taken literally, that would mean that the compiler would emit an error any time a value of that type would need to be dropped, so those values could only exist in functions that either return them or return `!`, or as global static values. I never suggested that was a possibility, and Rust does not support types that are undroppable, but it does support types that are not Drop.


Thanks for the explanation! Yeah, I'm realizing that I'm using these terms ambiguously: I do understand the difference between dropping and Drop, but I tend to think (incorrectly!) of the latter as an "implementation" of the former, when it really isn't.


Great read — the author should feel proud. This made my morning.


This is somewhat off-topic but it's nice to see someone using Zola for their own blog, awesome SSG built on Rust!


> Use impl types as arguments rather than generic constraints when you can

Eep. No. At least, not in anything public. Universal impl trait arguments can't be turbofished, so you're placing an unnecessary constraint on your caller.


As I understand it &impl Meower also uses dynamic dispatch, whereas the generic version will generate a specialized version for each concrete type it is called with.


Are you sure? I thought that was dyn Meower.


You’re right! My bad


> Universal impl trait arguments can't be turbofished, so you're placing an unnecessary constraint on your caller.

What does this mean?


One thing I wish Rust and C++ had and that I have only seen in Carbon is pass-by-reference by default + an explicit syntax to pass by copy + for rust, some syntax to mark that we are passing a mutable/exclusive reference.


That doesn't make sense for Rust. Rust's references in function arguments aren't an equivalent of C++ reference arguments.

Rust doesn't reason in terms of by-reference vs by-value passing. It doesn't have pervasive expensive copy constructors that need to be avoided, NRVOs, and things like that.

Rust works in terms of owning and borrowing. Moves have a special case of `Copy` types like i32, but this works only for POD types, is at worst a shallow memcpy, and the types have to opt in to being copyable like that. The default is non-copyable, even for trivial structs and integer enums.

Drawing false parallels with C++'s pointer types is a major source of people fighting the borrow checker. References aren't for not-copying, they're for not-owning. `Box<T>` is a pointer that passes T by reference, but there's no `&` involved, because it is owning. OTOH Passing an argument via `&` is not just "by reference", but may require borrowing a value, which needs a location to be borrowed from, may extend scopes of loans, need specific lifetimes, etc. It's way more than just a perf tweak and is a PITA when it's done implicitly (which async fn does to some extent).


Small note- at least one of the links on mobile does not respect the text column width of the page, so the actual page width is wider than the text and horizontally scrollable.


Yes, most of my code deals with graph-like data structures. It is _really_ hard to understand how to write this in Rust. Just doesn't fit in my head.


Well, it's not in the article, but I saw somebody doing a guard clause pattern that I started to copy.

i.e.

    let min_id = if let Some(id) = foo()? { id } else { return; };

    ...

    let bar = if let Some(bar) = baz()? { bar } else { return; };

    ..

    // vs

    if let Some(id) = foo()? {
        ...
        if let Some(bar) = baz()? {
            ..
        }
    }

It's nice to also do `if let (Some(x), Some(y)) = ...` but sometimes you need the result of `x` to do a few things before you can get `y` (or maybe don't want to evaluate `y` depending on `x`).

---

I like the `where` syntax more than the example formatting.

    fn x<T>(t: T) where T: Foo {}


Have you heard about let-else? It’s a recent syntax addition. That example translates as

    let Some(min_id) = foo() else { return };
    // continue coding at same indent


> There are two kinds of references in Rust, a shared reference (also known as a borrow)

Is that really what they're called? It seems confusing to me: if it's shared (i.e. many people use it at the same time) how can it also be borrowed (i.e. one person has it)?


There are multiple ways of referring to this stuff.

* mutable/immutable (this is the way the book refers to it, and these used to be the 'official' terms, but I don't know if they've changed that since I left; partially this is the name because you spell it "&mut".)

* shared/exclusive (this is what some people wish we had called mutable/immutable; there was a debate about it a very, very long time ago called the "mutapocalypse.")

Both sets are adjectives for "borrow."

I agree with you that "shared borrow" can feel like a contradiction in terms.

In general, they are duals of each other: it depends if you want to start from a place of "can I change this" or "who all has access to this," making each term more clear depending on the perspective you take.


This clears things up, thanks.


This is a fantastic list. I've certainly learned something new.

The comments here are unnecessarily negative. People seem to be upset about things that don't look familiar. Don't let the negative comments get to you. Keep up the good work.


I don't think the comments mean to say the list is useless. It's a nice overview and I learned a few things as well. However, having written a non-trivial amount of code in Rust, it really struck me how many places contain various factual errors (however small they are). And yes, Rust is a complex language, but if you mix up terms like this without noticing the differences, I'm afraid that won't make it any easier.

To add an example of my own:

    fn do_the_meow(meower: Meower)
seems like you want to take (consume, own) an object implementing Meower, which, as the article correctly explains, is not possible like this. A suggested solution,

    fn do_the_meow(meower: &dyn Meower)
is very different - it is now correct with regards to trait access; however, you're now just taking a reference. The correct owned replacement would be Box<dyn Meower>. And the final solutions,

    fn do_the_meow<M: Meower>(meower: M)
    fn do_the_meow(meower: &impl Meower)
the first one is correct and equivalent to the original intent - it takes (owns) the value. However, the second variant (which, again, as correctly stated, is the best solution) is different again - it takes a reference (`&`) to `impl Meower`.

Coming back to my point - it is important to separate the ampersand from the impl/dyn part. To suggest an improvement, one could first write all variants of this function taking a reference (`&Meower`, `&dyn Meower`, `&M` and `&impl Meower`), and later introduce the difference between where one can use sized/unsized types, that Box<dyn Meower> is the owned equivalent of &dyn Meower, and why one can't have an owned `dyn Meower` just lying around.


In the traits example

> which tells the compiler “I just want something that implements Meow”.

The ‘trait Meower’ also implies the same, right? If so, why can’t we use that?


"trait Meower" just declares the trait. "impl Meower" tells the compiler to accept any type that implements the trait. It's the same as adding a generic parameter "<M: Meower>" and using "M", as with the example above that one. (Except in the second example it's placed behind a shared reference; it still works either way with either syntax.)


> Normally, Rust would complain,

Not in the example given. There's nothing wrong with creating an Rc<T> loop; the borrow checker doesn't come into the picture.

> That is, you could mutate the data within an Rc as long as the data is cheap to copy. You can achieve this by wrapping your data within a Rc<Cell<T>>.

T: Copy is only a bound on the .get() method. You can do this even if the data is expensive to copy, so long as you always swap in a valid representation of T. (I sometimes write Cell<Option<T>>, where it makes sense to replace with a None value.)

> Embrace unsafe as long as you can prove the soundness of your API,

In other words: avoid unsafe except as a last-ditch resort.

> &impl Meower

Should be impl Meower, if you want the same behaviour as the explicit-generic version.

> Many tutorials immediately jump to iterating over vectors using the into_iter method

Out of interest, what tutorials? I've never read one that does that!

> Instead, there are two other useful methods on iterators

Methods on many iterables in std. Not on iterators (nor iterables in general).

> You can wrap the following types with PhantomData and use them in your structs as a way to tell the compiler that your struct is neither Send nor Sync.

… You're doing it wrong. €30 says your unsafe code is unsound.

> Embrace the monadic nature of Option and Result types

Er, maybe? But try boring ol' pattern-matching first. It's usually clearer, outside examples like these ones (specially-chosen to make the helper methods shine). I'd go for if let, in this example – though if the function really just returns None in the failure case, go with the ? operator.

> For example, writing a custom linked list, or writing structs that use channels, would typically need to implement a custom version of Drop.

No, you don't typically need to. Rust provides an automatic implementation that probably does what you need already. Custom drop implementations are mainly useful for unsafe code.

> Really annoyed by the borrow checker? Use immutable data structures

No. No. Haskell has all sorts of optimisations to make "immutable everything" work, and Rust's "do what I say" nature means none of those can be applied by the compiler. If you want to program this way, pick a better language.

> Instead, you can define a blanket trait that can make your code a more DRY.

There are no blanket traits. There are blanket trait implementations, and you've missed that line from your example.

All in all, this is a good article. There are some good tips in there, but I wouldn't recommend it to a beginner. I would recommend the author revisit it in a few months, and make some improvements: a tutorial designed when you're green, but with the experience from your older self, can be a really useful thing.


The book "Programming Rust" (very good book btw) only mentions iter() and iter_mut() briefly and focuses on into_iter, because the IntoIterator trait only defines into_iter.

You can replicate iter() with `(&x).into_iter()` and you can replicate iter_mut() with `(&mut x).into_iter()` and obviously `x.into_iter()` consumes the contents.


The one about exclusive references is good.


God almighty, that language is so fugly to look at


I know, right? It uses way too many styles of brackets (just use the round ones) and puts them on the wrong side of the operator.


Yes and then the Capitalization() paired with the :: and ([<<<>>>])()[] and most importantly {(|| ())}


> there is a lot of excellent Rust content catering to beginners and advanced programmers alike. However, so many of them focus on the explaining the core mechanics of the language and the concept of ownership rather than architecting applications.

The author then goes on to write an article largely covering the mechanics of the language rather than architecting applications.


hah got me there. Content on that topic will be coming soon after this post :). This was my first foray into writing about the language


My wishlist would include design patterns for business applications in Rust, where a beginner-intermediate level Rust programmer could learn the language and how to use the language practically at the same time.


Rust is a systems programming language, with a large number of systems-programming-motivated concepts to learn before you stop getting stuck.

I suspect, if someone is looking for copy&paste code patterns for business applications (like CRUD)... they're going to get stuck in situations where they hit brick walls that the cargo-culting rituals didn't cover.

Will they have somehow learned enough Rust by then to solve their problem, or will they be frantically posting StackOverflow questions on what code to copy&paste to do what they need... and end up creating new brick walls that are even harder to extricate themselves from?

With lots of business applications developers on Agile-ish methodology, where workers have to continually be able to claim they are "completing tasks" at a rapid pace, I think Rust would make them cry. It's hard to call a task complete when it won't compile. And working with borrowing/lifetimes/etc. is almost always going to take longer than (what Agile very-short-term thinking rewards) leaning on copy&paste and GC and mutating with abandon for the current task, while letting any increased defects due to that become new tasks.

And when those Rust business developer workers are crying and slipping on their deliverables, the leads/managers who chose to use Rust (with people who really just want to use something more like Python or Ruby or JavaScript)... are going to get called onto the carpet to explain. Live by the Agile theatre, die by the Agile theatre.

(At a couple previous startups, where there was systems programming to do in addition to business-y apps, I've actually proposed using Rust as a way to attract certain kinds of genuinely enthusiastic developers, and to make it harder for some kinds of low-quality code to slip in. But I probably wouldn't do Rust if I only had a normal business application, and only normal business developers who were asking for code examples to cargo-cult.)


Full time Rust web dev here (have been so for about a year).

Feature delivery was slow to start due to familiarity with the language, the business domain, and the project. Now that the patterns within our project have been established (EDA, Vertical Slice, some DDD for those interested) it’s actually proving quite easy to work on.

Have been a ts dev in a past life, and while I wouldn’t necessarily reach for Rust for future green fields it has been quite pleasing to work with for the last year.


architecture patterns and antipatterns would be a welcome contribution


Rust for Rustaceans is a good book that covers some of this, but is definitely not recommended for beginner Rust users.


I read this 3-4 times and realized why people stick to python though


"Eschew flamebait. Avoid generic tangents."

https://news.ycombinator.com/newsguidelines.html

We detached this subthread from https://news.ycombinator.com/item?id=34429403.


Because, what, Python doesn’t have any language details related to method lookup that might require a sentence or two of technical explanation? __getattr__, __getattribute__, __get__, base classes, metaclasses, __mro__, __mro_entries__, __dict__, __slots__, __call__, bound and unbound methods—none of this has ever confused someone looking at it for the first 3–4 times?

I assume you’re going to reply that you don’t need to think about all this when writing most Python. And that’s true of most Rust too. Don’t get me wrong, Rust does have some unique complexity that Rust programmers need to learn in exchange for its unique safety guarantees—but method lookup isn’t where the complexity is.


It’s basically as complicated as “If a subclass and a parent class both define a `bar()` method, and you have an instance of the subclass named `foo`, calling `foo.bar()` will call the subclass version”

I actually find that simpler than some of the spaghettis you can create with Python’s multiple inheritance :P


Those types are wrappers that keep a reference to some data. In any language, it's common to encounter issues when people confuse copying the underlying data with copying just the container around the data and keep pointing to the same thing.

In Python, a list of lists has the same behavior.

    l = [ [ 1, 2, 3 ] ]
    l2 = l.copy()

    l[0][0] = 99

    print(l)   # [[99, 2, 3]]
    print(l2)  # [[99, 2, 3]]


there was similar confusion regarding pass by reference and pass by value when Java first came out.


The complexity exposed by Rust is overwhelming for people that haven't done any low level programming.

It's not just lifetimes, a basic Rust function can expose the ideas of exclusive and non-exclusive ownership, traits, generics with and without trait bounds, algebraic data types, closures, closures that move, async, async blocks, smart pointers, macros, trait objects, etc etc.

There is absolutely a learning curve, and it makes me understand why Golang is the way it is. The language authors of Golang saw the business needs of Google, realized that they need to get domain experts who are not programming savants to write code, and created a language that would allow the largest number of non-programmer domain experts to be productive coders.

That being said, the Golang developers went too far and let too many of their biased ideas of what is "simple" infect the language. In what way is it simple for a language to not be null safe? I have a bunch of other complaints too.

But anyway, I don't think Rust is going to be a revolution in writing software unfortunately. I don't think enough people will be able to learn it for companies to be able to hire for Rust positions.


Go and Rust solve different problems. Rust is a very complicated language, but it's a much safer language for systems programming. I don't think that there is a better language for systems programming.

Go is probably the most widely used 'simple' language, and it's primarily used for applications. You can find some examples of using Go for systems-like programming, but that's not where it shines.


Rust keeps getting hyped so I thought I should give it a go. I’ve been writing JS/TS, Ruby, PHP, Python for 10+ years.

It was a real struggle, and I wasn’t enjoying it at all. “Programming Savant” is a great way to put it because all the people I know who like Rust also happen to be the smartest engineers. In the end I gave up because I wasn’t getting any value. I was also trying to find a Rails equivalent, tried Actix and I didn’t like that either.


Rust is a good replacement for C/C++. It's not a good replacement for the languages you listed.


Disagree, I have a web server in Actix that I would have written in TypeScript and Node, or Python and Django, or Ruby and Rails, and the Rust version is similarly high level but far faster and more efficient. I recommend people read Zero To Production In Rust if they want to build a web API in Rust.

https://zero2prod.com


That's a good book. Project focused and practical.

From my limited experience, I think it is possible to write simple, dumb Rust and still get an N-times performance benefit using N-times fewer resources compared to the languages you listed, at the cost of being more verbose.

There's a learning curve to understanding concepts like lifetimes and so on. But diving deeper, many problems arise. There are just a ton of hyper-specific and nuanced types and traits in Rust, and that's just the std lib. Reading Rust is also quite difficult because it takes time to navigate the types and abstractions. Dependency trees also seem to be deep in OSS projects.

It's a powerful, great language, but I would say the learning curve and the subtleties can _easily_ be underestimated, because there is an initial AHA moment when lifetimes, traits, closures, async etc. start to click. But the devil is in the details.

But I think the need/want for safe, fast languages with good security defaults is on a steady rise, especially in web development. Clients (laymen) have started to notice and praise good performance and robustness more than they did 10y ago in my experience, so it's definitely worth it to take a look at languages like Rust.


I had the opposite experience. I wanted a quick HTTP server that would proxy some requests where I couldn't control the client or the real servers. I hadn't written either language at the time, but I had it fully functional in seconds in Golang. In Rust I used Actix, and halfway through, the dependencies disappeared; even afterwards it was still a hassle, even with help from people on the subreddit and Discord.

Reduced to its barebones, the problem is:

1. Expose a HTTP server

2. Make n HTTP requests

3. Combine their output

4. Return the result

But I was definitely expecting to have an outcome in a few hours. This might be a language where it takes more than that for me to become proficient enough to use async code.

Here's an example from my friend, almost exactly my problem and what was suggested: https://old.reddit.com/r/rust/comments/kvnq36/does_anyone_kn...


Not sure what you mean by dependencies disappearing.

> This might be a language that it takes more than that time for me to be proficient enough to use async code in.

Yes, it does take more time to learn the language, but once you do, it's pretty straightforward. Rust is a different way of thinking (a Meta Language dialect in C clothing, as I like to call it; it looks like C but it's definitely not the same semantically) while Go is similar enough to C-based languages and web based languages like Javascript that it's relatively easy to get started with.

It's much the same as learning Haskell or APL, the entire paradigm will be different and can't be picked up in a few hours. However, if you already have functional or array programming experience, Haskell and APL will seem straightforward. It is because you have a lot of C-language or JS based experience that Go was easy for you.


I'd just started on things and they randomly disappeared the repo. There was some Rust community kerfuffle that was going on. I don't know the details. It was really annoying at that time, but it's really not a big deal now.

Yeah, it's probably just the programming paradigm that was different, but if you look at that post I posted there, it's not that straightforward (all the code is there so you can try it out).

For what it's worth, I found burntsushi's code wicked easy. I don't write Rust but I have my own forks for `xsv` and `ripgrep` and I could just pop in and add a new command and some functionality. But that actix-web experience was intense. I spent like 4 hours on it and couldn't make it work and then the repo disappeared and I just used go.


Oh you must have used it in the Actix Web drama time period in 2020. Yeah it's much better now. You might want to try it again, it's as straightforward as Go now I believe.


Thanks, I'll give it another whack!


> the Rust version is similarly high level

In typical TypeScript/Python, you don't need to worry about memory allocation. At all.

If you're writing any non-trivial Rust application, you're quickly going to have to learn a lot of concepts regarding memory, and your code will reflect that.

Now, if your application must be high-performance then Rust is a no-brainer. Rust is much better than the alternative (probably C++). But, most web applications don't need to be high-performance, and it would be better to use a language that is easier to learn and write.


I've been going through this book, not once did I have to think about memory management (beyond adding a few &), it's simply not needed at this high level and crates manage most of the low level details for you. I especially haven't had to think about Arc, Rc, etc either.


Unfortunately I haven't had the same experience as you. Even for a simple program I've had to think about passing by value/reference, lifetime annotations, static lifetimes, etc.

That's not to say I don't like Rust; I hope it takes over the world that C/C++ have a grip on.


Rust is a decent replacement for any static compiled language too. It's honestly a breath of fresh air if you actually need to use parallelism and had to experience that with other static languages.

Even my peers who transitioned from Ruby to Rust had a pretty smooth time, since Ruby's functional-ish style, reliance on Enumerable, and block-based coding lend themselves well to Rust's idioms.

But if you've only ever stayed in imperative dynamic-language land and don't remember university coursework on programming languages, compilers, or operating systems, yes, you'll probably be in for a world of pain and frustration.


Disagree, I've used it to replace Ruby at work. I haven't had any criticisms of the language itself, only the library ecosystem and community knowledge make it difficult. I spent a lot of time working out what the right library is or working without examples for some libraries. In the end the finished product is as easy to work on as our Ruby code once the patterns are established and libraries imported.


Rust shines in two domains: high performance, and systems programming. Ruby is most commonly used for web applications (e.g. Ruby on Rails).

Rust is much lower-level than Ruby, so in general you're going to be a lot more concerned about memory -- in this case, satisfying the borrow checker, dealing with stack vs heap allocation, etc.

> the finished product is as easy to work on as our Ruby code

I... highly doubt this. It's harder to hire Rust programmers than it is Ruby programmers. It's harder to teach Rust than it is Ruby. And, again, your Rust code is going to be much more in the weeds than Ruby would be. Rust also is a pretty complex language -- there are a lot of concepts to learn if you want to use it well. Ruby is fairly easy to learn.

I'm not saying that Rust isn't the right language for you, or that Rust is bad (I love Rust!), I'm just saying that it's much lower level than Ruby, and that comes with tradeoffs.


The latest service I built kept a list of URLs to poll in a database, and it polls, processes, and sends off the result. Using an HTTP library and an ORM, I found this task to be pretty much the same difficulty as doing it in Ruby, only it was far more reliable than Ruby.

Making HTTP requests and using an ORM in Rust wasn't harder than in Ruby. I didn't have to think much about the borrow checker or memory, as it was all single-threaded simple stuff.


> because all the people I know who like Rust also happen to be the smartest engineers.

I wish people didn't say or think things like this. A lot of the time people get into these complex things and talk over others to make themselves look smart, or make things look more complicated than they really are to improve their image. I ran into this a lot when dealing with Rust. If you can't do complex things in a clear and simple way, you're just regular.


Certainly, many applications don't need this kind of complexity.

But if you're working on one that does, you're typically glad to use Rust :)


Python is not immune to this (for different reasons though)... If you want to be snide online, make sure you're correct first :)


RE: Any article about the GIL


Yeah. Speed and memory efficiency don’t come for free. Use Rust when you benefit enough to compensate for the cost of grokking the plumbing.


Arc is a low level tool for writing multi threaded programs. If it scares you back to Python you could also just write single threaded Rust.


There are many simpler languages with speed and memory efficiency (Nim, Crystal and the like). Rust somehow became even more complex than C++; what an achievement (not).


Two things that might help Rust a lot despite the complexity are the tooling and the ecosystem. Cargo is good, the compiler is extremely helpful, and there are a lot of crates to build on for all sorts of tasks.

For example, if I need to use simulated annealing to solve an optimization problem, there already exist libraries that implement that algorithm well.[1] Unfortunately, the Haskell library for this seems to be unmaintained[2] and so does the OCaml library that I can find.[3] Similarly, Agda, Idris, and Lean 4 all seem like great languages. But not having libraries for one's tasks is a big obstacle to adoption.

Nim looks very promising. (Surprisingly so to me.) Hopefully they will succeed at gaining wider recognition and growing a healthy ecosystem.

[1] E.g., https://github.com/argmin-rs/argmin

[2] https://hackage.haskell.org/package/hmatrix-gsl-0.19.0.1 was released in 2018. (Although there are newer commits in the GitHub repo, https://github.com/haskell-numerics/hmatrix. Not too sure what is going on.)

[3] https://github.com/khigia/ocaml-anneal


Fwiw, hmatrix-gsl works trivially and painlessly. Why does a working library need changes?


Good to know that hmatrix-gsl works too. In this case, I went down the Rust route instead of the Haskell one. I use Haskell too, just not for simulated annealing.

My main domain is scientific computing and I get nervous about the prospect of not being able to run my code 5 or 10 years down the road (somewhat above the typical lifespan of a project that ends up published in my discipline). GHC gets updates that sometimes require library maintainers to update their libraries. Here is a list of a few:

https://github.com/fumieval/Haskell-breaking-changes

Rust promises no breaking changes in the stable part of the language. Hopefully this promise will hold up.


> Rust somehow became even more complex than C++

I use C++ professionally, and the small subset of it that I've learnt is already way more complicated than Rust.


Python is not even remotely an option for the sorts of programs Rust is intended for.


“Fed up with the borrow checker? Try writing your own immutable linked list.” Yeah, that’ll help!


You mean use immutable data structures?


I love how Rust has so many footguns that you need to read a dozen blogs from no less than a month ago to avoid utter confusion.


Footguns are exactly what Rust doesn't have.

Some might criticise it for forcing you through a complex process to acquire a firearms license, selling you a gun with an effective safety, and then insisting that you wear extremely heavily armoured shoes.


I consider footguns to be things that cause me to waste a ton of effort and build architecturally unsound code. The things in the article are actually just basic language concepts you need to understand to be productive.

So to use the footgun analogy, if you don't know the stuff in the article you won't even be able to pull the trigger.

An actual footgun is something like monkeypatching in Python or Ruby, or running forEach with an async callback in JavaScript.


Now that's a strawman if I've ever seen one.


Quite the opposite. Rust won’t even let you pull the trigger. C++ on the other hand is very much littered with footguns.



