Decided to start learning Rust last month. I have very little previous programming experience and the journey has been a bit brutal so far, so I wanted to share my "learning journey".
"The book", although a great book, is fairly dense with no exercises. Rustlings [1] helped me apply a lot of what I was learning, and I supplemented the book with the Tensor Programming tutorials on YouTube [2]. Recently I started developing a game in Bevy [3], and although it's a bit over my head at this point I've managed to get a character moving, kill a monster, and receive a drop. My motivation for learning Rust was to get familiar with Substrate [4], as I'd like to switch careers into programming. Would also love any recommendations/tips & tricks from any experienced Rustaceans!
Thanks for this! I read through the first couple of chapters and it's already plugged some holes in things I didn't really understand.
Can you point me to some really good Rust code on GitHub? I've been reading through some and see how people do some interesting things, but I don't have the mastery to understand which practices are better than others.
I just want to wish you good luck on your journey to software engineering. It is an incredibly satisfying and rewarding experience, and I know that many people who have chosen this path as a career change are happy and successful!
Agreed. And I'm not sure if this is more or less precise than "model", but I often think about Rust as a process and community for developing a programming language, rather than as a programming language itself. This seems related to your 'model' remark.
It's amazing how perfectly they're able to balance pragmatism with idealism/principles. For me that might be the defining trait of this language and community; most languages swing too hard in one direction or the other.
In general, nindalf's reply is very on point about the mechanics. To expand ever so slightly on this bit:
> If the folks who work in that area are supportive
Rust has teams that make decisions to accept designs in their part of the project. They look at proposals and decide to accept, reject, or postpone them. This process can take a while, depending on all sorts of factors. Sometimes a design may be good, but it may not be the time yet, which is when postponing happens. Sometimes it's a "we don't plan on doing this" and that's when things get rejected. Even if a design is accepted, we don't require that people proposing the design do the implementation work, so if you do propose something and it does get accepted, it may take a while until it actually exists. Furthermore, stuff is in unstable at first, until people can gain experience with the feature, so even after an RFC is accepted, it takes some time until it's "stabilized," at which point it's part of the language proper.
Thanks for the detailed response! I checked out the RFC repo posted by nindalf, and I was wondering if you could explain (very briefly) the issue tagging scheme? Do the "A-" tagged issues mean they're active?
Edit: For more context, the README says:
> If you are interested in working on the implementation for an "active" RFC, but cannot determine if someone else is already working on it, feel free to ask (e.g. by leaving a comment on the associated issue).
I went and checked out the issues and couldn't tell which ones were active. (I also don't plan on implementing an RFC, so no worries if this is something that should be apparent to someone more involved.)
"A" is short for "Area", which is something we inherited from Mozilla, IIRC.
I don't think there's a good way to filter on "active", it's mostly like, if you go to the issue and it's open, then it's 'active.' It's a bit odd, because in some sense, for the RFC repo, once the RFC is merged it's "done", and then the rest of it is implementation and therefore tracked on the main Rust repo. The "C-tracking-issue" issues are ones that are open and tracking an RFC that's in implementation, https://github.com/rust-lang/rust/issues?q=is%3Aissue+is%3Ao...
("C" is short for "Category". These prefixes are... well the original ones made sense but I think there was some retconning going on at some point, haha)
There's discussion in the corresponding Github issue of the RFCs repo (https://github.com/rust-lang/rfcs). If the folks who work in that area are supportive, the RFC will be merged. After that someone needs to implement the feature.
Take a look at the Readme in the RFCs repo for more details.
I believe Rust took both attributes and format strings from C#. It looks like Python's modern format string syntax dates to 2006, and I think(?) that C#'s format string syntax has been present since its first release in 2002. The PEP adding the modern string formatting syntax links to .NET documentation, although the link is now dead: https://www.python.org/dev/peps/pep-3101/#references
I remember hating the idea when the PEP was first accepted. I hated the thought of refactoring tools breaking, typos leading to empty replacements, etc.
In reality, f-strings work really well and are super elegant. I'm a convert now.
Yes, there's a lot of small details that need to be ironed out before we get a clear picture of what the feature might be (remember, there's no guarantee!). There's a reason this is simply reserving the syntax, rather than actually proposing the feature. :)
Of course, I didn't mean to imply any certainty, only that the blog post mentioned that it would likely resolve to format_args! which doesn't return a string but rather std::fmt::Arguments.
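A small illustration of that last point, using today's format_args! rather than the reserved f"" syntax:

    use std::fmt::Arguments;

    // format_args! produces a std::fmt::Arguments value, not a String; it
    // borrows its captures, so it's normally consumed right away, e.g. by
    // passing it to a function or to write!/println!.
    fn log(args: Arguments<'_>) {
        println!("{}", args);
    }

    fn main() {
        let answer = 42;
        log(format_args!("the answer is {}", answer));
    }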
I love Rust, but one thing that seems like a glaring design flaw is that adding a new trait implementation can change the behavior of existing code. This is very counterintuitive, because it seems like a purely additive change. In my opinion, the convenience of auto-dereferencing "as much as possible" to make method call syntax magically work through the indirection of references is not worth the lack of stability guarantees that it causes. This isn't a normal kind of breakage where the compiler points out what code you need to fix; this is a much worse kind of breakage where you might not even realize your code now behaves differently. In my opinion, the notion of "editions" is not an acceptable solution to this.
For a language that prioritizes safety, there are a surprising number of gotchas in Rust (another example is the large number of partial functions in the standard library).
> I love Rust, but one thing that seems like a glaring design flaw is that adding a new trait implementation can change the behavior of existing code. This is very counterintuitive, because it seems like a purely additive change.
> In my opinion, the convenience of auto-dereferencing "as much as possible" to make method call syntax magically work through the indirection of references is not worth the lack of stability guarantees that it causes.
Auto-deref is just one way of many that the dot operator can resolve to an unintended method. This problem would still exist even if auto-deref were removed. What you're really objecting to is the existence of the dot operator at all. I think the dot operator pulls its weight, given how often it's used: I encourage you to check out Simon Peyton-Jones' thoughts on "the power of the dot" [1].
I'm pretty confident Simon Peyton-Jones would never find it acceptable that adding a new type class instance changes the behavior of existing code, regardless of what convenience it brings. Yes, the dot operator is useful (as SPJ acknowledges in that talk), but convenience at the cost of sacrificing the ability to reason about code and how its meaning evolves is not worth it to the typical Haskeller. SPJ isn't talking about Rust specifically in that talk; he's only talking about the convenience of IDE integration and namespacing based on the receiver.
There are lots of Haskell features where adding new instances changes the behavior of existing code. OverlappingInstances is merely the most obvious example, but there are plenty of others.
Static typing helps significantly here, though obviously not completely. Often, the failure mode is a compiler error, not silent code changes. Note the example of TryFrom; the failure mode isn't that your program is now running a different implementation, but that, because there are now two possible implementations, it becomes ambiguous which is selected and a compile-time error results.
Same with the array change: the failure mode was breakage. While in this case the method's name, arguments, and argument types are identical, the return type is different, which means other code expecting a certain type now has a different type. For that to silently change behavior, everything about the return type would also need to match up.
A lot of `expr.expr` would need to become `(&expr).expr` or `(&mut expr).expr` if auto-dereferencing wasn't a thing. C++ has `.` and `->` but Rust only has `.`
(To be clear, what's happening with `array.into_iter()` is unsize coercion, not auto-dereferencing. `[T; N]` does not impl `Deref<Target = [T]>`. But the point is the same.)
In other words, there is auto-ref in addition to auto-deref, and auto-ref causes the same issues of possibly-surprising method resolution. As I mention above, auto-deref is just one way that methods could resolve to the "wrong" implementation, and there would be no solution that I can see other than to remove dot entirely, which isn't viable.
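To make the `array.into_iter()` case concrete, a small sketch (assuming a 1.53+ toolchain):

    fn main() {
        let arr = [1, 2, 3];

        // On the 2015/2018 editions, `arr.into_iter()` auto-refs and
        // resolves to `impl IntoIterator for &[T; N]`, yielding `&i32`.
        // On Rust 2021 (or with fully-qualified syntax on any edition
        // since 1.53), the by-value impl is used instead:
        for x in IntoIterator::into_iter(arr) {
            let _owned: i32 = x; // elements are yielded by value here
        }
    }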
a.foo(b,c)
vvvv // a is &X, so replace it with (*a)
(*a).foo(b,c) // wrong; should be a.foo(b,c); we called
vvvv // a method on the pointer, not what it points to
X::foo(???(*a),b,c) // should be A::foo(???a,b,c)
// continue as above
Auto-dereferencing changes which type the method is looked up relative to, not just how the object is passed to it.
Yes yes, there's auto-ref, auto-deref, deref coercion, unsize coercion, ... All of them are forms of "`lhs.rhs` tries to look up a different type to apply the `rhs` to vs what type `lhs` actually has," or more generally "I wrote an expr of type T but the compiler treats it as an expr of type U for :reasons:"
I called it "auto-dereferencing" because that's what OP used when talking about `[].into_iter()`, and then clarified that it's actually an unsize coercion, not auto-deref.
> All of them are forms of "`lhs.rhs` tries to look up a different type to apply the `rhs` to vs what type `lhs` actually has,"
Nope, it uses the same type (A), just blindly adds an operator to the argument based on which method is called. You could just as easily have `a.foo()` -> `A::foo(++a)` instead; it's just[0] that adding `++` would be largely useless.
0: You'd also need an explicit annotation on the declaration of foo, but that's a convenience issue.
One thing that I repeatedly bump into in Rust is the oddity of multi-argument functions. To explain the problem, let's look at another language for a moment: in SML there is no such thing as multi-argument functions per se. There are two ways to write them anyway:
* the function's argument is a tuple (this is the more common way, not only in SML, but also how it's done most commonly in maths);
* currying.
I'm not a fan of currying, because it singles out the first argument over the others. However, the first approach (tuples) is very ergonomic, because now you can pass the return value of one function as an argument to a "multi-argument" function. One application of that is function composition. But another is a simple map. I've lost count of how many times in Rust I had an "o: Option<(A, B)>" and a "f: fn(A, B) -> C" and couldn't call "o.map(f)"; instead the programmer is forced to write more noisy code with lambdas. And lambdas sometimes don't play well with the borrow checker for mysterious reasons (even those that don't capture anything).
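A minimal sketch of the mismatch (with a made-up `add` function):

    fn add(a: i32, b: i32) -> i32 { a + b }

    fn main() {
        let o: Option<(i32, i32)> = Some((1, 2));

        // Doesn't compile: `map` wants a closure taking a single
        // `(i32, i32)` tuple, while `add` takes two separate arguments.
        // let sum = o.map(add);

        // The workaround: a destructuring closure.
        let sum = o.map(|(a, b)| add(a, b));
        assert_eq!(sum, Some(3));
    }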
Also, I think it would be nice if the compiler generated named discriminants for every enum. Currently, "std::mem::discriminant" returns an opaque "std::mem::Discriminant<T>" (whose values can't be named directly). In order to compare discriminants I need to create a full-blown enum value and extract the discriminant from it. I've worked around this problem by using the "strum" crate, which has a macro to produce another (plain) enum, but having the discriminant type be named "std::mem::Discriminant<T>" (to signal the connection to the original enum) and have named variants at the same time would be the best of both worlds.
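For example (a sketch with a made-up `Shape` enum):

    use std::mem;

    enum Shape {
        Circle(f64),
        Square(f64),
    }

    fn main() {
        let shape = Shape::Circle(1.0);
        // Checking "which variant is this?" means building a second,
        // throwaway value just to extract its discriminant; the `0.0`
        // below is pure noise.
        let is_circle =
            mem::discriminant(&shape) == mem::discriminant(&Shape::Circle(0.0));
        assert!(is_circle);
    }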
EDIT: If I knew more about possible breakage caused by my first suggestion, I could probably start working on an RFC. However, I feel I don't have enough knowledge about Rust to anticipate that.
For my second suggestion I think I'm going to try to propose that. I'm reading the docs on the process now.
"Non-serious" because it won't be stable any time soon, and it's not as concise as `o.map(f)` anyway.
More seriously, I don't think `o.map(f)` will ever happen, because it'll be backward-incompatible in the case of `o: Option<(T,)>` and `f: <U>impl FnOnce(U)` - `U` used to be `(T,)` and now would be `T`. Something like `o.map(spread(f))` might work if vararg generics are added to allow a generic `spread` that works for any tuple and matching-arity FnOnce.
I assume you meant `.ok_or`, not `.expect`. Either way, I'm not really sure how this or your `.ok_or_else` comment is relevant to this sub-thread. `.map` is not a method on tuples.
[EDIT: comment applicable only to an older version of the parent comment]
This "foo" still suffers from the same problem as "Option::map" does: it is hard-coded for functions of fixed arity (2 in this case, 1 in Option::map's case).
Needing to write this function doesn't shorten the code. The SML-inspired solution allows one to write an "Option::map" which accepts functions of all possible arities (since they are all 1).
But yes, if there are a lot of places where I want to call "Option::map" for an "f" which takes 2 arguments, that could work. Although, personally, I think I'd still opt for an explicit lambda, in order to communicate intent more directly (avoid one level of indirection).
>This "foo" still suffers from the same problem as "Option::map" does: it is hard-coded for functions of fixed arity (2 in this case, 1 in Option::map's case).
Actually, I'm sorry too. I just realized I had a brainfart while writing that example and forgot to make the point I wanted to make. I've fixed it now. (It was supposed to work with any-arity tuples as well.)
> the function's argument is a tuple (this is the more common way, not only in SML, but also how it's done most commonly in maths);
The Ceylon programming language worked that way[1] and it was really cool to be able to do just what you wanted: `o.map(f)`.
It's a shame that other languages didn't seem to pick up this idea... I've no idea if this could be made to work in Rust though (given Rust has no GC and it would go against its philosophy to allocate parameters like this on the heap just to create tuples - maybe the compiler could get rid of the actual tuples altogether?).
Even though Ceylon uses tuple types to represent parameter lists, a function accepting multiple arguments (with the parameter list represented by a tuple: Callable<Void, [Integer, String]>) is still different from a function accepting a tuple of multiple fields (with the parameter list represented by a tuple of a tuple: Callable<Void, [[Integer, String]]>). You can’t just call one as if it were the other. Ceylon works the same way as Rust here, as far as I can tell.
BTW, for Rust beginners who might be concerned that o.map(f) is somehow a difficult problem, the solution in Rust is simple: o.map(|(a, b)| f(a, b)).
Swift had this feature until version 2.3; it was removed in 3.0 because it added a lot of complexity and seemingly confused users. I'd search for a link but I'm on mobile.
Perhaps you meant that variables which are closed over are in an anonymous struct or in a tuple. I am fuzzy about the details, but as far as I know parameters are passed as in other functions.
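Closure calls go through the Fn traits, whose (unstable) methods look roughly like this:

    // nightly-only shape of FnOnce; Fn and FnMut are analogous
    pub trait FnOnce<Args> {
        type Output;
        extern "rust-call" fn call_once(self, args: Args) -> Self::Output;
    }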
where Args is a tuple of all the actual arguments.
You used to be able to cause an ICE on nightly if you used something other than the correct argument[1], but instead it now tells you that it should have been a tuple:
error: functions with the "rust-call" ABI must take a single non-self argument that is a tuple
--> src/main.rs:7:5
|
7 | extern "rust-call" fn call_once(self, args: i32) -> Self::Output {}
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
error[E0059]: cannot use call notation; the first type parameter for the function trait is neither a tuple nor unit
--> src/main.rs:12:5
|
12 | x(42);
| ^^^^^
> Having enum variants be structs has been proposed
That's great news.
> that could solve your issue if `mem::discriminant<Enum::Variant>` returned what you wanted.
I think you misunderstood, or I explained it poorly.
Given an:

    enum Enum {
        A(u32),
        B,
    }

I would like mem::Discriminant<Enum> to be the same type as if there were such a definition in code (illegal in Rust, because Rust doesn't have specializations):
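    // hypothetical "specialized" definition (not valid Rust)
    enum Discriminant<Enum> {
        A,
        B,
    }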
You cut out the rest of the sentence, which is needed to make sense of it.
What I mean is, with the "strum" crate I can use a proc macro to automatically generate an "EnumDiscriminant" enum for an "Enum" enum, where "Enum" is defined as:
    enum Enum {
        A(u32),
        B,
    }
and "EnumDiscriminant" is defined as:
    enum EnumDiscriminant {
        A,
        B,
    }
However, the only thing linking those two is a naming convention. I would like to have a universal name for such discriminants.
The Rust standard library already has such a discriminant type for any enum - mem::Discriminant<T>, where T is the enum type. However, in order to get the discriminant of Enum::A(5) I need to write mem::discriminant(&Enum::A(5)), which introduces noise (especially with bigger tuples or structs) - the 5 here is irrelevant. The mem::discriminant() function is useful for contexts where I have a variable of type Enum and want to get its discriminant, but when I just want to pick a particular discriminant, it's too noisy. I would like to be able to name the discriminant directly. I.e. I would like the standard library type Discriminant<T> to be defined for each enum the same way EnumDiscriminant above is defined for Enum. So I could just write Discriminant::<Enum>::A.
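Roughly what the strum workaround looks like today (from memory of its EnumDiscriminants derive, so check the crate docs for the exact generated names):

    use strum_macros::EnumDiscriminants;

    #[derive(EnumDiscriminants)]
    enum Enum {
        A(u32),
        B,
    }

    fn main() {
        // The derive generates a plain `EnumDiscriminants` enum, so a
        // variant can be named without constructing an `Enum` value first:
        let d = EnumDiscriminants::from(Enum::A(5));
        assert!(matches!(d, EnumDiscriminants::A));
    }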
> However, note that Rust is a project run by volunteers. We prioritize the personal well-being of everyone working on Rust over any deadlines and expectations we might have set. This could mean delaying the edition a version if necessary, or dropping a feature that turns out to be too difficult or stressful to finish in time.
What a refreshing take! One of the things I have admired about Rust is the effort from the start to build a healthy, welcoming, non-toxic community, and to prioritize personal well-being.
Yeah, a lot of communities work this way implicitly, but being explicit about it likely has some nice mental health benefits in itself. Even if most people are totally understanding, sometimes the person themself needs it explicitly said that it's okay for them to just back off for a bit and that they aren't letting everyone down. Sometimes it's our own expectations and beliefs that drive us the hardest when we need a break.
I would say it's very important to be publicly explicit about it, to help with the expectations of people outside the project. If some of these things then don't make it into 2021, or 2021 is delayed a bit, that wonderful paragraph can be pointed to as part of the explanation.
Plus of course it gives people on the project more confidence in saying "I have to have my evenings to recharge, I can't crunch this feature to hit the deadline". Which is what we should all be able to say.
This doesn't require an edition change, but I'm really looking forward to custom, per-collection allocator support. It's in nightly right now and very nice to use.
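A tiny nightly-only sketch of what that looks like at the moment (the API may still change before stabilization):

    #![feature(allocator_api)]

    use std::alloc::System;

    fn main() {
        // A Vec whose storage is managed by an explicitly chosen allocator
        // (here just `System`, but it could be an arena, bump allocator, etc.).
        let mut v: Vec<u32, System> = Vec::new_in(System);
        v.push(1);
        v.push(2);
        assert_eq!(v.len(), 2);
    }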
Yeah, I've been following that thread. The storage idea is interesting but I don't think it's really necessary. The main thing it would enable is inline allocators that can be stored inside the collection, as opposed to the collection storing a reference to the stack allocation. In the cases where that's necessary, one can use the stackvec or arrayvec crates, which are more optimized than a generic inline allocator would be anyhow.
Same for me. I have no idea why, but it excites me to learn Rust. Maybe because it has such an obvious source to learn from (The Book), and because it is still a fairly new and "limited" language? I can't wait to get some major work hurdles out of the way so I can go through the book. Not sure if I'll use Rust that much professionally, but my experience so far is that I learn so much about programming while I am learning it.
I thought this post was about the book, rather than the language. The book is supposed to be up to date for Rust 2018. I suppose there's probably no need to wait for the book to catch up to the language.
Lack of IntoIterator for arrays and disjoint capture are some of the biggest pain-points in writing Rust code for me! Very excited to see these changes on the map.
Man I feel like an idiot. I read the first paragraph and I'm like "that's just versions" and then I read the next few paragraphs and I'm like "Yeah but your dependencies are going to screw you".
I'm not sold on the raw identifier syntax and we'll see if the changes are actually as good as they claim, but in general this seems like a great way of doing language 'versions'.
Rust is definitely learning from the past, and I'm really interested to see what we learn from how they fuck up too.
We've already done one of these! This is the second introduction of a new edition, and therefore, our third overall: 2015, 2018, and now 2021. If you want to see how it works, you can create a new project, which defaults to Rust 2018, and depend on https://crates.io/crates/semver, which is on Rust 2015, and see that interop works just fine!
Yeah don't worry we'll fuck up for sure, haha. One could argue we've made a few of those already...
Note that the raw identifier syntax is from Rust 2018, not the upcoming edition. Also it's only intended to be used for rare occasions, and even then only temporarily, so it's not a big deal if the syntax is fairly clunky; it just needed to be something that the Rust 2015 parser wouldn't choke on.
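For anyone who hasn't seen it, a tiny sketch of the syntax:

    fn main() {
        // `async` became a keyword in the 2018 edition; `r#` lets it still
        // be used as an ordinary identifier, e.g. when calling into a crate
        // that predates the keyword.
        let r#async = 5;
        println!("{}", r#async);
    }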
I think that's a good thing. People do complain about the overly high rate of evolution of Rust, but I think that was much more true in previous years than today (not least because a lot of projects needed to depend on nightly).
Also, most of these changes enable new things, and are only in the edition because of some low chance of breaking existing code. The 2018 edition had some major changes, so 2018 code looks pretty different - the ? operator, big changes to module use, impl/dyn Trait, and more. I suspect that for most code bases you won't be able to tell whether it's 2018 or 2021 without looking more closely.
I expect a common complaint to be “why isn’t [].into_iter() working in my tests? —oh, it’s 2018 edition”, though this can be mitigated by a specialised error message which didn’t exist last week when I first learned about this matter.
But that's the right way to do it, really... I remember an obscure thing in C# when var was introduced: if you have a type called 'var' in scope, type inference doesn't work and it uses that type instead. This is the way new features should be introduced.
> Instead, we decided to add the [IntoIterator] trait implementation in all editions (starting in Rust 1.53.0), but add a small hack to avoid breakage until Rust 2021.
Eh, why not. We nerds are used to retconning in all the sci-fi universes anyway.
> It has been suggested many times to "only implement IntoIterator for arrays in Rust 2021". However, this is simply not possible. You can't have a trait implementation exist in one edition and not in another, since editions can be mixed.
Another example of how editions aren't much different from mixing language versions in other ecosystems, just another way of achieving similar results.
The page source for this post has this comment at the end:
<!--
If you really can't wait, many features are already available on
Rust [Nightly](https://doc.rust-lang.org/book/appendix-07-nightly-rust.html)
with `-Zunstable-options --edition=2021`.
-->
Since you said "master" you are probably already somewhat proficient, but if not, I cannot recommend enough to just try Rust out for some project one day. It literally took me 2 days from writing my first line of Rust to having a central component of our pipeline rewritten in Rust and running in production, and I enjoy every minute of writing Rust, since (after you win the battle with the borrow checker) everything "just works".
Battles with the borrow checker are seldom won. Victory is conceding that the compiler knows better than you and fixing your design.
(I’m serious about this, as a user for eight years and casual trainer for several. The fact of the matter is that when the borrow checker complains, it’s right to complain, >99.99% of the time, even if you had thought carefully about it and were sure you got it right and that the compiler’s complaint was unnecessary.)
I would lower the percentage it is right to complain about to only 99.9% (without non-lexical lifetimes it is probably only right 99% of the time). I have encountered like 3 cases where the borrow checker complained and forced me to make a worse design. I think I have seen more than 1000 and less than 10,000 borrow checking errors.
I've written a fair amount of Rust and my beginner experience was not as positive as yours. It took me a very long time to fully deal with things like the borrow checker.
My personal recommendation to anyone just writing code as a learning exercise is that they start with using Rc<> wherever a borrow checker-related issue arises. Then you can get used to the rest of Rust. Once you feel a lot more proficient you can go back and start replacing Rc<> with references (and you might find that you don't even need to in many instances anyway)
On most things, clone() will make a full copy. If your type is wrapped in Rc<T>, then calling clone() on it will just increase a reference count, so it ends up being cheaper. If you're mutating things, though, you'll need extra machinery in the Rc<T> case. If you're mutating in the regular case, you'd be changing the copies, of course, so the originals wouldn't change.
For the read-only case the behaviour is the same, except for the larger overhead of the full clone.
But for the case where the data is modified, Rc often goes together with RefCell, as Rc<RefCell<X>>, which allows in-place mutation ("interior mutability"). This means that the other code that got a copy of the Rc will observe the writes.
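A minimal sketch of that pattern:

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        // Two handles to the same heap value; cloning the Rc only bumps a
        // reference count, it does not copy the Vec.
        let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
        let other = Rc::clone(&shared);

        // Interior mutability: writes through one handle...
        other.borrow_mut().push(4);

        // ...are visible through the other.
        assert_eq!(shared.borrow().len(), 4);
    }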
I also like writing Rust, but “After you win the battle with the borrow checker” is doing a lot of work. Those battles often crop up unexpectedly and sometimes they’re easily resolved and other times they require reworking your architecture—it’s very hard to estimate how long it will take. Of course, there are escape hatches—you can clone excessively, but it’s not all peaches and cream.
About 80% of my battles with the borrow checker eventually resolve with me realizing the code I'm trying to get working could lead to a bug in a case I hadn't considered. Of course, I could just be an abnormally sloppy programmer.
In my experience, borrow checker errors would only be “bugs” in a program that doesn’t have a garbage collector or if there is shared memory parallelism involved. Put differently, if you slapped a borrow checker on Go and your program was single-threaded, (I posit but welcome correction) borrow-checker errors would probably not be uncovering many bugs.
I would never trust anything I've only been exposed to for two days in production. The chance of some kind of subtle error is wayyyy too high.
Actually it was two weeks - I mistyped. And no, I did not have non-gc-language experience but tbh Rust feels like a gc-language to me? I don't "feel" like I'm writing C code you know?
holy shit, I was feeling like an idiot to know someone can actually get to grips with Rust basics in just 2 days :D took me more like two months to stop struggling constantly on every line I wrote!
I have pushed people to learn Rust. And I noted that the more experience in software design a person has, the easier it is for them to truly internalize the borrow checker.
I suspect this is because you naturally get to an instinctive form of ownership model similar to Rust. It is a really good way to reduce complexity, and Rust then merely formalizes and names it for you.
>And I noted that the more experience in software design a person has, the easier it is for them to truly internalize the borrow checker.
I feel like this verges dangerously close to patting ourselves on the back, but...
>I suspect this is because you naturally get to an instinctive form of ownership model similar to Rust.
I do feel this is more or less correct. I've only had a literal handful of "battles" with the borrow checker (over a couple months spent with Rust); the rest seemed to naturally parallel the ownership semantics I wanted in my code anyway. Ownership meaning everything from "where does a value arise" to "what parts of the code have access to that value in the first place". It forced - or rather nudged - me to think about my architecture in a way that led to a rather clear design, although it might have taken longer than in another language. In other words, I did move slower, but refactoring is much easier and less frequently needed, and that's a tradeoff I'm all for.
> refactoring is much easier and less frequently needed
Compared to what? I have written about 10,000 lines of Rust so far... unfortunately, I stopped recently because I had a few huge refactors to make on one of my favourite projects which almost cost me my sanity. The reason is that, in Rust, when you want to change the lifetime of a certain struct, for example, you need to make sure that everywhere it's used, it's used consistently with the new lifetime... that turned out to be incredibly difficult and required heavy rewriting of many usages. Another huge refactor I had to struggle through was changing how I managed errors, and creating a new structure for my errors to be able to capture more information. Again, I ran into huge issues which wouldn't exist in a language with GC. I still intend to go back to Rust as I do enjoy it even after all these issues, but to claim Rust is easy to refactor strikes me as completely the opposite of the truth.
"The book" although a great book, is fairly dense with no exercises. Rustlings[1] help me apply alot of my learning and i supplemented the book with Tensors programming tutorials on youtube [2]. Recently i started developing a game in Bevy[3] and although its a bit over my head at this point i've managed to get a character moving and be able to kill a monster and receive a drop. My motivations for learning rust was to be familiar with substrate [4] as i'd like to switch careers into programming. Would also love any recommendations/tips & tricks by any experience rustaceans!
[1] https://github.com/rust-lang/rustlings
[2] https://www.youtube.com/watch?v=EYqceb2AnkU&list=PLJbE2Yu2zu...
[3] https://bevyengine.org/learn/book/getting-started/
[4] https://www.substrate.io/