Hacker News
The struggle with Rust (ayende.com)
200 points by mpalme on Jan 24, 2017 | hide | past | favorite | 295 comments



When I first learned OCaml (knowing only Python), I swore a lot at the type checker for refusing to compile code that I knew would work at run-time (e.g., a variable having multiple types, but on strictly disjoint execution paths). It seemed insane to me that people would want to subject themselves to this kind of bondage and discipline. And yet, many years later now, I have learned and internalized how type systems work, I have learned not to try and simply replicate Python patterns in OCaml, and as a result I use OCaml to write code quickly and without any headaches; and not only is the type checker no longer an obstacle, it's now a valuable assistant.

I believe that we are seeing the same kind of phenomenon in Rust: we are trying to replicate C or C++ code directly in Rust, and we get frustrated when our efforts are foiled by the borrow checker. It's going to take some time and some effort before we learn and internalize how borrowing works in Rust and how to change the way we write programs so that it is no longer an obstacle. The Rust team looks very receptive to comments about taking away some pain points (e.g., non-lexical lifetimes), but we also need to accept that, for a little while, we are going to feel like we did when we were new programmers and were trying to make sense of these new constraints.


This is a different issue. The problem that the author is observing is that Rust operates on the hypothesis that the ownership relation is mostly left-unique and acyclic and that you run into problems when this hypothesis doesn't hold. And this affects not only explicit data structures, but also the implicit structures that involve stack frames (local variables, closures, etc.).

While Rust has mechanisms to deal with this (copying, Rc<T>, etc.), all these mechanisms add pain points, and for problems that are not just theoretical. It affects a number of design patterns, closures that survive the scope they were created in, functional-style programming, functional data structures, any data structure that naturally involves a DAG or cycles.

The difference compared to a (good) static type system is that a good [1] static type system does not measurably reduce expressiveness; Rust's ownership model does.

This is not to say that there isn't a point to Rust's model. Rust directly competes with garbage collection as the main alternative model to achieve memory safety and is an obvious choice in cases where garbage collection is not a realistic option. But where garbage collection is an option, the tradeoffs that Rust makes become much less attractive.

[1] Obviously, primitive static type systems (example: Java up to version 1.4) do pose a problem.


First, the expressiveness is all there. You just need to explicitly say you are responsible by using unsafe blocks. So risky pointer and allocation logic becomes the programmer's job, not the borrow checker's.

Second, this article only considers the costs of a stricter borrow checker, not the benefits. It is more work for the coder to have to think about how allocation and ownership are communicated. But I've been through much bigger headaches in production code at inconvenient times (for the business plan) due to dangling pointers, multi-writer concurrency bugs, and ill-defined memory models in software designs.

Even if Rust is much harder than alternatives instead of just different, and even if that's inherent in borrow checking, it's not clear that it's still not worth it in the long run.


> Even if Rust is much harder than alternatives instead of just different, and even if that's inherent in borrow checking, it's not clear that it's still not worth it in the long run.

One obvious argument is that Rust frontloads its difficulty. It's a terrible tool for most 30 line tasks, simply because the majority of your work will go into arguing with the compiler. But that's true of a great many tools - if there's an adequate payoff in safety and comprehensibility at scale, they can still be worthwhile.

That doesn't solve the learning-and-usage problem, though. Most of us are aware that the explosion of dangerous-at-scale tools like Javascript is largely because no one adopts a language that's unfriendly until you reach enterprise sizes. Even if Rust is "worth it", it's unlikely to catch on while the learning experience feels like pulling teeth.


> It's a terrible tool for most 30 line tasks, simply because the majority of your work will go into arguing with the compiler.

This arguing with the compiler really only happens while you are learning Rust and its nuances. For me the first two weeks were this way, then I started thinking more in Rust terms. Now I rarely deal with the borrow checker at all, but I do end up in some complex type situations.

You're spot on about the frontloaded difficulty. Honestly though, I can't think of any language I've learned where, in under a few weeks, I knew how to properly do things in the language.

Python was this way for me. My code looked like C when I started with it, and then I learned after a few weeks about 'with' and 'yield' and started writing real code.

Rust is worth the initial investment, but the first two weeks will require some perseverance.


I think the difference is that while you are coding Python in a C style, the code is working. The front-loaded nature of Rust means that you are in a constant fight to get something to run.


Yes, and I understand how frustrating it can be, I had to learn it, too. My only point is that after you spend the time with it, the cognitive load becomes a lot less and you've learned some new things about programming at the same time.


I believe this. However, my concern is that the number of people willing to make it through that learning curve is small and will limit the adoption of Rust.


I'm hoping that things like the Rust Language Server and its integration into some of the IDEs out there will help with the initial learning curve; already the rustc error output is a million times better than when I started with it.

I definitely share your concern; for me, Rust was like putting on a glove perfectly shaped for my hand. It works with and enforces every basic safety pattern that it took me 10 years to learn and start applying to all my code thereafter. If I had had Rust all those years ago, it would have saved me a ton of time... but I probably would have turned my nose up at it, b/c I wouldn't have had the experience to understand what it was saving me from.

So yes, I want the learning curve to be lower, but not at the expense of losing all of the safety it brings to developing fast and reliable software.


Have they thought about allowing code to work by warning and elevating the scope so you can quickly get the procedural work done and then attack the memory usage?


I've been thinking about a good answer to this, but I don't know what it would look like. Code looks so different when you need to be concerned about mutable vs. immutable borrows. For example, you might want to iterate over a list, perform an action, and then delete items after doing the work. This needs to be broken into two blocks: one that does the iteration, and then a second block to remove any elements that no longer belong.

That's an example of just two blocks, but if it's deeper structures then ignoring this until the end could cause you to go back and restructure significant sections of code to make it abide by the mutability rules.

The process I use to stay productive is to write small blocks of code, write a unit test for each, compile, and then move on. That way I don't build up a ton of compiler debt...


I've been writing Rust solutions for two years now, and I do not feel like I am in a 'constant fight' to get something to run. That only happened for the first two or so weeks as I was learning Rust.


> This arguing with the compiler is really only while you are learning Rust and it's nuances. For me the first two weeks were this way, then I started thinking more in Rust terms. Now I rarely deal with the borrow checker at all, but I do end up in some complex type situations.

It just means you've learned to head off problems before they occur, that does not mean the problems you're heading off should be there.


"One obvious argument is that Rust frontloads its difficulty. It's a terrible tool for most 30 line tasks, simply because the majority of your work will go into arguing with the compiler. "

It does. Ada did too, with people saying the same stuff. Once they got used to it, they swore by Ada, since their stuff usually worked once they got it past the compiler. I'd also add that the alternative to Rust for GC-free safety of heap code is essentially separation logic (VCC) or matching logic (KCC). That's probably much harder for the average programmer to work with than structuring code to get through the borrow checker easily. Cyclone and then Rust were huge steps up in memory safety, given they're accessible to people who don't know formal verification.

That's the comparison people keep leaving off when griping about the borrow checker. I think Rust looks a lot better when one adds it. Still a headache, but good in a relative way.


My observation is that that's not quite true. It's fine for smaller tasks, but that mushy middle where you're discovering the parameters and outlines of a problem can be painful. Looser (note that I explicitly don't say "better" or "worse") languages give you more leeway to play around and discover how the problem is structured.


I can easily slap out a 30-line program and have it compile and work on the first try with Rust, guaranteed. No competent Rust developer has issues writing solutions in Rust. Nobody but newcomers are 'arguing with the compiler'.


> First, the expressiveness is all there. You just need to explicitly say you are responsible by using unsafe blocks. So risky pointer and allocation logic becomes the programmer's job, not the borrow checker.

No, unsafe would be overkill and if you needed unsafe, it would put Rust at a massive disadvantage vis-à-vis garbage-collected languages.

Simply put, Rust doesn't handle multiple ownership well. It results in syntactic bloat and/or runtime overhead. But multiple ownership occurs naturally in a number of real-world programs.

> Second, this article only considers the costs of a stricter borrow checker, not the benefits.

This is why I very specifically mentioned that it's a tradeoff.


I think your implication here is that other languages do handle multiple ownership well. What I think you'll find in Rust is that the guarantees you need to uphold when using unsafe in these limited situations are the exact same guarantees you need to uphold in a language like C or C++.

Destroying a circular list takes some thought to make sure you don't double-free, same as in Rust. Rust just makes those sections of code very explicit and calls your attention to them, while in C you may end up with a segfault, or worse, a program that only shows odd bugs at runtime. At least Rust is telling you that you need to put some extra thought into this section.


Your assumption seems to be that I'm talking about languages with manual memory management. I'm not; I'm talking about garbage-collected languages, where you get memory safety automatically, regardless of ownership rules. In fact, C++ has some of the same problems that Rust has when it comes to multiple ownership.


Then use https://github.com/Manishearth/rust-gc if you're willing to pay the price of a GC (still experimental). Or better, stick to a GC'ed language.

If you don't need manual memory management, it's obviously going to be a burden. There are still use cases for manual memory management though, and that's where Rust comes in.

Use the right tool for the job.


(Please don't use rust-gc, it's more of a proof of concept. I mean, it works, and some folks do use it, but Rc + Weak are probably better options)


> If you don't need manual memory management, it's obviously going to be a burden. There are still use cases for manual memory management though, and that's where Rust comes in.

That's exactly the point that I made in my original comment?


Sorry, HN isn't great at keeping track of who's who in a comment tree.


Does GC imply memory safety? Use after free and dangling references probably can't happen, but I have never heard of a GC supplying cross thread memory safety. The borrow checker does.


Memory safety in multi-threaded programs is not a hard problem once you have GC. Eiffel solved it decades ago. The JVM also has a weak form of memory safety; it does not preclude data races, but the VM cannot crash as the result of a data race.


> No, unsafe would be overkill

I don't see an issue with 'unsafe' if it's an implementation detail and well-tested. Obviously it would be better to avoid it if possible. 'unsafe' is overkill sometimes and worth it other times.


Instead of unsafe, look as much as possible at interior mutability. Cell and RefCell allow for this with no unsafe, and they guarantee at runtime that what you believe is accurate.

unsafe should, IMO, be reserved for only those cases where it's strictly needed; FFI is the clearest example.


> if it's an implementation detail and well-tested

The same can be said of C and C++, at which point, why does it matter if you use Rust?


Rust has opt-out safety with 'unsafe' blocks. C++ doesn't even have opt-in safety. You can get fairly close with smart pointers and some other techniques, but 'fairly close' doesn't cut it when you're talking about memory corruption issues.


You can enforce safety by using the strict type system combined with polymorphic memory allocators in C++17, and get quite close to that with C++11.

The only thing you cannot exactly prevent is "cast_dammit_cast".


Could you be specific about how the type system and polymorphic memory allocators defend against things like dangling references, iterator invalidation and use-after-free/move? I can't imagine an allocator affecting these unless it's one that just has a no-op free (which only helps with use-after-free, anyway), nor is the type system strong enough by itself.


1) If you cannot get a plain reference, you cannot get dangling references.
2) Use data structures that keep iterators valid, with modern reference-counted internals (so that when you have an iterator, you have the object), or alternatively copy-on-write semantics.
3) Use after move is relatively tricky if you want more; however, your allocator or data structure can catch it with a specific implementation of the move operators. Use after free can be reduced to use after move, or caught with the right delete/new operators.
4) Instead, you may have an API with your own high-level reference-like type. There is a cost to it, same as in Rust.
5) What you cannot do is force people to always use it.


> There is a cost to it, same as in Rust

All of these things have a larger cost than Rust: you're forced to use pervasive reference counting and very defensive data structures. Rust allows you to get away with pointers deep into other things, as well as aggressive data structures, all in safe code.

Rust also has reliable use-after-move defense for arbitrary resources (not just vague dynamic protections for only the memory resource), and defends against data races. That is, Rust has true safety in a way that can give higher performance than approximate safety in C++.


Kind of; you cannot force your teammates, consultants, and the third-party libraries that are part of the whole product to use them.

Static analysers and code reviews are not always possible, depending on the target OS and compiler being used, or how the whole project across teams is organized.


It might be worth it over C++, but assuming the task at hand is not hindered by garbage collection, is it ever worth it over an ML style language?

It certainly takes a lot more typing to get things done.


I think it's always worth it to learn new paradigms. They expand your view of what's possible, and give you a richer understanding of the other languages you already know, understanding some of the tradeoffs they made, and give you other conceptual and practical tools in your toolbox.

It's much like visiting other, radically different countries than the ones you've been brought up in. Even if you don't wind up moving there permanently, you'll be richer for the experience.


> The difference compared to a (good) static type system is that a good [1] static type system does not measurably reduce expressiveness

This is pseudo-jargon. What counts as a "measurable" reduction? Do you have any substantive evidence that "good" typesystems don't reduce it? And why are only type systems which satisfy this property "good"? You talk cogently about tradeoffs later on, but this sentence taken literally suggests only a type system which doesn't make this particular tradeoff can be "good". Why?

My go-to languages for the past few years have been Haskell and Racket. There have definitely been times I picked Racket because I knew Haskell's type system would interfere with my expressiveness. Certainly Haskell is less restrictive than Rust with regards to ownership, but all type systems restrict you. I think GP is spot-on in pointing out that you just get used to it over time as you learn to think and code in ways that avoid pushing against the type-system's weak spots.

> But where garbage collection is an option, the tradeoffs that Rust makes become much less attractive.

Yes, precisely. It's not a fundamentally different issue from getting used to other static type-systems; it's just a different tradeoff.


> This is pseudo-jargon. How do you measure expressiveness?

Actually, not jargon. I had Matthias Felleisen's characterization of expressiveness [1] in mind (incidentally, he's also one of Racket's authors).

> Do you have any substantive evidence that "good" typesystems don't reduce it?

If we want to be precise, there is a continuum from statically to dynamically typed language. A dynamically typed language is, after all, the same as a static language with just a single polymorphic type. This means that we can trivially translate a dynamically typed program into a statically typed program, though obviously we don't gain anything by that at this point (this is essentially an argument that Bob Harper has advanced before) and we pay for it in additional verbosity. However, in practice we know that we don't need all that flexibility and can constrain types more, allowing us to get better type guarantees; static types then also allow us to do things that we can't do in a dynamic language (such as compile-time overloading).

[1] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.4...


EDIT: I've been chewing over the following:

> This means that we can trivially translate a dynamically typed program into a statically typed program, though obviously we don't gain anything by that at this point (this is essentially an argument that Bob Harper has advanced before) and we pay for it in additional verbosity.

This is exactly the opposite of the argument I advance below; you're claiming that there is a "local" translation from dynamic to static. And on reflection, you might be right, but I'm not entirely sure. It hinges on exactly what counts as "local". Translating away exceptions, for example, requires inserting case-analyses which propagate an exception if one has been thrown literally everywhere in your program. It's "global" in the sense that it touches every node in the program; it's "local" in the sense that the transformations performed don't involve any global knowledge or analysis. I'm rereading Felleisen's paper now to see how precisely he defines things.

ORIGINAL:

> I had Matthias Felleisen's characterization of expressiveness [1] in mind (incidentally, he's also one of Racket's authors).

Great! I believe, by that definition, your assertion that "good" typesystems don't reduce expressiveness is false, at least for common type systems such as Haskell's, Java's, ML's, etc. There are plenty of programs which require non-local rewriting ("a global reorganization of the entire program") in order to appease the typechecker.

A simple example can be adapted from the expression problem: suppose I'm writing a compiler for a simple language, so I have an AST datatype and some functions on it. Then I add a pass to my compiler that removes a certain feature from the language I'm implementing - a certain branch of the AST. Now I wish to simplify my backend by removing the case which handles this branch of the AST. I cannot do that without either dynamically failing in that case (which a type system might even prevent me from doing) or introducing an entirely new AST type which does not have that branch - but this could require duplicating or else considerably generalizing existing code that operates on the AST (a global reorganization). Of course, a sufficiently powerful typesystem might let me do that, but for any typesystem you name, I suspect a similar example can be procured.

To get more precise, the "feature" we're considering adding is "ignoring the typechecker and running anyway". For example, GHC's -fdefer-type-errors. The question is "can any legitimate program written with -fdefer-type-errors be locally translated into an equivalent program that typechecks without -fdefer-type-errors". Without reaching for an explicit escape hatch like unsafeCoerce (which is analogous to "unsafe" blocks in Rust), I think the answer is no.


First, let me say that I think you're right that expressiveness is reduced. In the best case, all of that reduction is of things we don't want to express because they are incorrect - but in practice (and probably in theory!) the boundary will always be fuzzy.

That said, with regard to your "kill a branch in your AST" example, you can kill a branch with Void:

    import Data.Void

    frobnicate :: Either a Void -> a
    frobnicate (Right x) = absurd x
    frobnicate (Left x) = x
Now you can rely on no-one accidentally handing you a Right expecting you to do something reasonable with it.


> Great! I believe, by that definition, your assertion that "good" typesystems don't reduce expressiveness is false, at least for common type systems such as Haskell's, Java's, ML's, etc.

Haskell is not very good at extensible runtime polymorphism (one of its weaknesses). Try OCaml's polymorphic variants.


Okay, then I can pick another example: A dictionary where the type of the values depends on the key they're inserted under. (For example, a representation of the closure of a function in a typed language.) Of course, then I can go to a dependently typed language, but are you really going to tell me that only dependently typed languages count as "good" static type systems?


I am not arguing completeness here; a type system is not a full-blown specification language and does not have to be. (I actually tend towards the point of view that trying to make them complete specification languages has limited practical value, but that's a different issue.)

The point here is that the type system does not restrict me from implementing a data structure that adheres to this constraint, even if it does not enforce it for me.


You can do this in GHC Haskell with existential quantification, or trade static checks for runtime with Data.Dynamic.

(Again, not arguing against your thesis here.)


After quickly scanning https://realworldocaml.org/v1/en/html/variants.html, isn't the same thing accomplished by GADTs?


The point of polymorphic variants is that the entire type doesn't have to be fully defined in one place; you can have partial definitions in a module and those definitions can even overlap. You can have (say) [ `Int of int | `String of string ] in one module and [ `String of string | `Char of char ] in another, and the typechecker will handle intersections and unions of these types.


I agree, ayende is a C# person, he's well aware of statically typed languages.

I also agree about your other points. I think maybe one of these days the borrow checker will become more sophisticated, but these issues he's complaining about are the exact reason I put Rust down. It just felt too sophomoric, as if the language wasn't really ready.

Not all is bad, I really like the way errors are handled in Rust, but there are definitely some problems with the approach they take.


> I believe that we are seeing the same kind of phenomenon in Rust:

I don't know. I picked up OCaml fairly quickly, but I have consistently struggled with Rust, trying it for a while before abandoning it in frustration with lifetimes, 2 or 3 times. I can program in Rust, but it still seems like an ongoing struggle, as opposed to the smooth brain-to-text programming I've gotten used to in Python, JavaScript, OCaml, and Lua, and to a lesser extent C, Go and Elixir (just because I haven't used the latter 3 as much, at least recently).

I think Rust may be ideal for embedded software development, or low-level systems software, but for general application development, I think OCaml or perhaps Swift or Scala are more ideal, at least for me (or at least until I decide to try Rust again, perhaps it will stick this time).

Edit: Oh yeah, and I was even able to become productive enough in Scala to build a significant project (a programming language) in a relatively short period of time. Now, I'm not silly enough to think I've really mastered Scala in such a short period of time, but I was using it productively as a functional language, as I would OCaml, with little difficulty. On the other hand, I've started 3 (3!) programming languages in Rust but have yet to get much beyond the parser/AST stage before quitting in frustration.


> smooth brain to text programming

This isn't a good thing, in my opinion. Brains make mistakes. I don't want smooth, uninterrupted coding. I want the compiler to tell me when I'm making the mistake, even if it makes things choppier. It's a lot less painful to write code in fits and starts than to write a million tests or spend hours debugging.


It depends a lot on what you code and your constraints. Believe it or not, there is a huge crowd of people making a living out of code that is easy to write but crashes. And they are happy about it.

E.g., one of my friends has a streaming website. He makes bank with ads despite the site having problems all the time. He doesn't care in the least. He is not an expert coder and just wants to be able to build the features he needs, even if it's not perfect. Types are way too much for him. It took him 2 years to master git basics.

Another example: my current clients are geomaticians. They code in Python a lot. They produce only small programs and don't want something complex. They are experts in maps, not code, and love that Python has the balance of cost/reward they want.

HN is a bubble of expert devs, but the world is full of PHP plumbers, data manglers, sysadmins and other professionals who have a legitimate need for "smooth brain to text programming".


And I think that's fine; Rust isn't trying to be the one language to rule them all. It's intended as a foundational language: your friend's streaming website survives despite his own code crashing often, but it won't survive if his off-the-shelf HTTP server crashes often (HTTP servers being the sort of thing one would write in Rust). Likewise, your geomaticians aren't going to be very productive if their Python interpreter crashes often (interpreters also being the sort of thing one would imagine writing in Rust).

Even in Rust's ideal future, very little of the code atop the stack is written in it. It's the code at the very bottom that underpins everything whose share it hopes to eat.


Dictatorial concepts in PL usually fail pretty badly. Rust may become the textbook example of a good idea too far. I'd certainly never use it or use anything written in it given a choice and/or time to rewrite in anything more accessible.


> or use anything written in it

As I often say on HN, I must be missing the sarcasm here...


I think "use" in this case is in the sense of a developer using code, rather than an end-user application.


Oh yeah that's fine. Both languages are actually very nice used together.


> I want the compiler to tell me when I'm making the mistake, even if it makes things choppier.

You don't think the OCaml type system would do this? OCaml gives you just as much safety as Rust does; it just doesn't also free you from a GC like Rust does. And the price Rust pays for the absence of a GC is the lifetime concept, which adds a very significant layer of complexity to the language. Much like OCaml, Swift and Scala also provide very safe type systems.

Again, if you can't have a GC, then Rust is an excellent option, and the only really safe GC-less language unless you use C with Frama-C or similar static analysis (and Frama-C also requires lots of training and time to use, just like Rust).


Ownership and borrowing are also useful for avoiding data races. OCaml solves the problem by not having any shared state at all, which is obviously a huge limitation.


Current OCaml needn't worry about data races since no state is shared. Unfortunately, current OCaml also only exploits a single core, but it's so insanely fast that this usually doesn't matter.

Upcoming OCaml multicore solves the shared state and data races problem with algebraic effects[0]. It is working, it's just not yet been released in mainline OCaml. Supposedly, that will happen this year, but we'll see.

0. http://kcsrk.info/ocaml/multicore/2015/05/20/effects-multico...


OCaml seems like the argument against Rust's claim to trade usability for safety. It's not true for everything, I wouldn't write embedded OCaml, but it's an easy disproof-by-example for "Rust's difficulty is the inevitable price for trustworthy code!"

Which isn't to say OCaml is trivial, or Rust is useless (GC and threading both being limitations). It just makes me skeptical of claims that Rust's difficulty is irreducible.


Rust's difficulty relative to OCaml might be irreducible if we consider the difficulty added by not providing a pervasive GC, which is part of Rust's design goal and not part of OCaml's.


But OCaml isn't intended for the use cases Rust is intended for: a replacement for C and C++


It depends on the use cases you mean.

OCaml is actually used for many systems programming tasks that can live with a tracing GC.


>I wouldn't write embedded OCaml

But you can run OCaml on a microcontroller:

http://www.algo-prog.info/ocapic/web/index.php?id=ocapic


I don't think there are any error classes that OCaml/F# will let you make that Rust won't.


Rust prevents data races; OCaml/F# don't provide such guarantees. Though it isn't so much of a problem in OCaml, since it doesn't have concurrent threads.


OCaml has a GIL, is normally single-core, and shares no state, so current OCaml doesn't need to provide such guarantees.

Upcoming multicore OCaml will provide such guarantees through algebraic effects [0].

0. http://kcsrk.info/ocaml/multicore/2015/05/20/effects-multico...


First, Rust does not prevent data races, it just makes them a pain to write.

Second, Rust also prevents legitimate sharing in a bunch of cases where concurrency is not involved. (and a simple scoped allocator succeeds)

This would be fixed if there was an easy way to have typed memory that can be shared but cannot be accessed concurrently. There is none. You get at most full unsafe.


Rust does prevent data races, as long as you don't use unsafe. If not, it's a bug.

You can write arenas in Rust.


Just to be clear about the terminology (since I'm not sure if the grandparent knows about the difference): Rust prevents data races but not race conditions (although it makes it easier to avoid them). Here's an explanation about the differences: http://blog.regehr.org/archives/490


You can, but it is an extreme pain to do it, especially for any nontrivial data structure. That is because even if the arena is safe enough, say with copy-on-write semantics, you cannot specify that for the borrow checker easily (or at all).

You cannot implement many data structures in a decent performance way without unsafe, so Rust guarantee is weak.


> You can, but it is an extreme pain to do it,

It is not extreme, and it isn't even necessary: just reuse one of the many arenas others have already written, e.g. https://crates.io/crates/typed-arena .

Copy-on-write is fairly easy to write in Rust, e.g. even the core Rc and Arc types have a make_mut function that does clone-on-write.

> You cannot implement many data structures in a decent performance way without unsafe, so Rust guarantee is weak.

You can't implement anything in C or C++ without unsafe: Rust's main strength is the ability to write high performance code with a safe interface, so that only one library has to muddle through the `unsafe` problems (if they have to touch `unsafe` at all). Most code one writes is not implementing data structures.

As a demonstration of this strength, even very low-level projects like operating systems can get away with not using `unsafe` everywhere by packaging it up into safe interfaces, e.g. http://os.phil-opp.com/modifying-page-tables.html


Data races.


Give it time. Rust is brand new and has changed considerably during the last year. The Rust team listens closely to the community and will smooth things out. But they can't do it all at once.


This is worth remembering. I'm technically on my "second attempt" to learn Rust, but the first one wasn't abandoned because Rust is fundamentally flawed. It was just new enough that the thing I wanted to build was drowning me in nightly builds and hypothetical features. After giving it ~1 year, things are already looking much more welcoming.


> I think Rust may be ideal for embedded software development, or low-level systems software, but for general application development, I think OCaml or perhaps Swift or Scala are more ideal, at least for me (or at least until I decide to try Rust again, perhaps it will stick this time).

If you're not aware of it, scala-native looks very interesting... It looks to be an active project.

https://github.com/scala-native/scala-native

FWIW, I've also enjoyed Swift development quite a bit. I think Swift will go deep into the C++ sweet spot.


A benefit that Swift has over Rust with regard to market adoption is that an OS vendor has the power to say "My way or the highway" to developers who want to target their platform.

However Rust is already picking up steam in Firefox modules, hobby OS development, university CS degrees and GNOME modules, so there might be a path there.


The tooling for Swift is certainly miles better on iOS, but I do like Rust as a language. On Android the most comprehensive solution I've seen is Xamarin, but last time I tried it (which was years ago), I wasn't a fan of the GUI and the perf was worse than Dalvik. Hopefully that has improved now.


Rust seems really great as infrastructure-level code, but the tooling still has a lot of rough edges when it comes to mobile development.

I think Swift could play a decent role here if it ever gets good Android support, as it already enjoys first-class support on iOS. I also agree that Swift feels very nice. :)


To me it seems like Rust has an additional wrinkle with safe and unsafe code. A new Rust user needs to figure out both "How do I do this" and "Is this possible in safe code". The limits of safe code seem to be something people are figuring out at all levels. Rust projects seem to start out with a relatively high amount of unsafe code. Then as the project matures the unsafe sections are reduced or eliminated.


I think whether you feel you need unsafe when you first come to Rust depends on the angle from which you're coming.

Coming from C/C++, I know that there are structs in memory and can visualize what they look like - surely I can just go and tweak them? (No I can't because either it wasn't safe to do that anyway, or I need to explain to the Rust compiler why it is safe, and that can be quite difficult.)

Coming from something like Java, I (probably) don't think too much about what the underlying structure in memory is, so I'm less likely to try to do this.

Either way, it feels like part of the learning curve is learning the "Rustic" way to do something - in general, you probably shouldn't be reaching for "unsafe" until you know what you're doing! ;)


>Either way, it feels like part of the learning curve is learning the "Rustic" way to do something - in general, you probably shouldn't be reaching for "unsafe" until you know what you're doing! ;)

I tend to agree with this sentiment. The moment I saw that OP was doing pointer arithmetic, I knew they were in for a bad time. It's not that you can't do it -- it's more that Rust's raison d'être is to highlight the flaws inherent in that approach.

I hadn't thought about it, but I think you're right, my background in managed languages is probably why I didn't run into these sorts of issues until after I was already comfortable with Rust. I wasn't trying to make maximally storage-efficient collections right out of the gate, instead I was building application-level stuff on top of `std::collections`.


In other words, you weren't using it as the systems language it's billed as. Which is fine, but it's claimed to be a systems level language, and if it's really that painful to do systems level things, then there's a problem somewhere.

Which is what's being discussed.


>and if it's really that painful to do systems level things, then there's a problem somewhere.

To me it's not a problem at all, Rust is working exactly as designed. When you start writing code in the style that Ayende has: its memory layout becomes entirely non-obvious. The structure definition no longer lines up with how that memory is actually being used. You'll need documentation of the sentinels & invariants, you'll want diagrams of the memory layout (since it won't line up with the structure definition), and a careful understanding of every line of code packing the bytes -- that's just to get a basic understanding of what's actually going on.

This is fine, and you absolutely need to be able to do it, which is exactly why Rust provides `unsafe {}` (along with `std::ptr` and `std::mem`). Sometimes you need that control because you want a memory-efficient data structure; other times it's because you're reading registers directly off the hardware. In either case Rust is going to make you use `unsafe {}`, which is a signal to say: "to understand how this is really laid out in memory you need to understand every single line in this block."

I would much rather be able to grep for `unsafe {}` and audit parts of a trie-map for memory safety, as opposed to auditing my entire program for memory safety violations. That's not a failing of Rust, it's functioning exactly as intended. Trying to squeeze every last byte out of a struct is neither obvious nor safe, it requires careful thought.


> I would much rather be able to grep for `unsafe {}` and audit parts of a trie-map for memory safety, as opposed to auditing my entire program for memory safety violations.

you mean auditing the entire implementation of the trie data structure, not the entire program.

Guess what?

In Rust, if you use unsafe, you also have to audit the implementation of the entire trie data structure.

And if you think you don't, then I submit you don't understand Rust safety that well.


Rust contests the claim that you actually need to do pointer arithmetic and stuff often in systems languages. You need to do it a bit to implement your abstractions, maybe, but it is not the first tool you should need to reach for.

The code in the blog post looks very much like C-translated-to-rust. Right now, systems programming mostly does look like C/C++ because those are the only two mainstream languages that work here (D, too, but D is like C++. There are also other languages). But systems programming doesn't have to look the same or use the same primitives.

Do you really need to use malloc to allocate memory in a systems language? Do you really need to use pointer arithmetic to deal with things in a systems language? Sometimes, but not necessarily often. Even the Rust operating systems have managed to minimize the amount of unsafe/C-like code.

Rust does expose this functionality, but if it's not something it expects you to use often it makes sense that it might not always be straightforward to use.


> Do you really need to use malloc to allocate memory in a systems language? Do you really need to use pointer arithmetic to deal with things in a systems language?

To add to this point, my answer would be a big NO, if one looks at the C alternatives that sadly became niche.

Sadly the adoption of C has led to younger generations thinking that those are the only valid paths in systems languages.

Thankfully new languages, including Rust, are changing that.


However, typed and separate memory areas with various allowed kinds of concurrent access are needed. Even simple RAM has multiple access modes: plain, atomics of multiple kinds, barriers... On top of that, you get devices that handle access only with a delay in between, can never be accessed concurrently (volatile), have alignment requirements, and more. Rust's memory model is too weak to enforce safety in such cases while not sacrificing performance.

It is written from "I have one CPU" side. Even C is far beyond that with things like OpenCL...


Rust gets all your favourite kinds of atomics and barriers, exactly like C. Other things are indeed hard to get safe, which is why Rust has `unsafe`: it is possible to use them, marking out the dangerous regions precisely, rather than the whole program being `unsafe` just because a few places need niche features.

In any case, Rust is definitely not written with "I have one CPU" in mind. A slogan that is sometimes used for Rust is "safe. fast. concurrent.", and it's one of the headline features on the front page: http://rust-lang.org/ . Rust's abstractions allow writing extremely high performance libraries like rayon, which had better performance than even highly optimised C libraries like Cilk, in one benchmark I heard about. Building out that parallelism ecosystem takes time, so it does not seem particularly surprising that there are more (unsafe!) libraries for the 40-year-old language than (safe!) libraries in the less-than-2-year-old one.


No, by "one CPU" I didn't mean a single core. I meant not having to manage different memory models and available access patterns, or different ways of handling race conditions and alignment.

You get to use unsafe almost exclusively in the internals of such access where it shouldn't be necessary if there are ways to specify more properties than just binary "safe" and "unsafe".

It is extremely hard to prove such code correct in Rust.


> I meant not having to manage different memory models and available access patterns

The only way I can make this make sense, is having cross-platform abstraction around atomic orderings etc. (like the new C++11 std::atomics) that works on x86 and ARM and PowerPC etc.

Rust has such a cross-platform abstraction, almost identical to C++11: https://doc.rust-lang.org/std/sync/atomic/index.html

Please be more specific about what exactly you mean if I am wrong.


> Do you really need to use malloc to allocate memory in a systems language? Do you really need to use pointer arithmetic to deal with things in a systems language? Sometimes, but not necessarily often. Even the Rust operating systems have managed to minimize the amount of unsafe/C-like code.

If you're adding performance considerations I think the answer is going to be yes.


I'm not an expert, but I believe that Rust's performance will be better if you avoid unsafe pointers. Rust's abstractions are generally cost-free, but if you use unsafe pointers the compiler will have to avoid optimizations that could be broken by potential pointer aliasing.


The idea of cost-free abstractions came from C++, where the belief is that you don't pay for what you don't use. The underlying meaning there is that you don't pay the cost of virtual function calls if you're not using polymorphism (this is a big reason why C++ doesn't make function calls virtual by default).

polymorphism is of course an abstraction, the idea of cost-free is that you only pay for the abstractions you actually use.

This is not the same as maximizing performance.

The real question is whether or not you can do these things without giving the developer the control to maximize performance, and I think the answer is going to be no. The response will be that it's "good enough", but that's not actionable as we don't know what "good enough" means for any particular project.


> If you're adding performance considerations I think the answer is going to be yes.

You can build (or use, from the stdlib) safe abstractions over malloc that have the same performance. This should cover 99% of the use cases of malloc. For the few quirky cases where you still need malloc, you can still use it, but that means that the language doesn't need to make malloc-based memory management easy.


I find that unfair. C++ has new as well as other mechanisms such as perfect forwarding into smart objects, but no one seems to think C++ qualifies for the "you can do systems programming without malloc and pointers"?

You can't do systems programming without them, what you can do is minimize their use by building abstractions, which other languages, such as C++ and D, already do.

So this idea that people somehow think you have to use the actual malloc call throughout your code is a bit of a strawman.


I do think that C++ qualifies for that. I don't think there is widespread use of this yet -- I'm talking about the vast majority of "systems" code in C or C++ -- most of it is older and doesn't tend to use abstractions, or has been written from a "C with goodies" perspective.

So yeah, (modern) C++ also contests this claim too. Rust contests a stronger claim about safety as well.

------

You're the one who said that "it's claimed to be a systems level language, and if it's really that painful to do systems level things," -- my point is that in both Rust and C++, "systems level things" don't need to involve pointer arithmetic and other similar things. They should be niche tools you reach for when the abstractions fail you, and the abstractions should not fail you that often. This is already true for both languages, I feel. It's not "niche" in C++ because the ecosystem needs time to evolve to that point, but for new code it probably works out.


like I said, that's a strawman. C++ has had new/new[]/auto_ptr since C++ 98, so almost 20 years. The ideas of smart pointers have been around for a really long time as well.

Then there's the C constructor convention through opaque void pointers idiom.

This stuff is not new, not even close to new, NO ONE, not even the kernel developers, would make the argument that you cannot build abstractions around malloc.

If that's really what you're arguing, well you're not saying anything that C developers don't already practice.


When I say "malloc", I'm including `new` -- programming patterns that use new/delete are the same with malloc/free. new/delete doesn't really change the programming pattern, it just has a slightly nicer API.

smart pointers have been around for a while too. I see some codebases using them. But many don't.

> NO ONE, not even the kernel developers, would make the argument that you cannot build abstractions around malloc.

I never said that kernel devs or whatever would make that argument.

----------------

Look, you seem to think I've said something different than what I did here.

You said "if it's really that painful to do systems level things, then there's a problem somewhere.". I'm saying that Rust contests the claim that malloc/new/whatever must be easy to use for systems level things. Easy-to-use abstractions around them work well too. It's not that C++ doesn't have these abstractions, it does, and has for a while. My statement isn't one against C++.

My point is that the difficulty of using programming paradigms designed around malloc/new should not be a dealbreaker for a systems language, provided that the same tasks can be accomplished without raw allocation using a suitably generic abstraction. C certainly promotes malloc-based paradigms. C++ has good alternatives, but a lot of the lower-level stuff I find avoids them because ultimately malloc/new aren't hard in C++. Rust makes them harder to use, but provides sufficient abstractions/abstraction-building power that this should not be an impediment to systems programming. And, again, I'm not saying C++ doesn't -- I'm just saying that C++ also makes it easy to use malloc/new-based patterns, so you will continue to see those in systems programs even if they aren't necessary.

Now, interestingly, an abstraction for a vec-with-header doesn't actually seem to exist in the crate ecosystem, probably because folks don't like unconditional allocation. But it's relatively easy to build, might do it over the weekend.

(You can fake it in safe code using DSTs but it's not so great)


new/new[], delete/delete[] are abstractions around *alloc/free. That's not really under discussion, that's what they are. So when you talk about the ability to build abstractions around malloc, you're also talking about these operators in C++.

You may not like that they tend to be very thin abstractions, but they are abstractions, and the point is that the idea of building abstractions around malloc isn't new and certainly isn't unique.

You said:

>> You can build (or use, from the stdlib) safe abstractions over malloc that have the same performance.

While in defense of the statement:

>> Rust contests the claim that you actually need to do pointer arithmetic and stuff often in systems languages. You need to do it a bit to implement your abstractions, maybe, but it is not the first tool you should need to reach for.

This is presented as something unique to Rust, which isn't even close to accurate. This is why I called it a strawman. No one, not even kernel developers using C, is incapable of building abstractions around malloc to make it easier/safer to use.

The rest of what you say is inconsequential in my opinion. This isn't about C++, this is about your implication that building abstractions around malloc is something new or novel about Rust. It isn't. Not even the idea of safety around malloc is new, new[]/delete[] are themselves abstractions intended to make it less error prone to create/destroy contiguous blocks of memory.

That doesn't even get into the smart objects and RAII which have been around for a very very long time in C++.

None of this is new to Rust, and I think it's a bit disingenuous to imply that it is.


> So when you talk about the ability to build abstractions around malloc, you're also talking about these operators in C++.

Right, I wasn't talking about just that, I was specifically referring to abstractions that make you change your programming patterns. C++ has those too (the move-based abstractions), but I don't consider new/delete to be them.

My point was sort of "being able to write code that looks like this is not necessary for systems programming". Being able to do the task at hand is, but it need not be done using code like that.

> This is presented as something unique to rust, which isn't even close to the accurate.

Fair. It's not. I didn't intend to mean that. I'm sorry.

I was a C++ programmer for a quite a while, some of which time I used modern C++, so I'm quite aware that these abstractions are not unique to Rust.

The reason I phrased it that way is that C++ doesn't really make it harder to use direct-malloc-based (or raw-pointers-with-new) programming, and most of the lower-level C++ code I've seen (this is anecdotal) still uses this. So C++ doesn't contest the claim the same way Rust does (it does contest the claim though), since Rust codebases doing the same thing (even the kernels in Rust) tend to move strongly in the direction of avoiding unsafe pointer twiddling as much as it can. This is a bit more nuanced (and based on anecdotal evidence on low level codebases I've seen) than I really wrote in my original comment, and I apologize for that.


I am not so sure about this, at least for Java for one simple reason: it is so easy to have back-pointers/cycles in your object graphs. In Java you don't even have to think about that. This was quite hard for me to figure out when starting with Rust.


This really isn't that big a deal. I have written 60k lines of code in my side project, it's required zero lines of unsafe Rust code. I have needed to extend some other OSS libraries I depend on and those have required unsafe code for bindings to C, but that makes sense. Even in Java it's unsafe to write JNI to C.


You came from interpreted Python to compiled OCaml and I suspect that your initial encounter with a compiler is the underlying issue. (For example, being new to FP but very friendly with compilers -- C, C++, Java, C# -- I found the initial OCaml encounter surprisingly pleasant. Haskell was something else though :))

I looked at Rust specs 1.0 when they were released and immediately recognized that Rust is not a 'dating' language, but rather a very serious 'marriage' till death do us part sort of deal. (I would say the same for Haskell, & Scala.) And imho the initial promise of addressing the out of control complexity of C++ was not met.

I have no doubt that Rust is a thoughtful, and powerful, language. But you have to be prepared to exchange vows at the altar.


Of all the things I'd accuse rust of, complex is not one of them. Could you give an example of the complexity? I've not found a single pattern in rust I wouldn't already have to think about in c++, but I have to think about it less because the compiler thinks about it for me. I just can't postpone the confidence until later.


It was an impression, but clearly it is not a 'simple' language at the syntactic or semantic levels.

(You should not read my OP as a negative critique. I'm merely pointing out that anyone expecting 'immediate gratification' with Rust is likely to be disappointed.)

> the compiler thinks about it for me.

Indeed. One valid approach to categorizing programming languages is by considering upfront vs amortized cognitive load.


And imho the initial promise of addressing the out of control complexity of C++ was not met.

I'm not sure. Type traits seem to be much more manageable than how the same thing is handled in C++ template code, no?

Also, if we can bring in ecosystem effects such as cargo...


I guess I should have added that I last touched C++ in '96 and never (ever) have had an urge to look back. :)


If you've never seen template programming, but just C with classes, then yes, I can see how you arrive at this conclusion. C++ in 2017 is very different from what it was in 1996.


I was using STL and RTTI. I know the language has substantially improved. These days I've actually rediscovered the joy of plain C.


Not related to the topic, but how do you find OCaml compared to Python or Rust? Do you think it's worth learning today, as opposed to Rust, for high-level general-purpose programming?


Absolutely! If you're looking for a fast, GC'd, native functional language, I'd say OCaml is a great option. There's actually a very comprehensive set of blog posts migrating an open-source Python-based project to OCaml, which I'd recommend reading if you're looking for a more thorough comparison - http://roscidus.com/blog/blog/2014/06/06/python-to-ocaml-ret...

Personally, I used OCaml while at university to write a programming language (http://jsjs-lang.org) and totally loved it. Real world OCaml is a great resource and I'd definitely recommend giving OCaml a try. My only regret is that I don't get to write OCaml as much as I want to due to the kind of projects I work on, but given that Bucklescript[0] and Reason[1] are moving ahead in full steam, that might change shortly too :)

[0]- https://bloomberg.github.io/bucklescript/

[1] - https://facebook.github.io/reason/


It occupies a slightly different niche to Rust as it has a GC. If you can live with that, it's probably going to be more productive than Rust for you.

It's definitely worth playing around with a language from that family. OCaml is maybe slightly more practical than Haskell and it has some nice resources available - in particular Real World OCaml looks really good: https://realworldocaml.org/


It is well worth learning an ML-family language. In addition to Rust, OCaml or Haskell as njs12345 mentioned, F# or Scala might be options. OCaml is probably more mature than Rust, has more libraries available, and has automatic memory management which takes some of the load off you (especially if you're writing a GUI application, IME). But it's not got Rust's "hot right now" factor, it doesn't have Rust's ease of embedding in other languages, and it doesn't have quite the same focus on high-quality documentation.

If you already know Rust I wouldn't bother learning both - they're very similar languages.


I'll also add to their recommendations that GC'd languages can be used for OS's or system code. You can do that with a number of strategies:

1. Go unsafe on lowest-level operations that simply can't be GC'd. Quite a few of these can be expressed as finite state machines which can be model-checked for correctness. Static analyzers are also really good these days for whatever code can avoid dynamic allocation. The rest of the code is GC'd. That's what the Oberon operating systems do.

2. Use a real-time, low-latency, concurrent GC that costs the system only low single-digit or double-digit performance. Lots of products in embedded Java have special GCs for that. Buy a faster CPU. Trade dollars or performance for productivity and safety.

3. Use a mix of GCs in the system where each one is tuned for the job at hand. You might use a real-time, concurrent one in the kernel. Maybe just reference counting in one that's more about performance. Maybe a straightforward one for stuff that's not performance-critical. Could even use one that's mathematically verified for correctness in that situation. The JX operating system is an example of an OS that mixes up GC strategies.

That's for software. Your options open up a bit if you can also deploy custom hardware, at the least a modified soft-core on an FPGA. The LISP machines, and maybe a Java CPU, of the past often included hardware acceleration for garbage collection. One built the GC right into the memory subsystem/interface of the processor, where neither the processor nor applications had to be aware of its existence. It just did its thing in parallel to execution of the app. A concurrent, low-latency one integrated with virtual-memory hardware shouldn't take much hardware resources to pull off.


It is worth looking at F# as well, which is basically OCaml on the .NET platform. With the advantages of having the entire .NET ecosystem of libraries available for use, being able to use more than one core of your cpu, a solid IDE, and some other features like type providers. There are some disadvantages as well, such as not being able to compile to a native executable.

Anyway the syntax is so similar you can learn one and switch to the other with very little pain.


I never got past the pain-in-the-ass stage of dealing with the OCaml compiler. I hated wrestling with it for something simple. It really slowed me down to no end.

I remember asking on freenode's #ocaml for help with a convoluted error message, and when I was told what it meant and how to fix it, I asked how the person who helped me knew what it meant. He told me he'd taken two semesters of type theory courses at uni. Having to spend a year taking university-level courses in order to effectively program in OCaml seemed like way too much pain for too little gain when there were so many other less painful languages to program in.

Later I discovered Lisp and then Scheme. Both were a sheer joy to use compared to OCaml. I didn't have to wrestle with the type checker, my programs were quick and easy to write and understand, and in their own way the languages were just as powerful as OCaml was. Sure, OCaml had them beat at safety (at least if you were comparing ordinary Lisps and Schemes, not statically typed versions or extensions with more safety features) but it was way, way more painful and slow to program in, for me. I recognize that if you need the safety, then the pain might be worth it. But even then it might be worthwhile to program in a dynamic language like Lisp/Scheme and then once the program matures, rewrite it in a safe language like OCaml. But even then, I think I'd just rather dial up the safety features of Lisp/Scheme or use one of the static versions.

That's just me. I know some people love OCaml. And I'm sure I'm missing some of its wonders that are only apparent when you dive deep enough in to the language. But from what I did learn of it, it just wasn't for me.


What helped me when starting with OCaml is always declaring the types in the function's arguments. The compiler errors started making a lot more sense.


These sorts of articles are starting to pop up a bit more frequently, presumably because Rust is starting to get a bit of traction. The theme is "I know $LOW_LEVEL_LANGUAGE therefore I should be able to program Rust. I spent a few days and couldn't. I don't like Rust."

Unfortunately, things aren't that simple and Rust really is different from other languages. It takes months of steady investment (and yes, frustration), but the payoffs are spectacular. I would urge the author to persevere. In the meantime, the community is very helpful and I'm sure you'll get lots of useful advice on the back of this post.


Agreed. Rust is different from other languages in that it is HARD to learn. It took me 3 projects and thousands of lines of code before I finally felt like I'm okay at Rust. Each project involved tons of compiler fighting that I now know how to avoid or solve.

The problem is that instead of a general technique for solving borrow checker issues, every issue has a different specific technique. Hash map matches (like the article) require the entry API. Multiple mutable pointers into a buffer require either using a typed Vec or Vec::split_off. Cyclic data structures require an Arena or Rc<RefCell<>>. String casting issues sometimes require .map(|x| &x). Global state requires lazy_static. Nice error handling requires error_chain. When you run into problems with "Send" you need a Mutex. The list goes on...

It's like pure lazy functional programming in Haskell in that unlike other languages you can't just pick it up based on previous knowledge and patterns. You need an entirely new toolbox of pure algorithms and data structures, or in this case borrow-safe algorithms and data structures.


> Agreed. Rust is different from other languages in that it is HARD to learn.

Don't you think C/C++ are hard to learn too? C++ has a vast surface area and I think it's unlikely that most people who use it understand much of it. When I code in C, I feel like I have to be paranoid because it's so easy to forget a path that doesn't recover allocated resources. But if I'm not paranoid, I just leak the resources and don't care about it (ever, or until many months or weeks later when the problem's discovered). So the easiness is illusory IMO. It seems to me that all of those other resource leaks are just leaking too slowly for us to discover yet.

> Each project involved tons of compiler fighting that I now know how to avoid or solve.

As frustrating as it is, it's easier than struggling with heisenbugs. Those can easily eat up days trying to reproduce issues.


I think it's easier to get started with C++ because it contains simpler languages. You don't have to start with template metaprogramming, you can be productive with a small subset of C++.


C++ is hard, but C is about the simplest language in modern use.


The issues and solutions you list sound super valuable. Do you know if there is a write up about those issues somewhere or did you just learn through blood and sweat?


Learning Rust With Entirely Too Many Linked Lists (http://cglab.ca/~abeinges/blah/too-many-lists/book/) is the best resource I've found for learning how to create data structures in Rust. It goes over a lot of the pain points of interacting with cyclic data structures.


Thank you for this pointer!


  > Cyclic data structures require an Arena or Rc<RefCell<>>.
Is there a way of creating a cyclic data structure that doesn't involve these types?

I ask because I used the latter recently, after struggling to create lifetimes that were acceptable to the compiler. I only found it when I searched for 'cyclic data structures rust' on Google.

It would have been much nicer if the compiler could have noticed that I was trying to create a cyclic data structure and then have informed me on the different ways of doing so. Perhaps a misunderstanding on my behalf, but the error messages that I previously received felt like they were sending me on a wild goose chase.

I still don't think writing Rust code is hard, but there are definitely improvements which could be made to the compiler error messages.
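For what it's worth, here is a minimal sketch of the Rc<RefCell<>> route (a hypothetical two-node structure, not your code): the backward edge uses Weak so the reference cycle doesn't leak memory, since two strong Rc pointers to each other would keep both nodes alive forever.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// A node that points forward with a strong Rc and backward with a Weak.
struct Node {
    value: i32,
    next: RefCell<Option<Rc<Node>>>,
    prev: RefCell<Weak<Node>>,
}

fn main() {
    let a = Rc::new(Node { value: 1, next: RefCell::new(None), prev: RefCell::new(Weak::new()) });
    let b = Rc::new(Node { value: 2, next: RefCell::new(None), prev: RefCell::new(Weak::new()) });
    // Wire up the cycle at runtime; RefCell moves the borrow checks
    // from compile time to run time.
    *a.next.borrow_mut() = Some(Rc::clone(&b));
    *b.prev.borrow_mut() = Rc::downgrade(&a);
    // Follow the cycle in both directions.
    assert_eq!(a.next.borrow().as_ref().unwrap().value, 2);
    assert_eq!(b.prev.borrow().upgrade().unwrap().value, 1);
}
```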


You could use ids/vec indices instead of directly pointing to, e.g., your successors: "struct Node { successors: Vec<usize> }". This has the obvious disadvantage that deleting nodes is quite hard, though.
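Fleshing that sketch out a bit (the surrounding names here are made up): nodes hold no references to each other, only indices into a Vec the graph owns, so arbitrary cycles compile without any fighting with the borrow checker.

```rust
// Index-based graph: edges are plain usize indices into `nodes`,
// so they carry no ownership or lifetime information.
struct Node {
    successors: Vec<usize>,
}

struct Graph {
    nodes: Vec<Node>,
}

impl Graph {
    fn add_node(&mut self) -> usize {
        self.nodes.push(Node { successors: Vec::new() });
        self.nodes.len() - 1
    }
    fn add_edge(&mut self, from: usize, to: usize) {
        self.nodes[from].successors.push(to);
    }
}

fn main() {
    let mut g = Graph { nodes: Vec::new() };
    let a = g.add_node();
    let b = g.add_node();
    g.add_edge(a, b);
    g.add_edge(b, a); // a cycle is fine: indices are just numbers
    assert_eq!(g.nodes[a].successors, vec![b]);
    assert_eq!(g.nodes[b].successors, vec![a]);
}
```

The flip side, as noted, is that removing a node invalidates or orphans indices, which is exactly the free-space management problem discussed below.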


> This has the obvious disadvantage that deleting nodes is quite hard though

It's hard because now you're in charge of building a malloc(3) implementation. Depending on how important this data structure is, you might need to worry about (1) fragmentation, (2) free-space management, and (3) adjacency and cache effects.

So you've traded one hard problem for another hard problem. Not something I'd call a win.


Just use petgraph. It manages all of this for you.


I work on databases and I ran into similar problems implementing index data structures. While I understand petgraph solves some/all of these problems completely - it would be great if the barrier to writing trees / graphs from scratch in Rust would be lower than it is currently.


Same problem. Working on an OLAP database toolkit (tried to, in Rust)... writing typical database data structures is excruciatingly painful in Rust.

Trees, indexes (including bitmaps), buffer allocators, and even just keeping state in streaming cursors (Volcano model) are all a pain to implement.

In fact, implementing a Cursor trait for around 10 different cursors (SrcView, Join, Project, Compute, ...) became painful because each cursor struct has its own complex lifetime params, and those lifetimes kept trying to leak into the main Cursor trait.

Right now I can't do that without resorting to runtime checks via RefCell. In my case the plans could be entirely materialized at compile time, so it's not really a win.

The situation won't get better until there's some kind of abstraction over lifetimes (HKT?).


Sure, I wouldn't recommend that solution if you have to delete single nodes a lot. But it is quite convenient if you don't need to do that.


Are these strategies for solving borrow checker issues documented anywhere? Sounds like a case for which a best-practice reference would be golden.


I wrote http://manishearth.github.io/blog/2015/05/27/wrapper-types-i... which explains the primitives that can be composed together to do things like this.


> Agreed. Rust is different from other languages in that it is HARD to learn

But do you learn to code in a different paradigm, where instead of iterating on a compilable, runnable piece of code, you iterate on a non-compilable piece of code until "once it compiles, it works"?


Definitely and I'd say the struggle is only there for the first couple of weeks.

This will get shorter as tooling gets better this year.


>It takes months of steady investment (and yes, frustration), but the payoffs are spectacular.

This was the exact same thing I kept reading back when functional programming was coming into vogue - people were pushing for Haskell, saying that purity and laziness were amazing, that the type system could almost prove correctness at compile time, etc. You just needed to meditate deeply on category theory and draw directed graphs to become one with types, and all would be revealed to you.

Hybrid functional languages got adopted in the meantime (F#/Scala/Clojure come to mind) and a lot of FP was adopted by OO languages - it wasn't pure or fancy but it solved problems in an accessible way. Most people still don't use Haskell.

I'm hoping Rust doesn't go down this path, as I want a replacement for C++ badly. But saying "you need to spend a few months getting intimately acquainted with the language" is just not going to work in practice if you're hoping to break into the mainstream.


> I'm hoping Rust doesn't go down this path as I want a replacement for C++ badly.

So that's opposed to C++'s "It takes a few years to get acquainted to the language enough that people would want your stuff in production code"?

Nobody hires junior C++ programmers.


The problem with writing C++ is not so much the language as the problem domain. Writing C++ as "C with some syntax sugar" is possible, and a popular approach among people who don't want to deal with C++'s complexity - and it gets you to results fast. Having something work right away and dealing with the edge cases as you encounter them is preferable in a lot of scenarios - especially while learning, but also as a principle of iterative development. A lot of the time, as you iterate on code, things fundamentally change (requirements change, your interpretation changes, etc.), so all that up-front formalization ends up being an impediment - as much as that makes some people uncomfortable.


Is it common for code that fits those requirements to require C++ or Rust, though? Particularly in terms of performance?


Depends on the industry, from my experience in gamedev - yes.


What's your point?

The parent post specifically said they wanted a replacement for C++; the amount of time and effort it takes to get code right in C++ is a problem.

Rust also has a long learning curve, and it's hard to quickly be productive in.

I don't think there's anything wrong with idly wishing there was some kind of middle ground where you could gradually or partially opt-in to the borrow checker with a nice modern language like rust to get started, and then gradually eliminate your GC parts or whatever.

Many game engines use a high-level language with a GC, like C# or (shudder) Python or Blueprint, backed by C++ for the performance-critical parts.

You write application logic in the scripting language, and anything that's too complex you fall back to C++ for.

If Rust could somehow serve as its own high-level language like that, it would be superb.


My point is that C++ has a learning curve that's as steep as Rust's. Rust is a C++ replacement. So this isn't a disadvantage for Rust.

It may stop Rust from getting widespread adoption, but I see no problem with that. C++ isn't "widely used" either by most meanings. Rust may still largely replace C++ even without magically having a less steep learning curve. (Which it, IMHO, actually has anyway.)


> My point is that C++ has a learning curve that's as steep as Rust's. Rust is a C++ replacement. So this isn't a disadvantage for Rust.

As someone who learned C++ first and later Rust, no it isn't. Rust's learning curve is a hell of a lot higher than C++'s.

This reminds me of when Java was trying to compete with C++, you would have all these Java people making claims that HotSpot was going to take over the world and make Java faster than C/C++. Only it never happened outside of a few specific cases because these people were "fanboys" in every sense of the word.

You like Rust, there's nothing wrong with that. You think it's better than C++, that's your belief and I'll defend your right to believe it.

But stop comparing it to C++. Just about every language over the past 20 years has had its users do the same thing, and __C++ IS STILL KING__.

You can push rust without being unfair to C++ (NO WAY is C++ harder to learn than Rust).


> (NO WAY is C++ harder to learn than Rust)

Depends on what you mean. Writing good, safe, C++? Quite hard. Debatably harder than Rust, but it's debatable, not an absolute statement.

The difference is that C and C++ let you write things like doubly linked lists on day 1 of learning them, whereas Rust won't. But a good doubly linked list is tricky to get right anyway.

I learned C++ first too, and I think I have a handle on "safe" C++ practices, but it's mostly a nebulous set of conventions that shift in the particulars based on the codebase. OTOH in Rust I'm immensely productive even in random codebases.

Most of the cognitive overhead of Rust exists in C++ too, since you still need to reason about how your data is shared, just that C++ spreads the cost out over the learning curve, whereas in Rust you have to deal with it up front.


NO WAY is C++ harder to learn than Rust

I've been programming C++ for 16 years, it's the main language at my day job, and I won't even begin to think I understand templates. But you'll see codebases that use it regularly. Const correctness? rvalue vs lvalue? ODR? SFINAE? RVO? exception safe code? template argument deduction? Then you have C++11, C++14, C++17...

Despite this, I still regularly write code that has memory, string, concurrency and bounds bugs. So does everyone else in the industry, including smarter people with more experience.

It's possible that the parts of Rust I'm not familiar with (e.g. unsafe code, macro system) turn out to be similar sinkholes, but so far, that doesn't seem to be the case.


> Rust's learning curve is a hell of a lot higher than C++'s

I had the exact opposite experience, i.e. Rust allows me to not worry about memory safety and allows me to think on a much closer level of abstraction to my problem domain than C++.


using != learning.


If you just want to write code that runs but that will suddenly break without warning an unknowable time into the future, then C++ has a lower learning curve. If you want to code defensively and write software that isn't going to eat your laundry, then the learning curve for C++ surpasses Rust in steepness, because of the sheer number of things you have to learn not to do.


No student in any language learns by writing production ready code.

Anyone who thinks this is what it means to learn a language is attacking a strawman.


No strawman here. It's incorrect to believe that the process of learning a language has necessarily ended by the time that someone has begun putting code into production.


> No student in any language learns by writing production ready code.

OTOH, many students of programming languages learn by pushing their code (EDIT: ready or not) into production. Oh boy do they learn.


Worked for me when learning Go.


I think that says more about your idea of production ready code than it does the issue under discussion.


I didn't know many useful "Goisms" at the time, but it was certainly code that didn't need drastic refactoring. I just picked up the language basics in a day, and 3 days later I was effectively writing code that I might certainly write better today, but it was functional, extensible and fault-tolerant.


> You just need to meditate deeply on category theory and draw directed graphs to become one with types and all would be revealed to you.

In order to just use Haskell you don't need to meditate about category theory at all.

Some things do require months -- or even years -- of studying. Like, for example, C++, Java or whatever was your first programming language. There is absolutely no getting around that.


Yes, but going from one to the other is relatively straightforward and possible in small steps of progression - Rust forces you to deal with a lot of things up front, which is why people complain. E.g. if you're a C++ developer you'll pick up Java very fast; it will take a while to be super productive, but you can start working really quickly.


That's because a lot of things (let's not debate exactly how many here) translate well between C++ and Java. But my point wasn't about going between relatively similar languages; it was about learning something radically different for the first time, like it was when you learned your first programming language.


But that's not going to work in practice. There are many skilled C++ devs, and C++ has many ingrained legacy issues that another systems programming language could solve - but the bridge between that language and C++ must exist for this to happen. Like in the post: OP is a competent programmer who can solve his problem in C++, so why would he invest months into Rust to do the same thing? If your answer is safety, then OK, that will matter to some, but a lot of people don't care enough about it to pay the cost of switching.


Safety and correctness are big deals. I get that for many people they aren't a high priority. Such is life.

If someone thinks they won't gain anything by going from C++ to Rust, then no effort to learn it will be worth it. Obviously the Rust devs think there is a benefit.


C++ and Java are a bad example as they are unusually similar. Consider moving between C++, Python, and Javascript: on that scale, Rust doesn't seem so far out.


>Consider moving between C++, Python, and Javascript

I think those will be trivial in that direction, harder in the other - but that's not even relevant. If Rust aims to occupy C++'s domain then it needs to be very easy to get C++ developers on board, and from what I've seen so far that's not quite the case. I'm not sure what can be done about it, though - maybe emphasize that unsafe is not a dirty word but a valid approach when doing low-level data structure work like OP's? Create an unsafe version, then refactor to safe code as you confirm it does what you expect and learn more about the borrow checker, instead of having to absorb it all at once.


I doubt moving from Java to C++ would be that trivial, Java is a comparatively safe language without many of the issues that will trip you up in C++.


From personal experience, it's not that bad. C++ is "Java plus you need to worry about these other things", and it pays off in power and speed. They're not so similar in safety/scope terms, but the structure and design have enough in common that you get plenty of spare brainpower to worry about managing C++.

It was, for instance, a way more natural transition than "Java and C++ to OCaml". Less educational, but as usual that trades against easy.


In particular, the lack of pointers in Java would probably annoy the article author as much as Rust does.


Personally (not the author), I am more annoyed by the gutted, single-inheritance, no-traits type system.

You have numeric pointers into arrays, they're called indices.


> or whatever was your first programming language.

People who complain about how difficult rust is to learn certainly aren't learning rust as their first programming language.


That's my point. I think people complaining that Rust (or Haskell, or whichever paradigm shift) "takes too long to learn" have forgotten what it was to learn their first language. It took them long, because some things take long. Learning something different takes time, but it's easy to forget this once you have a couple of languages under your belt.


> I think people complaining that Rust (or Haskell, or whichever paradigm shift) "takes too long to learn" have forgotten what it was to learn their first language

Because they are required to be productive fast. A professional usually doesn't learn a new language just for the sake of it. If the language doesn't yield instant benefits then most people aren't going to bother learning it.

If a steep learning curve is expected for rust then the language will never be popular around the people who could potentially make use of it, i.e. C or C++ developers.


C++ has a pretty steep learning curve, yet people use it.

Many things in tech are tradeoffs. Sometimes the additional tools a language brings to the table are worth the learning curve, sometimes they aren't. There are people in the industry for which FP (or Rust, or whatever) is worth it. For you it isn't. That's life.


> C++ has a pretty steep learning curve, yet people use it.

C++ has 30+ years of libraries, so does C. To even think you can compare Rust and C/C++ on that matter is not realistic.


I think that OP's point is that Haskell or Rust introduce enough of a different paradigm that it is worth thinking about them anew, rather than trying to map your old programming language knowledge to Rust or Haskell directly.


Then why even bother learning a new language if you have to deliver something fast and you know one already?


I think the problems with Haskell are quite different than the ones with Rust.

Haskell suffers more from being a research playground: bad documentation for both the language and its libraries, confronting you with many unusual concepts (type theory, monads, laziness, ...) and, as a pretty old language, a lot of legacy cruft that makes many things awkward (string handling is the prime example).

Rust wants to be a widely used language and while it has some aspects that are not trivial and quite often make it verbose and awkward, in reality it's not really that different from, let's say C++, apart from the borrow checker and lack of classes.

I don't think much will change about Haskell, but I have hope for Rust that it will steadily improve.


> Rust wants to be a widely used language

I am sure they want that. But so many responses from Rust enthusiasts to any article/blog post that's less than flattering get very aggressive. Maybe Steve Jobs got away with 'You're holding it wrong', but I doubt incoming Rust users will take it that way.


The aggressiveness is unfortunate (and counterproductive), but comes from the understandable perspective that too much bad press could kill the language before it even gets off the ground.

The Rust thesis is: "You should care about all these things you don't currently care about, and Rust helps (makes) you care about them." In order to enjoy Rust, you need to thoroughly buy in to that idea. Any significant deviation from that thesis defeats the very purpose of the language.

But of course most people aren't going to buy in, because it's an idea predicated on making their lives harder (in the short term) than they currently are. Rustaceans believe the long-term benefits are massive, but new users cannot possibly believe that except by taking it on faith. This creates a conflict. The Rustaceans need everyone to give the language faith over an extended period of time, and anyone giving up their faith after a short trial not only drops out of the ecosystem themselves, but possibly damages the faith of others. That's a huge problem, because Rustaceans are hoping their language will one day take over a significant portion of the systems/embedded development space. That can't happen if people sour on the language so quickly. So when Rustaceans see someone slam their language after giving it an "honest try," they get upset because the damage done is palpable.

Of course, that's the wrong strategy, as it just drives people further away from the language. The right approach is to be inhumanly kind and accepting so that the frustrations are counterbalanced by support. The good news is that the Rust community at large seems to be quite nice; it's just the angry ones that show up in blog comments.


Honestly, Hacker News is too general to base your opinion of the "Rust community" on, try #rust-beginners on irc.mozilla.org or users.rust-lang.org and I think you'll find one of the most welcoming communities on the internet.


When the blub programmers look up the ladder, they see a bunch of type theory, monads, and laziness. They don't see the use for these features as they think in blub.


If you tried to teach basic arithmetic in elementary school by means of axiomatic set theory, you'd put entire generations of people off of math. It is not a virtue to require people to take the hardest path to a concept in the name of purity, and it is certainly not a virtue to be condescending in the failure of others to do so.

Take as an example call/cc. If I tried to explain that to a--as you put it--blub programmer, I would probably get glazed eyes in response. If I instead tried to explain it via means of a yield operator (à la Python), I would much more likely get someone who could think of use cases where they would want to use it in their own code. The limitations are not in the blub programmer, it's in the teacher for assuming that blub programmers are incapable of learning hard concepts.


The type system in Haskell and other ML family languages is incredibly powerful, but it takes a long time for most to intuitively grasp functors, monads, monad transformers,...

It's a huge entry barrier.


Exactly what I rail against in Rust. What do you see when you toss a coin friendo?


The clincher for me with respect to rust is that I see a steady stream of useful software being developed in it.

I never really saw that in Haskell and I've heard people talking about it for at least 15 years. That indicates that something is missing.


There is some pretty damn useful software written in haskell. For example, I started using Pandoc long before I knew it was written in Haskell, and for me, it still dwarfs the utility of any specific tool made in Rust. That said, Haskell had a lot longer to get there, so Rust is still moving a lot faster on that front.


It's been the same with lisp for 40/50 years. A lot of evangelists but not a lot of people doing useful things with it. Rails did more for FP than lisp evangelists ever did.


This makes me hopeful about Rust as well.


F# and Scala are halfway to Haskell - maybe 3/4 of the way even - compared to the languages that were popular at the time (and the more experienced people get the more Haskelly their code tends to become). Hell, those popular languages are about 1/3 of the way to Haskell now. Any serious language these days has lambdas, map/reduce/filter, and type inference; most have list comprehensions and pattern-matching.

You're right that there needs to be a path up to there though. The reason Scala really worked for me is that it let me make my way up to a very Haskell-like style while remaining productive every step of the way.


All the "hard" things a user learns in Rust have to be learned in C++, too. It's just some things are learned in a different way in Rust (e.g. fighting with the borrow checker) than they would be in C++ (e.g. proper use of const(expr), references, moves, etc.).


To get any programming done you have to learn certain things; you can't complain that it takes weeks to learn a new programming language and months to assimilate its model of thinking.

> But saying "you need to spend a few months to get intimately acquainted with the language" is just not going to work in practice if you're hoping to break into the mainstream.

You mean it's okay for everyone to struggle to work with JavaScript for years and years trying to come up with an OO system and a module system and trying to just bundle the files for the browser, but it's not okay to spend months on getting intimately acquainted with the language?

Who cares if Rust becomes mainstream, we only need the best open source code to use it (like Firefox and the other C/C++ libs that are thinking of porting over). We don't need it to be mainstream, we just need it to be possible to learn it within a reasonable timeframe (less than a year).


> You mean it's okay for everyone to struggle to work with JavaScript for years and years trying to come up with an OO system and a module system and trying to just bundle the files for the browser, but it's not okay to spend months on getting intimately acquainted with the language?

You're saying that like those things are a contrast, not two incarnations of the same issue. Safe-and-usable is the holy grail for exactly this reason - right now we have usable-but-unsafe. Javascript isn't a few months to get acquainted: it's a few days to get passable results and a lifetime to get good ones.

So there's a huge need for a better mainstream tool, because the alternative is letting the existing ones endure. That doesn't have to be Rust - there's plenty of space for languages that are hard, but good for key applications like safe browsers. But it has to be something - lecturing people doesn't actually reduce reliance on easy-to-learn, hard-to-secure options.


> You just need to meditate deeply on category theory and draw direction graphs to become one with types and all would be revealed to you.

Luckily, Rust's community is much more pragmatic - but their goals are still to create something substantially different from existing languages because existing languages are not enough to guarantee what Rust guarantees.


Yeah, that's what I see as an issue as well. I was hoping it wouldn't be, but the deeper I go into Rust the more it shows up. On one hand, a really simple solution would be to make unsafe a first-class citizen when it makes sense. This would let you avoid "pushing square pegs into round holes" when the borrow checker doesn't capture your problem nicely, and it would be trivial for C++ devs to get on board.

On the other hand, this defeats the goal of the language - being as safe as possible - because their domain (browsers) is very sensitive to memory safety issues. Promoting an ecosystem of unsafe libraries wrapped in safe facades would be against their goals.

So I guess right now, because of its design decisions, Rust isn't simply the upgrade to C++ that I want, but more of a language that made different trade-offs to solve specific problems. It remains to be seen whether those trade-offs are worth taking to be rid of C++'s legacy garbage.


> On one hand a really simple solution would be to make unsafe a first class citizen when it makes sense

I have pushed this several times in the past, but there's always been a lot of resistance to the idea. Though I've always phrased it as "let's allow the safety checks to be turned off", which isn't necessarily the best phrasing...

I wonder if it'd be better to just push to make unsafe easier to use: add sugar for ptr::offset, add a -> operator, relax the restrictions on casts, and so on. When I do OpenGL programming in Rust I drop into unsafe quite regularly, and it feels just like C, except for some annoying papercuts. It'd be nice to get rid of those papercuts.
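For illustration, a tiny sketch of the papercuts mentioned above (the array contents are made up): there is no `p[i]` or `p->field` sugar for raw pointers today, so pointer arithmetic goes through methods like `ptr::offset`.

```rust
fn main() {
    let buf = [10i32, 20, 30];
    let p = buf.as_ptr();
    unsafe {
        // No indexing or `->` sugar for raw pointers; arithmetic goes
        // through `offset`, and the dereference requires `unsafe`.
        assert_eq!(*p.offset(2), 30);
    }
}
```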


Or better extend the language to allow making code partially safe in many ways. This would make the wrappers much better.

Make it easy to reason about useful kinds of partial safety. Make it easier to define properties of external systems that Rust does not control.


Can you elaborate this into a concrete proposal?


There might be an easy solution: they could provide more flexibility around the safety features, kind of like Perl did it, i.e. let anyone disable/enable a particular strict feature for a particular scope, for example "no borrowcheck;" or something. This would make things easier and more flexible for everyone. Newcomers would be able to gradually learn more about the language and still get things done in the meantime, and power users would understand where satisfying the compiler is not worth it and would be more productive as a result, without worrying about an overly broad unsafe.


They have that with "unsafe" blocks, but like I said, promoting it as something you should use when things don't fit the borrow checker goes against their primary goal of making things safe. I guess there are two camps: one that wants compile-time safety guarantees - those people made Rust - and one that wants a C++ replacement with modern tooling and no legacy garbage. The latter are looking at Rust and getting discouraged by safety trade-offs they are paying for but don't really care about. And every Rust tutorial I've seen focuses on hammering this aspect instead of going: "OK, here is the equivalent of this C++ code using unsafe - you now get a sane build system, package manager, modules, fancy syntax. Now, if you want, here's how to make it safe."


You're almost certainly not looking for Rust if you don't want your language to prevent a vast variety of bugs, and instead want something that allows and encourages you to design systems that make it easy to shoot yourself in the foot (which is what all C++ codebases I've ever seen have grown into). I don't think language developers are interested in making such a language in general, as it's not interesting - you'd just be rehashing ideas from multiple decades ago with new syntax, and not making anything particularly easier to do.


>and not making anything particularly easier to do

Have you used C++ in the real world? Dealt with its insane build systems, build times, obscure semantics, etc.? Not to mention there's nothing close to a sane package manager.

There's plenty of value of just rehashing the lessons learned in last 20 years of language design to a systems language.


    I don't think language developers are interested in making
    such a language in general...
https://inductive.no/jai/

:P

I'd argue Go came from a desire to have a language that was completely uninteresting in terms of 'interesting ideas', and really just exists to Get Stuff Done.

I think more people are interested in that sort of thing than you imagine. Clearly.


Jai is designed on a bunch of false assumptions, which makes it hard to get excited about.

Or more precisely, the designer of Jai treats C++ as "C with classes" and argues from that, which isn't fair. He also misrepresented a few things in regards to Rust, which he doesn't even know but comments on, etc.


As I understand it, you can't disable just the borrow checker in places where you don't need it.


It's subtle. Unsafe Rust is a superset of safe Rust; that is, all valid safe Rust code is valid unsafe Rust code.

  fn foo() {
      unsafe {
          let x = &5; // this is still going to get borrow checked
      }
  }
Since it's a superset, it gives you access to new tools, that don't interact with the borrow checker, rather than "turning the borrow checker off".

  fn foo() {
      unsafe {
          let x = *const 5; // this is not going to get borrow checked
      }
  }


(*const is not valid syntax: the way to not-borrow something is to coerce/cast it to an unsafe pointer within the statement, so that the borrow ends at the ;, e.g. give x an explicit raw pointer type in the first example.)
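As a minimal sketch of what the parent describes (a made-up example, not the original code): coercing the reference to a raw pointer ends the borrow at that statement, and only the dereference needs the unsafe block.

```rust
fn main() {
    let x = 5;
    // Creating the raw pointer is safe; the borrow ends once the
    // reference has been coerced to *const i32.
    let p: *const i32 = &x;
    unsafe {
        // Dereferencing a raw pointer is the part that requires `unsafe`,
        // and it is not borrow checked.
        assert_eq!(*p, 5);
    }
}
```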


gah, i always mess this up, haha


So you're saying it's perfectly feasible to become an expert C++ programmer over a weekend, but it's completely impossible to learn Haskell?


OP says he's a C# programmer and doing the solution in C++ was straightforward and he had the time to test it + look for C++ patterns.

As an average-joe programmer, you'll spend your first weeks in Haskell getting your brain wrapped around lazy eval being the default.


I'm a big fan of hybrid functionals. I think being able to fall back to somewhat imperative code makes sense on occasion. Some things are better expressed in either paradigm, and that's OK.


You can do that perfectly fine even in a pure language like Haskell.


Or maybe Haskell is also a hybrid? I know about the type theoretic accounting tricks used to stay pure, but when I really need unboxed mutable arrays and several tight loops, I just use the ST monad and write code that is psychologically completely imperative. And it compiles to pretty fast executables too, especially using the LLVM backend.


Yeah, that's what I meant.

You can't tell from the type signature alone if a pure function uses ST though, so I'd say that counts as pure :)


Exactly. The fact that rust is easier than most low level languages made the high level language crowd forget that bare metal programming IS HARD. Try to do a big project in C++ or C and you'll see that making something that is production worthy takes months.

Besides, as a Python trainer I can make you productive in it in a few days. But it will still take a lot of time for you to be good at it. The only difference with rust is that the first "reward stimuli" comes later down the road because you need to be better at it than at Python to make use of it.


> It takes months of steady investment (and yes, frustration), but the payoffs are spectacular.

In my experience, most developers can't or won't put in this sort of investment. Which leads me to wonder what fields/niches will Rust land in?

This might change if, as other commenters have stated, it is taught as a first or second language, but that doesn't seem likely anytime soon...


Rust's niche seems very clear: C++ programmers that need better tools. This is an audience that should be willing to make a time investment, and understands the trade-off, for the most part.


I disagree to some extent, for a couple of reasons.

First, C++ programmers in aggregate have to consider more than the choice of language for their projects. Odds are they work for somebody else, on a team, an established project, whatever. In these cases "C++ programmers need better tools" doesn't mean "toss the bath out with the water." Rust isn't necessarily a good move in established products, in other words.

Second, Rust would do better, in my opinion, to target folks who are new to systems programming and, for whatever (mostly invalid, in my opinion) reasons, are intimidated by C or C++. Rust is very powerful and in many ways I like it much better than C++. I think Rust offers a great way for these people to have a gentle introduction to the world of systems programming. I say "gentle" because a good portion of the "pain" of learning is front-loaded into getting a build, rather than back-loaded into futzing about with gdb (although that would be something these folks will miss out on--who doesn't like a good session with gdb with a satisfying bug squash at the end?). Get them while they're young, as they say.


I don't disagree - my statement was also directed to people new to this field.

The overhead for existing C++ teams is learning the language, and (potentially AFAIK) interfacing with C++ code that has no good C style API. But Rust seems to tackle even the latter quite well. I don't think you'll get around the learning curve for a new language.


> it is taught as a first or second language

This is interesting. Many universities teach Java, C, C++ or Python as a first language. The headache that many first year students suffer from these basic (relatively speaking) languages is readily apparent. Rust is very strict, and it requires an entirely different mindset to use. I wonder how this would fare with students who have a clean slate.


> I wonder how this would fare with students who have a clean slate.

I suspect that the % of students who have difficulties won't be much different, overall. There may be a different distribution of when they run into trouble.

The interesting question is whether the group of students who have problems with other first programming languages is different from the group that have problems with Rust-as-a-first-language...


Yeah, I've seen a number of posts like this for go as well. I wonder if it may be useful to start programming in these languages with a more experienced buddy to ask questions of and do some code reviews, to reduce time spent banging one's head on the keyboard, and get better acquainted with the language's idioms.


I never wrote a single line of rust, but when a language is hard to learn, the compiler error messages are probably just bad in terms of comprehensibility and offer the user no direct solution.


    when a language is hard to learn, the compiler error 
    messages are probably just bad in terms of 
    comprehensibility and offer the user no direct solution.
I don't want to sound snarky but Rust really does have amazing error messages, and they literally do offer direct solutions/suggestions using literal examples and raw samples of your source code.

It even offers `rustc --explain EXXXX` (where XXXX is an error code), which gives an in-terminal deep dive + examples on why this is an error / how it is failing.

    I never wrote a single line of rust
:|


So what do you think, why is it so hard to learn rust, if it is not about the compiler messages?


I spent the last week working with Rust and learning from the Rustaceans. It's been such a great experience. I am really glad that I dedicated some time to it. In a short amount of time, I've already dramatically improved some computationally expensive python code by more than 10x, largely with the help of the community. Further, Rust is already making me a better programmer. If you are skeptical, I challenge you to dedicate yourself completely to a solid week crash course of Rust and experience this for yourself. If you struggle, don't give up and blog about why you hate Rust. Join #rust-beginners on irc.mozilla.org and ask for help.


I know about GDB, that ain’t a good debugging experience

I can't take this article seriously after reading that and after reading this:

So I spent a few evenings with Rust

How long did it take you to learn C, C++, Java, Python, Ruby, Perl or other languages? MONTHS. Many many many months. Sure you could get something up and running in a few evenings but after many months did you realize that earlier code could be better. Each language has its own obstacles and spending a few evenings isn't enough time to get around them.

I gave Haskell a pass but after checking out Elm I can see more of its power. I gave Rust a pass because the language was changing but now that it's stable I expect to sit down for a few months at least learning it.


I agree with him about GDB.


I'm not coming from C++, but my experience with Rust has been:

* Read the book, got some of the concepts but most of it went over my head.

* Did nothing, maybe thought about it a little bit.

* Two weeks later: read the book again. It made more sense but some of it is still lost on me.

* Actually wrote some Rust. Was hard, frequently consulted the book, the "by example" book, r/Rust, and google.

* Two weeks later, it's much easier. I'm still consulting outside resources frequently, but I generally know how to fix problems as they arise and my mental model of the how the language executes is becoming more and more accurate.

I've spent on average 30 minutes a day working on a pretty simple Rust program for about a month now. The advantages of Rust so far have been: if it compiles, it works. Extremely easy to refactor: for my main data structure, I've switched between vecs and arrays back and forth probably 4 times to get a feel for the differences. This is generally really easy to do because they both implement the iterator trait. In the other languages I use (at work), switching between one type of collection and another would be a huge time sink and involve a lot of annoying (though trivial) refactoring.

The main advantage of Rust for me (and the reason why I decided to check it out vs go) is that I don't have a lot of experience thinking about memory at a low level. The dynamic languages I work with abstract that away from the programmer. I wanted something that would force me to think about how and where memory is being allocated, and whether I want to pass by reference or value. For someone who has spent a lot of time with C/C++ or just someone with a background in CS/programming languages, this may all seem obvious to you. But there's a lot of programmers out there who learned with dynamic languages and don't have a good mental model for memory. Rust is a good instructional tool for us.

I don't really have an opinion on whether Rust is "production ready." It's being developed rapidly though, so I think it'll get there.


Yes, it is hard to use mutable shared values in Rust. This is exactly the point.


So how do you avoid it?


Well, you either uniquely mutate, or keep the value immutable.

Usually you construct the pipeline to switch between these as you need. If you need to share the result from one mutation to the other, you can do it by passing a simple immutable value that communicates the state change to other mutation, but don't try to do both at the same time. This is simply a good practice enforced at compile time. Of course, sometimes these rules prevent some valid cases, but overall it is worth it.

If we are talking about HashMaps, the Rust std lib has a useful Entry API for get-or-insert use cases. In rare cases where you need to share the value between different parts of the system, there are primitives you can wrap your value in that enable reference counting, as well as interior mutation (even if the wrapper is immutable, it provides a safe API for mutation). In cases where concurrency is needed, there are mutexes, channels, and multiple libraries to parallelize the code (like rayon or crossbeam).


The problem is there is nothing like "atomic mutable", an equivalent of C++ std::atomic with at least sequential consistency.

So you often end up dropping all safety just to have shared mutability.


This is wrong. There is a std::sync::atomic module, along with types like Mutex and RwLock.


There's no single solution to the problem, which is perhaps one reason why folks new to Rust might stumble over it. You can't just get two aliased mutable pointers to the same region of memory in safe code. Sometimes the borrow checker can't quite see through everything. Consider this trivial example, which will fail the borrow checker:

    let mut owned = vec![1, 2];
    let x = &mut owned[0];
    let y = &mut owned[1];
That this fails might come as a surprise to a lot of folks because it's easy for a human to see that it's perfectly safe: `x` and `y` do not refer to the same point in memory, but the borrow checker still forbids it.

What's the solution to this problem? Well, it depends on what problem you're trying to solve. One popular approach is to change the representation of your reference from a pointer to an index:

    let mut owned = vec![1, 2];
    let ix = 0;
    let iy = 1;
Now, in order to dereference your reference, you need to do `&mut owned[ix]` instead of simply `*x`. There are various trade offs to this choice of course, including the fact that `&mut owned[ix]` will likely be bounds checked. (Bounds checking could be turned off by using `unsafe { owned.get_unchecked_mut(ix) }`.)

But there are more solutions to this problem! Here's another:

    use std::cell::Cell;
    
    let owned = vec![Cell::new(1), Cell::new(2)];
    let x = &owned[0];
    let y = &owned[1];
    
    x.set(10);
    y.set(20);
    
    println!("{:?}", owned);
This uses a concept known as "interior mutability" to permit mutating through a borrowed reference. `Cell` only works for `Copy` types ("plain old data"), whereas you'd need to use `RefCell` for non-copy types like `Vec<T>` or `String`.

Finally, we can come full circle and use the `split_at_mut` API on slices to get two mutable views into the same slice. This is a good example of something that is ultimately implemented using `unsafe`, but exposes a `safe` interface:

    let mut owned = vec![1, 2];
    {
        let (xs, ys) = owned.split_at_mut(1);
        let x = &mut xs[0];
        let y = &mut ys[0];
        *x = 10;
        *y = 20;
        // The added scope is used so that the mutable
        // borrow of `owned` is done before borrowing it
        // again to print its contents below.
    }
    println!("{:?}", owned);
There are various trade offs to each of these approaches and which one you use depends on the problem you're trying to solve. Of course, the criticism of "I just want to use mutable pointers dammit" is valid, because Rust will, for the most part, force you to think of another way of solving your problem (unless you're OK using `unsafe` and raw pointers). We occasionally pay a price for it when the borrow checker isn't quite smart enough. In the example I've shown in this comment, it's obvious what the borrow checker should do, but in a real program, it's rarely so simple.


> That this fails might come as a surprise to a lot of folks because it's easy for a human to see that it's perfectly safe.

So what's still unclear to me is, is this failing in the borrow checker "by design", or is it something that will be "improved"?

Also, _why_ does it fail in the borrow checker? Is it correct that it fails, or is it a deficiency of the current borrow checker?


If "correct" corresponds to "if and only if safe," then no, it's not correct because the code I posted in my previous comment is safe, but the borrow checker rejects it.

I only posted samples to demonstrate working with the borrow checker. Think about this code:

    let mut owned = vec![1, 2];
    let x = &mut owned[i];
    let y = &mut owned[j];
What are the values of `i` and `j`? If they are equivalent, then this code is unsafe, because it would permit mutable aliases. Therefore, the borrow checker would need to prove something about what values of `i` and `j` are legal at this particular point in the program. (Errmmm, dependent types anyone?)

With that said, if you do have code like my original sample (where the indices are constants and trivially not equal), then perhaps it would be reasonable to say that is a deficiency of the borrow checker as it currently exists that could be feasibly fixed. It's not clear what the impact of fixing that deficiency would be. For example, I don't think it would have made the OP's life any better.


When in doubt be extra safe.


I guess that is equivalent to 'if it might be broke, break it'?


Let's see: you can be extra safe and deny certain programs, or be more "free" and break code when you later introduce a safer borrow checker.

It would literally break forward compatibility.


You know I really don't care for a pl that tells me how to do things. I fought python for a long time on a simple issue like indentation. C like syntax is best. RUST is stupid syntactically...surely you've noticed it's a leap. I'd use limbo before Rust.


It's a limitation of the borrow checker, but it's more of a limitation of simple, orthogonal design than a hard technical limitation of the implementation.

It could have special cases for many things, such as letting the disjoint indexes pass. But that would be a special case, because it can't always prove that the indexes are disjoint. So it leaves it up to libraries to provide APIs that do that. This is why `split_at_mut()`, demonstrated in the grandparent post, exists.


It's a limitation of the current borrow checker.


It would be relatively easy to overcome this limitation, but it would involve making slices "special" to the compiler, which also means another special rule for the programmer to use.

It would be preferable if you could make your own slice-like types which supported indexing, but to do that you need a mechanism to tell the compiler that your `IndexMut` implementation returns disjoint references for different indices, and that requires a mechanism for the compiler to determine whether two indices are the same (in case your indices are not integers).


Use unsafe or wait for the improved borrow-checker?


shared mutable values are also called globals


Only when they are globals, though.


>We are talking about allocating memory, in a system level language, and unless you are jumping through hops[sic], there is just no way to do that.

To be fair, if you want to call malloc it is still just as accessible[1][2] as it is in C. If you search the Rust documentation for malloc `libc::malloc` is even the first result. If you were to use that on stable Rust the error message will even tell you to link your program to the libc published on crates.io (which is just adding a line in your Cargo.toml file.)

I'm not sure I'd call that jumping through hoops. I'd love if Rust provided easy access to its own allocator on the stable release channel; but I'm willing to wait for them to get it right, these things take time.[3]

[1]: https://doc.rust-lang.org/libc/x86_64-pc-windows-msvc/libc/f...

[2]: https://is.gd/cgkX2R

[3]: https://github.com/pnkfelix/rfcs/blob/117e5fca0988a1b0c4b6e4...


Also, most folks would use `Box` with `Box::into_raw` (also `Vec::with_capacity` for dynamically sized stuff), which works fine.

RawVec is an implementation detail; Vec exposes most of what you need for dealing with raw memory.


The first few of these types of posts were OK. But now it's a bit ridiculous. You spent 2 days trying a new language and you didn't like it, your opinion means nothing.


I don't typically respond to comments like this anywhere, but I think this kind of view is dangerous to have. And it's one the Rust community at large seems to try to shy away from.

This kind of attitude towards people trying out a new language is damaging to the person trying the new language and to people who are part of that languages community. The former because it will give them the notion that the community is hostile towards them and newcomers in general and the latter because it can enforce a dogmatic view of the language ("you do it our way or no way!" kind of feel).

A language's ability to have people pick it up easily is important. In Rust's case there is definitely always going to be a bit of a hump for people to get over, but the concern put forth in this article seems like a legitimate one, but one the Rust community seems to be aware of. Acknowledging and helping people with these problems (while searching for a way to potentially avoid such a situation for the next newcomer) is important and it's critical that you not simply dismiss someone's opinion because they don't know or understand "the right way to do it" yet.


For a language like Rust that puts all the pain up-front, I want to hear whether the pain is worth it. But someone who spends only a few days can't give you a weighing of whether the pain is worth it.

I'm reluctant to even listen to someone's Amazon product opinion if they've only used it 2 days.


I certainly agree that an opinion piece like this has less weight than one which comes from someone who has spent a considerable amount of time with a language. However, saying that an opinion formed after only a few days is "worthless" is taking it too far.

Plus if one completely ignores pieces like this then they are more likely to fall into a rut where pain points like this are forgotten or ignored completely. While the actual pain of getting used to the Rust way of doing things may be harder to mitigate perhaps there is some insight in opinions like the ones presented in the blog here which can lead to a way to help people get over that pain in a quicker/less painful way.


This is particularly bad considering who it's from: his claims to fame include NHibernate and RavenDB, both of which take much longer than a few evenings to learn, with more questionable value as well.


I do not know Rust, but I know C and some Haskell, and AFAIK one should build a program in Rust differently than in C/C++ (which is probably the author's background).

Get out a quick, correct (and this cannot be emphasized enough) version (obviously one has to learn the language here) and then optimize (when you have numbers). The author did not seem to get a working version with Rust (sorry if I am wrong on this), and tried an approach that he knows would work for C/C++. But maybe another approach would have been performant enough already?


Rust provides something that few if any other languages do: memory safety without GC. It is very obvious that this comes at a non-negligible cost in development effort. If you need what Rust provides -- and an important class of software certainly does -- you should be more than happy to pay that extra cost. If you don't, there are plenty of alternatives. But exchanging effort for rather unique guarantees is the very explicit tradeoff that Rust makes. That few other languages choose to make this tradeoff is what makes Rust so valuable.


To be clear, I think the data supports that Rust comes with a non-negligible learning cost. Whether that cost is continually paid isn't clear yet. (I personally think the answer is "no.")


I agree that there's no data, but I would be very surprised if the answer is indeed no, as there are at least good theoretical reasons to assume extra cost. Proving anything (let alone to a compiler) comes at a computational complexity cost (think of the proof as a certificate). In principle, a large number of proofs may be "free" in the sense that no more information than the programmer normally supplies the compiler is necessary. But if this is true in every case, that is very surprising.


It's not very surprising to me. Paraphrasing, "once I got past the initial hurdle of the borrow checker, programming in Rust became much easier," is frequently uttered in the Rust community.

I'm not trying to make a very nuanced claim here. All I'm saying is that there's clearly a lot of people who struggle while learning Rust, but that this is very different than paying an eternal development cost similar to that struggle. We should be careful not to conflate them.

(I did not mean to get into a broader philosophical discussion about using a type system. That's a different conversation IMO, and I disagree with your analysis because you only focus on the upfront cost while neglecting the maintenance costs in the absence of such proofs.)


> programming in Rust became much easier

Much easier than what? Writing C? I believe you.

I agree with you about maintenance, and I agree that there are two separate issues, but still, extra proof must invariably carry some extra mental burden. There are no free proofs just as there are no free prime factors. This isn't a philosophical issue, but a computational one. But it's also true that the mental burden might offset a different sort of mental burden. So whether there is additional work or not is unclear, but I think it's pretty clear that the nature of the work is different.


I don't disagree. But I find this to be a very different claim than the one I initially protested. To re-iterate: I'm trying to point out that there's a difference between learning costs and eternal development costs. The OP is a data point in support of the former, not the latter.

I'm not going to debate the trade offs of type systems with you. It's been done to death.


I doubt there would be much debate, as we're likely on the same side :)


Writing libraries, especially implementations of well-known algorithms and data structures, is a bad way to learn most languages. Technically, it's possible, but I think it skews the experience strongly toward languages that are similar to what you've worked with before and languages that are more permissive.

I suggest writing applications, or libraries closer to applications (e.g. a specific library that parses a special data format important to your domain), where you have a clear goal but not a clear path to the goal in mind. In other words, hacking.

Let's say the author had finished the Trie. What good experience could he possibly have? It's not going to magically run faster than the C++ version or use less memory. It's a well-known algorithm so there probably aren't subtle bugs in the C++ version that rust would have caught. The best you could possibly say is that it was faster to write -- which is exactly why it skews the experience toward familiar and permissive languages.

I am trying to learn rust (albeit very slowly) by doing some hacking to make it work better for postgres extensions. I know postgres, and I think it would be cool if it were easier to write postgres extensions in rust. Of course you can today, but you'd have to use a lot of unsafe FFI calls.


If you just want to allocate a chunk of memory for random use like you would in C, it can't be guaranteed to be safe. So I would hope you need to jump through hoops.


It is a rust weakness that it does not have "partial unsafe" that guarantees some things. Makes many kinds of code unnecessarily hard to reason about.


Could the issues of Rust be rectified with a better theorem prover that goes past "two things are touching this"? I don't think I'd mind if the compiler said "your code will not work because eventually it will modify the same bytes at the same time and create an unpredictable state", but I will mind if it says "you're no good because you have two mutable references".


Having a more powerful theorem prover is not always better: it makes it harder for the programmer to know whether a piece of code will work, and you can start getting non-local effects, where a change in one part of the code can break a (seemingly) unrelated part.

However, what you're asking for is very similar to what types like `RefCell` and `RwLock` do - they effectively delay borrow-checking until runtime. Now you can have references to a `RefCell` in multiple places in the code, and they can all get mutable access to the interior, just not at the same time.

Having said that, I find it interesting how rarely things like `RefCell` are needed in practice: there's almost always a better way to satisfy the borrow checker, and it usually results in much cleaner code.


> Having a more powerful theorem prover is not always better: it makes it harder for the programmer to know whether a piece of code will work, and you can start getting non-local effects, where a change in one part of the code can break a (seemingly) unrelated part.

I think there are two solutions to that problem. The first is to involve HCI people in the development of the language to see how they can improve the output and communication from the theorem prover to the programmer. I imagine that, some time in the future, a very powerful theorem prover will make its way into a "general" language and will bring some cool effects. The theorem prover I'd like to see will attempt to prove all of the conjectures used in the program and, when not possible, will say "possibly invalid code. Please solve this conjecture", where this would then be handed to a mathematician who has to complete the missing steps to see if it works; if not, we tell the compiler this isn't going to work and it now generates errors for this.

The other solution is to limit the shared state of every program using very strict segregation of all moving parts and an attempt to limit use of side-effect-causing operations.

This second one is nothing new; I mean, the original LISP crowd had these ideas down and there's nothing new here.

I don't think I've ever worked with a compiler that has a built-in theorem prover, possibly a shared library of proven work, and a way in the language to aid the prover. For instance, this type of function definition:

    int(int numerator, int denominator) where (denominator != 0) divide {
        return numerator / denominator
    }
And even more so, I've not heard of unobtrusive theorem provers that keep out of the programmer's way while still being useful. I think an unproven relation should be a warning. It's not good, but the program can still run, and you (you being not me but the math people) can come up after me and prove all of my work or tell me I'm an idiot and to fix it.


This example works in SBCL, where types (possibly disjoint sets of values) are treated as assertions. By default I always set "safety" and "debug" to 3 (the max), which helps.

    ;; Giving names to abstract things a little
    (deftype zero () `(eql 0))
    (deftype non-zero () `(and number (not zero)))

    ;; Declaration
    (declaim (ftype (function (number non-zero) number) divide))

    ;; Definition
    (defun divide (n d) (/ n d))

    ;; Test
    (lambda (u v)
      ;; introduce a type constraint here.
      (declare (type zero v))
      (divide u v)) ;; (nb: underlined in orange)
This message is also printed:

    note: deleting unreachable code
    warning: 
      Derived type of V is
        (VALUES (INTEGER 0 0) &OPTIONAL),
      conflicting with its asserted type
        (OR (AND NUMBER (NOT INTEGER)) (INTEGER * -1) (INTEGER 1)).
      See also:
        SBCL Manual, Handling of Types [:node]
The warning happens when two types are and-ed (intersection), and the resulting set is empty. You still allow code where there is potentially "bad" value flowing from one place to another, because ultimately you know the runtime will detect it. In other words, the goal is to reject false negatives (unwarranted warnings), which contrasts with statically typed languages where you only accept code that is guaranteed to satisfy static typing rules (it makes sense there because the compiler is your last chance of catching those type errors, since the runtime does generally not perform type checks).

(edit: actually use non-zero instead of (not zero))


Yes that is exactly what I'd like but I'd like something that's grown past just type checking. There are many more bugs that I know can be caught by a theorem prover.

Edit: I want to clarify. I think just type checking should be written as "Just type checking".


Indeed, something like what is being tried in Pyret to make contracts (a kind of lemma) first-class citizens.


As a matter of fact, I agree.


Well Rust's way of ensuring you don't modify the same bytes at the same time is to ensure that any byte has precisely one "owner" in any region in which it is mutated. That's a model we know how to implement and also an easy model for humans to think about - we can see what's going on (we can see the regions in the code) and we know what to expect to work (e.g. if you have a mutable reference you expect to be able to use it to mutate what it refers to). If you can formalize a better model you can probably implement it in a language, but what model would that be?


From an implementation standpoint that makes sense. From a usability standpoint it does not. The use case of systems programming usually involves many mutators on a single piece of non-copiable data. I can see why the person writing the Rust prover would say "yea this is the best" but I can't see why anyone else would, because its filter cuts out a large portion of completely valid code that is not incorrect but instead is "scary" or "spooky".

> If you can formalize a better model you can probably implement it in a language, but what model would that be?

I'm just someone on the internet here to complain. I can't do that because I'm not a smart math person who can prove how to get there or even generalize it to work with other programming related things.


> From a usability standpoint it does not. The use case of systems programming usually involves many mutators on a single piece of non-copiable data. I can see why the person writing the Rust prover would say "yea this is the best" but I can't see why anyone else would, because its filter cuts out a large portion of completely valid code that is not incorrect but instead is "scary" or "spooky".

I find passing ownership around very usable, and can never understand how the bit-twiddling C folks ever get anything done. Maybe that's my mathematical background - I like my functions to be functions.

> I'm just someone on the internet here to complain. I can't do that because I'm not a smart math person who can prove how to get there or even generalize it to work with other programming related things.

The hard part is forming the model. Once you actually know how it's supposed to work, proving it is the easy part. But I mean, how would you even explain to another human that a given algorithm is safe, if not by arguments along the line of "look, this function owns this piece of data and this function owns this piece of data and they're called one after the other"?


> The hard part is forming the model. Once you actually know how it's supposed to work, proving it is the easy part. But I mean, how would you even explain to another human that a given algorithm is safe, if not by arguments along the line of "look, this function owns this piece of data and this function owns this piece of data and they're called one after the other"?

One example of how this could work is writing a spinlock. Let's use a made up language for now and adapt that to our need.

   alias lock unsigned int;
   lock() make_lock (
      return 0;
   )

   void(ref<lock> l) take_lock (
      while (l != 0); // Assume this is atomic for brevity
      l += 1; 
   )

   void(ref<lock> l) give_lock ( 
      l -= 1;
   )

This is a "correctish" non-reentrant lock. This has many vectors to see if this is "correct" from a compiler standpoint. The first is that only the alias "lock" can be passed to my lock even though it is just an "unsigned int". Another mode is that I am doing (lock -= 1). Since this is unsigned if my program ever gets into a state where I am doing (0 - 1) in unsigned arithmatic then I am going to wrap around to UInt.MAX_VALUE which is not how subtraction should behave and it should warn me when this situation arises. If the language allows me I can also do this to help the compiler know that I never want this to happen.

   void(ref<lock> l) where (l != 0) give_lock (
      l -= 1;
   )
Now the compiler can see I absolutely do not want this wraparound behavior, and it can check for it. Where would this be the case?

   int() thread_safe_fac(ref<lock> l, int some_important_data) (
      int f;
      take_lock(l);
      if (some_important_data) {
         f = some_important_data * thread_safe_fac(l, some_important_data - 1);
      } else {
         f = 1;
         give_lock(l);
      }
      give_lock(l);
      return f;
   )

   void() main (
      lock l = make_lock();
      int some_important_data = 10;

      some_important_data = thread_safe_fac(&l, 10);

      print(some_important_data);
   )
Will this code work? Absolutely not, and it's easily checkable: one of my function's invariants is violated. give_lock() will sometimes drive the counter negative. This is a very crude example, but I've seen code like this in larger projects where someone modified some code, someone else refactored it, and then someone was contracted to add a feature. It just goes down the shitter. But this is easily provably wrong.

   * Call thread safe fac
   * grab a lock
   * we theoretically touch some thread safe data
   * We hit a case where we do an if
      * Then run the then case
      * else set a default and free our lock
   * Free our lock
We've freed the lock twice, and we've now broken the underlying data of the `lock` type: we've got an unsigned int that has wrapped around to a huge number, and take_lock(l) will never succeed again. Lock is a mutable datatype that can be shared between N many threads/functions, and it can always be checked like this just by telling the compiler "hey, I never want to let the internal int fall into a state where it is subtracting 1 from 0".
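For comparison, here's a rough sketch of how Rust's standard Mutex sidesteps the double release entirely: the guard frees the lock exactly once, on drop, so there is no give_lock() call to duplicate. (The locking is hoisted out of the recursion because Mutex is non-reentrant; the function names just mirror the example above.)

```rust
use std::sync::Mutex;

fn fac(n: u64) -> u64 {
    if n == 0 { 1 } else { n * fac(n - 1) }
}

// The guard returned by lock() releases the lock exactly once, when it
// is dropped at scope exit; there is no give_lock() to call twice.
fn thread_safe_fac(lock: &Mutex<()>, n: u64) -> u64 {
    let _guard = lock.lock().unwrap();
    fac(n)
} // _guard dropped here: lock released

fn main() {
    let lock = Mutex::new(());
    println!("{}", thread_safe_fac(&lock, 10)); // 3628800
}
```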

This can also be extended to other things

   void test(int *num) { *num *= 10; }
   int some_complex_function(int *num) {... free(num) ... }

   int main() {
     int *some_cool_data = (int*) malloc(sizeof(int));
     *some_cool_data = 5;

     while (true) {
       ...
       test(some_cool_data);
       ...
       some_complex_function(some_cool_data);
       ...
     }
   }
We've come to a place in the code where we have a possible path of execution (regardless of the depth in if statements, gotos, or other control flow) where we will eventually use a piece of data after it's freed. This code becomes correct if you do something like:

   // We now make sure the compiler knows that we only want to accept real references to initialize ptr
   int some_complex_function(int **num) where (num && *num) {... free(*num); *num = NULL; ... }

   // We've removed the path through the program that allows the data to be used after free.
   while (some_cool_data) {
      ...
      some_complex_function(&some_cool_data);
      ...
   }
Again, another crude example, but there is a lot of perfectly valid, very clean code that the borrow checker rejects even though it is completely correct.
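For what it's worth, safe Rust encodes roughly that `where` contract in its move semantics: passing by value transfers ownership, so any later use of the freed data is a compile error. A small sketch (names made up to mirror the C above):

```rust
// Hypothetical stand-in for some_complex_function: `consume` takes
// ownership and frees the allocation when it returns.
fn consume(data: Box<i32>) -> i32 {
    *data // the Box's heap allocation is freed here, on scope exit
}

fn main() {
    let some_cool_data = Box::new(5);
    let v = consume(some_cool_data); // ownership moves into consume
    // println!("{}", some_cool_data); // error[E0382]: use of moved value
    println!("{}", v); // prints 5
}
```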


I don't think I understand your point.

You can use e.g. Nat in Idris to avoid the underflow possibility. A language shouldn't allow silent overflow, and to the extent that Rust encourages a style where you fiddle with overflowing integers this is a problem with Rust, though I don't believe Rust does encourage that style - it provides locks, and if you were writing a custom one for some reason you wouldn't naturally fiddle with an int, you'd use an affine type or some such.
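Even staying with a bare int, Rust's checked arithmetic surfaces the underflow case explicitly instead of silently wrapping. A hypothetical give_lock over a plain counter, for illustration:

```rust
// Hypothetical give_lock over a bare counter: checked_sub returns None
// on underflow instead of silently wrapping to u32::MAX.
fn give_lock(l: u32) -> Option<u32> {
    l.checked_sub(1)
}

fn main() {
    assert_eq!(give_lock(1), Some(0)); // normal release
    assert_eq!(give_lock(0), None);    // double release caught, no wraparound
    println!("ok");
}
```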


Lots of mutators over a single piece of data gets you data races. If you do it correctly, you protect against this somehow, using something like a mutex, or something more complex. Rust provides RefCells, Mutexes, and fancy libraries like Crossbeam for exactly this reason.
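A quick sketch of the Arc<Mutex<...>> pattern for many mutators on one piece of data (the function name is just for illustration):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Four threads each add 1000 to a shared counter; the Mutex serializes
// every mutation, so the result is deterministic.
fn parallel_count() -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    *counter.lock().unwrap() += 1; // one mutator at a time
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

fn main() {
    println!("{}", parallel_count()); // 4000
}
```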


> The very same trie code that I tried to write in Rust I wrote in one & half evenings in C++

Use unsafe. No, really, just use unsafe.

Clearly a big value of Rust is that it's a safe language, and the design patterns around safety and affine typing are emerging and good. We should want people to write and use safe code as much as possible. But the trie example seems, to me, to be very much the sort of thing unsafe is for. Writing in C++ is essentially doing everything in unsafe. Unsafe is not evil; it's an important part of the language and when it's called for, it's called for.


I'm not sure, tries are pretty easy with safe code. It's only things like data structures with parent pointers where things get hairy.
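A bare-bones safe trie, for instance (a sketch, not the article's code):

```rust
use std::collections::HashMap;

// Child links are owned HashMap entries, so the ownership structure is a
// tree: no parent pointers, no cycles, nothing for the borrow checker to
// object to.
#[derive(Default)]
struct Trie {
    children: HashMap<char, Trie>,
    is_word: bool,
}

impl Trie {
    fn insert(&mut self, word: &str) {
        let mut node = self;
        for c in word.chars() {
            node = node.children.entry(c).or_default();
        }
        node.is_word = true;
    }

    fn contains(&self, word: &str) -> bool {
        let mut node = self;
        for c in word.chars() {
            match node.children.get(&c) {
                Some(next) => node = next,
                None => return false,
            }
        }
        node.is_word
    }
}

fn main() {
    let mut t = Trie::default();
    t.insert("cat");
    t.insert("car");
    assert!(t.contains("cat") && t.contains("car") && !t.contains("ca"));
    println!("ok");
}
```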

If your goal is to use malloc and stuff (which seems to be what the author wanted) then of course you should use unsafe. Unsafe is indeed for doing abstraction-writing like this. Still, you should avoid it if you can.


I don't know Rust good enough, but do unsafe blocks help there? Data structures belong to library code and libraries surely can utilize unsafe blocks, because they are expected to be heavily tested.


Sounds like Nim is what you are looking for: http://nim-lang.org/

It's fast, has a Python-like syntax, a good degree of memory safety, and optional GC.


"multiple mutable references"... I would use C++ for this. I would not use rust if I had a design that required that ability.


These kinds of articles are always hostages to fortune. Because I guarantee somebody, any second now, will demonstrate that what the author wanted to do was trivial, and if he'd spent a bit less time trying to start flamewars on Hacker News he'd have discovered it.

Not me, by the way, I don't know rust. But I reckon it'll be about three comments down.


I think part of the problem is this: if you're say, a C++ programmer, you can write C++ in most other languages and have a working program, even if it's not idiomatic.

You can't in, say, SQL, Rust, or Lisp. That doesn't make them a priori harder than other languages, it just requires a shift in thinking, whereas if you're jumping to say, Java after C++, you can get started right away and hopefully pick up idiomatic Java as you go along.

Instead of thinking "here's how I would implement this in C++, now how do I translate that to Rust?" you have to cut that part out and think "how do I write this in Rust?" I mentioned SQL because that's where that really, really, clicked for me: if you ever write SQL and think imperatively, you are doing it wrong, you have to think in sets, and once you do that, SQL is the easiest and most natural thing in the world. The Rust Book is pretty good, but I think it would really benefit from an in-depth section early on explaining how writing in Rust requires a paradigm shift in your thinking, and exactly how to think Rust-ically. I'd write something here, but I'm still learning how. :)


We're re-writing the Rust book at the moment: http://rust-lang.github.io/book/

It doesn't explicitly have a section like this, exactly, but there are several "thinking in Rust" style chapters towards the end.


(SQL is still kinda a crap language for dealing with sets, because the syntax does not represent the order in which set operations are applied or anything else; instead, they went with the "COBOL" strategy of trying to make things read like English.)

But I agree with your premise, certainly.


That's why I'm really looking forward to the point in time where "non-lexical borrows in Rust stable" and "clippy installable via rustup" converge.


If this were a C++ problem the author wouldn't have blogged because the answer would've been on StackOverflow.


Not sure if your comment was intended dismissively/jokingly, but I think also somewhat insightful. I find myself much less often really thinking about how to do something these days; if it seems remotely standard or potentially troublesome I google it. As a result, I spend less time thinking "algorithmically" about low-level stuff and probably lose something. But I also spend more time thinking (perhaps algorithmically) about the business domain, and am ultimately more productive.


Googling for Rust solutions is dangerous since frequently the advice you find only applied to some old beta version and no longer works. Then you find a second answer but it only applies to an even older beta version. None of the examples anywhere work of course.

This is my biggest frustration with the language: the manual is good for what it is, but it only covers the basics. You can see this in the article, where the author tries to implement a simple DNS cache and fails miserably. He doesn't even get to the part where you have a separate thread managing the cache and expiring old entries in the background to avoid unbounded growth and stale data. This is the sort of thing I could whip up in literally half an hour in C, but I just know the borrow checker is going to be a pain in the ass if I try to do something similar in Rust.


And your comment sounds exactly like the author to me: "I can't move transparently from languages I know well to ones I don't". Would you be able to whip up same in Haskell or Erlang? How many people claiming this would include a few bugs that Rust prevents?

Learning curve is a feature for sure, and the Rust community seems to be aware of this. But expectations of minimal effort are odd.


It is already in the comments at the site of the article.

To quote "Will": "On the bright side it is possible to implement a pretty elegant solution to this particular problem"


But in any case, it will be helpful for other newcomers to Rust because they are likely to run into the same challenges.


>So I spent a few evenings with Rust

I stopped reading. There is literally nothing you could possibly say that'd be worthwhile after "a few evenings" with Rust, especially when you lack the proper background in the first place.

This trend of "The problem with Rust: A naive and inexperienced perspective"-esque articles is already old.


I read a bit further, it doesn't get better and my summary would be: Rust isn't C++, doing things the C++ way in Rust is not easy, therefore it requires too much ceremony to do what I want and is too hard.


I made it to

> I have a buffer, that I want to mutate using pointers

Turns out you were right.


Yeah, that entire paragraph is basically a list of all the things safe Rust will not allow you to do. It should be a sign that you're tackling the problem the wrong way around...


The joke being, this is being written by a guy who has constantly insisted on his blog that if people don't think NHibernate and IoC are a good idea for their project, it's because of their personal failings.

Me, I muck around with Haskell. No I'm not productive in it. I don't think that's Haskell's problem, though.



