Yes, Rust doesn't fit cleanly into a black-and-white binary here. It is automatic in the sense that you do not generally call malloc/free; the compiler handles this for you. At the same time, you have a lot more control than you do in a language with a GC, and so to some people it feels more manual.
It's also like, a perception thing in some sense. Imagine someone writes some code. They get a compiler error. There are two ways to react to this event:
"Wow the compiler didn't make this work, I have to think about memory all the time."
"Ah, the compiler caught a mistake for me. Thank goodness I don't have to think about this for myself."
Both perceptions make sense, but seem to be in complete and total opposition.
"Manual vs automatic" is mostly just a semantic problem IMHO. We could say "runtime versus compile time" to be more precise, but maybe there are problems there as well. The more interesting question to me is "how much time/energy do I spend thinking about memory management, and is that how my time is best spent?". In cases of high performance code, you might spend more time fighting with the GC than you would with the borrow checker to get the performance you need, but for everything else the hot paths are so few and far between you're most likely better off fighting with the GC 1% of the time and not fighting anything the other 99%.
The Rust community has done laudable work in bringing down the cognitive threshold of "manual / compile-time" memory management, but I think we're finding out that the returns are diminishing quickly and there's still quite a chasm between borrow checking and GC with respect to developer velocity.
"developer velocity" is also, in some sense, a semantic question. I am special, of course, but basically, if you include things like "time fixing bugs that would have been prevented in Rust in the first place", my velocity is higher in Rust than in many GC'd languages I've used in the past. It just depends on so many factors it's impossible to say definitively one way or another.
I have trouble believing this, at least in any generalizable way. I'm comfortable in both Go and Rust at this point (my Rust has gotten better since last year when I was griping about it on HN), and it's simply the case that I have to think more carefully about things in Rust because Go takes care of them for me. It's not a "think more carefully and you're rewarded with a program that runs more reliably and so you make up the time in debugging" thing; it's just slower to write a Rust program, because the memory management is much fiddlier.
This seems pretty close to objective. It doesn't seem like a semantic question at all. These things are all "knowable" and "catalogable".
(I like Rust more now than I did last year; I'm not dunking on it.)
I know you're not :) I try to be explicit that I'm only talking about my own experience here. I try not to write about my experiences with Go because it was a very long time ago at this point, and I find it a bit distasteful to talk about for various reasons, but we apparently have quite different experiences.
Maybe it depends on other factors too. But in practice, I basically never think about memory management. I write code. The compiler sometimes complains. When it does, 99.9% of the time I go "oh yeah" and then fix it. It's not a significant part of my experience when writing code. It does not slow me down, and the 0.1% of the time when it does, it's made up for it in some other part of the process.
I wish there was a good way to actually test these sorts of things.
This jibes very well with my experience. I like writing Rust, but I do so well aware that I could write the same thing in Go and still have quite a lot of time left over for debugging issues.
I can also get user feedback sooner and thus pivot my implementation more quickly, which is a more subtle angle that is so rarely broached in these kinds of conversations.
The places where I think the gap between Go and Rust is the smallest (due to Rust's type system) are things like compilers where you have a lot of algebraic data types to model--Rust's enums + pattern matching are great here.
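To make that concrete, here's a hypothetical sketch of the kind of modeling meant here: a tiny expression AST as a Rust enum, with an exhaustive `match` (the names `Expr` and `eval` are illustrative, not from any real compiler):

```rust
// A tiny expression AST -- the kind of algebraic data type compilers
// are full of. (Hypothetical illustration.)
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

fn eval(e: &Expr) -> i64 {
    // `match` must cover every variant, so adding a new Expr case
    // becomes a compile error at every match site until it's handled.
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Mul(a, b) => eval(a) * eval(b),
    }
}

fn main() {
    // (1 + 2) * 3
    let e = Expr::Mul(
        Box::new(Expr::Add(Box::new(Expr::Num(1)), Box::new(Expr::Num(2)))),
        Box::new(Expr::Num(3)),
    );
    assert_eq!(eval(&e), 9);
}
```

The exhaustiveness checking is what closes the gap: extending the data type forces you to revisit every consumer, which Go's type system won't do for you.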
I always miss match and options (I could go either way on results, which tend to devolve into a shouting match between my modules with the type system badly refereeing). But my general experience is, I switch from writing in Rust to Go, and I immediately notice how much more quickly I'm getting code into the editor. It's pretty hard to miss the difference.
I don't do much Go, so I can't really compare it with Rust all that well, but I think it's a plausible result.
To take two GC'd languages, I'm proficient in both Java and Scala. It usually takes me a little longer to write something in Scala, but when I'm done, I've almost certainly written fewer bugs in the Scala program than the Java program (I've also written many fewer lines of code, but that's another topic).
For me, it's the type system that helps the most. Given that Rust's type system is much stronger and more expressive than Go's, I do expect to write fewer bugs in Rust than in Go. But it does feel like, if I had more experience with Go, I'd be significantly faster writing Go than Rust. (Then again, the more I write Rust, the fewer write-compile-fail-fix cycles I have to go through, and the compiler's ability to accept code as safe improves pretty frequently.)
Still, though (and I know this isn't the question at hand, but...), I personally value greater chances of correctness at compile time way more than development speed. While some types of bugs can be a fun adventure to track down and fix, most bugs I encounter are some mix of boring and annoying. I honestly would prefer to spend 2 weeks building and 2 days debugging over 1 week building and 1 week debugging. I really do find debugging that annoying. (Fuzzy numbers; I don't actually think I'd build 2x as fast in Go as Rust.)
> I personally value greater chances of correctness at compile time way more than development speed
In my experience, I don’t get much additional correctness for the extra effort, but rather I get independence from the GC, which is worth much less to me.
If we're optimizing for correctness alone, I think development times could improve significantly by swapping the borrow checker for a GC. I know the borrow checker aids in correctness beyond what a GC does, but IMO the returns diminish rapidly. And I'm not sure how well this would work in practice, but maybe you could keep the borrow checker and add a GC, with every reference type being `Gc<T>` by default (not sure if that would recoup any of the extra correctness that a borrow checker affords or not).
> it's just slower to write a Rust program, because the memory management is much fiddlier.
Really depends on what kind of programs you write. I've found that my Rust development gets slowed down only because I have to spend time creating the proper types. Memory management and lifetime problems are rare in my practice (though I agree they can swallow time -- but only when you're new).
It's very much a confusing process. If C-styled memory management is skydiving and Python is parachuting, Rust can feel a bit like bungee-jumping. It's neither working for nor against you, but it will behave in a specific way that you have to learn to work around. Your reward for getting better at that system is less mental hassle overall, but it's definitely a strange feeling, particularly if you're already comfortable with traditional memory management.
Control is primarily exerted over consumers of your API rather than the actual resources. This can be enforced through a combination of Drop implementations and closures/lifetimes; the classic example is the `MutexGuard` returned by `Mutex::lock`. In a GC'd language (e.g. Go) you get defer or finally blocks that can accomplish the same thing, but using them is always optional and up to other programmers to remember. Compare: you typically can't make someone run destructors in a GC'd language, and you also can't guarantee the destructors have run at any particular point in time.
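A minimal sketch of that guarantee using the standard library's `Mutex`: the lock is released by the guard's destructor at the end of scope, not by anything the caller has to remember to write.

```rust
use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0);
    {
        // lock() returns a MutexGuard; the lock is held while the guard lives.
        let mut guard = counter.lock().unwrap();
        *guard += 1;
    } // the guard's Drop impl releases the lock here -- unconditionally,
      // unlike a defer/finally block the caller might forget to add
    // The lock is already free again, so this second lock() succeeds.
    assert_eq!(*counter.lock().unwrap(), 1);
}
```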
The one area you have more control over actual resources is knowing when memory is freed. Some people need to know when memory is freed, because they have allocated a lot and if they do it again without freeing, they'll run into trouble. To know for sure, simply use a normal owned type or a unique pointer (Box); when it goes out of scope, that's when its destructor is run. No such feature exists in a GC language, because you can never know at compile time when nobody else holds a reference.
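A sketch of that timing guarantee, using a hypothetical `Noisy` type that records when its destructor runs:

```rust
use std::cell::RefCell;

// A type that records when its destructor runs, to show that an owned
// Box is freed at a precise, knowable point: the end of its scope.
struct Noisy<'a> {
    log: &'a RefCell<Vec<&'static str>>,
}

impl<'a> Drop for Noisy<'a> {
    fn drop(&mut self) {
        self.log.borrow_mut().push("dropped");
    }
}

fn drop_timing() -> Vec<&'static str> {
    let log = RefCell::new(Vec::new());
    {
        let _owned = Box::new(Noisy { log: &log }); // uniquely owned heap value
        log.borrow_mut().push("in scope");
    } // _owned goes out of scope: destructor runs and memory is freed here
    log.borrow_mut().push("after scope");
    log.into_inner()
}

fn main() {
    // Drop ran exactly at the scope boundary, between the two marks.
    assert_eq!(drop_timing(), ["in scope", "dropped", "after scope"]);
}
```

In a GC'd language the "dropped" entry could land anywhere after "in scope" (or never, if finalizers aren't guaranteed); here its position is fixed at compile time.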
As a thought experiment: in JavaScript with WebAssembly, an allocation in WASM can be returned to JS as a pointer. You need to free it, somehow. Can you write a class that will deallocate a WASM allocation it owns when an instance of the class is freed by the JS GC? (Answer: no! You need a new language-provided FinalizationRegistry for that.)
Ah, so it's more about library-writer control than about library-consumer control? Since for example in Common Lisp, the latter can still be accomplished through declarations, such as DYNAMIC-EXTENT (http://clhs.lisp.se/Body/d_dynami.htm). (Not sure if the former is necessarily related to memory usage control, but you'd probably achieve that type of resource control by exposing only WITH-* macros in your API.)
Maybe D people would have something to say about this as well, but I'm not a D person. What you're describing doesn't seem impossible in D to me, though.
Edit: yes. Library consumers don't get to change much, except where you have generic functions that abstract over a trait like `T: Borrow<T2>`, and then you can pass in any kind of owned or borrowed pointer to T2.
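For instance, a hypothetical sketch of that `Borrow` pattern: one generic function accepts owned values and box-like pointers alike, since all of them can be borrowed as the underlying type.

```rust
use std::borrow::Borrow;

// A generic function that abstracts over ownership: callers may pass a
// String, a &str, a Box<str> -- anything that borrows as str.
// (`shout` is an illustrative name, not a real API.)
fn shout<S: Borrow<str>>(s: S) -> String {
    s.borrow().to_uppercase()
}

fn main() {
    assert_eq!(shout("literal"), "LITERAL");          // borrowed &'static str
    assert_eq!(shout(String::from("hi")), "HI");      // owned String, consumed
    assert_eq!(shout(Box::<str>::from("boxed")), "BOXED"); // boxed slice
}
```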
Dynamic-extent appears to be more similar to the "register" hint in C than to anything in Rust, in that it's an implementation-defined-behaviour hint. Rust has no such thing as hinting at storage class. Your variables are either T (stack) or Box<T> (heap) or any other box-like construct involving T. You maintain complete control at all times, nothing is implementation-defined, and it's explicit. You can implement (and people have implemented) dynamic switching between stack and heap storage in a Rust library.
As you can see, these three library authors get to control very precisely how their types allocate and deallocate, and you basically mix and match these and the stdlib's smart pointers (and Vec) + other libraries like arenas, slot maps, etc to allocate the way you want.
> you'd probably achieve that type of resource control by exposing only WITH-* macros in your API
Yes, this and similarly using `with_*` closures both work, but both are more limited than destructors that run when something goes out of scope. A type that implements Drop can be stored in any other type, and the wrapper will automatically drop it. You can put a `MutexGuard` in a struct and build an abstraction around it.
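A hypothetical sketch of that composition: a `Session` wrapper that stores a `MutexGuard` in a field, so dropping the wrapper releases the lock automatically. A `with_*` closure API can't be stored and passed around like this.

```rust
use std::sync::{Mutex, MutexGuard};

// An abstraction built around a lock guard. When a Session is dropped,
// its guard field is dropped with it, releasing the lock.
// (Illustrative names, not a real API.)
struct Session<'a> {
    guard: MutexGuard<'a, Vec<String>>,
}

impl<'a> Session<'a> {
    fn log(&mut self, msg: &str) {
        self.guard.push(msg.to_string());
    }
}

fn main() {
    let store = Mutex::new(Vec::new());
    {
        let mut session = Session { guard: store.lock().unwrap() };
        session.log("hello");
    } // Session dropped here; the lock is released automatically
    // The lock is free again, so this second lock() succeeds.
    assert_eq!(store.lock().unwrap().len(), 1);
}
```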