Rust is the more elegant and powerful language. Creating a new language and repeating the "billion dollar mistake" by including null (sailing under the brand name "nil" in Go) is just crazy. Error handling is another strange thing in Go. And generics were only introduced recently, and library support for them is still sparse (for now). While Go is definitely fast enough for most scenarios, it is not the best language for low-level code like drivers or kernel development.
So, depending on your goals, I think Rust is the better language in general. But if your goal is to get something done fast, then Go would probably be better, since it doesn't require that much learning effort.
Yes, I know (although they promoted it as a "systems language", it was never really defined what that was supposed to mean in the beginning), but it is a restriction you don't have in Rust. Basically, Rust can do everything Go can do, but not the other way around. That _might_ help to make a decision for a language.
It seems to be an idiom shift. "Systems" means connected parts, and Go concurrency does just that: connecting parts through channels. But it's not `systems` as in bare-metal electronic chip systems. More like IT `system`.
Read about sum types. They exist in Haskell, Rust, OCaml, Typescript, Swift, Kotlin, etc. You are likely only familiar with product types without knowing they're called product types. (Cartesian product)
You can have 100% type-safe, guaranteed at compile time code without null that can still represent the absence of data. Once you've used sum types, you feel clumsy when using Javascript, Python, Go, Ruby, C, C++, etc. especially when refactoring. Nullness infects your data model and always comes out of nowhere in production and ruins your day.
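For instance, here's a minimal Rust sketch (the Contact type and its variants are made up for illustration):

    // A sum type: a value is exactly one of these variants, never a hidden null.
    enum Contact {
        Email(String),
        Unknown, // absence of data, without any null
    }

    fn notify(c: &Contact) {
        // The compiler rejects this match if a variant is left unhandled.
        match c {
            Contact::Email(addr) => println!("mailing {}", addr),
            Contact::Unknown => println!("no contact info on file"),
        }
    }

    fn main() {
        notify(&Contact::Email("a@example.com".to_string()));
        notify(&Contact::Unknown);
    }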
Arguably dynamic languages have sum types: every variable is one big sum type with the variants being every other type! I suspect the lack of sum types in many static languages is partially responsible for the popularity of dynamic ones.
> I suspect the lack of sum types in many static languages is partially responsible for the popularity of dynamic ones.
I can totally see this. I started writing a small CLI tool in Go, and despite knowing way less Rust, I switched to it and made much better progress right away thanks to pattern matching and result types. It was just so much easier/more ergonomic to write a simple parser.
The Go code was a mishmash of ugly structs and tons of nil checking and special-casing.
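To give a flavour of what I mean, here's a rough sketch (not my actual tool; the Command variants and the mytool name are invented) of the kind of parsing that pattern matching plus Result makes pleasant:

    enum Command {
        Help,
        Run { target: String },
    }

    // The error path and the success path are both explicit in the return type.
    fn parse(args: &[&str]) -> Result<Command, String> {
        match args {
            ["help"] => Ok(Command::Help),
            ["run", target] => Ok(Command::Run { target: target.to_string() }),
            other => Err(format!("unrecognized arguments: {:?}", other)),
        }
    }

    fn main() {
        match parse(&["run", "build"]) {
            Ok(Command::Run { target }) => println!("running {}", target),
            Ok(Command::Help) => println!("usage: mytool [help|run <target>]"),
            Err(e) => eprintln!("error: {}", e),
        }
    }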
I would say that traits are a better analogy for dynamic types. At the same time, you can think of enums as closed sets and traits as open sets, so they are different ways of encoding sets of possible structure and functionality, more alike in what they provide than it initially seems.
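A quick Rust sketch of that closed-vs-open distinction (Shape, HasArea, and Triangle are made-up names):

    // Closed set: every Shape variant is known here, and a match over them is
    // checked for exhaustiveness by the compiler.
    enum Shape {
        Circle(f64),
        Square(f64),
    }

    fn area(s: &Shape) -> f64 {
        match s {
            Shape::Circle(r) => std::f64::consts::PI * r * r,
            Shape::Square(side) => side * side,
        }
    }

    // Open set: any crate can implement this trait for its own types later,
    // so the set of implementors is never closed.
    trait HasArea {
        fn area(&self) -> f64;
    }

    struct Triangle { base: f64, height: f64 }

    impl HasArea for Triangle {
        fn area(&self) -> f64 {
            0.5 * self.base * self.height
        }
    }

    fn main() {
        println!("{}", area(&Shape::Square(2.0)));
        println!("{}", Triangle { base: 3.0, height: 4.0 }.area());
    }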
> If it's a single score, I'd still want to use null / int.
In your example, you still have to manually check if there's a value every time, but this is not compiler-enforced. Should you forget, you will get a runtime crash at some point (likely in production at a critical time) with some kind of arithmetic error. This wouldn't be possible with a simple sum type.
Also, a sum type with variants Score(Int) and NoScore won't allow assignments of any other "null" instance. A null in one spot is interchangeable with a null in any other spot, and that can lead to bugs and runtime crashes. Null should be avoided when possible.
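A hedged Rust sketch of that point (the Score and Nickname types are invented to illustrate it):

    enum Score {
        NoScore,
        Score(i32),
    }

    // A different "absent" value for a different concept: the two are distinct
    // types, so one can never be assigned where the other belongs.
    enum Nickname {
        NoNickname,
        Nickname(String),
    }

    fn print_score(s: &Score) {
        match s {
            Score::Score(n) => println!("scored {}", n),
            Score::NoScore => println!("no score recorded"),
        }
    }

    fn main() {
        print_score(&Score::Score(95));
        print_score(&Score::NoScore);
        // print_score(&Nickname::NoNickname); // would not compile: wrong type
    }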
> The compiler would enforce Number-ness every time I try and run a function that takes a number, right?
> Wouldn’t I still have to check for NoScore?
No, because you differentiate between the sum type (e.g. Maybe in Haskell) and the number type at compile time. It's a small distinction - there will still be one or two places where you ask whether a Maybe-score is a Just-score or a No-score, but the upside is that in every place it is clear whether a No-score is even possible, and you can't confuse the value with the error signal.
I.e. if you pass maybe-scores to something that computes the mean, you'll get a compiler error. The writer of the mean function doesn't need to care you've overloaded an error value onto your numbers.
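Something like this sketch (mean and the score values are just for illustration):

    // mean() knows nothing about missing values; it just takes plain numbers.
    fn mean(xs: &[f64]) -> f64 {
        xs.iter().sum::<f64>() / xs.len() as f64
    }

    fn main() {
        let maybe_scores: Vec<Option<f64>> = vec![Some(80.0), None, Some(90.0)];

        // mean(&maybe_scores); // compile error: &[f64] expected, Option<f64>s found

        // You must decide explicitly what to do with the missing ones, e.g. drop them:
        let scores: Vec<f64> = maybe_scores.iter().filter_map(|s| *s).collect();
        println!("{}", mean(&scores));
    }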
The compiler support is the important part. Languages like C and C++ have sum-types too, in an informal way: they usually surface as sentinel values (NULL for pointers, -1 for ints, etc.) or as unions with type fields. The C++ standard library offers std::optional<T>. As you progress along that spectrum, you get increasing compiler support as well.
One could even argue that sentinel values are a slightly better choice than Go's pairs, because they are closer to being sum-types than the strict product type that is Go's (result, error) tuple - at least sentinels can't simultaneously carry a valid value and an error.
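To make the product-vs-sum distinction concrete, a rough Rust rendering (both type names are invented):

    // Product type, like Go's (result, error) pair: nothing stops a value from
    // carrying a "valid" result and an error at the same time.
    struct PairStyle<T, E> {
        value: T,
        err: Option<E>,
    }

    // Sum type, like Rust's own Result: it is exactly one of the two, never both.
    enum EitherStyle<T, E> {
        Ok(T),
        Err(E),
    }

    fn main() {
        // Representable, even though it makes no sense:
        let confusing = PairStyle { value: 42, err: Some("boom") };
        println!("{} / {:?}", confusing.value, confusing.err);

        // With a sum type you must pick exactly one variant:
        let ok: EitherStyle<i32, &str> = EitherStyle::Ok(42);
        if let EitherStyle::Ok(n) = ok {
            println!("{}", n);
        }
    }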
> I.e. if you pass maybe-scores to something that computes the mean, you'll get a compiler error. The writer of the mean function doesn't need to care you've overloaded an error value onto your numbers.
If I pass a null into something that calculates an average and takes numbers, the TS compiler will complain now. The writer of mean() (assuming mean() is typed) doesn't have to know anything about my code.
That's right, and the compiler will reject programs where you don't do this. It's a set of safety rails for your code. You pay a dev-/compile-time cost in exchange for your programs not exploding at runtime.
Sure, it's the same amount of work, but you're forced to do it and the compiler is able to reject invalid programs. In languages that allow null (including TS, which doesn't 100% enforce the stuff that Haskell etc. do), you can skip that check, saving some work I suppose, at the risk of your code exploding at runtime. Having stack traces which jump across 50,000 lines of code because someone forgot to check for null somewhere sucks a lot.
The Billion Dollar Mistake refers to the fact that things that are not explicitly marked as "nullable" can be null/nil.
In Rust, you would annotate score as `Option<u32>` (`u32` is one of Rust's integer types), and then you would set the score of someone who hasn't sat the test yet as `None`, and someone who got a 100 on the test as `Some(100)`.
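Roughly (the Student type and the names are just for illustration):

    struct Student {
        name: String,
        score: Option<u32>, // either Some(points) or None, never a hidden null
    }

    fn main() {
        let finished = Student { name: "Ada".to_string(), score: Some(100) };
        let not_yet = Student { name: "Grace".to_string(), score: None };

        for s in [finished, not_yet] {
            match s.score {
                Some(points) => println!("{} scored {}", s.name, points),
                None => println!("{} hasn't sat the test yet", s.name),
            }
        }
    }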
Also, because Rust is intended for writing low level software where you might very well care deeply about how big this type is:
Rust has NonZero versions of the unsigned types, so NonZeroU32 is the same size as a u32, four bytes with an unsigned integer in it, except it is never zero.
Option<NonZeroU32> promises to be exactly the same size as u32 was. Rust calls the space left by the unused zero value a "niche" and that's the perfect size of niche for None.
As a result you get the same machine code you'd have for a "normal" 32-bit unsigned integer with zero used as a sentinel value. But because Rust knows None isn't an integer, when you mistakenly try to add None to sixteen in some code deep in the software, having forgotten to check for the sentinel, you get a compile error rather than a mysterious bug report from a customer where it somehow got 16, which was supposed to be impossible.
When a maintenance programmer ten years later decides that zero is actually a possible value for this parameter as well as "None", they swap the NonZeroU32 for a u32 and the program works just fine - but because there's no niche left in u32, the type is now bigger.
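You can check the sizes yourself; on a typical build the output should match the comments (assuming the usual 4-byte alignment):

    use std::mem::size_of;
    use std::num::NonZeroU32;

    fn main() {
        // The unused zero bit-pattern of NonZeroU32 is the "niche" that stores
        // None, so wrapping it in Option adds no space at all.
        println!("u32:                {} bytes", size_of::<u32>());                // 4
        println!("NonZeroU32:         {} bytes", size_of::<NonZeroU32>());         // 4
        println!("Option<NonZeroU32>: {} bytes", size_of::<Option<NonZeroU32>>()); // 4, guaranteed
        println!("Option<u32>:        {} bytes", size_of::<Option<u32>>());        // 8, no niche left
    }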
Oh yeah, allowing values to be nullable by default is bad, that's totally different than just 'including null'. I thought they meant including null in the language!
> you would set the score of someone who hasn't sat the test yet as `None`
> Oh yeah, allowing values to be nullable by default is bad, that's totally different than just 'including null'.
In Rust (and Haskell and OCaml for that matter), there is no built-in null keyword. Option is just an enum in the library that happens to have a variant called None. So it's technically Option::None and Option::Some(x). But, really, it could be Quux and Quux::Bla and Quux::Boo(x) instead--without any language changes.
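A sketch of what that looks like (using the Quux names from above):

    // Option is not special syntax; any crate can define an equivalent enum.
    enum Quux<T> {
        Bla,
        Boo(T),
    }

    fn describe(q: &Quux<i32>) -> &'static str {
        match q {
            Quux::Bla => "nothing here",
            Quux::Boo(_) => "got a value",
        }
    }

    fn main() {
        println!("{}", describe(&Quux::Boo(7)));
        println!("{}", describe(&Quux::<i32>::Bla));
    }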
That is vastly better than what IntelliJ does for Java with their weird @NonNull annotations on references--which can technically still be null. null is still a keyword there, and null is somehow a member of every reference type (but not of the other types--how arbitrary).
And C# has a null keyword, the rules for auto-converting its type are complicated, and some things you just aren't allowed to do with null (even though they should be possible according to the rules), because then you'd see what a mess they made there: you'd otherwise be able to figure out what the type of null is, and there's no "the" type there. null is basically still a member of every type. And that is bad.
So even the language used in "allowing values to be nullable by default" is insinuating a bad idea. Nullability is not necessarily a property that needs to exist on values in the first place (as far as the programming language is concerned).
Rust has Option which you opt-into for cases like the one you describe.
What's special is that it's not like Go/Java/JS/etc where *every* pointer/reference can be null, so you have to constantly be on guard for it.
If I give you a Thing in Rust, you know it's a Thing and can use it as a Thing. If I give you an Option<Thing> then you know that it's either a Some or None.
If I give you a Thing in Go/Java/etc, well, it could be nil. Java has Optionals...but even the Optional could be null...though it's a really, really bad practice if it ever is...but it's technically *allowed* by the language. Rust doesn't allow such things and enforces it at the language and compiler level.
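A hedged sketch, reusing the Thing name from above:

    struct Thing {
        label: String,
    }

    // If you receive a Thing, it is definitely a Thing; no nil check needed.
    fn use_thing(t: &Thing) {
        println!("using {}", t.label);
    }

    // If the caller might not have one, that has to show up in the type.
    fn maybe_use_thing(t: Option<&Thing>) {
        match t {
            Some(t) => use_thing(t),
            None => println!("nothing to use"),
        }
    }

    fn main() {
        let thing = Thing { label: "widget".to_string() };
        use_thing(&thing);
        maybe_use_thing(Some(&thing));
        maybe_use_thing(None);
    }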
I use Go at my job, and Rust in a few personal projects.
The "typical" Go nil-check would usually look something like this (no idea how code will look, apologies up front):
result, err := someFunction()
if err != nil {
    // handle or return the error
}
It's nice that you're not having to litter your code with try/catch statements, or use a union type with an error value like in other languages, but the downside is that Go only checks whether err is used at some point (or makes you replace it with _), and it's possible to accidentally use err in a read context and skip actually checking the value. Go won't prompt you that the error case is unhandled (in my experience).
In Rust, when you want to return a null-like value (None), you wrap it in Option<type>. To the compiler, Option<type> is a completely separate type, and it will not allow you to use it anywhere the interior type is expected until the option is unwrapped and the possible null value is handled. You'd do that like this:
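(Roughly; some_function and handle_value here are Rust-cased stand-ins for the someFunction/handle_value names used elsewhere in this comment.)

    fn some_function() -> Option<String> {
        Some("hello".to_string())
    }

    fn handle_value(value: String) {
        println!("got {}", value);
    }

    fn main() {
        let result = some_function();

        // Both arms are required; passing `result` straight to handle_value
        // would be a type error (Option<String> vs String).
        match result {
            Some(value) => handle_value(value),
            None => println!("no value to handle"),
        }
    }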
The compiler forces you to unwrap result into its two possible underlying types (any possible <type> or None), and handle each case, which prevents an accidental null value being passed to handle_value. Trying to pass result directly into handle_value would give you a type check error, since it's expecting a <type> but is passed an Option<type>. The compiler will also give you an error if you try to only handle the Some(x) path without providing a case for None as well, so you can't just accidentally forget to handle the null case.
(For completeness, you can also just do result.unwrap() to get the inner value and panic if it is None, which can be useful in some cases like when you know it will always be filled, and you want to fully terminate if it somehow isn't).
So in your case (assuming this is in the context of a video game), you'd make score an Option<i32> for example, then unwrap it when you needed the actual value. Generally speaking, I'd make the score returned from a saved-game loading function an Option<i32> and make the actual score for the current session just an i32. The function that loads a game save into the current session would then handle the Option<i32> from the save file (defaulting to 0 when it is None), and we could assume that the score is set by the time the game session is running, so we don't have to constantly unwrap it within the game logic itself.
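A hedged sketch of that shape (GameSession and load_saved_score are invented names):

    struct GameSession {
        score: i32, // always present once a session is running
    }

    // Hypothetical save-file loader: an old save might have no score at all.
    fn load_saved_score() -> Option<i32> {
        None // pretend the save file had no score
    }

    fn start_session() -> GameSession {
        GameSession {
            // Handle the missing case exactly once, at the boundary.
            score: load_saved_score().unwrap_or(0),
        }
    }

    fn main() {
        let session = start_session();
        println!("starting with score {}", session.score);
    }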