I don't agree, especially on the second point. Having strings default to empty instead of nil is a feature not a flaw.
What Go actually needs: Nil-Safe Types! [1]
Programmers can work around verbose error handling (3) and the lack of enums (1), but forget to check for nil before using a pointer... CRASH! And that's something the compiler doesn't warn about.
As an ex-security consultant, almost all the bugs I found in Go applications were crashes caused by nil. It’s crazy how Rust just gets rid of this whole class of bugs with option and result types (and sum types in general).
I don't know about C# and Swift, but Kotlin, Python, and TypeScript are only null-safe-ish; there is always a relatively easy path to smuggling a null value into your program regardless of the guarantees of the types you use.
This isn't to say it's useless, just that it's not as useful as it _could_ be. I've seen codebases in all three languages where there have been real production issues due to null values, even though the type decls say otherwise. Kotlin probably does the best job here but using any JVM library introduces potential (K)NPEs.
This situation is unfortunately quite hard to avoid in the languages above since they either added null safety (or indeed any kind of type-checking) at a later stage (C# and Python) or they need to maintain compatibility with a language or platform that is not null-safe (Kotlin with Java, Swift with Objective C, TypeScript with JavaScript).
Languages like Rust, Haskell or ML do not have this issue since they do not need to support such a legacy layer.
I hesitated over whether to mention unsafe code blocks or FFI, but no language that gets things done can avoid abuse of its escape hatches. Even Haskell has to deal with nulls when doing FFI.
There's still a huge difference between "you can smuggle a null value in" and "the language can't statically reason about null values at all"
You can override TypeScript's checks for anything, not just nulls, and yet people still get tons of benefit from it. Same goes with many of Rust's safety checks in unsafe { } blocks.
Additionally, languages like C# and TypeScript have to be a little more flexible because they added this feature after the fact. Go had plenty of opportunity to do it up front without requiring any compromises, and it chose not to.
You can turn off strictNullChecks, and there's also noUncheckedIndexedAccess, which unfortunately isn't enabled by default even in strict mode. You can also introduce flaws via @ts-ignore, casting, etc.
Still, typescript equips you to catch these errors, even if you can technically circumvent it. In practice it can be nearly bullet-proof if you follow good practices.
Aside from explicitly turning off null safety and tricky use of casting, an easy example is interfacing with JS.
If you're using either a library that wasn't written in pure TS (maybe JS or JS with .d.ts) or interacting with some unconverted JS from your own codebase, you can easily pass a null through entirely by accident. The problem really stems from the JS end of things, but 9 times out of 10 you're going to be touching JS at _some_ level when using TS so I think it's fair to point out this gap.
C# only sort-of-fixed this problem and it did so rather inelegantly IMO.
E.g. nullable types are not options - you can't "map" them (you can't invoke Select, Where etc). It's easy to "get value or default" (via operator `??`), but you can't do the other, equally frequent thing, of "apply transformation on value if not-null". Or... well, you can, but here's the syntax:
var newVal = val == null ? (resultType?) null : compute(val.Value);
Various bits of C# and its libraries also variously ignore non-nullability when it suits them. I know it's done the way it's done for compatibility reasons, but it's really quite confusing around the edges as a result. And despite something being non-nullable in the code it could still be null at runtime thanks in my experience particularly to the joys of deserialisation libraries.
For that I use my own extension method like `?.Map(x => ...)`, defined as `TResult Map<T, TResult>(this T input, Func<T, TResult> map) => map(input)`. Still a bit ugly, but much nicer than the ternary operator.
Of course that doesn't fix all the other issues with null handling in C# (inconsistency between null references and null value types, inability to nest options, ...)
Unfortunately, in any real-world TypeScript or Python codebase the type checking is almost always partial, which means it doesn't actually save you from these kinds of errors.
I have not found this to be the case in TypeScript code I've worked on. The main issue I've found with TS is stuff coming in from outside the system as JSON that doesn't conform to expected formats.
You can solve the JSON issue by validating that it deserializes to a known type at the point of ingress. I wrote a small library that I use in basically every project to help with that: https://www.npmjs.com/package/narrows
One issue I've noticed is that indexing into a T[] array where T can't be undefined results in an expression of type T, not T | undefined, despite an out-of-bounds index evaluating to undefined. I believe they recently added a setting that changes this, though.
I think you are right, in that Rust did not invent non-nullable references. But I do not see a problem with someone claiming that they like having that in a language.
I did not read the parent comment as "Rust invented non-nullables and is great", but as "Rust, for example, removes this class of problems this way...". I think it's neither helpful nor realistic to require every mention of a feature to be backed up by proper research into who invented it and when.
I'd say that clearly there's a design choice in whether or not to have it, so Rust choosing to have these whereas Go chose not to is something worth crediting as "probably good design". While I think there's a place for nullables, they're mostly in the area of compiler optimisations and constrained low-level programming rather than higher-level code: that's the level where you care how many words your procedure returns, whether you should return a pointer to a struct, or whether you should use fat pointers (which in high-level code is usually just an easy "yes"). That's a level of optimisation most people no longer concern themselves with, and one the compiler can do quite well (heuristically speaking, similar to register allocation).
So it's worth giving credit where credit is due for choosing to use something other languages showed was a good idea. But it's all design choices now; there are few major new language ideas in the mainstream. So while I agree it would be nice to see a little more awareness of the legacy, I wouldn't phrase it as confrontationally as "if people actually did their research" when most people only write short comments and replies that probably assume you also know the legacy.
> While I think there's a place for nullables, they're mostly in the area of compiler optimisations and constrained low-level programming rather than higher-level code
I think here you mean that C-style nullable pointers should be "constrained to low-level programming" and high-level programs should always have option types and non-nullable pointers, is that correct?
If that’s the case then I’d say “nay”… because it’s not that hard to have your cake and eat it: while Rust implements niche-value optimisation somewhat generically, nothing precludes special-casing optional pointers such that optional and non-optional pointers have exactly the same ABI.
And then if the language is memory-safe it probably gains in performance, because it doesn't have to check non-nullable pointers for null before dereferencing them.
By "constrained low-level programming" you mean things like the interfaces exported by CPUs and VMs?
If so, yes, that makes complete sense. But if it's on the level of systems languages, well, Rust is a perfectly capable low-level systems language, and it confines nulls to only where they matter.
At the end of the day, null is a value for pointers, and a system language does need pointers. But if you have a reasonable type system, not everything needs to be a pointer, and the type system is a compile-time feature, so it doesn't matter for your code target.
C spoiled the mentality that everything needs to be a pointer, when in fact, other systems programming languages never did it like that.
Yes, pointers are there for when they are really needed, e.g. for interfacing with the hardware; dynamic data structures and reference parameters can be dealt with in more type-safe ways.
> while crediting Rust for things it didn't implement.
* invent
(Other than this minor typo, I'm not sure why you're being downvoted. It's crazy that people in 2021 think non-nullable types are novel/"crazy". Why is nullability the default in most people's brains?)
I would guess null is the default because many of us start out with languages like C or Java. I've seen some interns have FP experience with Haskell, but it tends to be a single module and they often don't appreciate the nuance and FP is still not particularly mainstream in industry.
More generally, I don't think many programmers get to see the better way because they're not exposed to it. I can rhapsodise about Rust, but I doubt my company is going to buy into it because they already picked Go.
> I would guess null is the default because many of us start out with languages like C or Java.
Go is not a language that "just happened" (what could be said of JS and PHP). Go is designed. Designed by heavyweight language designers (from Wikipedia): Robert Griesemer, Rob Pike and Ken Thompson.
These people knew of Haskell, type systems, and the merits of type safety. I would expect them to know of the "billion dollar mistake"[1] that is null. But sadly Go still carries on with the mistake.
I too care more about null safety and proper sum types (in combination with nice switch/match statements and/or pattern matching) than generics. My experience with the Elm language taught me that not having generics is perfectly okay (just a little annoying sometimes).
I find "not null safe languages" not okay nowadays, and I hold this opinion since before Go's first appearance (2009). I really wonder how the designers came to this decision.
I'm afraid this mistake can never be fixed, as null checks are already idiomatic Go. Java also could not fix it (which may be one of the main reasons behind Kotlin). Maybe the best thing we can hope for is Kotlin kind of language for Go (question marks after types to indicate nullability). It's just sad.
>Java also could not fix it (which may be one of the main reasons behind Kotlin).
C# added some null-safety features despite having a 20-something year baggage of legacy code. While it might not be the perfect solution (these checks work only at compile time and you can enable them per-file), I find that they work great for new projects. You have to be extra careful at boundaries (interfaces with third-party libraries which have not added ? annotations yet, API calls, and so on), but they save a lot of headache inside your own code.
Cool. It's a bit like what Kotlin did. Yet after some experience with Kotlin I found it is not perfect. "You have to be extra careful at boundaries" exactly, with Kotlin too.
The question mark is the best thing if you have not built your language with null safety to start with. Please look at Elm for a good example of what real null safety looks like.
With Go they had the chance to do the right thing from the start. I really wonder why they didn't.
Also worth noting that Java may be able to fix it in the future after Valhalla drops (and this may mean better implementation for JVM-based languages too).
> I would expect them to know of "billion dollar mistake[1]" that is null. But sadly they had not.
What makes you think that they didn't know about it, rather than that they did know but decided they weren't interested in the trade-offs for this particular new language?
> It’s crazy how Rust just gets rid of this whole class of bugs with option and result types (and sum types in general).
As a Rust security consultant, many bugs we find in Rust applications are crashes due to .unwrap()... it's easier to spot, but still essentially the same class of bug.
I disagree. A nil error in Go is most likely a dev forgetting a check. An unwrap() is a dev explicitly saying this should always contain a value. In fact, fixing unwrap is as easy as searching for unwrap in your codebase and factoring them all out.
No it's the opposite. Unwrap() means the dev said this will never not be set. Expect() is saying if this value is not available exit the program with this user facing error message.
It is a shame there are so many Rust examples out there with unwrap() all over them, because it's really not something I would want to see in production code. expect() with a message if you know that the only reason this fails is because something else messed up that should have prevented the call in the first place, sure. unwrap()? No thanks. That's why we have the ? operator now.
Option types in general are a wonderful thing, it's a big part of why I prefer Kotlin to Java. Kotlin's syntax for option types is really nice too, it's just `Int` vs `Int?` as opposed to Rust's `Option<i32>`. I definitely prefer Rust-style result types when it comes to error handling though.
I don’t think Rust got rid of the null dereference problem. Just traded it for something slightly different, i.e. calling unwrap when no value exists causes the program to panic.
In Go some common types are forced to be nillable, and you can't express "this is never nil". In Rust, "never nil" is the default, even for slices and by-reference types, so right off the bat nullability disappears entirely for the majority of types. You can't make the mistake of `unwrap()`ing something that doesn't support unwrapping.
`unwrap()` is the laziest/worst way of handling optionals, but it's still better than Go's behavior. `unwrap()` is local and explicit, so you can find and review every potential failure point (unlike e.g. finding every use of a nil map in Go). And of course Rust has plenty of better, graceful ways of handling optionals, so you can also ban this method entirely with a lint.
> Doesn't Rust have implicit panics on indexing out of bounds?
It does yes. A fair number of other constructs can panic as well.
> I wonder if any codebases lint those away.
Clippy has a lint for indexing so probably.
For the general case, it's almost impossible unless you're working on very low-level software (embedded, probably kernel-rust eventually) e.g. `std` assumes allocations can't fail, so any allocation will show up as a panic path.
https://github.com/Technolution/rustig can actually uncover panic paths, but because of the above the results are quite noisy, and while it's possible to uncover bugs thanks to rustig it requires pretty ridiculous amounts of filtering.
It’s really hard if you don’t start linting them early on, because they creep into your codebase. There is a set of functions that will surprisingly panic on you. Indexing is one example, copy_from_slice is another one; honestly it’s too bad that there are such functions in the standard library, but at least clippy can find them.
Sure, but the crucial difference is that all the unwraps in your code are then hooks that a linter can find, or simply grepping for unwrap in your codebase and do a manual audit of those pieces of code. In my experiments with Rust that's been a very nice way of working: first build a very barebones version getting the basic happy path right, and then getting all the tedious stuff right afterwards by eliminating all the unwraps. By making it explicit, you now have a concrete thing you can audit specifically, rather than every pointer access in your entire program and every pointer you pass off to a library.
> I don’t think Rust got rid of the null dereference problem.
Yeah it did.
> Just traded it for something slightly different, i.e. calling unwrap when no value exists causes the program to panic.
That betrays an absence of familiarity with option types.
First of all, an optional pointer is strictly more work, so you're not going to use an optional pointer when you don't have to, meaning most of your pointers will be non-optional and statically checked so.
This means when you do encounter an optional pointer, there's a reason for it. At which point you get to put your thinking hat on and wonder whether that's applicable to your situation:
* sometimes you don't care because it's a one-off or something and you just unwrap() and go on your merry way
* sometimes the pointer is set by construction but the type system is not expressive enough to understand that; usually by the second or third time you get around that code and don't remember why the unwrap's there, you'll switch to `expect` or `unreachable!` in order to document your assumptions or logic (which is also helpful when those break and the code panics)
* and most of the time there's a good reason why the pointer is nullable and it applies to your situation and you probably want to handle it properly and you do.
Plus `unwrap()` and friends are easily greppable so you can review them with little trouble, or flag them during review, or whatever.
In language with nullable pointers (and only that), you've got none of this.
But they add an unnecessary overhead, and still don't prevent you from actually writing `x := nil; print(x.a)`. And everyone can have their own Optional, meaning you're going to install dozens of different generics libraries for just this feature.
There are simple, go-like solutions. There's a character for pointer, so why not a character for a non-nil pointer?
In Rust, you're forced to match against the enum before you can look at the value, but in Go you can't even match. So unless it's implemented as a type cast, testing optionality would be just like testing against nil. The current internal "optional" package just panics, which is just as useful as a panic on dereferencing nil.
Function passing/callbacks, no thanks. You still won't get a compiler error when passing a potentially nil value, and I'd rather have readable code. I hope Optional is not going to be in the way of progress in this area.
I'm in my 40s now and have seen many hyped language features come and go, and very few that I think have proven to be a net good thing that truly makes programming better. Option types are one of the few. I think every language should have them now. It is more convenient, safer, and modern development machines can often compile away the tiny overhead.
Explicitly being able to mark a type as nullable, and forcing null handling if so. See https://kotlinlang.org/docs/null-safety.html: it works pretty well with Java interop by respecting JSR annotations and auto-checking for null.
I think for most intents and purposes, option types and Kotlin’s approach are more or less equivalent. Either way, the compiler helps you avoid null dereference errors.
I like Swift’s approach personally, which is that there’s an Option type, but loads of syntax sugar to make working with it easy (just append a “?” after the type to flag it as optional, “if let”, optional chaining with “?.”, “?()”, etc etc.)
For Go's purposes, though, they aren't equivalent at all.
Option types require generics, which Go doesn't (currently) have, and which may or may not end up being enthusiastically embraced by the community.
Option types must also be baked into the language, standard library and culture from the beginning in order to be ergonomic; otherwise you're constantly having to wrap and unwrap them whenever you interact with code that was written before the type was introduced. Which is most of the code you interact with.
Finally, my own experience has been that Option types generally make things more rather than less complicated when you introduce them to a language that already has null. You'll need to decide whether Some(null) is an allowed value, and either choice introduces language warts.
Kotlin's approach, on the other hand, does not require generics to work, and interacts very well with the JVM's existing libraries and its existing type system. It's not going to butter everyone's toast. You can't do anything monadic with Kotlin's approach, for example, but I don't get the impression that the Go community is just desperate for monads.
I suppose it wouldn't. But, if you're going to do it as a feature that's built into the core language, I'd say that's even more reason to just do it Kotlin-style. The other main downside of doing it that way is that it needs to be baked into the core language and can't be pushed out to the standard library the way you can with optional types. But that wouldn't really be a practical difference compared to an optional type implemented without full-fledged generics.
The other big distinction between the Kotlin approach and optional types is that the Kotlin approach introduces no new run-time types. Null safety is checked statically, and then you're done. That's a big part of why it plays nicer with an existing standard library. It also means, though, that you introduce no extra run-time overhead, which I would assume is something that's considered pretty desirable to gophers.
They're not quite the same. With option types, `Some(x)` is a distinct type and value from `x`. That's not true in e.g. Kotlin or Dart. For example, this is fine in Dart:
void foo(int? x) {}
void main() {
foo(5);
}
This is not fine in Rust:
fn foo(x: Option<i32>) {}
fn main() {
foo(5);
}
You might not think that makes much difference, but consider if `foo()` starts off as `fn foo(x: i32)` and after using it 10k different places you want to change it to `fn foo(x: Option<i32>)`. That's a backwards compatible change in Dart (and I presume Kotlin), but not in Rust.
You can do this with TypeScript. If strict mode is on, any type that can be undefined or null will cause a compile error if the condition is not explicitly checked for.
* val input: String is a non nullable String. You can never assign null to it, and null will never be assigned to it (unless you do some reflection stupidity, or really, really rare cases of you're-fucked-anyways-if-that-happens-something-is-really-wrong)
* val input: String? is a nullable String. It's basically String|null.
There's a third option, which is String!, platform types: since Kotlin has interop with Java, Kotlin will err on the side of being practical when calling Java code: it will (rightly so) assume that it is a non-nullable String, but still warn you that it doesn't have the necessary info to infer nullability/non-nullability, so it could technically be null. It's up to you to decide if you want to null-check it. This can be solved in two ways:
- Update your java code to include @Nullable/@NonNull annotations. You should already be doing that anyways, especially if you have control over it.
- Don't call Java code/wrap it in Kotlin wrappers. Kotlin code does not have this type inference issue.
I don't know Kotlin, but these are usually distinct from Options in an important way, idempotence: `String?? = String?` or `String|null|null = String|null`.
This equation makes sense for some mental models, but isn't quite the same as Option. In particular, Option[T] has the nice property that you can map over its inner type in a naive fashion `forall S. (T -> S) -> (Option[T] -> Option[S])` whereas the coalescing version above needs an additional assumption that `S` is non-nullable.
forall S not null. (T -> S) -> (T? -> S?)
This can really get in the way of some kinds of generic programming.
Not necessarily. Not only is it Option.Some/None/ProbablySomeButMaybeNoneBecauseJava, it doesn't offer the separation that Options can offer: with String? the "no value" case is itself just a value, null, sitting alongside values like "". Nullable types are different from options.
(Bonus point for Java that has introduced Option types, that can still be null)
The problem with Kotlin is that it only has one global "anything-but-null", so nested ?s are collapsed (String?? is equivalent to String?). This is because it needs to run on the JVM, which only knows about is-null and is-not-null.
This interacts terribly with generics, since it means that only one side of the generic boundary can "own" the null value at any one time. To simplify a case where this can cause issues:
class Loadable<T>(
// null until value is loaded
var valueIfLoaded: T?,
)
// returns null if user is not found
fun getUser(id: String): Loadable<User?>
Code that interprets Loadable<T> will get stuck showing a loading bar, since it has no idea about getUser using null for anything else. This doesn't even generate a warning!
The "correct thing to do here would be to add a `T: Any` bound on Loadable, which prevents T from itself being a nullable type. That would notify the author of getUser that they need to box the User?, so that the cases are kept separate:
data class Box<T>(val value: T)
class Loadable<T>(
// null until value is loaded
var valueIfLoaded: T?,
)
// returns null if user is not found
fun getUser(id: String): Loadable<Box<T?>>
These are things you don't even have to consider when using a typical well-designed Option type (which, mind you, could still have a syntax sugar for T? if you wanted it).
I'd argue that they are fundamentally different. Option is "just another type". You interact with it using constructors, map/flatMap/get etc. You could implement Optional in Java 1.5 but can't implement nullable types in current Java.
Nullable types are a feature of the type system and require language support, but the benefit is that you don't touch typical programming patterns
val foo: T? = x()
if(foo == null) { return ... }
// foo is now T, optionality gone
vs.
val foo: Option<T> = x()
// following must be wrapped and sprinkled in .map, .flatMap, .orElse
You can sort-of achieve the first with .isPresent() + .get(), but it clutters the scope, is more verbose, and is not really idiomatic.
Really? I haven't used Java in a long time, but in the languages where I've used Optional types, you'd always check for presence once and then extract the value before further use.
I'm really not a fan of the "initialize with empty value" approach. Especially when handling user input, e.g. unmarshalling JSON into a struct, Go makes it impossible to distinguish between the user setting a value to 0 and the user not specifying the field at all. Instead, you end up with awkward "pointer to int" constructs to allow for nil checks. I'd love to have an "undefined" value instead.
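A minimal sketch of that pointer workaround (the struct and payloads are invented for illustration):

package main

import (
	"encoding/json"
	"fmt"
)

// Age is a pointer so a missing field can be told apart from an explicit
// zero: nil means "not provided", a pointer to 0 means "the user sent 0".
type Input struct {
	Age *int `json:"age"`
}

func main() {
	for _, body := range []string{`{}`, `{"age": 0}`} {
		var in Input
		if err := json.Unmarshal([]byte(body), &in); err != nil {
			panic(err)
		}
		if in.Age == nil {
			fmt.Printf("%s => age omitted\n", body)
		} else {
			fmt.Printf("%s => age set to %d\n", body, *in.Age)
		}
	}
}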
I have never used Go in anger, only dabbled a bit and watched interestedly from the sidelines. So, my opinion probably isn't worth much. But here it is, anyway: it seems like this way of doing things would lead to a tendency toward messy, confusing domain models where you can't tell from types alone whether 0 or -1 or MIN_INT is a sentinel for "no value", or actually means that value, and possibly find yourself juggling multiple sentinel values for one type in the same scope.
My own cranky opinion is that, in the 21st century, a static type system that can't even reliably distinguish between something and nothing isn't much of a type system at all. Say what you will about dynamic type systems, but it must be acknowledged that they do a much better job of maintaining a firm and logically consistent, if not statically verifiable, grasp of the most basic ontological distinction that the universe has to offer.
There's always the option to wrap the atomic value in a struct of some sort. But, without generics, that's going to feel more like gRPC's awkward, verbose way of doing it than the ML family's Option<T> type. That approach is (mostly) tolerable in a datagram format, because I/O is going to be a hot mess no matter what you do, anyway, but I'd hate to have to do it with the types I'm using in my actual code.
I'm generally fairly disdainful of the, "every language must cargo cult everything functional languages do," thing, but it is interesting to observe that generics and proper algebraic types would cover all of these use cases - and more - cleanly and without having to add a specific language feature to cover each one. Which has me wondering, if the goal of Go was to create a maximally simple and ergonomic imperative language, did they actually achieve that, or did they follow a greedy search into a local maximum?
I agree this can be awkward, especially if you let these constructs propagate through your codebase and database. However, if a string or int can be null, then all strings and ints are essentially pointers, so you've just introduced this construct everywhere.
A couple of things I have tried:
- Hope the default values align with your business logic, e.g. an empty string isn't a valid name and 0 isn't a valid age.
- For partial updates, populate the existing values first, then unmarshal on top. Missing fields in the JSON won't overwrite the existing values (see the sketch below).
- Unmarshal into a map[string]interface{}, which gives you the semantics you want.
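For the second bullet, a sketch of the pre-populate-then-unmarshal trick (type and values invented):

package main

import (
	"encoding/json"
	"fmt"
)

type Profile struct {
	Name string `json:"name"`
	Age  int    `json:"age"`
}

func main() {
	// Start from the stored record, then unmarshal the patch on top.
	// Fields absent from the JSON keep their existing values.
	existing := Profile{Name: "Ada", Age: 36}
	patch := []byte(`{"age": 37}`)
	if err := json.Unmarshal(patch, &existing); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", existing) // {Name:Ada Age:37}
}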
This does not excuse the language necessarily, but it helps to understand what is going on. Go is like C, in that it cares deeply about allocations. It has various syntax glosses that may help it look more like a language like Python where everything is a reference, but it is not.
If a function says it returns a struct and an error value, then that function is going to return things into a memory chunk sufficient to hold that struct and error value. "sizeof" is a well-defined operation in Go. Everything has a precise size the compiler uses. Slices may look dynamic, but they're actually a three-word structure for size, capacity, and pointer; the "dynamic size" happens behind that pointer. Maps may look dynamic, but they're a fixed struct as well under the hood. Channels may look like they could be dynamic, but they have fixed sizes as well for their internal components. Go is a value-based language, not a reference-based language.
The same thing happens when you declare a struct value in a function. Memory for it is allocated immediately.
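A quick way to see those fixed sizes for yourself (the numbers below assume a 64-bit platform):

package main

import (
	"fmt"
	"unsafe"
)

func main() {
	fmt.Println(unsafe.Sizeof([]int{}))          // 24: pointer + len + cap
	fmt.Println(unsafe.Sizeof(map[string]int{})) // 8: pointer to the hash table
	fmt.Println(unsafe.Sizeof(make(chan int)))   // 8: pointer to channel internals
	fmt.Println(unsafe.Sizeof("hi"))             // 16: pointer + len
}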
"Initialize with empty value" is not a solution to any sort of type issue and has nothing to do with nil; "initialize with empty value" is a solution to C's uninitialized value problem. Unlike C, which simply grabs some RAM and gives it to you, and leaves it to you to figure out what to do with the garbage values in it, Go guarantees that all fresh values you receive are fully initialized to a well-defined "zero value".
In Go, when you say you have a struct, you do, right then and there, and so, it needs a value. You don't have an "undefined" value, because Go is too low level for that. For Go to have an undefined value for a struct isn't something it could just bodge on, it isn't something that was just an "oversight", it would actually be a fundamental overhaul to the language. Go would have to be adjoining values to the structs you declare, making it harder for you to know how much memory they take, or it would have to completely shift to a reference-based language, or it would have to do something else like that not in keeping with the nature of the language.
Rust, for instance, is doing more work than you may realize when you "Option" something. Think about what the memory representation of Option<byte> is. Without more information from the user, you can't use any value of the byte as the undefined value, because all 0-255 values are valid byte values. You have to stick it somewhere else. Rust, and some other languages, often do some magic to turn your byte into a 16-bit int instead and pack the invalidity into there. Having an array of Option<...>s is a non-trivial operation for the compiler, especially to do it maximally efficiently. Go doesn't do this kind of thing. The struct that is declared is what is in memory.
I welcome you to think that's still a problem in the language, but I hope this at least makes why Go works this way more comprehensible.
I think you're implying significantly more magic behind Option than there is.
> Having an array of Option<...>s is a non-trivial operation for the compiler
This is just not true. Arrays are always a number of values laid out in memory, exactly like in C. There is no magic for an array of options. It works the same as an array of any other value: you put N of them in memory, one after the other.
I should clarify. It is not that once the compiler computes the size of an Option that there is further confusion. My point is that something like Option<byte> is not necessarily something a programmer can just look at and know how large it is, because people who get very used to reference semantics like in Python, or smart compilers just doing things for them without having to think about the memory layout, can find it easy to forget that there's nowhere in a "byte" to stick a "None" value.
Sum types in general usually have compiler magic associated with them, because the common case of options or sums between various integers can get good results with the compiler being smart enough to pack the "None" option somewhere clever. As the size of the thing being used as a sum type goes up that amortizes to being less important. The naive way of using some integer as a tag and then having a chunk of memory large enough to store the largest value in the sum type can get very inefficient for small "largest values", especially if you have to round to a full 64-bit machine word for some reason.
Also, to be abundantly clear, I think this is all a good thing. It is intrinsically part of the value of a sum type in a language that you're not forced to think about the modestly complicated memory layout such things entail. It is good that compilers have some special cases for when you're summing on small values.
> Also, to be abundantly clear, I think this is all a good thing. It is intrinsically part of the value of a sum type in a language that you're not forced to think about the modestly complicated memory layout such things entail. It is good that compilers have some special cases for when you're summing on small values.
Seems more likely you'd limit your enums to 256 entries and always use a byte than require a word by default, no?
Niche-value optimisation is a much more complicated affair, so it's unlikely as a baseline, indeed.
Why is null even an expected value for strings? I get that it is a design decision for a low-level language like C, but as soon as you go up a level, the fact that strings are implemented as pointers to character arrays should be an implementation detail not visible to the language's user. Need a Null/Nil string? Use a pointer to a string, just like you would for an integer.
Because there's value in being able to differentiate between when a value is "truly absent" vs. a zero-length string. To take a trivial example with a simple API: a null string can carry the semantic that a value was (intentionally or not-intentionally) omitted in a request by a user/FE. What if that request schema grows to the point where there could be
dozens of intentionally-omitted/optional fields--you then get into this awkward kind of scenario where either the FE must include those key-values as empty strings or the API must be able to interpret those omitted values and coerce them into empty strings. Even if a framework or whatever is abstracting that away from you--the can is getting kicked somewhere for someone to have to deal w/ the fact that other systems/langs--and their contracts--do, very much, have the concept of null.
Why strings and not integers and doubles? Why strings and not structs? You have given plenty of reasons why nullability in general can be useful, but not why it is mandatory for strings to be nullable. Unless the core of your argument is C has null strings so now everyone must support that poor decision for interop purposes.
Personally I prefer how Protocol buffers handles not present strings compared to C. If you stick to std::string, C++ isn't bad either.
Yeah, ADTs, and specifically their application to preventing NPEs, are one of those things you never want to go back from once you've used a language which has that feature.
It seems like one of those fundamental advancements in language design, like when computer science collectively decided goto should be deprecated in favor of flow-of-control expressions.
I'm not sure I understand this discussion. It sounds like most people are saying that null/nils cause a tonne of problems because if you don't check them everywhere, you end up with a crash.
I don't see how making types non-nullable solves the problem; doesn't it just create a new problem? When I load the string from somewhere outside of the program and it is not present, I would either have to check for that condition (otherwise I will crash), or I would assign it some magic value and have to check everywhere "if not magic value".
otoh, if the strings are not external then there is little danger of things ever being null no?
> I don't see how making types non-nullable solves the problem; doesn't it just create a new problem? When I load the string from somewhere outside of the program and it is not present, I would either have to check for that condition (otherwise I will crash)
If the type is non-nullable then the compiler will refuse the program if you have not dotted your i's and crossed your t's. So you'd have to check your expectations at the edge, and if the expectation is "that is indeed optional" then it's optional internally, and the compiler will ensure that is acknowledged at every place it's used.
> otoh, if the strings are not external then there is little danger of things ever being null no?
It depends. Lots of things are internally nullable. If I look up an account by a key that's been provided externally, the account may or may not exist; that's an optional.
However, it's true that most of the internal values will be non-nullable, and then the advantage of non-nullable types is that this is checked and validated by the compiler, so it avoids misuses and misunderstandings.
When you only have nullable types, then the only thing you can do is assume, everywhere, and pray that you're right.
Non-nullable types allow functions to put the burden of checking for the existence of a value on callers. That means the check only has to happen in one place instead of everywhere a string is used.
But I sort of agree that excluding nulls is only part of the solution, because existing isn't the only requirement that values have to meet. I often need strings to be non-empty or have a minimum length.
Many modern type systems allow you to define new types that conform to an existing type's interface. But it's often a rather convoluted affair.
The type system forces you to deal with the case, instead of relying on the programmer to remember. You can still opt to crash explicitly if an option has nothing in it, but if you merely forget to check, it won't compile.
The advantage of having something like a non-nillable value is that it more strongly encourages (and/or forces, depending on the library) the programmer to move the handling of incorrect values to the edge, where they first came in. If you are parsing something, and you're trying to extract a string to pass it to an API that requires a non-nillable string, you are forced to handle not getting a string right on the spot, rather than passing it in to the API, where it may go who knows where.
This is a very important programming pattern that should be used at every available opportunity, and probably one of the biggest programming failures that is rarely talked about. Dynamically-typed languages encourage this style more than statically-typed languages, but it can be done in either one. You need to do as much of your validation as close as you can to the time that data enters your system. If you don't, instead of validation living at your edge, it gets smeared through the entire program, and it will be done incorrectly when that happens.
(If you have a deeper layer that has more restrictions on the validity, then the additional validation should be done on that deeper layer. This can be applied recursively within a program if it is big enough to have multiple domains. But you always want edge validation.)
Architectures that fail to do this inevitably become a mess on the inside. Functions develop that are passed "strings" but they don't know if they're validated strings, or decoded strings, or what. Where things get validated and decoded becomes incredibly complicated. Functions start growing options describing what's being passed, but then the functions calling those functions end up eventually being wrong themselves. This eventually metastasizes into the sort of code that nobody can change because every attempt to change something screws up some delicately balanced code path. The only solution to this is to start over and do this edge validation I'm talking about, so that the entire rest of the code base just stops having to worry about it.
This also goes hand-in-hand with the idea that you should decode data at the edge, and pass around data internally only in its natural format, and encode data only at the edge as it leaves. If you're in the "middle" of a program and suddenly there's code to URL decode a string, there's an architectural problem in that code. (This code is likely to become code in the future to conditionally decode the string, or much worse, guess at whether the string needs to be decoded, which is getting perilously close to a You've Already Lost situation.) If you come to a deep enough understanding of what it means you can start to see that validation and decoding into a canonical internal format, and the encoding on the way out from the canonical internal format, are the same thing.
I know this is a common question since I also had it myself at the beginning, but see the strong types not as the assertion that the world will never have a nil string in it, because that's obviously an impossible assertion, but as a statement that any calling code is going to have to deal with what happens when there's a nil string (or whatever other invalid input), because I, this strongly typed library, am not going to deal with it. This statement is also almost always correct, too, because the library lacks the context to know how to handle things it can't handle. You shouldn't ask libraries to handle things they don't know how to handle, because, well, they don't know how to handle them.
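A sketch of that edge-validation idea in Go (names invented; in a real package you would unexport the underlying type or wrap it in a struct so it can only be built via the parser):

package main

import (
	"fmt"
	"strings"
)

// Email is only meant to be produced by ParseEmail, so code receiving an
// Email can assume the check already happened at the edge.
type Email string

func ParseEmail(raw string) (Email, error) {
	if !strings.Contains(raw, "@") {
		return "", fmt.Errorf("not an email address: %q", raw)
	}
	return Email(raw), nil
}

// sendWelcome never re-validates; the type carries the guarantee.
func sendWelcome(to Email) { fmt.Println("welcome,", string(to)) }

func main() {
	addr, err := ParseEmail("gopher@example.com")
	if err != nil {
		fmt.Println("rejected at the edge:", err)
		return
	}
	sendWelcome(addr)
}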
In C# we were going from "every reference type can be null" to "only things marked with '?' can be null" with nullable reference types.
When I started using Go, it was nice to see that only something that is explicitly created as a pointer is nullable; everything else can't be null/nil, and you can't get a panic from dereferencing nil
> When I started using Go, it was nice to see that only something that is explicitly created as a pointer is nullable; everything else can't be null/nil, and you can't get a panic from dereferencing nil
Well, it's the same in C# since 1.0 to some extent, except that people rarely used `struct` (value types) in C#. The ecosystem matters a lot, of course, but just at the language level, Go and C# structs have the same semantics (though in C# it's harder to get a reference to a struct).
Like Swift, where once you check that a pointer (not integers, strings, and non-nullables) is not null, the compiler knows that from that point on the variable cannot be null.
Go already does that... if you have an "if x == nil { return }" at the top of your function, then the compiler knows that x != nil below that point, and elides nil checks.
Like, I'm not trying to be cute here, but the point of having a "non-nullable type" is that you can get an error message when you use a nullable type in its place. However, if you're not talking about non-nullable types and just about whether the compiler knows a value is nil, well, the compiler knows that.
Yea, I meant the ability to declare functions that only accept non-nullable args. For example: you have a chain of function calls, A -> B -> C -> D, where B, C, and D only accept a non-nullable arg; then you save the mental load of checking for null in three functions. Only A would need to check the var for null before calling B with that var.
The safety is contagious, preventing crashes from nil pointer dereferences at runtime. For example: if 99% of your program is functions accepting only non-nullable types, then 99% of your program is guaranteed not to crash from a nil pointer deref.
With Go, you have to check for nil in every function even if all your callers already check for nil. That's because Go can't declare a function's arg as a non-nullable type.
> With Go, you have to check for nil in every function even if all your callers already check for nil. That's because Go can't declare a function's arg as a non-nullable type.
I have no idea what you mean.
func foo(a int, b someStruct, c *someStruct)
a and b can't be nil, c can.
Now, it is true that there are multiple reasons for using a pointer aside from supporting nil (ability to modify, optimization not to copy a huge object).
I'm not who you're responding to, but the fact that interfaces/pointers (among other things) are nullable and there is no way to make them non-nullable is a problem with Go. A lot of bugs in Go programs are due to calling methods on those types and getting a null pointer error.
Your claims are correct, but it feels like you're missing the point they are (ineffectively) making.
Sure, that is why I added my second paragraph recognizing that things aren't all great, but still Go is a little better than Java for example in this area, where literally every non-primitive type is nullable.
What if I want to pass a pointer into the function (maybe I want to mutate the object), but I want that pointer to be non-nil?
You make it seem like value types solve this problem, but they don't. C also has the same thing you're talking about, but it still has nullability problems.
I think other comments are not even asking for anything complex. Something akin to C++'s references (which cannot be null) would already be a step forward.
I was only talking about pointers and interfaces, not primitive types. Yes A and B can't be nil, but I was only talking about C. Pointers (C) are what cause crashes in Go programs when using them without checking if they're nil first.
It’s a basic optimisation. Java does it as well, for example, and it’s safe. But it’s not part of the type system and it’s not what people here are talking about.
It's more complex in practice, since the compiler needs to see the entire lifetime of the variable to check that it's not modified asynchronously, as you say. But for many common use cases this is not hard to confirm, especially since the Go compiler has access to the entire source code of the program (unlike in C++, Java, etc.).
Universal nil: no. We need less nil, not more. Nil-safe types would be a better answer.
Error-checking: If you're not going to do anything with the error, then have an editor snippet for the ? boilerplate. Better still, start handling the error. The boilerplate is a Go anti-pattern.
I think we do need more clarity about the relationship between slices and arrays: it's not clear at the moment when two slices share a backing array or not, and therefore whether changes to one slice will change the other.
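The confusion is easy to demonstrate; whether an append is visible through the other slice depends on spare capacity:

package main

import "fmt"

func main() {
	a := []int{1, 2, 3, 4}
	b := a[:2] // b shares a's backing array

	b[0] = 99
	fmt.Println(a) // [99 2 3 4]: writes through b show up in a

	b = append(b, 5) // fits in a's capacity, so it overwrites a[2]
	fmt.Println(a)   // [99 2 5 4]

	b = append(b, 6, 7, 8) // exceeds capacity: b moves to a new array
	b[0] = 1
	fmt.Println(a) // [99 2 5 4]: a is no longer affected
}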
I'm kinda looking forward to generics, but also dreading it. Most newbie gophers don't understand how powerful duck-typed interfaces are and what they can do with them, and I have a feeling generics will be used for all sorts of situations that could be better handled with interfaces.
It's not universal nil, it's more universal zero. This proposal, in one way or another, ― using "_", or "{}", or "nil" ― has been around since 2016 at least, because zero-initializing a struct is indeed way too verbose.
> Better still, start handling the error.
Most of the time, you can do this only two or three levels higher up the call stack, and even there it's most likely to be wrapping of the `return nil, fmt.Errorf("frobbing bar: %w", err)` sort.
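A minimal sketch of that wrapping pattern, and of why %w still pays off at the top of the stack:

package main

import (
	"errors"
	"fmt"
	"os"
)

func loadConfig() error {
	if _, err := os.Open("/no/such/file"); err != nil {
		return fmt.Errorf("loading config: %w", err)
	}
	return nil
}

func main() {
	err := loadConfig()
	fmt.Println(err)                            // loading config: open /no/such/file: ...
	fmt.Println(errors.Is(err, os.ErrNotExist)) // true: %w preserves the error chain
}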
And yes, I am too excited and scared of the generics. I'd like to see generic map- and channel-related algorithms and patterns, and sort.Interface is an ugly hack, even if an ingenious one.
> because zero-initializing a struct is indeed way too verbose
hmm. I don't have the problem you're describing, because I generally declare the return struct at the start of the function and then return that. E.g.
func something() (mystruct, error) {
	result := mystruct{}
	if err := badthing(); err != nil {
		return result, fmt.Errorf("bad thing happened: %w", err)
	}
	// do something to result before returning it
	return result, nil
}
I must admit I don't understand how the proposal works fully. Would `x==nil` return true for an int with value 0? Both answers to this seem wrong to me.
> hmm. I don't have the problem you're describing, because I generally declare the return struct at the start of the function and then return that.
The problem with this is that, in a larger function, I have to read all the way up to confirm if result is an empty value or something else. I think returning myStruct{} is much better, and having a way to easily return a default myStruct would be nice. Calling that "nil" seems to be a bad idea to me, as others are pointing out.
> I must admit I don't understand how the proposal works fully. Would `x==nil` return true for an int with value 0? Both answers to this seem wrong to me.
I think it would return false in that case. There is already precedent for this in Go, in fact, with arrays/slices. Even though slices themselves are value types (they wrap a pointer) they can be nil, but []someStruct{} == nil is false.
The difference between an empty slice and a nil slice is one of the most inexplicable (to me) warts of the language, especially since nothing else behaves quite the same way (a nil map can be read from but panics on writes, and nil vs. empty interfaces differ in yet other ways).
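For the record, the wart in action (a small sketch):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var nilSlice []int    // zero value: nil
	emptySlice := []int{} // allocated, length 0

	fmt.Println(nilSlice == nil, emptySlice == nil) // true false
	fmt.Println(len(nilSlice), len(emptySlice))     // 0 0

	// The distinction leaks out in places like JSON encoding:
	a, _ := json.Marshal(nilSlice)
	b, _ := json.Marshal(emptySlice)
	fmt.Println(string(a), string(b)) // null []
}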
Anyway, I do agree that using `nil` as a base case for all types would introduce more confusion. What I think could be useful would be a `default` keyword for this:
var x int
x = default
if x == default {
	// this is true
}
This would obey rules similar to `iota` in terms of its integration into the language.
To be fair, it's not that much of a problem, and even then: "if result == _ { ... }" instead of "if result == MyPrettyStruct{} { ... }" is arguably more intent-revealing, as for "if number == _ { ... }", well, I guess the recommendation would be "just don't do that".
Perhaps gofmt could even automatically replace it?
> Most newbie gophers don't understand how powerful duck-typed interfaces are
Duck-typed interfaces are kind of broken in Go, though. I can use an Impl anywhere I can use an Abstract, but if I have a function that takes a slice of Abstract, I can't pass it a slice of Impl, because it's the wrong type.
type Abstract interface {
value() string
}
type Impl struct {
str string
}
func (impl Impl) value() string {
return impl.str
}
func ShowAbstracts(arr []Abstract) {
for _, a := range arr {
fmt.Println(a.value())
}
}
func main() {
arr := []Impl{Impl{"a"}, Impl{"b"}}
ShowAbstracts(arr) // This doesn't work, because arr is []Impl, not []Abstract
}
It works with `arr := []Abstract{Impl{"a"}, ...}` of course, but if you already have a 100K slice of Impl, you'd have to copy all 100K elements into a slice of Abstract to call ShowAbstracts in this example.
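For reference, the copy has to be element-by-element, because each Impl must be boxed into an interface value (this reuses Abstract and Impl from the code above):

func toAbstracts(impls []Impl) []Abstract {
	out := make([]Abstract, len(impls))
	for i, v := range impls {
		out[i] = v // each assignment wraps the value in an interface
	}
	return out
}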
Slices are backed by arrays, not actual arrays - copying a 100K slice into another slice isn't as intensive as it sounds.
I'm not sure I see this as "interfaces are broken in Go", or something that requires generics to solve? (I do realise that the language itself uses generics to implement the copy keyword for this).
Unfortunately I think copying 100K elements is going to be exactly as painful as it sounds. The `[]Impl` is going to be (more or less) an array of 100K strings. The `[]Abstract` will be an array of interfaces, each of which is internally two pointers, so I'll have to allocate 1.6M of RAM and fill that RAM with pointers in order to convert a `[]Impl` into a `[]Abstract`.
Generics fix this handily, because I can write:
func ShowAbstracts[T Abstract](arr []T) {
for _, a := range arr {
fmt.Println(a.value())
}
}
And now when I try to pass in my `[]Impl`, Go will compile a new `ShowAbstracts(arr []Impl)` transparently behind the scenes.
The problem is that ShowAbstracts could just modify the array and put an instance of Abstract there that _isn't_ an Impl, thus violating the original constraint of arr containing only instances of Impl.
Once again mutability/invariance raises its ugly head!
Whereas it doesn't bother me at all. I've used Go for years now and it feels totally normal to check the error condition after every call. It feels strange switching to JS and there's just nothing - stuff can just error at any time and we're not going to do anything about it? That feels odd ;)
You can - one can put an exception handler at the place one truly wishes to handle the error as opposed to always introducing repetitive code at every single place in the call chain.
The latter tends to make the eyes glaze over in tedium after some time and due to the signal/noise ratio one misses real bugs in the error handling.
>You can - one can put an exception handler at the place one truly wishes to handle the error as opposed to always introducing repetitive code at every single place in the call chain.
I don't understand this point but I hear it raised often. At best I'm lukewarm on Go's error handling. Having if err != nil feels the same to me ergonomically as a try / catch.
For a lot of complicated programming you can be 20 levels deep in function/method calls and most of the time you deeply don't care and you want to crash out to the main loop which will then either decorate the error for the user and crash itself or clean things up and loop if it can.
Then there are the exceptional exceptions which shouldn't crash the program and need to be caught near the "edge" and have something done with them, so you add those handlers to the program.
What you wind up with using that approach is only exception handling being written where you need it, and the vast bulk of the code is understood to always crash and get handled in some inner loop, or even some external supervisor.
Go feels like it was a reaction to (probably early-2000s) Java programming patterns where at every level Java programs were catching, decorating, and rethrowing their own exceptions. Neither of those approaches is really helpful, though, because you litter your code with all kinds of exception handling that nobody really cares that much about, and under both systems you can crash instead of retrying, and all the useless error-handling code you've written everywhere doesn't help you see where you've made that error. The 2010 Erlang crash-first programming mentality works way better, imo, than error checking everywhere.
And Go out of the box makes it difficult to get stack traces, so you should really use an error library at least.
The number of JS scripts I've had to add any error handling to is too damn high.
I like that Go forces at least some kind of error handling.
The "signal/noise ratio" problem fades after a while - you start seeing "if err != nil {...}" as a single statement after a while, a steady rhythm in the code. You notice when it's not there, or when it's doing something different.
> I'm kinda looking forward to generics, but also dreading it. Most newbie gophers don't understand how powerful duck-typed interfaces are and what they can do with them, and I have a feeling generics will be used for all sorts of situations that could be better handled with interfaces.
That really is the scariest part about it. I really hope the Go community will take this issue to heart and push back against codebases with unnecessary generics.
Otherwise the whole ecosystem will suffer from it.
> The way I see it, there is a simple solution staring us in the face — make nil a placeholder for the zero value for any type. [...] Sure, it might be a little odd to see nil used in place of 0 for integer values, but I think this is a small price to pay [...]
So is writing `return MyLongTypeName{}, errors.New("...")`. The power of Go is its explicitness. It's a slippery road to introduce an unintended behavior. The existing use cases for nil are extremely clear. There is no need to change that. I agree that it's annoying to write `return MyLongTypeName{}, errors.New("...")`, but it gives you a chance to think about a better name of a struct and normalize errors. Suddenly, an ugly looking line of code becomes short and nice `return Type{}, ErrInvalidOp`. Maybe Go would benefit from a reserved keyword that would represent a default value of a type, eg
return default, ErrInvalidOp
var x int = default
var y string = default
but again, this is just syntactic sugar, and it may produce uglier code when used in conjunction with explicit values:
return default, ErrInvalidOp
...
return 0, ErrInvalidOp
// was it intentional to write 0 here? can it be replaced with default? is there a difference between 0 and default?
3. Concise error handling - no
What if you have more than two return values?
func Foo(x int) (int, error, error)
y := Foo(x)?
Which error is being checked, the second or the third one? What if the second is nil, but the third one is not? Anyway, concise error handling has already been discussed by the community a million times. Nobody can suggest a good solution. When that happens, the best solution is to leave things alone, because, well, doing nothing is also a solution.
I'm well aware of Go patterns. That is not the point. The point is that the language allows you to return as many values as you want, in any order, of any type. If you introduce a new operator, then that operator should not be driven by a community pattern, but by a strict language specification that covers all the use cases allowed by the language. It should also explain how the operator behaves in weird scenarios (eg return err, Type{} vs return Type{}, err). It can surely all be explained by the spec, but Go is a language that wants to minimize "weird scenarios". The best operator is the one that doesn't need explanation.
> the ? operator could simply not be applied in this case.
No. Go doesn't want such operators to exist at all, because they are not simple. Go wants to find the lowest common denominator that elegantly solves an issue from all angles, in such a way that the authors don't have to write pages of specification, but most importantly, so that code readers don't have to read the specification when glancing through code.
You could have a `?` macro which only handles the standard case of two values. If you have more than one, then likely you need to handle it in a specific way anyway.
Having `?` does not mean the previous way of checking error doesn't work anymore.
Not saying this is a good idea or a bad idea - but I could see the use in a function that could error from either a bad file or a dropped network connection. You don't really want to combine those two into one enum or error class, but maybe your programming language has support for marking the types of errors a function can return and ensuring coverage.
I think one big reason why checked exceptions failed was the other big missing thing: generics. Of course, nowadays Java does have generics, but I haven't used Java since then.
But in principle with generics and possibly some other additions you could provide much more useful checked exception types: ones that are applicable to your situation and would allow you to bubble up more precise checked exceptions.
In reality generics made checked exceptions a lot worse, because while you can be generic over checked exceptions you can't be generic over their combinations, so you cannot have a wrapper which is "transparent" to exceptions (e.g. Swift's "rethrows").
You need a wrapper for 0 exceptions, a wrapper for 1, a wrapper for 2, …
> There is probably a model for checked exceptions that is good, but its current state in Java isn't it.
Yeah and Java has pretty thoroughly poisoned that well so in every discussion of the concept what comes up is mostly Java’s implementation and then it’s dead on arrival.
And then you’ve got Swift, which uses an exception-ish syntax for results, but they're not even remotely exceptions, and functions must say that they are fallible (or failure-transparent) but can't say what their failures are, so it always feels like the worst of both worlds.
Though it makes more sense when you understand that it’s dual to the old `NSError*` out-parameter system.
Can you explain to me why it was bad? I've not used Java (professionally at least) - from what I can tell, the main issue is that it's just much more verbose than returning something like Result<T>.
I feel it’d be easier to have “sealed interfaces”: Go already has type switches, so the only missing pieces are defining sealing and adding match completeness support to type switches.
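The sealing half can already be approximated today with an unexported interface method; a sketch (Shape, Circle, and Square are hypothetical names), where the missing piece is the compiler checking the type switch for completeness:

package shape

import "math"

// Shape is "sealed": the unexported method means only types in
// this package can implement it.
type Shape interface {
	sealed()
}

type Circle struct{ R float64 }
type Square struct{ S float64 }

func (Circle) sealed() {}
func (Square) sealed() {}

// Area happens to cover every implementation today, but nothing
// makes the compiler verify that; that's the missing check.
func Area(s Shape) float64 {
	switch v := s.(type) {
	case Circle:
		return math.Pi * v.R * v.R
	case Square:
		return v.S * v.S
	}
	return 0
}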
> The power of Go is its explicitness.
And yet it’s got plenty of implicitness. Though I guess you’re right in that most of it consists of footguns?
> 3. Concise error handling - no
Then you don’t use concise error handling? Seems pretty easy.
I would be surprised and saddened if `?` as described in the blog were added to Go. Error wrapping is critical for operating a service and staying sane; an operator encouraging `if err != nil { return nil, err }` is a step backwards.
Consider this situation (it may sound familiar): you get paged at 4am for 500s from a service, check the logs, and see 'file does not exist'. Go doesn't attach stack traces to errors by default. Go doesn't enforce that you wrap errors with human context by default. How do you debug this? Pray you have metrics or extra logging, enabled by log level, that can give some degree of observability.
This error could have been 'opening config: file does not exist' or 'initializing storage: opening WAL: file does not exist' or even just decorated with a stacktrace. Any of those are immediately actionable.
Now, if Go decided to make error wrapping a first-class citizen with a fancy '?' operator, I'd be excited. However, I doubt that will happen because Go is definitely not Rust-like in its design.
First of all, it's important to recognize that the majority of error handling in Go today is actually `if err != nil { return err; }`. Take a look at Kubernetes if you don't believe me.
Second of all, nothing prevents `x ?= Foo();` from implicitly doing `if x,err := Foo(); err != nil { return fmt.Errorf("Error in abc.go:53: %w", err)}`
K8s is regularly regarded as a very poor example of idiomatic Go code, along with the AWS client libs.
Searching our prod code, "naked" if-err-return-err showed up in about 5% of error handling cases, the rest did something more (attempted a fallback, added context, logged something, metric maybe, etc).
I would love syntactic sugar for `if err != nil { return errors.Wrap(err, "What I was trying to do") }`. As an SRE, making each part of the stack explicit (or explicitly non-explicit) is invaluable for understanding and debugging. I'm A-OK with some forms of error throwing, but they need to be clear and understood.
if err != nil {
    return fmt.Errorf("What I was trying to do: %w", err)
}
That's the correct and standard way to do error wrapping in Go since Go 1.13[1]. There is also Dave Cheney's pkg/errors[2] which does define "errors.Wrap()", but that has been superseded by the native functionality in Go.
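For illustration, a minimal sketch of that native wrapping (openConfig is a hypothetical example, not stdlib):

package main

import (
	"errors"
	"fmt"
	"os"
)

func openConfig(path string) (*os.File, error) {
	f, err := os.Open(path)
	if err != nil {
		// %w keeps the original error in the unwrap chain.
		return nil, fmt.Errorf("opening config: %w", err)
	}
	return f, nil
}

func main() {
	_, err := openConfig("/no/such/file")
	fmt.Println(err) // opening config: open /no/such/file: no such file or directory

	// Callers can still detect the underlying cause through the chain.
	fmt.Println(errors.Is(err, os.ErrNotExist)) // true
}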
Would it not be easier if Go just provided a stack trace attached to the error? How to do this cleanly in Go I don't know; I do, however, do this in embedded C++ and it works well. I agree that without context, errors can be hard to track down when they come from functions that are called by many other functions.
I am not a fan of manually wrapping errors because it seems inferior to a stack trace.
Also, I hate that errors in Go are mostly just random strings; it's super hard to do anything intelligent with them at a high level.
There have been many[1] proposals for streamlining error handling in Go, including some coming from the Go Team, like the check/handle proposal from Marcel van Lohuizen[2] and the try proposal from Robert Griesemer[3].
If anything, these proposals have been even more controversial than generics. The "try" proposal in particular was probably the mildest of them all, but it garnered a lot of criticism which amounted to "Go error handling is perfect as it is".
Personally speaking, I vehemently disagree with the criticism. I think that error handling in Go is a meteor-crashing-into-earth level disaster, and anyone happy with it must be completely out of their minds. But there does seem to be a sizable (or at least very vocal) group of Go users who believe this boilerplate helps with understanding the code, and I don't think we can reach common ground.
It makes sense. Due to its design, Go naturally attracts people who believe in what we can call "Imperative Explicitness Maximalism". This is the belief that code is easier to read the closer it is to a description of the way your machine executes it[4].
The people who want to reduce boilerplate as much as possible may be motivated by various beliefs, but I think the most credible one is that imperative explicitness hides the _purpose_ of your code behind nitty-gritty implementation details. From my perspective, in cases where the implementation is trivial (like error handling), you're adding nothing to the understanding of the implementation, while completely hiding the purpose and, at the same time, significantly lowering the signal-to-noise ratio of your code. So you get zero readability gains at the price of two significant readability losses.
Indeed, that was the idea behind keeping it unexported.
From an external package you can only access the explicit values and the functions that accept them as parameters. The type is opaque for the developer.
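For readers who haven't seen it, a sketch of the pattern being described (the package and names here are illustrative, not from the original code):

package suit

// suit is unexported: other packages cannot construct, cast, or
// forge values of this type.
type suit struct{ name string }

// These are the only values that can ever exist outside this package.
var (
	Hearts = suit{"hearts"}
	Spades = suit{"spades"}
)

// Describe accepts only the values above; the type stays opaque
// to callers.
func Describe(s suit) string { return s.name }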
An issue with a Rust-like ? operator is that it sanctifies the "result, error" return values. It's entirely possible, even if not common, to return three or four values from a Go function.
With generics Go is opening up the possibility of honest sum types. But, to be honest, I'm one of those people who are not upset by Go's error verbosity. The error shortcuts in Rust confuse me more often than not.
> Not sure how it’s an issue, you just could not use the shortcut for this “not common” situation, or the case where the result is an actual product
But the whole point of Go is that code is more often read than written, so legibility is paramount. Having a `?` operator alongside the traditional `if err != nil` would be very confusing.
> But the whole point of Go is that code is more often read than written, so legibility is paramount. Having a `?` operator alongside the traditional `if err != nil` would be very confusing.
Are you confused by unnamed returns? By the ability to declare variables initialised or not? By there being two syntaxes to declare variables? By it being possible to elide types in declarations or parameters lists?
Go is already full of shortcuts which I would say are more confusing and less useful than `?`.
I don't think that's a problem. The semantics of `?` would likely be defined for any number of (including zero) non-error return values, followed by an error value as the last return value. And when you use `?`, the call site will return however many non-error return values there are.
Go authors have explicitly stated that while the branches are clunky, they are so intentionally to make the points where your code branches much more obvious. I wasn't sold at first, but 6 months ago I decided to use it for a project where its ease of cross-compilation was the deciding factor, and since then I have to agree.
Anyways, you're entirely free to use a named return value in the second (or later) position of a multi-valued := statement. An example is noted below; the clunky and obvious branches are there, but the intent and flow are obvious at a glance.
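Something like this (Config and the JSON format are just stand-ins):

package cfg

import (
	"encoding/json"
	"os"
)

// Config is a hypothetical stand-in.
type Config struct {
	Addr string `json:"addr"`
}

func ReadConfig(path string) (cfg Config, err error) {
	f, err := os.Open(path) // err here *is* the named result
	if err != nil {
		return // zero Config plus the open error
	}
	defer f.Close()

	err = json.NewDecoder(f).Decode(&cfg)
	return // cfg and err carry whatever Decode produced
}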
The "Go authors" (by that I assume you mean the Go team) have very explicitly stated that they view the current way Go does error handling as a major problem[1][2].
The section that you're linking to in Effective Go does not state anywhere that "branches are clunky, they are so intentionally to make the points where your code branches much more obvious".
It only promotes the concept of using guard-clause-style ifs that return an error and have no else clause, gradually eliminating each possible error condition.
This style of programming was later termed in the Go community "Aligning the happy path to the left"[1][2]
I wholeheartedly agree that this style makes code more readable, and I have adopted it - and encouraged others to adopt it - in other languages besides Go. This style would still be followed with the try or check/handle proposals. For instance, let's take an example:
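A minimal sketch, with hypothetical Config and parse:

func load(path string) (*Config, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	cfg, err := parse(data)
	if err != nil {
		return nil, err
	}
	return cfg, nil
	// Under the "try" proposal the body keeps the same left-aligned
	// shape, minus the boilerplate:
	//   data := try(os.ReadFile(path))
	//   cfg := try(parse(data))
	//   return cfg, nil
}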
You continue to assert that error handling adds noise to a block of code. This is an opinion, that's fine, but it's not an objective truth. In many many domains -- in fact the domains which Go targets -- error handling is equally important to business logic.
One perk I've enjoyed with error verbosity is the ability to spit out different logs depending on where in the function you've failed. I'm not super familiar with the shortcut being described, but whatever shape it takes, I'd want to keep the ability to write something like this (fetchUser/fetchPrefs are hypothetical):
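func load(id int) (*Prefs, error) {
	u, err := fetchUser(id)
	if err != nil {
		// This failure site gets its own message...
		return nil, fmt.Errorf("fetching user %d: %w", id, err)
	}
	p, err := fetchPrefs(u)
	if err != nil {
		// ...and this one gets a different one, so the log
		// pinpoints which call failed.
		return nil, fmt.Errorf("fetching prefs for user %d: %w", id, err)
	}
	return p, nil
}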
I have very little working experience with Go; I tried it on a few personal projects to test the language. Overall I really like it, and I've been pleased by a lot of "features" (simplicity, explicitness, readability of the standard lib, etc.).
One thing that I missed was an immutability story. You have values/references and (again, I'm a beginner) I have the impression that passing references is favored. Wouldn't it be nice if the compiler were able to check whether you can mutate a reference or not? There are some proposals [1] being discussed [2] that could be a nice evolution.
Initially I was taken aback by the language; I felt we'd stepped back 15 years in language and design principles. I decried the lack of many of these features.
Today I feel more like the language has simply been misunderstood, and possibly because of its association with Google, wrongfully applied in places where it isn't the right fit.
It seems to have been adopted, in many places, as a fast system-level alternative to higher-level languages such as Kotlin or C#, when in actuality I think its niche is in providing a safer, more convenient alternative to C.
It is great for smaller, performance-critical applications, or things where binary size is important, such as operating system utilities, parts of the networking stack, small fast lambda functions, caching servers, etc. But not domain-heavy application servers.
So with that in mind I'm not sure how I feel about Go's current roadmap. It's now a language being adapted to be more suitable for applications I _feel_ it was never intended for.
Making Go more like other languages might be nice if you like those other languages. But then, there are probably valid reasons for Go's language designers not to have gone down that path.
I can work with Go when I have to. But it's not my main language. I appreciate the low barrier to entry. There's a ruthless pragmatism to the language that just means that whether you like it or not, that's how you do things in Go. gofmt is a great idea: it shuts down discussions about style that are fundamentally a waste of everybody's time.
The suggestions that the author makes are unremarkably reasonable and at the same time controversial to people used to the Go way. People get entrenched that way. It's like tabs vs spaces or curly braces vs. begin/end (or parentheses).
Enums are a good idea though. They should be entirely uncontroversial, being such an obvious and easy thing to do, and yet clearly people have dismissed the idea. I'd be curious about the reasons for that.
Error handling has been controversial since the language emerged, with people loving it and hating it. IMHO it would clean up the language a lot if they acknowledged the simple reality that the most copied bit of Go code is basically people deciding not to do anything substantial with the error other than logging it. Having the option to do something productive is nice. Making it required to spell out that you are not going to is silly. Java had a similar thing with checked exceptions. That's thankfully missing from Kotlin (which I use a lot), and it streamlines code quite a bit without really affecting stability. You still deal with the error somewhere. Just not right here, right now.
I mention Kotlin here because I'm eagerly awaiting Kotlin Native getting to the point where it becomes stable enough that people can write command-line tools in it like they do with Go. JVM-based Kotlin or Kotlin script is just not great for that (startup overhead, dependencies). Kotlin Native actually works well on iOS currently, but it is a bit lacking for command-line usage, mainly because of missing functionality in the library ecosystem. Just not a huge priority for JetBrains, apparently. But very fixable.
I don’t agree with this. Right now writing a basic LRU cache requires casting gloop in your code. The issues (especially 2 & 3) in the article are manageable. But generics are genuinely holding me back from writing reusable code.
I would love great enum support to be added to Go. I want to have a case statement for an enum’s value and have the compiler assure me that I’ve covered all possible cases.
I would rather make that check request-able but not default.
E.g. any case (including default) in a switch statement, or anywhere else, could carry a keyword such as FORBIDDEN, and at compile time a check would result in: a warning if it is unprovable that the case is unreachable (with a fatal exception if it is reached at runtime); an error if it is provably reachable (in the default case for your input handling, this would inform you of the need to update the code); or nothing (it is proven OK).
What I’m asking for is basically just sum types. In my mind the behavior of switch would be totally unchanged unless you use an enum as the subject.
If there are possible values you don’t want to handle that’s totally fine, you just need to be explicit about it by having a blank default handler or similar.
I wholeheartedly agree. Exhaustive compile-time checking of enum variants is extremely helpful (and absolutely a required feature if sum types exist), so that when you add a variant to the enum, the compiler will fail if you missed handling the new variant.
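Concretely, with today's iota pseudo-enums nothing fails at compile time when a variant is missed; a sketch (Weekday is a made-up example):

type Weekday int

const (
	Sunday Weekday = iota
	Monday
	Tuesday
)

func describe(d Weekday) string {
	switch d {
	case Sunday:
		return "weekend"
	case Monday:
		return "weekday"
	// Tuesday is silently unhandled; the exhaustiveness check
	// being asked for would make that a compile-time error.
	default:
		return "unknown" // also reachable via Weekday(42)
	}
}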
There are several ways to deal with nil in language design:
1. The "default value" approach as Go, or how Java treats primitive types: It's simpler and safer.
2. Universal nil, or how Java handles non-primitive types: Bad idea. It's Sir Tony Hoare's "million-dollar mistake". It leads to unsafe code and feels pretty backward for a modern language.
3. Union/sum type/algebraic data types: The Haskell and Rust approach. It solves the problem faithfully but does introduce mental overhead. Whether it's suitable for languages like Go which strive for simplicity is subject to debatable.
As for Enum and Exception, in principle, it's the same situation that there are three camps of ideas:
1. A minimal approach which is the Golang we have. For enum is there is no enum. For error handling is treating errors as return values.
2. A mainstream approach like Java and alike. For enum is enum like in Java. For error handling its exceptions and ad-hoc syntaxes like "?." operators.
3. A full solution like Rust and Haskell. For enum, it's ADT. For error handling it's also ADT-based Either type and monadic operators.
It's interesting that how often these three camps talk past each other, and even more challenging to hold the community together and make the right decision for the language.
> Secondly, since the type is based on an int there is nothing at the type level saying that your pseudo-enum can only take the enum values you want — it can take any possible value that the integer can represent.
That is literally how enums work in most languages AFAIK. Even in .NET an enum is just an integer and you can in fact assign any integer value to an enum and therefore break things.
In my opinion people who complain about having to write too many if statements in Go because of error handling just don't get it. I rarely ever want to bubble an err all the way up in my stack. In most cases I can either handle my error directly or if it's an error which cannot be dealt with at the current level then often I actually want to wrap it into a more high level application error and log the low level error or map it to something meaningful, distinguishing internal errors from user errors. People who look for a simple syntax to not deal with errors and just let them bubble all the way up simply do it wrong and need to learn how to write good code.
The article is hosted on some kind of shitty blog platform which doesn't let me copy quotes from it. Anyway, there was a point about how tedious it was to type,
return SomeStructName{}, err
This is what "goreturns" does.
Concise error handling would be "nice to have" but to be honest, I prefer Go's current error handling over exceptions, most of the time.
Why do you prefer them over exceptions? What is the preferred way of handling errors deep in the stack in a unified place (e.g. by displaying an error page)?
I'd add operator overloading and static method dispatch at least. For mathematical types it is much better to write `M * v` rather than `M.VecMultiply(v)` for clarity.
I gather that Go people freak out over syntactic sugar like `cout << "foo\n"`, but they've thrown away the ability to better express mathematical notation.
And the language itself cheats: internally it can do that, and it has the kind-of-almost-useless complex type, but you can't extend it to add vectors, quaternions, and octonions that use algebraic operators.
You don't need this for writing a gRPC web service, but if you start doing ML or game programming this becomes important. I think there's one of those koan quotes out there about how the code you write should strive for clarity, and I'd argue that function calls are less clear; that's the reason why we write `1 + 2` instead of `1.AddInt(2)`.
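For illustration, the method-call flavor this forces (Vec3 is a hypothetical type, not from any library):

type Vec3 struct{ X, Y, Z float64 }

func (a Vec3) Add(b Vec3) Vec3 {
	return Vec3{a.X + b.X, a.Y + b.Y, a.Z + b.Z}
}

func (a Vec3) Scale(s float64) Vec3 {
	return Vec3{a.X * s, a.Y * s, a.Z * s}
}

// midpoint := a.Add(b).Scale(0.5)
// versus the mathematical notation: (a + b) * 0.5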
> Point 2. It's not clear what the operator '?' does for a person new to the language. It will introduce confusion.
I mean, it's not clear what pretty much anything (except the most basic stuff) does for a person new to the language. Has to be documented, like anything else.
I don't like the inconsistency of forcing you to assign all of the return values of a function (even if you just use underscore), but not doing that for range, map access, or taking something off a channel.
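To illustrate the asymmetry (a complete program just so it compiles):

package main

import "os"

func main() {
	m := map[string]int{"a": 1}
	ch := make(chan int, 1)
	ch <- 7

	// Map reads and channel receives let you drop the extra value:
	v := m["missing"] // compiles; silently yields the zero value
	x := <-ch         // compiles; no closed-channel check
	// (range over a map or channel has the same flexibility)

	// A function's return values, by contrast, must all be taken:
	// f := os.Open("x")   // compile error: assignment mismatch
	f, err := os.Open("x") // both must be assigned (or discarded with _)

	_, _, _, _ = v, x, f, err
}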
Please no on the Universal nil. Meaningful zero values for types are essential. It may be confusing to some that nil is a valid zero value for some types and not others, in which case that needs to be refined, but having a type that can either be a value, a zero value, or a nil introduces complexity where there should be simplicity. Having a zero value enables, for example, Unconditional Programming: https://www.youtube.com/watch?v=AnZ0uTOerUI
Personally, I find writing tests in Go to be extremely painful outside of trivial assertion tests. The minute you need to start introducing test doubles, spies, or mocks, you're abusing interface{}. Compared to Ruby's rspec or even Python's pytest where creating these objects is fairly straightforward, writing new code with Golang often feels like a chore.
I get that a lot of this difficulty stems from Go being a compiled language, but I still wish that writing tests were easier. (I'm really thankful that Ginkgo and Gomega exist though!)
> In particular, there is no shortcut for the zero value for a struct value (not a pointer) — you have to specify the struct name followed by an empty set of braces
There's no such thing as a struct name. Structs are unnamed. The only way to have a zero-valued struct is, obviously, to instantiate it with its members as zero values, which is exactly what struct{members}{} is.
`type X struct { age int }` just makes X refer to a particular type of a struct, but this also works:
type X struct{ age int }
print(reflect.TypeOf(X{}).Name()) // prints "X"
But a name isn't the only thing that may be "magically" bound to a struct. Methods too; if you define methods on `X`, then your `john` won't have those methods.
I encourage you to re-read the language spec, paying particular attention to the relationship between "defined types" and "underlying types".
1. A struct is a sequence of named elements, called fields, each of which has a name and a type.
// An empty struct.
struct {}
// A struct with 6 fields.
struct {
	x, y int
	u    float32
	_    float32 // padding
	A    *[]int
	F    func()
}
2. A type declaration binds an identifier, the type name, to a type.
The new type is called a defined type. It is different from any other type, including the type it is created from.
type TreeNode struct {
	left, right *TreeNode
	value       *Comparable
}
--
type X struct {} is simply a type named X, of underlying type struct{}, which (I'm referring to the struct here) cannot have a name - because it is an ordered sequence [(name, type), (name2, type2), ...], not a product type (X, [...]).
I don't know why you bring methods into this, as if methods were somehow connected to structs. In your reflection example, Name() returns:
(1) the name of a defined type
(2) the empty string for all other cases
It is designed for all types and has no particular meaning for the struct type, any more than for int64 or float64. Would you say "ints have names"? You can create a type with a name and/or methods from such underlying types too.
I don't wanna nitpick in any case; my point was that structs are not an exception like OP mentions. The brace syntax is a composite literal, and maps, arrays, and slices all use it. The latter just don't use it for creating zero-valued objects, because they are dynamic and nil-valued.
Both declaring a value without a specific value (var john struct{} or var john X) and using a literal (struct{}{} and X{}) seem concise to me. Unless we bring inference to the table.
definitely disagree with the 'Concise Error Handling section', at least without a lot of enhancements to errors first.
Yea, there's a lot of error handling code in Go, but a lot of the time it's not just "if err != nil return err". Errors are a simple interface type with an 'Error() string' method. This is NOT ENOUGH METADATA for real, usable errors a lot of the time. Lots of codebases are littered with stuff like 'if err != nil return errors.WithStack(err)' simply to attach a stack trace to an error coming from an external dependency. Sure, the enhancement proposed in the article doesn't preclude you from continuing to do that, but it doesn't solve the actual shortcomings of errors either, in my opinion.
Errors in Go != exceptions common in other languages, I get that, but that doesn't mean I want to have to remember to attach standard metadata at every possible place an error bubbles up in my application. It's a consistency and simplicity problem too, since in some places you can just bubble an error up transparently, and sometimes you've got to wrap it with metadata for reasons that aren't obvious just by looking at the immediate error-handling code. That's a problem.
"Firstly, it is not an obvious way to declare an enum if you are at all familiar with their use in other languages."
It's the same way C does it, other than saying "const" instead of "enum". Explicitly casting int -> enum is "recommended" but not required or enforced.
Enums are a pain in C# and Java. Not a fan. The error handling is what it is. Not a fan. It is really bad. The second point I don't even understand. Never had that issue.
I think the problems with Go becomes obvious when you spend some time reading Go code. Try reading the code to Kubernetes and so on.
Are generics not essential in a modern industry language? How else can you avoid rewriting data structures? Void pointers and polymorphic parameters expect the programmer to know the underlying type.
Given how successful Go has been without generics, it appears generics are not essential in a modern industry language.
I'm happy generics are coming to Go, but living without them hasn't been difficult. (Like the article author, I feel like I'm jumping through more ugly hoops due to lack of a proper enum type than I am due to lack of generics.)
We went through all this with Java in 2004. Lots of people saying "we don't need generics", then a few years later nobody can imagine writing code without them. You may not yet know what you're missing, but you will.
Flaws often represent design tradeoffs. Rust's lengthy compile times are a tradeoff for a highly sophisticated type system, where Go made the opposite choice (a simple type system for fast compiles). As a web engineer I prefer the latter for a rapid development/debug cycle.
The problem with void pointers is that if you make a mistake, you get undefined behavior. If you make a bad cast in Java, Go, or C#, instead, it fails. You get nil, you get an exception, or you get a panic, depending on how you do the cast, and which language you're using. None of these result in undefined behavior.
If you look at Java and C#, neither had generics either. You used casts, just like Go. It's safe, because the casts are safe (unlike void pointers). People started using Java and C# anyway, even though they didn't have generics.
Java then added generics in J2SE 5.0, which came out in 2004, when Java was nearly 10 years old. A year later, in 2005, C# added generics as well, with the release of C# 2.0.
Given that Go is about 11 years old now, it doesn't seem so out of place.
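The pre-generics cast pattern being described, sketched in Go (the same shape Java and C# code had before their generics):

package main

import "fmt"

func main() {
	// A pre-generics "container of interface{}".
	list := []interface{}{"a", "b"}

	// comma-ok assertion: a wrong guess gives ok == false,
	// never undefined behavior.
	if s, ok := list[0].(string); ok {
		fmt.Println("got:", s)
	}

	// A single-value assertion with the wrong type would panic
	// (fail loudly) rather than corrupt memory:
	//   n := list[1].(int) // panics at runtime
	s2 := list[1].(string) // correct type: fine
	fmt.Println(s2)
}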
C# had generics already; they just weren't fully baked for the 1.0 release, and they decided to postpone them rather than delay the .NET 1.0 release.
That is one of the reasons why the CLR already had most of the infrastructure for proper handling of generic code, instead of generics being compiler-only sugar.
It really depends on the types of programs you are writing. A lot of CRUD style "microservices" being built these days don't ever construct any complex in-memory structures, instead deferring that to databases etc.
Many system-ish programs, CLIs and the like, also don't really need complex structures or benefit from authoring them plus the algorithms that traverse them in a program-specific way.
That said, I'm not a fan of Go but generics are far from the only reason why I'm not a fan and it could probably exist as a somewhat useful language without them indefinitely.
There are times when you wish you had them, but they are typically conversion type functions. Those just don't happen that often for a lot of applications.
It's easy to show some contrived add func, and how you need one of every single number type in Go. But in practice, people rarely ever need such a function.
You can get close enough to an enum by making your own type and a few values. If some fool typecasts their own values, that's on them but also easily checked for.
The way nil works is pretty simple. Unexpected cases are cases that didn't understand how nil works.
Errors as values is how the language works, and how I prefer it. No magic, make sure each step works. When I hammer a nail, I can tell immediately if I miss it, and deal with it. I don't have a catchall that sees I wasn't wearing safety glasses and makes some random hammer error.
Then again, I also don't think generics are a good thing, so maybe I'm just against change to the language at all.
> The way nil works is pretty simple. Unexpected cases are cases that didn't understand how nil works.
I mean sure, but wouldn't it just be strictly better if there were no null values and you specifically had to handle the present vs. non-present case (i.e. Haskell's "Maybe" type or another sum-type construct)? Any non-trivial codebase is going to have programming errors where "nil" crashes the program, where the compiler could literally have told you that your code was incomplete.
> Errors as values is how the language works, and how I prefer it. No magic, make sure each step works. When I hammer a nail, I can tell immediately if I miss it, and deal with it. I don't have a catchall that sees I wasn't wearing safety glasses and makes some random hammer error
I'm not following the analogy, but it seems to rely on writing perfect code the first time, which is not possible. Nor are any of the proposed improvements "magic".
> Then again, I also don't think generics are a good thing, so maybe I'm just against change to the language at all.
With respect, it definitely sounds like it. Which is fine, but there's serious design problems with go that can be improved over time, and it's not a personal affront.
The comment you’re replying to is not talking about “a zero value type”, it’s talking about option types aka separating nullability and value.
If you don’t know about that, please educate yourself, or at least ask questions, instead of replying with nonsense.
Same with “errors as values”, nobody is arguing about or against “errors as values” (then again I can’t say I know any language where that’s not the case, possibly VB? Exceptions certainly are errors and first-class values so they’re errors as values).
"maybe" (Haskell) or "option" (Rust) types (other languages have them, too) are not zero value types.
An Option, for example, can be one of two variants: Something or Nothing. And if the Option is Something, then the value is embedded in the "Something" variant of the type.
The callee has to match on the variant to extract the value. This provides more safety than nil does, because you must specifically extract the value from the "Something" variant (and that extracted value can never be nil). You cannot extract a value from the "Nothing" variant (compiler error). Therefore, calling a function that takes an Option and passing Nothing is equivalent to nil (it expresses intent to provide no value) but with greater safety.
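In Go terms, once generics land, the shape could be sketched roughly like this (Option/Some/None here are made up, not stdlib, and Go still wouldn't force a match the way Rust does):

package option

type Option[T any] struct {
	value T
	ok    bool
}

func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }

func None[T any]() Option[T] { return Option[T]{} }

// Get is the only way at the value, so the "nothing" case can't
// be ignored the way an unchecked nil can.
func (o Option[T]) Get() (T, bool) { return o.value, o.ok }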
I appreciate the thought-out response, but I do understand the concept.
It just shifts the onus somewhat.
A well written Go program will check for and handle nil up front. That an option type is 'safer' only helps enforce good habits.
I'm not against the idea, I just don't see the big benefit to a programmer who understands what he or she is working with. Unless the value is solely in not having to think about nil cases, in which case I'd argue the value is very little.
Relying on “well written” to prevent bugs is just a fantasy.
Non-nullable types and compiler-enforced checking of present/non-present values lead to more readable and more maintainable code. And there's a mountain of CVEs, and bugs in Go programs, to prove it. Maybe you just have to experience it.
Honestly a language designed in this century with nil values is tragic compared to what exists in other languages.
> That an option type is 'safer' only helps enforce good habits.
It enforces correctness at compile time, not habits. As for "a programmer who understands what he or she is working with": one day you'll learn that everyone makes mistakes regardless of experience, but it's better to accept and adjust to that fact before it bites you ;)