I've been doing quite a bit of Go programming for a while now, and I've internalized all these behaviors, but this one gets me every time:
var err error
if true {
    foo, err := someFunction()
    // err gets shadowed: := declares a NEW err inside this block,
    // so the outer err above is never assigned
    _ = foo
}
It's particularly annoying because if both foo and err are already defined in the outer scope, you get a compiler error ("no new variables on left side of :="), so you tend to forget that when only one of them is new, := quietly shadows the other.
I think Go is one of those languages that really benefits from an editor that performs static analysis.
I use Atom and gometalinter catches this on a daily (if not hourly) basis. Variable shadowing is a sharp edge of the language that's very effectively dulled by proper tooling.
If anyone is using VIM or Atom please please please install the Go helpers, too. Atom's go-plus is awesome.
I beg the team I work with to use linters and go-plus and to avoid pitfalls like shadowing err. Instead, most of them run the linter, don't fix the errors, and then defend poor practices like shadowing and returning private types from exported methods.
I tend to simply not have duplicate variable names (with the exception of `err`, which is always treated with special care anyways) so this shadowing behavior doesn't really trip me up any more.
Languages probably shouldn't allow shadowing, certainly not within a single module. Think of the maintenance programmers.
"Defer" was probably a bad idea. It's a run-time construct - you're adding to a to-do list run at the end of the function. It's not scope oriented. Something like Python's "with" clause would have been both simpler and more useful. "with" calls "_enter" on an object when you enter a scope, and "_exit" when you leave, no matter how you leave. All the file-like and lock-like objects support "with". Python's "with" even supports exceptions properly - if you're handling an exception when "_exit" is called, the "_exit" function gets told about the exception, so it can clean up and then re-raise the exception. If an "_exit" itself needs to raise an exception, that works, too.
"Defer" is blind. With "defer", nobody ever finds out that the close failed to write the end of the file.
> "Defer" is blind. With "defer", nobody ever finds out that the close failed to write the end of the file.
This is simply false and a misconception held by people who haven't spent a lot of time with the language. You not only can, but absolutely SHOULD handle errors in defers.
Using named return parameters on your method, your error return is still in scope and can be set. This is how you properly handle errors on defers.
> This is simply false and a misconception held by people who haven't spent a lot of time with the language.
If you've spent a lot of time with the language, then you've surely noticed that people rarely do this (because it's extremely verbose). The language encourages ignoring errors in defer.
(Contrary to the parent post, I'm not sure this is a real problem, however.)
That is so cool. And confusing. And likely to lead to errors.
Error handling should not require l33t features. (This is my big argument with both Go and Rust. The stuff they use to get around not having exceptions is worse than exceptions.)
I wouldn't really call "return values" a "l33t feature".
Exceptions are absolutely a "l33t feature". They break whatever is currently happening. They are an uncontrolled return working its way up your structure.
While you're attempting to recover from that Exception I gracefully catch my error and can even use the multiple return values that came back successfully from my function, despite there being an error.
To be fair I think most Go programmers are exposed to defer .Close() on files via example, and it's possible every copied example can be traced back to some proto-example in the very early days of Go. It is extremely pernicious in the ecosystem - Docker, Kubernetes, and etcd all had rounds of cleaning it up over the last year or two. I would expect linters to flag Close without checking the error value (if they don't already).
The problem isn't shadowing. The problem is capturing by reference. It's utterly confusing, because it means that all closures that you create in the same environment share the same mutable state - delicious action at a distance!
Languages where closures capture by value (ML, Haskell) or where the programmer explicitly chooses between capturing by value and by reference (C++, Rust) are free of these complications.
When I say ”by value”, I mean the abstraction the programmer is exposed to, not the internal implementation detail. In a Haskell implementation, the environment may well be internally represented as a pointer, but the programmer doesn't need to worry about that. On the other hand, a Go, Python or Lisp programmer is constantly and painfully reminded of the difference between copying an object and pointing to a pre-existing object.
Haskell programmers do have to think about it as being by-reference if they want to understand the performance/memory-use due to sharing.
But for correctness, yeah, you just don't have to think about it. So you don't think "by value" or "by reference", it just becomes something obvious that you don't need to think about.
Adding control semantics via library extensions seems to be a thing now. It has a bad history, though, from the extensible languages of the 1970s, when people first discovered that you could do fancy things with macros. That didn't end well.
You don't need many control constructs, and the list is pretty well known, so you may as well design them into the language.
It's really a common source of errors; I was hit by it once, and lots of other people have been too. However, it's not unique to Go: I first learned this behavior in C# (which used to capture loop variables by reference; I think they changed that in the meantime). It can also be encountered in Javascript (if a var instead of a let binding is used for the loop variable).
I found it interesting that the old Java style guarded against that behavior, because it required captured things to be final, so you had to copy the thing you wanted to capture from the loop variable into a fresh final variable anyway.
The good thing is: If you encounter this behavior once in any programming language you most likely research in new ones how loop variables interact with closures. So the golang behavior wasn't a new source of errors for me.
However I still learned something new here: I didn't expect the different behavior between the go/defer statement() and go/defer func() {statement()}() variants.
In C++, you can explicitly specify whether lambdas capture by value or by reference, on a per captured variable basis. Most reasonable programmers would capture `int`s by value. For instance, this program is guaranteed to print all numbers 0..9, not necessarily in order:
#include <iostream>
#include <thread>
#include <vector>

int main()
{
    std::vector<std::thread> vec;
    for (int i = 0; i < 10; ++i)
        vec.emplace_back([i] () { std::cout << i; });
    for (auto & thr : vec)
        thr.join();
    std::cout << std::endl;
    return 0;
}
Reusing the loop counter variable across iterations in an imperative language is perfectly fine. The confusion comes from capturing a mutable environment by reference, which is a confusing (and hence bad) default.
I think the problem, really, is mutability, which leads to pretty unintuitive results.
The same is true for javascript. For example at first glance you'd expect
var xs=[]; for (var i=0; i<10; i++) {xs.push(function(){return i});}
to be equivalent to
var xs=lodash.range(10).map(function(j){return function(){return j};});
but the first won't work as expected (every function in xs returns 10). It's dangerous because its loop is based on mutation, so you need to think of your variables as mutable containers rather than simple values, even for primitive types like integers.
> you need to think of your variables as mutable containers rather than simple values
uhm, no. You think of variables as labels that happen to be attached to a value. And since they are variable (as in "they vary"), just like in all other languages that don't label themselves "functional programming languages", they can be re-attached to other values. In a functional programming language (Haskell, OCaml, Scala?) you simply "can't re-attach the label to something else"; you just "create a new scope and inside it a new label with the same name that is attached to the same value".
this is the only sane way I found to think about these issues. Oh, and Javascript's `let` is kind of like a "transplant" from a functional language to an imperative/procedural one... a pretty cool transplant imho, since by putting it at the beginning of a block you get the "standard functional language behavior".
only problem in go is probably the `:=` that messes up with people's intuition. they shouldn't have allowed it to work with mixes of defined and new variables on the left...
> this is the only sane way I found to think about these issues.
That seems a little unfair. The question here is whether i is treated as a value or a reference. In JavaScript, i would usually be treated as a value: passing or returning integers is done by value, appending integer i to some array on each loop iteration would append the value, and so on. Giving i reference semantics when building a closure is a departure from the way JS treats integers in most other contexts. It would not only be perfectly sane to close over i by value as well, it seems it would also be more intuitive given that the misunderstanding we're discussing must be one of the most common errors for intermediate JS programmers.
Now, if i were an Object rather than an integer, I think the argument for the current behaviour would be much stronger, because Objects are passed around by reference in other contexts in JS as well. (Personally, I strongly dislike the way a variable might implicitly have different semantics depending on its type within a dynamically-typed language, but that's a whole other debate.)
Unfortunately, changing the semantics for closing over a variable to be by-value in cases like this would also break lots of other idiomatic JS, including the whole idea of building its version of modular design around a function that has local variables and returns one or more closures over those variables. IMHO, it would have been better if the language had explicitly distinguished between values and references to allow those kinds of idioms without the counter-intuitive consequences in other contexts, but we have what we have for historical reasons and it's far too late to change it now.
10.times.map do |i|
-> { puts i }
end.shuffle.each(&:call)
This makes 10 lambdas which print the current value of the loop and then calls them in random order. Each puts sees a different i.
I don't think the behaviour in the article is really a problem with loop variables, but with defer. It is odd that defer packages up the function and its arguments when it is defined. Deferring printing i remembers the value of i at that time, whilst deferring printing &i remembers the (unchanging) address of i.
OTOH, having defer remember the values makes things like
f, _ := os.Open("file1")
defer f.Close()
// ... do stuff ...
f, _ = os.Open("file2")
defer f.Close()
close both files, which is probably what the programmer expected. I think that's pretty horrible code, but either way some people are going to find the results surprising.
Go's "problem" is that it can't efficiently give i a new address in each iteration: it'd have to put each one in a new memory location.
Go's problem here stems from the fact that it relies on C-style for loops for this kind of thing instead of iterators. With iterators you can define the language semantics to provide a fresh location on every iteration. It's yet another point against C-style for loops...
The block inside the loop must be kept in memory while there are still anonymous functions that reference the per-iteration loop variables. Loops are linear in space if such functions are being generated and retained in each iteration. If they aren't, then the per-iteration stack frames are cleaned up by GC.
This applies to each-style loops. Actual Ruby for loops don't have this behaviour.
a = []
for i in 1..100
a << ->{ puts i } # append, to a, a function that prints 'i'
end
a[0].call()
This prints 100, since the scope of i covers the entire loop, and not each iteration. They are, however, much faster than each-style loops, since they don't have to make a stack frame for each iteration.
I don't know enough about Go to be sure, but it seems that there are multiple possible semantics here, and the choice made by its designers does not seem to be inherently more efficient than the alternatives, except perhaps for some rarely-used cases.
Also (though this may be going off-topic), imagining what instructions might be emitted has been known to mislead, especially in the face of aggressive optimization (looking at the actual compiler output may give you insight into the language's semantics, but reading the language's specification is probably an easier, quicker and more reliable route.)
This definitely seems to vary among different languages, and is something I've always been annoyed with in C++.
Scoping outside the for loop is rarely useful, but often creates bugs. The one case where I like it is for searching an array and wanting the index, but even then it's not necessarily elegant.
I don't see this as whether or not there is literally a new i created or not, but if you have access to i after the loops last bracket. Here's an example:
int i;
for (i=0; i<10; i++) {
    do_something(i);
}
// i is still valid here, which might be
// unexpected and cause a bug
vs. something that would look like this
{
    int i;
    for (i=0; i<10; i++) {
        do_something(i);
    }
}
// usage of i at this point would be
// a compiler error
I'd have to come up with a convoluted case, where this would directly cause a bug, but it would be a case where you reuse the variable i, and forget to reinitialize it. Java does the second form, and forces you to declare the variable outside the loop if you want to use it in that context.
Saying "loop variables are scoped outside the loop" is incorrect.
All the gotchas noted stem from the fact that people are using _closures_ in the form of go and defer statements, which capture i by reference unless you pass it as an argument.
I think it's more an issue with closure gotchas (or the fact that go and defer statement aren't viewed that way) than with scoping.
How would this make any sense, though? In the case of "range" I can see the confusion, but when you explicitly declare a variable and increment it for each iteration of the loop, the only way it can work is if there's one instance of the variable.
It's the disparity between how a human thinks and how a machine thinks. Systems languages tend to care less about how humans think, which I prefer, because humans don't have consistently defined logic.
I don't think so. As programmers, we have to think logically, and in this situation there really is no other logical way of thinking about how that code might work.
> It should be stated that entire teams build large systems with Go on a daily basis and don't step on these
I strongly suspect they do step on them, and then simply learn the behavior at that point, and then get on with their lives.
> The community of professional Go programmers has arrived at practices that avoid such issues.
I'd be interested in knowing those practices.
Shadowing I'd assume is "solved" with warnings (as errors)? I don't know enough Go to know how you'd effectively avoid the non-nil interface to nil instance issue, short of carefully checking for nil on every conversion. As for the for loop scopes, do you just declare new variables in the local scopes? Interestingly, I expected the "WRONG" behavior in 2 of the 3 for loop examples... and C# actually changed their behavior (in C#5?) from "WRONG" to "OK" with for-each loops.
>Shadowing I'd assume is "solved" with warnings (as errors)?
Yep, shadowed variables are caught by the linters bundled with the Go packages for every major editor. Using these linters is part of the culture, and as a result shadowing bugs crop up less often than you'd expect for such a glaringly obvious issue.
>I don't know enough Go to know how you'd effectively avoid the non-nil interface to nil instance issue, short of carefully checking for nil on every conversion.
You let it panic. This is a programming error, which is what panics are for. If you caught the error, the proper thing to do would be call panic manually.
>As for the for loop scopes, do you just declare new variables in the local scopes? Interestingly, I expected the "WRONG" behavior in 2 of the 3 for loop examples... and C# actually changed their behavior (in C#5?) from "WRONG" to "OK" with for-each loops.
This is subjective, and easily testable. Most early Go developers learn to test their understanding of "for loop scoping" after they implement an infinite loop that expects `defer f.Close()` to close the resource, but instead it builds up until the kernel kills the process for not releasing file descriptors.
---
I was recently surprised to find out that `type MyError error` and switching on types like `switch variable.(type)` can be a bit scary [0]. Basically you should always define your new error type from a string and never attempt to re-use the builtin error type. In retrospect it makes sense, but you'd naively think this is the proper way to create a new error type.
I don't consider the linters enough to defend against shadowing - on Kubernetes we find one or two err shadowing bugs a month (https://github.com/kubernetes/kubernetes/search?p=1&q=shadow... is just the ones for a trivial search) that can sometimes be quite serious (I fixed one last week that leaked scheduler workers - https://github.com/kubernetes/kubernetes/pull/32643). Being extremely strict on shadowing has helped some, but it also leads to code which looks non-idiomatic and is slightly worse to read and review.
Taking address of a loop variable has also come up several times, although linters have many fewer false positives there.
Maybe it's just me, but when I encounter some weird behavior in a language, I assume that I don't know enough of the language. It's not the designers' fault that these warts are there; it's my own knowledge that needs more work.
Tell me the language that you're using and I'll find multitude of warts and "landmines".
2 out of 3 listed in the article (scoping of loop variables and variable shadowing) are the same in pretty much every other language, for good reasons.
The semantics of nil interface are surprising on first encounter, but when you think about other possible semantics, it turns out it's the best solution. Or at least I haven't seen a proposal for how it could behave better.
Note that Go was invented to address a very specific socio-economic issue that Google was facing.
Go's simplicity is syntactic. The complexity is in semantics and runtime behavior. It is an economic trade off. A language with a 'complex' syntax, such as Rust, is intimidating to the novice and presents a problem of how to 'integrate' a newbie with an evolving codebase written by old hands.
It is true, imo, that ultimately, the initial cost haircut exacted by languages such as Rust/Scala/Haskell pays substantial dividends, but there is the 'socio-' aspect of the context for you!
Which is why most simple languages created as a reaction to complexity, end up with complex codebases full of workarounds to fill the lack of features.
In Go it is already visible with developers using error handling libraries and code generators.
> Which is why most simple languages created as a reaction to complexity, end up with complex codebases full of workarounds to fill the lack of features.
I agree with this. It is a ~zero-sum game. We note this phenomenon in concurrency and in system architecture (microservices) as well: deceptive up-front simplicity will exact recurring payments with compound interest :)
Frankly I think this says far more about our [field] (academic to industry) than any specific tool, or language.
We have a problem in this field and we haven't solved it. (My money is on machines writing most of the code in the future.)
> In Go it is already visible with developers using error handling libraries and code generators.
Agreed, but note that while I'm critical of the uncritical embrace of "solutions" that don't spell out the consequences on the proverbial tin (such as microservices), at least they get to float their boat. Consider that lacking this alternative a possibly substantial subset would be sitting on the beach dreaming of sailing :)
What you actually need to be careful to do is always return interfaces when possible because mixing interfaces and structs breaks shit.
You then also have to always return explicitly 'nil' when you mean nil. That's what GetACat messes up; if you want to return nil from a function that returns an interface, never return a struct you know to be nil, always return the literal value nil.
Why would you return an interface type? My advice is don't. Interfaces should be tightly scoped and defined by the consumer not provided by your package.
I think it's the other way around. In your package you define an interface and a struct/implementation. The contract to the user is the interface; the struct is merely an implementation detail and ideally shouldn't be public anyway, so that you are free to change it.
Then you provide functions like NewCat(...) which return instances of the interface, and which always return either a valid implementation (a non-nil type, non-nil value pair) or the literal nil. The user can then directly use this like var cat Cat = NewTabby().
If you instead returned a struct pointer and the user wrote var cat Cat = NewTabby(), you would trigger exactly this error. Instead, the user would need to check what the function returns before assigning it to an interface.
The issue you describe is not something you should design around. I have written/reviewed hundreds of thousands of lines of Go on projects large and small and have never encountered this problem. However, in practice there are very good reasons not to use interfaces as you suggest.
If you make an interface the contract with the user you will find it is now very hard to change. Everyone who had once implemented it no longer does. If I return a struct and consumers use tightly-focused interfaces for only the methods they depend on, I can continue to add methods or modify the struct as I see fit.
Think of all the engineers who will create special versions of your interface for testing. You add a new method and every single one of those needs to be updated, even if the overwhelming majority have no interest at all in the method you've added.
It's not clear which kind of nil interface you mean. :)
In Go, interface variables are a pair of values: a pointer to the concrete value stored in the variable, and a pointer to the type of that concrete value (i.e. the actual struct).
So there are essentially three kinds of nil in Go: a raw nil pointer, an interface variable with both nil value and type pointers, and an interface variable with a nil value pointer, but a non-nil type pointer.
One way that the two "interface nils" differ is that, to follow the article's example, if you have a Cat variable containing a nil *Tabby, you can invoke Meow on it, and it'll print meow. This works because Tabby's Meow doesn't try to dereference its pointer-to-Tabby. However, if you have a Cat variable that's completely empty, invoking Meow on it will fail.
What is the conceptual difference between a raw nil pointer, e.g. var t *Tabby = nil, and an interface with nil value and non-nil type? The raw pointer also has nil value and non-nil type.
The difference is that on a nil interface you can't call a method; it will panic. On a non-nil interface it will always call the method, whether a nil pointer is stored as the receiver or not. If a nil pointer is stored, it will call the method with it, and the method may still do something useful.
It can theoretically be used to implement some interfaces without any backing object at all (singletons, pure functions, ...), which is a difference from most other programming languages.
Sorry - the question is not immediately understandable. If you mean how 'var t *Tabby = nil' is different from the interface representation: for this one, no type information is stored at runtime. It's only a single pointer, whereas an interface at runtime is always represented by 2 fields, a value pointer and a type field.
I'll give it a shot. I'll assume you know what an interface and struct are in go.
Conceptually, an interface in Go can be thought of as two things: a concrete type, and a pointer to the implementing struct of that type.
For example, if you have `var err error = &os.PathError{}`, then the `err` variable is of type `error interface`. That `error interface` simply stores `{type = *os.PathError, value = &os.PathError{}}`.
The need for the interface to have the value and the type is obvious if you consider that interfaces may have methods called upon them (so they must have an implementation), but may also be used in type assertions / switches (so they must know their underlying type).
Go, however, insists that each type has a zero value so that it may be instantiated simply, and interfaces are no different. A user may type `var err error` and create a "zero-value" interface. What should that logically be? Well, it absolutely has no way to have an underlying concrete struct since go does not provide a way to specify a default implementation for an interface, and the type is unknown too.. so it makes sense that it takes the form `{type = nil, value = nil}`.
That's a nil interface.
A nil interface is useful in many cases. One common one is to indicate the lack of a return value, such as an error. `if err := f(); err != nil` is a common pattern, and takes advantage of the existence of a nil interface, since the function must have returned one (via something like `func f() error { return nil }`).
It's also useful to have a nil interface when instantiating something sometimes. For example:
var x myInterface
// x must be *something* here
if condition1 {
    x = func1()
} else {
    x = func2()
}
It's mostly useful for consistency with other parts of the language in that things have zero values.
I think the only alternative would be for an interface specification to include a "default implementation" to be used as its zero value, but that would not work if a package defines an interface of what it takes in (it would just define a default zero of a sentinel 'zero' value to error on), nor for the `interface{}` shenanigans.
Hopefully that helped, but if it didn't please ask a more specific question.
> In your hypothetical example in Go you would pick one of the set values (e.g. "1") as a "zero value".
> The alternative of not having a "zero value" is to have variables with undefined values, and we know from C/C++ that it's extremely bad idea.
Selecting a "zero value" that also happens to have actual meaning is insane.
The real alternative to not having a zero value is to require all variables to be initialized with some value. If you can't, then the variable should be an optional type wrapping the type you want. Pointers/references have something similar (you can have a valid pointer, or you can have null), but optional types are generally applicable to any type, and pulling a value out of an Option generally requires you to check for the null-equivalent or explicitly say "I know what I'm doing, assume it contains something" (unlike null pointers/references, which you can blindly dereference).
Returning an interface from `bytes.NewBuffer' would be a bit absurd. But returning an interface (`io.Reader') from `io.MultiReader' makes more sense than returning some sort of structure.
I think the author was getting at how one should always return explicit zero values _or_ explicit non-zero. E.g.,
For me channels and append() are the biggest language level landmines. Not so big are nils, :=, lack of panic when you go beyond last element of the slice, i.e. ([]int{0,1})[2:].
Libraries are a lot worse though, they are minefields.
Do you propose having another syntax for getting the zero-length slice just past the last element? The [2:] syntax giving you {} is a pretty natural progression from [0:] giving you {0, 1} and [1:] giving you {1} in your example.
As a Swift programmer Go programming seems riddled with a need to be perfect. I'd rather the language help me not make errors rather than assume I am perfectly aware of all the gotchas. Swift isn't perfect but it does make things less likely to confound you later.
Seems that the explicit lambda capture in C++ is a good way to avoid the first issue. Implicit capture of closure variables is kinda scary when you think about it.
If only one could go back in time and fix the weird overloading of nil for pointers and interfaces...it is the one breaking change to the Go1 language guarantee that seems worth it.
I don't think it's worth breaking compat, but I agree it would've been nice and saved a ton of confusion.
Right now a pointer to a type that is zero is called "nil". Similarly, an interface that contains zero for both the type and the value is called "nil".
This is really confusing because
var foo SomeType = nil
var bar SomeInterface = foo
fmt.Println(bar == nil) // prints false, which is confusing
If instead you called the zero-value of interface something else, say "unset", it would be a lot less confusing.
var foo SomeType = nil
var bar SomeInterface = foo
fmt.Println(bar == unset) // prints false, which makes sense, because bar is set -- to (nil, SomeType)
I first encountered this idea in this reddit thread:
In Go nil is a special identifier that represents a "zero value" for several types (pointers, interfaces, functions, slices, channels).
"unset" would be a "zero value" for interfaces so that nil could be "interface that has a type but whose value is an zero value of a pointer or function or slice or a channel.
This is inconsistent - now you have 2 identifiers to represent a concept of "zero value".
go has _three_ different concepts: what is now called nil has instances from two classes, let's call them nil.type and nil.interface.
The discussion is about whether it would be better to require programmers to explicitly specify the .type or .interface part in the code they write, or to let the compiler infer it from its arguments (with the special case that, in nil == nil, it should assume both have the same meaning, so that it evaluates to true).
I think I would favor having two separate names in my source code, with the caveat that the compiler should make it illegal to write x == nil.interface if x is a pointer type and vice versa. But I know too little about go to claim that's really a good idea, and my opinion might change if go got generics (or maybe the code-generation thing it has now in place of generics, go generate, would already change my opinion if I took time to think it over).
yeah, you've exactly identified the confusion -- assigning the zero-value of a pointer to an interface does not give you the zero-value of the interface.
using the same word to represent these two things makes this more confusing, because it violates our intuitive sense of algebraic laws (a = nil, b = a => b == nil); using different words for the zero-value of interface and pointer would break this false intuition (a = nil, b = a =/> b == unset), and save a lot of confusion.
This is because nil is not the same as SomeInterface. They are different types. In the above example, the correct code would be this:
fmt.Println(bar.(SomeType) == nil) // this will print true
You need to explicitly convert the interface to a specific type (as above) before you can compare whether or not it points to a concrete value, since SomeInterface could point to any type (by implementing all the members of SomeInterface, you can make any type be of SomeInterface). This is because variables that have an interface type are never nil.
I wouldn't call this a design issue, I believe this was done very purposefully and it makes perfect sense to me and other long time Go developers. The biggest problem I've seen with learning Go is for others coming from Ruby / Java / PHP / etc languages, who have this expectation of what "null" is (or nil as the case may be) and what "interfaces" are. The key to learning Go is to come at it with the thinking of C/C++.
Having been at Google when Go was announced internally, I remember being disappointed that a modern statically typed language is repeating Tony Hoare's Billion Dollar Mistake (tm). (I was also disappointed that C interoperability required recompiling your C code using kenc so that it used the Go/Plan9 calling convention. Maybe this has since been fixed.)
I would have expected at least something a maybe/option type (or another mechanism to make optionality and refrences orthogonal concepts), if not full algebraic data types. Allowing Nil everywhere forces lots of otherwise unnecessary Nil checks and introduces myriad opportunities for bugs.
Stepping back and assuming that for whatever reason an ML/Haskell/Swift/Rust/etc.-like maybe/option type is out of the question: when a programmer is comparing a passed interface instance to nil, they're almost certainly trying to answer the question "can I really call the defined interface's methods on this instance?".
Having done very little Go programming, my mental model is that interface pointers are pointers to a tuple of meta-information (including dynamic dispatch table) and a pointer to the concrete type. I can understand that both the meta-information and the pointer to the value are necessary when comparing interface pointers. However, I don't understand the utility in there being a distinction at a language or ABI level between an interface pointer to a null value and a null pointer to the interface <metadata, value> tuple. In what situations is the semantic distinction useful? There may be some domains where the semantic difference is important, but it seems to me that in those cases you're better off using a field of an enumerated type, for the sake of cleaning up the semantics of nil interfaces.
Though, coming from languages with algebraic data types, (and C++, where they had the good sense to make null references undefined behavior), cleaning up semantics of Nil in a language with Hoare's Billion Dollar Mistake is a bit like passing a public safety ordinance requiring fire extinguishers to be on hand at all gasoline fights... definitely helpful on the one hand, but seeming to miss an understanding of the root problem on the other.
Interfaces aren't pointers, they are a value type (it's a tuple as you describe).
The distinction for typed nil is useful because some receivers may work with a nil value. This whole thread enumerates the disadvantages pretty clearly, but I just wanted to point out that it _can_ be useful.
I do think compiler warnings would be helpful when this kind of thing happens implicitly, to prevent new language users from being confused for extended periods.