Go’s Type System Is An Embarrassment (functionwhatwhat.com)
109 points by mikeevans 1354 days ago | 109 comments



What's embarrassing is that people are still evaluating programming languages like they were bags of features. The more features you stuff in the bag, the better it must be!

The reality is that generics aren't free. They result in difficult-to-understand error messages and more cognitive complexity.

To quote Yossi Kreinin, "You may think it's a StringToStringMap, but only until the tools enlighten you - it's actually more of a... std::map<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >"

Of course, it would be nice to have generics, or something like them, for writing typesafe containers. But there are other goals in the language like fast compilation, easy-to-understand error messages, and overall simplicity. I hope they can come up with an implementation of generics that can keep those good properties of the language. But if they can't, it shouldn't be added just to get a feature bullet point. Use the right tool for the job.


> The reality is that generics aren't free. They result in difficult-to-understand error messages and more cognitive complexity.

> To quote Yossi Kreinin, "You may think it's a StringToStringMap, but only until the tools enlighten you - it's actually more of a... std::map<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >"

The Kreinin quote is in relation to C++. For historical reasons (reverse compatibility with C) and interoperability with other parts of the language (manual memory management), C++ types are way more complicated than types in a more recent language such as C#, where a Map<string,string> is really not much more complicated than you would think it would be. One might be able to argue that generics make things more complicated, but they certainly don't have to make them as complicated as C++ does.


You can't blame C++ generics on compatibility with C, because C doesn't have them. The culprit isn't manual memory management either.

The issue is just that the complexity of the type grows exponentially as you compose more generic types together. The string was templated on three things: the character type (usually char), the traits, and the allocator. Fair enough, but now the map is templated on the key type, the value type, the comparator, and the allocator. So that's 4 x 3 x 3 = 36 different parts in the StringToStringMap. Java has similar "Russian doll" types where you have many different layers of generics within generics. You need to read all of them to fully understand the behavior.

To be honest, overuse of generics is a code smell in both C++ and Java. One of the early warning signs that your code might have this problem is if you feel the urge to template on bool. I do think that generics may come to Go some day, but I think it will require a lot of thought, and it may not come in quite the same form as it did for Java and C++. No doubt there will be many angry posts about how Go "has" to do it the C++/Java way at that point.


> You can't blame C++ generics on compatibility with C, because C doesn't have them. The culprit isn't manual memory management either.

This is arguable. C doesn't have generics but if you look at the std::string example posted above as a "bad example", it indeed has some C backwards compatibility hacks.

In particular, std::allocator<...> is repeated many times. That is an attempt at making the string type (and other containers) at least partially compatible with manual memory management.

std::string is an overengineered and complicated solution compared to string types in other languages (and C heritage plays a part in it); it is not a good example of generics.

Posting that litany of C++ template code (which is essentially std::map<std::string, std::string> fully expanded, not what you'd actually write) is not a very good example of generics use.


> The issue is just that the complexity of the type grows exponentially as you compose more generic types together.

Haskell handles this just fine. So we can blame it on C++. (I don't know whether it has anything to do with memory management, or C compatibility.)


I don't know Haskell, but I do know OCaml, a strongly, statically typed language with type inference. OCaml does avoid a lot of the pitfalls of C++, but it shares the problem of difficult-to-decode error messages. Oh boy, does it ever.

Adding explicit type specifications helped a little bit. Adding parentheses around everything helped a little bit more. For some reason, the parentheses trick seems to be little-known online, but it's something I really had to do sometimes in OCaml. I once spent a half hour trying to figure out why a file was getting a parse error at the end. I eventually did a kind of binary search, commenting out half of the file at a time until I could narrow it down. It was a missing brace halfway through. The problem is that without the need to put all functions in the top-level, things just float in a sea of declarations, and if two things are next to one another, it's assumed to be a (curried) function application... and so your error message about a missing brace shows up on a line number far removed from the actual problem.

It's possible that with more development effort put into the language, OCaml and other functional programming languages would have better compiler error messages. It's also possible that there's just a fundamental difficulty trying to teach a profoundly stupid computer to give reasonable error messages about a super-clever system. Hopefully the popularity of functional languages like Scala and OCaml will give us a chance to find out the answer to this.


Clang has excellent error messages for C++ - it even spellchecks your code and can suggest the function you meant to type when you typo something. So this is almost certainly a case of development effort, and not something intrinsic to the type system.


Yeah, clang's error messages are excellent, especially given how difficult C++ is to parse, let alone compile. However, they are still not as good as Go's. And try doing the kind of automated source transforms that Go can do with "go fix." IDEs for C++ are still fairly limited due to things like conditional compilation, unit-by-unit compilation, macros, etc. etc. You can make a pig fly by shooting it out of a cannon, but ultimately it is still a pig.

I've been meaning to try the clang plugin for vim... context-sensitive autocomplete for C++ would be nice.


Oh, Haskell can have its share of weird error messages. But generics are not really a source of them.


Haskell handles polymorphic-code-by-default just fine, with no particularly complicated types.


Ok, then blame it on the STL. Its API gives you extreme flexibility, but at the cost of extreme complexity.

In a language like Java or C#, as the parent stated, a Map<String, String> is really more or less just that.

In "real" terms, I don't need a string of anything but char (say I only use utf-8, or perhaps I just don't care what std::string uses as its internal encoding and will expect there to be methods on it to give me encodings that I want), so std::string being a std::basic_string<char> underneath is useless to me. Kill that flexibility.

I don't care about the allocator used. Kill that flexibility.

Etc.

Ok, so then std::string is now just... std::string. Cool.

On to the map. The comparator? Not sure why that needs to be a template parameter (though I guess the type-safety is somewhat nice); why can't I construct and pass one into the constructor or a setter at runtime? Kill that from the template.

Again, with the allocators. I don't care. Kill them.

Ok, now the MapStringToString is really just std::map<std::string, std::string> and nothing more. Awesome.

Sure, you can't blame the complexity of C++ generics on C compat or manual memory management: it's the STL's very powerful API. A very powerful API that makes no effort to hide the complexity it exposes. Arguably generics themselves in C++ aren't to blame at all. And so it follows that generics in Go need not be complicated and painful either.


"I don't care" is not the same as "nobody cares", and especially not "nobody will ever care".

Why would somebody care? Well, C++ gets used a lot in embedded systems. There, you can have different storage pools for different specialized kinds of memory. But if Go isn't trying to solve those kinds of problems, then Go doesn't need the same machinery that C++ does. It doesn't make C++ stupid, it makes it designed for a different environment.

Also note that C++/STL templates default the allocators and comparators, so you don't need to supply them unless you really need them. That's pretty much what you're asking for... until you try to read a compiler or linker message.


Sorry, I wasn't clear in my overall message. The only point I was trying to make is that "generics error messages are hard to read in C++" is false: it's really that STL-related error messages are hard to read. If someone wanted to build a simple standard library that omits a lot of the complexity provided by the STL, they could do so, and the error messages would likely be readable.


I am not convinced that C++ examples are valid as a generalization of the failures of generics for other languages. C++ libraries are often overly generalized with means to "configure" the generics by passing additional type arguments. I do not think it has to be like that.


C++ templates are also latently typed. The bad error messages arise from the fact that C++ templates behave more like a duck-typed language than a strongly-typed language. Look at the error messages that arise from C++ template errors and a lot of them are about missing functions, just like you get at runtime when you have a type error in a dynamically-typed language. I think C++ concepts will improve this when they're standardized, because they will allow C++ compilers to type-check template arguments. (See bounded polymorphism.)


Some of it is also the fault of the compilers for being too explicit in their default error messages. You get that horrible output because string is really a typedef to "basic_string<char>" and basic_string takes two more template arguments with defaults. But why is the compiler resolving the typedef in the error message by default? At least let it be "std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >" instead of that inscrutable noise.

And if you really need to provide the information that "std::string" is "std::basic_string<char, std::char_traits<char>, std::allocator<char> >" then put it on a separate line in the output, once, instead of repeating it internally within every recursive template instantiation.


Maybe the typedefs have been resolved and discarded at the point in compilation where the error is reported?

So using it in the error message would require extra work.


So why do we pass that work from the compiler, the tool, to the programmer? Except that I think I know the reason, as I have looked upon the specification and it was not sane.


> What's embarrassing is that people are still evaluating programming languages like they were bags of features. The more features you stuff in the bag, the better it must be!

Well, in some cases your platform only supports one language, or maybe two. So it makes a lot of sense to then evaluate the language as a whole. Even then, I don't see much wrong with evaluating a language as a bag of features, as long as you don't simply consider the number of features but rather their quality.

> The reality is that generics aren't free.

No, but I've noticed that in the end, once you know how to use them properly and get used to them, they tend to become invaluable, even to the point that programmers coming from years of C++ background struggle in other languages that lack generics. Sure, right tool for the job. But in the case of generics I'd say they're much too broad to speak of a single tool, and they can be used for multiple jobs. So I'm with the author and feel Go is lacking on that point.

> To quote Yossi Kreinin

aka the guy who doesn't want to write std::map< std::string, std::string > :P


Please see the simple implementations of generics in OCaml and C# (added later).


My biggest beef with the Go type system is that they didn't get rid of nil. Tony Hoare (the guy who invented null) has acknowledged it was his "billion-dollar mistake" [1], common practice in Java and Haskell is moving away from it, and yet Go included it anyway - in a language that's supposed to be robust because you're supposed to handle every case. The Maybe (or @Nullable) type is a much better idea, because it flips the default from "any variable may be null" to "only variables explicitly declared as such may be null".

[1] http://www.infoq.com/presentations/Null-References-The-Billi...


It's exacerbated by nil being reused as a value for things that aren't even pointer types, like slices, channels, and especially interface "pairs" [1].

[1] (<NoType> . nil) vs (T . nil) is pretty weird: http://play.golang.org/p/fmk-72OYkO


There is no such thing as a Go variable with no type. They all have a type (could be interface{}) and a value (could be nil).

    package main
    
    import (
    	"fmt"
    
    	"github.com/davecgh/go-spew/spew"
    )
    
    func main() {
    	var x interface{}
    	var y chan (int)
    	spew.Dump(x)
    	spew.Dump(y)
    
    	var z interface{} = y
    	spew.Dump(z)
    
    	fmt.Println(x == z)
    	fmt.Println(y == z)
    }
    
    // Output:
    (interface {}) <nil>
    (chan int) <nil>
    (chan int) <nil>
    false
    true


Doesn't an interface value have both a static type and a dynamic type and in a case like x its dynamic type is "nothing"?

I didn't mean that this is necessarily bad, just that it seems weird for "nil" to be overloaded beyond pointers.


Yeah. From the Go spec:

"The static type (or just type) of a variable is the type defined by its declaration. Variables of interface type also have a distinct dynamic type, which is the actual type of the value stored in the variable at run time. The dynamic type may vary during execution but is always assignable to the static type of the interface variable. For non-interface types, the dynamic type is always the static type."


Hoare didn't invent the 0-valued pointer. It has been there since the beginning of time. Or at least the beginning of CPUs.

People talk about getting rid of nil like it's actually possible.

It's not.

If you have pointers, they sometimes have to start their life uninitialized (i.e. with a value of 0), hence nil pointers.

As you admit yourself, the proposed solutions don't actually get rid of anything. At best they can force you to handle the nil value by wrapping it in some wrapper.

Guess what - if you have that kind of discipline, you can do the same in C++.

Why aren't people doing that?

Because it comes at a cost. There's a cost to the wrapper and when it comes down to it, you still have to write the code to handle the nil pointer whether you're using a wrapper or a raw pointer.

It just doesn't buy you much.

Finally, fixing crashes caused by dereferencing a null pointer is downright trivial. Spend a few weeks chasing a memory corruption caused by multi-threading and you'll come to the conclusion that fixing null pointer crashes (which can be done just by looking at the crashing callstack most of the time) is not such a big deal after all.


> If you have pointers, they sometimes have to start their life uninitialized (i.e. with a value of 0), hence nil pointers.

Maybe at the machine level. But there's nothing that stops a programming language a few levels above machine from requiring that every pointer be initialized with a reference.

> As you admit yourself, the proposed solutions don't actually get rid of anything. At best they can force you to handle the nil value by wrapping it in some wrapper.

The entire point is that a large majority of APIs don't WANT to accept or handle NIL, but have to, because the default is to allow it. And in some languages, such as Java, the only way to extend the type system is with reference types, making it impossible to ever not have to handle NIL. By reversing this decision, it becomes possible to specify both allowance and disallowance of NIL-valued parameters. Why you would ever argue against such expression is beyond me.


> Maybe at the machine level. But there's nothing that stops a programming language a few levels above machine from requiring that every pointer be initialized with a reference.

What if the pointer is to a large structure (expensive to initialize), and I want a function that returns a pointer, which may fail to initialize?

Without a nil, developers would create structures that have a "valid" field. Nil just makes that more convenient, and the way Go does it is pretty good -- you can't cast it the same way you can in C/C++.
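
To put that in Go terms, here is a minimal sketch of the idiom (Config and LoadConfig are hypothetical names): a constructor that may fail returns a nil pointer plus an error, rather than a struct with a "valid" field.

    package main

    import (
        "fmt"
        "io/ioutil"
    )

    type Config struct{ raw []byte }

    // LoadConfig may fail; the nil pointer stands in for "no Config".
    func LoadConfig(path string) (*Config, error) {
        data, err := ioutil.ReadFile(path)
        if err != nil {
            return nil, err
        }
        return &Config{raw: data}, nil
    }

    func main() {
        cfg, err := LoadConfig("app.conf")
        if err != nil {
            fmt.Println("no config:", err)
            return
        }
        fmt.Println(len(cfg.raw))
    }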


If you want to be able to return "pointer or null", any decent language will let you do that, sure. But it's good to have the option of saying "this function will never return null".

Think of it this way: functions should document whether (and under what conditions) they return null, right? What if the compiler could check the accuracy of that documentation, so it would be an error to return null from a function that said it didn't return null, and a warning to document a function as possibly-returning-null when it never did? (And once you had that, surely you'd want a warning when you accessed a possibly-null thing without checking whether it was actually null?)


So basically you want the compiler to solve the halting problem for every function it compiles.


Ideally yes. But keeping track of nullability is not impossible, it's not even hard, as any number of existing languages with such checking prove (some of which have very performant compilers).


With a Maybe/Option type, you are forced to always deal with the possibility of Nothing/None.

And even better, you can use monadic bind to chain together several actions on possibly-nullable things and get either a value or a null at the end.
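
For what it's worth, you can hand-roll the chaining even in Go, just without the generality; a minimal sketch (MaybeInt and Bind are hypothetical, and concrete-typed since Go has no generics):

    package main

    import "fmt"

    // MaybeInt holds either a value or Nothing (ok == false).
    type MaybeInt struct {
        val int
        ok  bool
    }

    // Bind runs f only if a value is present; Nothing short-circuits.
    func (m MaybeInt) Bind(f func(int) MaybeInt) MaybeInt {
        if !m.ok {
            return MaybeInt{}
        }
        return f(m.val)
    }

    func main() {
        half := func(n int) MaybeInt {
            if n%2 != 0 {
                return MaybeInt{} // odd numbers have no half here
            }
            return MaybeInt{n / 2, true}
        }
        fmt.Println(MaybeInt{8, true}.Bind(half).Bind(half)) // {2 true}
        fmt.Println(MaybeInt{6, true}.Bind(half).Bind(half)) // {0 false}
    }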


An argument can be made that this merely masks the problem: it is certainly possible for a Haskell function to crash (and therefore not return anything), no matter its declared type:

    reallyfun xs = reallyfun xs ++ [ 1 ]

Note that this will OOM, not just run infinitely long. There are plenty of ways you can cause these sorts of issues. So you can't trust Haskell functions to always return their declared type either.

The million-dollar question: should your program be ready for this? (In the case of a database: ideally, yes, it should, and it's in fact possible to do just that.)

That's the problem with abstractions, like the "always-correct-or-null" pointers of Java: they're leaky. A type system, unless reduced to pointlessness, can't really be enforced fully. Haskell ignores failure modes, like memory and stack allocation, jumping to other parts of the program, reading the program, ... all of which can in fact fail.

Thinking about this gives one new appreciation for try ... except: (catching an unspecified exception). It's not necessarily worse than a pure function. Good luck defending that position to mathematicians though.


That misses the point, I think.

Of course functions can cause a program to crash, and there are all sorts of bad things that happen that cannot be caught at the language level; Haskell doesn't save you from memory corruption, for example.

But those things don't violate the language's guarantees about type correctness. A crashing program simply ceases to run; it's not like an Int-returning function can somehow crash and return a bogus Int value.

In this sense, Haskell is no different from languages like Java or Go. It's completely orthogonal to the null problem.


This is the mathematician's argument. It boils down to the unfairness of having to execute programs on real hardware with real constraints. Well, that doesn't work in the real world obviously. Especially memory allocations WILL fail, so, frankly, deal with it. Haskell makes this impossible, and therefore throws real-world correctness out of the window because it makes mathematical correctness so much messier.

Your assertion that this can't be caught at the language level is wrong: checking malloc'ed pointers for NULLness will do it. In Java, catch OOM exceptions. This error doesn't have to cause your programs to crash. Neither does an infinite loop ("easy" to catch in Java). Given more tools, you can write programs that are safe from some measure of memory corruption.

The real world is messy. Pretending it's not doesn't fix that, and nobody but mathematicians are surprised at all. End result is simple : your programs will crash after you've "proven" it can't crash. Running everything in "just-large-enough" VMs has massively exacerbated CPU and OOM error conditions, at least from where I'm sitting. I'd expect further cloud developments to make it worse.

So the type system only guarantees correctness if the following conditions hold, amongst other things:

1) infinite available memory (infinite in the sense that it is larger than any amount the program requests; Haskell provides zero guarantees that limit memory usage, so ...)

2) infinite available time for program execution (again, for the normal definition of infinite)

3) no infinite loops anywhere in your program (more problematic in Haskell because every Haskell tutorial starts with "look, lazy evaluation means infinite loops terminate in special case X", and of course it only works in special cases)

Note that this is only the list of "hard" failures. There are factors that can blow up the minimum execution time of your program (e.g. VM thrashing, stupid disk access patterns) that I'm not even considering. In practice, these "soft" failures, if bad enough, cause failure of the program as well.

And only then do we get to the conditions that people keep claiming are the only conditions for haskell to work:

4) no hardware malfunctions


... And all this has what to do with having non-nullable references be the default, with a wrapping Option or Maybe type for otherwise?


The point was that there is no real-world type system that can guarantee non-null pointers (not even Haskell's). You cannot do allocation reliably in the real world, and if your type system guarantees that, it is simply wrong.


You are arguing a useless point. Yes, in a real computer, memory can be randomly flipped by solar radiation and voilà, your pointer is now actually NIL. Or any of the other various failure modes you've mentioned. The point being that those are failure modes, not normal operations. Once a system reaches a failure mode, nothing can be guaranteed, not even that it adds 1 and 1 correctly, because who's to say that the instructions you think you're writing are being written, read, and processed correctly? The only solution is to down the box, reset the hardware, and hope that whatever happened wasn't permanent damage.

My point being, you cannot invoke catastrophic system failure as an argument against a compile-time type system, simply because that's an argument against any programming construct at all. Linked lists? But what if you can't allocate the next node... Better not to use them at all!


You're trying to change the topic. I'm not talking about unreliable hardware. The two main things I contend WILL happen to production Haskell programs are:

1) OOM (both kinds: stack and heap).

2) Functions taking longer than the time they effectively have (i.e. to prevent DoS conditions).

Both of these are guaranteed by the Haskell type system to never happen, and you will hit them in practice. (Guaranteed may be the wrong word; maybe "required" would be better.)

The C or C++ type system does not guarantee that heap allocation will succeed, and has deterministic results if it does fail, meaning you can react to that in a clean manner (and make sure it doesn't interfere with transactions, for example). With extra work you can guarantee the availability of stack space for critical methods too. Java guarantees that both stack and heap allocation failures will result in deterministic progress through your program.

These are not serious enough to merit being called "catastrophic system failure". They are not. Don't tell me you haven't hit both these conditions in the last month.

That's all I'm saying.


As others have pointed out, I'm not talking about nil as the zero-valued pointer at the hardware level. I'm talking about nil as the additional member of every type in the language's typesystem. A typesystem is an abstraction over the hardware, designed to catch programmer errors and facilitate communication among programmers. There's no reason it has to admit every possible value that the hardware can support.

And people are applying that sort of discipline - see the NullObject pattern in any OO language, or the @Nullable/@NotNull annotations in Java, or !Object types in Google Closure Compiler. The thing is, they have to apply it manually to every type, because the type system assumes the default is nullable. That makes it an inconvenience, which makes a number of programmers not bother.

I'll agree that null pointers are comparatively easy to track down compared to memory corruption caused by multi-threading. Threads are a broken programming model too, and if you want a sane life working with them you'll apply a higher-level abstraction like CSP, producer-consumer queues, SEDA, data-dependency graphs, or transactions on top of them.


> Threads are a broken programming model too

Oh come off it. Threads are in no sense 'broken' - compared to CSP or actors, they just give you a larger set of things you can write. Some of those things are bugs. Others are very useful. For example, a disruptor:

http://lmax-exchange.github.io/disruptor/

Nulls are broken because they let you write bugs, but don't let you write anything you couldn't write with options or whatever.


Really? How would you even encode Maybe or Option in Java, if you can't use null anywhere?

The problem is that Maybe doesn't work without Algebraic Data Types.


Oh come on, that's trivial:

  public interface Maybe<T> {
      boolean hasValue();
  }

  public final class Just<T> implements Maybe<T> {
      public final T value;
      public Just(T value) {
          this.value = value;
      }
      public boolean hasValue() { return true; }
  }

  public final class Nothing<T> implements Maybe<T> {
      public boolean hasValue() { return false; }
  }
You can even get rid of hasValue (but then you need to pay for instanceof each time); or of the Nothing class and make Maybe a class. You may ask what the value of Just.value is before the constructor runs - the value is a machine null and if you somehow manage to access it before the ctor runs, that's a NullPointerException; or what writing "Type varname;" in a function would do - that would be perfectly legal, but you won't be allowed to use it if the compiler can't prove you've initialised it first (which it does right now).


Problems:

1) How do you get at the value itself? (Casting? That's bad.)

2) How do you prevent Maybe<Integer> x; x == null?

3) How do you prevent someone from extending Maybe<T>? E.g.

    public final class LetsHaveFun<T> implements Maybe<T> {
      public boolean hasValue() { throw new RuntimeException("Can't touch this"); }
    }
4) (you need a null check in the constructor)

5) (I dislike the autoboxing this uses)


> 1) How do you get at the value itself? (Casting? That's bad.)

You could add the usual map method to the Maybe type - a Maybe<T> can take a Function<T, U>, and returns a Maybe<U>. If it's None, it doesn't call the function, and just returns None; if it's Some, it calls it with the value, and wraps the result in a new Some. If you want to do side-effects conditionally on whether the value is there, you just do them in the Function and return some placeholder value. You could write a trivial adaptor to take an Effect<T> and convert it to a Function<T, Void>, etc.

> 2) How do you prevent Maybe<Integer> x; x == null?

You're right that using a Maybe does not exclude the ability to use nulls. Nobody can deny that. The point I was making is that there is nothing useful that you can do with nulls that you cannot do with a Maybe instead.

> 3) How do you prevent someone from extending Maybe<T>? E.g.

You can trivially control extension by making Maybe an abstract class, giving it a private constructor, and making Some and None static inner classes of it. It's a kludge, but it works!


1) Yes, casting. You need to do casting in Haskell too, it's just hidden from you by pattern-matching.

2) It's for a hypothetical Java implementation that doesn't have null

3) If you really want to, make it an abstract class and do a check in the constructor that this.getClass() == Nothing.class or Just.class.

4) see 2)

5) well if you're using primitive types they are non-nullable already

Edit: to clarify, I don't expect someone to use it for Java today, it's what I would put in the standard library if Java didn't have a null in the first place.


> Threads are a broken programming model too

More specifically, threads with shared mutable state. If state is never simultaneously shared and mutable then a host of problems disappear.


And even more specifically - threads with locks. Shared mutable state is okay as long as any state-swapping operations are atomic, e.g. with STM, or if you build up the state in one thread, switch it with an atomic pointer swap, and don't let other threads mutate the state.
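
In Go terms, a minimal sketch of that pattern using atomic.Value from sync/atomic (the map-as-state is a stand-in; the point is that a published map is never mutated, only replaced):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    func main() {
        var state atomic.Value
        state.Store(map[string]int{"a": 1}) // publish the initial snapshot

        // Writer: build a fresh map off to the side, then publish it
        // in one atomic step.
        next := map[string]int{"a": 1, "b": 2}
        state.Store(next)

        // Readers: always see a complete, immutable snapshot.
        snapshot := state.Load().(map[string]int)
        fmt.Println(snapshot["b"]) // 2
    }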


Threads with locks are fine as long as the compiler enforces that you take the lock before mutating the data (e.g. Rust). :)


Annotalysis can do this for C++ as well. The problem is that you need to enforce an ordering on locks to avoid deadlocks. That works fine if they're all within one module. It doesn't work at all if you have to coordinate callbacks across threads from different third-party libraries.

The usual solution I've seen given for this is "Don't invoke callbacks when holding a lock." This is not a viable solution for most programs.


I'm surprised no one has mentioned Rust[1] yet. It's a systems programming language that has managed to get rid of null pointers by carefully controlling the ways in which a value can be constructed[2].

[1] http://www.rust-lang.org/

[2] https://github.com/mozilla/rust/wiki/Doc-language-FAQ#how-do...


I don't think Rust is such a shining example of a solid language in this area, at least not yet. For example: http://stackoverflow.com/a/20704252/626867


That has nothing to do with null pointers, and that was agreed to be changed today.


The SO issue you linked to has nothing to do with null pointers.


And IIRC the compiler translates usages of None to null, so any complaint about inefficiency probably does not apply in this case, either.


This is actually quite common in languages which support ADTs—you stick tags in the pointer to discriminate between subtypes of a union.


> If you have pointers, they sometimes have to start their life uninitialized (i.e. with a value of 0), hence nil pointers.

Right, and those pointers must then be declared as potentially null. Or more generally, potentially non-present values of any type need a type like Maybe T.

> As you admit yourself, the proposed solutions don't actually get rid of anything. At best they can force you to handle the nil value by wrapping it in some wrapper.

That's exactly the point: you can tell from a glance at any type whether it can be null or not, and you can only look at the value of something that's guaranteed to not be null; otherwise, you have to handle the null case first. Forcing that to happen at compile time via the type system is far better than dereferencing a null pointer at runtime.


> If you have pointers, they sometimes have to start their life uninitialized (i.e. with a value of 0), hence nil pointers.

Why do your variables have to start life uninitialized?

> Why aren't people doing that? Because it comes at a cost. [...]

Not a runtime cost, though. This can be handled purely by the typesystem at compile time.


> Guess what - if you have that kind of discipline, you can do the same in C++.

Well, ironically, C++ already has non-nullable pointers. They're called references and they work extremely well.


  int& toref(int* ptr) {
      return *ptr;
  }
  //snip
  int& i = toref(0);
  i = 5; // segfault
One thing I kind of appreciate at times about C++ (or at least about certain implementations) is the fact you can call methods on null pointers and you can check in the method if it's called on a null pointer. I.e.

  class A {
  public:
    int foo() {
        return this == nullptr ? 1 : 2;
    }
  };

  //....
  A* a = 0;
  std::cout << a->foo();
Which allows you to make classes that work just fine even if you use a null pointer.


> int& toref(int* ptr)

Yes, it's theoretically possible. The point is, you need to explicitly do evil things to get "null references". The language cannot and is not meant to protect you from yourself.

> One thing I kind of appreciate at times about C++ (or at least about certain implementations)

Yes, certain implementations indeed. It is formally undefined behavior, so anything at all could happen if you do that; the most straightforward and efficient implementation just happens to "work".


> The point is, you need to explicitly do evil things to get "null references". The language cannot and is not meant to protect you from yourself.

No, you don't need to do evil things at all. Any C API that returns a pointer which you then need to pass to a C++ function that takes a reference is a possible source of failure if you forget to check your pointer. It can happen quite easily by mistake. The language cannot in any way guarantee that you won't make a reference from a null pointer. Taking a reference as an argument does at least alert people that you're not expecting to be passed a null as an argument to your function.


A fair point. Still, only having to check for nulls on certain API boundaries is much less work than the alternative, and it's easy to train oneself to automatically spot "dangerous" points where a pointer-to-reference conversion is done.


In 97.5% of cases, the `nil` value of Go types follows the NullObject pattern.

It's really just nil pointers that are still equivalent to `null`, and their absence would be an abnormality, as there's no reason why a pointer couldn't have a nil representation.
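
Concretely, the zero values really do behave like null objects most of the time; a minimal sketch:

    package main

    import "fmt"

    func main() {
        var s []int          // nil slice
        var m map[string]int // nil map

        fmt.Println(len(s), len(m)) // 0 0: len works on both
        fmt.Println(m["missing"])   // 0: reads from a nil map are fine
        s = append(s, 1)            // append allocates as needed
        fmt.Println(s)              // [1]
    }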


Just because "NullObject pattern" ends with "pattern" doesn't mean that it's usually a good idea.


If one is to write robust programs it is useful to automatically enforce that there are no execution paths that set particular pointers to null.


This masterclass in clickbait titling pulled in 735 comments and counting on /r/programming. Watch and learn, bloggers!


It did something similar here before the outage.


Was it cached?


I don't think so. For the most part people were either curious about why Go did what it did (let you export the `NewHidden` while keeping `hidden` unexported: requires `hidden`s to be created using the specific constructor), or defending Go's choice in letting you export functions that return an unexported type.


What's wrong with leaking an un-exported element? Your code wants it to be exported. If you care about it, please return an interface type.


As I posted before the outage, I don't think leaking un-exported elements is actually that broken either -- Haskell gets by fine with it (you can omit the type and its constructor in the module export list and have types "leak" into the code importing said module). It can be a useful way to keep types opaque and not allow users to construct instances directly.

The actual issue, in my opinion, is the weird behavior of the reflect package which allegedly cannot reflect on the leaked element despite its members being publicly accessible anyway.


Author here. What I’ve written regarding the reflect behaviour isn’t entirely true and I should probably update the article to reflect that. It’s possible to get (but not set) the value of unexported fields as long as they are of a built-in type (e.g. int). What’s not possible is to get access to unexported fields as empty interfaces, which means you cannot type assert them to structs or other types. This seems like an arbitrary limitation (although I’m sure there is a reason for it) and somewhat limits the usefulness of the reflect package.
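
A minimal sketch of what I mean (hidden and count are made-up names):

    package main

    import (
        "fmt"
        "reflect"
    )

    type hidden struct{ count int }

    func main() {
        f := reflect.ValueOf(hidden{count: 42}).FieldByName("count")
        fmt.Println(f.Int())    // 42: the built-in value is readable
        fmt.Println(f.CanSet()) // false: it cannot be set
        // f.Interface() would panic ("cannot return value obtained from
        // unexported field or method"), so no type assertions either.
    }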


I love Go, but my personal gripe with the type system is that there is no way to pass type information around without using a zero-valued (or fully populated, it makes no difference) instance. For example, I have a factory struct with a New method that returns interface{MySignatures()}, but I cannot inform the factory of the concrete type I want it to produce without passing in something like new(ConcreteStructWithMySignatures), as sketched below. Some have suggested I pass in a string, but what is the point of a type system if I have to use strings to reason about my types?
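
To illustrate (Widget and New are hypothetical stand-ins for my factory), the zero-valued instance exists purely so the factory can recover the concrete type through reflection:

    package main

    import (
        "fmt"
        "reflect"
    )

    type Widget struct{ Name string }

    // New allocates a fresh instance of the prototype's concrete type;
    // the prototype's value is never used, only its type information.
    func New(proto interface{}) interface{} {
        t := reflect.TypeOf(proto)
        if t.Kind() == reflect.Ptr {
            t = t.Elem()
        }
        return reflect.New(t).Interface()
    }

    func main() {
        w := New(new(Widget)).(*Widget) // new(Widget) carries only type info
        w.Name = "built by factory"
        fmt.Println(w.Name)
    }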


Go is designed to be a small, simple language; it's not hard to realise that. If you want a fully featured type system, go program in OCaml, Haskell or another 'big' language.


That falls flat on its face as a cop-out when you look at a language like Nimrod, which manages to be practically as simple as Go while including full generics. Oh, and it's faster too.

http://nimrod-lang.org/tut2.html#generics


Nimrod is a gem. Just needs a big company to host release parties and sponsor it.

I am convinced a good part of Go's popularity is Google. As a language it is ok. Has good stuff in it. Quite often the argument for adoption is "Google is behind it, and it is becoming really popular". Which is not completely irrational -- it is nice to have a large company throwing perhaps years of quality full-time dev work at a language.


How come Google plays a big part in Go's popularity when the language itself is open-sourced and a fair number of its contributors aren't related to Google at all? If you had observed what's been happening around Go and Google since the language was released, you would have noticed that everything that happens inside Google is due to its creators' motivation to push the language into the company, rather than Google being "behind it". Dart, on the other hand, is backed by Google for good. Go? Except for Google App Engine, I don't see it.


> How come Google plays a big part in Go's popularity [...]?

Because it is the perception. It is enough for people to think Google is sponsoring Go to choose Go. They expect continued support and upgrades, not a dead, forgotten project 3 years down the road. Now, how much Google pays for and supports Go, I don't have figures for you, but in this case perception is what matters most.

Also, doesn't it pay full-time salaries for people who work on Go? Or do its primary creators work on Go on weekends and holidays, and/or are most of them not even affiliated with Google?


"They expect continued support and upgrades not a dead forgotten project 3 years down the road."

Go has been open source for the last 4 years or so and has its own separate community. There isn't such a possibility, either down or up the road.


Quite. The idea that Google is "behind" Go is mistaken. The situation is that some people inside Google, some of them quite well-known and influential, are behind Go. There are also people not inside Google who are behind Go, and people who are inside Google who aren't keen on Go.

Therefore, Go is certainly not in the same position as things like Chrome or Android, where the company as a whole has a stake in their success.


> The idea that Google is "behind" Go is mistaken.

Does Google pay full-time salaries and let primary Go creators work on Go during the day?

Does Google blog about implementing and deploying Go-based services?

Let's just put it this way, Google is a lot more behind Go than it is behind Nimrod.

Pretty sure there are developers at Google who like Nimrod, but there is a night-and-day difference between the association of Google with Go and of Google with Nimrod.


"Does Google pay full time salaries and lets primary Go creators work on Go during the day?"

Have you ever read any interview or watched any presentation about Go by Rob Pike, Andrew Gerrand, or any other Go/Google engineer? If so, you wouldn't say that so easily. Just browse through Google Jobs and tell me if you find any Go job.


Well they do support Go on App Engine.


They do support Go for Glassware development, too, but these are just exceptions. If you search for jobs at Google you are not going to find any vacancy requiring Go programming, while you can find such vacancies at companies like SoundCloud, CloudFlare, Songkick, or Reuters. Also, I would like to see Go natively used for Android development. Then I would probably reconsider the "behind it" statement.


The fact that Google blogs about rewriting some of its services in Go internally, and that it employs Go's primary creators full-time, is worth 100x more than the combined job boards of SoundCloud, CloudFlare, and Songkick.


These rewritings were executed by Go's creators and core developers, not by ordinary programmers.

e.g. http://talks.golang.org/2013/oscon-dl.slide#1


The point is that publishing it constitutes good advocacy and implies expected future support for Go. A lot of people, given the choice between learning multiple languages, will factor a company like Google supporting and using Go into their decision.

I am sure that if Google published on its blog that it rewrote some of its services in Nimrod, there would be a bump in popularity.


Of course it is implied that there is future support for Go, since there are serious people behind Go who are Google developers, but its current popularity comes from various implementations of the language and "rewritings" which are translated into blog posts, rather than from blog posts per se. I hope the difference I am trying to point out is obvious.


Quite true. I doubt Go would be as talked about here if it wasn't for Google.

Just check how far Go's ancestors (Alef/Limbo) went.


I guess this also means that Go wouldn't have the success it has if it wasn't built and used by a group of engineers who are used to the pitfalls of complexity and value tools that help you to get the job done and yet produce maintainable code.

This also means that it's not a good fit for everybody. It's certainly less exciting than most of the things that you find around; that's ok, that's the point. Less focus on the language, less focus on the magic and more focus on what you do with it.


The golden question is: would anyone care if the same engineers were doing this work at another company?


I believe they would, if the other company met these constraints:

* has a very large number of top-notch/rock-star engineers, and this fact is widely publicised

* has a history of producing products with very good quality (at least in the area under discussion, in this case reliability, performance and scalability)

* has been on the market and successful for a long period of time

* engineer opinion within the company carries weight and rules are not blindly dictated from above.

There are many other good companies, hiring extremely bright people. However, it's interesting to learn about what arises from the requirements of a company with the above listed characteristics.

I'm sure some people will just cargo cult everything googly, but there are also those who have a genuine interest and know when to apply those things if they make sense for them.


You just described Google in other words.


Or Sun Microsystems in 1996.


Or Facebook in a few years?


Both ML and Haskell are really tiny languages syntactically and have very simple core type systems.

Go has both crappy syntax (excusable for appealing to C programmers) and a pathetic type system.


Maybe Go is a small and simple language now because it's still missing some features that potentially make it a complicated language. :)


Are we complaining about generics again?


I see nothing wrong with that. I would like to keep on complaining till they listen! :-)


It's not that "they" aren't listening. There's a limited number of developers working on the language, and there are many other problems being solved.

So far, despite all the people lamenting the lack of generics in Go, I've yet to see someone propose a viable (e.g. backwards compatible, realistic to achieve with the current compilers and runtime, etc.) way to implement them, never mind someone who's produced a prototype.

Maybe that sort of thing will be easier once more of the toolchain is rewritten in Go later this year.

I don't think anyone is denying that generics make writing certain types of code easier, but it's worth getting them right since major language additions are something you need to support for a long time.


   So far, despite all the people lamenting the lack of generics in Go, I've 
   yet to see someone propose a viable (e.g. backwards compatible, realistic 
   to achieve with the current compilers and runtime, etc.) way to implement 
   them, never mind someone who's produced a prototype.
This seems to make it even more of an issue that they weren't included in the first place. Based on your statement, it's likely that when generics are finally added, they will initially cause the community some pain or they will feel tacked on.


> So far, despite all the people lamenting the lack of generics in Go, I've yet to see someone propose a viable (e.g. backwards compatible, realistic to achieve with the current compilers and runtime, etc.) way to implement them, never mind someone who's produced a prototype.

https://github.com/droundy/gotgo


It's kind of crazy to prioritize anything other than generics and exceptions. If Go never catches on, those will surely be why, and IMHO rightly so; it's just unreasonably hard to write correct code without them.


Type system power really is the only question I have left regarding Go. I've always seen Go being used for low-level, very technical stuff, where you don't need lots of abstraction, but rather powerful primitives and libs. I've always wondered how it scales to business process modeling, and the lack of generics always worried me.

On another note, 2013 was the year of Go being unanimously praised, and I've got the feeling 2014 is going to be the year of the backlash.

Still, I don't see any other languages able to take the crown back. Rust looks way too unpolished and has not just one but multiple pointer types.


Rust really doesn't target the same uses that Go targets. Go basically targets people who need a faster Python or a simpler Java. It doesn't have enough low-level control to be used for most of the things that C is still used for.

Rust targets the low-level uses where a high degree of control over memory management is necessary. One of its main goals is to make it possible to statically reason about memory ownership.

You would never try to implement something like a modern browser or a production-quality language runtime in Go. On the other hand, Rust is targeted directly at that kind of thing, and the multiple pointer types are absolutely necessary for those use cases.


> Rust looks way too unpolished and has not only one but multiple pointer types.

This is overstated. There is just one pointer type in the language, the unique pointer, and there are also references.

That said, the distinction between unique pointers and references is there for a reason. You need this power for low-level programming. A pervasive global GC is not appropriate for all applications.


Nothing I've seen on Hacker News would lead me to the conclusion that 2013 was the year of Go being unanimously praised. It gets flack every time a story shows up.


Are we trying to make Go into Java or JavaScript, as far as static versus dynamic typing goes?



