Maybe adding generics to Go is about syntax after all (cheney.net)
114 points by UkiahSmith on Sept 4, 2018 | 87 comments



It's been three years since I started writing Go professionally, and I think adding generics to Go is one of the worst ideas I've ever encountered. I love Haskell, and I much prefer Hindley-Milner type systems, type classes, and real sum types. I see the value of generics where appropriate. Golang, however, made the tradeoff of sacrificing anything near that level of abstraction, and its success might largely be attributed to the approachability that resulted from that tradeoff. Retrofitting generics onto the language will at best be a death sentence for the principles and ergonomics of the language. One thing that has become clear to me as I have spent time with Erlang, Haskell, and Go is that it's much better to use the right tool for the job than to manipulate one tool to fit every use case, which is precisely what adding generics is all about.


I've also been writing a lot of Go professionally for a long time, and I've used every part of the language extensively, and I disagree.

There have been many times where I've had to implement some sort of application-specific data structure that's not built into the language, and generics would have made it a lot nicer. I made it generic by using interfaces, but then you have the problem of throwing away type safety, so to preserve it, you need wrappers which cast.

It's not a given that adding generics would ruin the ergonomics of the language. I hope that whatever form generics do take in the end, they remain simple. If I see template metaprogramming in Go of the sort that occurs in C++, I think I'll scream, but that metaprogramming was to work around limitations in C++'s type system which Go doesn't share.


What do you mean by nicer? I would imagine that something that is application specific would not require generics.

Interfaces do not throw away type safety if they are used correctly. They are also better suited for the consumer of the type rather than the definer. One of your problems might be that you are prematurely abstracting your interfaces for your callers.


interface{} definitely does throw away type safety, however, and is reasonably necessary for some library use cases. Is it possible that you're coming at this from an angle of building applications for specific purposes, while oppositelock is building more libraries?

My experience (admittedly limited) is that application writing in Go is nice, but as soon as I try to build a library for use in several similar (but not quite identical) cases, maintaining type safety can quickly become a hassle.


Let's say that I have an interface called Sortable, so I design my data structure to accept and return Sortables. I can insert any Sortable into it. When I get the Sortable in a future lookup, I don't know its type. I have to use reflection to error check.

With generics, I could create a generic data structure which is strongly typed - what goes in is what comes out.

Right now, the way to ensure that is to wrap your data structure in something that converts those Sortables into specific classes, say PatientRecords.
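
A minimal sketch of that wrapper pattern (Tree, Sortable, and PatientRecord are all hypothetical names here):

    type Sortable interface {
        Less(other Sortable) bool
    }

    type PatientRecord struct{ /* ... */ }

    func (r PatientRecord) Less(o Sortable) bool { /* ... */ return false }

    // Generic-via-interface container: lookups come back as Sortable.
    type Tree struct{ /* ... */ }

    func (t *Tree) Insert(s Sortable) { /* ... */ }
    func (t *Tree) Min() Sortable     { /* ... */ return nil }

    // Type-safe wrapper: casts at the boundary so callers only ever
    // see PatientRecord.
    type PatientTree struct{ t Tree }

    func (p *PatientTree) Insert(r PatientRecord) { p.t.Insert(r) }
    func (p *PatientTree) Min() PatientRecord     { return p.t.Min().(PatientRecord) }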

Interfaces are wonderful, I like working in Go more than any other language so far, but they have their limitations.


I loved Go from 1.3 onwards, and it still has a special place in my heart. But ultimately a failed attempt at writing a heavy numeric/scientific app in Go soured me on the lack of generics. The overhead of needing to re-implement functions in order to avoid casts was way too much to bear in one app, and projects such as gonum suffer either from an over-reliance on interfaces or from the need to constantly re-implement functions to deal with the change in type signature between int, float, and their associated sizes.
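
The flavor of duplication involved, as a hypothetical sketch (the bodies are identical; only the types differ):

    func SumFloat64(xs []float64) (s float64) {
        for _, x := range xs {
            s += x
        }
        return
    }

    func SumFloat32(xs []float32) (s float32) {
        for _, x := range xs {
            s += x
        }
        return
    }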


This is an example of why I say use the right tool for the job. Go is not the best language for scientific computing. Depending on the math involved you should likely be looking at Fortran or Python, which will most likely use Fortran or C under the hood. Go is best suited for small applications or microservices, or as glue code between services.



i haven’t written a whole lot of go, but from what i’ve done and what i see coming in go 2, i don’t really want to have to write a lot more. go felt like it was modern-vintage to start with: memory management, but also values/pointers, with a heavy emphasis on multi-return error handling instead of generics (Either), nil instead of Option, and mutability by default

and the future seems to be more weirdness. magic methods like handle. list comprehensions still uncertain. contracts sound like a backwards-compatible break and a duplicate of existing functionality.

i used go for its libraries and low memory footprint for my small projects. but i bet there are simpler routes out there


Major props to Dave for realizing that his previous position (that adding generics had nothing to do with syntax) was mistaken, and for admitting so publicly.


This might be nice to use in specific cases. However, it's important to note that in the general case, you need a type variable because it is used in multiple positions. In the function below (taken from the proposal), one is stating that T is the same type in all usages.

    func Join(type T strseq)(a []T, sep T) (ret T)
The version below is different: each variable could be a different type (each independently satisfying strseq).

    func Join(a []strseq, sep strseq) (ret strseq)
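
To make the difference concrete (a sketch; MyString and MyBytes are hypothetical types satisfying strseq, and the explicit instantiation syntax follows the draft):

    // Generic version: T is bound once per call, so the elements of a
    // and sep must be the same concrete type.
    Join(MyString)([]MyString{"a", "b"}, MyString(","))

    // Interface version: each position independently satisfies strseq,
    // so this mixed call compiles even though it is probably a mistake.
    Join([]strseq{MyString("a"), MyBytes("b")}, MyBytes(","))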


As I understand it, the type would be implicitly applied to every func/struct in scope (which I assume would be the package).

It would work quite well for a package like a Map, or any data structure/algorithm. I suspect it would be less ideal for filter/map/reduce.


I think the syntax is essential to the problem. A few days ago I wrote a quite similar post about the difference between Generics and Interfaces: https://gist.github.com/arendtio/77dd4df5f4b19dc69da35064843...


Why not adopt standard ML's polymorphism semantics? Is there some type theory issue I don't know about preventing that?


I wish the world adopted ML's modules and functors. Especially higher-order functors, which are kind of an extension of the standard.

But I think Go is more worried about compiler performance, so if there are significant complexities in the surface syntax of generics or in their downstream consequences, it will be an impediment there. I think the people behind Go consider most type theory to be an impediment to productivity and would rather accept something simple with shortcomings they understand than try to do it "right." I think this is a perfect example of Unix's "worse is better" philosophy. Whether we agree with it or not is another question.


I don't know much about Go. Does it have subtyping? Mixing polymorphism and subtyping isn't obvious. Stephen Dolan's thesis on algebraic subtyping [0] is the first really satisfactory answer, all previous ones having significant drawbacks, but it's only 2 years old, untested in a production language.

Considering how conservative, bordering on reactionary, the philosophy of Go seems, I think it's politically untenable.

[0] PDF: https://www.cl.cam.ac.uk/~sd601/thesis.pdf


Go does not have subtyping. Its type system is sufficiently rudimentary that steering it towards ML would just be too extreme at this point.


On the topic of "syntax is important" (though not related to generics -- other than by an aside on the Result type): I'm still not a huge fan of the handle syntax. I reckon something more similar to Rust's .map_err() would serve better, because lots of people now use pkg/errors[1], which has the boilerplate of

    if err != nil {
        return errors.Wrap(err, "some message that is unique to this line")
    }
and the handle construct doesn't really handle (excuse the pun) this properly: you have to re-define handle each time, or have a local variable, or something similar. Which means that the new construct won't actually be massively helpful unless the only thing you want to do is errors.WithStack or just bubble the error back up.
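
Under the draft design that looks roughly like this (parse and Config are hypothetical; errors.WithStack is from pkg/errors):

    func load(path string) (cfg *Config, err error) {
        // One handler covers every check below -- fine for adding a
        // stack trace, but there is no per-call-site message.
        handle err { return nil, errors.WithStack(err) }
        data := check ioutil.ReadFile(path)
        cfg = check parse(data)
        return cfg, nil
    }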

But I do get why they couldn't get .map_err() -- because that's something you can only really do if you have a Result type.

[1]: https://github.com/pkg/errors


Yeah. I always try to include some context with my errors, so the only actual use for check/handle I have is

  handle err { log.Fatal(err) }
in quick "scripts". And even there I should probably provide more context. All clean-up work is already done in defer.

Overall, check/handle feels like a poorly-thought-out slap-on to me, on par with ON-units from PL/I[1]. And PL/I is not something you want as an inspiration.

[1] https://en.wikipedia.org/wiki/PL/I#ON-units_and_exception_ha...


Right -- people already use defer as a way of mapping errors and doing cleanup in an already awful way:

    func something() (Err error) {
        defer func() {
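            // Err is the named return value, so this deferred func can
            // inspect and rewrite the error after the return statement runs.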
            if Err != nil {
                someCleanup()
                Err = fmt.Errorf("wrapping: %v", Err)
            }
        }()
        return blah()
    }
And unlike handle you can actually give an arbitrary mapping function, while as far as I can tell you'd need to do something like

    handle err { return mapError(err) }
Rather than the more ergonomic

    handle mapError
Or

    handle func(err Error) { return mapError(err) }
But whatever -- all of this is pretty useless. The main benefit of check in my mind is actually that you would be able to easily bump your test coverage -- because Go test coverage is based on lines, and so the repeated use of

    if err != nil {
        return err
    }
artificially dilutes your test coverage (you don't need to test that every error is propagated -- that just doesn't make any sense, and it might not even be possible if you are returning directly from os.* functions that don't have fault injection).


I enjoy the Go authors' attempts to propose novel or innovative solutions to existing problems, without going straight towards the status quo, but I do wonder if sometimes they're trying too hard.

At risk of cargo culting or not knowing enough about the problem space, I can only think of what I've recently read from The Psychology of Computer Programming[0]. The entire reason that generics and error handling are difficult is the decision to avoid a syntax table.

This to me feels like an artificial constraint because the Go authors are optimising for compiler simplicity and not developer ergonomics. So the developer has to parse a whole lot more because the compiler guys don't want to take a hit. The reason why generics are such a pain in the ass, and errors are such a pain in the ass, is because the emotional attachment to how Go currently runs and what it stands for is still far more powerful than any attempt to innovate on the language's own status quo.

So what Cheney says makes some sense, in that in a lot of ways it feels more intuitive than implementing language grammar that isn't executed but looks like real code. But at the same time, the whole debate is basically about how to change Go without changing it.

Either do it or don't.

[0]: https://leanpub.com/thepsychologyofcomputerprogramming


As someone who just went to GopherCon, I'm still confused why they're tackling error handling first, and then generics. I would imagine tackling errors in Go 2.1 with the help of generics would make for a much, much cleaner solution... if we even need one (I'm not fully convinced verbose error checking is as bad as people say it is).


(I agree that waiting until generics were ironed out would have been a better idea, because we already have pretty large communities around existing error-handling libraries. Unfortunately this is a pattern with the Go language designers -- they have often ignored community consensus around an issue, as they did with vendor/ and other similar issues.)

My main issue with it (aside from making memes about RSI) is that it actually dilutes your test coverage artificially. 'go tool cover' instruments line-by-line (technically I think it's statement-by-statement but that's not relevant here) and so having a dummy line like

    if err != nil {
        return err
    }
where you aren't doing anything useful with the error decreases your test coverage, even though the line is not only obviously correct but might also be impossible to test (some APIs have an error return even though they can never fail in most use cases -- and as a user of the library it's better to be safe and check the error anyway). A perfect example of this is Read() from math/rand -- it is impossible for it to return an error, and yet you definitely should check the return value from Read(); it would be irresponsible not to.
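
A minimal sketch of that situation (rand here is math/rand):

    buf := make([]byte, 8)
    // rand.Read is documented to always return len(buf) and a nil error,
    // but a responsible caller checks anyway -- and this branch then
    // counts against line coverage despite being unreachable.
    if _, err := rand.Read(buf); err != nil {
        return err
    }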


The problem you describe is not specific to Go. You should always test all error paths in production code. Using mocks or error injection can help with triggering otherwise “impossible” cases.


That is definitely one argument (and I do see where you're coming from -- especially if your error path has a defer that does cleanup or something complicated like that). However, I don't think spending a significant amount of time mocking out every struct method that has an error return is a worthwhile investment. The majority of 'if err != nil { return err }' cases are not going to be interesting, and the ones that are usually don't even need mocking to test, because they are significant enough error cases that they are easy to trigger using the real version of whatever struct. And of course you should test error paths to make sure that your code does validate things.


C/C++ __LINE__ is unique to a line.


No, it isn’t, not even within a single file. https://gcc.gnu.org/onlinedocs/cpp/Line-Control.html:

”If you write a program which generates source code, such as the bison parser generator, you may want to adjust the preprocessor’s notion of the current file name and line number by hand.

[…]

#line linenum

linenum is a non-negative decimal integer constant. It specifies the line number which should be reported for the following line of input. Subsequent lines are counted from linenum.

#line linenum filename

linenum is the same as for the first form, and has the same effect. In addition, filename is a string constant. The following line and all subsequent lines are reported to come from the file it specifies, until something else happens to change that. filename is interpreted according to the normal rules for a string constant: backslash escapes are interpreted. This is different from ‘#include’.”


I'm not sure you understood what I meant -- the "unique to a line" message is a descriptive message, not a line number. errors.WithStack gives you the file, line, and function name for free and works with handle -- my complaint is that giving descriptive, stacked error messages will be very hard with handle (but is not fundamentally hard with Rust's map_err, even though their error packages aren't as well-developed as Go's in my experience).


Why not reuse Common Lisp macros?


This is not a very useful thing to say. Macros have nothing to do with generics.


...except that both can be applied to solve the same problems.

And so they are in fact very related.

For instance, with a macro you could write (made-up syntax):

    SortIntSlice = mymacros.MakeSorter!(int)

...and then at compile time have code for sorting a slice of ints generated.

This is again similar to C++ templates (the main difference is that C++ will instantiate implicitly instead of explicitly).

And C++ doesn't need generics because people use templates instead.

They really are in the same problem domain.


Using macros to implement generics has the same problem that C++ templates have, which is described in the Go 2 generics proposal: it's difficult to provide clear error messages for developers in many cases. Functions parameterized on a single type are the easy problem to solve. Where things get thorny is when you parameterize on multiple types with various constraints and/or have some kind of interaction in the function body. The goal of the Go 2 generics contracts (and C++ concepts, Haskell typeclasses, Rust traits, etc.) is to make those expectations explicit.
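
For instance, roughly as in the 2018 contracts draft, the contract body spells out what a function may do with its type parameter, so the compiler can point errors at the contract rather than deep inside an instantiation:

    contract stringer(x T) {
        var s string = x.String()
    }

    func Stringify(type T stringer)(s []T) (ret []string) {
        for _, v := range s {
            ret = append(ret, v.String())
        }
        return ret
    }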


Macros and C++ templates are also incompatible with separate compilation which leads to increased compile times and binary bloat.


Why not just use Lisp syntax? Why do we have to reinvent syntax every single generation? I'm not an old-school engineer (I'm in my early 20s and in my first job, freshly outta college) but I can't see why we don't use Lisp, Prolog, or ML syntax for everything. They seem vastly better than C-like syntaxes in all ways I can think of: easier to implement, easier to extend, easier to read (imho), etc... When I program even in a low-caliber Lisp like elisp (which I do routinely since I use Emacs) it feels like syntax gets in my way less frequently compared to Python, C, JavaScript, etc... I just wanna think about my program, and not the syntax. If Lisp is not expressive enough, why not just ML? Why do we have to reinvent it so many times when syntax is not an interesting or relevant problem (e.g. Rust syntax)?


I tinkered a bit (as many others have too!) with a Lisp syntax for C, just to see what it'd feel like. That is, exact C semantics (so not inventing anything), but s-expressions.

What I remember struggling with is that it was surprisingly hard to remember where parens are necessary. For example, you might define an if statement:

    (if (> x 3) (printf ...))

Looks nice, but what about if you want multiple things in the body? Your options are

    (if (> x 2) (progn (printf a) (return b)))

Where "progn" means "curly braces" effectively, or instead always requiring a list-shaped argument

    (if (> x 2) ((printf a) (return b)))

which means in even the single-expression case you have to remember to always double-wrap

    (if (> x 2) ((printf a)))

And now throw in handling of 'else' into the above. It gets surprisingly hard to use surprisingly fast. This problem repeats everywhere (function definitions, type declarations, for loops, and so on).

What all the above really clarified for me is that sexps work great in languages like lisp/scheme that are designed around them, but they don't solve syntax for free.


  (when (> x 3)
    (printf a)
    (return b))

  (cond
    ((> x 3) (printf a)
             (return b))
    ((< x 2) (printf b)
             (return a))
    (t (return c)))
Nobody who works with Lisps thinks in terms of "do I need double parentheses". You just know that if takes two or three expressions, when takes an expression followed by zero or more forms, and cond takes a sequence of zero or more clauses, each consisting of a test followed by a body of zero or more forms.

If you're designing an S-exp syntax for something else, it's probably best to keep the familiar things the same; don't make some different if and such.

The Lisp if maps naturally to the ?: ternary operator; cond, when and unless can compile to cascaded if/else if/else.


Yes, but you are describing how a Lisp programmer would expect things to work. As my comment was saying, I was exploring whether a verbatim translation of C syntax works as sexps, and where it fails.


See the Common Lisp IF* macro:

  (if* (> 3 2) then (foo) (bar) else (baz))
https://franz.com/~jkf/ifstar.txt


Or "if" could be a special form so that you can just write (if (> x 3) (printf a) (printf b))


WHEN takes an implicit PROGN like that (so does UNLESS). IF can't because it needs to delimit the then-form from the else-form somehow.

http://clhs.lisp.se/Body/m_when_.htm


Because not everyone agrees (thankfully!) that syntax is uninteresting or irrelevant, or that Lisp is some amazing global aesthetic or ergonomic maximum.


Why thankfully?


Because syntax actually does matter, and Lisp is only a local maximum. Thinking otherwise causes pain, as people try to use Lisp in situations where it is not the best answer, and is therefore harder to use than other languages.

So, "thankfully", because acknowledging reality here means not having to live with the pain of using a language that is sub-optimal for the problem you're working on.


This is overstating the case for syntax. Without any empirical evidence that syntax is important to anyone other than folks that draft up syntax rules, this is just an assertion.

Put another way, do you have evidence that it is important? Has it let people write better programs? How would you prove that?

This is akin to saying grammar is important. It is, to an extent; but over adherence to grammar is usually not an effective way to communicate.


Well, empirically, syntax is important to a large number of programmers who complain about Lisp's (lack of) syntax. That's not a rigorous empirical study, but it's the best I can do at the moment...


I assert most programmers have not been exposed to Lisp. I further assert that most who complain of its lack of syntax have not used it in any capacity.

I would be interested in studies on this. Most I ever see is posturing and assertions with little to no evidence. Frustrating, because I think both sides have valid points. And I fully believe there is a learning curve both ways. I suspect they both converge on total ability. I am genuinely interested in ideas on how that could be explored.


I think it's going to be really hard to do the study in a rigorous way. Solid data would be nice, though. Then we could stop guessing.

My best try at a data point is that Lisp was #4 on the TIOBE index back in the 1980s. Now it's on the fringe. I assert that people had been exposed to Lisp, it was widely used. It became less and less popular. It's not that industry didn't know Lisp; they knew it and turned away from it.

Of course, the big flaw in my argument is the growth of the industry. A huge number of new people came in. Did they know Lisp, even at a college level? Maybe, maybe not.

My counter-argument to that is that industry knew Lisp, even if the new programmers didn't. If industry wanted them to learn Lisp, it could have made knowing it a job requirement, or trained people.


> My best try at a data point is that Lisp was #4 on the TIOBE index back in the 1980s.

Unlikely, given that TIOBE the company wasn't founded until 2000, and the TIOBE index is based on web search engine results, and there weren't any web search engines, or a web for them to search, in the 1980s.


https://www.tiobe.com/tiobe-index/ has historical data going back to 1988. (Presumably they used a different methodology for the older data.)

But I remembered wrong. They list Lisp as #2 in 1988.


In the realm of personal computing, Lisp was nothing in 1988. "Nobody" was doing any Lisp programming for affordable consumer hardware. Lisp had almost no mindshare among people who got started in computing with 8 bit microcomputers in the late 1970's and 80's and who moved on to PC's and Macs. (Speaking of Macs, there was maybe the OpenMCL exception there.) Lisp was something that you heard about that ran on "big iron" owned by exclusive institutions. Lisp machines count as that, basically.


Macintosh Common Lisp was mildly popular on Macs. Apple used it themselves in several projects and eventually bought it, hired the team and sold it into their developer channel.

Another interesting offer was MacScheme.

LeLisp had a Mac version and it was used to implement an interface builder, which was demoed to Steve Jobs, who then hired the developer to create a version for NeXT.

Both Symbolics and TI delivered Lisp Machine boards for the Mac II.

Procyon developed a Common Lisp with UI for both Macs and PCs.

ExperCommon Lisp was another Lisp for Macs.

XLisp had Mac version.

On the PC there were numerous Lisp implementations.


Agreed it would be really hard. I do want to underscore I'm interested in the question. And I don't know which way I expect it to go.

My personal hunch is that Lisp was just too expensive when most of the industry took off, in nearly every way. C compilers being basically free is a ridiculous advantage. Yes, there are free Lisp implementations today. However, early-mover advantage is working against them now.


I'd agree that Lisp was too expensive when things took off. Not just in terms of compilers, but also in terms of hardware requirements. C was a better match for the tight memory of the day.

My personal hunch about the experiment (based on nothing more than my intuition) is that people think differently, and that Lisp's syntax is a good match for how some people think, but not for how the majority of programmers think. But as I said, that's just my current intuition. My intuition a year ago, for example, was different...


Maybe because they don't understand it.

Lisp has syntax. More than any other programming language.

What you think Lisp syntax is, are really simple function calls like

    (+ 1 2)
which are based on the function + various args pattern.

But something like Common Lisp has a few dozen syntactic built-in forms. For example, the syntax for LET in EBNF (Extended Backus-Naur Form):

     let ({var | (var [init-form])}*) declaration* form*
Then Lisp has macros. Zillions. Each of them implements syntax.

Lisp has a two-stage syntax:

Stage 1 is s-expressions, a data format: symbols, numbers, lists, strings, ...

Stage 2 is Lisp forms. They are written as s-expressions, but they have to follow a defined syntax; not every s-expression is a valid Lisp form.

  (let ((a 1) (b 2))
    (declare (integer a b))
    (+ a b))
Above is a valid LET form.

  (let ((a 1) (b 2))
    (+ a b)
    (declare (integer a b)))
Above is not a valid LET form, because the syntax requires that the optional declaration is directly after the binding form.

Without parentheses it might look like:

  let (a 1) (b 2)
    declare integer a b
    a + b
You would still think that it has a syntactic structure. Just because Lisp uses s-expressions as a base mechanism does not mean Lisp has no syntactic structure -- it is just built on top of s-expressions.

Since Lisp has user-programmable syntax, there is basically an unlimited amount of all kinds of syntactical constructs.


Another helpful comparison is Lisp syntax versus Forth syntax. If you actually want a simple, terse, unstructured language, Forth is the way to do it: Words will consume and return any number of inputs or outputs to the stack, and you get entire classes of error when it's unbalanced that would surface as syntax errors anywhere else.

Just adding s-expressions is actually a huge jump up in syntactic complexity and structure, since now you can define a whole hierarchy statically, "on the page", instead of having to use a series of stack operations to manipulate memory from afar. And then as you lucidly put it, going from s-expressions to a practical Lisp dialect adds another level of context on top of that.

(And yeah, it's possible to bolt structure onto a Forth dialect, too, but the "Chuck Moore way" is that you do that custom every time you need it, instead of trying to generalize. Generalizing a concatenative structure leads to something like Factor.)


I really should adopt this view in my future posts.

I think the difference is that most people, myself included, look at syntax as basically where the capital letter and the sentence-ending punctuation go, which is really the only mostly-constant thing across sentences. You raise a vital point that that is a small part of the syntactic makeup of a sentence.

Which is funny, because so many people are convinced you get no static help in lisp. Which is demonstrably wrong.


Sure, but what's not known is whether that's primarily because the majority of programmers learn to program on languages inspired by C syntax.


The ""market"" (including me) overwhelmingly prefers Algol-derived languages. The different types of bracket used in different ways seem to be a benefit. Very few empirical studies of this have been done, and there's also the problem that "easy to read (for newbies)" and "easiest to read (for experienced people familiar with C syntax)" can be completely different.


The average programmer seems to like different symbols for what they consider different things. I don't know if it is better or not, but it seems to be the status quo...


But if that's really the case, then why hasn't Perl's way of denoting the basic kind of variable with a leading symbol been copied in other languages?

My guess is because most languages don't do that, so people are used to variables not having a symbol denoting their basic type. It largely comes down to what sort of syntax people are accustomed to.


It was copied: PHP, Ruby, PowerShell...


If Go gets generics, maybe they’ll actually add an ‘implements’ keyword for interfaces and no more of this:

  var _ SomeIface = SomeConcrete{}
  var _ AnotherIface = SomeConcrete{}
Just to see if you actually implemented all the signatures on the interface correctly.


Actually one of the main benefits of Go over Java is the lack of a requirement for "implements" because otherwise it means that you cannot use a type in a context where the original developer did not intend it to be used (though I imagine you're implying that it would not be required).

However quite a few Go developers now use interfaces with an unexported method specifically to stop users from misusing their types in a context they didn't intend. I guess it's come full circle.


> However quite a few Go developers now use interfaces with an unexported method specifically to stop users from misusing their types in a context they didn't intend. I guess it's come full circle.

What a horrible hack. Why bother with that? Either export your interface, or don't.


Well -- what if you have multiple implementations of an interface and you want users to be able to choose between them (or rather, you want them to be able to write functions around your interfaces)? I don't really like it much either, but it's a completely valid use case for unexported methods. In fact I'm pretty sure this was an example from the language developers of why you can have unexported interface methods in Go.


> In fact I'm pretty sure this was an example from the language developers for why you can have unexported interface methods in Go.

Indeed. It's used in the standard library for describing Go's AST if I recall correctly.
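
For example, go/ast seals its expression nodes this way (abridged from the standard library):

    type Expr interface {
        Node
        exprNode() // unexported: only types in package ast can satisfy Expr
    }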


Go has a lot of pain points but the one positive thing it has over other languages is how it handles interfaces.

The whole point is that the caller can make new interfaces to fit existing structs in other libraries.
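
A quick sketch of that consumer-side style (Sizer is a made-up name; os.FileInfo already happens to satisfy it):

    // Defined by the caller; any type from any package with a matching
    // method set satisfies it implicitly, no declaration needed.
    type Sizer interface {
        Size() int64
    }

    func Total(items []Sizer) (n int64) {
        for _, it := range items {
            n += it.Size()
        }
        return
    }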


Actually, two of the pain points of Go are not knowing exactly what interfaces a type implements, and finding out that you misimplemented an interface not through static analysis but through a runtime type-check failure.

All this pain for what boils down to not having to write a wrapper. Sure, wrappers are dumb, but pushing what could be a static check out to a runtime error is dumber, especially since implementing interfaces is a lot more common than wrapping third-party structs.


Nothing new; ML-derived languages also allow for it.


TypeScript too.


Implicit implementation of interfaces is a disjoint concept from allowing third-party users to implement new interfaces on first-party or second-party structs.

The latter is wonderful, and other languages support this without needing to resort to the former.


I was at Gophercon and saw Russ's video where he spent 30 seconds explaining what generics were to the audience and showed a half-baked example of the syntax they are considering. I've been using Go for almost 5 years now and have no desire to see generics introduced. It would ruin what is currently a very simple and easy-to-use language. Am I the only one who is hoping this whole thing fizzles out?


Go already has generics for its built-in collection types, yet you can see nothing but downsides for user-defined generic types?
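
(Concretely, the built-ins really are parameterized today; user code just can't define new parameterized types:

    m := make(map[string][]int)      // map: generic over key and element
    c := make(chan map[string][]int) // chan: generic over element type

)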

Incredible mental gymnastics you've got there.


It's the number one reason I tried Go out but wouldn't return and do anything serious in the language. And I'm not the only one.


There are numerous successful "serious" projects written in Go!


Many companies also had successful "serious" projects in Algol, Pascal, Basic, Forth, you name it.

The world of programming languages has improved since then.


Of course there are, but I wouldn't want to be the one to write them.


OK -- well, from my 4+ years of experience creating and maintaining a big Go project, take it from me that it's really a very pleasant and productive language. I am branching out and learning Rust as an intellectual exercise, but I can't help thinking about how obtuse its syntax is compared to Go's. I will continue to choose Go in the future just because it's so quick to work with (while also providing solid safety and refactoring support). The `interface{}` thing rarely comes up as a problem.


It's not just the interface{} thing that's annoying. From my point of view, Go has way too many annoyances to be decent. No constness. No protection against copying. No compiler warnings. Crap error handling. Public/private denoted by uppercase/lowercase letters (WTF?). Implementation details leaking into the language -- such as the "typed nil" nonsense. Struct and interface embedding is insane. Multiple returns from functions but no tuples. Needlessly complex `for` syntax, needlessly complex receiver syntax. Needless magic channel operators (they could've easily just been functions).

The tooling is also annoying, with the most glaring deficiency being the lack of package versioning, which I believe is going to change (or has it already? I don't remember). And then there are tools like `go vet` which basically just attempt to make up for compiler deficiencies in sub-par ways...

Go has a very nice runtime, but the language itself and the tooling feels extremely half-baked. I'm curious whether Go 2 will improve upon this.


All very valid points. I think most of those features were omitted for the sake of simplicity by the Go team. Many people disagree with that.


Yes, the people who want generics are either people who don't use Go, or don't understand the purpose of Go and the implications of adding them to the language.


Great way here to completely discount the criticisms of people who've used the language and found it unacceptable how often you have to escape out of the type system of a "safe" language with `interface{}`.


I've written Go for three years, and I've successfully avoided using interface{}. It's a matter of architecting your application to fit the language and choosing a different language if your requirements grow beyond what Go can provide. Golang has made a tradeoff between simplicity and abstraction, which is a perfectly fine choice. Turning Go into Java is just going to ruin yet another language with complexity thanks to mob mentality.


It's not always that simple. Sometimes you cannot avoid using interface{}. There is a reason why some SQL libraries use interface{}, why (un)marshallers use interface{}, and why I use it heavily in my scripting interpreter which is written in Go.
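
For instance, these are actual standard-library signatures with no current alternative to interface{}:

    // encoding/json
    func Unmarshal(data []byte, v interface{}) error

    // database/sql
    func (rs *Rows) Scan(dest ...interface{}) error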


This is a rather elitist and blanket statement comment and is not at all fair. You do not know the mindset of "the people who want generics" but you're acting like you do to make a rhetorical point about how you feel about them. There are many more polite ways of making your case.


Exactly. Say goodbye to fast compiles and easy-to-read code. Not to mention the muddy waters that will result from having both interfaces and generics together.


Just imagine the peer-pressure strategies that must already be forming in community minds to deploy against Go 1.x loyalists :)





