Hacker News
Go 1.8 Release Notes (golang.org)
389 points by petercooper on Feb 16, 2017 | 190 comments



OK, so now this is out, how about we discuss what is actually in this release!

The sort package now has a convenience function for sorting slices, which is a nice shortcut: instead of having to define a special slice type just to sort on a given criterion, you can pass a comparison function instead.

HTTP/2 Push is now in the server, which is fun, but, like context, it might take a while for people to start using it in earnest. Likewise graceful shutdown. Is anyone experimenting with these yet?

Plugins are here, but on Linux only for now. This will be interesting long term for things like server software that wants to let others compile plugins for it and distribute them separately; presently such extensions have to be compiled into the main binary.

Performance: GC pauses are now down to 10-100 microseconds, and defer and cgo are also faster. Compilation time is improving but still not close to 1.4.

GOPATH is optional now, but you still need a path where all Go code is kept; perhaps eventually this requirement will go away. GOPATH/pkg is just a cache, GOPATH/bin is just an install location, and GOPATH/src could really be anywhere, so I'm not sure a special Go directory is required at all long term. If vendoring takes off, import paths could be project-local.

There's a slide deck from Dave Cheney with a rundown of all the changes:

https://talks.godoc.org/github.com/davecheney/go-1.8-release...

Finally, as someone using Go for work and play, thanks to the Go team and everyone who contributed to this release. I really appreciate the incremental but significant changes in every release, and the stability of the core language.


> The sort package now has a convenience function for sorting slices, which is a nice shortcut: instead of having to define a special slice type just to sort on a given criterion, you can pass a comparison function instead.

So you can do this now:

  sort.Slice(people, func(i, j int) bool { return people[i].Name < people[j].Name })

But what I think most people want is:

  sort.Slice(people, func(p Person) string { return p.Name })

Since that lends itself to things like this:

  // Sort first by age and then by name
  sort.Slice(people, func(p Person) (int, string) { return p.Age, p.Name })


They have sort.SliceStable for this, I think. It's not much more verbose, and is arguably clearer and more flexible, as you can define exactly how the keys are compared (for example, perhaps you want to downcase names before comparing).

Compare the old way to do a multi-key sort:

https://play.golang.org/p/NJTVoeQkMt

With the new way:

https://play.golang.org/p/6ReQxHT1lR

It's a lot simpler and clearer and doesn't require a special slice type (as it did before), but lets you sort on multiple keys if you need to. Presumably sort.SliceStable is a bit slower, and that's why there's a separate sort.Slice.

Note though that the old way using slice types does let you do something neater at the point of use at the cost of more setup:

    OrderedBy(language, increasingLines, user).Sort(changes)
which in some ways is more similar to what you desired?

Seeing this sort of hack added does make me think containers are the one big weak point in Go at present. They are a bit ugly, and special cases abound (append, delete, range, sort); maybe in Go 2 they could have something neater, more generic, and more extensible. My hopes for Go 2, as a really uninformed language noob, are a tidy-up of the stdlib, variants, and more elegant ways to manipulate containers (I won't say generics!). I would love to see things like magic comments and struct tags disappear too, but that'll never happen. I'm pretty happy with the language otherwise, and would actually like to see Go get smaller and simpler over time.


Ah, SliceStable does make it a little easier at least. You'd think you could just wrap that in a function, but you can't easily do that.

Sorting is probably the one area where I really miss generics in Go, because comparing and sorting IS such a generic thing to do. In my head I think, "Oh, so I just want to do

  hosts.sort(key=operator.attrgetter("os", "version", "address"))

and then proceed to write the 20 lines of code that let me do that :-(


Just thought of one more way to do this which is neater - define functions on a simpler slice type:

https://play.golang.org/p/0YcxA-Ufs-

So your sorting becomes quite neat:

    sort.SliceStable(changes, changes.byLines)
    sort.SliceStable(changes, changes.byUser)

I think I might use it this way (I haven't actually used this in a real app yet).


Perhaps something combining all of the things you've made use of? https://play.golang.org/p/8RDqjjZDB8


But apart from reversing the resulting slice, how do you sort in descending order (">")? The current approach is more general and can be used with custom comparators. For example, it should be possible to implement the sort you have in mind with it.


Sort could have a reverse option, but for that to work well Go would need overloaded functions, or at least types that you could define an ordering on, which would let you do

  a := ReverseOrder(10)
  b := ReverseOrder(20)
  
  a < b // false

  sort.Slice(people, func(p Person) string { return ReverseOrder(p.Name) })

Comparison functions are just not very user friendly when what you really want to do is just sort naturally based on a list of fields.

  sort.Slice(people, func(p Person) (string, string, string, int) { return p.Country, p.Gender, p.Name, p.Age })

turns into a pretty messy comparison function.

Not to mention, the key-func (aka DSU [1]) approach is much faster when the comparison function is expensive to call. Imagine something like

  sort.Slice(numbers, func(i, j int) bool { return num_factors(numbers[i]) < num_factors(numbers[j]) })
vs.

  sort.Slice(numbers, func(n int) int { return num_factors(n) })
One needs manual memoizing or caching of this function call; the other can do it internally in a much simpler manner.

[1] https://en.wikipedia.org/wiki/Schwartzian_transform


> GOPATH is optional now

GOPATH isn't optional; it just has a default value now.

If you're using anything in a vendor folder, you still need to set GOPATH to a parent folder of your /src/project.

I.e. `vendor` is still ignored if it is outside GOPATH, and if you don't set GOPATH, it just defaults to `$HOME/go`. If you ignore GOPATH and don't set it, your code won't build.

You still need GOPATH for now.

(once vendor can exist outside of GOPATH we can really start moving forwards with things)


Yes, sorry, that was badly phrased. It should have been "setting GOPATH..."

I'm confident they will gradually get rid of it. It was a convenient crutch at first, but quite limiting for those outside a monorepo.


Is this likely to happen?


The tracking issue is here: https://github.com/golang/go/issues/17271

Maybe? I think broadly the answer is yes, but not very soon.

It'll probably be a timeline like:

    - Sometime between now and 1.10 / 1.11 - allow vendor outside of GOPATH
    - 1.11 / 1.12 - dep formally becomes part of the go toolchain
    - 1.12+ ?? - GOPATH stops existing or significantly changes form.
I'm just speculating based on the progress of dep and the discussions in various issue trackers.

Nothing is going into 1.9; it's not ready yet even as a candidate feature. 1.10 could see some changes, and beyond that there will definitely be some kind of changes.

...but probably not this year.


To add: the overhead of defer and of calling into C has been reduced pretty significantly in 1.8. Anything that does quite a bit of the latter may see a fair improvement in performance.


> Compilation time is improving but still not close to 1.4.

It's worth noting that this is mostly the result of the automatic translation of the compiler from C to Go, which made it slower but also a lot more accessible to outside contributions.


And also that, in the meantime, the compiler got much better at analyzing and optimizing the program, so the generated code got faster and better. So while the compiler is still slower than 1.4, it does produce better code.


> Plugins are here, but on Linux only for now

The documentation says Linux only, but the build tags seem to suggest it may work on macOS?

https://golang.org/src/plugin/plugin_dlopen.go:

    // +build linux,cgo darwin,cgo
Can someone with a Mac confirm?



We've immediately used the graceful shutdown feature.


Is this in an open-source project? Any link to code using it?


I'm wondering if "The Go Programming Language 1e" (Donovan / Kernighan) is still relevant enough to be used as a first book for self teaching.

https://www.amazon.com/Programming-Language-Addison-Wesley-P...


Absolutely it is.

Go, the language, has changed very little since 1.0; most of the improvements have been to the runtime, GC latency and such.


I personally learned Go about 3.5 years ago by following the tutorials on tour.golang.org, reading the Effective Go document, and other blog posts here and there. Then I bought the book last year just out of curiosity. Honestly, I find that the experience you gain by reading/writing Go code and consuming online content is more beneficial than the book. The book has some good parts, but I don't think it's as practical as online resources.


> Honestly I find that the experience you gain by reading/writing Go code and consuming online content is more beneficial than the book.

This applies to almost all languages and technologies!


Yes, but sometimes it is much easier to get moving with a good textbook.

With Go, a few online tutorials and the documentation that comes with Go itself, were all I needed to get going. I am not sure if that was an explicit design goal, but I found the language very easy to learn.


I believe it was an explicit design goal. They wanted new employees joining from straight out of college to be effective within Google's codebase as soon as possible. Therefore the learning curve is pretty low if you already know another mainstream programming language.


The answer is yes. This is the Go version 1 compatibility promise [0]. The maintainers have been very careful not to break compatibility, even when that constrains bug fixes.

[0] https://golang.org/doc/go1compat


Yes, Go maintains strict backwards compatibility; you can apply everything from the book no problem (unless it exploits a Go bug, which I don't think it does), though you'll miss out on the shiny new stuff.


It is. I already knew Go, but read it from cover to cover. It's a very nice book, and as was said, the Go1 compatibility promise will ensure it stays relevant for a long time.


Yes, read that book cover to cover; it is a really nice book which teaches you concepts. It might not contain the latest features of the language, but it is a great reference book nevertheless.


I reckon h2 push (https://beta.golang.org/doc/go1.8#h2push) support will be big for web servers like Caddy and Traefik.

Caddy already has a few interesting ideas on how to use this: https://github.com/mholt/caddy/pull/1215#issuecomment-256360...






Also new: callback for client cert selection during TLS handshake.

https://github.com/golang/go/issues/16626


The 1.7 release was great for finally addressing the ever-growing size of produced binaries. [1] Moreover, some additional improvements were promised in 1.8, which I was looking forward to.

But after upgrading to 1.8 I am now observing a 3-4% binary size increase vs 1.7, so the trend has reversed back toward fatter binaries. :(

[1] https://blog.golang.org/go1.7-binary-size


You can use upx [1] to compress the binaries; I can usually shrink mine by 75%.

Read more on this[2] blog post.

1: https://upx.github.io/

2: https://blog.filippo.io/shrink-your-go-binaries-with-this-on...


I see stop-the-world garbage collection pause times have been reduced to microseconds, which is great. But for many applications the pause times for single threads still matter. Any numbers for that?


Official release blog post https://blog.golang.org/go1.8.


I'm seeing significant performance increases in my app (small RNA aligner). Haven't profiled yet, so not sure where the gains are coming from, but happy all the same.


> The DefaultTransport.Dialer now enables DualStack ("Happy Eyeballs"[1]) support, allowing the use of IPv4 as a backup if it looks like IPv6 might be failing

[1] https://tools.ietf.org/html/rfc6555


http://www.slideshare.net/huazhihao1/what-is-new-in-go-18-72...

So Go is becoming better and better for latency-sensitive tasks where GC pauses matter.


Looking at the C FFI improvements posted, I thought I'd check whether there were gains.

Time in ms to call a C function 2 billion times (lower is better):

    go 1.1.2: 95536
    go 1.8.0: 130105

It is a rather crude example:

    // plusone is the FFI call
    int x = 0;
    while (x < 2000000000) x = plusone(x);

Here's the source https://github.com/dyu/ffi-overhead

With other programming languages:

    c:     4778
    nim:   4746
    rust:  5331
    java7: 17922
    java8: 17992
    go:    130105

(edited for formatting)


Small stacks used for goroutines don't play well with calling C functions, which assume large stacks. That's the price you pay for a nice scalability story. There might be other issues that come from the fact that Go has an elaborate runtime.


Yeah, it was a toy example. Still, the overhead is steep for applications that call into C often.

The workaround is usually to batch your computations on the C side (micro-RPC) and minimize the number of calls (like CockroachDB does).


Nim is faster at calling C routines than C itself? That seems interesting... but presumably the measurement error is larger than 1%?


They're basically the same (it's just measurement error, as you said).

After all, Nim compiles to C.

What's interesting is the NRVO optimizations that Nim generates under the hood, without much effort from the developer (on non-trivial codebases).


Folks, can someone enlighten me on how the new argument liveness behavior (https://golang.org/doc/go1.8#liveness) works?

I mean, if you are in the middle of a long function, can you no longer be sure that parameters passed to that function haven't been GC'd? Or does the compiler do some analysis along the lines of "ooh, this param is not used after this point in the function, let's mark it as ready for GC"?


A somewhat unrelated question: what is/are the most-used stack(s) for web applications in Go? And how common, if at all, would a Go backend + React frontend stack be?


That's the beauty of Go: the "most used" stack is just net/http in the standard library. Other toolkits implement and extend its interfaces and work together quite well for the most part. Your choice of front-end framework is really a matter of taste.


Cool, thank you for the insight!


I can't tell about "how common", but a react web client with a go backend API is exactly what I do on my main product, currently.

It still has a few problems I haven't managed to solve, though (not related directly to Go). For example, given that the visible routing is handled by react-router, I haven't found a way yet to issue a proper 404 status (I have a catch-all route that renders the web client page, then the client router displays a "not found" page, but it will be a 200 status). Not that big a deal, because the web client is an application behind auth, not public-facing pages.


Another great 3rd party router is Chi. https://github.com/pressly/chi

My company uses httprouter, but if Chi had been available when we started, we'd have used that instead. It follows Go's standard library API (httprouter diverges a little bit with its params) and adds support for Go's Context (which httprouter currently lacks).


I've been using gorilla/mux, precisely because it follows the stdlib API. I'll certainly have a look at Chi, thanks for the link.


The code that does this for server-rendered JS is in https://github.com/ReactTraining/react-router/blob/master/mo.... There's code to convert URL patterns to regular expressions in https://github.com/ReactTraining/react-router/blob/master/mo..., although that seems to work only for a single pattern, not all of them. Presumably you could find a way to convert all patterns to a single regex somewhere in the codebase and use that to check the request in Go.


Hmmm, indeed. I hadn't thought about using React server-side rendering and intercepting it to check which status to provide. Good idea, thanks!

Now I'll have to think about whether it's worth adding a whole SSR stack just for that, in my app behind auth :)


A priori it seemed really ideal to use Go and React, so that's good to know. That issue is something I'd hardly have found out about just looking online, thanks for sharing.


Note that it's not really about using Go, but more about implementing routing on the client side: you lose HTTP statuses for non-API URLs, whatever the backend language is.

But then again, when you have static pages for the public-facing (and indexed) parts, and proper HTTP statuses for API requests, having statuses on web client URLs or not is not that big a deal, so that's probably why it's not often mentioned.

The only problem it could cause is if someone makes a deep link to somewhere in your app: it will return a 200 status and be indexed by bots, probably indexing the login form for just about any URL. So that's the annoying point (huge trolling possibilities there, btw :) ).


Indeed there is this caveat and its exploitation possibilities. I appreciate the thorough insight, thanks for taking the time!


I suggest you adopt a third-party router rather than using the pure Go standard library for this. https://github.com/julienschmidt/httprouter is the most popular one, and it supports a custom 404 handler: https://github.com/julienschmidt/httprouter/blob/master/rout...


Actually, Go routing is not the problem; React routing is. The Go router only knows about API endpoints and has a catch-all route to serve the web client, which then manages its routing by itself. The advantage is that the backend only needs to know about API routes, and the client is totally free to implement whatever routes it wants.


I can't help you with statistics, but it doesn't seem strange to me. I tend to start with (pure) Go for web services. If the frontend gets bothersome, I use something relatively lightweight, like Vue. If the backend gets a bit unwieldy, I add something minimal there, like Alice, httprouter/httptreemux, or Chi. There are a lot of options on both ends, but I like to start small until I notice I'd have to start implementing whole stacks, at which point I might add a library close to what I had in mind myself. I work mostly on lower-level projects and services, though.

I don't know about the most-used stack; there are many ways to go depending on what you want to do. But I think the kitchen-sink approach of gigantic frameworks isn't commonly associated with Go, and I don't think Go really has one go-to stack like, say, Ruby on Rails. That does mean you'd have to be somewhat familiar with both Go and its community projects to make a decision.


Why is the latest stable version on their website still 1.7.5? Where can I download the 1.8 installation for Linux (not source code)?

I'd just removed my previous version when I realized there's no 1.8 on their website.


It's not released yet. The release machine is running.


What Linux distro are you on? Most people on Linux tend to use their package manager to get updates. Or is your distro too far behind on Go?


There are a few packages I always install directly from source, to avoid lagging behind. Go is one of them.

If you're using Debian, or some similar system, chances are your package manager will never deploy anything beneath /opt. So in my case I configure Go to install in /opt/golang/.

On this particular host I have only three packages here:

      $ ls /opt/
      arduino-1.8.0
      calibre
      go-1.8


I'm using Xubuntu; how do I manage Go through the package manager?


I use godeb: https://github.com/niemeyer/godeb

It packages the go release into .deb files, which you then can manage using standard Debian tools.

Note that since it scrapes the release page for what releases are available, it won't be able to download 1.8 until it has, you know, actually been released. Also, the release page sometimes changes format upon a new release, and it may take a bit for godeb to be updated.


I've just installed 1.8 through godeb, thank you!


Ty, I will try it


I'd encourage you to look at building from source [1] rather than using package managers or binaries. It's quite simple, lets you upgrade/downgrade at will, and importantly makes cross-compiling to other architectures/OSes trivial.

https://golang.org/doc/install/source


You can use 'godeb' to install the latest version of Go. It transforms the upstream tarballs for the Go language into .deb packages and installs them.


Ty


I am just leaving the release party in Hamburg, Germany and looking forward to simpler sorting and all the other improvements. Awesome stuff and awesome people!


Random question here, but what are people using in the way of ORM/SQL access with Go? Just database/sql, or are there other good packages worth checking out? I don't expect a full-blown ActiveRecord-style ORM, but something that takes a bit of the pain out of mapping structs to records.



Check out https://github.com/vattle/sqlboiler. I usually prefer the standard library. Also, look at https://github.com/jmoiron/sqlx for easier struct mapping.


I've been looking forward to 1.8 to solve this problem. I wrote a small tool using the new `ColumnType` info that takes my SQL queries and generates code to query the database and unmarshal rows into an array of structs.


I would love to see that tool


Final release notes: https://golang.org/doc/go1.8


What is the best place to start if I want to learn Go? Any books? Tutorials?


Take the tour on their site. Check out gobyexample after that. Then read effective go.


Does anyone know the roadmap for Go 1.9?


There's some stuff at the end of these slides

https://talks.godoc.org/github.com/davecheney/go-1.8-release...


Here's a thread where the developers discuss what they intend to work on for the next release:

https://groups.google.com/forum/#!msg/golang-dev/Dn-pfWETUrM...


Wow, an official dependency manager. Didn't see that coming. Makes me happy.


Yay Mips32!


Linking to the GitHub release seems more like a grab for fake Internet points than a useful post.

A useful post would have waited for golang.org to be updated and linked to the official release notes.

Edit: Thanks to whoever updated the link to point to something useful, at least. I still would have preferred this come down until the release was actually posted.


An unfair, groundless projection on your part. Thanks for spreading a little more negativity today :-)

From the HN guidelines: Please don't submit comments complaining that a submission is inappropriate for the site.


I'm as excited as the next gopher about Go 1.8's release, but for all practical purposes it hasn't been released yet. It's actually an incredibly frustrating submission for those of us excited to get the official binaries.


In addition to what schmichael mentioned: you linked to the general release history front page, which in the future will display nothing about the 1.8 release. You should have linked to a page that is not expected to change as time passes. This is why we use permalinks for blog posts rather than linking to the front page of some blog.


It still hasn't actually been released yet though, 3 hours after your post, so it was a little premature :)


What about a comment complaining that a comment is inappropriate for the site?


Don't you have some errors to check for, or something? ;)


For people that use Go, this is a useful PSA, since they can grab it right now. For those that don't, it's uninteresting, haha.


If only there was some kind of list that enthusiasts could join to get notifications like this.


Draft release notes: https://beta.golang.org/doc/go1.8


Thanks, we've updated the link from https://github.com/golang/go/releases.


Not really, since there's a risk of new people deciding to use it whenever it comes up. It would be silly to say "alright, we get it, cigarettes are bad for you, can we stop harping on this please?"


Please don't do programming language flamewars on HN.

We detached this subthread from https://news.ycombinator.com/item?id=13662583 and marked it off-topic.


I was responding to "it's kind of useless to be beating the language up upon every release". My comment applies to literally any programming language release or other product release notification; it makes sense for people to discuss the pros and cons of a product every time it comes up, because A) things change and B) it might be the first time that someone has come across the product.

If minor version release notes are on-topic enough to get posted every single time, I'm not really sure how an attached discussion of the product in question (possibly involving criticism) isn't.


> there's a risk of new people deciding to use it whenever it comes up.

And?

They're free to do so, if they think it can fit their needs.

Also, while Go isn't great, it isn't as terrible as some people make it out to be. It mostly just isn't in any way exciting.


> They're free to do so, if they think it can fit their needs.

Which is why people need to be here every thread explaining the problems with Go.


Equating lung cancer and the lack of programming language features?

You might be in the wrong business.


Cigarettes kill people. Go is a programming language. There's no need to relitigate the issue in every Go-related post.


>Cigarettes kill people.

Very, very slowly. Time spent fighting lacking programming languages also kind of kills people (it steals time off their lives).


So what?

Even I, who disagree with some of the decisions that were taken, see value in having people use Go instead of C.


And people explaining the problems with go allow potential future users to more accurately determine the value of using it.


[flagged]


We detached this subthread from https://news.ycombinator.com/item?id=13662467 and marked it off-topic.


Go not having exceptions is a definite plus for the language. Adding them would be a change for the worse, not an improvement.


> Go not having exceptions is a definite plus for the language. Adding them would be a change for the worse, not an improvement.

I disagree (I'm from the Elixir/Erlang "code the narrow happy path and accept failure as quickly as possible" camp) and want to see your empirical data that discarding exceptions was a good design decision. Because to me it seems like you would spend a lot of time writing unnecessary boilerplate code to check for errors everywhere (if you bother checking at all... and if you don't, you are begging to get into an inconsistent runtime state which could lead to extremely hard-to-trace bugs)

It is my informed opinion that all unexpected states are bad and should be logged and killed/restarted from a known state ASAP. (Basically, the Erlang philosophy.)

Here is my data: Here are the most-used words in Go https://anvaka.github.io/common-words/#?lang=go ("if err != nil return err," anyone?) and here are the most-used words in Elixir https://anvaka.github.io/common-words/#?lang=ex . Sure seems like a lot of Go code is written to handle these "exceptions" they're not having! ;)

I didn't actually come here to debate Go, but statements like yours without any evidence to back them up are basically a religious belief and absolutely nothing more.


Why am I getting downvoted for empirical data? That's at least an informed opinion with evidence. If 33% of your code is error handling in one language, and in another it's 2%, and both languages are equally performant and have similar logic, then how is that not bad? That's 31% more code to do the same work!


"If you bother checking" - yes, you absolutely must check, and consider the repercussions of every error. This results in much better functioning, more stable, and easier to reason about code than blanket exceptions.


You have errors, which can functionally be considered equivalent (yes, small differences here and there, but from a programming standpoint it's more like Java forcing you to try/catch, thanks to the multiple return values).


Except that I'm forced to try/catch on every line that can possibly produce a runtime error.

If you were selling this language to me, how would you explain writing 33% more code to do the same business logic, just to handle errors, over something like Elixir with its rather astonishing uptime and response rate and almost no boilerplate error-handling code? For the backend of an API or webserver, hypothetically.


So I don't know Elixir, but reading up on it, the main difference seems to be that you can silently ignore errors? You can do the same thing in Go, except it's demarcated by assigning the error to _. Not sure what the main difference is otherwise?


Any errors crash the node (they are NOT silent, that's the whole point!), get logged, and the process is then restarted by the supervisor in well under 1 ms. In the event of too many restarts in too short a time, the supervisor is restarted by ITS supervisor (a supervision hierarchy). In this way you only have to code the happy path and walk the "known states", while noting frequent sad paths/"unknown states". You can then elect to handle those in your code.


You can still have fatal paths that never get solved, or some large percentage like that. Running the Go process under something like supervisord would give you the same effect; it's not clear why this is so different, or why it needs to be at the language level.

In either case, you should be handling errors.


True, I guess.

Just found this comparison which is at least interesting: https://www.slant.co/versus/126/1540/~golang_vs_elixir


Rust uses error values and its error handling is pretty terse (and more type-safe), so this is more an indictment of Go than of error values in general.


I agree. I wonder if C++/Java would have gone down the exception path if they'd had multiple return values? It seems to me exceptions are really a special kind of return value that doesn't clobber your single return value.

Compared to Java code I have ported to Go, returning error + defer has led to much more localized (and thus grokkable) and less verbose error handling. Less verbose? I suppose this is not true if you throw exceptions and handle them all at the top (Go has panic for those kinds of fatal events, BTW), but if you regularly have try-catch-finally blocks, the Go approach is in fact less verbose.


Multiple return values are a kludge. Combined with nil being usable only for pointer values, it's entirely too easy to accidentally not process the error and accidentally use the non-error return value (bool, int, or other primitive type).

Go should have had Rust-style enums that let you do things like Option<T> and Result<T,E>. This way, you are forbidden by the compiler from ever extracting an invalid value from a function that returned an error (or returned no value).


I agree wholeheartedly. Also of note, the combinators Rust offers on Option/Result make it possible to chain multiple operations that could fail and then check for the first error that occurred. In Go, you have to check each potential error immediately after its potential return. This allows Rust to be just as clear as Go while being more concise and enforcing at compile time that errors are handled. Propagating errors out of a function is also easier given try!/?.

I really don't see a downside of Rust's approach compared to Go's after having programmed in both. But, then again, Rust's approach requires generics and I know Go's team doesn't want to go in that direction.


Chaining and failing on the first error is only useful for the most trivial of error situations. Non-toy programs handle errors with retry strategies, alternate algorithms, and fallbacks, which may all be contextual.


It depends on the problem domain, but it's been my experience across a few domains that many errors are of the "trivial" type you've described, in that recovering from an error by taking some alternative path is usually not very valuable. A bail-out-on-any-error approach doesn't necessarily mean it's a "toy program".

Nothing in Rust (or Go, even) ordinarily precludes the type of retry approach you're describing, of course.


> I know Go's team doesn't want to go in that direction.

I disagree:

https://github.com/golang/proposal/blob/master/design/15292-...


Did you check the discussion thread for that proposal? It was rejected by the mailing list devs:

https://github.com/golang/go/issues/15292#issuecomment-22198...


Yes, I read it.

Here is the comment you linked:

> Ian's proposal demands too much. We cannot possibly develop all the features at-once, it will exist in an unfinished state for many months.

> In the meantime, the unfinished project cannot be called official Go language until done because that will risk fragmenting the ecosystem.

> So the question is how to plan this.

> Also a huge part of the project would be developing the reference corpus. developing the actual generic collections, algorithms and other things in such a way we all agree on that they are idiomatic, while using the new go 2.0 features

The author of this comment is not a member of the core team, as far as I know, and the comment doesn't suggest the proposal was rejected.


Multiple return values are most definitely not a kludge. They're an awesome abstraction that few languages actually provide because of the implementation complexity. The kludge is where a language forces you to use a struct or some similar compound type when it's most natural for a function to return multiple values.

And it's symmetric with function parameters. If you think multiple return values are a kludge, then for the same reasons so are multiple parameters. After all, you can always curry multiple parameters, or shoe-horn multiple parameters into a compound object like a struct.

Now whether it makes sense to leverage multiple return values as a substitute for specialized error types, or for exceptions, is a different question. But if you don't see the value in multiple return values, then I think it's hard to assess things objectively.

I program a lot in Lua and love multiple return values. And in Lua the standard practice is to leverage multiple return values with exceptions. E.g.

  local x, y = assert(bar())
where on success bar is expected to return a first value that evaluates to true in a boolean context. If so then assert just returns that first value and any subsequent values. If not then it throws the second value as an exception, which usually is expected to be an error description string or an object that can be stringified, but assert doesn't care about the type.

Note that assert() isn't a specialized function. You can implement it in pure Lua. Multiple parameters and multiple return values are generalized as lists (similar to Perl) which also can be used when, e.g., initializing a table.

   function foo() return "one", "two", "three" end
   for i,s in ipairs({ foo() }) do print(i, s) end
prints

  1 one
  2 two
  3 three
Of course these things can be abused and they're not strictly _necessary_. But they're powerful abstractions that can prove very useful, especially in the context of functional programming styles.

Go seems simple on the outside but is actually very elegant and sophisticated internally. Multiple return values, lexical closures that automatically capture by reference and which can escape their original scope without any fuss, lightweight stackful coroutines (stackful meaning no decoration required for recursively invoking yielding functions) that automatically adjust their stack size. You usually only see these things in interpreted and dynamically typed languages, if at all, because they can be very difficult to implement efficiently with an optimizing compiler. Similar concepts in C#, C++, Rust are not nearly as powerful yet still require syntactic kludges (like decoration) to help drive the kludgy implementations. That Go implements these things seamlessly, while compiling straight to native code, is quite a testament to the Go team.

Most of the attention these days is given to stronger typing systems. And that's fine. Go doesn't win too many points on that front, although its duck typing has unique advantages, even over traits. But it's not reasonable to overlook all the other things that Go does extremely well and almost without equal for an optimizing, AOT-compiled, strongly typed language.

All of these things come with trade offs. Typing systems are hardly the be all, end all of language design. They're usually not even the most interesting part, nor the part most relevant to productivity. What's unique about Rust, for example, isn't its typing system, but its borrow checker. And the borrow checker drives directly and indirectly much about their choices with other aspects of the language. Like I said, multiple return values are difficult to implement underneath the hood, and in a language like Rust might require some really odd declaration syntax and have other unforeseen consequences. Thus you see them reconceptualizing multiple return values in terms of a kind of polymorphic compound type. That's not intrinsically better--if anything it can be seen as a kludge--but in any event it's driven by other needs than pure abstract functionality and should be not only criticized that way, but appreciated that way.


You still need the struct because all of these

  s = append(s, f())
  m[k] = f()
  c <- f()
  f().DoStuff()
  h(f(), g())
work with single-valued functions but not multi-valued functions. The feature is not well integrated with the rest of the language.


> I agree. I wonder if C++/Java would have gone down the exception path if they had multiple return values? It seems to me exceptions are really a special kind of return value that doesn't clobber your single return value.

Because of constructors, I don't think they would have eschewed exceptions. Constructors don't return any value (in both C++ and Java ABI terms, they are implemented as void functions!), so having "I don't know how to do what your arguments told me to do" be indicated via a thrown exception is an intentional model rather than using return codes.


I think I'd want more than multiple return values. I'd also want syntactic sugar to pass errors through to callers, or alternatively, ignore errors and skip future processing and have the error be the result of the future processing.

These things are typically done by exceptions and monads; the "zig" language has some interesting syntax around this kind of thing as well.

But just multiple return values isn't enough for me - too much "if error return error" noise for my taste.


> But just multiple return values isn't enough for me - too much "if error return error" noise for my taste.

This is such a big deal, I don't understand how it's defensible. Code is read dozens of times more often than it's written. When code looks like:

    val1, err := thingImActuallyTryingToDo1()
    if err != nil {
        return nil, fmt.Errorf("thing failed: %v", err)
    }

    val2, err := thingImActuallyTryingToDo2(val1)
    if err != nil {
        return nil, fmt.Errorf("thing failed differently: %v", err)
    }

    val3, err := thingImActuallyTryingToDo3(val2)
    if err != nil {
        return nil, fmt.Errorf("thing failed again: %v", err)
    }
it's impossible to tell at a glance how the function is actually working. You have to parse out 90% garbage just to get to the details of what's really happening.

Worse, it's never this "clean" and repeatable in practice. It becomes a bunch of subtly different variants on the theme (append if the call didn't error, do something different if it did, etc.), such that the actual logic of your function is obscured in a maze of error handling.


In distributed systems (i.e. what Go was designed for), 90% of your actual logic is dealing with errors of different shapes and colors. How you handle that is what makes the difference between a toy and a reliable system.

Code doesn't become easier to read/understand by just tucking away 90% of it (the hard/interesting parts at that, mind you).


Dealing with errors should create noise in the code in proportion to the places where error handling needs to take action.

If 90% of your code really is unique, specific action to error conditions, then by all means, this topic isn't for you.

But if 90% of your code is "stop processing in this function, log the error, and pass it up the stack", then why not abstract that away and reduce the noise? (i.e. something like exceptions or monads or arrows or ...)


Yes, thank you. If you somehow have a problem domain where most of the errors you encounter can be gracefully (but uniquely) handled, then that's fantastic for you. But in virtually every golang example I've seen, you're just gift-wrapping the error with a little bit more context (or not) and returning it up the stack. In which case you've just replaced automatic, computer-executed exception handling with bespoke, artisanal, handcrafted exception handling for no gain and a hell of a lot of loss.


> it's impossible to tell at a glance how the function is actually working. You have to parse out 90% garbage just to get to the details of what's really happening.

After a while, you just learn to ignore the `if err != nil` and get on with your life. I mean, sure, it would still be nicer to have an abbreviation, but it's not like Go code is 90% `if err != nil` (as critics always try to frame), and when I read Go code, I'm most definitely not spending most (or any) of my time stumbling over them.


It's not 90%, but it seems like it has to be more than 50% because not only does every call require an error check but f(g(x), h(y)) has to be broken down and given three error checks.


Yeah. In my experience it's way closer to 90% than to 50%. Even with one argument, function chaining is absurdly annoying. You can't just call f(g(x)), because you have to pull out the error check and do it manually.

And most of your code winds up doing error checks because if any call in the stack under a particular function produces an error, it usually winds up having to bubble the entire way up the stack.


>too much "if error return error" noise for my taste.

I understand that. I was pointing out this is less noise than try-catch-finally (assuming the programmer is comparing Go to C++/Java/C#). I don't deny some functional langs have more elegant handling in this respect.


No, it's not less noise, because with exceptions, you can put all you error handling in one place, or wherever is the most appropriate. With go, you are forced to handle it at literally every function call.

If you want some data, "if err != nil return err" contains the four most frequently used words in Go code. See https://anvaka.github.io/common-words/#?lang=go . For comparison, try/catch are only 34th and 35th for Java, and even lower for C# and C++.


> With go, you are forced to handle it at literally every function call.

Every call that can fail, which isn't quite every function call.


It isn't less noisy. With exceptions I can choose to have no noise or concentrate all the noise in one part of the method. In Go there is junk all over the method deeply conflating the good path and the exceptional path.


If you add exceptions, you trade noise for action-at-a-distance. From my experience, I prefer a little noise over action-at-a-distance.


Everything is already at a distance when you load libraries and bundle up your commonly used functions elsewhere.

DRY principle seems not applicable here.


Go's error handling is the primary reason I'm not using that language. If I ever, even on a single line, don't handle an error, my project might fail in an unexpected, undebuggable, unlogged way. With any other language I get an exception and a stack trace printed out to me. With Go, it just silently fails. How this was ever construed to be "better" is beyond me.


Adding tuples (value, error) and / or optionals would be a better way.

But that would require either more special-case magic (always bad), or generics (not in 1.x, supposedly).


Even a special-case Result<T, E> in the language would be far more useful than multiple return values simply because you could pass it through a channel or a slice.


> Adding tuples (value, error)

Like https://godoc.org/net/http#Get?


Returning multiple values is the norm. But these values are immediately unpacked; they cannot be passed around as a whole or be given methods.

It would be great to have something like a conditional return to get rid of the endless `if err != nil`, or, while we're at it, from endless `.then` that futures-based programming ends up with:

    foo_result, err = foo(...)
    on_failure(err, "foo failed")
    bar_result, err = bar(foo_result, ...)
    on_failure(err, "bar failed though foo did not")
    return bar_result + 1


I've never found `if err != nil {` to be a problem; on the contrary, it draws my eye right to the error-handling logic in any function. OTOH, exceptions tend to obscure this (in the best case you have checked exceptions and you are only unsure about which function is raising the exception; at worst you have something like Python where you have no idea what exceptions could be thrown or if they're handled properly), and Rust's `try!` tends to blend in too well with its surroundings (this is a special case of Rust's density making it hard to read). YMMV.


I get it, but at the same time, it's kind of useless to be beating the language up upon every release.


Please stop. The whole "Go sucks because it doesn't have generics" thing is so overdone at this point. One of Go's strengths is that it's a simple language. It's not getting generics or exceptions anytime soon.


Well, and I say that one of Go's shortcomings is that it doesn't have generics.

See how mere opinions work? Now what?


No there's a difference between expressing an opinion and constantly complaining about it. I already read a hundred comment threads and blog posts for and against generics. Do we really need a "still no generics!" comment on every fucking article about Go? It's just noise at this point.

Me pointing out it's a simple language is not a matter of opinion; it's an intentional design decision of Go. Generics will complicate the language more than the Go developers are willing to tolerate.

At the time of writing, there's 110 comments on this post and 50 are replies to prodigal_erik's comment on not having generics. HALF the comments on this whole thread are discussing something that's already been discussed for years with no end in sight.

For the record, I really don't care if we have generics or not. I just want a higher signal-to-noise ratio for topics about Go on Hacker News. This is a low-signal topic. You may as well argue about systemd.


>At the time of writing, there's 110 comments on this post and 50 are replies to prodigal_erik's comment on not having generics. HALF the comments on this whole thread are discussing something that's already been discussed for years with no end in sight.

Doesn't that point to an actual demand for the feature, and a constant pain point for the language? Why else would it come up in all discussions? It's not as if all discussions about Go spin off into random topic/feature talk -- it's usually generics. Similarly, I've not seen people discuss anything other than common pain points (or perceived pain points) in a language. E.g. with Python it will be about the GIL or the 2-to-3 transition. With JS, the "fatigue" and/or the crazy coercion rules. With C++, how complicated it is or the slow template system. And so on.

Your argument is about ease of use, but people rooting for generics also argue that they make lots of programming cases easier. Are there many programmers in 2017 who really wish that e.g. C# or Java didn't have generics?

And what the original feature set was as designed isn't very relevant, as languages can and do evolve. The same goes for the opinions of the main developers.


>Your argument is about ease of use

No! My argument is more meta; that this whole discussion is an unproductive waste of time and I can't tell anymore if people are just trolling or looking to start a flame war by bringing up generics in a Go thread. It's noise and it's making Hacker News a worse place.

This has been discussed to death already.


I didn't expect a full-blown rehash of my parenthetical. I wanted to point out that "people who use Go today" is one of two audiences who might be interested in announcements of changes.


Go has exceptions, though it doesn't call them that; it's just not idiomatic to let them propagate across public API surfaces.


They certainly do propagate across API surfaces. People just ignore that.

Just like Go error handling: looking at GitHub, you see that Go mostly makes it easy, and less visible, for people to totally ignore errors.


Go's exceptions are called panics; the recommended practice is generally not to let them cross API surfaces, and the standard library (again, generally) follows that.

It is, of course, possible to create libraries that don't follow the idiom.


> It is, of course, possible to create libraries that don't follow the idiom.

Yeah. Thing is, though, there are these libraries that access pointers (especially in Go, where lots of things are implicit pointers, like maps and interfaces). Or, may God strike them down where they stand, people who use division. Or allocate. Or cast. Or use interface{}. All such code has implicit (non-visible) panics in it.

What I mean to say is: in practice, Go code is not fundamentally different from Java/C++ code: everything can throw. Everything. Yes, even that thing that you're thinking of right now that couldn't possibly ever think about throwing. There are just people who admit this and people who don't.

That the Go authors come up with idioms that are so incredibly shortsighted is emblematic of everything else about the language.

This is how Go works. They claim they are perfection, and know better than anyone.

For instance, compare:

https://blog.golang.org/go15gc

with

https://blog.plan99.net/modern-garbage-collection-911ef4f8bd...

(just to put up something other than the generics discussion. But everything about go suffers from both massive deficiencies and an extremely arrogant development defending deficiencies ...)


With the significant risk of becoming less interesting for large number of current users.


As a daily user of Go, I wish so much it had some tools in its toolbox for simple boxed generics. Things like `Stack<T>(a T)`, `Max<T>(a T, b T)`, and other data structures and basic functions.

Then for `err != nil` we need some type of syntactic sugar to simplify this. I don't know what it would look like, but so many copies of that simple if statement get written that it is a bit ridiculous.


My experience working with Java all these years is this:

1) Java is not suitable at all for the task at hand; I am trying to solve something far more suited to a different type of language.

2) Java works fine for the purpose, and that leaves me time to think about how Java could fix its longstanding issues.

Ideally Java should fix that stuff, but I've come to the conclusion that 2) happens only when Java works fine for me, at which point fixing those issues is either impossible or very low priority. I think the same will apply to Go or any other language.


> With the significant risk of becoming less interesting for large number of current users.

There is no data demonstrating that statement. I don't think you are talking for the majority of people using Go. Choice is good. Generics in the language wouldn't change anything for you as you wouldn't be forced to use them.

I don't understand people who hate generics. They are just types and they are here to help the developer write compile time type safe code. interface {} everywhere isn't compile time type safe.

The biggest argument against generics is that it would increase compilation times. Well, the compiler would still be faster than the developer copying and pasting code manually, or rewriting the same algorithm for each type variation again and again. Isn't programming about automating stuff?

For those who complain generics are too complicated to understand: I don't know what kind of developer you are, frankly, if you can't understand something as basic as generics yet pretend you can work with a language that has pointers, channels, threads, and mutexes... What do you think is more complicated: generics or concurrency?

While the argument itself is pointless (it's unlikely one can retrofit generics into Go in a backward-compatible way), there is no need to answer with these bad-faith arguments. It really sounds stupid.


The selling point of go is that it is easy to read and maintain because it is so opinionated. If you start adding features that only some programmers use, then you lose the language's strongest point.

For Go choice is bad. Go is going so strong because any Go programmer can read any Go program and library after a couple days with the language. Can you say the same for any language that has generics?


I agree with this, but at the same time some solution for type safe generic data structures would be nice. I'm not sure if there is a best-of-both-worlds solution. :/


>The selling point of go is that it is easy to read and maintain because it is so opinionated.

The selling point of Go for most people is rather: hey, a language with GC (easy to manage memory) AND a nice parallel/concurrency story AND static builds AND faster than the dynamic language I use.

I don't think "it lacks generics" comes into play much as any kind of benefit.


That's a strawman: Grandparent said "easy to read and maintain" which is not "it lacks generics". The Go team has communicated that they would like to add generics if they could do it without breaking the language or introducing large inconsistencies.

Re the selling point: You're underestimating the importance of a conceptually simple language. If you put a new developer on a project in any other language that he has never used before, he will need at least a week to get up to speed. With Go, I had colleagues sending me pull requests with real, useful feature additions to my Go programs within 1-2 days of them installing the compiler.


>You're underestimating the importance of a conceptually simple language. If you put a new developer on a project in any other language that he has never used before, he will need at least a week to get up to speed. With Go, I had colleagues sending me pull requests with real, useful feature additions to my Go programs within 1-2 days of them installing the compiler.

Sure, but maybe you're overestimating the complexity of Generics (on the programmers, not the implementation of the compiler side)?

What makes you think those same programmers could not grok them? They could grok CSP (channels etc), implicit interfaces, pointers, reflection, unsafe pointer work, and closures, but suddenly Generics are too much? Seems rather arbitrary -- especially since the language already has generic functions.

It's also a matter of weighing initial familiarity (productive in a few days) against eventual expressiveness and power (how much more productive are you after a few months and beyond?).

Because your argument seems to me like saying Go is Nano: one can start editing immediately.

Sure, but Vim/Emacs are worth the investment in learning them, even if one feels lost for a little while when starting with them.


> There is no data demonstrating that statement.

Neither is there data on what percentage of programmers would move to Go if generics were available.

> Choice is good.

It is even better to have more programming languages with different preferred feature sets.

> I don't understand people who hate generics.

Sure, just like you don't understand that it is entirely possible for some people, who can write useful and robust production-grade applications without generics, to ask what the big deal is about generics or the lack of them.

> For those who complain generics are too complicated to understand, I don't know what kind of developer you are frankly if you can't understand something as basic as generics yet pretend like you can work with a language that has pointers,channels and threads and mutexes... What do you think is more complicated? generics or concurrency?

There is no need to decide on behalf of programmers on what concepts they should find difficult and what not. For example some people using Rust find borrow checker really difficult to grasp some find it an elegant and obvious way to track objects.


> % of all programmers will move to Go if Generics are available.

It's not about moving to Go, but about leaving Go. You can be pretty sure that there will be power users leaving the language unless it addresses many issues that it has. In fact, there are such people even here on HN already. It could, of course, end up being a language for non-power users, with libraries of corresponding quality, for a while. And I feel like the quality issue is already here too. But ultimately, not being an attractive choice for experienced programmers cannot be beneficial for the language in the long run.


> Neither is there data for % of all programmers will move to Go if Generics are available.

I never said that. It is you who used that kind of argument like there was any truth to it. There is none.

> Sure, just like you don't understand that(...)

I never said that. You keep on making stuff up.

> It is even better by having more programing languages with preferred feature set.

I don't think adding more programming languages to a project makes that project more maintainable and readable, it doesn't make sense. You're not going to convince me that juggling with multiple programming languages on a daily basis is "even better". It isn't.

> There is no need to decide on behalf of programmers on what concepts they should find difficult and what not. For example some people using Rust find borrow checker really difficult to grasp some find it an elegant and obvious way to track objects.

There is no need to decide that a feature you wouldn't even use yourself is bad either. Nobody would force you to use generics.


>nobody would force you to use generics

Yes and no. In a couple years, a Go developer gets hired at a shop and realizes on the first day "oh snap, these people have a years old codebase that uses generics everywhere and I'm stuck with it." Is he being forced to use them? Can he leave if he cares more about generics than his paycheck? Sure. Can you withold the combo to your safe from the home invader if you care more about what's inside it than your daughter he's pointing a gun at? Sure.

I don't think it's incomprehensible that GP prefers that a language remain simple and opinionated for the sake of what one might call "cross-developer" code.

Someone up the thread a ways mentioned 1980s C++, object oriented C++ and functional C++ as examples of how disjoint different styles of a single language can be when its building blocks allow it to be sufficiently flexible/agnostic. Another example that comes to mind is Lisp, where its ability to create macros and abstractions mean a developer can, if he chooses, essentially write his own language per project (as far as its legibility to newcomers is concerned).

C++ and Lisp are both powerful, of course. But power isn't everything. Jira is much more customizable and flexible than Trello, but Trello is clearly doing quite well with the small subset of Jira features that a large chunk of the market is satisfied with.


Indeed, the whole "use only what you think is good" from a programming language is a fallacious argument. Or, at least, it only applies to the lone wolf developer who writes everything by him/herself.

Knuth fell into making this mistake in the 1993 Computer Literacy Bookshops interview, in which he responded thus on the topic of C:

DK: I think C has a lot of features that are very important. The way C handles pointers, for example, was a brilliant innovation; it solved a lot of problems that we had before in data structuring and made the programs look good afterwards. C isn't the perfect language, no language is, but I think it has a lot of virtues, and you can avoid the parts you don't like.

Yes, you can avoid the parts you don't like, if you're like Knuth, working on one program by yourself.


>I don't understand people who hate generics. They are just types and they are here to help the developer write compile time type safe code. interface {} everywhere isn't compile time type safe.

You're right, and the FAQ explicitly says that the designers aren't happy with the choice either.

If there were a one-time compile penalty for generics, it might work. The issue is that if large code bases (Go's raison d'être) now take an hour to compile instead of five minutes, it's just dead in the water.

Also, it looks like they want to make Go a language where all code looks the same. So you won't find team A writing C++ like in 1980, team B writing C++ in OO mode, team C writing C++ in FP mode, etc.

It's so fanatical that even the spacing is standardized.


> The issue is that if large code-bases (Go's reason de'etre) will now take an hour to compile instead of five minutes

This is just FUD. Monomorphization takes a lot of time, but nothing prevents the compiler from keeping dynamic dispatch by default and just doing the type-checking, which is not expensive. And for people who need more performance, you add a compiler flag to perform monomorphization.


What you propose adds more complexity on top of generics. The Go team specifically does not want performance tweaks under flags as it adds complexity for no good reason.

If generics are to be added, they can't hurt performance and they can't hurt compile times noticeably. The tradeoff needed is too big to be worth it. If this is such a big problem to you, there are many languages out there that embrace the complexity and the tradeoffs. Scala or Kotlin might be good choices.


> The Go team specifically does not want performance tweaks under flags as it adds complexity for no good reason.

Giving the user the ability to balance his needs for compile time against his needs for performance is not «complexity for no reason» imho. There is no complexity from the user's perspective; it's just "optimise" vs. "don't optimise and compile quickly", like what you have with C compilers. It's really different from other kinds of performance tweaks, say the choice between two garbage collectors with different trade-offs. Anyway, I'm aware of the dogmatic stance of the Go team on the «simplicity» mantra …

> If generics are to be added, they can't hurt performance and they can't hurt compile times noticeably.

As I said, pure type-checked generics with dynamic dispatch under the hood wouldn't hurt compile time, and they wouldn't hurt performance because dynamic dispatch is exactly what Go already does with interfaces.

> If this is such a big problem to you, there are many languages out there that embrace the complexity and the tradeoffs. Scala or Kotlin might be good choices.

Not everybody is responsible for the technical environment he is working in. And I'm not sure that telling somebody who says he has a problem with the language «if you're unhappy then GTFO» is a really good approach.


> They are just types and they are here to help the developer write compile time type safe code.

Type safety mostly appeals to developers with a low time preference. Getting a bunch of compiler errors is the opposite of immediate gratification. In a language like Python, you can start running your program instantly and push dealing with bugs as far back as possible. You might spend more time overall, since debugging is harder and slower in the long run than having the compiler catch mistakes up front, but you get to procrastinate.


> In a language like python, you can start running your program instantly and push dealing with bugs as far back as possible.

Which is often never. Part of the reason why I avoid using anything written in Python.


1. Sometimes being able to ship is more important than being perfect.

2. Sometimes I don't care if it's perfect. If I need something for in-house use, Go (and Python) is quite enough for my needs.


Considering that C#, Java and C++ (which have those features) have 10x or more users than Go, that sounds like a good risk to take.


That's the same fallacy as the Twitter board trying to turn Twitter into Facebook. Sure, if Twitter were Facebook, it would have more users, but no-one needs a second Facebook.


That would be the same fallacy if there were only one other such language I could mention.

But between C#, Java, C++ and tons of others, it seems people do need "second" and "third" such languages...


Oh come on, those languages have been around for 20-30 years; Go has been around for 7 years and hasn't seen anywhere near the push from Google that C# and Java had from Microsoft and Sun.

And it's not as if generics is off the table, personally I think they will arrive eventually, here's what Go lead Russ Cox said in January this year:

https://research.swtch.com/go2017#generics


I can kind-of do generics with code generation, I hate exceptions anyway, and nil is only weird the first time you use it, before you find the FAQ entry.
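The "generics with code generation" workflow mentioned above usually means instantiating a source template once per concrete type, typically behind a go:generate directive. A minimal, self-contained sketch (the template and the generate helper are illustrative simplifications, not a real tool):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// A source template with type-shaped holes. A real setup would keep
// this in its own file and invoke the generator via go:generate.
var maxTmpl = template.Must(template.New("max").Parse(
	`func Max{{.Name}}(a, b {{.Type}}) {{.Type}} {
	if a > b {
		return a
	}
	return b
}
`))

// generate renders the template for one concrete type, producing the
// source of a specialized function -- manual monomorphization.
func generate(name, typ string) string {
	var buf bytes.Buffer
	maxTmpl.Execute(&buf, struct{ Name, Type string }{name, typ})
	return buf.String()
}

func main() {
	fmt.Print(generate("Int", "int"))
	fmt.Print(generate("Float64", "float64"))
}
```

The generated output would normally be written to a .go file and compiled with the rest of the package, which is why this approach is often described as doing by hand what a generics implementation would do automatically.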

I really, really doubt that this supposedly-huge crowd of developers exists who are ready-anytime-now to jump in Go and are being held back by things like you mentioned.

Because even if Go had all those things, I'm sure you would scoff at its lack of Monads, inability to easily create DSL's, presence of Garbage Collector, and no compile-time turing-complete templates ala C++. Right? Right?


Feature parity with mainstream languages is a very slow-moving target. I do scoff at many languages' mediocre type systems and extensibility, but I don't feel the need to talk people out of using them.


Hurray, Finally



