Go 1.3 is released (golang.org)
358 points by enneff on June 19, 2014 | hide | past | favorite | 108 comments



The static analysis features are exciting: http://golang.org/lib/godoc/analysis/help.html

"Because concurrent Go programs use channels to pass not just values but also control between different goroutines, it is natural when reading Go code to want to navigate from a channel send to the corresponding receive so as to understand the sequence of events.

Godoc annotates every channel operation—make, send, range, receive, close—with a link to a panel displaying information about other operations that might alias the same channel."

Understanding large concurrent programs is significantly easier with a precise pointer analysis to help you track objects.


This is wonderful.


The GC changes are probably the most exciting (with deference to the Plan 9 support ;))

I still can't tell if it's evolved beyond mark-and-sweep (I have to assume not); we've heard the GC would be seeing improvements and that the current (now older) method was a stop-gap.


This is still the mark-and-sweep collector. We hope to begin work on a new GC over the next few months.


From the release notes,

>> The garbage collector has been sped up, using a concurrent sweep algorithm, better parallelization, and larger pages. The cumulative effect can be a 50-70% reduction in collector pause time.


Minor note: Go has had some form of Plan 9 support for years, though it was mostly broken; we just made it much better in this release.


I would have assumed Plan 9 had been supported in some capacity for obvious reasons.


The release notes mention the graceful shutdown of an http.Server:

"The net/http package now provides an optional Server.ConnState callback to hook various phases of a server connection's lifecycle (see ConnState). This can be used to implement rate limiting or graceful shutdown."

Does anyone here know of a code example for this?



I'm super pumped about this -- I plan to spend the weekend upgrading a graceful shutdown server I maintain to use it.

https://github.com/braintree/manners



Please note that net/http does not implement graceful shutdown itself; 1.3 just makes it easy to implement in your application code.


Current Go users: what's the state of package versioning right now? Is vendoring still the answer?


We use Godep, it is very good. As per another answer to your question: with '-copy=false' it behaves a lot like bundler.lock. Having spent a lot of time working with it we've found a few areas where you can get burned a little; particularly if you've structured your repos as a set of libraries, as seems to be the encouraged golang pattern.

When you have multiple libraries you have to be very specific about when you run godep, lest you find yourself with two libraries needing different versions of a common library; for example, Main imports Foo and Bar, which both import Baz. Godep provides a mechanism for handling this: each dependency is explicitly locked to a fixed revision (e.g. a commit SHA, in the case of git). The pain comes during debugging, as it can be very hard to reason about which version of a library you're actually using.

Additionally, the revision aspect is a bit of a PITA. We use a development flow that rebases our small commits into one big commit and then merges that into our master branch; if you ran godep prior to that, you're now referencing a commit that no longer exists. Given the chain of references that can exist, this can go a very long way down. The same pattern also forces you to push your dev branches to an origin server, since godep checks out the repos during the build; while that's a pretty benign concern, it's a PITA if you forget and your build breaks because of it.

We're strongly considering moving to "one big repo" to help combat this issue (as well as a few others) for our internal golang repositories. Referencing "published commits" in 3rd party libraries is an acceptable level of pain. We're not entirely sold on this yet... just considering it.


No need to vendor. Use Godep without copying (godep save -copy=false) to create the equivalent of a bundler.lock file and check that into source.


There are other ways around, but I'd say the community is solidifying towards godep. Someone correct me if I'm wrong.


There seem to be a lot of comments here recommending godep, but just to throw my experience in: none of the projects I've interacted with use godep (other than the Heroku buildpack, which was written by the author of Godep).

It seems to be a solution for some (not all) projects that are released in binary form, but that isn't relevant to most projects out there[0]. I have never felt the need for what godep provides; vendoring myself has been sufficient for the (very rare) case in which I need specific versions of dependencies other than tip/trunk.

I asked around on #go-nuts, and (though the sample size was small), the other regular contributors who idle in the channel seemed to have the same experience.

YMMV obviously.

[0] https://botbot.me/freenode/go-nuts/2014-06-19/?msg=16563763&...


In my opinion, yes, vendoring your dependencies is the way to go. My preferred way to do that is to use git subtree.


Yep. Either through godep or a VCS-native solution like git submodules or git subtrees.


Also, for variety (and a shameless plug), check out:

https://github.com/mediocregopher/goat


Check out gom, it's really popular too.


godep is winning and works well, but I still think there's an opportunity for something like a Go-flavored CPAN.


Another +1 for godep.


I'm very happy for this:

"Cross compiling with cgo enabled is now supported. ... Finally, the go command now supports packages that import Objective-C files (suffixed .m) through cgo."

Great news!


I'm a little confused at what it takes to get this going. I want to use cgo for a linux/arm target, built on darwin/amd64. Do I need to first build a gcc toolchain for linux/arm on my Mac?


Yes.


> Previously, all pointers to incomplete struct types translated to the Go type *[0]byte

This makes me really happy. I end up writing a lot of bindings to C libraries (libjpeg, libpng, etc.), some of which use incomplete types to hide the contents of their internal structs. With this fix I'll finally be able to work within the constraints of the type system and stop hacking around it with casts to/from unsafe.Pointer.


I just noticed this session in the Go track for Google IO... It might be nothing of course, but I don't remember seeing anything Android related in a Go session before: https://www.google.com/events/io/schedule/session/eeeb1991-0...


I rather like the new Server.ConnState callback, as well as the various HTTP timeouts. Yay for simple but useful quality of life improvements!


Very excited about sync.Pool and the contiguous stacks.


  warning: GOPATH set to GOROOT (T:\Tools\Go) has no effect
  go build runtime: windows/386 must be bootstrapped using 
  make.bat
So, if you installed Go in a different directory, ignore the Windows "PATH" environment variable entry and just execute "make.bat" in the "Go\src\" directory the first time.


As a scientific programmer I find it a shame that they have decided not to go with operator overloading. The rationale from the FAQ is that it "seems more a convenience than an absolute requirement". In the case of scientific software though it usually is a requirement.


I'm honestly glad that Go, unlike C++, does not have features that are useful only for a small subset of programmers. The only 2 cases I've seen where operator overloading is useful are:

1. String concatenation (Go has it)

2. Matrix/vector operations

That's it. Any other use case for operator overloading is dubious at best. It sucks for scientific programmers, but Go is a general purpose language, not a Scientific Programming language.

Complicating parsers, the grammar, readability, all those things, just to please your sub-group -> no thanks.


> Complicating parsers, the grammar, readability

Operator overloading has absolutely no effect on the parser or grammar.

It may harm readability but that's more a question of naming. An operator is just a name. If you use that name to refer to something unexpected, you'll harm readability. If you use it for something intuitive, you'll improve it.


I don't disagree with you. I'm not suggesting that Go designers made the wrong decision with the goals they had. I am jealous that it's less useful to me though.


Forgive my ignorance: why is operator overloading particularly useful in scientific computing?


Because it lets your math code look more like math and less like code.

    v := Vector{}
    v2 := Vector{}
To add the two vectors, you currently have to do something like

    result := v.Add(v2)
rather than the nicer

    result := v + v2
At a small scale, it doesn't seem like a big deal. In a large and complicated scientific program, it can make the code a lot harder to read.

Note: I think it is good that Go does not have operator overloading, though I think it's a shame that means it's not as good for scientific & math programming.
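To make the comparison concrete, here is a runnable sketch of the method-call style (Vector and Add are illustrative names, not from any real package):

```go
package main

import "fmt"

// Vector is a toy 2-D vector; without operator overloading,
// addition is spelled as a method call rather than v + v2.
type Vector struct {
	X, Y float64
}

// Add returns the component-wise sum of v and w.
func (v Vector) Add(w Vector) Vector {
	return Vector{v.X + w.X, v.Y + w.Y}
}

func main() {
	v := Vector{1, 2}
	v2 := Vector{3, 4}
	result := v.Add(v2) // would read result := v + v2 with overloading
	fmt.Println(result) // {4 6}
}
```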


So your code can still look like math when the operands are complex types.


Is it really a "requirement," though? It sounds like it makes it easier to look at formulas as you would, say, on paper, but you can still build the same things as methods & functions.


It's not a "requirement" in the strict sense of the word, but I think it's useful enough to prevent the use of Go in science. Java has essentially suffered the same fate in science.


Could you please give an example of code using one style vs the other that showcases why someone would turn down a language for it?


Say I had an array of masses and their velocities and I wanted to calculate the kinetic energy of each mass using the equation:

    E = 1/2 mv^2
With operator overloading:

    E = 0.5 * (m * v)**2
Without:

    E = (m.mult(v)).pow(2).times(0.5)
The operator overloaded example is Python (numpy) - the non-overloaded one is something I made up, but it's basically what it would need to look like.

I think the first example is much closer to the maths.

This is not some contrived example; if you have raw data and you're using mathematical equations to work out relationships, you do this kind of thing all the time.
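For comparison, a hedged sketch of how that element-wise computation looks in Go today (kineticEnergies is a made-up name):

```go
package main

import "fmt"

// kineticEnergies computes E = ½mv² element-wise. Without a
// numpy-style vectorized expression, Go spells this as a loop.
func kineticEnergies(m, v []float64) []float64 {
	e := make([]float64, len(m))
	for i := range m {
		e[i] = 0.5 * m[i] * v[i] * v[i]
	}
	return e
}

func main() {
	m := []float64{2, 4}
	v := []float64{3, 1}
	fmt.Println(kineticEnergies(m, v)) // [9 2]
}
```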


Look at this http://stackoverflow.com/questions/11270547/go-big-int-facto... and compare how the function looks with int and with big.Int. Interestingly, big.Int has a method MulRange, which does exactly what the function would otherwise do, but this won't be the case the majority of the time. Given the extra tedium involved, someone working heavily with vectors, matrices, big numbers, etc., would certainly care about operator overloading.


I am not a scientific programmer, but I'm going to attempt to provide an example anyway until someone else does one better.

I think there are certain mathematical operations that operate over what would be implemented as complex objects, so it is convenient to continue using these agreed upon symbols to implement your work.

  // addition and multiplication of native integers
  1 + 3 * 4

  // add and mult of math objects
  matrixA + matrixB * matrixC

  // here the math is slightly occluded
  add(matrixA, multiply(matrixB, matrixC))


There are proposals out there for adding multi-dimensional matrices to the language, and I do like the recent proposal for 2-D matrices I read about. I think that would be better for the language overall than adding operator overloading.


Just as a clarification, the proposals are to add multi-dimensional slices (dynamically sized arrays). This statement is splitting hairs, but a Matrix is a container for numeric types that can do things like multiply and have a cholesky decomposition. A multi-dimensional slice is just a container for whatever you want. Such a container is very useful for matrices, but a package would still have to turn a multi-dimensional slice into a Matrix in order to add methods like Add.


I'm very curious why you say operator overloading is a requirement. It's certainly nice to look at code with operator overloading when you're trying to understand a problem (or maybe it's a curse when you're trying to understand what a block of code actually does!), but it's the plainest case of syntactic sugar I can think of in language design. What are you getting from operator overloading that you can't get from chaining methods?


If you work with vectors and matrices and do a lot of computations, it is very convenient to be able to express the formula you have on paper in code with nearly the same syntax, thanks to operator overloading. This is one area where, for example, FORTRAN still shines: you can really TRANslate your FORmula into code easily.

In the case of FORTRAN, that's because vectors and matrices are recognized types; in other languages they are not built-in types, but the convenience is added through operator overloading.


Yes, exactly. The same could be said for computer graphics, where we traffic in vectors and matrices all day. I want a language in which those are base types, with all of the proper operators already defined, so I don't need operator overloading. Yes, I could write in Fortran, but something a little more modern would be nice.


Fortran is coming up to speed pretty quickly. The 2003 and 2008 updates to it are pretty nice. In fact, I'm in the process of porting the nanomsg sockets library over to it. You can even use GTK3 and Glade to make GUIs for it now, too. I do a lot of matrix calculations, so Fortran is absolutely invaluable.


A simple preprocessor that transforms custom infix operators into method calls might do the trick here. Go comes with good libraries for building this, so you wouldn't have to write a new Go parser, and there are other preprocessors you can look at for inspiration.


That would produce a custom language not understood by other Go users and destroy much of the benefit of the compiler reporting errors straightforwardly.


Sounds very much like the incumbent popular language supporting operator overloading!


Congratulations and excellent work!!!


Maybe the time has come for Google to use Go for Android. I think it will give Go a nice boost.


Swift's high-level syntax will probably open up iOS development to many people who otherwise would have seen Objective-C's square brackets and run away.

I like Go a lot, but Google switching to it for Android would probably have the opposite effect. Many people learn Java in school. Almost no one learns Go.


People don't learn Objective-C in school, but it has not slowed down adoption of the language for the iOS platform. If the platform is popular, and a language supports it well, programmers will learn.


I love Go; having Go as an option for Android development would be awesome because the language is just so much fun. I'm not getting my hopes up though, because Go was intended for server-side work. Object-oriented design is a great strategy for rapid app development, and that's the key difference between Swift and Go that I've noticed.


...But does it have generics?


If they add generics, I would expect it would be in a 2.0 or some other release where they're willing to make possibly backwards-incompatible changes, not a point release.


I think it's pretty clear by now Go is not aligning itself with type parameters. If you need generics, don't use Go. Most programs probably don't need generics.


My understanding is that the Go team isn't opposed to generics, and in fact is quite aware of their value, but that:

- it isn't a priority

- it is a tricky thing to get right in the context of Go

- there haven't been any satisfactory proposals thus far

If this isn't quite right I'd love to be corrected.


That was the common understanding from 2-3 years ago, when they left the question hanging and said that "we'll do it when we find the proper way etc".

In the meantime there have been several satisfactory proposals (in the lists and elsewhere), and it's not like 200 other languages implementing Generics have had many issues with them.

So, it boils down to the Go core team putting up the solution to an impossible tradeoff as the holdback (and a somewhat exaggerated one at that, relative to the costs involved).

And then, they said "the language won't change" etc recently.


So it's more like Guido refusing to add TCO to Python because that would be oh so hard to do. And look, the stack trace! The horror, the horror. /s


I'd like to add that generics can add a lot of complexity to a compiler, and the compiler's simplicity is IMHO of great benefit to the language. I'm happy they're taking their time with it; it's better than ending up with a half-assed implementation like Java.


That is basically the "please go away and stop asking" response.


Most programs don't need generics, but a lot of programs benefit from having them. Generics and exceptions are the two things that keep me from considering Go for future service development. We've stood up a couple of successful Go services, but the maintenance overhead from not having these two things has been substantial.


Go has something that is functionally equivalent to exceptions; they just don't call them exceptions and make them look a lot different from what we are used to.


If you're talking about panics, you can only handle them in defer funcs if I'm not mistaken, which makes them significantly different from exceptions that I'm used to that can be caught in any part of the code. The dependency on defer in my opinion makes them unusable as typical exceptions in my case. This was perhaps the intent, making panics and the recoveries thereof truly exceptional and painful, pushing one towards the normal error handling semantics, which is what I dislike.


> If you're talking about panics, you can only handle them in defer funcs if I'm not mistaken,

That's true...but no different from "exceptions can only be caught in catch blocks".

> which makes them significantly different from exceptions that I'm used to that can be caught in any part of the code.

It doesn't make them any different from exceptions -- just as a catch block can be anywhere up the call chain, a deferred function can have been set anywhere up the chain.

Defer is basically "finally", except the position is different, and "recover" inside it lets it also do what "catch" does.


It's true that you can only handle them in defer funcs, but the defer func could be anywhere up the stack, not just the current function you're inside.

(I'm not sure if that was your understanding or not)

"The panic and recover functions behave similarly to exceptions and try/catch in some other languages in that a panic causes the program stack to begin unwinding and recover can stop it. Deferred functions are still executed as the stack unwinds. If recover is called inside such a deferred function, the stack stops unwinding and recover returns the value (as an interface{}) that was passed to panic."

https://code.google.com/p/go-wiki/wiki/PanicAndRecover
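A minimal sketch of that recover-up-the-stack behavior:

```go
package main

import "fmt"

func inner() {
	panic("boom")
}

// outer sets no handler at the panic site itself; the deferred
// func registered here catches a panic raised anywhere further
// down the call stack, like a catch block up the call chain.
func outer() (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	inner()
	return nil
}

func main() {
	fmt.Println(outer()) // recovered: boom
}
```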


Yes and no: errors are just return values. They're not special; they're conventional. So you get the whole type system to express errors, because they're just values, and you don't have a different control flow.

It’s not without warts. You can bail with panic(), which is essentially a throw.

It’s tedious to handle errors, but that’s because actually handling errors is tedious.


"Actually handling errors" isn't tedious at all in Scala. Chain flatMap() across methods that use Try[A], handle the Failure[_] at the other end, and you're done.

I'm not joking when I say it really is that easy. Go makes it (and, really, many other things) very difficult for reasons that are at best murky.


So you call, say, 3 methods, and at the end you have a "file not found" error... which call resulted in that failure? I don't know Scala specifically, but from what I know of most languages that support this kind of chaining, you can't tell. And that's the problem. Did your initialization fail to find its config file in step 1? Did you fail to find the target file you were going to transform in step 2? Was there some other failure in step 3? You can't tell, so you can't actually handle the error.

This is effectively like doing this in Java:

  try {
      DoX();
      DoY();
      DoZ();
  } catch (Exception e) {
      // The code has no idea what failed here.
  }
Whereas the Go code looks like this:

  if err := DoX(); err != nil {
      // handle error from DoX
  }
  if err := DoY(); err != nil {
      // handle error from DoY
  }
  if err := DoZ(); err != nil {
      // handle error from DoZ
  }
This is what Go programmers mean when they say "actually handle the errors". At each step you handle the specific failure from the specific call. It's somewhat more verbose, but it's a LOT more robust against real-life failures.


You'd have the same effect (you don't know what specific call failed) if all of the comment lines said "return err", which is a common pattern in Go. The Scala approach mentioned is just sugar for that. You can of course handle each case separately if you want in Scala, just as you can in Go, but the common pattern has sugar.


What? Actually, absolutely not.

If you only care about whether something succeeded use Option, if you care about the error use Either, Validation, etc.

Just pick the right Monad.


Yes, I agree, but I don't see how that's related to what I said.


Then maybe don't comment?


> So you call say 3 methods, and at the end you have a "file not found" error... which call resulted in that failure?

The first exception hit pops out the other side and you pattern-match against it. So, the one that did file access and returned a FileNotFoundException. If you have multiple pieces of code that can return FileNotFoundExceptions, you can pass a message in the exception, just like any other Java exception. You can often be more type-specific, too. Bear in mind that defining a new exception in Scala is a one-liner, and you can encapsulate your FileNotFoundException in a RetrievalFailedException very easily and cleanly.

Your method is not more robust, I assure you--it's just verbose and both typo- and thinko-prone.


Errors are just return values, but in Go you either explicitly ignore the error (using the "blank identifier" _) or assign it, in which case you have to deal with it (otherwise Go will complain about an unused variable).

And that is what makes Go awesome. In Java or PHP, you, as a developer, can never know whether the functions you are calling will throw an exception. The only way to know? Read the docs, if you're lucky and there are docs, or read the code. The result? Your program will crash if you didn't add your try..catch block.

Go forces you to either explicitly ignore errors (and your fellow co-workers will know you did it on purpose) or deal with them. No surprises.


I get the same in C, with gcc's __attribute__((warn_unused_result)).


This comment should be more prominent, warn_unused_result is pretty awesome!


> In Java or PHP, you, as a developer, can never know whether the functions you are calling will throw an exception.

That's what checked exceptions are for in Java.


The larger problem with exceptions, checked or not, is that they interrupt the flow of your program, which is too bad since most exceptions are not that exceptional. That problem is exacerbated by developers who don't know how and when to throw exceptions and end up throwing them for easily recoverable errors. The other thing I dislike about exceptions is that your code ends up with tons of try..catch blocks.


It's that convention and exclusion of special abstractions that appeals to me by keeping the cognitive load to a minimum. I can focus on the problem without thinking about the language. If Go became like various other languages, then what would be the point? We already have Java, C#, ...


The only real significant difference between panics and Java/Python-style exceptions is that, reflecting (apparently) the same explicit-over-implicit philosophy that governs Go's other error handling, the common catch-one-kind-of-exception-and-implicitly-rethrow-everything-else idiom is inconvenient and verbose to express in Go. If you really want that idiom a lot, it seems like it could be implemented as a library function that creates that kind of handler, eliminating the boilerplate at each use; but given Go's convention that panics generally don't cross public API boundaries, you shouldn't need the idiom as much in Go code.


Most programs don't need generics like most programs don't need static typing. Just use "any" and downcast.


That's entirely different. Generics can be replaced by code-generation (C++-style templates) in a preprocessor with no loss of safety, or by boring copy-paste (with some loss of safety :-().

Non-static typing is a massive loss of safety.


Static typing can be replicated in a dynamically typed language with a preprocessor too. (For example, see TypeScript.)


Templates/macros/preprocessing do sacrifice separate compilation and separate type checking; so your error messages might not be comprehensible, your compile times can be much slower, and...you need source for your generic container. Rob Pike probably knows the drawbacks of that approach well and I doubt he would go there.


No program really needs anything above assembly.

It's the "nice to have" things that make programming easier/more correct/better.


A single instruction is sufficient. Anything more just complicates things and is completely unnecessary.


Most programs do list manipulation and list manipulation in Go is a pain. Generics are one solution for that.


I agree generics would be a serious win for Go. In the meantime, however, we have this workaround/hack, which alleviates some of the pain: http://clipperhouse.github.io/gen/


Appreciate the mention! Also gen is getting more, um, generic with the notion of custom ‘typewriters’, out on this branch: https://github.com/clipperhouse/gen/tree/3.0


List manipulation? Do you mean linked lists, or arraylists? The only common use of linked lists I've encountered is writing custom allocators, and most programs don't need custom allocators.


lolwhat? How exactly would list manipulations in Go be made easier because of generics?


How many of these[1] functions can I write in Go without doing a bunch of casting? Probably less than half.

[1] https://hackage.haskell.org/package/base-4.7.0.0/docs/Data-L...


Go "lists" are type dynamic -- "generics" in the most important place for generics to be.


Heterogeneous lists are not "generics."


A Go map has the definition map[KeyType]ValueType, and Go slices and dynamic arrays are similarly typed. That isn't heterogeneous, it isn't of type object, and it covers 99% of the common uses of generics.

Yeah, the native fundamental collections are generic.
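To illustrate the distinction being drawn here: the built-in collections are type-parameterized at the declaration site, but user code cannot define containers that way (IntStack is illustrative):

```go
package main

import "fmt"

func main() {
	// The built-in map and slice types are parameterized over
	// their element types at the point of declaration.
	scores := map[string]int{"go": 1}
	nums := []float64{1.5, 2.5}
	fmt.Println(scores["go"], nums[0]) // 1 1.5

	// But user code cannot write a parameterized container of its
	// own; a stack of ints is a distinct type from a stack of
	// strings, with any shared logic duplicated or cast through
	// interface{}. (IntStack is illustrative.)
	type IntStack []int
	s := IntStack{}
	s = append(s, 7)
	fmt.Println(s[0]) // 7
}
```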


I have a feeling we're talking about different things: when people speak about Go generics, they mean that the language doesn't support parametric polymorphism. Yes, some of the built-in types are special. That's not the point.


Great, downvoted 'to oblivion' for asking a question that would have no doubt come up in the thread, anyway.


Pretty sure most people thought that was a joke.


So? It brought up a subject that was probably on the minds of at least 20% of the participants in this thread anyway. Might as well get it out of the way.


I'm really sorry, everybody who replied seriously. I just couldn't help myself. I...actually, I don't regret it.

Edit: I'm kind of hoping "But does it have generics?" will become the new "Will it blend?" for golang discussion.


To anyone that wonders: "Will it blend" was never a karma magnet on HN.

One of the reasons I like HN.


Congrats, guys! Thanks for the great effort.



