
5 Weeks of Go - iand
http://blog.iandavis.com/2012/05/23/5-weeks-of-go/
======
kibwen
_"Names imported from a package are accessed by using the last component of
the package name as a prefix (rather than the horrendous full package prefix
that you often see in Java and similar languages)"_

    
    
      import "image/color"
      x := color.RGBA{255, 255, 255, 0}
    

Perhaps I'm naive, but I thought you could do this in basically any language,
including Java (it's been a while).

EDIT: Not sure why this was downvoted, but it's a trivial Google search[1] to
show that this is indeed something that you can do in Java:

    
    
      import javax.swing.JOptionPane;
      
      class ImportTest {
          public static void main(String[] args) {
              JOptionPane.showMessageDialog(null, "Hi");
              System.exit(0);
          }
      }
    

Which, without the import, would force you to use the full name, as the author
suggests:

    
    
      class ImportTest {
          public static void main(String[] args) {
              javax.swing.JOptionPane.showMessageDialog(null, "Hi");
              System.exit(0);
          }
      }
    

I'm not intending to bash Go (it has the largest quantity of awesome features
of any language I've seen in a long time), nor necessarily to defend Java. It
just seems a bit disingenuous to claim that this is exceptional behavior.

[1] [http://www.leepoint.net/notes-
java/language/10basics/import....](http://www.leepoint.net/notes-
java/language/10basics/import.html)

~~~
iand
The difference is that javax.swing.JOptionPane is a class whereas image/color
is a package containing functions and types.

x := color.RGBA{255, 255, 255, 0} is instantiating an RGBA type from the
image/color package

~~~
yummyfajitas
In Java:

    
    
        import org.somelib.colors.*
    
        x = new RGBA(255, 255, 255, 0);
    

where RGBA is some class defined in the org.somelib.colors package.

~~~
ssmoot
I think they're looking for:

    
    
      import org.somelib.*
    
      x = new colors.RGBA(255, 255, 255, 0);
    

So you don't have to bring all the classes into scope, but you don't have to
write out the full path to use them either. Just the last bit of the package-
name as a qualifier.

~~~
plorkyeran
In C# that'd be:

    
    
      using colors = org.somelib.colors;
    

I don't know if Java has something similar.

~~~
wolverian
It doesn't.

------
taliesinb
I've been using Go for a performance-critical part of a production system and
I've been thoroughly enjoying it. But certain things have been odd and
frustrating.

One trait that I find frustrating is how pedantic Go is about types.

For example, if I have a float _x_ and an int _y_ , I can't write _x / y_ , I
have to write _x / float64(y)_ †. The intent is to force awareness of type
conversions that introduce subtle gotchas, but I don't see how it applies to
the case _float / int_.

A better example of the same phenomenon is that alias-style types need to be
explicitly cast to their aliased types, which introduces pointless noise into
what would otherwise be a nice way of 'marking up' the semantic role of
variables and arguments (think _type index uint_ ).

 _†Notice there is a common-or-garden 'int' but no common-or-garden 'float',
apparently because one should always be aware of precision. Unfortunately,
because of the alias type issue I mention above, 'type float float64' doesn't
help._

~~~
ori_b
> _but I don't see how it applies to the case float / int_

Does the int get converted to a float for this calculation? Is sizeof(int) ==
sizeof(float)? Then you can lose accuracy in the conversion; INT_MAX is larger
than the largest integer that a same-sized float can represent without rounding
error.

~~~
taliesinb
Fair point, although for the garden-variety int (int32) and float64, this
isn't the case. In an ideal world, the compiler would be smart enough to know
this, and complain appropriately.

~~~
luriel
"garden-variety" int won't be int32 forever.

~~~
taliesinb
That's true, and it will probably change fairly soon, but there are a whole
range of safe integer to float casts that will still exist even if int changes
from int32 to int64.

And these safe casts should be transparent so that people who regularly do
things with floats don't feel like second class citizens in the language.

~~~
4ad
The check cannot be made at compile time, only at run time, and even if it
were possible to make the check at compile time it would still be a bad idea.
At some point in the program's 20-year lifetime, someone will decide that
a number N used somewhere in the program has to become N+1, and N+1 doesn't
work. It would be unacceptable for such a trivial change to break the
program's compilation, and for a number to carry so much hidden state and
meaning.

~~~
taliesinb
I'm not sure what you mean by "The check cannot be made at compile time, only
at run time". Are you talking about a language other than Go? Go is statically
typed.

edit: I've been talking about compile time errors that prevent potentially
dangerous implicit casts from happening. I think you're talking about
something else.

~~~
ori_b
If int64 is the default type for int, then any int will be able to contain
values not representable as a float64 (you'd need a float80 or float128 to
make it work). To safely convert, a run-time check will be needed.

------
stcredzero
_The amazing speed of the compiler means the development cycle is as fast as a
scripting language even though full optimizations are always switched on._

This is a common misconception among those who work mainly in compiled
languages. In some dynamic languages, you don't just get the same
edit-test-debug cycle running faster: advanced users can conduct mini
edit-test-debug cycles within every part of the larger edit-test-debug cycle.

 _Go is not object-oriented. This, in my opinion, is a massive plus for the
language....In the real world, things are fuzzy and they spread across
multiple conceptual boxes all at once. Is it a circle or an ellipse? Is she an
employee or a homeworker? Classifying things into strict type hierarchies,
although seductive, is never going to work._

Here, the author is simply wrong. This is another common misconception about
OO. You don't need strict type hierarchies. You don't even need classes at all
for OO.

EDIT: Other than that, an overall good review.

~~~
iand
Thanks for the comments. I take your point about OO, but I was aiming at
producing a broad overview of Go which necessarily means simplifying in some
parts. To 90% of developers today brought up on strongly typed languages such
as Java and C#, OO means single-inheritance type hierarchies.

~~~
kibwen
I agree with your decision to not label Go as object-oriented; to too many
people that term is simply synonymous with inheritance. But rejection of the
term itself doesn't mean we should automatically reject the often-related
concepts of encapsulation and polymorphism, both of which we can have without
inheritance.

Specifically, you say this:

 _I can hear the die-hards screaming already about encapsulation, inheritance
and polymorphism. It turns out that these things are not as important as we
once thought._

And then you proceed to give examples that only attempt to refute the
usefulness of type hierarchies, without addressing encapsulation and
polymorphism. I'd be curious to know what facilities, if any, Go provides for
these concepts.

~~~
iand
Go doesn't support polymorphic methods but supports polymorphic types through
interfaces. Encapsulation is through the private/public naming convention I
refer to in the blog post.

------
timwiseman
The word "Go" is far too overloaded in the English language, and the intended
meaning is not always clear from context.

This title could easily have referred to the intended meaning of 5 weeks with
the programming language Go, to 5 weeks playing and studying the game Go, or,
in a slightly colloquial usage, to 5 weeks of constantly doing things and going.

And those are just the meanings that make sense in this context. It is also a
verb with a wide variety of (related) meanings, and it forms part of commands
in multiple computer languages (T-SQL uses GO, goto is infamous in BASIC,
etc.).

~~~
spacemanaki
Why is it actually called "Go"? The official FAQ just has a snarky response.
[http://golang.org/doc/go_faq.html#What_is_the_origin_of_the_...](http://golang.org/doc/go_faq.html#What_is_the_origin_of_the_name)
Is it really just short for "Google"?

When I searched for "why is golang called go" Google actually suggests I
change "golang" to "google" which is kind of funny.

~~~
iand
I think it's a play on the word "Co" derived from "coroutine", which is the
basis for concurrency in Go (which calls them goroutines).

~~~
pmjordan
Indeed, _go_ is a keyword in the language, for launching said "goroutines".

------
rogerbinns
I'd be interested in hearing more about the exceptions side. The usual
derision is that some top level unrelated code ends up handling them, but that
seems silly. A far more normal example would be a routine that has to get some
resources and calls various retrieval routines which may end up accessing the
filesystem, databases, or the network. Those routines could go several calls
deep before throwing an exception. The top routine can then find alternatives,
use cached versions, return an error etc.

I do like exceptions in Python where their use is ubiquitous and there is
garbage collection. I detest checked exceptions in Java because you are forced
to handle things at a level you often don't want to.

I'm 65% certain Go made a mistake not using exceptions, but would love to hear
from others.

~~~
zmj
Go uses the built-in error interface and multiple return values in the
scenario you describe. Code might look something like this:

    
    
      foo, err := GetDataFromDatabase()
      if err != nil {
          foo, err = GetDataFromLocalFilesystem()
      }
      if err != nil {
          foo, err = GetDataFromNetworkFilesystem()
      }
      return foo, err
    

There is an exception-like mechanism called 'panic', but it shouldn't be used
as a control flow tool. Panic is reserved for unrecoverable errors, not
mundane situations like a failed query.

~~~
stickfigure
This seems extraordinarily painful. I usually have five or six stack frames
between GetDataFromDatabase() and code that can _raise an error dialog to the
user_ or _return an HTTP error code_. This means every single one of those
stack frames is going to duplicate this annoying if err != nil sequence.

The article author is quite wrong when he says "exceptions are broken by
design because they force errors to be handled at points far away from the
cause". Exceptions don't force you to handle errors at any particular
location; they _allow_ you to handle errors anywhere in the stack. On the
other hand, return values do _force_ you to handle errors then-and-there,
which 90% of the time is a half-dozen stack frames away from your handler.

~~~
taliesinb
We have two orthogonal tasks that traditional exception mechanisms conflate:

1\. Handling a failed assumption, which will cause the code that follows to be
incorrect (I need the contents of this file to do my task, but it doesn't
exist, so I can't do my task)

2\. Handling a failed operation, for which there is a straightforward
workaround (if I can't write to my log file, maybe write to stderr instead and
give up)

The trouble is that the 'inner' code doesn't know which of these a given
failure is, because it is context specific. Seems like with languages like
Java, the default is 1, whereas with Go it is 2. They're kind of equivalent,
though, because you can always convert a !ok into a panic(...).

But certainly exceptions are the nuclear option, so it seems reasonable to me
that they shouldn't be the default for common operations that can fail.

~~~
stickfigure
Why wouldn't you want to use exceptions for #2? I posit that it is very rare
for errors like RPC failures and filesystem errors to be handled at the level
of the call. 90% of the time the natural handler is several stack frames up
where you can raise an error dialog or write an http error response.

Return values create endless repetitive "if error return error" code, or worse
- programmers get lazy and ignore return values, producing bugs that show up
only in hard-to-test-for situations like transient network failures.

~~~
agentS
Why is it rare for RPC/filesystem failures to be handled at the level of the
call?

I think it's much more natural for a memcache API to return an error if the
server is not reachable, so that I can continue to execute the current
function. Similarly, I think it's more natural for a "users" API to return an
error if a particular user doesn't exist, so I can redirect to a signup page
or something, rather than throw a UserNotExistException.

And yes, errors as return values may seem to add more code to simple examples.
But I find it does wonders for clarity/readability. Using "regular" control-
flow for error conditions and the "happy path" makes code much easier to
follow; this is as opposed to trying to intuit the different ways control can
jump from the happy path into the error handling.

Also, I find that having to write the "if error return error" makes me pause
to think about how to handle errors properly. For example, if the function I'm
writing literally cannot proceed, I will return the error. If it's a really
weird place to be getting an error, I write it to a log and return the error. If
I can ignore errors (like the memcache example above) then I keep going.

~~~
stickfigure
It's rare because RPC & filesystem access is usually wrapped in a library or
module which does not have any business knowledge of the task at hand. A more
realistic example is your getUser() call which calls 10 stack frames down to a
filesystem access, and let's say you get a filesystem error. The natural thing
to do is throw an exception which goes up all 10 stack frames and gets caught
by getUser(), which provides some sort of "sorry, internal error" message to
the client.

The notion that typing "if error return error" through 10 stack frames makes
you think more clearly is absurd. Decades of C experience has shown that lazy
programmers will ignore critical error conditions and introduce hard-to-find
bugs because execution plows ahead past the original error.

Whether getUser() returns an error object or throws an exception is a question
of high-level API design. Sometimes UserNotExistsException makes sense,
sometimes a null (or error) result makes sense. That is an entirely separate
issue. Any designer with significant experience will use both approaches as
appropriate.

~~~
agentS
Again, I'm not disagreeing that exceptions tend to be more terse. Exceptions
optimize for writability at the expense of readability. Reading linear code
that uses if statements and loops is easier than reading code that uses
try-catches, especially when you have to work out all the ways control could
jump from happy-path code into error-handling code.

You tend not to be writing all 10 methods in a particular call chain at the
same time. You will be writing a few methods that call each other inside a
module. You paint this as a massive timesink, and I can assure you, it
definitely is not.

Lazy programmers can also have catch-all exception handlers. I don't see how
exceptions help make lazy programmers perform due diligence.

That being said, there is a place for exceptions. Truly exceptional conditions
such as index out of bounds, or nil pointer dereference, or some internal
precondition violated, should be treated in an exceptional manner. Go does
this with panics, and panics are almost never caught as part of control-flow.
They tend to be caught at the root of goroutines, logged, and the goroutine
killed. The HTTP library, for instance, will catch panics in any goroutine it
spawns, and write a 503.

I just find it odd that people treat commonplace things as exceptional. File
open failed? Could not resolve hostname? Broken TCP connection? These aren't
particularly exceptional things. They are probably not a result of a bug, and
so should be handled by the programmer.

~~~
stickfigure
We seem to be going around in circles here... You say 10 layers of _if error
return error_ is not a time sink, and I say it is. I spent about a decade
doing C and C++ programming before Java, and IMHO exception handling is second
only to garbage collection as a life-changing language improvement.

There is a key difference here: When a lazy C or Go programmer fails to check
an error value, execution continues - possibly many lines or stack frames
ahead before some sort of failure symptom is observable. In perverse cases
this can produce silent data corruption. I spent far too much of the 90s
chasing down these kinds of problems.

When a lazy programmer uses a catch-all exception handler, the error is still
caught - immediately - with the full stack trace of the original problem. This
is golden. Furthermore, a catch-all exception handler that prints an error
message to the user/http request/whatever is often _exactly_ the right
approach.

There's a lot of stupidity in the Java standard libraries, but your examples
(file failure, bad hostname, broken connection) are exactly the kinds of
things that should be exceptions, and are usually best caught at a high level
where meaningful errors can be reported to the user.

------
ssmall
Whoa, I didn't know about being able to import packages from a URL. That is
pretty slick.

~~~
supersillyus
It is quite neat and mighty convenient, but I sorta live in fear that at some
arbitrary point the library will change and my code will just stop working. I
know this isn't a problem unique to Go and there are solutions, but the lack
of explicit versioning makes me nervous. Then again, it hasn't bitten me yet,
and I certainly have benefitted from the ease.

~~~
jemeshsu
One solution is to add the version to the URL. Some examples from the Google
API client for Go (<http://code.google.com/p/google-api-go-client/>):

    
    
      code.google.com/p/google-api-go-client/tasks/v1
      code.google.com/p/google-api-go-client/books/v1
    

~~~
technomancy
This is absolutely the only right way to do this. It's very unsettling that
locking to a specific revision isn't considered the default.

Depending on git master is cowboy coding at its most cavalier; build systems
should prize repeatability above all else.

------
taliesinb
The tools are also really nice. "go build", "go run", "go get", etc. Having
dependency management and build management built into the language - even if
it is not super sophisticated yet - is a great idea.

------
JadeNB
I don't understand this sentence:

> Enforcing a brace placement style and other conventions means the compiler
> can be super fast.

Surely the lexical-analysis phase is not the bottleneck in compilation? (The
context strongly suggests that 'other conventions' means 'other lexical
conventions'.)

------
ssmoot
This is a stupid question, but it's beyond difficult to search for an answer
(I've really tried!):

Can I import Java libraries into Go? If I can't find a Go library to do what I
need, what are my options besides messaging to another process, or
(presumably) finding an equivalent C library (I don't trust my C skills enough
to make wrapping something a confidence inspiring no-brainer).

Are there any general language/library interop details out there that
golangers could point me towards?

~~~
jbarham
> Can I import Java libraries into Go?

No. Go compiles to machine code, not the JVM.

Go's built in cgo tool makes it very easy to wrap C code. See
<http://golang.org/doc/articles/c_go_cgo.html>. Crucially, you can write most
of the wrapper in pure Go and the C boilerplate code is generated
automatically.

------
nextparadigms
So, any chance we'll see Android using Go anytime soon?

~~~
rogerbinns
You can run shared libraries on Android very easily. The shared libraries can
do OpenGL (video) and OpenSL (sound), plus most regular libc stuff. Most of
the Android APIs are exposed as Java and you can use JNI to call into the
shared library (ultimately it is C). As far as I can tell you can't generate a
shared library from Go so this "normal" approach is off the cards. (There are
also issues like how the Dalvik garbage collector and Go GC would interact.)

To have a pure Go binary would effectively require a reimplementation of the
Dalvik framework and all its various classes/methods, which would be a huge
undertaking. It would be extremely unlikely to work on existing Android
versions and would only appear in a future version. In other words, it would
be many, many years before you could depend on it being on Android devices,
even if this was done for the next Android version.

The shared library thing is the biggest problem though. Android applications
are really mashups of components from the same app or others (see Activities,
Services, Content Providers, Receivers). There isn't actually a main() method
or equivalent. Instead the components are loaded and called as needed.

------
exim
Could someone point out the reasoning behind putting types after variable
names? I.e., why not `int i` instead of `var i int`?

~~~
drivebyacct2
This is mentioned somewhere but it removes the ambiguity and bugs that can be
introduced by pointer types.

Consider: int* i, j (or rather int *i, j)

~~~
to3m
Well, there's no ambiguity in this case, because "int" is a keyword, known to
the compiler, and will always be interpreted as a type. But if you had
something like "x*y,z", then the meaning of this would depend on whether x is
a type or not.

~~~
singular
I think this is more in reference to the common C error of:

    
    
        int* x, y;
    

when you meant to declare both x and y as int pointers, here only x will be a
pointer and y will be a straight int.

Obviously the Go designers could have chosen to simply make this mean that
both x and y are pointers, but that would be somewhat confusing for those
familiar with C.

As iand says, there is a rather good article on this which goes into a lot
more detail - <http://golang.org/doc/articles/gos_declaration_syntax.html>

------
tinyjoe
the only thing i'm looking forward to in Go is inheritance

------
drivebyacct2
Yes, Go is different. Yes, the language designers made a lot of decisions that
people will complain about [at first]. Yes, I'd [really] love to have generics
(even just for simple code repeat cases).

But goodness I love writing Go. Sorry, it's hard for me to be terribly
specific outside of, for some reason, I'm very productive with it and I love
the standard libraries. And where they're lacking Google Code, Github and
IRC/play.golang.org make up for it.

~~~
taliesinb
I agree that it is a pleasure to use. I also suspect that generics will be
forthcoming after everyone understands the 'theory of Go' a little bit better.

Unfortunately, the standard libraries that I've sampled have felt
inconsistent. Two examples I happen to remember:

1\. The _strconv_ package has an _Atoi_ (a wrapper around _ParseInt_ ) but no
_Atof_ \-- instead you use _ParseFloat_ directly, and you have to give it as a
second argument one of the integer _values_ '32' or '64' to set the precision
it parses at (why not use an enum or two funcs?).

2\. The _bytes_ package has versions of _Index_ that search for single bytes (
_IndexByte_ ) as well as byte slices ( _Index_ ), a nice performance-friendly
touch. However _Split_ only has the byte slice version. _SplitByte_ would
probably be twice the speed.

If you are going to write a package to stand the test of time, _be
consistent_.

 _(edit: I got Atoi the wrong way round)_

~~~
supersillyus
For what it's worth, I've looked into the 'Split' case, and the performance
difference when specialized to the single-byte case is about 2%, which is
mostly because Split already has built-in specialization for the single-byte
case, which amounts to a couple extra instructions in a function whose running
time is dominated by allocation.

I think they made the right choice there; the Go team seems very good about
optimizing only where it matters; there's lots of low hanging fruit, but the
majority of it isn't very useful fruit.

~~~
taliesinb
Just for fun, I just looked into it too. Which factor dominates depends on the
kinds of strings; for large strings, extra instructions in the loop matter
very much. I'm processing very large strings.

I did 128 runs on a byte array of length 2^24. It has delimiters placed at
positions {2^i, i < 24}.

I tested my implementation against both the "bytes" package implementation,
and a copy of the relevant portions of the "bytes" package (to account for any
odd effects of inlining and separate compilation). I did the set of timings
twice in case there was any GC.

Here's the wall time in milliseconds for the three implementations, on a 2010
Macbook Air.

    
    
      mine   3313    copy   4709    bytes   5689
      mine   3327    copy   4660    bytes   5660
    

My single-byte implementation is about 40% faster than the local version, and
70% faster than the "bytes" version. Not quite twice, but I wasn't far off.

But aside from performance, there is just consistency of interface. Once
you've established a 'Byte' variant of some functions, you should do it for
all the common functions.

~~~
supersillyus
It doesn't sound like you're using the benchmarking tools that Go provides;
I'd recommend using them if you're not.

Ah, yeah, I was testing a much much smaller byte array with multiple split
points. I'm not terribly surprised that in your case you've found the hand-
coded byte version to be faster (though the difference is more than I would've
guessed; care to post the code?) However, I'm still not sure it's merited in
the standard library. Split() could pretty easily be further specialized to
basically call your single byte implementation or equivalent at the cost of a
handful of instructions per call. Alternately, if you know you're dealing with
very large byte slices with only a few split points, it is only a couple lines
of code to write a specialized version that is tuned for that. The same
argument could be made for IndexByte, but I'd claim that IndexByte is a much
more fundamental operation in which one more often has need for a hand-tuned
assembly implementation. I wouldn't say the same for Split. There's a benefit
to having fewer speed-specialized standard library calls, and I don't think
splitting on a byte with performance as a primary concern happens often enough
to merit another way to split on a byte in the standard library. But I'm
certain that reasonable people who are smarter than me would disagree.

~~~
taliesinb
Sure.

Here's the implementation of the three versions:
<https://gist.github.com/2821937>

Here's how I was originally benchmarking things:
<https://gist.github.com/2821943>

Here's benchmarking using the Go testing package:
<https://gist.github.com/2821947>

Some of the performance I was seeing on my crappy benchmarker evaporates using
Go's benchmarker. But there is something else afoot. Try changing 'runs',
which controls the size of the inner loop (needed to get enough digits of
precision):

From "runs = 128":

    
    
      BenchmarkSplit1	       1	2478184000 ns/op
      BenchmarkSplit2	       1	2787795000 ns/op
      BenchmarkSplit3	       1	2747341000 ns/op
    

From "runs = 32":

    
    
      BenchmarkSplit1	1000000000	         0.62 ns/op
      BenchmarkSplit2	1000000000	         0.68 ns/op
      BenchmarkSplit3	1000000000	         0.56 ns/op
    

Why did it suddenly jump from 1 billion outer loops to just 1? I think there
is a bug in the Go benchmarker here, because if you take into account the
factor-of-4 difference in work and then divide by 1 billion, it looks like the
first set of ns/op _are actually correct_ and aren't being scaled correctly.

Either way, the increase in performance is now only about 10%. Which I agree,
isn't anything to write home about. More bizarre is that the bytes package one
is faster for runs = 32 but not for runs = 128. I can't make head or tail of
_that_ , or why it should matter at all -- unless there is custom assembly in
pkg/bytes that has odd properties inside that inner loop.

But this is only one half of my complaint: it's the interface that matters,
and I see no good reason for having IndexByte, but no CountByte and SplitByte,
contrary to what you say about which is more fundamental. Having to construct
a slice containing a single byte just to call Split and Count left me with a
bad taste in my mouth.

------
gringomorcego
I don't get why the compiler speed is such a big deal.

<http://en.wikipedia.org/wiki/Incremental_compiler>

