
Go: 90% perfect, 100% of the time - enneff
http://talks.golang.org/2014/gocon-tokyo.slide
======
kitsune_
I've created a couple of projects in Go so far (a map server, similar to
TileStache etc.) - Sure, the lack of generics is annoying, but I think the
real elephant in the room is dependency and package management. The way Go
expects you to organize your source code (the workspace concept, GOPATH, go
get and github-repos as import paths) is a huge roadblock here. To this day
the Go community hasn't settled on any kind of solution. If you bring any of
this up on the mailing list you'll be crucified by the Go zealots who think
Go's way of doing things is a thing of immaculate perfection.

If you want to be safe you will have to version your dependencies with your
project. So one way to achieve this is to put your entire GOPATH src folder
into your project repository. Of course this defeats the purpose of resolving
"github.com/myname/myproject" with "go get". Or, more annoying still, you'll
put them into a /vendor/ folder inside your package folder and rewrite all the
import paths. So "github.com/myname/myproject/vendor/somepackage".
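
To make the rewrite concrete, here's a fragment of what that looks like in a source file (the package names are the hypothetical ones from above):

```go
// Before vendoring, the import resolves through GOPATH via "go get":
import "github.com/someauthor/somepackage"

// After copying the dependency into vendor/, every import path in the
// project has to be rewritten to point at the vendored copy:
import "github.com/myname/myproject/vendor/somepackage"
```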

Of course, if you simply rely on "go get" then dependency hell is just around
the corner when it comes to transitive dependencies. A -> C-v1 and A -> B ->
C-v1.1. In fact if the entire Go eco-system relied on "go get" there would be
no guarantee that a library worked - upstream dependencies could change at any
moment.

~~~
NateDad
A -> C-v1 and A -> B -> C-v1.1 is a problem in any language.

As for the rest, you can have your cake and eat it too. Go get is useful for
getting trunk, so let it get trunk. You can use Keith Rarick's godeps to pin
branch revisions on release branches to have repeatable builds there. Then go
get can still get trunk, which is what most devs will want to do to start
hacking on it (and honestly, no one but devs are going to use go get, right?)

Yes, this means that if one of the repos you depend on goes offline, you have
a problem... but this is DVCS, you almost certainly have N backups of that
repo, where N equals the number of developers on your team. So it's not like
you're hosed forever.

If you want to be really paranoid you can use godeps or third_party, or one of
the many other vendoring tools to keep all the code in your repo. This is a
lot like keeping libraries in your repo in other languages.

It's not really a problem; in fact, you have many different ways to solve
dependency management in Go. That's not a bad thing. You just need to choose
one (or not, and choose your upstreams wisely).

~~~
espadrine
> _A -> C-v1 and A -> B -> C-v1.1 is a problem in any language._

That is inaccurate in two ways. First, it isn't a problem in node.js. Second,
it isn't a language problem, it is an algorithm problem. dpkg has the same
issue, but nix is immune to it.

I'd also like to point out how bad those things were in C (and C-style
languages). Go is definitely an improvement, but it isn't the end of the road.

~~~
NateDad
How is this not a problem in node.js? Keep in mind, I don't know node.

However, in general, there's going to be several common problems in any
language:

Values from v1 and v1.1 won't interoperate perfectly. If you get a v1 object
and pass it to B which uses v1.1 to operate on it, stuff may blow up / not
work correctly, which may only be obvious in rare runtime situations.

Any external resources used by both v1 and v1.1 will now be in contention. If
both of them try to open port 12345, or write to the same config file, for
example.

And that assumes that there's no ambiguity from importing almost the same code
twice: if you say C.Foo... which C? It can be gotten around by always
namespacing the code via the version number, I suppose.

There's no magic fix for these problems. Some human has to go through all the
dependencies and make sure that there won't be conflicts. Maybe v1 and v1.1
can coincide just fine, maybe they'll instantly explode. Computers can't
figure that stuff out.

~~~
espadrine
> _How is this not a problem in node.js? Keep in mind, I don't know node._

The automated tool npm relies on a local nested tree. The A -> C-v1 and A -> B
-> C-v1.1 problem would look like this:

    
    
        (your code)
        node_modules/
          A/
            (code for A)
            node_modules/
              C/
                (code for C at v1)
              B/
                (code for B)
                node_modules/
                  C/
                    (code for C at v1.1)
    

The whole tree is created painlessly when you enter `npm install A`. The
program figures out the dependencies through JSON files bundled with each
module. They include version requirements; npm simply chooses the highest
version that respects those requirements. Within each directory,
`require('C')` looks for `./node_modules/C/` and finds the module it actually
needs. All versions of all modules are kept at all times on a central public
server at [http://npmjs.org/](http://npmjs.org/).

This design is only possible in some languages. Dart, for instance, cannot
apply that to its libraries because its types are globally defined and would
therefore clash.

In Go, types are not global, which means this design can apply in principle,
according to the Go Spec. However, the implemented import syntax does not act
in a way that would make this possible. Oddly enough, that syntax is not
specified in the Go Spec:

> _The interpretation of the ImportPath is implementation-dependent but it is
> typically a substring of the full file name of the compiled package and may
> be relative to a repository of installed packages._

~~~
NateDad
Getting the modules and making the tree is easy. Making the code work is hard.

Yes, sometimes this will work... but if A passes an object from v1 into B and
B passes that object into v1.1... that can blow up. If v1 and v1.1 both use
any global resources, that can blow up.

It doesn't sound like node addresses those problems at all, which are the only
ones that are actually difficult.

------
lmm
Yet another piece of Go advocacy from someone who seems to have never heard of
any modern functional languages. If the only alternatives were
perl/python/ruby/javascript/java/C then Go would probably be a good choice for
some problems. But what about OCaml, F#, Haskell or Scala?

~~~
xyproto
He mentions both Haskell and C#, but I agree with your overall point. Nimrod,
Rust, D, C++11 and Julia are also missing from the comparison. (and more)

~~~
VeejayRampay
Well, I understand your point, but there's a limit to how much namedropping
you can do in one article. There are hundreds of programming languages, each
with their own little niche of enthusiasts ready to get outraged that their
raison d'être language hasn't been mentioned. The fact that Nimrod, D or
Julia weren't mentioned isn't necessarily relevant to the overall estimation
of the quality of the article's contents.

------
nshepperd
An alternative opinion: [http://www.quora.com/Go-programming-language/Do-you-feel-that-golang-is-ugly?share=1](http://www.quora.com/Go-programming-language/Do-you-feel-that-golang-is-ugly?share=1)

Go's lack of generics kills it for me, as I've learned trying to implement a
persistent collection type recently. Without any concept of parameterised
types (Foo<A>), programmer-defined container types are second-class citizens.
Casting things to (interface{}) and back everywhere gets old quickly, and you
get no static type safety. So in that domain it's essentially back to the C
way of doing things, with (void*) everywhere.
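
For anyone who hasn't hit this: here's a minimal sketch of what a container built on interface{} looks like (Stack is a hypothetical type, not anything from the standard library):

```go
package main

import "fmt"

// Stack is a hypothetical container built without generics: it stores
// interface{} values, so every Pop needs a type assertion on the way out.
type Stack struct {
	items []interface{}
}

// Push accepts any value; the compiler can't enforce a single element type.
func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

// Pop returns the last pushed value as interface{}, or nil if empty.
func (s *Stack) Pop() interface{} {
	if len(s.items) == 0 {
		return nil
	}
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	var s Stack
	s.Push(42)
	s.Push("oops") // compiles fine: nothing stops you mixing types

	str := s.Pop().(string) // assertion needed; a wrong type panics at runtime
	fmt.Println(str)

	n, ok := s.Pop().(string) // this Pop returns 42 (an int), so ok is false
	fmt.Println(n, ok)
}
```

The type errors that Foo&lt;A&gt; would catch at compile time only surface when a type assertion fails at runtime.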

~~~
NateDad
Sorry, I won't click on Quora links... post to a non-terrible Q&A site that
doesn't hide stuff behind registration pages.

------
lazyjones
What Go really lacks right now for oldtimers like me: NOT generics, but a good
single-step debugger. We had these almost 30 years ago, built into our
awesome Assembler (Devpac ST anyone?), C, Pascal etc. IDEs. Go still lacks
even proper gdb support, and IDEs are built on this shaky foundation. Why?

~~~
4ad
Because the people who need it most don't actually need it that much,
otherwise they would have written it. When I debug most Go programs, I'm doing
high-level debugging. A debugger is a sharp tool. For 90% of my debugging tasks
I don't need scalpels, I need tools that can provide a holistic view of a
system from above. I use DTrace for that.

When I actually use a debugger, it's low-level work; I either debug the
runtime or the compiler and I use assembly. Mdb/gdb/acid is perfectly adequate
for that. A debugger with special Go support is of no use to me.

That being said, a Go debugger is on the table.

Yes, on the Internet everyone wants everything: generics, an IDE, higher-order
metaprogramming, etc. In the end, the people who actually implement any tool,
feature, or process do it only because it helps their concrete work in a very
practical way. Yes, a debugger would be useful, but almost anything is useful
in isolation. So far, the people who do the work haven't written a debugger,
because even if it might have been useful to some, it would have been a net
loss for the particular problem they were solving at that particular time,
because of the additional effort.

I'm writing the arm64 compiler. I wrote a CPU simulator and a debugger. It's a
very low-level tool and very useful to me at this particular stage in the
port. Even though it took time to write it, it saves time overall. It's
useless for 99.99% of other Go programmers.

~~~
frobware
Please could you elaborate on the arm64 compiler. Is there a repo somewhere,
etc?

~~~
4ad
No, sorry, the work is at too early a stage for that yet. Significant progress
has been made, but a lot is still to be done. The Plan 9 C compiler included
with Go works well, but the Go compiler does not yet. I'm pushing for
inclusion in 1.4.

------
higherpurpose
Still no talk about Go for Android, then.

------
spystath
In slide #28 it is claimed that Go is as fast as Java. I don't use Java
extensively, but I always had the impression that it was quite performant,
with the highly tuned HotSpot and all, even more so than Go. Are there any
comparative benchmarks between Go and Java for similar-ish tasks? I get that
Go probably handles concurrency in a more elegant way, but I have some doubts
regarding raw performance. I am talking about the standard compiler
implementation, not gccgo.

~~~
NateDad
[http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?test=all&lang=go&lang2=java&data=u64q](http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?test=all&lang=go&lang2=java&data=u64q)

Go is pretty close to Java's speed. Generally slower, but not by a lot.

------
diminish
> pure Go AES => slower, but safer

Just curious: why is Go's AES safer than OpenSSL or GnuTLS, anyone? Or does
"safer" here mean something other than secure?

~~~
SwellJoe
Several classes of security problem, including the Heartbleed bug, are
impossible or very unlikely in a language that doesn't have manual memory
management.

His assertion is theoretical...he assumes a Go implementation will be superior
based on this removal of a huge class of potential problems. But, it is
difficult to be certain without an audit and a lot of real world testing and
experience with the library. It is not unreasonable, however, to assume that
security libraries in a language like Go could be safer than similar libraries
in a language with manual memory management like C/C++.

Purely functional programming and programming without side effects (as in
Haskell) can also reduce some types of error and make for more provably
correct software. Go is not particularly pure in this regard but, as I
understand it, can be used in a manner that is similar in many ways. Again,
this could prevent entire classes of common security problem.
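
To illustrate the memory-safety point: here's a rough sketch of authenticated encryption with the standard library's crypto/aes and crypto/cipher packages (sealAndOpen is a made-up helper name, not the slide's code). Every buffer is garbage-collected; there's no malloc/free pair to get wrong, which is the class of bug behind Heartbleed-style leaks.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// sealAndOpen encrypts a message with AES-GCM and immediately decrypts
// it again, returning the recovered plaintext. All slices here are
// managed by the garbage collector.
func sealAndOpen(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // 32-byte key selects AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	ciphertext := gcm.Seal(nil, nonce, plaintext, nil)
	return gcm.Open(nil, nonce, ciphertext, nil) // verifies the auth tag too
}

func main() {
	key := make([]byte, 32) // all-zero key, for illustration only
	out, err := sealAndOpen(key, []byte("attack at dawn"))
	fmt.Println(string(out), err)
}
```

Of course, memory safety says nothing about timing side channels or protocol bugs, which is where the audit and real-world testing come in.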

~~~
diminish
Thanks for the nice explanation. My crypto class teacher on Coursera (Dan
Boneh of Stanford) used to repeat one thing again and again: "Don't design or
implement crypto systems yourself". His argument was that crypto required the
validation of time and a lot of experts (working as adversaries). From this
point of view, a young language or implementation seems to be at a
disadvantage compared to older ones. Of course, fresh memories of Heartbleed
seem to contradict what I say. But what doesn't kill OpenSSL makes it
stronger (or gets it forked).

------
justinsb
Is this an old presentation? Doesn't Go now start with 8KB stacks per
goroutine, which is the same as OS level threads (which physically allocate
8KB from a larger virtual mapping)?

I thought the benefits of goroutines were now about more efficient scheduling,
not lower memory utilization?

~~~
cnbuff410
It's the latest. Go 1.3 will have a minimum stack size back down at 4 kB. I
think they have plans to trim it down even further, to 1-2 kB, in Go 1.4.

~~~
4ad
Go 1.3 will ship with contiguous stacks that start at 8 kB. The original plan
was to cut them back to 4 kB, but that turned out not to be feasible right now
(while maintaining performance). It will probably be reduced in the future.
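
As a rough illustration of why small initial stacks matter: at 8 kB each, even 100,000 goroutines reserve well under 1 GB, and contiguous stacks grow only when a goroutine actually needs more. A sketch (fanOut is a made-up helper for this example):

```go
package main

import (
	"fmt"
	"sync"
)

// fanOut launches n goroutines that each send one value on a channel,
// then sums everything received. The per-goroutine cost is dominated by
// the small initial stack, so n can be very large.
func fanOut(n int) int {
	var wg sync.WaitGroup
	results := make(chan int, n)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results <- i
		}(i)
	}
	wg.Wait()
	close(results)
	sum := 0
	for v := range results {
		sum += v
	}
	return sum
}

func main() {
	fmt.Println(fanOut(100000)) // sum of 0..99999 = 4999950000
}
```

Doing the same with one OS thread per task would exhaust memory or hit thread limits long before 100,000.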

------
enscr
Probably worth knowing that the author 'Brad Fitzpatrick' is part of the Go
language team [http://golang.org/CONTRIBUTORS](http://golang.org/CONTRIBUTORS)

~~~
lazyjones
He is also the author of memcached and LiveJournal.

------
signa11
It is kind of interesting to see godrone.io being mentioned here, where the
firmware for the Parrot drone is in Go. For comparison with Erlang, here it is
being demoed with in-flight hot-code reload :) [http://www.erlang-embedded.com/2013/10/minimal-downtime-in-flight-drone-firmware-upgrade-in-erlang/](http://www.erlang-embedded.com/2013/10/minimal-downtime-in-flight-drone-firmware-upgrade-in-erlang/)

------
anentropic
Does anyone have the numbers to go with the claim that Perl is faster than
Python?

~~~
SwellJoe
For fun, many years ago, I did a series of benchmarks of ack vs grin (both are
programmers' greps, one in Perl and the other in Python, and both were written
and maintained by excellent developers well-known in their respective
languages). Because the tools do almost identical work, and can be configured
to behave nearly identically in terms of enabled features, types of searching,
etc., I believe it's a useful real-world performance test.

At the time, ack (in Perl) ran in roughly a tenth the time of grin (in Python)
on most tests. Similarly, find+grep with the right options (and occasionally
run through sed or awk to get further refined results) was approximately an
order of magnitude faster than ack (but was much harder to use, and so ack or
grin were still preferable tools).

It would likely be useful to repeat the experiment with modern Perl and Python
versions. Both have improved and likely have better performance. It seems
likely Python will have closed the performance gap some in the ensuing years.

It's also worth keeping in mind that this may be a sweet spot for Perl, given
its genesis as a language processing language. And, it may be a tough one for
Python given its historic aversion to regular expressions as a common solution
to problems. So, saying Perl is (or was) ten times faster for this class of
problem may simply mean this is a problem that Perl is particularly good at,
and may not cover other classes of problem.

I'm suspicious of microbenchmarks, though enough microbenchmarks can give some
sort of indication of what larger programs will do. But, they may also be
indicative of the quality of the programmer contributing solutions...a better
algorithm implementation can often erase language differences. Which is why I
liked comparing two nearly identical programs, both by known excellent
developers and not doing anything _particularly_ prone to algorithmic
complexity, as a means of testing language performance.

I imagine one could find other examples that would be good to compare. scons
and cons, perhaps, would be another decent choice, assuming cons is still
actively maintained.

------
afandian
Is there a way to disable slide transitions? It's very difficult to navigate
with all that movement.

------
etenil666
Sell your soul to Google 100% of the time.

~~~
cnbuff410
It's probably harder than just registering a new account and trolling here.

