
The State of Go - xkarga00
https://talks.golang.org/2015/state-of-go.slide#1
======
Mithaldu
Edit: This comment was written in reply to the original submission:
[https://talks.golang.org/2015/state-of-go.slide#7](https://talks.golang.org/2015/state-of-go.slide#7)

--------------

It seems like these people simply don't understand Github very well.

    
    
        Can only view diffs on a single page (can be very slow).
        Cannot compare differences between patch sets.
        Accepting a patch creates a "merge commit" (ugly repo history).
    

Don't use the merge button; just add the requester's repo as a remote to yours
and use your familiar tools. Where possible, a fast-forward merge plus a push
will also close the PR.
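
That flow can be sketched end to end. Here local repositories stand in for the GitHub upstream and the contributor's fork, and all repo, branch, and remote names (`upstream`, `fork`, `alice`, `fix-parser`) are hypothetical:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=a GIT_AUTHOR_EMAIL=a@example.com \
       GIT_COMMITTER_NAME=a GIT_COMMITTER_EMAIL=a@example.com

# Upstream repo with one commit.
git init -q upstream && cd upstream
git commit -q --allow-empty -m "initial"
cd ..

# The contributor's public fork, with a fix on its own branch.
git clone -q upstream fork && cd fork
git checkout -q -b fix-parser
git commit -q --allow-empty -m "fix parser"
cd ../upstream

# Maintainer: add the fork as a remote, fetch, review with your own
# tools, then fast-forward merge. Pushing these commits to GitHub
# would then close the PR automatically.
git remote add alice ../fork
git fetch -q alice
git merge --ff-only alice/fix-parser
```

Because the merge is fast-forward only, the resulting history is linear and contains no merge commit.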

    
    
        Comments are sent as they are written; you cannot "draft" comments.
    

How is that different from pull requests via email? (Also, on the website
itself the comments can be edited.)

    
    
        To create a patch one must fork the repository publicly
        (weird and unnecessary).
    

What exactly is weird about that? It makes it possible for the requester to
craft their changes with full control, without requiring upstream to give them
write access. This point is just entirely nonsensical.

    
    
        In general, pull request culture is not about code review.
    

A strong claim, but one without any justification, and which is, in my
experience, as far from the truth as possible.

Edit:

In light of their complaints about the need to fork, I have to say that their
current contribution process can in its entirety be described as weird,
unnecessary _and_ baroque:

[https://golang.org/doc/contribute.html](https://golang.org/doc/contribute.html)

~~~
justinmk
> Don't use the merge button, just add the requester's repo as a remote to
> yours

Actually, you don't need to do that. GitHub provides a special _pulls_ remote
"namespace" on the upstream repo, so you can add it as a fetch pattern to your
.git/config like so:

    
    
        [remote "upstream"]
          url = https://github.com/neovim/neovim.git
          fetch = +refs/heads/*:refs/remotes/upstream/*
          fetch = +refs/pull/*/head:refs/pull/upstream/*
    

Then when you `git fetch --all`, you will have ALL pull requests available in
your local repo in the local pull/ namespace. To check out PR #42:

    
    
        git checkout -b foo refs/pull/upstream/42

~~~
config_yml
Is this documented somewhere on GitHub?

~~~
ben0x539
Yeah, I have to google the precise path every time I need it:
[https://help.github.com/articles/checking-out-pull-requests-locally/](https://help.github.com/articles/checking-out-pull-requests-locally/)

------
Blackthorn
The comments here represent one of the best examples of bikeshedding I've ever
seen. The deep details of Go are too complex for some people to really have an
opinion on, but god damn does everyone want to get their voices heard about
GitHub!

~~~
gtirloni
Agreed. However, the original submission called attention to that slide
specifically, not the whole State of Go presentation. So people kept on topic
:)

------
falcolas
Personal opinion here, but I much prefer merge commits to fast-forwarded
commits. With a merge commit there's one commit to revert if something breaks,
one commit per PR to step through with git bisect, and, more importantly, it
maintains history, which to my eyes is more useful than a "pretty" repository.

Drafting comments... if you want to draft comments, can't you do that in a
separate editor? Better than relying on the browser as an editor.

I agree, though, the public repository requirement for making a PR is a bit
awkward.

~~~
scrollaway
> which to my eyes is more useful than a "pretty" repository.

A pretty history is extremely important when you need to attract new
contributors to your repository. I've maintained extremely clean git logs and
extremely nasty ones too. New contributors are immediately turned off in the
latter case, because a good developer will generally git log extremely early
when discovering a new project.

~~~
rdtsc
It's a false dichotomy. You can have a dirty repository history with
fast-forward merges, and you can have a clean repository history with explicit
merges. As I wrote in a sibling comment, feature branch commits should be
rebased to look clean and meaningful (squashed in some cases if needed), but
then merged into the main/master branch using the --no-ff flag in order to
generate a merge commit.
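
A minimal sketch of that rebase-then-`--no-ff` flow, with hypothetical repository and branch names (in practice the rebase would usually be interactive, to squash and reword):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=a GIT_AUTHOR_EMAIL=a@example.com \
       GIT_COMMITTER_NAME=a GIT_COMMITTER_EMAIL=a@example.com
git init -q repo && cd repo
echo base > base.txt && git add base.txt && git commit -q -m "initial"
main=$(git symbolic-ref --short HEAD)

# A feature branch with its own commits.
git checkout -q -b feature
echo one > f1.txt && git add f1.txt && git commit -q -m "feature: part 1"
echo two > f2.txt && git add f2.txt && git commit -q -m "feature: part 2"

# Meanwhile the main branch moves on.
git checkout -q "$main"
echo other > other.txt && git add other.txt && git commit -q -m "other work"

# Rebase the branch so its commits sit cleanly on current history,
# then record the feature as a unit with an explicit merge commit.
git checkout -q feature
git rebase -q "$main"
git checkout -q "$main"
git merge -q --no-ff -m "Merge branch 'feature'" feature
```

The result is a history where every feature is bracketed by exactly one merge commit, while the individual commits inside it remain clean and linear.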

~~~
Mithaldu
Here's my specific issue with that: even in such a merge it would be possible
for someone to hide a change in the merge commit, and due to the nature of git
it is quite difficult to figure out with confidence whether a merge commit
contains additional changes or not. I much prefer the rebase plus --no-ff
merge approach over the common "just merge whatever the fuck whenever", but due
to the possibility of hidden details in merge commits, I prefer a linear
history.

~~~
hueving
Who is doing the merge commit? In every project I've contributed to, that's
the automatic part by the code review system. If you can't trust that, then
you are screwed in any workflow.

If you're talking about merges into an outstanding pull request, then that's a
non-issue because any changes will still show up in the diff against the
target branch/repo.

------
nawitus
What is the better alternative Go is using?

"Can only view diffs on a single page"

You don't need to use GitHub to view diffs.

"To create a patch one must fork the repository publicly (weird and
unnecessary)."

I don't think it's weird.

"Accepting a patch creates a "merge commit" (ugly repo history)."

You don't need to have a merge commit, although I don't think that creates an
ugly repo history.

"In general, pull request culture is not about code review."

I have no idea what this means.

~~~
admax88q
They're probably using Gerrit.

Although I agree, I don't think merge commits are ugly. I think people coming
from SVN/CVS where history is strictly linear have this obsession with keeping
it that way.

In fact I find lots of developers just have an obsession with "clean" history,
and fetishes for particular tools. It baffles me.

~~~
Mithaldu
> I think people coming from SVN/CVS where history is strictly linear have
> this obsession with keeping it that way.

This has absolutely nothing to do with those, especially since they didn't
even allow cleaning up histories. There is one very simple reason for why some
developers prefer a linear history of master:

It makes debugging very easy.

With branch merges, especially when the branch lines cross, or the merge is an
octopus merge, the complexity of the code necessitating inspection to find the
root cause of a bug straight-up explodes. Meanwhile with a linear history it's
not only easy, but automatable to find a commit that breaks a thing.
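
The automatable part is `git bisect run`: on a linear history, every step of the binary search checks out a revision that actually existed. A toy sketch (the repo, the commit messages, and the fake "breakage" are all hypothetical):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=a GIT_AUTHOR_EMAIL=a@example.com \
       GIT_COMMITTER_NAME=a GIT_COMMITTER_EMAIL=a@example.com
git init -q repo && cd repo

# Ten linear commits; from the 7th onwards the tree is "broken".
for i in 1 2 3 4 5 6 7 8 9 10; do
  if [ "$i" -ge 7 ]
  then echo "bad $i" > state.txt
  else echo "good $i" > state.txt
  fi
  git add state.txt && git commit -q -m "commit $i"
done

# Binary-search for the first bad commit. The test command must exit 0
# on good revisions and non-zero on bad ones; bisect runs unattended.
git bisect start HEAD HEAD~9
git bisect run grep -q good state.txt
first_bad=$(git show -s --format=%s refs/bisect/bad)
git bisect reset
echo "first bad: $first_bad"
```

In a real project the `grep` would be replaced by a build-and-test script; the point is that no human needs to sit through the search.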

I do realize that you may not have had the displeasure yet of being in the
situation to learn these things. Please feel free to consider yourself
fortunate, but please also do try to understand that the things I just wrote
are in fact simple observations of reality.

~~~
lmm
Linearized histories are much harder to find bugs in. You end up looking
through revisions that never existed; in SVN you merge the remote history into
your local history without it ever showing up as a merge. The explicit git
approach tells you something much closer to the truth.

Your condescension is only hurting yourself.

~~~
Mithaldu
Building strawmen does not help your credibility much, nor does it help your
attempt to convince me of your view. "Revisions that never existed" do exist,
and if you put a rebase of a branch on master without verifying the rebase,
then you may end up in a mess, but it's your fault. Code review is a thing
that is done for a reason.

Frankly, I find your style of argument through implication, and through trying
to disregard something because it doesn't fit your definition of truth, to be
much more condescending than anything I wrote before.

~~~
MaulingMonkey
> "Revisions that never existed" do exist

"Revisions that were never built nor tested" is probably closer to the truth.
Do you rewind through all your history and rebuild and retest every commit in
a branch every time you rebase? Sure, they're similar, and you probably didn't
mess up the merges. There's likely no subtle lingering bugs that QA's only
going to catch weeks down the line. Probably.

> then you may end up in a mess, but it's your fault. Code review is a thing
> that is done for a reason.

Sure. But I've missed so many things in code reviews, had so many things in my
own code missed in code reviews, and generally make mistakes and messes.

I do agree that a simpler branch topology tends to be easier to reason about.
But branches do have their advantages... and I've also found that preserving
the original branch topology has helped me untangle merge mistakes that were
missed, committed, and then only discovered a year or more later.

~~~
Mithaldu
> Do you rewind through all your history and rebuild and retest every commit
> in a branch every time you rebase?

Yes.

~~~
MaulingMonkey
I hope that's all automated with short build+test iteration times! That'd
easily eat a day for minor feature branches for me - I can't afford that kind
of turnaround time.

------
ak217
_Can only view diffs on a single page (can be very slow)._

I've seen GitHub take five seconds to render a 100K line diff. In my
experience all of the other tools I've used, including some of the ones
listed, can take longer to render individual file sections of such a diff.
It's fast enough.

 _Comments are sent as they are written; you cannot "draft" comments._

I'm a bit puzzled as to why you would need to draft comments inside the PR
interface, especially in light of the fact that they can be edited.

 _Cannot compare differences between patch sets._

You most certainly can. Just use branch/revision compare.

 _To create a patch one must fork the repository publicly (weird and
unnecessary)._

Also immaterial.

 _Accepting a patch creates a "merge commit" (ugly repo history)._

This comes closest to being a legitimate complaint. Because GitHub emphasizes
recognition of contributors, it doesn't let you rewrite the commits in the PR
as you merge them with the merge button, but there are multiple ways to merge
on the command line that avoid the merge commit and play nice with the PR.

 _In general, pull request culture is not about code review._

WTF?

Sounds like a strong case of NIH.

~~~
enneff
> _I'm a bit puzzled as to why you would need to draft comments inside the PR
> interface, especially in light of the fact that they can be edited._

This scenario happens often: I read through a change, making comments as I go.
Then I reach some part of the change and realise "Oh, that explains why they
did that in that other file!" So I go back and delete or alter my comments.

In Gerrit or Rietveld, the reviewee never sees those earlier comments.

On GitHub, the reviewee has already received the comment notifications and
started responding before I have a chance to make the changes. The reviewee
wastes time responding to questions to which I already know the answer. It's
clunky, inefficient, and unnecessary.

It seems like you haven't used a tool like Gerrit or Rietveld. You should
check them out.

~~~
jaimeyap
A lot of people commenting here probably haven't had the experience of working
with mature pre-submit code review tools. It's funny how much of an opinion
people seem to have about things they don't know about.

I'm shocked at how bad GitHub's PR review UI is given how much funding they
have had for so many years. Even abandoned side projects like Rietveld have
vastly better review UIs and workflows for larger patch sets.

~~~
frowaway001
I don't see a problem. Google people use what their NIH-filter-bubble tells
them and the rest of the world uses Git(Hub).

Everyone is happy.

~~~
jaimeyap
I probably sounded overly dismissive of GitHub, which probably isn't fair. But
I'll leave my post un-edited for posterity.

I use GitHub every day for work. There are lots of things it gets right. But
if you want to work on a project where you more or less have a central
repository, take contributions from external and internal contributors, and
have a strict policy of pre-submit reviews for individual commits to master
that vary in size and complexity, then you want a tool that is 'code review
centric'. That is, something that supports comment drafts, back-and-forth
exchanges across many files, and notions of iterations as the reviewee
responds to feedback. GitHub is lacking in this regard, for all the reasons
the Go team pointed out. I mean... they just recently shipped side-by-side
diffs!

The main thrust of my comment was to point out that lots of people in this
thread are criticizing the Go team's choice to use Gerrit instead of GitHub,
and the points they make seem to stem from an ignorance both of tools like
Gerrit and of workflows that work well that aren't pull requests. It's a bit
of "what you see is all there is", where all they've seen is GitHub. And
statements like "GitHub is good enough" seem overly dismissive of the
decisions of a lot of very smart people on the Go team.

For folks that are familiar with Gerrit and still pooh-pooh their decision to
use it... well, we can agree to disagree :).

I'm rooting for GitHub. I want to see it get better so I can stop running a
separate code review tool for my own projects! But it's got a ways to go
still.

~~~
jaimeyap
@piotrkaminski Comment nesting seemed to run out. So replying to myself.

Our team is currently using Rietveld. But I've used Gerrit in the past, as
well as internal tools of the same flavor back when I was at Google.

I don't particularly love Rietveld, but it's simple to maintain and does the
job. That being said, I'm genuinely looking forward to one day being able to
just use GitHub for this.

~~~
piotrkaminski
Heh, that's almost exactly like me: used internal tools at Google, brought up
a Rietveld instance after I left. Except that I got frustrated with Rietveld
and built [https://reviewable.io](https://reviewable.io) -- you might want to
check it out. :)

FullStory looks awesome, BTW, I just wish I could afford it.

~~~
jaimeyap
Shoot me an email and let's see what we can do.

Also, I'll definitely have to check out Reviewable!

------
justinsb
> Better bindings for calling Go from Java.

If that is available for general Java code (i.e. uses JNI) and not just for
Android, that could be really huge for Go. Writing performance-sensitive
low-level code in Java is still fairly painful, whereas Go still isn't great
for writing big programs. I can imagine (for example) Hadoop,
Lucene/Elasticsearch and PrestoDB all using this.

~~~
eternalban
You jest?

Go is GC'd just like the JVM. The only possible benefit -- even if Go catches
up at runtime -- is the compact form of memory objects in Go vs Java objects.
But then again, if you are writing such systems (in either language) you are
very likely to spend quite a lot of time in 'unsafe' land.

~~~
perturbation
Doesn't Go have the potential to be faster than Java since it's compiled to
native code (rather than compiled to byte code)?

~~~
vardump
A JIT has the potential to be faster than compiled native code: it has better
runtime information. Compiled code needs to cover all potential options, and
this means more instructions to execute.

Say a variable is set through a command line option to a certain value.
Compiled native code has to assume the value is dynamic, but a JIT can
optimize it away, effectively hardcoding it for that particular invocation.
The same applies to more complicated types of software: some configuration and
invocation parameters tend to be effectively static during a particular
invocation, and JITs can capitalize on this fact.

JITs also have a better chance to adapt to the exact hardware they're running
on. Compiled code is forced to make one assumption, or a limited number of
assumptions, about the available CPU hardware configuration.

In the end, both options are running compiled native code. JIT just does it a
bit before running.

Of course current reality is the opposite, but the key word here is
_potential_.

~~~
dottrap
I'm glad you emphasized _potential_ and mentioned reality.

One other aspect that has become increasingly important is power consumption
and heat. Huge data centers now have to worry about enormous electricity
consumption and keeping all the equipment cool. JIT code must do more work to
compile (and recompile to optimize) on the fly which means more power and more
heat.

On the consumer end, Android just switched to ahead-of-time compilation
instead of JIT because its JIT performance wasn't that good and it required
more power, thus hurting battery life.

~~~
pjmlp
Same on Windows Phone. .NET is also compiled to native code ahead of time.

------
jrochkind1
Yeah, I dunno, the GitHub PR 'culture' I've engaged in has often in fact been
about code review. I don't see any problems with forking a repo publicly to
make a PR (what is there to hide?), am not really bothered by merge commits
(in some cases they are actually quite useful, some people prefer them, it's a
point of some contention), and I don't really understand what they mean by
'Comments are sent as they are written; you cannot "draft" comments'.

But to each their own -- I'm curious what alternative system (if any) they
have for accepting patches. If it's emailed git patches on a listserv, then I
would definitely find it a barrier to submitting patches myself, compared to
GitHub PRs.

I think in general, GitHub PRs have proven successful at soliciting code
contributions from a wider field, which seems to be the goal of their UI (over
command-line git itself). Of course, this can seem a downside too, as
committers have to spend time dealing with those submissions.

~~~
mdwrigh2
They use Gerrit:
[https://code.google.com/p/gerrit/](https://code.google.com/p/gerrit/)

------
wereHamster
> In general, pull request culture is not about code review.

This is not true. There are many projects on GitHub which do extensive code
reviews on pull requests. It may not be as nice as Gerrit for a project like
Go (where you often have many iterations or the diffs are large), but for many
other projects the UI that GitHub provides is sufficient (and arguably more
efficient than Gerrit).

~~~
justinmk
> where you often have many iterations or the diffs are large). But for many
> other projects the UI that GitHub provides is sufficient

GitHub is painful for non-trivial reviews. Biggest WTFs:

- No comment threading (or at least collapsing). On a PR with 100 comments[1]
it is unlikely that those revisiting the thread need to see (and download, and
render...) the first bazillion comments.

- Source "annotations" are lost after a force-push (why not keep around a
read-only view of old comments? We have lost some valuable discussions on GH
pull requests).

Yes, we try to keep PRs small. But they also need to be meaningful, and
sometimes they require (many) more reworks than expected.

[1]
[https://github.com/neovim/neovim/pull/1820](https://github.com/neovim/neovim/pull/1820)

~~~
falcolas
Wait. Force pushes? That's a terrible habit to get into; force pushes have
wrecked more than a few open source projects' repositories.

[https://news.ycombinator.com/item?id=6713742](https://news.ycombinator.com/item?id=6713742)

~~~
hk__2
What? Pull requests should be on their own branches. If you want to keep one
commit per PR but need to edit it, you _must_ rebase it and thus force push.
It's a terrible habit if you're sharing your branch with others; it's
completely normal if not.
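
That single-commit flow can be sketched like so, with a local bare repository standing in for GitHub and hypothetical names throughout; `--force-with-lease` is the safer variant of a plain force push, refusing to overwrite work you haven't seen:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=a GIT_AUTHOR_EMAIL=a@example.com \
       GIT_COMMITTER_NAME=a GIT_COMMITTER_EMAIL=a@example.com

# A bare repo stands in for the GitHub remote; "work" is your clone.
git init -q --bare origin.git
git clone -q origin.git work && cd work
echo base > base.txt && git add base.txt && git commit -q -m "initial"
git push -q origin HEAD

# The PR lives on its own branch, as a single commit.
git checkout -q -b fix-parser
echo fix > fix.txt && git add fix.txt && git commit -q -m "fix parser"
git push -q -u origin fix-parser

# Review feedback: fold the changes into the same commit and
# force-push the PR branch, rather than stacking fixup commits.
echo better > fix.txt
git add -u
git commit -q --amend --no-edit
git push -q --force-with-lease origin fix-parser
```

Since nobody else is based on the PR branch, rewriting it is harmless; the branch still carries exactly one commit of review-ready work.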

------
CatDevURandom
I'm late to the hate-parade, but I did want to chime in to say how much I
appreciate all the work the Go team has put into the language and tools. Go is
a wonderful tool to get things done with. I recently rewrote a cross-platform
`enterprise` app from Java (25k LOC) to Go (8k LOC) and saw improvements in
readability, memory footprint, and overall quality. Some notes on my
experiences so far:

Go binary size is a non-issue for most software. The Java rewrite I mentioned
above went from an 80MB or so binary to an 8MB executable. That said, there
have been a few occasions when I've really wanted to use Go for an embedded
project, but couldn't due to its size.

I read somewhere, someone said of Go, "you'll come for the concurrency, but
you'll stay for the interfaces." This is very true for me.

Generics: Go's red herring. Sure, there have been a handful of occasions where
generics would have saved me some boilerplate, but it's not been a pain point
for me.

Tooling, from fmt and vet to unit testing, is all first rate. However, I wish
there were a better debugger option for Go. I know that gdb works with Go
(with a great deal of difficulty if you develop on OS X), but I'm probably not
alone when I say I really dislike GDB.

Overall, I've found the community to be friendly both online and in person.

As an aside, I've noticed much of the recent vitriol towards Go seems to come
from the Rust crowd, which I think is too bad. I enjoy both; languages are not
a zero-sum game. Who knows, maybe the hate means Go has finally arrived.

------
frabcus
There were some great new tools mentioned on there - like "callgraph" for
plotting the callgraph.

I can't, however, find any instructions on how to download and install them.
Anyone able to help me?

~~~
enneff

        go get golang.org/x/tools/cmd/callgraph
    

Or similar for any of these commands:
[http://godoc.org/golang.org/x/tools/cmd](http://godoc.org/golang.org/x/tools/cmd)

------
inglor
This presentation looks horrible on the iPhone screen. I wonder if they
couldn't spend a few minutes to point phone users to a working version or at
least not lock the viewport size so mobile users could pinch-zoom out.

~~~
boulos
Each slide was okay when viewed horizontally, but advancing then shows the
damn address bar. I had to annoyingly advance, rotate, rotate back, and then
read. After a while I just gave up on the titles...

------
jakub_g
I tried to mitigate some of the pains with GitHub code review UI by writing a
helper userscript to track the progress of big code reviews. Some of the items
listed in the link can be fixed in this way.

The idea is to expand/collapse files, store progress of the review and
collapse status of the files in local storage of the browser, so you can stop
and resume at any time (I also have in mind serializing this stuff into a hash
in the URL so you can forward it to the other machine for instance, and
recreate the progress there).

I work on it every now and then and have a number of items in the backlog. If
someone is interested in contributing, I'll be happy to accept pull requests
(sic!) :)

[https://github.com/jakub-g/gh-code-review-assistant](https://github.com/jakub-g/gh-code-review-assistant)

------
omeid2
> In general, pull request culture is not about code review.

A lot of people here seem to insist that this is even remotely true. It is
not. Look through Docker's pull requests on GitHub; look at their CI hooks.

------
hueving
Gerrit still produces merge commits unless they have it configured to
cherry-pick onto master, which is insane because you are changing the commit
sha at that point.

~~~
enneff
That's exactly what we do, and we need to change the commit hash because we
want to include the review information in the commit message.

Not sure why this is "insane."

~~~
hueving
It's essentially rewriting history. As a contributor it's nice to know a
commit went in exactly as you wrote it, which is not the case when the commit
hash changes.

Git is powerful because it was written with the ability to merge trees. The
cherry pick workflow is throwing all of that in the trash. Why not use SVN at
that point?

Because of cherry-picking in Gerrit, dependent patches are a nightmare to
maintain. Say patch c depends on b, which depends on a. Now say that patch c
requires a change that merged into master. Because you can't merge into your
development branch, you have to rebase c AND b AND a. This really pisses off
the owners of b and a because it shows up as a new changeset and wipes out
votes. God forbid you depend on two different patches that each have separate
dependencies.

You can use the merge commit to add the review information if you want. Then
you don't have to molest the code change commit.

~~~
enneff
One nice thing about Git is it lets you choose your workflow.

Our general workflow for the Go project is to review single commits, and
sometimes do major new work in feature branches. When we submit a single
change we cherry-pick. When we merge trees, we create a merge commit.

We don't write commits that depend on other pending work. That's overly
complicated (IMO) even if you always use merge commits.

~~~
hueving
It's not complicated at all if you use merge commits. That's exactly what you
are doing with a feature branch. Failing to realize merge commits are useful
is what makes them complicated.

~~~
enneff
It doesn't matter what the mechanism is; writing code that depends on code
that changes is more complex than simply not doing that.

~~~
hueving
You can't "simply not do that" if your project is fast moving. That's
essentially the "you're holding it wrong" defense for a terrible design flaw.

Imagine you are developing a plugin framework for something and would like to
develop a reference plugin at the same time to flesh out the API. Neither
belongs as part of the same change but the plugin certainly depends on the
framework. This is basically impossible in Gerrit because of the awful way
dependencies work. The only way it can work is with a feature branch, which is
basically giving up on Gerrit anyway and using git in the way it was intended.

Gerrit ultimately becomes a choke point on the throughput a given project can
have, unless you have an extremely small set of contributors who can
coordinate well (i.e. not a large open source project). Maybe this isn't a
problem for Go, since there is a high barrier to entry for contributors, but
it's something to keep in mind.

~~~
enneff
The Go project is sufficiently modular that this isn't an issue for us.

~~~
hueving
Didn't you say that you used feature branches? If so, that was a case where
Gerrit failed and you had to use git in an almost normal way.

~~~
enneff
How is that Gerrit failing? Gerrit is designed to support all kinds of
workflows, just like Git. As far as I can tell, we're using our various tools
the way they were intended. Just because it doesn't line up with your exact
view on how Git should be used, doesn't mean we're doing something wrong (or
"insane").

This is a boring conversation.

~~~
hueving
If it's boring, why were you trying to defend such a suboptimal use of git?

------
boulos
Does GitHub still not support "fast-forward only" merges? The "ugly merge"
thing is easily avoided with a rebase before merging.

~~~
sytse
GitLab CEO here. Both GitHub and GitLab normally always create a merge commit
when accepting a merge request in the web UI. GitLab EE has a rebase feature
where you can accept merge requests by automatically rebasing them just before
merging; for more information see
[https://about.gitlab.com/2014/12/22/gitlab-7-6-and-ci-5-3-released/](https://about.gitlab.com/2014/12/22/gitlab-7-6-and-ci-5-3-released/)

~~~
sytse
I see this was down-voted (after being up-voted initially). I thought it was
interesting that GitLab allows you to accept pull/merge requests without
creating a merge commit. Should I not have included the URL? Edit: thanks for
re-upvoting it.

------
omni
Breaking the mouse entirely probably isn't the greatest of frontend design
patterns.

~~~
icedog
Besides the arrow keys, you can move forwards or backwards by clicking on the
left or right edges of a slide.

~~~
zellyn
You can also click on the vertical area between slides to go forward/back.

~~~
integraton
Also known as "easter egg navigation."

~~~
enneff
It's a tool for _giving_ presentations, not necessarily for reading them.

But I've just sent a change to add some help text to these pages:
[https://go-review.googlesource.com/4910](https://go-review.googlesource.com/4910)

edit: The change is now live. No more easter egg navigation. Yay!

------
drigao_ssj3
[https://www.youtube.com/watch?v=x8wcP-W2pTQ&list=UUysVbKMJEmhmkx_k-CqBWeA](https://www.youtube.com/watch?v=x8wcP-W2pTQ&list=UUysVbKMJEmhmkx_k-CqBWeA)

------
zak_mc_kracken
> To create a patch one must fork the repository publicly (weird and
> unnecessary).

I think it's very fair to demand that a contributor sync and build the entire
app before they're allowed to submit a patch.

Interestingly, I note the Go team says it's "unnecessary" but doesn't provide
their alternative.

~~~
dsymonds
We expect contributors to sync and run all the tests. The _public_ forking is
the weird and unnecessary part.

Gerrit is the alternative used, and contributors have a full local clone of
the git repo, with their commit, and that's private on their machine until
they push it to Gerrit for review.

~~~
zak_mc_kracken
What is so disturbing about the "public" part that it's worth discarding the
whole approach?

~~~
dsymonds
No-one said "disturbing". It's weird if you've come from an environment where
you do your individual work in private, and only show it to people when it's
ready to be committed/merged.

No-one said that that point alone is worth discarding the whole approach.
You're nitpicking a single bullet point from around a half dozen points that
stacked up.

------
arsv
> Where we're at in February 2015

Still producing 1.3M hello world executables.

I wonder if rewriting the linker from C to Go will be primarily a rewrite, or
whether they will start fixing it somehow.

~~~
georgemcbay
Yeah, what an outrage this is.

1.3 megabytes. That's like $0.00004 USD worth of hard drive space.

Does the go team think we are all rich or something?

~~~
SBullet01
I think the point he tried to make was that if "hello world" alone produces a
1.3 megabyte executable, the file size of a fairly complicated program made in
Go will be significantly larger than that of the same program implemented in
another language.

~~~
enneff
Which makes no sense anyway. The reason a Hello World program in Go is large
is that it must include the baseline runtime support that goes into any Go
program. A 10-line program won't be 13MB.

------
bdcravens
Most of my career has been spent doing web programming, so maybe I'm ignorant,
but if you have to build Go with a Go binary instead of C, doesn't this break
OSS's value proposition? (Obviously I'd have to start with a C compiler, but
that feels a bit purer in my mind, as I can start with a pretty bare-bones
OS.)

~~~
enneff
I don't understand the basis of your question. What does the bootstrapping
process for a compiler have to do with "OSS's value proposition?" (By OSS I'm
assuming you mean "Open Source Software"?)

To build gcc, you need a C compiler. To build Go, you need a Go compiler. To
compile anything you need to start with some kind of compiler.

~~~
leereeves
I think bdcravens means that to install Go from source on, say, a new Linux
box, we will have to download a binary version of Go first, even though we
probably already have gcc.

What happens if this catches on, and PyPy replaces CPython, and other
languages do the same?

Hassles for those who prefer to install from source, and potentially a lot of
duplication of effort writing compiler backends in every language.

~~~
nosefrog
> What happens if this catches on

I don't understand your comment, because we already live in your nightmare
scenario:
[http://en.wikipedia.org/wiki/Bootstrapping_%28compilers%29#List_of_languages_having_self-hosting_compilers](http://en.wikipedia.org/wiki/Bootstrapping_%28compilers%29#List_of_languages_having_self-hosting_compilers)

In practice, it isn't a big deal. When was the last time you worried about the
language your compiler was written in?

> potentially a lot of duplication of effort writing compiler backends in
> every language.

That's also the state of things for every compiler that doesn't use LLVM or
compile to another language.

~~~
leereeves
I agree the installation issues are fairly trivial; they were just a response
to the original comment.

But the duplication of effort (which won't end when the "GoGo" compiler is
ready to replace cgo), and a potential freeze in Go while "GoGo" catches up,
are more serious drawbacks.

Yes, that is a trade-off other compiler teams have made, and it may be the
right choice for Go as well, but it should be (and no doubt has been)
considered.

------
rsarsarsa
I think Go is a failure compared to Rust:

1. Not memory safe with multiple threads.

2. No generics support.

3. Error handling is full of pain compared to Rust's Result&lt;T&gt;, Option&lt;T&gt; and
try!.

4. The GC can't be disabled.

5. No RAII support; defer can be forgotten.

Lightweight threads (spawn) and channels (thread-safe FIFOs) already exist in
Rust, which makes Go rather meaningless. If you still think Go has any
advantage, please tell me.

~~~
Skinney
Go is stable, whereas Rust is not. Go also targets other use cases than Rust,
and so the two are not directly comparable.

1. Rust cannot guarantee multi-threaded memory safety in all cases. Care, as
in Go, must still be taken.

2. I agree with you here; I think Go would be better with generics. But
apparently, most people don't have a problem with this.

3. Rust's and Go's error handling is based on the same principle: force the
user to deal with errors, instead of ignoring them. While Go's way of handling
things can be a little more verbose, in principle I don't see the big
difference.

4. A lot of people will argue that this is a good thing. Using a GC by default
makes certain things easier, like writing data structures (especially
immutable ones) without reaching for 'unsafe' code. Having a GC also means you
don't have to mind memory fragmentation, as you do with manual malloc/free
based allocation.

5. RAII is great, but loses some of its use in a GC-based language. Defer is,
IMHO, a much better option than the other solutions I've seen in other GC
languages (Java's try-with-resources and C#'s using).

Just because you can do the same thing in language X doesn't make language Y
obsolete.

The advantage of Go, for me, is less verbose code (implicit interfaces for the
win) and a fantastic, stable and huge standard library. I also like the strict
compiler. Having a GC by default makes certain things easier, and for most
tasks I don't need the predictability that you get with manual memory
management.

~~~
Jweb_Guru
While I do generally agree with what you've written, I do want to take issue
with point (1). Rust _is_ actually intended to guarantee multi-threaded memory
safety in safe code. Period. If you can get it to behave otherwise, it's a bug
in Rust.

~~~
Skinney
True. What I really meant is that safe code can interact with unsafe code, and
thus everything working correctly cannot be guaranteed. Of course, this is a
bug in the unsafe code; my bad.

