The slides said Go is easier than Erlang. That is true in some respects, namely syntax (Elixir to the rescue here!). Go is also easier because of static typing. The speed of compilation is awesome. Building an executable that only depends on libc -- yes please, I love that about Go.
I am scared about safety and fault tolerance. The ability to share memory is good for performance, but a shared heap in a large concurrent application is scary to me. I'm not sure what I think about having multiple channels and running select over them, with select then pseudo-randomly choosing one channel among those that are ready. Erlang's focus on processes/actors rather than channels, with each actor having a single mailbox, somehow makes more sense to me. Having a shared heap means faster data processing, but it also means GC cannot be fully concurrent.
Now, passing channels inside other channels is cool, but also kind of scary to me. You can pass channels inside channels inside channels, and make some kind of lazy computation chain.
Also, I have been wondering (but couldn't find an answer with a quick search): is it possible to build supervision trees out of goroutines, the way one builds supervision trees in Erlang's OTP? Say I want a chat client goroutine to keep the chat up, but I want to monitor it in case it crashes (panics?) and then spawn another one, perhaps. Is that possible?
There is no real way to grab a goroutine from the outside and tell it what to do -- you have to use its channels, which if bad things have happened are probably broken. There are some systems built that catch panics and send messages (from inside the goroutine, of course) on a channel, but I haven't used them.
That might come across as unduly negative about Go, but Go has major upsides: great tooling, a language that is easy to understand after a tiny amount of time, a culture of explicit (even at times non-DRY) code, an amazing deploy model, and code that is generally just very understandable, even to newcomers.
> which if bad things have happened are probably broken.
It reminds me of Joe Armstrong's quote about how it is hard to perform surgery on yourself -- in other words, having the component that is failing try to fix itself. I guess I would have to read more about design patterns. I saw a presentation about concurrency patterns in Go, but it covered very simple toy examples; I am more interested in larger concurrent applications, handling multiple connections for example.
Having a part of your program fail and be restarted if needed -- from what I understand so far, that isn't possible. It would have to be done at the OS process level instead.
At the Unix level it would be possible if your process manager supports dependencies between processes and you're able to connect pipes between them. This would also be much heavier than Erlang processes in terms of memory.
But that isn't true anymore. Lots of people and teams are using Go. You don't need to explain yourself. Just write stuff in Go. Maybe even write about the process of how you wrote your app in Go, but stop making it about the language. It's there, it works. No need to apologize, evangelize or explain yourself.
Does the language provide some facility that makes Docker easier to write and easier to scale than writing it in Python? Totally relevant. People experiment with new languages, and when they hear "Go is scalable" and "Go is about concurrency" - what do those things mean to Docker's inventors? As a library writer I want to understand what makes Go their choice when Ruby, Python, C, and Java seem to be the primary languages people use to write system tools.
If I were to write a tool similar to Docker in Go, what patterns could I learn from reading Docker's source?
I suspect in many cases the answer is 'because I've invested time in learning it' rather than technical benefits. Not that this is a bad reason.
Imagine if presentation software generated a coded tone for each transition that could be used later on to merge the recorded audio of the speaker and the slides into an audio slideshow.
Then having a video of your presentation would just be a side effect of your presentation. And it would focus on the material you wanted to present, not your image.
Although videos of speakers are nice, often the material the speaker is referring to isn't visible. It really should be the visual focus.
And if there's a video, I'm going to immediately go somewhere else. I don't have time to be watching hour-long videos when the content could be summarized in something I could read in 5 minutes.
The only real problem is that it seems to detect slides based on the delta from the previous slide instead of getting information from the actual presentation. This means that if you switch to the lecturer's camera (which lets the lecturer display text written on paper on a big screen), or if the lecturer writes on a slide, it counts as a new slide.
If the presentation software makers wanted to close the loop, they could give you an app that records the audio, listens for the tones, and uploads the recording to their site where you've saved your slides, automatically putting it all together where anybody can see the final product.
I've wished for so long that slides and presentations were synced. The best I've seen are videos like DEF CON's, which have a small window for the presenter next to the slideshow.
That's a cool idea.
I wish it were more common, though.
But putting these together was a manual effort. This idea would eliminate that.
"go get, can't pin a particular revision"
- There's an elegant solution by now, see https://github.com/kr/godep
"must deal with private repos manually"
- go get can get them via ssh using key auth, just do https://gist.github.com/shurcooL/6927554
godep is definitely cool. But it's a bit sad that it has to be an add-on. Don't get me wrong -- it's a great thing that Go gives us a good build system, a good test system, a good dependency system, etc. But if we end up using another build system, another test system, another dependency system, because the stock ones aren't powerful enough, it kind of ruins it :-)
Regarding private repos: IMHO it's much simpler to go to your $GOPATH/src/github.com/blahblah and `git clone firstname.lastname@example.org:blahblah/repo`. But if you know of a difference between the two approaches, I would be happy to hear it!
That's the thing, godep isn't another build system. It's built on _top_ of the existing Go build system. It augments it, doesn't replace it or replicate what's already there.
Go has tools for building at master; godep adds a file that keeps track of explicit commits and creates a sandbox GOPATH with those revisions checked out. Nothing you couldn't do by hand; godep simply automates the task, making it reproducible.
The reason Go doesn't have everything is that the developers haven't found a very good solution for everything yet. So instead of providing something crappy, they just don't include it... until a later time. This gives the community time to experiment and find a good solution in the meantime.
Honestly, I'm very glad they're taking this approach.
The better (but less well-known) way of running it is to install Docker 0.7-RC4 by running `curl http://test.docker.io | sudo sh`.
We definitely don't have that at my job. We have to be conscious of the dev, test, build and production environments when coding. Ops has to be more conscious of what they are deploying than they'd like. It's not that either team doesn't care about the other's work or the big picture.
I actually kind of like Go's scheme here. One thing I never quite liked in C/C++ was huge directories full of source files where it wasn't clear what code was part of what binaries. If something is only part of one binary, why not make that obvious by putting it in another package?
By using Go, it avoids a lot of existing biases. I think that's what that slide is referring to.
If the reason is that neither Python nor Ruby can do concurrency right out of the box like Go does, then sure, this could be a good reason, because people in the Python and Ruby worlds would argue over which set of tools to use.
I mean, a Python coder and a Ruby coder can argue that Go is not the way to go; static compilation, like whattttttt? Strict typing... like what?
I am not very convinced, and I think that slide is pretty biased - it basically doesn't justify why Go was picked, and its tone makes me think Go is being presented as the solution to all the disputes we have in other languages. We have plenty of Go vs Python vs Ruby threads.
Indeed, the "OMG GO!1!" tone of many HN stories and comments on the language suggests that many are diving in headfirst...
My understanding is that Docker is strongly tied to Linux by virtue of using Linux containers. So how does Docker benefit from the multi-arch build facilities of Go, if it only runs on Linux?
And just today there was a breakthrough in getting Docker running on the Raspberry Pi: https://news.ycombinator.com/item?id=6708565
It looks like Docker provides binaries only for x86-64 Ubuntu. So slide 24 seems to be more "we think this is cool in theory" than "this really helps us."
It would be cool if IBM sponsored more PowerPC ports of things. They do give free access to PPC machines, but they should also have people who, e.g., write Go backends for ppc64.
"GCC 4.8.2 provides a complete implementation of the Go 1.1.2 release."
"[gcc] Go has been tested on GNU/Linux and Solaris platforms for various processors including x86, x86_64, PowerPC, SPARC, and Alpha. It may work on other platforms as well."
The original compiler, created by the Go team at Google, doesn't work on many platforms. But Go is a language with multiple compiler implementations. Currently gccgo is the only other implementation I know of that would be useful, but there is work on an LLVM frontend too.
Take into consideration that, precisely because of the design choices behind the language as it's currently specified (focus on simplicity in the language and implementation, e.g. the explicit lack of dynamic linking), it's reasonable to expect that your application will work with a different compiler (compared with C++, at least for those who remember more troubled times).
The server-side component only runs on Linux for now, and while it's officially supported only on x86_64, people have confirmed that it can run on i386 and ARM v6/v7/v8.
Later releases will also target FreeBSD, Solaris, and possibly OS X.
The client-side component already runs on OS X, and could possibly run on Windows as well.
However, there is currently no interest at all in running Docker on AIX, HP/UX, Plan9; or on Sparc, Power, s390, or Mips platforms.