I played with Go a bit and I like the idea of Go replacing C/C++ to write *NIX tools (no, not the GUI ones, the command-line ones, the daemons, the utilities, the server/collectors).
There's still findutils, inetutils, binutils and other collections part of the GNU base system, though. The core daemons are all separate projects, as well.
When will Rust be stable? Many moons ago I was told that December 2014 would be the release date of the stable version of Rust. Yet it is now December and I see no stable Rust. When is it coming?
If the release candidates are a few weeks out, then it must be in beta and very close to stable? Can anyone who's using Rust daily discuss its performance and reliability? It should be ready for weekend projects.
The project has 2,100 open issues and 8,500 closed issues.
Stability more or less comes down to the community moving away from targeting nightly, and that can only happen once all of the fires are put out. There are still a lot of breaking changes and clean-up happening at the eleventh hour.
Some people consider stable to mean that the language is "production ready", which often implies a complete set of common libraries. There is still a lot of work to be done in this regard after 1.0. Doing the nightly build whack-a-mole is too frustrating for many, though, so 1.0 will help a lot for developing the ecosystem that a "production ready" language needs.
Weekend projects, certainly, but I'd still hold off on doing anything super serious, because there are still a few breaking changes waiting in the wings (such as the finishing touches on closure reform), as well as a few potential last-minute syntax tweaks still being considered (though if accepted, those should be trivial enough to migrate via an automated tool).
The biggest stumbling block there, for me, is that it's impossible to write shared libraries in Go. You can't write libraries in Go that can be used from anything other than Go, so until you've converted everything over to Go, you'll still need to write libssl.so and friends in C. Go is a great application language, but it doesn't play well enough with others to really be a systems language as long as you can't write libraries in it that other languages can use.
But shared libraries aren't the only way or even (often) the best way to share code. Especially when we're talking about UNIX tools, it's pretty easy to have one process run another process. Go has a really quick startup time which makes that practical.
There are absolutely some scenarios that can be adequately (or better) handled by communicating with another process! I completely agree with you. There's still a large volume of use cases where linking code in-process is a very good fit for the problem.
I am looking forward to Go someday being suitable for producing native shared libraries. If they ever prioritize and complete that work, I'll absolutely take advantage of it, and it will extend the situations where I can reasonably use Go.
I moved from C# to Go, and one of the things I really loved is the sudden absence of DLL-Hell. I never have to worry that installing my program on another machine would suddenly cause it to bork.
I guess I'm happy importing everything into monster huge static binaries and not worrying :)
Yeah, I get that. But my actual lived experience of working with C# is Visual Studio, Windows, and DLL hell.
Whereas the Go toolchain is very much a part of the language. The Go team doesn't tell you whether tabs or spaces are canonical; they tell you to run go fmt on your code.
DLL hell was solved in .NET a decade ago. If you need specific versions, ship your own version in the local directory. Otherwise, .NET assemblies solve the problem elegantly.
Go needs to support shared libraries if it wants to move to the next level of adoption.
Marcus, I'm also giving Go a try at the moment (C# is my day job). What are you using tooling-wise? I'm trying out LiteIDE at the moment, and that's the best experience I've had yet.
So far my view is that I really like the language but I'm finding the tooling pretty unappealing. Wonder if I'm missing a trick?
I find that Sublime Text with the GoSublime plugin works pretty well. There's "Intellisense" for imported packages and GsLint actively detects syntax errors. Gofmt also takes care of indentation and formatting for you at every save.
My experience: you kinda have to rethink what an IDE is. Go comes from the *nix tradition where the "coding environment" is a text editor and a few terminal windows.
I use GoConvey to automagically run my tests and tell me what I broke. I keep this running in a terminal window (so I can CTRL-C stop it) and a browser window.
I have another terminal tab on the same window for godoc, and that's serving another browser window.
I have a terminal tab for git commands and file manipulation (this is the one the terminal window is normally on).
I moved away from Sublime Text to Atom (with go-plus) because, while the load times suck, the Go language support in the tooling is way better. Not that GoSublime is bad, it's not, but Atom just works better for me. I tried vim and loved it, but the support for the Go tools was never quite there, and configuring the bloody thing was a nightmare.
So every time I save a file, it automagically runs gofmt and goimports, compiles and runs my tests in GoConvey (and tells me what I broke), and colours my test coverage right in the editor.
It's not quite the integrated experience that Visual Studio is, but it's incredibly powerful, and conforms to the unix philosophy of lots of good, small programs working together.
And just for comparison; today I spent 20 mins trying to get a dark theme on VS2010 and had to give up (or hand-pick every colour in what looked like around 100 options)... whereas I have dark themes galore on everything else ;)
There are a lot of good choices out there. Personally, I use vim plus "gocode" to do context-sensitive code completion. gocode is able to parse go, which means that its autocompletion results are pretty good.
Tooling actually is a huge advantage. I can install the latest Go distribution and get everything I need to work (minus a text editor).
On the other hand, on top of the JDK I have to install, e.g., Eclipse, Maven, etc. Not to mention all the crazy frameworks I would need to get any real project off the ground.
Similarly with .NET I have to install a whole host of stuff with Nuget, LINQPad, etc.
> My experience: you kinda have to rethink what an IDE is.
An IDE is an integrated development environment; emphasis on integrated. Having to put together a mixed assortment of tools for your development environment yourself doesn't seem to qualify as integrated. Though of course someone could come along and make a program that bundles a lot of tools together and presents them as a coherent, integrated package.
(Personally I'm sceptical of IDEs myself and tend to believe more in being able to make your own work-flow and programming environment, which a lot of loosely coupled (ideally not coupled at all) tools makes simpler.)
Integration is a matter of degree. The trend with recent editors (TextMate, Sublime, Atom) is that they expose more abilities to integrate with simple editor functions. I don't see how this doesn't qualify as integrated.
I suppose you could work around that by going the busybox route: provide multiple tools as a single binary and then just hard-link it under the different command names. But I guess there might be other problems with that.
The rest is pretty much all runtime; the hello world itself should take about 10K, which is more or less the size of the C or dynamically linked Rust versions. (Rust also uses static linking by default, and the hello world is 300K in that case.)
To see how much the runtime contributes to the size of the binary, use an empty main. For hello world you bring in the fmt package, which is a relative heavyweight:
$ cat > t.go
package main
func main() {}
$ go build t.go
$ wc -c t
623280 t
$ nm t | wc -l
1029
$ cat > t.go
package main
import "fmt"
func main() { fmt.Println("hello world") }
$ go build t.go
$ nm t | wc -l
2541
Go binaries start at around 2MB on my machine, so not as bad as 5MB, but still much bigger than similar utils in C. Storage is getting dramatically cheaper all the time, though, so the size is becoming less important.
Shared libraries for the std lib would definitely be nice for utils, but they have drawbacks too (DLL hell).
The problem is not in storing the binary; the problem is that you have to load the binary into memory to execute it. So that's 5MB you have to transfer from disk to memory and keep there until you're done (and this is not even counting runtime memory use).
Not true at all on systems that use demand paging (i.e. all systems of the past 30 years). Only what's used is paged in, and it's paged in only when it's used. A big part of a Go binary is DWARF info, which is not paged in during regular operation. Much of the code is not paged in either. The code is currently bloated because the linker can't always statically determine which methods it can elide (it does that for functions), since Go allows querying the dynamic type of an interface at runtime (reflection, type switches and type assertions, and all that). Alan Donovan has done some promising static analysis work that will allow the linker to elide more code than it does now.
If memory use is a huge issue, you can get down to 400KB or so without using the stdlib - just copy in the isolated code you need if your tool actually requires it. Even with liberal use of the stdlib it's not 5MB on average, more like 1-2MB - where did that 5MB figure come from? It is possible to have smaller binaries with static linking but you have to be careful. Of course it'd be nicer if they were more the size of tiny linux utils (30KB), but they'd need dynamic linking for that, which brings up other issues.
If you are severely resource constrained and need to have lots of tiny programs resident in memory at once, then Go with static linking is not a good choice. But does this preclude using Go tools on modern servers, desktops, and phones, where many binaries are typically >1MB today and many run only for short periods anyway?
> Shared libraries for the std lib would definitely be nice for utils, but they do have drawbacks too (dll-hell).
There is also .a/.lib hell, the only difference is when it gets handled.
On Windows, DLL hell has been a solved problem since Windows 2000 for any developer who bothers to follow Microsoft's guidelines instead of copying DLLs into system directories.
By referencing DLLs I didn't mean to imply this is a Windows-only problem, just that using dynamically linked libraries shifts the burden of dealing with dependency conflicts to the runtime machine, and away from the developer at compile time.
Are shared libraries a memory win in production environments? For shared libraries to be a win, you have to be running different executables on the same machine (or VM) which share libraries of significant size relative to the memory use of the program itself.
You also have to be using a big fraction of each shared library. With static linking, if you need "cos", you get that loaded; with a shared library, you bring in the entire math library.
Modern production environments often involve only one program per VM, or multiple instances of the same program. In such cases, shared libraries are a lose.
Go is never going to make much inroad in that area until it supports shared libraries, and its creators seem to be strongly opposed to that idea (while claiming their goal is to replace C++...).
There are many different aspects to shared object support; there is support for some of these aspects today (you can link against shared libraries, for example), and there are ongoing conversations about supporting other models.
Really exciting to hear a focus on stability and performance improvements. The GC improvements are absolutely necessary before anything else, in my opinion. Go is new and doesn't have the benefit of all the hard work and learning that happened in other languages, particularly Java and the JVM. We're playing catch-up there, to be sure.
For what it's worth to those new to Go, it's well-worth learning. It's not a perfect language but I've thoroughly enjoyed working with it. It strikes a lot of balances very well.
Per http://golang.org/s/go14android, apps that don't need the Android UI (e.g. if it's only using OpenGL) should not need Java and could be written in pure Go.
Even if Android is running ART rather than Dalvik? I have been debating whether you can write code in Go that will work with the JVM GC and be linked through ART... not sure if it is possible.
I have mixed feelings about this roadmap. On the one hand it's refreshing to see that they are not pushing new features recklessly and rather focus on core improvements. On the other hand, I think many would agree that the language is ripe for a little bit of innovation at this point.
Obligatory comment regarding generics. God I can't wait - if they add generics and a decent package manager, I'm gonna be pushing hard for Go at our shop.
What sort of "decent package manager" do you mean? `go get` is pretty solid as far as I can tell; the only thing it's missing is the ability to lock to a particular version...
It's missing the ability to publish or lock to a particular version, which is a deal-breaker for shops with >5 people, and the Go maintainers don't seem to see it as a problem, making it a permanent deal-breaker.
I saw some posts on the mailing list where their position wasn't that it isn't a problem, but that no obviously-right solution is yet known, so it's better to let it shake out in third-party land for now, which seems reasonable. (That's all from memory, so salt appropriately.)
FWIW, I have been using godeps and it seems nice so far. I'd definitely recommend taking a look.
godep or glide, though hackish, appear interesting. The CoreOS devs have also been contributing back to godep for that reason [1], which is a plus. There's also a good discussion here [2] on that very topic.
Last I checked godep, the only thing I didn't like was the weird things you had to do with your project layout, but the way ahead is promising (and maybe it's not so bad now). Certainly it's much better than gopkg.in.
Though I do agree. It would be nice to have this sort of tooling as part of the Go distribution.
But it requires the use of interface{}, which reduces the usefulness of the type system, and you still can't do (what I guess must be crazy) stuff like return a slice.
"the goal is to eliminate C from the source tree so that—except for programs that use cgo"
What does the future hold for cgo? Yesterday I ran into erratic DNS success with a cross-compiled program (Windows to Linux). Haven't had a chance to look further, but I'm wondering when I'll run into similar snags again.
Can you please raise a bug report, or let me know what you find here? I work on the Go native DNS library for Linux, which IIRC is used when cross-compiling rather than libc.
$ ./dns-win-xc.bin
err: lookup feeds.feedburner.com on [178.22.66.167]:53: no such host
$ ./dns-lin.bin
addr: [173.194.116.64 173.194.116.69 173.194.116.71 173.194.116.65 173.194.116.73 173.194.116.70 173.194.116.67 173.194.116.66 173.194.116.72 173.194.116.78 173.194.116.68 2a00:1450:400c:c0a::76]
Win 8.1, go1.3.3, CentOS 7. There's a downvoter for some reason; maybe they have something to add...? Don't let me lead you astray: I haven't yet read the article I linked to, but a quick glimpse suggested to me that this isn't a bug.
Interesting that a lot of serious and important tools today are being written in Go (like Docker and Rocket, which made the news again today). Very impressive for a young programming language that's still at version 1.3.
Except that Docker and probably Rocket work really well. And the speed of development on the projects looks to have benefitted from using Go.
Speed of development is the killer feature that Go has. For me and most people I know who enjoy the language it provides an almost frictionless development experience that makes us more productive than any other language I know.
I'm a language geek, so I totally get the whole "Why doesn't Go have {Generics, Typeclasses, Hindley-Milner Type Inference, ...}" arguments. Sometimes I miss them too. But then I remember how I literally cut months off a personal project's development time using Go. That's when I remember to be grateful for the core team's approach of making a purely pragmatic language that is geared toward frictionless development above all else.
> Except that Docker and probably Rocket work really well. And the speed of development on the projects looks to have benefitted from using Go.
By this you mean that people adopted Go because it is good (in a technical sense), and because of the speed of development rather than because it was "new and shiny"? Fair point.
> I'm a language geek so I totally get the whole "Why doesn't go have {Generics,Typeclasses,Hindley-Milner Type Inference,...}" arguments.
Another counter-argument is that Go is clearly not an elitist attempt to distance oneself from the mediocre programmer. It is an attempt to distance itself from languages that have a huge surface area. In other words, your link's premise doesn't fit.
That was but one of the points mentioned in the article. Take it away and it still holds; many of us programmers seem to be attracted to new and shiny things. No, they don't have to be super-complicated, ground-breaking languages that make you feel like a guitarist who has lost his left arm and now needs to learn to play using only his right. New things are still shiny, even if things kind of like them already exist; those older things have just lost their twinkle with age. If anything, there seem to be many programmers who complain that new trends are just reinventing 10-20 year old wheels, that trends are cyclic, and so on.
I'm not an expert on language design, but these improvements ("compiler, assembler, linker, and runtime for 1.5 will be entirely in Go") look like they could lead to large enhancements in 2.0.