The State of Go (golang.org)
318 points by crawshaw on May 27, 2015 | 167 comments



Nice little change in syntax.

    m := map[Point]string{
        Point{29.935523, 52.891566}:   "Persepolis",
        Point{-25.352594, 131.034361}: "Uluru",
        Point{37.422455, -122.084306}: "Googleplex",
    }
may now be written as:

    m := map[Point]string{
        {29.935523, 52.891566}:   "Persepolis",
        {-25.352594, 131.034361}: "Uluru",
        {37.422455, -122.084306}: "Googleplex",
    }


I'm puzzled about the asymmetry of the syntax. What's the reason behind

    map[Point]string
as opposed to something more symmetrical?


It makes sense if you're used to Go. In Go function declarations, return types come after the argument list, e.g.:

    func doSomething(input string) string { ... }
The Go map declaration syntax is analogous to the function declaration syntax: the keyword (map, analogous to the func keyword), followed by the type of the keys in brackets (analogous to the argument type in parentheses), followed by the type of the values (analogous to the return type).
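
A minimal sketch of the parallel, reusing the Point type from above:

    // The shapes line up: key type in brackets, value type after.
    var lookup func(Point) string // takes a Point, returns a string
    var table map[Point]string    // indexed by Point, yields a string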


I've never seen that explained so succinctly. Thank you.


It makes little sense, but since Go doesn't have user-defined parametric types, it makes no difference.

Now I wish I had a million dollars and a better brain so I could work full time on a proper language that would fix Go's issues. What a missed opportunity to create something everybody would be comfortable with. It could have been the biggest language of the next 20 years in webdev; CSP is a great concurrency model. At least it will inspire some people in the future, I hope. Rust is cool but doesn't have CSP built in.


> something everybody would be comfortable with

Not possible.


This has worked for structs for a very long time; the change makes maps consistent too.
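
For instance, element types could already be elided in slice literals (a small sketch, using a Point struct like the one above):

    type Point struct{ Lat, Long float64 }

    var cities = []Point{
        {29.935523, 52.891566},   // shorthand for Point{29.935523, 52.891566}
        {-25.352594, 131.034361}, // likewise
    }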


I came into the comments to say that I very much dislike this change. I much prefer the extra syntax to not really knowing what '{29.935523, 52.891566}' actually means.


It's in the declaration of the map. Rewriting it over and over is redundant encoding.


They seem to say you may choose to continue specifying it, as you prefer (?)


Sure, I can write code the way I like. But the readability of code affects everyone, not just me.

Still, 4ad's comment above mine changed my mind. I didn't consider that you can do this with structs already (in spite of the fact that I already do that in my code), and I prefer the consistency to the lack of verbosity.


Much agreed. Call me a masochist, but I prefer extra typing if it means better readability. And while it's subjective, retyping Point{} is much clearer to me than {}. It just looks...wrong.


Pretty neat: Go is now written in Go ("Go 1.5 has no C code in the tool chain or runtime."), and Go shared libraries are interoperable with C.

I'm really a Go tinkerer, but I like the language.

I didn't realize garbage collection was so expensive that the goal is to only have it run 20% of the time. But it's a good goal.

"Run Go application code for at least 40ms out of every 50ms."


That goal is referring to latency spikes. GC is usually much less than 20% of the time on average, but it may take, say, 200ms in one go and then run for 2 seconds without another collection. It seems their goal is to ensure there's no large pauses like that, so the 200ms would be spread across the 2 seconds as smaller time slices instead.
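
For the curious, here's a minimal sketch of one way to watch pause times from inside a program, via runtime.ReadMemStats (the GODEBUG=gctrace=1 env var prints similar numbers):

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        var s runtime.MemStats
        runtime.GC() // force at least one collection
        runtime.ReadMemStats(&s)
        // PauseNs is a circular buffer of recent stop-the-world pause times.
        last := s.PauseNs[(s.NumGC+255)%256]
        fmt.Printf("GCs: %d, last pause: %d ns\n", s.NumGC, last)
    }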


> Go shared libraries interoperable with C

How does this work? As far as I understand Go is moving to a copying collector, so any pointers passed to C may become wild pointers when garbage collection is performed.

On go-nuts, Go's developers have been warning that passing arrays to C by getting the address of the first element of the slice will be unsafe for this reason.


Some GC'd languages pause garbage collection while calling out to native code. Under this scheme it's no problem to call C to do something like multiply matrices or compute hash functions. The problems come when you want the C code to call back into GC'd code, for example an event loop or a huge computation that wants to report its progress.

It's hard. Mixing manual and automatic memory management is tricky.


Dang, I always figured that would be unsafe eventually.

I wonder, does this make stuff like this unsafe:

    var thing C.thing
    C.somefunc(&thing)

i.e. can a stack address change now?


The Go spec doesn't pin down the stack or the heap, so I guess that in principle it's fair game for the collector to relocate things in memory. In some future version it may only be safe to use memory in C-land that was allocated in C-land.


Wouldn't it be safe to use in C-land as long as you kept a reference to it in Go-land? Something like:

  Create memory in Go-land
  Use memory in C-land
  Dispose of reference to memory in Go-land
That becomes a bit problematic if the C-land use is for a long-running process - you need a Go object that owns the memory, with a lifetime that (at least) matches the lifetime of the C code's execution.


Currently, yes. Once Go has a moving garbage collector (soon-ish), no. A moving garbage collector can move objects around, which would invalidate pointers held by C, since the GC can run concurrently with C code (and even if it didn't, it would invalidate pointers to objects allocated in Go that are stored in C structs).

To make things work, you have to allocate data used in C in C, and pass Go variables only by value.
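
A rough cgo sketch of that pattern; consume here is a hypothetical stand-in for whatever C API you're actually calling:

    package main

    /*
    #include <stdlib.h>
    #include <string.h>
    static void consume(const char *p, size_t n) { (void)p; (void)n; }
    */
    import "C"

    import "unsafe"

    // passToC copies b into C-owned memory, so the C side never holds
    // a pointer into Go-managed (potentially moving) memory.
    func passToC(b []byte) {
        p := C.malloc(C.size_t(len(b)))
        defer C.free(p)
        C.memcpy(p, unsafe.Pointer(&b[0]), C.size_t(len(b)))
        C.consume((*C.char)(p), C.size_t(len(b)))
    }

    func main() { passToC([]byte("hello")) }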


I actually was expecting it to have a greater hit on performance.

After all, a Go executable can be 2x-8x slower than a C one (which is still good enough for many uses).


I always have this question:

Since a large project like Golang has so many auto-build tools and test OSes, how hard would it be to provide a binary download for e.g. Ubuntu 14.04 LTS? You know, not tarballs, but actual static binaries as deb packages installable via apt-get.

Good examples:

http://wiki.nginx.org/Install

https://www.percona.com/doc/percona-server/5.6/installation/...

I root for the open source movement, but installing something on a popular arch/OS by compiling from source every time is just meaningless power consumption and added CO2 emissions.


golang.org offers two types of downloads for Linux: a source tarball and a pre-compiled tarball. If you download the pre-compiled tarball, you just have to extract it somewhere and set some environment variables (all explained in detail on golang.org).

If you really want to install Go via your package manager, you can install the golang package from the Ubuntu repositories. However, this package is naturally on an outdated version of Go. Also worth mentioning is godeb[1] which can generate and install a .deb of any version of Go for you.

[1]: http://blog.labix.org/2013/06/15/in-flight-deb-packages-of-g...


> extract it somewhere and set some environment variables

That's exactly the problem a package would solve; well, that plus easy update capabilities.


To install, simply download the tarball, extract it to /usr/local/, and add /usr/local/go/bin to your $PATH. If you want to install it to a different location (I like to keep it in my home directory), you just need to set $GOROOT to that path and set your $PATH accordingly. To update, just remove the old go dir and extract the new tarball in the same place. I'm OK with it.


Simple is never that simple. There is always something that happens to suck in IT. You think you have done everything right, and sometimes a little misinterpretation screws everything up and you lose a lot of time figuring out what.

That being said, I had no problem installing Go on my Ubuntu box, despite the fact that I consider the installation process to suck.


$ wget https://storage.googleapis.com/golang/go1.4.2.linux-amd64.ta...

$ tar -C /usr/local -xzf go1.4.2.linux-amd64.tar.gz

$ echo "PATH=$PATH:/usr/local/go/bin" >> /etc/profile

That's it, it is that simple. All you have to do now is set your $GOPATH, and you would need to do that even if you had installed Go through a package.


> If you really want to install Go via your package manager,

Don't most people want to install X via their package manager?

> However, this package is naturally on an outdated version of Go

I can't tell if you're being sarcastic or not here - why would a package such as this be outdated?


>I can't tell if you're being sarcastic or not here - why would a package such as this be outdated?

Because of the way software is packaged in most Linux distributions. The upstream version is frozen in that particular release.


One would hope that being six to ten months behind the development branch wouldn't make very much difference. If it does, I'd argue Go isn't mature enough for general use yet.


The Go project makes stable releases every 6 months. They are extensively tested and fully compatible with all previous major releases. But we do introduce new tools, libraries, and minor language features. Developers almost always want to be using the latest stable release of Go. (There's seldom a reason not to.)


It's not behind the development branch, it's behind the stable branch. The stable branch is called stable for a reason: it's stable. You can still use the older versions and they're not bad, but the newer versions have brought improvements, and I see no reason why Ubuntu shouldn't package new stable versions when the Go team is committed to stability.


Golang could host a PPA.


The Ubuntu 14.04 package ships Go 1.2:

http://packages.ubuntu.com/trusty/devel/golang-go


If you don't have experience with packaging, https://github.com/jordansissel/fpm is probably your best bet. And it is super easy to use.



You can download and install the binaries on Ubuntu: http://pokstad.com/2015/02/03/installing-go-binaries-on-ubun...


For debian-esque, I use https://github.com/niemeyer/godeb .


Slide 10 shows the benchmark differences, and I found them a little suspect. The "binary tree" benchmark is the only one that is 25% worse than the old version, but in my opinion it is an important benchmark, because I suppose it creates a lot of objects that need to be garbage collected; at least, that is what binary tree algorithms typically do. The other benchmarks are mostly about formatting and regexping, which allocate far fewer objects on the heap, and those make the results look better.
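
Roughly, that benchmark hammers the allocator with short-lived trees; a minimal sketch of the shape:

    package main

    import "fmt"

    type node struct{ left, right *node }

    // build allocates 2^(depth+1)-1 nodes, all garbage once the tree is dropped.
    func build(depth int) *node {
        if depth == 0 {
            return &node{}
        }
        return &node{build(depth - 1), build(depth - 1)}
    }

    func count(n *node) int {
        if n.left == nil {
            return 1
        }
        return 1 + count(n.left) + count(n.right)
    }

    func main() {
        for i := 0; i < 100; i++ {
            _ = build(16) // each iteration's tree becomes garbage
        }
        fmt.Println("nodes per tree:", count(build(16)))
    }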


> Slide 10 shows the benchmark differences, and I found them a little suspect.

To be fair, they follow a slide about general performance, 4 slides after the GC slides.

And from the concurrent GC slide it looks (as one would expect) like the GC burns significantly more resources overall (it's not just longer wallclock), though it rarely completely stops the world anymore: on the right-hand side, where the old GC would burn 1 unit of CPU for 240ms (3 * 80ms), the new one burns half a unit of CPU for 900ms (plus a full unit for 2ms), which means it's using 452ms of CPU, close to twice as much as the old one. Though the application pauses significantly less (which is good for responsiveness), that's CPU time it doesn't get to use anymore. The concurrent GC also "leaves off" more work for later: the left-hand graphs show relatively low concurrent-GC use for the first 7s, followed by a spike and then much, much longer GC runs (though at progressively lower CPU%).

I don't know about others, but it was my expectation that overall performance would take quite a hit for GC-heavy applications in 1.5.


Yes, if it's referring to Boehm's binary-trees benchmark (as seen on the benchmarks game, for example), it is explicitly designed to be a benchmark of garbage collection. I believe the Boehm GC was tuned using it.


Of course, as with any change to something this fundamental, some programs will get slower while others get faster. That's the nature of trade-offs.

It's not really suspect. The benchmarks reflect execution speed. The new GC trades a little raw throughput for lower GC latency. So you would expect a GC-heavy benchmark to show the greatest speed loss. What matters for go users, though, is the behaviour of real programs. Real programs benefit from the reduced latency of the GC and other compiler optimisations we have made since 1.4. The outcome, in our measurements, is roughly equivalent performance for most programs but with significantly shorter GC pauses.


Shared libraries? I thought "no DLLs" was one of the major design philosophies of Go.


One of the major design philosophies was ease of deployment; DLLs generally complicate that process, but for many purposes they're a necessary tool.

Interop with existing libraries is certainly a worthy enough feature to add to the Go toolchain, and it's an optional feature, so ease of deployment is still a primary concern, but if you need DLLs, this is available.


In particular, you've gotta have DLLs if you want to deploy on Android.


Furthermore, the years during which dynamic linking was not allowed incubated the culture -- even though dynamic linking is now allowed, it really will be a special case!


Special case?


I only meant in an architectural sense. Most Go apps will be standalone, statically linked. Dynamic linking will only be for interop circumstances rather than something everyone does.


There was a big push from the Linux distros, as the maintainers had aneurysms thinking they'd need to rebuild hundreds of Go applications when/if a big bug is found in the Go standard library. They also don't like the idea of having N copies of the standard library's code in N Go applications.

Personally, I find that kind of thinking to be a relic of the past when memory and disk space were expensive and compilation took a long time.

Sure, if you're running on a raspberry pi, a few megs here and there might actually matter... but even my phone has 2gigs of RAM and 32gigs of disk space.


> ...a relic of the past when memory and disk space were expensive ... a few megs here and there might actually matter...

The nature of the modern computer is that CPU speed has increased more than memory speed, so if your data doesn't fit into cache your CPU will be doing nothing, quickly.

Even with shared libraries, this phenomenon has such an impact that the Linux kernel now has a memory de-duplication feature (mostly for VMs):

https://lwn.net/Articles/306704/


This has NOTHING to do with shared libraries vs. statically-linked code.

The size of the binary is completely irrelevant when considering if the code fits in the CPU cache. What's important is the size of code that actually executes. Dynamic linking changes nothing.


I'm not sure what you mean. Absent some deduplication wizardry by the OS, if I have 60 processes using the "same" statically linked copy of libfoo, that's 60 copies of libfoo's instructions and static data fighting for cache and real memory. Dynamic linking & decades-old virtual memory technology cull that to a single copy.


The debian snapshot archive is over 30TB. Some single packages take up to 10 hours to build (although all other packages can be built in parallel if you have enough EC2 credit).


I'm not sure if you're complaining about disk space or time spent compiling, but if it's the latter, the complaint is not valid. Dynamically linked Go programs build just as fast, to a first approximation, as their statically linked counterparts.

That's not at all surprising considering that with static linking, each package is also built only once.


> Personally, I find that kind of thinking to be a relic of the past when memory and disk space were expensive and compilation took a long time.

OTOH, it fits well with the culture of making a lot of small, specialized executables rather than a single, monolithic application.


Not really? That doesn't seem like it has anything to do with static linking vs. shared libraries.

My point was rather that a 3MB executable used to be "large" and now it's not, even if you have 100 of them.


That has generally been true as far as Go packages are concerned, but by default the binaries that Go generates still dynamically link against any C libraries - in particular just about any Go binary will break in the absence of a compatible libc.

I don't know the background on this choice, but I've spoken to a few Go shops that have complained about memory usage when running a bunch of different Go binaries on the same machine, so hopefully this will help there.


I was surprised too given the Plan 9 culture's rejection of dynamic linking.

However, this will probably deprecate or diminish the RPC and code generation techniques for implementing plugin architectures.


Yet Limbo uses it everywhere.


Yet Limbo is used nowhere. Now that's a paradox! ;)


Didn't you get the memo? It got revamped into Go after a conversation with Oberon-2. :)


Robert Griesemer is the Oberon guy in the Go team. He worked for Wirth on Oberon at ETH Zürich.


Anyone on the Oberon world should know that. :)

I was lucky enough to be able to use Native Oberon back in the day.


You can still use it: A2 Bluebottle. I proposed it as a solution to the paranoia of "NSA et al subverted everything! What can we use now and to bootstrap better stuff!?"

(raises hand) The well-documented, portable, simple OS and toolchain written by people with no interest in subversion, available online for free? Not quite an Ubuntu, but more usable than most things people were proposing (e.g. microcontrollers with hand-written Forth, lol). I also thought Modula-2 or Oberon might make a good target for high-level languages such as ML or Haskell if we're doing end-to-end safety arguments: one's claims can build on the other's.


This feature may be aimed at building shared libraries for C. After all, using shared libraries is popular in the C world.


It might also make it possible to write Node/PHP/Ruby/Python libraries in Go instead of C++. I've been looking at Rust for that use, but the ability to compile Go to a C archive throws it back in the running.
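
For illustration, a minimal sketch of what the Go side might look like, built with the new -buildmode=c-archive (which emits a .a plus a generated C header; the Add function is made up):

    package main

    import "C"

    //export Add
    func Add(a, b C.int) C.int {
        return a + b
    }

    // main is required even though the entry point will be C's.
    func main() {}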


That was one of my first thoughts there as well... I like the idea of writing lower-level node modules in Go over C/C++ myself.


I don't think you can compile Go to a C archive that runs without the runtime.


Presumably the necessary parts of the Go runtime would be embedded in the C archive (or a DLL'd libgo.a file); how else would you be able to use the C archive in a C application?


Not sure how this would work, will have to update to 1.5 and see it.


"Go 1.5 provides support for Android and experimental support for iOS."

I thought this day would never come.


It will not come unless the Android team changes its mind.

What the Go team is doing is providing the same level of access that C and C++ enjoy.

Which means that outside of developing games or writing pure business logic, it's all about JNI fun.


I've been using Go 1.4 for iOS development and it works fine. This is just mainstream'ing the support for iOS, which is needed now because things seem to be quite settled.

In fact, I really like using Go on iOS - the ease with which it can be done was astonishing!


As a professional iOS dev and a Golang tinkerer, I've heard nothing about this, but I'm curious; any projects you can point me at?


I wonder how Go can be better than Swift for iOS development.


I think the attractiveness is that you can share Go code between applications on different platforms, which is not possible with Swift as long as the compiler and standard library are not open source (or generally available).


It may not always be better since you'll likely never be able to call from Go into Cocoa libs, but if you can write a shared library and use it on the server, iOS and Android, then it's a big win. Linking a Swift app against the shared Go library and calling directly from Swift into Go sounds pretty great to me.


>call from Go into Cocoa libs

It can be done, thus:

    #cgo LDFLAGS: -Wl,-U,_iosmain,-U,_PopUpDialogBox


Any developer who wants to do this needs only to create these #cgo statements for whatever Cocoa API they want to use, and they become available from the Go context.


> It may not always be better since you'll likely never be able to call from Go into Cocoa libs

IIRC, Go can call any Objective-C code through cgo.


I would never exchange the power of an ML-based language with first-class support from the OS vendor for Go.


I seriously was wondering if Go would ever replace Java on Android because of legal troubles. I'm still wondering.


That will happen as soon as Android team decides to scrap all development of Android and rewrite it in Go.


It could also happen that ChromeOS and Android merge, with Go as the native language and HTML5 apps as another first-class citizen.

I read in the news that iOS 9 will use Swift in some internal apps/libraries too (major internal rewrites?).


Not that I'm complaining, but is this a coordinated effort to push Go to the front page? There are literally 6 submissions on the front page related to Go - what gives?


Go's SF meetup just finished.

https://news.ycombinator.com/item?id=9609427


The same was true during the Microsoft Build event; the difference was that a lot of Microsoft employees registered new (green) HN accounts. Not that I'm complaining, but that looked more like a coordinated effort than this does; Go has minimal PR from Google but is used by many devs around the world and is very big in Asia.


There are a lot of MS employees on HN normally, no need for green accounts. Also, where is Go popular in Asia? I haven't heard much about it in China (it isn't a resume point yet).


Go seems to be getting really popular in India (more than other new languages like Scala/Clojure/Elixir, that is). We had a GopherCon earlier this year, and some large e-commerce companies now use Go in their stack.


That's odd; from within the Go community it's understood that there's a rather large following in China. Speaking for myself, at least.


I've heard this too, but haven't observed it in my part of the industry (systems/PL research, with lots of undergrad and grad interns from top-ranked schools, who are usually the PL early adopters). Scala has been getting more interest lately, and there is growing interest in Rust.


The thing is, Go is particularly uninteresting for PL researchers and geeks seeking novelty. It's more of a boring, pragmatic language. Elixir, Rust et al. are much more fun, IMHO. That might be the explanation.


Yea, but isn't Go supposed to be a systems language? You might expect it to pop up more with systems candidates.


Oh man, dynamic loading of Go libraries is going to be awesome. I'm in the process of developing an HTTP API framework/server (yes, yet another one, but why is a different topic). This will allow me to compile the actual APIs running on the server (multiple APIs can run on one server) as shared libraries and have the server automatically start them.


As far as I know there is still no easy facility to dynamically load libraries at runtime (analogous to 'dlopen' in C). There are shared libraries, and they can be linked at compile time, so it's a step in the right direction.


The spec that's linked from the presentation says that it should be possible (and has an example API usage). Unless they didn't implement it fully, of course, or it's due for a later Go version.

> to support Go packages that want to use plugins, a new package will be added to the standard library: plugin. It will define a function and a type.

https://docs.google.com/document/d/1nr-TQHw_er6GOQRsF6T43GGh...
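
Per the design doc, usage would look roughly like this sketch (not shipped; the file and symbol names here are made up):

    p, err := plugin.Open("myapi.so")
    if err != nil {
        log.Fatal(err)
    }
    f, err := p.Lookup("Serve")
    if err != nil {
        log.Fatal(err)
    }
    f.(func())() // assert the symbol to its expected signature, then call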

[EDIT] I looked at the commits and it looks like it's not in the works, at least on the main repo.


From the penultimate slide, a list of upcoming Golang community conferences:

GoCon - Tokyo, Japan - June 21 http://gocon.connpass.com/event/14063/

GopherCon - Denver, Colorado - July 7-10 http://www.gophercon.com/

GolangUK - London, UK - August 21 http://www.golanguk.com/

GothamGo - New York, NY - October 2 http://gothamgo.com/

dotGo - Paris, France - November 9 http://www.dotgo.eu/


If you're interested in the full talk, we'll have the vid posted at https://www.hakkalabs.co/meetups/gosf within a few days or so.


At the risk of lazy-web, I just came off an attempt at writing a small service in Go. I liked the language, but found that vendoring and package/dependency management sucked up most of my time, and I never reached a workable conclusion. I'd really like to give Go another, heh, go.

I know it's an open topic, and there's no true "one way", but I'm wondering how people here would handle this situation. And perhaps it's my Python-addled brain that's the problem, since GOPATH is kinda sorta like PYTHONPATH, maybe enough to confuse me.

I have a repo at github.com/user/repo. It consists of independent but related applications, most in Python, one in Go. The Go project would be rooted in a "service" subdirectory.

I feel like I'm really missing some critical piece of a Go workflow, because I could never get that to build reliably once I started adding external libraries as dependencies. It seems like I'd have to put the project in github.com/user/repo/service/src/github.com/repo/service, which is what makes me think I don't understand what I'm talking about!

To bring this rambling lazyweb back on topic, I guess I was a little disappointed that vendoring/packaging wasn't mentioned in this state-of-Go talk. I was really hoping I could easily give Go another shot, but in the absence of that, anyone have a suggestion for my setup above?


I have a similar feeling. I don't really like GOPATH, but I'm trying to come to terms with it.

Your Go "project" goes in $GOPATH/src, i.e. your git repo would be in $GOPATH/src/github.com/user/repo

If you have your main package in $REPO/service, then you can build like so: go build github.com/user/repo/service (this can be run from anywhere, as it uses $GOPATH).

If you need to add a dependency, use go get github.com/..., and go get will add it to $GOPATH/src/, which you can then import.
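
So the layout ends up looking like this (assuming $GOPATH=$HOME/go):

    $HOME/go/
        bin/                      # binaries from go install
        pkg/                      # compiled package objects
        src/
            github.com/user/repo/ # your git clone lives here
                service/          # the Go main package
                    main.go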


Hmm, ok. I think I get that -- I'll have to see if I can symlink into the GOPATH; I feel like I read somewhere you can't. I'd like to avoid making my GOPATH be a subdir in the containing repo.

Thanks!


There are some good dependency managers out there. I personally like gb: http://getgb.io/, but there are GoDep and GPM as well (which are more similar to Rust's Cargo in the way they work).


I was surprised to see Plan 9 still supported. Are there any core Go developers who still use Plan 9 as one of their primary work machines? Is Plan 9's userbase growing?


Go uses ports of the Plan 9 system libraries extensively, so Plan 9 support is probably a freebie, or implicit.


What's the rationale for not supporting it, given that the work is done and the maintenance burden is minimal?


I can't offer a rationale, because I have no say in the matter, or even a preference, one way or another. I asked out of mere curiosity. But if you were to tell me that the Go team dropped support for Plan 9, for the same reason they dropped support for dragonfly/386 and OSX 10.6, that explanation would have a very low entropy for me, because apparently, the last release of Plan 9 was in 2002.


Plan 9 is still updated incrementally; there just hasn't been a formal new numbered release. Dropping Dragonfly/386 and OS X 10.6 would be more like dropping the 3rd edition of Plan 9 (except we never supported it).


I see. Thank you for the clarification. I was hoping you'd tell me that there are quite a few active users who are developing cool stuff for Plan 9.


There are.


Rob Pike, maybe?


I take that back: http://rob.pike.usesthis.com/


"Go 1.5 provides support for Android and experimental support for iOS."

Tell me more ...



`gomobile install` to build an APK and install it on a device. Cool.


Was hoping for some news on vendoring; the discussion on the mailing list didn't really end with any actionable conclusions.


The analysis and tracing tools look really wonderful. Have those been in the works / maturing for a while, or is that all new tooling for 1.5?


I think the tracing tool [0] isn't Go-specific but rather a general tracing tool that's been maturing for a while. The support for outputting compatible traces is new in 1.5 AFAIK, which would explain how they have such a nice-looking interface already :)

[0] https://github.com/google/trace-viewer


The trace viewer is not Go-specific, it comes from the Chrome browser, but of course the technology that makes it possible to trace Go is very Go-specific.
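
Producing a trace on the Go side is just a couple of calls to the new runtime/trace package; a minimal sketch (the output file is what the viewer loads, via go tool trace):

    package main

    import (
        "os"
        "runtime/trace"
    )

    func main() {
        f, err := os.Create("trace.out")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        if err := trace.Start(f); err != nil {
            panic(err)
        }
        defer trace.Stop()

        // ... the workload to be traced goes here ...
    }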


With Go 1.5 able to build static/dynamic C libraries, I wonder how long it will be until there is a bridge to easily write binary modules for Node.js... which may seem a bit crazy.


How does shared object support interact with cross-module inlining?


Live stream recording here: https://youtu.be/Fx304EfqtMo?t=395


Shared library support, thanks!


Please tell me how to make such a slide presentation.



Still no generics, nor real type inference.


You created a burner account just to complain about this?

As you know, generics won't appear before 2.0, if ever. It's fine if that's a nonstarter for you. Perhaps you'll like Rust instead. It needs a big community too.


Why Rust? It doesn't really compete with Go anywhere. Haskell is a much stronger player in the space of high-level, garbage-collected languages. The only thing Rust and Haskell really have in common is not willfully ignoring the last 50 years of programming language design research.


> Why Rust? It doesn't really compete with Go anywhere.

I guess people compare Rust and Go because they were released in similar timeframes and people like to view things in competition.

Rust is especially attractive when you can't afford a garbage collector, need deterministic destruction, and full compatibility with C. It's what C++ would be if it could start from scratch.

I think your point needs to be reiterated more often: if garbage collection and ~2-3x the running time of C is not a problem, OCaml, Haskell, Scala, F#, etc. are the competition of Go. They integrate modern (read: mostly '70s/'80s) language design insights, while providing approximately the same performance as Go.


Great points, especially on Go vs. its real competition. It's why I'm against their modernized ALGOL 68. Languages such as Haskell, Clojure, Rust, Julia, Scala, OCaml, and others are at least trying to apply lessons learned in programming research to improve things for programmers. Google would've been better off taking one of the more native ones, identifying its worst issues, using their geniuses to fix those, and then building tools + community on that.

Instead, we get an alternative to both unsafe native languages and safe scripting languages, without most of the advantages of either. I might still try it out if I do a comparative evaluation of modern work. By that time, it might have gotten better.


> similar timeframes

I hope you understand the first stable release of Rust happened less than 2 weeks ago (15 May 2015), vs. Go 1 in March 2012.

3 years is a pretty big difference.


> I hope you understand the first stable release of Rust happened less than 2 weeks ago

Of course, but Rust has been in the tech news for a while already. E.g., the Rust 0.1 post garnered 82 comments[1], over 3 years ago. Rust seems to have first been mentioned in a submission title 5 years ago.

Of course, their inception was not exactly at the same time. But if we look back in 20 years, it's the same timeframe. Just like e.g. Python and Ruby, despite being 4 years apart. Or C and Pascal.

[1] https://news.ycombinator.com/item?id=3501980


I find your perspective a little bizarre. For instance, if we look on a galactic time scale, Lisp and Go were created at pretty much the same instant!

It's not fair to Rust to compare it to Go as if they were at the same stage of adoption and development. Several years makes a big difference in the tech world.


Err, wat.

It seems ridiculously reasonable to compare programming languages on the time-scale of modern computing (~70 years). Dismissing the comparison like that is pretty crazy.

However, don't get me wrong: I totally agree that the years between the stable releases of the projects makes a big difference, especially now, while it is such a large percentage of the total lifetime of them.


Go was first publicly announced in November 2009, Rust in July 2010.


At the time of its announcement, Go was tremendously further along in its development than Rust was at its own public announcement. Knowing modern Go, you can go back and understand Go code from 2009. Knowing modern Rust, trying to read code from 2010 is essentially impossible.


I wouldn't say "further along". Deciding to not tackle any medium-to-hard problems just allowed them to be finished far earlier with their language.


Actually, I read an interview with one of the Rust devs recently where he said that one of the meanings behind the name "Rust" is that they purposely chose not to use any ideas from programming language research within the past 10 years. IIRC it was because they wanted to use only ideas that had had some time to mature and be used in practice. So yes, Rust chose to willfully ignore at least some of the past 50 years of PL design research. :)


They both compile to machine code?

People get bent out of shape all the time over the tiny niches, but both seem well equipped for writing user-land applications, and both are designed to be safer than C and its cousins. Both have some legitimate interest in them, which is what really differentiates them from, say, D or Ada. Neither is going to be used for kernels (maybe I should say 'serious kernels') or drivers any time really soon. They have a lot of similarity in those regards.

Seems like a golden era to have 2 competing languages that are aiming at C and C++ and have legitimate community interest.


Doesn't it? Why do people seem to compare them all the time...


Go gets compared to Rust because Go was originally labeled a "systems" language. But the designers had a different, older view of what "systems" meant than what is commonly used today. People heard systems and thought they meant low-level/operating-system/embedded systems. That is not what Go is good for, because it's garbage collected.

They also planned to attract C++ programmers, but they've basically been attracting dynamic language programmers instead. Why that was a surprise is beyond me -- I can't imagine that anyone currently using C++ would be OK switching to a garbage collected language.

Rust on the other hand, is shaping up to be a C++ replacement.


Yeah, additionally it would be unwise to use Go in embedded systems because it is so reliant on heap memory for dynamic allocation. In certain situations where things could fail if an operation takes too long - rocketry and robots - it would be difficult to tell how long something would take in Go with any degree of certainty because allocating and freeing memory on the heap is highly non-deterministic.


If you can do it with the .NET Micro Framework, Java (Atego, MicroEJ, J9, ...) and Oberon (Astrobe), surely Go is also welcome.


It's not that you can't use Go. It's that Go's probably not the best tool for the job.

Sure you can do embedded development with the .NET micro framework because there are embedded systems that don't have hard real time requirements.

If you're building a hobby robot with netduino, you might not care if you have guarantees about input response time--as long as the delays are usually small enough. But if you move beyond that, you're most likely going to need a real time system.


Do you consider missile guidance systems hard real time requirements?

http://www.atego.com/products/atego-perc-pico/

As a language I am not a fan of Go, but in the context of using strongly typed languages in embedded scenarios, I think the less C the better.

Just like with JavaScript JITs, it is always a matter of how much companies are willing to invest to improve the quality of the existing ecosystems.


That's a virtual machine specifically designed for this kind of thing.

So sure, if you want to use Go syntax and build a completely new runtime designed for use in real-time systems, then go ahead.

As for strong typing in embedded scenarios Ada's been doing that for decades.


This particular link has some info about that situation. GC pauses are deterministic starting with Go 1.5. Application code will run for at least 40ms out of every 50ms. The upper limit for a GC pause is 10ms, and it's typically lower.


It's not "deterministic". It has defined limits on the GC's time consumption, but you still can't tell when, and if, at any given moment you'll have a pause.


As soon as you put limits like that you have to compensate by risking that you run out of memory -- if you can't meet the deadline, you can't fully scan the heap[1], free, and so on. Or is that wrong?

That seems like another variable to consider if you're in an environment where you have to somewhat control how much memory you use. But I don't really know anything about the details of garbage collection; I just figured that would be one of the trade-offs. So correct me if I'm wrong.

[1] Or whatever technique is used


If you can do robots with Node.js, I'm pretty sure Go may well be acceptable here.


No one is arguing that you absolutely can't use Go to build a robot, just that it's probably not a good idea.

Just because someone uses Node.js to build a hobby robot doesn't mean that many people are using it for production embedded systems. If your system has hard real-time requirements, which many systems do (even many hobby robots), a stop-the-world garbage collector is out of the question.

Let's say you have a legged robot that depends on interpreting sensor feedback to keep from falling over. What happens to your balance when the garbage collector pauses your code for 10ms?


Can you elaborate on the different meanings of 'systems programming'? I just thought it meant OS development.


Apparently the creators meant infrastructure like web servers, and database servers.

There is some argument that using "systems programming" the way it is used today is a relatively modern convention. Here is a Hacker News thread where this is discussed a bit:

https://news.ycombinator.com/item?id=7009563


The "systems" in "systems programming" can also be interpreted to refer to Google's systems. After all, Go was designed to solve Google problems in a Google environment. That's how I always understood it.


In CS speak that is actually known as "distributed systems", not "systems programming".


In modern CS speak, yes, but the creators of Go were apparently referring to infrastructure types of programs: web servers, database servers, etc.

Here's a quote from the announcement talk

"And it's a systems language in the sense that we intend it to be used to write things like web servers"


If it has servers in the name it is distributed computing.

How modern is modern CS?

I took my CS degree around 20 years ago, with a focus on computer architecture, compiler design and distributed systems.

Distributed systems literature used in the degree went back to the early UNIX days.


>How modern is modern CS?

The creators of Go were on the team that built UNIX, and they clearly have a different meaning for the word systems.

They were using it to refer to infrastructure, without making a distinction between distributed and local. Something like a compiler would be a "systems" program under this definition.

I agree that distributed systems, and systems programming are generally accepted terms of the art in CS now, and even 20 years ago.


Ah, that makes sense.


Rust is a systems-level programming language. Go isn't: it's garbage-collected.


Besides the sibling post.

XOmB, Mesa/Cedar, Oberon, SPIN, and Singularity were all written in GC-enabled systems programming languages.


Good points. People often forget GC languages were often used to write OS's and system software even on constrained devices. A number of languages, such as Modula-2 and Ada, offered the ability to go unsafe in certain modules out of necessity but with safe interfaces. So, where Go or Rust can't cut it, language developers can just use a feature like that (or inline assembler) to solve those problems until the language itself can.

Additionally, they can do what PL/S and certain Ada's did where specific modules were transformed by the compiler differently based on their needs. One might have total safety, one no safety, one a GC'd runtime, one a non-GC runtime, and one no runtime at all. Still using the same language and tools for everything, albeit subsets in some modules with extra worries.


Yeah, but at least with XOmB, we weren't actually _using_ the GC, as you can turn it off in D.


Thanks for the heads up.

Although given my Oberon knowledge, I would say that was more a consequence of D's GC implementation quality than anything else, right?

On Oberon and all the derived OS, the Kernel module was the only one where GC wasn't used, as it needed to be implemented somewhere.

All the other modules enjoyed access to an OS wide GC.


Well, it was really just that we didn't get that far, I guess. Mind you, my friends did much more of the work than I did, but like most student hobby projects, at some point, you just stop working on things.


Lisp is garbage collected. Yet Lisp machines existed, and they were invented at the same time as UNIX.


And some of their details were published for other people to learn from. Anyone wanting to see how they implemented it can google LISP and Scheme machines, hardware, processors, and similar search terms. I recall one implemented cons directly while hiding GC behind the memory interface. The programs didn't even know there was collection; it happened in parallel with execution, without stopping it. Great stuff. Many of these papers give software and hardware implementation details that could be copied in FPGAs.

Closest thing I've seen in a modern product is Azul Systems' Java processors with concurrent GC's. So, at least one company is benefiting from the old wisdom.


> Why Rust?

Whenever Go is mentioned, Rust has to be mentioned somewhere in that same thread. And vice versa. It's practically law.

Perhaps a good indication that the debates tend to be more fashion-driven than about technical points (when the discussion is about the new languages themselves, that is of course reasonable).


Right. I think the closest languages to Go, regarding respective niches or overall philosophies, are Java and Oberon. Yet nobody ever compares them.



