What annoys me a little bit about Go (and other projects similarly adherent to "worse is better") is the implied dichotomy between "research" and making "programming lives better". Do they think the point of Haskell isn't to make programming easier? That's what most of the research into the language is exactly about! This isn't just about Haskell either: most other PL research is also about making programming lives easier. And Go ignores essentially all of it.
Now, there isn't even anything strictly wrong with ignoring research like that. It's just annoying how they revel in ignoring all recent progress in the field.
Different software communities have different opinions on what makes life better for programmers.
As far as I can see, programming language researchers are just a subset of those communities with their own sets of beliefs. On average researchers are much smarter and better educated than most programmers. However it is likely that the majority of the smartest and best educated people are NOT in academia. (Before doubting that, remember there is an eternal brain drain of top people from academia to industry for the simple reason that industry pays more.)
In the case of the developers of Go, we are talking about people who I believe to be smarter than the vast majority of programming language researchers, and who _definitely_ have more experience around large software systems. They have come to different opinions about what makes that kind of development work. People like Ken Thompson and Rob Pike might not be right, but it would be unwise to assume that they are necessarily wrong simply because they disagree with a bunch of programming language researchers.
> In the case of the developers of Go, we are talking about people who I believe to be smarter than the vast majority of programming language researchers
Why would you have such an opinion? I can turn the problem over in my head a thousand times and I still can't see why. Because it is from the makers of Unix?
The only way I could make sense of your statement is by saying that people who make the most-used technology are intrinsically the most clever. But I think this is an incredibly naive point of view.
I can see a bunch of things that make Go likely to succeed: traction from the name of its inventors and main sponsor (Google), a simple philosophy that makes it easy to learn, and a pragmatic orientation. That doesn't mean it's a better language in any way, just that it's aiming for the path of least resistance.
That is a respectable goal. But I don't see the logic in saying that PL researchers are not as smart because they have more idealistic goals.
> Why would you have such an opinion? I can turn the problem over in my head a thousand times and I still can't see why.
I'd guess that Ben is basing this on decades of exposure to papers published, work done, and books published by Pike and Thompson. He then compares this to the many other programming language researchers with whom he has personal and professional familiarity, and offers his opinion on their relative abilities. He's not arguing 'in principle', he's stating his personal assessment of these particular individuals.
Exactly. There is no shortage of material to decide on. From articles like http://cm.bell-labs.com/who/ken/trust.html, to books like The Practice of Programming, to software like Unix and the invention of UTF-8, with plenty in between, their abilities and accomplishments are obvious. There is also no shortage of articles like http://www.cs.princeton.edu/courses/archive/spr09/cos333/bea... written by people whose opinions I respect, lauding their abilities.
By contrast I have no reason to believe that the average programming language researcher is smarter than the average CS prof. It would be astounding if this average was anywhere near Pike and Thompson.
Checking golang a bit, I didn't feel it was "aiming for the path of least resistance". For instance, blocking compilation if any imported module is unused, or forbidding circular dependencies, goes against "let's let the coders do what they want".
I am currently involved in refactoring a big Python code base and I oh-so regret Python (which I like very much) is so easy-going on these issues.
Flake8 detects unused imports and generally adds some rigor that Python otherwise lacks. Not perfect, but it has made Python more pleasant for me for larger code bases especially.
Unix isn't the most used software, but it is probably the best. I think the makers of Unix bring with them a strong sense of pedigree, but also something of an expectation that Go shares Unix design principles. Not sure if it does or not, but the baggage is there.
Tabs are used for tabulating stuff. Spaces are used for spacing stuff. Indentation is a form of tabulation, so use tabs. Similarly to when you use a word processor, do you hold space down for long enough to center your text or do you use the "center" alignment? Right tools for the right job.
Hell it's not exactly hard to fix in vim either - just load the provided vim extensions as per the instructions and it just works without screwing up the rest of your configuration.
The amount of time I've had to spend dredging through fucked-up merges where someone has added 5 or 3 spaces to the start of a line instead of 4 is why you should use tabs: a tab means "indented one level" rather than "n spaces, depending on how the user or editor fucked up the file".
Haskell is an elegant language but Haskell fans will be much more effective advocates of the language if they admit that it's conceptually very difficult for most programmers to grasp. Some of the most experienced and capable programmers I've ever known have struggled to get their head around Haskell, and I don't buy the argument that it's only because their minds have been poisoned by overexposure to other languages.
Haskell is important and I'm glad it exists, but I think Go has a much better chance of becoming a plausible alternative to Java and C++ and perhaps Python in most programming shops.
He says Haskell is useless. Without even debating that (it can certainly be useful for certain classes of programming problems), you can see his point: the goal of Haskell was to start from an ideal view (pure functional programming without side effects) and see how to eventually make that practical for everyday programming, without sacrificing the pure core of the language.
You have languages like F# (or even C#!) that take lessons from Haskell while at the same time being practical for pretty much every programming task (their platform, .NET, makes them useless for a lot of use cases, but that is another problem).
And on the other side (in my opinion), you have languages like Go, which are pretty dismissive of pretty much everything learned from languages like Haskell.
I respect the point of view of Go's creators, I just disagree with it, and I don't see any project I could do in Go that I wouldn't prefer doing in Scala/OCaml/C#. But more power to people who want to do great things in Go!
Go fills a niche (AOT compiled, without a VM) that only OCaml fills among the languages I listed, and OCaml is probably much harder to learn, even if it is way simpler than Haskell. So even if I don't agree with the design choices, I can see why Go could gain traction (apart from the fact that it is Google-powered).
However, it's far harder to make it not useless than it is with other choices. To me at least, Haskell appears to be alphabet soup, like badly written Perl. The syntax is so close to the metal it's unreal.
> and I don't buy the argument that it's _only_ because their minds have been poisoned by overexposure to other languages.
I've not heard that argument. I think most Haskell people would acknowledge theirs is not an easy language to learn and program in.
The argument is especially non-stick because in practice every programmer will be "poisoned" by other languages. That means that even if Haskell weren't a hard language to learn in theory (i.e. if a "virgin" brain could learn it just fine), in practice it is.
It's easier to expand on the solid points of the language than on ease of understanding. But like I said, I've not seen Haskell being thrown around in that fashion.
I'm not sure exactly what you're referring to, but I'd say it's not so much "worse is better" (in the traditional sense that simplicity of implementation is better than simplicity of interface) but "simplicity is better" (than maximum functionality, whether in implementation or interface). Generics, for instance, are not exactly recent PL research, but Go doesn't have them (yet) because they tend to add complexity to both interface and implementation.
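To make that trade-off concrete, here's roughly what you write today in Go without generics - a throwaway sketch (the Stack type and its names are mine), falling back on interface{} and a runtime type assertion:

    package main

    import "fmt"

    // Stack holds values of any type; without generics the element type is
    // interface{} and callers must assert the concrete type on the way out.
    type Stack struct {
        items []interface{}
    }

    func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

    func (s *Stack) Pop() interface{} {
        n := len(s.items)
        if n == 0 {
            return nil
        }
        v := s.items[n-1]
        s.items = s.items[:n-1]
        return v
    }

    func main() {
        var s Stack
        s.Push(42)
        n := s.Pop().(int) // type assertion: the compiler can no longer check this
        fmt.Println(n)
    }

Simple interface and simple implementation, but the type checking moves to run time - which is exactly the complexity trade-off being described.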
As for Haskell, well - whether or not it makes programming easier, it adds complexity, and it certainly doesn't have the 'free and light' feeling borrowed from dynamic languages that Go likes to revel in.
For what it's worth, Rust includes a lot of pragmatically useful research stuff that was passed over in Go. In particular, I think Rust's emphasis on immutability is a better decision that will result in faster, lock-free code. (Yes, it counts as locking when you have to pass the one-and-only pointer to a thing over a channel just to read it.) But Rust is a big complicated language. Go is syntactically tiny, and that's an advantage in a number of ways, from speed of compilation to ease of use.
Usually when people say "big and complicated", they mean C++, which Rust is much simpler than. We put a lot of effort into keeping Rust as minimalist as possible without sacrificing the language goals. It's certainly a bigger language than Go, of course.
"This isn't just about Haskell either: most other PL research is also about making programming lives easier. And Go ignores essentially all of it."
That's a bold statement, and I'm not sure that's what Rob Pike is asserting. I saw his presentation as saying "We knowingly made tradeoffs in the programming language to, in part, make managing large codebases tractable." He's right in saying that dependency management isn't intrinsic in most of programming language theory today, and it's a big productivity sink for development organizations with very large codebases.
Slides eight and nine, to me at least, suggest by the implicit contrast between research and helping programmers that the Go designers just weren't interested in the state of PL research.
That is because PL research isn't interested in helping programmers write programs under their real world constraints. Pike isn't interested in what geologists have to say about programming languages either.
I don't think making the lives of a maximum number of today's industrial software engineers simpler is the main concern of academic PL researchers, nor should it be. A PL researcher should be able to spend time working on formal aspects of some thought experiment without thinking about interoperability, compile times, social issues in large corporations, etc.
And then programming languages are user interfaces. Hence, researching them is a social science and a cognitive science as much as it is a formal science. I don't hear much about that aspect from PL researchers. Arguably, language designers with an industrial background know a lot more about that than academic PL researchers. It may not be very scientific, but they should have pretty good intuitive access to these things.
Why should PL researchers care only about languages and not about language usage? Only about programs and not about how to create them? That seems extremely ivory tower, to no good end.
I believe they do care about that, but it shouldn't limit their thinking too much. For instance, there could be a wonderful model of computing that no current machine can efficiently implement. Kind of like Lisp in the 1950s.
You aren't the first person I've seen mention that Go ignores a lot of programming language research. Would you mind expanding on that for someone who doesn't know much about PL research/theory? Are they missing things you think a modern language should have? What are they doing wrong according to research?
The way I like to think of it is that a large component of PL design is making something pleasant for humans to use, which is more of an art than a science: inherently subjective (not everyone likes the same things), holistic (it's not any one feature of Python that makes it Pythonic, it's the whole philosophy), and hard to quantify. So it's no surprise that rigorous design is not the prime factor in popularity of programming languages.
I don't think, however, that really explains why research languages these days have very little in common with popular languages - why purity and strict types and all their baggage, despite their promise to make software less buggy, have not been widely adopted. There are a few reasons why this might be the case (not just "FP is hard and most programmers aren't very good at programming", although that is one factor), but their sum seems to be something fundamental about software design - perhaps changeable, but identifiable as a unit, not simply a consequence of the limitations of rigorous design.
I don't think the divide between research and industry languages is as deep as you imply. Take your examples: purity and strict typing. Purity has been adopted wholesale by Clojure and has influenced C++11 (constexpr implies purity in addition to many other things). Go, the subject of the post, is stricter about type conversions than the C family languages are. Java's type-safe implementation of generics was influenced by ML.
> why purity and strict types and all their baggage, despite their promise to make software less buggy, have not been widely adopted.
I am guessing it is mainly due to the inertia of the currently popular languages (C, C++, Java, Python, etc.), none of which offer such guarantees. There are tons of libraries written in them, so it is hard to move away from them. I am hoping that Scala and Rust will change things.
While I am not an expert in PLs, one thing they definitely missed for a language that claims to be "built for scale/multicore" is immutability. Not only is it not the default for variables to be immutable (as in many functional/hybrid programming languages), but there is no notion of immutability built into the language in the first place.
The slides (and plenty other Go literature) describe the metrics used to decide Go features in quite a lot of detail. If these metrics are applicable in your context, then Go is more likely to "make your life better" than Haskell.
Can you point out to specific PL research results that Go has outright ignored, rather than decided against for X reasons?
This is a very interesting question, but can it be answered?
I mean, we can say that design by contract (or LINQ/multiple dispatch/typestate/whatever) offers an interesting solution to common problems and was ignored.
Do you spend most of your time "programming" learning how to program? Or using what you've learned? Haskell has a steep learning curve, for sure, and that's one reason it doesn't have widespread use. Once you've learned it, programming in it is arguably a lot more enjoyable (and thus easier) than other languages...
It seems a lot of people have trouble ever escaping the steep learning curve stage in Haskell, but it does look interesting and your post has inspired me to go take a look at the Haskell intro docs.
However I definitely wouldn't group Go with 'worse is better' languages like PHP, so that seems an inaccurate description to me. I suppose what I was interested to know from those advocating Haskell as intrinsically better than Go was whether the learning curve is the only issue with Haskell, or whether there are also times in daily use where it sacrifices ease of use/comprehension for power, as opposed to Go where there is a great emphasis on making the language itself as simple as possible, even at the cost of some convenience.
Ultimately it's not practical to use, so even if it is easier to use after the great hump of the learning curve, it's still not good for real-world applications, principally because of libraries, support and ubiquity. Certain languages reach terminal velocity, whereupon their real-world usefulness goes up exponentially, and whether they are well designed or not they become practically necessary.
"Research" is about producing new knowledge about something, which in this case is programming computers.
The slide on the problems that the Go system solves points out things the solving of which is unlikely to yield significant new knowledge in the domain. Yet solving those problems does improve the lives of programmers and so is a valuable engineering contribution.
For some problems, they do reuse techniques that are at least two decades old. Putting import information at the head of compilation units to speed up compilation was already done in Turbo Pascal and is available in Free Pascal today. They also did take up CSP as a concurrency model.
"Do they think the point of Haskell isn't to make programming easier?"
The point of Haskell was stated by the community early on -- that they'd try to "avoid success at all costs". "Making programming easier" isn't even a clearly definable research goal, I think. Easier for whom? .. for which problems? .. on which kinds of machines and environments? Haskell, as I see it, is a platform for experimenting with new techniques and asking "what if?" questions. A sandbox to play in. If it happens to make certain kinds of programming tasks easy and elegant, that's a by-product.
The focus on Haskell as an alternative is unfortunate. What about, say, Racket? There's a language that emerged out of PL research, by researchers that clearly care about getting programming concepts across to people (The Little Schemer, etc.), that's pragmatically oriented and has a lot of research-derived neat features.
Is the problem that there are too few Racket boosters? Racket seems pretty swell to me.
Good point. The focus on evaluating the success or failure of programming languages as a popularity contest is also unfortunate. There is also the matter of influence.
In the mid 80's I took Dan Friedman's (author of The Little Schemer) Programming Languages course and simply had a ton of fun learning all about Scheme & the implementation of languages. I had no idea at the time it would be by far the most important class that I took as either an undergraduate or graduate student.
Turns out, one of the most important things programming languages do is influence the way we learn and think about computation. I've learned crucial lessons from Scheme and all the languages I've worked in over the years. To this day, I am perplexed by people so vested in one programming language that they fear instead of savor the chance to work in another.
As a math major, a key lesson demonstrated so vividly by Prof Friedman & by the philosophy of Scheme resonated with me personally: the importance of clearly understanding "what depends on what?" and of learning which things can and/or should be made fundamental and which derivative.
The Golang Team has looked at their problem domain and, through the creation of Go, has built a striking and innovative statement on what they've judged as fundamental. I welcome their emphasis on selectivity over accumulation and believe there is much to be learned from their choices.
Racket's goals appear more pedagogical and the effort they put in (which is bloody awesome for a relatively small team, btw) on the pragmatic aspects is to the extent necessary for teaching.
If we, again, ask whether Racket (or any of the features in it such as delimited continuations) can help solve the problems that the Go team wants solved, I still need to scratch my head a bit. For example, V8 has overtaken mzscheme w.r.t. speed and you would (rightly) ask for a comparison with Node today for a class of server applications. (Being a PL buff, I'm pretty familiar with Racket/Haskell, having written and deployed a Scheme interpreter myself.)
I offer another categorization - Go is to design as Racket/Haskell is to research. A good part of design work involves intelligent trade-offs. In PL research, you're interested in ideas, new ways of thinking, insights, optimization techniques, etc. You're not (usually) interested in taking a new idea and packaging it so it is digestible to a certain class of programmers. Constant-factor speed improvements are also not usually interesting to PL research, but very welcome to the pragmatic programmer. Syntax design, for example, is seldom the highlight of a PL research project.
Haskell's designers believe in the goal of great programming, but they refuse to compromise today to help engineers. Haskell Ultimate will be better than Go. But it doesn't exist yet.
- Go codebases by non-experts are peppered with magical incantations (sleeps, etc.) to avoid the dreaded "all goroutines are asleep" deadlock. Of course "they are doing it wrong", but that is the germinal point.
- A concurrent Go program will likely behave differently given 2 bits (just 2 lousy bits) of difference in the object binary (runtime.GOMAXPROCS(1) vs runtime.GOMAXPROCS(2)). Imagine someone touching those 2 bits in a "large codebase". It is practically impossible to do the same thing in a large Java codebase and fundamentally change the program's runtime behavior. (Happens all the time in Go.)
- It is very difficult to reason about a goroutine's behavior in a "large codebase" without a global view and a mental model of the dynamic system, e.g. which goroutine is doing what and who is blocking and who is not. Pretty much defeats the entire point of "simple" concurrency, to say nothing of "scaling". Programming in Go's variant of cooperative multithreading is actually more demanding than preemptive multithreading. Cute little concurrency pet tricks aside, Go concurrent programming actually requires expert-level experience. "You are doing it wrong". Of course. Point.
- There is nothing, absolutely nothing, that you can do in Go that you can not do via libraries in Java. Sure, the cute syntactic go func() needs to be replaced with method calls to the excellent java.util.concurrent constructs, but the benefits -- high performance, explicit-no-magic-code -- outweigh the cute factor in this "programmer's" book.
- On the other hand, there are plenty of things you can do in Java that are simply impossible to do in Go.
- Once we factor in the possibility of bytecode engineering, then Java is simply in another, higher league as far as language capabilities are concerned. (Most people who rag on Java are clearly dilettante Java programmers.)
If Go actually manages to be as effective as Java for concurrent programming at some point in the future (when they fix the somewhat broken runtime) then the Go authors are permitted to crow about it. Until that day, go fix_the_runtime() and defer bs().
One thing that programming in Go has made me realize is just how awesomely Sun/Gosling, et al. hit that "practical programming" sweet spot. No wonder the modern enterprise runs on Java and JVM.
It just works. (But it is "boring" because it's not bling anymore. Oh well, kids will be kids.)
Well, that's the same in any concurrent application. If you don't think about race conditions while coding, you shouldn't be using a concurrent language.
This is the same for anything. For example, a webserver written in C++ that uses pthreads might have a #define that determines the number of threads in the thread pool. If the default is 1 and nobody changes this in development, it should not break when someone changes it to 2 or 3, otherwise the code is wrong.
As for your argument about Java, there's nothing you can do in Java that you can't do in assembly. This argument is invalid. Go's strength is in making complex things simple. For example, if you wanted to build a distributed system (across machines), in Go you just use "net/rpc" and it handles it for you. In Java, you'd have like 3 layers of abstract classes that define a serializable type, you'd make some kind of listener class that receives responses from an RPC call, and do tons of other boilerplate. In Go, you just make a function call. That's it.
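To be concrete, the basic net/rpc pattern looks roughly like this (a minimal sketch with server and client squeezed into one process; Arith, Args and Multiply are just the names from the package docs):

    package main

    import (
        "fmt"
        "log"
        "net"
        "net/http"
        "net/rpc"
    )

    type Args struct{ A, B int }

    type Arith int

    // Multiply follows net/rpc's method convention:
    // func (t *T) Name(args *ArgsType, reply *ReplyType) error
    func (t *Arith) Multiply(args *Args, reply *int) error {
        *reply = args.A * args.B
        return nil
    }

    func main() {
        // Server side: register the service and serve it over HTTP.
        rpc.Register(new(Arith))
        rpc.HandleHTTP()
        l, err := net.Listen("tcp", "127.0.0.1:1234")
        if err != nil {
            log.Fatal(err)
        }
        go http.Serve(l, nil)

        // Client side: the remote call reads like an ordinary function call.
        client, err := rpc.DialHTTP("tcp", "127.0.0.1:1234")
        if err != nil {
            log.Fatal(err)
        }
        var product int
        if err := client.Call("Arith.Multiply", &Args{7, 8}, &product); err != nil {
            log.Fatal(err)
        }
        fmt.Println(product) // 56
    }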
What exactly can't you do in Go that you can do in Java? Maybe generics, but that just complicates code unnecessarily most of the time. Sure, I've done my share of generic programming, but I've also written non-trivial, several-thousand-LOC programs in Go, and my Go code was much simpler and easier to read.
BTW, Java sucks for concurrent programming. I hate that stupid Runnable interface (or whatever they call it these days). I'd much rather have a bunch of functions that can be called synchronously, or in a go-routine and have simple checks in place to eliminate race conditions.
> In Java, you'd have like 3 layers of abstract classes that define a serializable type, you'd make some kind of listener class that receives responses from an RPC call, and do tons of other boilerplate.
Or you just add Akka to your pom and pass messages between actors. (If using Scala, the syntax is tightened up very nicely.)
>There is nothing, absolutely nothing, that you can do in Go that you can not do via libraries in Java
Yes there is.
You can run thousands of concurrent goroutines. Creating thousands of threads in Java will bring the system to its knees. java.util.concurrent doesn't solve the stack size problem.
You can have compact structured value types in Go but not in Java, meaning Java will always eat a lot more memory than Go.
In Java you can use executors and thread pools to "start thousands of threads" in the same way those "thousands of concurrent goroutines" are running "at the same time" on a modern 16-core CPU.
That's not true at all. Each goroutine has its own variable sized stack whereas Runnable objects in Java don't have a stack of their own. They use the thread's stack. So as long as a Runnable uses its stack (that is the run() method hasn't finished), the thread cannot be used by other Runnables.
So, in Go you can have thousands of active function invocations (i.e. tens of thousands of stack frames on thousands of stacks) at the same time and they do not occupy thousands of threads. You can't do that in Java.
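A sketch of what that looks like in practice (the count is arbitrary; each goroutine starts with a small stack that grows on demand, so none of this maps to 100,000 OS threads):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        const n = 100000
        results := make(chan int, n)
        var wg sync.WaitGroup

        // Spawning 100,000 goroutines is routine; the runtime multiplexes
        // them onto a handful of OS threads.
        for i := 0; i < n; i++ {
            wg.Add(1)
            go func(k int) {
                defer wg.Done()
                results <- k * k
            }(i)
        }
        wg.Wait()
        close(results)

        sum := 0
        for r := range results {
            sum += r
        }
        fmt.Println(sum)
    }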
I'm talking about every single Java language implementation that I know of. And I believe (although I'm not completely sure) that the Java language specification requires it at least implicitly.
No you did not. You stated that it is not possible in Java at all, regardless of the implementation. I don't see any mention of "none of the implementations I know of" in your comment.
<quote>
That's not true at all. Each goroutine has its own variable sized stack whereas Runnable objects in Java don't have a stack of their own. They use the thread's stack. So as long as a Runnable uses its stack (that is the run() method hasn't finished), the thread cannot be used by other Runnables.
So, in Go you can have thousands of active function invocations (i.e. tens of thousands of stack frames on thousands of stacks) at the same time and they do not occupy thousands of threads. You can't do that in Java.
</quote>
You are being deliberately dense. If no existing JVM gives runnable objects their own stack, then it's impossible for a java developer to create thousands of threads (without creating their own JVM, which is totally unreasonable 99% of the time). It's a practical impossibility. Is that so much worse than a technical one?
That's OK, it's been a perfectly civilised debate after all. You're making a fair point insisting on a distinction between implementations and the spec. My initial posts were very unclear in that regard.
> There is nothing, absolutely nothing, that you can do in Go that you can not do via libraries in Java
Agreed, however there are things I really don't want to do in Java as even using the wonderful selection of libraries and tooling out there they are like sandpapering your face and repeatedly ramming it into a salt bath.
By the time I've set up Eclipse, Maven, Jetty/Glassfish and a reasonable framework, I could have actually finished the task in Go. To be 100% honest, I could probably do it whilst maven is puking in the Eclipse log window.
Go is excellent for prototyping ideas, which is why it is my "language of choice" at the moment. But your argument falls flat when considering deploying "large scale" systems, which have multiple dimensions of ceremony before hitting production.
I disagree. I build very large scale systems. I think that large scale systems should be constructed out of small, loosely coupled, distributed components with strong contracts between them. These are ideal for building as small standalone Go programs.
That is a viable approach and I share your affinity for that type of architecture, but insisting that it is the only way to build large scale systems is not reasonable. In real life, for example, you have clients and clients have their preferences or requirements or existing systems. And system architects also like having choices.
And speaking of choice in context of massively concurrent CSP systems, Erlang is (as of now) the superior choice for that sort of system architecture.
I think you mean Erlang's strict CSP -- share nothing -- is not as efficient as shared memory systems such as Go. That is a fact and by design it is a feature and not a bug. Nine 9s: http://pragprog.com/articles/erlang
> Have you tried finding Erlang programmers?
Go looks easy but it is a subtle language. Brad F. writes a server and it performs. Mr. X writes one and it crawls. Go figure.
Erlang doesn't scale in the sense that it doesn't scale to larger projects when you need lots of people working on them. I doubt we could put our 2MLOC trading platform into Erlang and get away with it.
It certainly scales from a technology point of view.
Go is simple enough to allow Mr X's code to go through a review by Brad F until Mr X gets the point.
Do you contend that concurrency—and I mean true concurrency in the abstract, not a particular implementation of a parallelization idiom like threads—is well-served by C++ or Java?
I can't speak for the world, but I have a personal, irrational dislike for Java ever since my university standardized on it for almost every subject and assignment. This was before we even had things like generics.
People hate on Java likely not because it's the worst language, but because it was the least favorite language they were forced to use too often. This hate does not reflect a rational opinion, but an experience.
And it's about a lot more than language features. It's about culture, what it represents, and the type of role it plays in our industry. Like COBOL before it, Java will not escape its fate and it will die. Not because it must die, but because we must kill it.
6 days ago the poster was calling it their "language of choice" and saying other nice things[1], so perhaps they're just having a bad day with debugging a deadlock.
Go has many strengths, but for Rob Pike to claim that Go addresses concurrency for the modern CPUs better than Java is just clearly b.s. Doug Lea, Cliff Click, Brian Goetz, et al. addressed multicore before Go even existed. I call b.s. when I see it.
(The nearly cult-like mentality of some of Go programmers is really quite annoying.)
It goes deeper than that. The prime difference is in the stance: "Is concurrency a library or is it a built-in feature?". In Java, it is the former, whereas in Go, it is the latter.
In my experience, having concurrency primitives in the language tends to integrate better than having them in a library - for the simple fact that it saves some space when writing and reading programs.
As for the choice of `java.util.concurrent` over goroutines it again depends on your stance. j.u.c provides you with some excellent high-level primitives on which to build concurrency. But these primitives are easy to build in Go with its concurrency primitives. Some, like `BlockingQueue<E>` or `Future<V>` are directly isomorphic to a channel or a simple goroutine/channel invocation respectively. It is a matter of opinion whether or not one should actually do these kinds of implementations themselves.
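For instance, something Future<V>-like is a few lines of goroutine plus channel (my sketch, not a library):

    package main

    import (
        "fmt"
        "time"
    )

    // future runs work in a goroutine and returns a channel that will carry
    // the single result -- roughly what Future<V>.get() gives you in j.u.c.
    func future(work func() int) <-chan int {
        c := make(chan int, 1) // buffered so the worker never blocks on delivery
        go func() { c <- work() }()
        return c
    }

    func main() {
        f := future(func() int {
            time.Sleep(100 * time.Millisecond) // stand-in for real work
            return 42
        })
        // Receiving is the equivalent of get(): it blocks until the result is ready.
        fmt.Println(<-f)
    }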
A more important difference, however, is that Java operates on Threads as the "smallest" primitive, whereas a goroutine is a lightweight "thread". That is, you can easily have tens of thousands of goroutines running, but you probably do not want the same number of threads in a Java system. For that, you need something like Kilim.
In my experience - I am a professional Erlang programmer - the ability to run extremely concurrent threads/processes is a game changer for much of the code you will be writing.
> The prime difference is in the stance: "Is concurrency a library or is it a built-in feature?". In Java, it is the former, whereas in Go, it is the latter.
Fully agree. But in the case of Go, the language feature is not performant.
cd $GOROOT/src
grep -r Lock *
You are strongly motivating me to write Go channel semantics over Java's (nio) Selectors and j.u.c to demonstrate a point... (Don't. I've other things to do.)
I would not be surprised if it turned out to perform extremely well - and perhaps even outperform Go. On the other hand, performance is not everything when it comes to writing large software systems. Again, my Erlang experience is that even if we are "hellishly slow" because every message is a copy in Erlang, it doesn't matter. For large software systems it turns out that the architectural design is what is the driving factor to program speed. How fast you pass a message is only part of the whole thing.
The real question to ask is how good a language is for the large scale software design. Certainly Java has proven itself to be there. Go is still young, but it specifically targets some of the problems in the same area as well. In my experience, debugging and the ease of doing so is also very important. And I don't know much about how easily this is to do in Go.
If it didn't, the fault would be mine. A dip to unsafe and it is effectively as bare metal as Go.
> The real question to ask is how good a language is for the large scale software design...
Agreed. A comparison with Java at this point would be unfair given the massive investment made by the industry in tweaking the platform over nearly 2 decades. I think Go is a very promising language.
One simply hopes that the leading lights of the language/industry do not stoop to making threadbare (npi;) claims of superiority over other, equally viable, languages. After all, "it's just code". (PL religious wars are entirely boring.)
> debugging
Go has very good profiling and data race [1] detection tools. As far as debugging goes, I'm in the printf school of debugging (even on Java) and so far so good, so I can not speak to that.
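For example, a deliberately racy program like this sketch is what the detector flags when you build or run with the -race flag:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        counter := 0
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                counter++ // unsynchronized read-modify-write: a data race
            }()
        }
        wg.Wait()
        fmt.Println(counter)
    }

Running it with something like "go run -race racy.go" should print a data race warning pointing at the increment.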
GDB has full support for Go now. So how easy debugging in Go is depends on your opinion of GDB, to some extent (the learning curve is steep but it's a powerful tool, IMO).
The Google C++ engineers programming custom computing hardware, communicating over custom network hardware with custom network software, disagree with your assessment that "architectural design" is all that matters to obtain performance. When your compute budget is around a billion dollars per year, constant-factor speedups are highly relevant.
I would like to do that for kicks. What numbers would be interesting to look at in terms of benchmarking (throughput, response times, waiting times, etc.)?
> Go codebases by non-experts are peppered with magical incantations (sleeps, etc.) to avoid the dreaded "all goroutines are asleep" deadlock.
I'm confused. This is indicative of a deadlock, where no goroutine can make progress. Sleeps would mask the symptoms, yes, but would never actually solve a deadlock; the program would just do nothing for a longer period of time. What Go codebase(s) are you referring to?
> A concurrent Go program will likely behave differently given 2 bits (just 2 lousy bits) of difference in the object binary (runtime.GOMAXPROCS(1) vs runtime.GOMAXPROCS(2)). Imagine someone touching those 2 bits in a "large codebase". It is practically impossible to do the same thing in a large Java codebase and fundamentally change the program's runtime behavior. (Happens all the time in Go.)
If you are trying to find a low number of bits, you can just use 0. GOMAXPROCS can be set via an environment variable, the function is just to override that value.
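For reference, the two ways of setting it (a trivial sketch):

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Either GOMAXPROCS=2 ./prog in the environment, or override it in code.
        prev := runtime.GOMAXPROCS(2)
        // An argument < 1 just reports the current setting without changing it.
        fmt.Println("was", prev, "now", runtime.GOMAXPROCS(0))
    }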
More to the point, I am convinced that you are wrong. You are saying that in a Java class file, there are no 2 bits I could change that would impact the behaviour of a program, and that is plainly false.
> It is very difficult to reason about a goroutine's behavior in a "large codebase" without a global view and a mental model of the dynamic system, e.g. which goroutine is doing what and who is blocking and who is not.
I guess this could be true if you engineer a system where every goroutine depends on every other goroutine. But it isn't true for code I've seen. As an example, the http library in Go has a goroutine that accepts TCP connections, and a goroutine for every accepted TCP connection. This knowledge is not necessary to use it, and is not necessary if a different part of your program is using it, because it is exposed behind an abstraction (http.Handler). To paraphrase the OP, Go enables simple programming. It doesn't forbid bad programming.
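For example, a handler is just this (minimal sketch); it neither knows nor cares about the accept loop or the per-connection goroutines behind it:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    // hello satisfies http.Handler; the net/http package deals with accepting
    // connections and spawning a goroutine per connection on our behalf.
    type hello struct{}

    func (h hello) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "hello, %s\n", r.URL.Path[1:])
    }

    func main() {
        log.Fatal(http.ListenAndServe(":8080", hello{}))
    }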
> There is nothing, absolutely nothing, that you can do in Go that you can not do via libraries in Java.
> On the other hand, there are plenty of things you can do in Java that are simply impossible to do in Go.
Depending on what you mean by "do", I either agree with you, or think you are mistaken. If you mean things that you can syntactically write down, like a Giraffe is-a Animal, or an Apple is-a Fruit, then yes, that is impossible to write down in Go. If you mean a problem that can be solved in Java that cannot be solved in Go, then I think you are mistaken.
> Once we factor in the possibility for bytecode engineering, then Java is simply in another higher league as far as language capabilities are concerned.
Why is this in another league? You can use assembly from Go, meaning you can generate code and jump to it if you really want to. Not sure why this would be considered a special feature of Java, or even why you think Java pioneered this. Bytecode is just a portable assembly, and it's just as possible to build per-platform assembly emitters (like compilers do).
> Most people who rag on Java are clearly dilettante Java programmers.
Right. That makes sense to me; no one who knows anything about Java would ever criticize it. Sarcasm aside, this demonstrates a fallacy in reasoning.
> If Go actually manages to be as effective as Java for concurrent programming at some point in the future
It isn't more effective now? News to me.
> when they fix the somewhat broken runtime
Sorry, remind me what this is referring to?
> It just works. (But it is "boring" because it's not bling anymore. Oh well, kids will be kids.)
This is dangerous thinking. This is the cry that the native programmers raised when Java was born. Try and keep that in mind.
The issue is actually impossible to fix without overhauling the entire language. It's impossible to garbage collect unboxed objects without being conservative.
Exactly. But conservative GC sounds like a dumb idea in the first place. I'm not the expert these language designers are, but I cannot understand how memory leaks can ever be acceptable just because the probability of them seriously affecting you is low (when on x64). Doesn't that imply you cannot use these types of languages in any mission-critical context?
I must be wrong though: it seems Google is using it for parts of their infrastructure, and I'm assuming those are long-running processes. But what if the worst-case scenario does happen? Are there workarounds? Monitor memory usage and just restart? Can we force the runtime to free a certain resource?
What am I missing? How does this not completely destroy any utility of these languages? Why do people put conservative GC in languages outside of the esoteric or academic context?
> Doesn't that imply you cannot use these types of languages in any mission-critical context?
No more so than with C++. There is a relatively high probability of you having left a memory leak in an average C++ server.
> Monitor memory usage, and just restart?
That would work, and is probably a good idea for all servers anyways. I've seen apache take upwards of 60 gigs due to some misconfiguration, so monitoring memory utilization of your programs and alerting or automatically restarting is a good idea.
> What am I missing? How does this not completely destroy any utility of these languages? Why do people put conservative GC in languages outside of the esoteric or academic context?
There isn't really a good reason as far as I can tell. The only advantage is that it's much simpler to implement, particularly when you have C interop. You can annotate Go objects with types to do precise collection, but as soon as you pass them to C, a lot of the guarantees the Go type system makes go out the window. Other than that, I can't think of anything.
It is my understanding that the chance of complications is lower on a 64-bit system. But it is very likely that well-crafted requests can make any server application written in Go trigger this complication, essentially resulting in a DOS attack.
I think it would be mindblowingly difficult to cause this DOS. If you think it's possible, you can set up a demo on App Engine. You even get to upload the source code, so you have an advantage you wouldn't ordinarily have.
Gist of the task at hand: you'd have to arrange for data passed into your target application to point to structures in memory, causing them to be pinned. Ideally, these structures should be big, to maximize impact of the attack. Further, you'd have to ensure that the data you passed in isn't garbage collected, as this would allow the target structures to be collected.
> > On the other hand, there are plenty of things you can do in Java that are simply impossible to do in Go.
> Depending on what you mean by "do" ...
Oh, I don't know, maybe dynamic code loading or secure sandboxing of code? For instance you can compile a custom Google Go without disk IO, but you can't have an app that loads plugins much less one where the app can do IO but the plugin can't.
These may not be the wisest or most useful of features for "systems" programming, but you have to admit that there are real things Java can do that Google Go cannot.
I think Google Go advocates put blinders on, like agentS, because if they actually looked at the situation objectively they would have the same thing to say as others do about Google Go: "meh".
> Oh, I don't know, maybe dynamic code loading or secure sandboxing of code? For instance you can compile a custom Google Go without disk IO, but you can't have an app that loads plugins much less one where the app can do IO but the plugin can't.
Sure you can. Spawn a child process with lower security privileges, and communicate over a pipe. It might require you to restructure to reduce chattiness, but on the other hand, you will be much less likely to expose yourself to security vulnerabilities than the Java "sandbox".
If the child process is written in Go, you can even use the -u compiler flag to only allow safe packages (i.e. no assembly, C, only packages explicitly marked as safe and pure Go). You can go even further, and mark package os unsafe, disallowing file system access altogether. Or do what (I suspect) Appengine does, and provide your own neutered implementation of package os/time/net/etc, and mark those as safe.
The Go playground is an example of this, btw. It merely streams stdout and stderr, but you could imagine doing other things with the sandboxed program.
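A rough sketch of the host side of that arrangement (the ./plugin binary is hypothetical, and the actual privilege dropping -- restricted user, chroot, and so on -- is left out):

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Start the (hypothetical) plugin as a separate process.
        cmd := exec.Command("./plugin")
        stdin, err := cmd.StdinPipe()
        if err != nil {
            log.Fatal(err)
        }
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            log.Fatal(err)
        }
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }

        // Talk to it over the pipe; it never shares our address space.
        fmt.Fprintln(stdin, "ping")
        reply, err := bufio.NewReader(stdout).ReadString('\n')
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print("plugin said: ", reply)

        stdin.Close()
        cmd.Wait()
    }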
> I think Google Go advocates put blinders on, like agentS, because if they actually looked at the situation objectively they would have the same thing to say as others do about Google Go: "meh".
I think you are falling prey to the same fallacy of false cause as your GP. "agentS likes Go, so he must not be objectively looking at the language". It is possible to objectively look at the language, and think it useful.
This is exactly what I was talking about. You can't even admit facts like that Java has dynamic loading and a security model and that Google Go does not. You're simply delusional in this respect.
It seems I wasn't clear enough; I apologize for the confusion. I fully admit that Java has dynamic loading and a security model and that Go does not.
I was responding to "you can't have an app that loads plugins much less one where the app can do IO but the plugin can't."; that is, I was merely providing a way to implement what you wanted (plugins that cannot do IO, in a host that can).
Like I said in my original post, "If you mean things that you can syntactically write down, like a Giraffe is-a Animal, or an Apple is-a Fruit, then yes, that is impossible to write down in Go." Similarly, it is impossible to dynamically load code, or enforce permissions at the runtime level (although implementations of this enforcement have historically been... buggy, to be charitable). I was attempting to show that the problem of having sandboxed plugins is solvable in Go.
The child process approach is viable, but in fairness you should note that a shared address space provides efficiencies that the RPC approach does not. This approach is IMO quite interesting in the context of multi-core and certain problem domains, but as a general mechanism it is a 'hack'.
It is borderline fanaticism to insist on comparing Go to Java, and unfair to Go. Effective advocacy of Go should focus on its strengths (syntax) and not attempt to gloss over its inherent limitations. It is a capable and viable language within well defined limits. Java and JVM are in another league. Accept it and move on.
If a new 'headius' feels up to it, Go on JVM would be a very interesting new language for the JVM ...
> The child process approach is viable, but in fairness you should note that a shared address space provides efficiencies that the RPC approach does not. This approach is IMO quite interesting in the context of multi-core and certain problem domains, but as a general mechanism it is a 'hack'.
As to the lack of a shared address space, I thought I had communicated that when I said "It might require you to restructure to reduce chattiness", but yes, there are performance advantages to having a shared address space.
You should note that there are security disadvantages to having a shared address space with a sandboxed plugin written in a relatively low-level language (bytecode). I won't post several links to exploits of the Java sandbox in the last year alone, but I am sure you can google for them.
This mechanism is most certainly not a hack. This is the approach that Chromium and the other multi-process browsers (all the modern ones?) use for sandboxing. It seems far more effective to me to push responsibility for security to the OS, rather than the language runtime.
> A full featured runtime reflection mechanism that does not drop "static info" on the floor
Package reflect is fully featured. It just requires you to have a value, or type to actually reflect upon. The tradeoff being that Go can do dead-code elimination and Java cannot. Same thing for Class.forName.
> @MetaInfo() ... Arguably not easy on the eye but necessary and enabling.
I think you are referring to annotations. Yes, Go doesn't have these on methods, functions, or types. But it has these for struct fields (tags); this gives you 90% of the functionality, without muddying up the syntax.
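A small sketch of what that looks like -- tags plus package reflect cover a lot of the ground annotations get used for (the json tags here are just the familiar encoding/json convention):

    package main

    import (
        "fmt"
        "reflect"
    )

    // Field tags carry declarative metadata; encoding/json and encoding/xml
    // read them the same way this snippet does.
    type User struct {
        Name  string `json:"name"`
        Email string `json:"email,omitempty"`
    }

    func main() {
        t := reflect.TypeOf(User{})
        for i := 0; i < t.NumField(); i++ {
            f := t.Field(i)
            fmt.Printf("%s -> %q\n", f.Name, f.Tag.Get("json"))
        }
    }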
> Bytecode injest (beyond dynamic linking) -- the JVM is your oyster.
Sure. See my note about all of these points below.
> and related Instrumentation
Go has a CPU profiler, a memory allocation profiler. In tip, it has a data race detector. It can expose this data over an HTTP interface so you can profile a running server http://golang.org/pkg/net/http/pprof/.
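Wiring that up is essentially one import (sketch below); the handlers land on the default mux and the profiles can then be pulled with go tool pprof:

    package main

    import (
        "fmt"
        "log"
        "net/http"
        _ "net/http/pprof" // registers the /debug/pprof/ handlers
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello")
        })
        // CPU, heap, and goroutine profiles of the live server are now at
        // http://localhost:6060/debug/pprof/
        // e.g. go tool pprof http://localhost:6060/debug/pprof/profile
        log.Fatal(http.ListenAndServe(":6060", nil))
    }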
> It is borderline fanaticism to insist on comparing Go to Java, and unfair to Go. Effective advocacy of Go should focus on its strengths (syntax) and not attempt to gloss over its inherent limitations. It is a capable and viable language within well defined limits. Java and JVM are in another league. Accept it and move on.
You continually give me solutions that are impossible to implement in Go, but not the corresponding problems. Like I said, there are things that are syntactically impossible to write down in Go. If you want me to give you a list of things that are impossible to do in Java, I can do that (structural typing, value types, methods on values, multiple top level definitions in a single file, first class functions, efficient array slicing come to mind). I have not done that because it is a nonsensical thing to do. That would be like me saying Javascript is superior to Java because Java does not have prototypical inheritance.
The only useful example I've heard is sandboxed plugins. That is a problem, not a solution. All the things you've listed are solutions. If you actually want to have a conversation about how to solve problems in Go, then list problems, not a laundry list of features that Java has that Go does not.
> If a new 'headius' feels up to it, Go on JVM would be a very interesting new language for the JVM ...
Why? If I want more performance than gc gives me, then I use gccgo. What would Go on the JVM give me?
Also, on an unrelated note, I find it a little sad and pathetic how both you and 0xABADC0DA have to resort to insults to get your point across.
>> and related Instrumentation
> Go has a CPU profiler, a memory allocation profiler. In tip, it has a data race detector. It can expose this data over an HTTP interface so you can profile a running server http://golang.org/pkg/net/http/pprof/.
It is canonically called "instrumentation" but that is not all that it is capable of doing. Have you actually ever used this feature of Java?
> Also, on an unrelated note, I find it a little sad and pathetic how both you and 0xABADC0DA have to resort to insults to get your point across.
?? When and where did I insult you? Pathetic, on the other hand, is an insult. I strongly take exception to that.
Are you kidding me? This post is so full of arrogance and brazen oversimplifications of both Java and Go, there is so little of actual value to even bother discussing.
It's almost like you don't realize that Java came out a decade before Go, or that you don't understand the power of Goroutines or the power of writing your own low-level concurrent code.
I'd hate for you to find out that Java's concurrency primitives are probably implemented using the same constructs as goroutines and/or are available to devs in the sync package. (Feel free to write "thread"-safe data structures. Hell, if you'd wanted to complain about the lack of generics in that case, I'd have even joined you.)
Research, sure, but it doesn't ignore the last decade of practical experience writing programs. A programming language is a user interface. The theory isn't all that matters.
And yet I'm writing tons of code in it very quickly. I don't know how it missed HN, but Go also now powers Google's download infrastructure (Chrome, etc)
If only Scala compilation weren't so slow. This is a bit off topic, but I find the difference between Scala and Go to exemplify some of the fundamental tradeoffs in the languages.
Scala is certainly the more researchy language with an advanced type system, better type inference, and multi-paradigm. However, its compilation is horribly slow, programming styles vary wildly, and I find its tools e.g., SBT, to be underdeveloped.
On the other hand, Go has constrained features and complexity, while providing higher-level features like GC, first-class functions, and simple type-inference, along with excellent tools e.g., gofmt and the build system.
I don't want to imply that one is simply better than the other, but I have enjoyed working in Go much more than Scala.
True. I haven't played with Go, but the reason I like Scala (slow compiler and somewhat lacklustre tooling aside) is precisely because I have a lot of time and brainpower invested in JVM-based stuff. If and when I need a wicked fast systems programming language, I think Go might be on the agenda.
I'm pretty stoked on Go. I freaking love C. Love it. It's like they took C and went to the best disciples of C's creators and said "look - we've got these problems now nobody envisioned 40 years ago. Can you make C for us again, but better?" And they said "yes!" and it was good.
We get some nice concurrency primitives, garbage collection, cleaner syntax, something between structs and objects that fits the right feeling, automatic bounds checking, cute array syntax, and a big-ass, well defined standard library. Oh, and this concept of interfaces that is so well executed it's not even funny.
Except. I feel like they are forcing the fanboy mindset. At one point in this slide deck, there is the following bullet: "The designs are nothing like hierarchical, subtype-inherited methods. Much looser, organic, decoupled, independent."
I didn't see the talk. But that is the most vapid, meaningless description I've ever seen of a feature of a programming language. Rob might as well have said, "it's hipster better," which would have conveyed exactly as much meaning.
So here's my question - and I hope there are real answers - can someone point me to >3 real, big systems that are built using Go? I'll accept Google internal systems on faith.
I was a bit confused at slide 35 where he said that Go's declaration syntax is "easier to parse -- no symbol table needed" as I don't typically think of a symbol table being involved in parsing. Here's an interesting comp.compilers thread from 1998 about parsing C++: http://compilers.iecc.com/comparch/article/98-07-199
Couldn't pass this one up seeing as I should actually be dealing with this for a class instead of commenting on Hacker News. What it means is that certain sets of symbols cannot be parsed unambiguously without information generated during the parse itself (the grammar is not context-free).
The link in the sibling comment has the canonical example:
foo * bar;
can either be parsed as a multiplication or a declaration depending on whether "foo" has been typedef'd previously. So you need to keep track of typedefs in a symbol table to parse things correctly.
The biggest problem with this is that many parser generators (including yacc) are much harder (if not impossible) to use with context sensitive grammars.
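For contrast, here is a rough Go-side sketch (mine, not from the slides) of why the name-before-type order never needs that typedef bookkeeping:

    package main

    import "fmt"

    type foo struct{ n int }

    func main() {
        // A declaration: the var keyword and name-first order mean the parser
        // never has to know whether "foo" names a type.
        var bar *foo = &foo{n: 6}

        // An expression: unambiguously a multiplication, never a declaration.
        x := bar.n * 7

        fmt.Println(x) // 42
    }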
To me it looks like Go is designed to make compiler writers' lives easier rather than programmers'. Why is it so important to avoid a symbol table if it makes the language less intuitive (at least for people with a C background)?
We should be moving towards compilers that are capable of dealing with syntax even closer to natural semantics, not the other way around.
I suspect a similar mindset was at work when making decisions on error handling, which, IMO, is the most appalling part of Go. If you have coded in C most of your life then it's the same ugliness as usual, but for others it's a huge step backward. Again, the designers of Go could have implemented exceptions and let programmers choose which mechanism they prefer. This would be the same as in Java or C#, where a programmer can choose to return error values instead of raising exceptions. But I guess that would be more work for compiler writers.
Easier to parse is not only good for the compiler. There is lots of other stuff that needs to parse a language: syntax highlighters, style checkers, reformatters, static program analysis, automatic refactoring. Some simple syntax choices make it tremendously easier to develop all these tools.
C++ is known for being really hard to parse. Rumor has it there is no complete C++ compiler so far. Java avoided those pitfalls. As a result you can see that Java has much better IDE support (e.g. refactoring) than C++.
> Why is it so important to avoid a symbol table if it makes the language less intuitive (at least for people with a C background)?
Because Pascal-style declarations are superior in readability. With type inference, they also allow less typing ("foo :=" instead of e.g. "auto foo =").
You are going to learn a completely new language, new idioms, a new standard library, new tools, and you're arguing that switching to a different style of declarations, a very very minor detail, is "less intuitive"? I'm not sure how intuition applies there: are you learning a language by trying to compile what you typed without reading any documentation, or what?
> Again, the designers of Go could have implemented exceptions and let programmers choose which mechanism they prefer.
Because making them optional doesn't work well. If one person uses exceptions and another doesn't, combining code from those two people forces you to handle both exceptions and error returns. It would be even messier than indicating errors in C (does this function return 0 on error, or -1, or what?).
It's arguable that the Pascal-style declarations are superior in readability.
I've discussed this with colleagues in the past, and those who were raised with English as their native language, for example, often find C-style declarations easier to comprehend. They find the adjective (that is, the type) coming before the noun (the variable name) more natural.
The opposite happens for those whose native language is one where adjectives follow nouns. They find the Pascal convention easier to comprehend.
The English speakers must not have taken many math courses: in mathematics, type declarations much more commonly come after the variable name than the other way around, although both styles are used. I mean, "Let G be a group and f an automorphism of G" is much more common than "Consider a group G and an automorphism of G, f".
I think that the argument generally goes that if the grammar is unambiguous then it's also easier to read (once you're accustomed to the syntax) because there can only be one interpretation and so you don't need to keep knowledge of what's a type and what's a value in your head to understand a line.
"Dependency hygiene trumps code reuse.
Example:
The (low-level) net package has own itoa to avoid dependency on the big formatted I/O package."
...now, if this kind of attitude stays in core-dev land I don't really care about it. But when I consider Go as an alternative for a large project, I'll start worrying if people adopt the "it's OK to reinvent some wheels" philosophy when they start building higher level frameworks in Go... I mean, how hard can it be to split the "big formatted I/O package" into a "basic formatted I/O" package and an "advanced formatted I/O package" that requires the first one, and have the "net" package only require "basic formatted I/O"? Or maybe even make "basic formatted I/O" something like part of the language "built ins" (I don't know Go, so I don't know the Go terms for this).
> I'll start worrying if people adopt the "it's OK to reinvent some wheels" philosophy when they start building higher level frameworks in Go...
I certainly don't have this philosophy, and I don't know that it's a thing amongst Go programmers. I think it's just a standard library thing.
> I mean, how hard can it be to split the "big formatted I/O package" into a "basic formatted I/O" package and an "advanced formatted I/O package"
It would be easy, but then you're cluttering your library to make implementing the std lib easier, rather than making the lives of people using the std lib easier.
> or maybe even make "basic formatted I/O" something like part of the language "built ins"
Things that are language built-ins would need to go in the spec and would need to be implemented by the two separate compiler suites. The authors seem to be doing their best to keep the core as small as possible. Having libraries written in Go is cleaner, is reusable across compilers, and gives newcomers a place to read idiomatic Go.
> It would be easy, but then you're cluttering your library to make implementing the std lib easier
Yeah, but the "advanced formatted I/O" package could include everything from the "basic" one (i.e. act as if it contains all the functions and structs of both) and be the only package 90% of regular language users know about. If I happened to be writing some low-level code that only needed the "basic/low-level formatted I/O", I would just import that... it just seems 100x more "sane" to do it this way.
I was having lunch with a friend of mine who works at Google a few weeks ago and I asked him how popular Go was at Google. He said the only people using it are the Go team. Java and C++ continue to reign supreme at Google.
Your friend is out of the loop. I am one of the Go readability reviewers, and I review thousands of lines of Go code each week. The code is written by Googlers from different teams around the world.
It is true that Go is in the minority compared to C++ and Java. Change takes time at a company of this size. But the graphs are all up and to the right, and the rate of Go code being written at Google is accelerating.
> I am one of the Go readability reviewers, and I review thousands of lines of Go code each week.
If you're on the Go readability team, you are going to see thousands of lines of Go code each week, as a simple consequence of selection bias. Your observation doesn't invalidate my friend's claim.
Something that would invalidate that claim would be how many lines of Go are submitted into the Google code base every week compared to Java and C++. My friend is simply saying that this ratio is minuscule.
Let me add: more powerful and expressive languages can often get away with far fewer programmers. So if you measure by sheer number of people working in a language, or by lines of code, you get a misleading count.
A rough estimate from Erlang is that the typical Erlang program is 1/5th the size of the typical C++ program implementing the same functionality. And the Erlang program even handles unforeseen events :)
> A rough estimate from Erlang is that the typical Erlang program is 1/5th the size of the typical C++ program implementing the same functionality.
And taking that to the extreme, a J/K/APL program is usually 1/100th the size of the typical C++/Java program implementing the same functionality. Unlike Erlang, it's not more robust - but it's often faster (not because of some theoretical advantage - when you write 1/100th of the code with the right primitives, you have more time to think about optimizing it).
But as another poster said: Shh, don't tell anyone about our unfair advantage.
My stance is still the same as two years ago: Golang is a very decent language with a lot of interesting features, but the lack of any direct treatment of fault tolerance is what I miss. In distributed systems you can't hope to handle every failure scenario explicitly; at some point you have to punt and assume a given problem just doesn't happen in the real world. And that is exactly where you need fault tolerance.
It's not really common or standard to rely on a cluster of VM instances to solve your distributed problems for you these days.
For most programmers: Statelessness is the default, replication and fail-over are the back-up plan.
Talking about Erlang's "fault tolerance" as if it's some sort of secret weapon these days (it was more important 10-20 years ago) is a canard and distracts from the better parts of Erlang.
Google has four canonical languages: C++, Go, Java, and Python. They are defined internally in an engineering handbook. There is no similar style guide for Go, as a lot of what the existing guides say is "don't use x." If that were needed for Go, we would have failed.
No I did not. The friend said that nobody outside the Go team uses Go. My post indicates that the majority of Go code at Google is written by people working on other teams.
I never liked Go, but I admit I like it a lot more if it means I don't have to deal with Maven and other dependency systems as much, as the slides suggest. I've been doing Android since it came out, but before that I was J2EE, and it could be hell if Hibernate needed a different version of a dep than something else in your code stack, for example. Handling JARs by hand stopped working, and the countless XML files listing deps for all parts of the system and its libraries started filling up with overrides and the like. Eventually Oracle rewrote all the class loading for their app server, which allowed dependencies to have their own dependencies, invisible to the apps using them. So something as basic as class loading had to be completely reworked to deal with this, and it was still damn complex and a pain in the ass.
>I never liked Go, but I admit I like it a lot more if it means I don't have to deal with Maven and other dependency systems as much, as the slides suggest.
Hrm, yeah, there are alternatives here. (Not in Java-land...other places.)
> I've been doing Android since it came out, but before that I was J2EE, and it could be hell if Hibernate needed a different version of a dep than something else in your code stack, for example.
I think all of the keynotes at OOPSLA this year were surprisingly engaging.
I've seen a lot of Go talks from various Googlers, and I have to say that this was the best-motivated, most humble, and most honest of them that I have seen. Rob knew he was speaking to an extremely PL-oriented audience, and structured his talk accordingly, and the result was fantastic. Go comes from a very different standpoint than almost all academic PL work, and in that respect, for those of us in academia, it's an interesting breath of fresh air and a reminder of the uniquely fine line between industry and academia in computer science.
If you squint your eyes enough, automatic reference counting is a garbage collection method. But if you are going down that road, there are other garbage collection schemes which often produce much better performance in common situations, e.g. copying collection or mark-and-sweep.
But it is a trade-off. Refcounting collectors need to address pointer cycles, and they are also quite slow due to all the refcount bookkeeping going on all the time. On the other hand, they have the nice property that an object is reclaimed as soon as its last reference disappears, rather than lingering in a zombie state, and there is no point in time where the system is in a "collecting" state and cannot give service.
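To make the bookkeeping cost and the cycle problem concrete, here is a toy manual-refcounting sketch in Go. It is purely illustrative, not how Go (or any production collector) manages memory:

    package main

    import "fmt"

    // node is a manually reference-counted object.
    type node struct {
        refs int
        next *node
    }

    func retain(n *node) { n.refs++ } // every pointer copy costs a counter update

    func release(n *node) {
        n.refs--
        if n.refs == 0 {
            fmt.Println("reclaimed immediately") // deterministic, no "collecting" pause
            if n.next != nil {
                release(n.next)
            }
        }
    }

    func main() {
        a := &node{refs: 1}
        b := &node{refs: 1}

        a.next = b
        retain(b)
        b.next = a // a cycle: a and b now keep each other alive
        retain(a)

        release(a) // drops a to one reference, not zero...
        release(b) // ...so neither object is ever reclaimed: the classic refcount leak
    }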
That said, many modern, multi-core malloc() routines contain a lot of ideas from GCs in them. The gap is closing fast.
Because tracking memory allocation and ownership in a concurrent program is very hard. Programmers have to track a lot of that stuff in their heads, and it's easy to make mistakes. If the language is garbage collected, however, the programmer doesn't have to think about that.
There does not seem to be any indication to the user of what they're supposed to do to get the next slide, or even that there is a next slide.
EDIT: Well, OK, now that I maximized there is. But there was no indication that I was missing content to the left or right either. I appreciate simplicity but you gotta give first time users something to get them going.
But this is not for users. These are slides used in a talk served by presentation-oriented software. The user of the software is the person writing and doing the talk, not us. The fact that it's HTML5 makes it very easy to share with people who have not seen the talk, but the primary purpose of the tool is to help the speaker, not late readers.
Actually there is, but it seems to be a bug.
You can scroll side to side to navigate through the slides (swipe left or right with two fingers on a MacBook touchpad). However, this only seems to load a few slides -- the swiping doesn't trigger the 'next slide' JavaScript, so only a few prerendered slides are usable.
There's an interesting bug if you arrow past the end of the slideshow. It moves a distance not on a slide width boundary, so if you arrow back from there you're in the middle of the last slide, and so on.
They gave a presentation about it a couple of weeks ago at the Sydney golang meetup. It has an interesting feature where it runs a built-in websocket server that can be used to embed code in the presentation, and have it compile and execute output directly back into the presentation itself. I believe it's similar to the tech running behind the play.golang.org site.
I'm amazed that people don't understand what this is. You are not a user. There is only one user -- the speaker who is giving the talk. Somebody gives a talk and uses this tool to present the slides. It shows the same UI elements as PowerPoint or Keynote when displaying a talk -- none. The presenter knows what he needs to do in order to advance to the next slide, and only he has to know. Do you wish to add back and forward buttons to PowerPoint as well?
This is a fantastic tool. The input is a text file, so it can be versioned, grepped, and sed-ed. It avoids huge and clunky tools. It allows the code presented to be edited, compiled, and run within the presentation itself.
One of the few nice things about the C include mechanism is that it's pretty easy (and standard practice) to set up separate implementation and interface files. If the implementation of something changes, but not the interface, I don't want to have to recompile all the clients. This is lost in almost all "modern" languages.
(This is, of course, horribly broken in C++ which likes to inline everything.)
As far as I can tell, there are two advantages to the separate header file / implementation file pattern in C. First, you get easy documentation of a library's API, and second, the advantage that you mentioned (implementation changes do not necessitate a full recompile).
I would argue that both of these (or at least the core of both) are addressed by Go.
For documentation, there is a simpler way to get at a library's public API: an automated program that extracts the API and presents it as a web page (or in your console). See http://golang.org/pkg/unicode/utf8/ for an example (for completeness, here is the file it is generated from: http://golang.org/src/pkg/unicode/utf8/utf8.go).
For avoiding unnecessary recompiles, Go addresses the problem by making compiles really fast, rendering it moot.
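Picking up the documentation point, here is a hedged sketch of the convention the doc generator relies on (the package and function below are invented): the comment directly above an exported identifier becomes its documentation, so the "interface file" falls out of the source itself.

    // Package textutil provides small string helpers.
    package textutil

    // Reverse returns s with its runes in reverse order.
    // This comment is what the documentation tool shows alongside the signature.
    func Reverse(s string) string {
        r := []rune(s)
        for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
            r[i], r[j] = r[j], r[i]
        }
        return string(r)
    }

The generated page then shows just the exported signature and that comment -- roughly what a C header would tell you, without a second file to keep in sync.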
The only advantage Go interfaces have over interfaces in other languages is that you can make third-party code satisfy a given interface, if you're lucky enough to have the same set of methods available.
Even then, you might hit the issue that although the interface matches syntactically, the semantic meaning of the method calls differs.
For the CS folks, Go interfaces are nothing more than structural typing, available in almost any FP language.
The only advantage over Java, C#, D is that you are not required to state explicitly which interfaces a given type implements.
However this forces you to use tools to discover which interfaces a given type implements.
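One common mitigation for that discoverability problem is a compile-time assertion that writes the intent down and has the compiler check it. This is an idiomatic convention rather than a language requirement; the standard-library types below are real:

    package main

    import (
        "bytes"
        "io"
    )

    // These blank-identifier declarations compile to nothing, but the build
    // breaks if *bytes.Buffer ever stops satisfying io.Reader or io.Writer,
    // making the otherwise implicit relationship explicit in the source.
    var (
        _ io.Reader = (*bytes.Buffer)(nil)
        _ io.Writer = (*bytes.Buffer)(nil)
    )

    func main() {}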
Didn't we try error codes back in the day? I think exceptions are brilliant. They let you write your error code in one place (at the top of the stack), instead of planning for every contingency at every level.
Once I discovered exceptions back in the 90s, life got a lot easier.
Of course, Java ruined exceptions with the invention of the CheckedException; maybe this tainted the Go designers' thinking?
The designers of Go rejected exceptions because, while seemingly simple, they are exceptionally hard to do correctly, and worse, it is extremely hard and subtle to distinguish between code that reasons about them correctly and code that does not.
As someone who grew up with exceptions, I like the idea of errors being recoverable by default. If it's not fatal, your catch block is going to swallow the exception anyway. And if your error is fatal, panic().
(This could be Java experience scarring my then-young mind)
Most exceptions aren't recoverable. They are usually the result of bugs or system problems. This means either 1) the bug needs to be fixed, or 2) the system needs to be fixed (start up a DB service, clean up disk space, add memory, etc.).
This is where Java went wrong: checked exceptions treat everything as recoverable when recovery is unlikely.
I mean, someone tries to login to a web site and the DB is down. What do you do? Or there is a bug that throws a null pointer exception. How do you recover from that?
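For what it's worth, here is a hedged sketch of how that division of labour tends to look in Go (the functions and the scenario are invented): an unavailable database is an expected failure and travels as an error value, a bug panics, and a single recover near the top of the stack is where it gets reported.

    package main

    import (
        "errors"
        "fmt"
    )

    // lookupUser stands in for a DB call. An unavailable database is an expected
    // failure, so it is returned as an ordinary error value.
    func lookupUser(id int) (string, error) {
        if id == 0 {
            return "", errors.New("database unavailable")
        }
        if id < 0 {
            var m map[string]int
            m["boom"] = id // a genuine bug: writing to a nil map panics
        }
        return fmt.Sprintf("user-%d", id), nil
    }

    // handleLogin plays the top-of-stack role: expected errors are handled where
    // they occur, while panics (bugs) are recovered and reported in one place.
    func handleLogin(id int) {
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("internal error, page someone:", r)
            }
        }()

        name, err := lookupUser(id)
        if err != nil {
            fmt.Println("login failed, show a 'try again later' page:", err)
            return
        }
        fmt.Println("welcome,", name)
    }

    func main() {
        handleLogin(42) // happy path
        handleLogin(0)  // DB "down": handled as a plain error
        handleLogin(-1) // a bug: panics, recovered and reported at the boundary
    }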
I think the error code approach has the same effect as CheckedExceptions: people tend to write code that ignores the problems and doesn't report them. Exceptions are great in that they bubble up to the top of the stack, so they are easy to report and alert on.
That reminds me of a story that I read ages ago. It happened in the 1980s.
This group of programmers had taken to exercising while waiting for compiles. They were all buff. Then Turbo Pascal came out and they all turned back into couch potatoes. :-)
Ah, I remember my first job, where I worked on a huge legacy C++ project. My first day of work was spent building it and fixing linker errors. After that, to test it, I rebuilt everything from scratch, and it took about that long: 30-45 minutes. While waiting for it to build, I wrote my own PHP time-tracking system (click when I got in, click again when I went home). Ironically, wasting a day on compiling saved me hours of work keeping track of hours.
golang: small, simple description, download link, platforms, and a UTF-8 (editable!) hello world, all far "above the fold" and following that nice "F" shape for where they put important information. Very plain. Very efficient.
ruby-lang looks like a web 2.0 CRM-built site. So much wasted space, so little obvious flow or prioritization of elements, even on the main page.
And the others attempt to describe things instead of demonstrating them, use a lot of vertical space for big images that don't inform you of anything, and generally hide the real meat a scroll away for what reason? They don't make a sale if I hit the big green button, but they do get a totally-uninformed poke at their system, generally leading to a crappy first experience.
--
I'm not at all a fan of the slideshow interface, but I certainly don't find it a bad thing that they aren't high on Photoshop fumes. Flashy designs can sometimes be used to great effect, but they generally aren't informationally dense, and density is what programmers often prefer.
I think it's nice. It's super functional and easy to find information. It may not be pretty, but I know exactly where to go to find what I want.
I like that the urls are consistent. To find package documentation, I go to golang.org/pkg/<name of import>. For example, for http docs, I go to golang.org/pkg/net/http.
For most languages I resort to a Google search to find package docs. This is true for Python, C++, and even Java. I rarely have to do a Google search for anything in Go.
I think the site very much fits in with their goal of simple and orthogonal. I actually wish more language documentation followed suit and made it easier for programmers to search for docs.
Sure, everyone wants a pretty site, but it's not like they need to advertise anything. It gets the job done very efficiently.
Regarding docs, what's also extremely neat is that when you're offline, you can just run "godoc -http=:6060" and it starts an HTTP server on your local machine serving the exact same fine documentation experience. So thoughtful: one of many small details that are just part of the overall package; when you need them, you find them easily and are delighted.
It depends on the software. I wouldn't trust an encryption library whose website had a really "beautiful" design; on the other hand, I wouldn't trust a CSS framework whose site had a very basic or ugly design. For a language like Go, which is aimed at more experienced developers, I would say that visual design really doesn't matter that much.
That said I prefer the design of the Go website to all of those that you linked and that Git website still saddens me when I see it: I miss the old site :(
So, yes, presentation does matter but that doesn't always mean that it should be pretty. For some audiences, that can be a turn-off.
I recently saw someone standing in front of a Mies van der Rohe building and call it ugly because the windows featured no curtains and no flower boxes.
I spent a year working on a project in Go, so I got pretty used to the language. One thing I did wonder, though, was how it would fare on very large systems; for example, how could Go hook up to something like MPI to work across clusters of computers?
> It can be better to copy a little code than to pull in a big library for one function.
I just don't get this. If you statically link in small functions from a big library, you only get the little bit you need anyway. Are they saying you avoid compiling the "big library" over and over? But if it is already compiled, that should not be necessary. And the chances are you are going to be importing lots of "little code" from the "big library" anyway. Unless they are saying the implementation of net's itoa is somehow simplified and not just a straight code copy... otherwise I don't understand this approach.
> And the chances are you are going to be importing lots of "little code" from the "big library" anyway.
That "big library" is often supported by someone else. One day, a year down the line, you realize startup time for you executables went from 0.01s to 2s because Big Library's initialization code now scans all subdirectories for some caching optimization, and stuff like that - stuff that's a win for general users of biglib, but not for you.
> I just don't get this
This is not about theory, it's about practical experience.
You'll notice that successful C/C++ libraries tend to include everything that's small enough and that they need to function, because dependencies bite you later on. E.g. libav/ffmpeg has its own copy of md5, libtcc has its own slab allocator, gsl has its own everything, and everything ships with a copy of libz in case the system libz is borked.
The Go standard library just starts applying this principle a little earlier, inside the standard library itself. I suspect that's because they envision a Python-style, batteries-included, extensive standard library, which has no benevolent dictator to put order into it.
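As a rough sketch of the scale involved (this is not the actual net package code, just an illustration of how small such a copied wheel can be):

    package main

    import "fmt"

    // itoa converts a non-negative int to its decimal string form. A private
    // helper of roughly this size is the whole "wheel" being reinvented,
    // versus pulling in all of fmt's formatting machinery to print one integer.
    func itoa(n int) string {
        if n == 0 {
            return "0"
        }
        var buf [20]byte
        i := len(buf)
        for n > 0 {
            i--
            buf[i] = byte('0' + n%10)
            n /= 10
        }
        return string(buf[i:])
    }

    func main() {
        fmt.Println(itoa(2012)) // prints "2012"
    }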
I don't get the interface stuff. If you add a method to an interface, you have to add it to all implementers, or clients won't compile. How is that different from Java? Java just adds the implements declaration, so Eclipse can help you find those implementations, instead of making you pore over compiler error listings.
In Java, you can run into situations where two classes share a method, but because they don't implement a common interface, for instance because they're part of two different libraries, you can't e.g. write a method that accepts instances of either. You can't straightforwardly take advantage of polymorphism.
In Go this is never an issue; you can declare an interface that captures the methods you're interested in, and the existing code automatically implements it. It's kind of like duck typing in python (e.g. this function will work on any object that has a method <x>), except it's statically typed.
You can also compose interfaces: declare, say, a ReadWriter that embeds both Reader and Writer, and everything implementing both will automatically be a member of this newly formed interface. You don't have to go back and tag an "implements ReadWriter" onto each of the types you want to implement it. The implementation is implicit if you satisfy both interfaces.
The power is that you can change your interface structure later on, in another part of the program, and the existing code just picks it up. You may not even have the source code, or the right to change the source code, so you can't easily tag an 'implements ...' onto it.
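A minimal sketch of that ReadWriter case (io.Reader and io.Writer are the real interfaces; the "loopback" type is invented): the concrete type never mentions any of the interfaces it ends up satisfying.

    package main

    import (
        "fmt"
        "io"
    )

    // ReadWriter is formed after the fact by embedding two existing interfaces.
    type ReadWriter interface {
        io.Reader
        io.Writer
    }

    // loopback plays the role of pre-existing code: it just happens to have Read
    // and Write methods, and never declares that it implements anything.
    type loopback struct{ buf []byte }

    func (l *loopback) Write(p []byte) (int, error) {
        l.buf = append(l.buf, p...)
        return len(p), nil
    }

    func (l *loopback) Read(p []byte) (int, error) {
        if len(l.buf) == 0 {
            return 0, io.EOF
        }
        n := copy(p, l.buf)
        l.buf = l.buf[n:]
        return n, nil
    }

    func main() {
        var rw ReadWriter = &loopback{} // satisfied implicitly; no "implements" tag anywhere
        rw.Write([]byte("hello"))
        out := make([]byte, 8)
        n, _ := rw.Read(out)
        fmt.Println(string(out[:n])) // prints "hello"
    }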