For fear of disagree downvotes: I would say that many of the qualms brought up in this article are problems that are encountered fighting the language.
The problem of 'summing any kind of list' is not a problem that is solved in Go via the proposed kind of parametric polymorphism. Instead, one might define a type, `type Adder interface { Add(Adder) Adder }`, and then a function to add anything you want is fairly trivial, `func Sum(a ...Adder) Adder`: put anything you want in it, then assert the type of what comes out.
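A minimal runnable sketch of that idea (the `MyInt` type is my own illustration, not from the comment above):

```go
package main

import "fmt"

// Adder is the interface sketched above: anything that can add
// itself to another Adder.
type Adder interface {
	Add(Adder) Adder
}

// MyInt is a made-up concrete type satisfying Adder.
type MyInt int

// Add type-asserts its argument back to the concrete type.
func (a MyInt) Add(b Adder) Adder {
	return a + b.(MyInt)
}

// Sum folds any number of Adders together.
func Sum(as ...Adder) Adder {
	total := as[0]
	for _, a := range as[1:] {
		total = total.Add(a)
	}
	return total
}

func main() {
	total := Sum(MyInt(1), MyInt(2), MyInt(3)).(MyInt)
	fmt.Println(total) // 6
}
```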
When it comes to iteration, there is the generator pattern, in which a channel is returned, and then the thread 'drinks' the channel until it is dry; for example `func (m myType) Walk() chan myType` can be iterated over via `for v := range mt.Walk() { [...] }`. Non-channel-based patterns also exist; tokenisers usually have a `Next()` which can be used to step to the next token, etc.
The writer seems to believe that calling functions on nil pointers crashes the program; this is not the case. It's a common pattern in lazy construction to check if the receiving pointer is nil before continuing.
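For the record, a hedged sketch of what a method called through a nil pointer looks like (`config` and `Verbose` are made-up names):

```go
package main

import "fmt"

type config struct {
	verbose bool
}

// Verbose is safe to call on a nil *config: a method with a pointer
// receiver runs fine with a nil receiver, as long as it doesn't
// dereference it unchecked.
func (c *config) Verbose() bool {
	if c == nil {
		return false // fall back to a sensible default
	}
	return c.verbose
}

func main() {
	var c *config            // nil pointer
	fmt.Println(c.Verbose()) // prints false, no crash
}
```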
Go is not flawless by any means, but it warrants a specific style of simplistic but powerful programming that I personally enjoy.
>The writer seems to believe that calling functions on nil pointers crashes the program; this is not the case. It's a common pattern in lazy construction to check if the receiving pointer is nil before continuing.
And what happens when you don't check? It crashes. That's the unsafe part.
These crashes are simply not possible in Rust and Haskell, and the type system notifies you if failure is possible (because the function will return an Option/Maybe).
Woa, Rust is great, but I think you're going a little far on the hyperbole there.
You can easily generate a segfault in Rust in 'unsafe' (or 'trusted') code; that might only restrict errors of that nature to code that uses unsafe blocks.
Practically speaking that's pretty common; once you enter an FFI unsafe block, you lose all type safety; but you can totally do it without FFI too. Eg. using transmute().
In fact, there's no way to know if your code contains 'hidden' unsafe blocks wrapped in a safe API in some 3rd-party library that might cause a mysterious segfault later on.
You can argue that 'if you break the type system you can do anything, obviously'; that's totally true.
I'm just pointing out the statement: "These crashes are simply not possible in Rust and Haskell" <-- Is categorically false.
You can chop your own arms off in Rust just like anything else (including Go).
> You can easily generate a segfault in Rust in 'unsafe' (or 'trusted') code; that might only restrict errors of that nature to code that uses unsafe blocks, but practically speaking that's pretty common; once you enter an FFI unsafe block, you lose all type safety; but you can totally do it without FFI too. Eg. using transmute().
Not directly addressing what you're saying, but, IME people are far too quick to use `unsafe` code. One needs to be quite careful about it as there's a pile of invariants that need to be upheld: http://doc.rust-lang.org/master/rust.html#behavior-considere...
> once you enter an FFI unsafe block, you lose all type safety
You don't lose all type safety, especially not if the FFI bindings you're using are written idiomatically (using the mut/const raw pointers correctly; wrapper structs for each C type, rather than just using *c_void, etc).
Maybe "unsafe" is being used to mean different things here. Some may interpret it as referring to unsafe memory access. Others may use it to mean the possibility of crashing at run time (due to a null pointer dereference).
There is some subtlety about dereferencing a null pointer. Many languages (C, C++, Rust) state that *NULL is undefined behaviour, that is, the compiler can assume that it never happens and optimises based on this. This can lead to a "misoptimised" program that doesn't actually segfault when the source suggests it should.
The compilers don't map pages there, the operating system does. The problem is the compiler will optimise assuming that a null deref never happens, so you can have source that looks like it should crash due to a null deref, but the compiler has "misoptimised" it to have very different behaviour.
Specifically, the word safe, when referring to type systems means "memory safe". Meaning the compiler or runtime either prevents bad memory accesses by construction or ensures dynamic checks are in place that throw an exception or halt execution in case a bad memory access was about to occur. It means the program isn't accessing uninitialized memory and isn't vulnerable to buffer overflows etc. It doesn't mean your program won't ever crash.
I agree it's better to avoid null dereference errors by not putting null in your language, but by the normal meaning of safe here, Go is safe.
> It's a common pattern in lazy construction to check if the receiving pointer is nil before continuing.
I disagree: if the construction can fail, the constructor must return an error, which will be checked; only if the error is nil can the process continue. There shouldn't be logic on the actual data returned to assert whether a constructor worked or not.
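A small sketch of that convention, with hypothetical `Client`/`NewClient` names: failure is reported via the error value, never inferred from the returned data.

```go
package main

import (
	"errors"
	"fmt"
)

type Client struct{ addr string }

// NewClient is a made-up constructor; callers check err, not c.
func NewClient(addr string) (*Client, error) {
	if addr == "" {
		return nil, errors.New("empty address")
	}
	return &Client{addr: addr}, nil
}

func main() {
	c, err := NewClient("localhost:8080")
	if err != nil {
		fmt.Println("construction failed:", err)
		return
	}
	fmt.Println("connected to", c.addr)
}
```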
>For fear of disagree downvotes: I would say that many of the qualms brought up in this article are problems that are encountered fighting the language.
If that's so, it's because it's a language that also fights lots of things a modern programmer wants to do/have.
>When it comes to iteration, there is the generator pattern, in which a channel is returned, and then the thread 'drinks' the channel until it is dry; for example `func (m myType) Walk() chan myType` can be iterated over via `for v := range mt.Walk() { [...] }`. Non-channel-based patterns also exist; tokenisers usually have a `Next()` which can be used to step to the next token, etc.
Actually, using channels as a general iterator just for the sake of the range operator is considered an anti-pattern. The reason is not performance (although there is a cost), but the risk of leaking producer goroutines. Your example:
for v := range mt.Walk() {
    if blah {
        break
    }
}
How will the goroutine writing into the channel returned by mt.Walk know when there are no more consumers which will possibly read from it?
One way out is:
done := make(chan struct{})
for v := range mt.Walk(done) {
    if blah {
        break
    }
}
close(done) // or defer close(done)
Picking the right cleanup is error-prone.
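For illustration, here is one possible self-contained version of the done-channel pattern above; this `Walk` is a stand-in that just produces a few integers:

```go
package main

import "fmt"

// Walk returns a channel of values, and stops producing as soon as
// the consumer closes done.
func Walk(done <-chan struct{}) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for i := 0; i < 10; i++ {
			select {
			case out <- i:
			case <-done: // consumer gave up; stop producing
				return
			}
		}
	}()
	return out
}

func main() {
	done := make(chan struct{})
	defer close(done) // signals the producer when we return
	for v := range Walk(done) {
		if v == 3 {
			break // producer exits via the done channel
		}
		fmt.Println(v)
	}
}
```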
What about errors? How will mt.Walk tell you that it had to interrupt the iteration because an error happened? Either your channel carries a struct containing both your actual value and your error (unfortunately Go lacks tuples or multivalue channels), or you use a separate error channel.
Furthermore, uncaught panics in the producer goroutine will generate a deadlock, which will be caught by the runtime, but it will halt your process. One way to do it is:
errChan := make(chan error)
for v := range mt.Walk(errChan) {
    if blah {
        break
    }
}
err := <-errChan
The producer will use the select statement to write both to errChan and your result channel. The success of writing to errChan is a signal for the producer that the consumer exited.
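A simplified, runnable take on that errChan idea (this sketch drops the select-based early exit and has the producer close the result channel before reporting the outcome; `Walk` and the failure at i == 3 are invented):

```go
package main

import (
	"errors"
	"fmt"
)

// Walk sends values on the returned channel, then reports the final
// outcome (nil on success) on errChan after closing the channel.
func Walk(errChan chan<- error) <-chan int {
	out := make(chan int)
	go func() {
		var err error
		for i := 0; i < 5; i++ {
			if i == 3 {
				err = errors.New("something broke at 3")
				break
			}
			out <- i
		}
		close(out)     // ends the consumer's range loop
		errChan <- err // then report the outcome
	}()
	return out
}

func main() {
	errChan := make(chan error)
	for v := range Walk(errChan) {
		fmt.Println(v)
	}
	if err := <-errChan; err != nil {
		fmt.Println("walk failed:", err)
	}
}
```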
However, the same thing applies here about relying on the last statement being executed to avoid a leak in case of returns or panics. Here the defer is less nice, since you're supposed to do something with the error (and it must be registered before the loop, or a panic inside it will skip the cleanup):
func Example() (err error) {
    errChan := make(chan error)
    defer func() {
        err = <-errChan
    }()
    for v := range mt.Walk(errChan) {
        if blah {
            break
        }
    }
    return
}
Next-style methods just pass panics through, and allow you to handle errors either by having a `func Next() (value, error)` or with this pattern, which moves the pesky error handling outside:
i := NewIterator()
for i.Next() {
    item := i.Item()
    ...
}
err := i.Error()
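Filling in the sketch above, one plausible shape for such an iterator (everything here, including `intIterator`, is illustrative):

```go
package main

import "fmt"

// intIterator is a made-up Next/Item/Error iterator over a slice.
type intIterator struct {
	items []int
	pos   int
	err   error
}

func NewIterator(items []int) *intIterator {
	return &intIterator{items: items, pos: -1}
}

// Next advances the iterator and reports whether an item is available.
func (it *intIterator) Next() bool {
	if it.err != nil || it.pos+1 >= len(it.items) {
		return false
	}
	it.pos++
	return true
}

// Item returns the current element; only valid after Next returns true.
func (it *intIterator) Item() int { return it.items[it.pos] }

// Error reports any failure that stopped the iteration.
func (it *intIterator) Error() error { return it.err }

func main() {
	i := NewIterator([]int{10, 20, 30})
	for i.Next() {
		fmt.Println(i.Item())
	}
	if err := i.Error(); err != nil {
		fmt.Println("iteration failed:", err)
	}
}
```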
First, any panic that happens inside either your code or the generator will bubble through.
Second, if you return from your loop body, you will have to provide your own error (the compiler will remind you about your function signature, if in doubt). You can return early if the iterator can be stopped and GCed out (i.e. it doesn't handle goroutines or external resources), otherwise you'd have to call a cleanup as with channels.
The rule of thumb with Go should be that you don't have to do things just because they use some syntactic sugar. After a while you start to think about beauty in terms of properties not about calligraphy.
However, I do see this as a weak point of the language, which hopefully can be solved by education; after all, Go is so simple to learn that you might be tempted to make it look even simpler. But the fact that the language has (almost) no magic means that you can actually understand what some code does, which imho outweighs the occasional syntactic heaviness or having to learn a few patterns.
I'm not suggesting these are the best patterns, or even 'good' patterns. I am simply drawing a parallel between them and pythonic iterators. When it comes to errors from generator patterns, I've gone between your suggestion and chan<-struct{T;error} when channel based generator patterns are appropriate.
There's something very seductive about languages like Rust or Scala or Haskell or even C++. These languages whisper in our ears "you are brilliant and here's a blank canvas where you can design the most perfect abstraction the world has ever seen."
But, for systems programming, abstractions suck. They always, always have a cost. When abstractions break, you not only have to deal with a broken system but the broken abstraction itself too. (Anyone who has ever seen a gcc compiler error for C++ knows how this feels.)
Therein lies Go's value proposition. It does not make it possible to make things pretty (ugh, nil). It just makes it impossible (ok, really hard) to overcomplicate things. When you write Go code, you can picture what the C equivalent would look like. You want to deal with errors? Here's an if statement. Data structures? Here's a struct. Generics? Here's another if statement, put it inside your for loop.
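For what it's worth, the "another if statement" approach might look like a type switch; `describe` is a made-up helper:

```go
package main

import "fmt"

// describe dispatches on the concrete type at runtime, standing in
// for what generics would express statically.
func describe(v interface{}) string {
	switch x := v.(type) {
	case int:
		return fmt.Sprintf("int: %d", x)
	case string:
		return fmt.Sprintf("string: %q", x)
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(describe(42))
	fmt.Println(describe("hello"))
}
```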
Obviously, Go is not the right choice of language for most things. When you're doing application development, you may be able to afford the cost of abstractions. But for tools that only need to do one thing and do it extremely well, it's either that or C. And I'm not going back to managing my own memory anytime soon.
> When abstractions break, you not only have to deal with a broken system but the broken abstraction itself too. (Anyone who has ever seen a gcc compiler error for C++ knows how this feels.)
Using C++ template error messages to attack generics in Rust and Haskell is pretty weak, because typeclasses were explicitly designed to avoid the problems of "ad-hoc" templates in languages like C++. Error messages are in fact what typeclasses are really good at.
As someone who's spent more time than I'd like to remember improving type class error messages, this sentence boggled my mind. Then I remembered what C++ template error messages are like and it all made sense again.
There's no question that Rust and Haskell have better tools than C++ for abstracting code. Go is just demonstrating that you can write great software without the aid (and cost) of generics.
What is Go demonstrating about programming without generics that Lisp, Java (pre-2004), Python and tons of other languages haven't already demonstrated?
Nothing, that's the point. It's not a language that its creators made because they are vain, or want to prove they are smart, or know what beauty is. It's a language that was created to get things done! Its beauty is the same beauty we see in Unix or C: simplicity.
Dear God, I don't know why so many programming languages do the same thing all over again, just for the sake of the syntax or the type system, or because the guy feels so smart using FP that he can feel intellectually superior to all human beings.
Since C, it's all the same programming paradigm; the rest is just detail. The only languages with their own way, not covered by the C paradigm, are Lisps.
Really, my dream language would use notes like a music sheet (that's a really different paradigm), or DSP with just I/O signals; that would be something new. The rest is just vanity.
And I don't want to be the lab rat of some language designer full of himself, who doesn't think of me, the poor programmer who has to maintain the code in the language he creates!
This is the Unix philosophy... there's too much noise; I'm sure these are the kinds of things that make people run away from technology...
We need to somehow find our way to simplicity, for our own sake.
I'm sure some languages are created just for the sake of syntax, others to prove a marginal point, and others because the author wants to be a tech celebrity. This is "the noise" I was referring to.
We can spot the really important languages that actually put something new on the table, with real originality and genius, and the imitators that follow.
The clear evidence for what I'm talking about: if you think of a niche in which you want to create a program, you will have 3 or more languages to choose from.
Is choice good? Not always, because once we create codebases in the languages we choose, we are trapped.
All the languages we've heard of create programs somehow, otherwise we wouldn't know about them; but some are more pragmatic than others.
My point was that the authors of the language were thinking more about the engineering aspect than about theory or research.
There are a lot of research languages out there, so in my view those were not created with a pragmatic goal in mind.
They are cool and will push the envelope, sure! But I don't understand the elitism here, trying to bash things that were created with engineering in mind,
while putting 'not working yet' research that uses poor programmers as lab rats on a pedestal, at the same level as something that works because it was designed in a more conservative manner.
Is there demonstrably great software written in Go yet? I've seen some people using it but I've yet to see a "killer app" (the way e.g. MLDonkey convinced me I should take OCaml seriously, or Pandoc convinced me I should take Haskell seriously).
> But, for systems programming, abstractions suck. They always, always have a cost.
> Generics? Here's another if statement, put it inside your for loop.
If you care about speed (and many systems programmers do), this is exactly the opposite of what you want to do. Unlike your proposal of putting potentially costly if-statements inside for loops, generics/templates in C++ provide zero-cost abstraction in terms of execution time. (If you think dealing with the error messages presents too high a cost in terms of developer time, switch to clang.)
If you truly care about speed you'll have different optimizations for int32 and int64.
Depending on who you are working with, the lack of generics is a blessing. Some developers can't restrain themselves and create over-complex abstractions that are used only once.
Generics generally simplify code, you know. Plus, generic code tends to make guarantees non-generic code cannot.
Picture this function:
foo :: (a -> b) -> (b -> c) -> (a -> c)
What does it do?
As you can see, this function takes 2 arguments, which appear to be functions. It also returns a function. The argument and return types of these functions are unknown, so you can't manipulate them; you can just pass them around directly. This puts really tight constraints on your code. So, assuming nothing fancy happens, there is only one correct body of code for this type signature:
foo f g = \x -> g (f x)
In other words, function composition.
---
Generic functions have more guarantees than non-generic functions. Therefore, you are more likely to know what a generic function is actually doing.
Parametric polymorphism does not simplify code, it obfuscates it as now you have to see _what_ is calling it. A better form IMO is restricting the types allowed as it: 1. makes understanding much easier, 2. Doesn't create bloat in the form of n copies for visible symbols.
> Parametric polymorphism does not simplify code, it obfuscates it as now you have to see _what_ is calling it.
This is backward. A function, polymorphic or otherwise, does not influence code that does not call it. Modulo far-reaching side effects of course.
It's when you look at the call site that you have to figure out what this strange `fold` function could possibly be about.
I'll grant that polymorphic functions are often more abstract than monomorphic ones. But they are simpler. They are also better at separating concerns. Take `map` and `filter` for instance. They capture common iteration patterns, so you don't have to entangle that pattern with the actual logic of your loop. Without parametric polymorphism, you could not write them (more precisely, you would have to repeat yourself over and over).
> A better form IMO is restricting the types allowed as it: 1. makes understanding much easier
That's just false. If you restrict the types a function can operate on, you allow the function to do more things to its data. The more you know about its input, the less you know about the function. With parametric polymorphism, you are actually hiding information from the function, preventing it from making whole classes of mistakes. Free tests!
Parametric polymorphism makes functions that use it simpler (as in, less moving parts). How could that possibly be harder to understand? Please give a concrete example, I don't understand where you're coming from right now.
> 2. Doesn't create bloat in the form of n copies for visible symbols.
That's an implementation detail, and mostly false anyway. Not every language is C++. Most languages that make use of parametric polymorphism don't duplicate code.
> A function, polymorphic or otherwise, does not influence code that does not call it.
Obfuscate does not mean influence, it obfuscates it you now have to follow the rabbit hole to where the type information is (language detail but it is how almost all parametric polymorphism is) which makes reasoning about it a pain.
> That's an implementation detail, and mostly false anyway.
No it's not 'unbound' parametric polymorphism in a compiled language has to produce symbols for any visible (not internal) function as there would be no way to know what might get called.
> Most languages that make use of parametric polymorphism don't duplicate code.
Yes any compiled language does how on earth would you call a symbol that took a bool vs a size_t (I recommend you look at how ELF works). On somewhat related note a sufficiently smart compiler could 'deduplicate' common parts of the slow path within a generic function and create calls but that's about all it could do for deduplication without performance hits.
You need practice with an ML derivative. Fetch a Haskell tutorial, then go write a little project of your choice that requires a few dozen lines.
> Obfuscate does not mean influence, it obfuscates it you now have to—
Wow, slow down. And give me an example, or I won't know what you mean.
> No it's not 'unbound' parametric polymorphism in a compiled language has to produce symbols for any visible (not internal) function as there would be no way to know what might get called.
Your lack of punctuation is hard to parse.
Anyway, it doesn't work like that. C++ for instance doesn't instantiate the polymorphic function for every possible type. Actually, it tries to compile monomorphic code first and only specializes polymorphic stuff as needed. This is why you need to actually use template code before the compiler can check it properly. (Notice how some error messages only surface when you use template code?)
Let me give you an example (untested code):
template<typename T>
T sum(const vector<T>& vec)
{
    T total = vec[0];                       // start with the first element...
    for (size_t i = 1; i < vec.size(); i++) // ...so the loop starts at 1
        total = total + vec[i];
    return total;
}
Why the generic code? Because I'm likely to perform sums on integers, floating point numbers, complex numbers and other fancy stuff. I fail to see how this approach obfuscates anything, since it lets me write less code.
Now my C++ compiler will not compile this function for integers, floats, and every user-defined class I have in my program. If my program only uses it on vectors of integers, it will only instantiate the integer version, even if I have floats in my program.
> Yes any compiled language does how on earth would you call a symbol that took a bool vs a size_t (I recommend you look at how ELF works).
Muhaha, you foolish mortal. Let me tell you how this works in OCaml.
Under the hood, OCaml data are one of two things: an integer, or a pointer to heap data. The compiler can distinguish them thanks to the least significant bit: 1 when it is an integer, 0 when it is a pointer.
This is possible because in most machines pointers are generally aligned to word boundaries, and words are almost always 16 bits or more. Integers on the other hand have one less bit. On a 32 bit machine for instance, OCaml integers fit in the 31 most significant bits. More precisely, when the program sees a 32 bit word whose value is 43, it knows that it's an integer, and that the underlying integer is 21 (meaning, 43>>1).
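That tagging scheme can be illustrated in a few lines (here in Go rather than OCaml, purely as an arithmetic demonstration):

```go
package main

import "fmt"

// tag stores an integer n as n<<1|1, so the low bit marks
// "immediate integer" versus "heap pointer" (low bit 0).
func tag(n int) uint    { return uint(n)<<1 | 1 }
func untag(w uint) int  { return int(w >> 1) }
func isInt(w uint) bool { return w&1 == 1 }

func main() {
	w := tag(21)
	fmt.Println(w, isInt(w), untag(w)) // 43 true 21
}
```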
You will note this suspiciously looks like a dynamic language implementation of runtime tags. It is a bit cleverer than that. First, the code is statically checked, so it never performs any runtime check with respect to this tag. The garbage collector on the other hand knows little about the program, and needs a way to distinguish raw data from heap pointers to do its job.
Now polymorphic code. In OCaml, a polymorphic function knows nothing about its polymorphic arguments. This is important, because it means the code inside won't ever inspect or modify the values at runtime. Take this example:
let app f x = f x (* app: (a -> b) -> a -> b *)
`app` is function application reified into a function. Yes it's silly in most cases. Bear with me. Look at its type. It accepts two arguments (of type a->b and a respectively), and returns a value (of type b). As you may have noticed, we have no frigging clue what those `a` and `b` mean. That's what it means to be polymorphic.
Now let's call the function on actual arguments.
app (fun x -> x + 1) 42 (* the result is 43 *)
Okay, so the first argument is a function from integers to integers, and the second argument is… 42 (an integer). And poof magic, it works.
Under the hood it's not complicated. `app` knows that its first argument is a function, and it knows that the type of its second argument is compatible with that function. Since the first argument is a function, at runtime, it must be represented by a pointer to the heap. More specifically, it will point to a closure on the heap. We don't know much about this closure:
+---+-------+
| f | Data… |
+---+-------+
We don't know anything about that `Data` stuff, but we do know that `f` is a pointer to code that will accept at least one argument.
Then there is 42. In the CPU it will be represented as 85 (42<<1 + 1). But it doesn't matter. `app` doesn't know if it's an integer or a pointer to a heap: from where it stands that word is just an opaque blob of data. The only safe thing it can do with it is copy it around. (And the static type checker ensures it does no more than that.)
So… `app` has 3 things: a pointer to a closure, a pointer to a piece of code, and an opaque blob of data (which happens to be an integer, but it doesn't know that). What it must do is clear:
- Push the opaque blob of data onto the stack.
- Push the pointer to the closure onto the stack.
- Call f.
And voilà, we have polymorphic code at the assembly language level. By carefully not inspecting the data, it works on every kind of data. No need for de-duplication.
Still, we don't have our result. We just called `f`, which has 2 arguments to contend with: its closure, and its "real" argument: 42. Now as you can see in the source code, `f` is not polymorphic at all. It works on integers. So it knows about its argument. Actually, it knows two things: the `Data` part of the closure is empty, and its argument is an integer. So it just adds 1 to 42 (possibly using some clever CPU arithmetic involving the LEA instruction), pops 2 elements off the stack, and pushes its result (43, which we represent as 87).
Now we're back to our polymorphic `app`, which has this 87 blob of opaque data at the top of the stack. Well, it just returns it to its own caller, who hopefully will know how to handle that data.
---
As I have just illustrated, there is no need for duplication in the first place. Polymorphic code in OCaml generates polymorphic code at the assembly language level. And this was a naive compilation scheme. "Dumb" turns out to be sufficiently smart. De-duplication out of the box if you will.
And about that "slow path" (implying a fast path somewhere) that is typical of JIT compilation, we don't have that shit in statically typed functional languages. The "slow path" is already fast, since it doesn't perform any test at runtime!
> If you truly care about speed you'll have different optimizations for int32 and int64.
This is exactly why c++ allows template specialization, and if you don't care for hand-optimizing, you can get both implementations almost for free.
> Depending on who you are working with, the lack of generics is a blessing. Some developers can't restrain themselves and create over-complex abstractions that are used only once.
I can't comment on the competency of your coworkers, but I certainly see how Go could be useful in situations without the kind of performance-constraints which demand a language like c++.
Which in C++ you can easily do, as you probably know. All you need to do is have an if statement that compares the type parameter of your template to int32 and int64. The good thing is that this kind of optimization is transparent to the user if the code works. Even Haskell has something similar, with the SPECIALIZE pragma, although in this case you have to trust the compiler to come up with the optimizations.
Exactly! Go makes the cost of generic code explicitly visible.
Generics encourage over-generalizing behavior that runs counter to writing highly performant code. If you care about speed, you don't spend time making your code generic. You optimize closely to your use case.
>Go makes the cost of generic code explicitly visible. Generics encourage over-generalizing behavior that runs counter to writing highly performant code.
I think you may be missing some info regarding generics in Rust and Haskell. As I mentioned in the article, there is zero runtime overhead for generic programming in Rust and Haskell. Zip. Zilch. Nada. That's why their constraint-based static generics system is awesome.
Haskell's generic programming constructs do not have zero runtime overhead (at least on GHC, the dominant compiler). Typeclass-overloaded functions take an extra argument at runtime, in which the class "methods" are looked up. In particular, the typeclass-based definition of add3 turns into this Core code, which has an extra "$dNum_az0" arg to carry Num methods:
ghci>let add3 a b c = a + b + c
==================== Simplified expression ====================
GHC.Base.returnIO
(GHC.Types.:
((\ (@ a_ayZ)
($dNum_az0 :: GHC.Num.Num a_ayZ)
(a_ayG :: a_ayZ)
(b_ayH :: a_ayZ)
(c_ayI :: a_ayZ) ->
GHC.Num.+ $dNum_az0 (GHC.Num.+ $dNum_az0 a_ayG b_ayH) c_ayI)
`cast` ...)
(GHC.Types.[]))
Compare this to the Int-specialized add3, which does not have to be passed the extra $dNum_az0 argument:
ghci>let add3 a b c = a + b + c; add3 :: Int -> Int -> Int -> Int
==================== Simplified expression ====================
GHC.Base.returnIO
(GHC.Types.:
((\ (a_azj :: GHC.Types.Int)
(b_azk :: GHC.Types.Int)
(c_azl :: GHC.Types.Int) ->
GHC.Num.+
GHC.Num.$fNumInt (GHC.Num.+ GHC.Num.$fNumInt a_azj b_azk) c_azl)
`cast` ...)
(GHC.Types.[]))
Now, am I saying that the typeclass method isn't fast, or that GHC can't then optimize that Num dictionary away via specialization or inlining? No, I am not saying that. But it certainly doesn't always do that, resulting in a performance hit at runtime. More info @ http://www.haskell.org/haskellwiki/Performance/Overloading
Please correct me if I am wrong about boxing in Rust, but doesn't boxing produce some overhead, by creating extra heap allocations (and therefore possible memory fragmentation and almost certainly poor cache locality), as well as pointer dereferences?
I really hate it in Java when I need an array of bytes but don't know the size in advance, or the size changes, and I have to incur all the cost of boxing those bytes up. On 64-bit systems (most of them), pointers are 64 bits which is at least twice as large as the most common things you put in a list (int, float, byte).
> Lack of generics is part of the reason why…the Go compiler is faster than, say, Rust.
The speed of the Rust compiler has little to do with generics and everything to do with LLVM and its optimizations and code generation. (Run with -Z time-passes if you don't believe me.)
I don't know if it has to do with generics, but you can't blame it on LLVM.
/usr/src/rust/src/libsyntax % time /usr/src/rust/x/x86_64-apple-darwin/stage2/bin/rustc lib.rs -o /tmp/x.dylib --crate-type dylib -C prefer-dynamic -Z time-passes
[most output removed...]
time: 6.186 s type checking
time: 5.084 s translation
time: 7.221 s LLVM passes
/usr/src/rust/x/x86_64-apple-darwin/stage2/bin/rustc lib.rs -o /tmp/x.dylib 22.44s user 0.76s system 99% cpu 23.301 total
First of all: no more than 1/3 to 1/2 of the time is spent in LLVM in this particular example. Okay, that's cheating because there is no -O, as I'm guessing your statement supposed, but 22 seconds is already a huge amount of time to compile 30,000 lines of code, so unoptimized builds are relevant to claims that rustc is slow. The other points apply regardless of optimization setting.
Second: rustc will take this long to recompile libsyntax every time anything in it changes. If this were written in C, most changes would only require recompiling one source file, even though, again, header files are not treated well (something that Rust does not need to replicate). In practice this means that the typical latency between changing something and seeing the output in a C/C++ program is an order of magnitude lower.
Third: The same separation that makes incremental compilation work in C/C++ allows parallelism (make -j) in full builds. rustc uses only one core per crate. Again, headers reduce the gains but Rust doesn't have that problem.
Fourth: If we compare to C rather than C++, we're off by sometimes an order of magnitude regardless of parallelism. Here is some random C program (Apple as), a total of 26929 lines, compiling in 0.90 seconds:
clang -o foo -I ../include -I ../include/gnu -I. -DNeXT_MOD -DI386 -Di486 0.73s user 0.16s system 98% cpu 0.904 total
(With optimizations it is 2.426 seconds.)
Or libpng, 32433 lines in 0.83 seconds:
clang -o png *.c -I. -lz 0.70s user 0.12s system 98% cpu 0.831 total
With the Linux kernel, compiled without -j and at -O1 using GCC (both of which make it slower), it's not an order of magnitude off, but it's still significantly faster than Rust:
make ARCH=arm zImage 540.25s user 73.71s system 96% cpu 10:33.81 total
It compiled 1572713 lines of code; normalizing to 30,000 gives us about 12 seconds.
On the Rust side, linking libstd, about the same size, takes 6 seconds, but librustc, three times the size, took 216 seconds (10 times as long as libsyntax) to get to linking. So it varies, but I guess libsyntax is representative.
I do not know enough to have a definitive opinion of why this difference exists, but judging from the difference between C and C++, I wouldn't be surprised if it were related to Rust's complexity, which includes generics.
> First of all: no more than 1/3 to 1/2 of the time is spent in LLVM in this particular example.
"translation" is mostly LLVM (in particular, allocation of LLVM data structures).
> I do not know enough to have a definitive opinion of why this difference exists, but judging from the difference between C and C++, I wouldn't be surprised if it were related to Rust's complexity, which includes generics.
If you look at the amount of code in a typical large Rust program that is related to generic instantiations, it's pretty minuscule (less than 10%).
Typechecking is known to be slow because the way we do method lookups is not well optimized. I don't believe that this is a fundamental issue; it's more that nobody has gotten around to improving it yet.
I don't think the rust compiler has been particularly extensively optimised at this point. There's no blindingly obvious hotspots (last I profiled the biggest single time sink was hashmap lookups in type checking), but there's not been any real effort in reducing compile time yet, as far as I know.
Based on this logic I assume you code in nothing but machine code? After all even assembler is an abstraction and therefore must have a cost. Heaven forbid you should do something as extravagant as use C for something, that level of abstraction (all those long jumps and stack manipulations, oh my) must positively destroy performance. Do you also happen to program using the gentle flapping of butterfly wings to disturb air currents and redirect cosmic rays?
The Rust compiler itself is actually really quick. The LLVM optimization passes and code generation are where most of the time is spent (this is after monomorphisation of generics). The Go compiler does very few optimizations compared to LLVM, which leads to faster build times but slower code.
Last I checked compilation time wasn't really a thing a ton of folks worry about (myself included). Faster machines and reasonably better compilers have mostly solved this problem. I'd take zero-overhead generics over a slightly faster compiler any day of the week.
(incremental) compilation needs to be fast enough. A too-slow build cycle can really kill productivity, both in trying out new ideas for code and in debugging. The build times of the rust compiler are pretty close to the limit of what I'll tolerate, and I would really love to have compilation speeds similar to go.
Speak for yourself - slow compilation drives me crazy, as it increases the latency between making a change and seeing the result. Not that I believe that generics require slow compilation.
This is incorrect. Generics in C++ are zero-cost, since they are specialized at compile time. On the other hand, if you want to write generic code in Go you have to use empty-interface (`interface{}`) values everywhere. That means values have to be tagged, those tags checked at runtime, additional pointers everywhere, bad memory layout, etc. So generic code in Go is significantly slower than in C++.
It depends on how you quantify cost. There isn't any performance cost, yes (which is what 'zero-cost abstractions' usually means in C++), but there is a) an increase in code size, and b) an increase in complexity/difficulty of understanding of implementation (and to some extent use). These may be good tradeoffs to make (in many of the areas where C++ is used they make sense), but I think 'zero-cost' is a disingenuous way to put it.
That's the thing. It's easier to hack something together with casts that looks like it's okay, and not understand that you're doing something wrong.
With generics, the novice finds cases where the compiler catches them doing something wrong, so they rewrite the code using casts in a way that looks right but is subtly wrong.
The end result is code that appears correct to the novice, the novice walking away with the feeling that generics are too complicated, and a mysterious corner-case bug that bites off someone's arm once every five years.
The nature of failing to understand is that the person who fails to understand often fails to understand that they misunderstand, or often misattribute their misunderstanding. The tool gets blamed for getting in their way of writing "good" code. On the other hand, there are plenty of tools that give perfectly sensible error messages to anyone with a PhD in type theory, but a second year university student sees "Attempt to cast non-monoid endofunctor to monad. Please uninstall compiler and shave off neck beard."
The implementers of MLton, an ML compiler that does C++ style monomorphization, found that after optimization the code size is actually smaller with monomorphization. That's because the specialized code is simpler and can be further optimized. So even if you have multiple specialized copies, the total is still smaller. See here: http://mlton.org/Performance
> Could you clarify what you mean by "systems programming"? To me, that means working with embedded systems, which Go is certainly not appropriate for.
The blunt but approximately correct version is that embedded means that you're running on hardware that isn't powerful enough to run a Linux kernel.
Systems programming just means you're working below the application layer. So if you take your laptop and write a device driver, or work on filesystem or networking code, you're doing systems programming without doing embedded.
No, the point of the parent poster was that "systems" is not the same as "embedded", and that while go may not work on the latter it can work on the former.
Systems does not necessarily imply embedded. I work in systems and, as far as I am aware, the term simply means software which primarily services the hardware (as opposed to the user). Drivers, HW interfaces, anything which has to make assumptions about the underlying hardware it is running on/servicing.
> Yes there are GCs for C, but is anyone successfully doing "systems programming" (whatever that may be) in C with GCs?
Actually, yes. But it's hackish (it relies on some pretty complex macros) and requires you to adopt certain conventions. Still, it's doable and a whole lot safer than managing your memory directly, in terms of leaks and use-after-free. The real cost to me is that macro magic, which should not be required, but it's the only thing I could think of to make this work. To give you an idea of just how ugly this is: I re-defined 'return'. Any C hacker will be able to deduce the rest from that one hint ;)
On another note, I felt - and feel - that this was not the proper solution but the various policy choices made this pretty much the only way in which it could be done. And it works.
In three letters: NIH. Management decision was that all IP had to be 100% owned by the company and had to be in 'C', in spite of an enormous amount of friction between C and the project as well as a bunch of work by others that could have been leveraged if we had decided to use code from other contributors. I got called in long after these decisions were made and it was very clear they weren't going to budge on those. There is a lot more to this story but I'm not at liberty to tell. Let's just say I learned a lot.
So you had to write your own compiler, operating system, and runtime libraries too?
-- I know, you didn't think it made any sense either. I'm just pointing out that a line has to be drawn somewhere; where it gets drawn is actually arbitrary.
Yeah, I've had to deal with ridiculous mandates from on high too, though none anywhere near that onerous. In my previous job, we were writing a compiler. It was mandated to be in C++ -- the first mistake -- and we had to use smart pointers instead of GC -- also a mistake. But the completely idiotic thing was that we were not allowed to declare any exception classes. The VP of Engineering -- a very smart and experienced but very arrogant guy -- had seen exception hierarchies get out of control before and decided the solution was to ban them.
But that's on a pretty small scale compared to what you're talking about.
Reminds me of my last^2 job: we were using Scala and the "technical architect" made two decisions that were mildly wrong in isolation but interacted rather badly: we'd make heavy use of monads for core functionality, and we wouldn't use scalaz. So I spent a while reimplementing parts of scalaz, and learned quite a lot (though I doubt I was adding much business value while doing so).
No, no need to apologize. It's one of the strangest assignments I've ever had and there were a few twists to the whole story that would make for a good book.
That's true. It's not just GC, actually. Slices and goroutines don't have direct analogues in C either. But it is fairly easy to reason about the runtime complexity of these conveniences.
But like I said, if I didn't care about GC or concurrency, I'd be writing C.
I don't agree with the GP, but if Go were very similar to C and the only big difference were that C has no GC, it would be pretty easy to picture what the C equivalent of Go code would be: exactly the same, but with calls to `free()` at the end of some functions (or preempted between instructions at unpredictable places).
Don't make GCs a bigger deal than they are. They are a tool to remove the need to call `free()` at the right time, with the downside that you don't get to control what the GC thinks is the right time.
There's also the overhead of the mark phase, which has to work out dynamically what could be worked out statically in a system with manual memory management. That's where much of the overhead of GC comes from.
I'll add to that that the overhead is not proportional to the amount of garbage your application generates; it's proportional to the number of data blocks your application has allocated. Garbage collection is also often performed by suspending the application (stop the world) at unpredictable moments and with unpredictable duration. Garbage collection can thus be a serious problem for some types of applications. This is why the GC should be optional.
> They are a tool to remove the need to call `free()` at the right time, with the downside that you don't get to control what the GC thinks is a right time instead.
That's not actually true. They also allow you to do things that you otherwise couldn't. Try implementing persistent [1] maps or sets without a GC.
Sort of. But you can't statically decide it. In fact, really the only way to do it is with a GC (or ref. counting, etc.). You can't know until runtime when a node will need to be freed.
And my point still stands. The GC allows you to do things you otherwise couldn't.
I'm not writing the following to make people pick another language - if Go is suitable for your project in terms of features, runtime, tools and community, then by all means use Go. It's a fairly decent platform to target and it can further evolve to meet more stringent needs.
But if we are talking about the cost of abstractions, the biggest elephant in the room is that Go's GC is NOT optional, which makes it unsuitable for ... (1) systems programming and (2) real-time systems.
C++ and Rust do not suffer from this. And Go is not even suitable for soft real-time systems, because for that you need a GC that never stops the world - right now Go is even less suitable than Java in this regard, because at least for Java you've got the pauseless GC from Azul Systems.
> "But, for systems programming, abstractions suck. They always, always have a cost."
That's a logical fallacy, because if all abstractions suck, then why aren't we doing "systems programming" in assembly (where "systems programming" is whatever definition du jour you prefer to fit Go in)? Clearly, where the line gets drawn depends on the project, since we are always making compromises for gained productivity, no? And going back to the non-optional garbage collection that's not even suitable for soft real-time systems: it kind of makes the point about Go avoiding higher-level abstractions on purpose kind of bullshit.
> "It does not make it possible to make things pretty (ugh, nil)."
It's not about prettiness, it's about correctness, which in a language containing memory-unsafe constructs that can lead to billion-dollar bugs (e.g. Heartbleed) is a freaking huge deal. Rust is very innovative in this regard, because it's a systems programming language that solves many issues by means of its more advanced type system. Surely no type system is perfect, but even a single bug that's caught by the compiler is a bug that won't reach production.
> "It just makes it impossible (ok, really hard) to overcomplicate things."
I wish developers would stop equating "complicated" with "things I don't understand". That's not what complicated means. Here's the definition: "consisting of many interconnecting parts or elements". That Go doesn't allow certain higher-level abstractions is in itself a recipe for complications.
> "for tools that only need to do one thing and do it extremely well, it's either that or C. And I'm not going back to managing my own memory anytime soon"
The choice between C and Go, given that Go is garbage collected, is a false dichotomy.
>I wish developers would stop equating "complicated" to things "I don't understand".
The argument is a bit more evolved than that, but flawed nonetheless. The argument usually invoked for Go is that, by eschewing selected language features, it prevents developers from shooting themselves in the foot with unneeded complexity introduced by faulty abstractions.
I get the argument. I have seen my fair share of dug-out-from-hell complex projects. What the argument misses, though, is that not all abstractions are faulty. Computer science, like most science, is a game of ever more abstract reasoning. If implemented correctly, the more abstract the better. The endgame is "Computer, build me a Mars round-trip ship".
Abstractions are good, when well written. They allow us to think on a higher level. Think of them as a fixed learning cost replacing a variable development cost.
Go kind of throws out the baby with the bathwater.
> The endgame is "Computer, build me a Mars round-trip ship".
I think nobody would argue against that endgame, in the broad strokes. But I think Go can be understood as a response to [what the Go developers perceive as] overly expressive languages, languages that have overstepped our ability to responsibly abstract details, languages whose abstractions hide details that are still important and necessary to make explicit.
Go is "lower level" in that it purposefully eschews those abstractions, but I think it does that successfully, without a significant loss in expressivity, and with a more-than-commensurate gain in understandability, maintainability, performance, etc.
I agree. That is a correct reading. Where I do not agree is on the assumption that a basic feature set can be construed as a simpler language. While true for the language proper, it isn't true for real usage of language plus libraries.
Take operator overloading. It can be used to create hellish code. It can also be used to create great libraries (numpy for instance). Because of the danger of hellish code, Go makes it impossible to create numpy in Go.
In the end, while Go is simpler, a numpy in Go would be more complex[1], because the language is not expressive enough. The simplicity argument, while true for the language proper, is false for advanced usage.
[1] For example, you can't write Ma * Mb, but must remember the dot product method name.
> Go is not even suitable for soft real-time systems, because for that you need a GC that never stops the world - right now Go is even less suitable than Java in this regard, because at least for Java you've got the pauseless GC from Azul Systems.
This is an inadequate analysis. I am writing a soft real-time system in Go, and GC pause simply isn't an issue for me. Go allows one to greatly limit the reliance on GC. The GC in Go certainly places an ultimate limit on the memory footprint of any one Go process, but a whole lot of productive work can be done within such limits.
Also, my program is a rewrite of one in Clojure, so it ran on the JVM. Go is giving me better performance for my particular application. Also given that Go is just starting out in its development, I expect there to be some improvement in the future.
"But, for systems programming, abstractions suck. [...] But for tools that only need to do one thing and do it extremely well, it's either that or C."
C and Go are just two particular piles of abstractions. The tools work better for some problems, but that's not because "abstractions suck" for those problems.
> These languages whisper in our ears "you are brilliant and here's a blank canvas where you can design the most perfect abstraction the world has ever seen."
These languages have built-in abstraction tools (templates) that you can use to create your own abstractions. What I like about Go is the abstraction tools are primitive and allow for consistent & precise expression. The expression may not be as concise in certain cases, however, you can build in the mechanisms into your architecture.
> But, for systems programming, abstractions suck. They always, always have a cost. When abstractions break, you not only have to deal with a broken system but the broken abstraction itself too. (Anyone who has ever seen a gcc compiler error for C++ knows how this feels.)
That is why custom abstractions to your problem domain are important. A framework or a language with lots of features will get you started quickly by providing out-of-the-box tools that you can hang your program architecture on. However, I prefer to have a custom architecture & idioms which are appropriate to the current domain & evolution of the domain.
They claim this is a feature, not a bug. It means that range will never block or do weird stuff. Except of course when it does (bastard question for those who think they know: what does range do on a nil channel?)
Despite all these clarity claims, Go has significant pitfalls, like the nil channel above (and you will encounter nil channels). There are other things, like "what is a pointer in Go?" If your answer involves "*", I urge you to reconsider (hint: what's the difference between []int and [5]int? Is one of them a pointer? What about channels (of course I talked about nil channels)? Maps?)
But every type can be typedeffed to a pointer type of itself, like in Pascal (lots of things in Go look like Pascal), resulting in completely unpredictable reference or value semantics (or my favorite: partial reference semantics).
Does Go have generics? YES (make, range, ...). Go has something no other language has: return-type-generic function types (meaning a function's meaning changes depending on what you assign the result to, like range). Does Go have operator overloading? YES. Is Go object oriented? YES (including single inheritance). Does Go have (complicated language feature X)? Probably yes. But all of these features are only accessible to Rob Pike, who has apparently decided that nobody has any use for any kind of tree or graph data structures, matrices, complex numbers, and so on.
In practice you can catch the go team themselves in errors on the language semantics in their presentations, so I think a VERY strong case can be made that it's not at all that obvious.
But the truth is: this language, due to politics (the high position of its inventor), has 10 or so FTEs behind it, with lots of paid people contributing various small bits. Is it anything more than some guy's idea of his own favorite programming language?
The honest answer is simply: no.
The only real advantage Go has is a small yet functional and pretty complete standard library (like C++ had in the 1980s). It is an advantage that will fade, just like it has faded for every other language.
I believe Go is "popular" because of Google; it might solve a particular internal problem for them, but for the rest of the world, there are better languages.
To its credit, it makes Java look advanced, and that's no small feat!
For some applications. But once you start venturing into the realms that Rust is targeting, having user defined generic data structures is very important.
> These languages whisper in our ears "you are brilliant and here's a blank canvas where you can design the most perfect abstraction the world has ever seen."
They also tell me "now you don't have to wait for the language designers or compiler writers in order to 'implement another feature'." Not that _I_ would necessarily be this "brilliant" guy that implements these features. Most likely I will just find some third party library that does it.
The problem is when you have to maintain code with millions of lines and hundreds of abstractions.. those "features" will haunt you in your nightmares..
This is "The Curse of C++", and some of the languages mentioned in the article, while beautiful and correct at first sight, are going down the same road..
Do we use a programming language to look smart, to create correct code, or to efficiently solve problems in a maintainable and sane way?
Go is pragmatic.. there's nothing wrong with that.. but I agree that adding some features to it would not hurt either (like generics and enums) :)
To understand a language, one must know what problem it was designed to solve. It isn't always obvious, or what it initially looks like it was designed to solve, or even what the community thinks it was designed to solve.
Erlang, for instance, isn't about concurrency. It's about reliability.
Go, I think, is also not about concurrency. It's about building a language that can be sanely used by reasonably large groups of people of varying levels of skill, yet still produce fairly good software even so, without the language forcing a complexity explosion to deal with it.
Consequently, this does not appeal to a lot of relatively skilled programmers used to programming alone. It isn't my personal pick of favorite language, for instance. However, if I could push a button for free, I would convert my workplace of a couple hundred developers to it in a heartbeat, whereas I probably wouldn't actually do that with my favorite language. It is not, of course, a magical fountain of code quality, but it would give me the best tools and best foundation to clean up code bases that in all the other candidate languages I know are one or another sort of mess.
If I were starting a new startup right now and Go were even remotely appropriate, I'd use it. But in my hobby projects? Not really. Except maybe to smash out a microwebsite, it's pretty good there.
So, you know, a lot of the question is what exactly are you looking for in a language? I like Haskell, but the idea of even proposing to change a project at work to it is laughable... and this is important... nor would I expect to enjoy the result two years later if I won. The mess of code that would result from people hitting Haskell with a stick until it did what they wanted it to do would be an unstoppable torrent of ill-conceived code. On the other hand, Go would almost certainly produce much cleaner code, because that's where it really shines. Maybe it isn't "good", but it's the best choice right now in a lot of places.
(It's interesting to contrast Go's approach to this problem with the other major language to tackle this problem space, Java. Despite attacking the same problem, the approaches are significantly different, and I think Go's way better. I'd hesitate to actively predict this, but Java could definitely be feeling some heat from Go in three to six years in a way that very few languages have actually managed to provide any challenge to Java in a long time.)
I have the same line of thinking as you do.. the difference is between using something you like that works for you, and choosing something you can use in a bigger group..
The HN crowd is top smart.. things like Rust and Haskell are a breeze for people here.. but this is not the reality of the tech field.. the majority of people I know in tech can't handle the more powerful languages.. it's too much for them
In the end it's just that.. know how to choose the right tool for the job.. and don't do it with your ego..
Adaptation is really important.. and in your thoughts we can see a lot of that..
In Wonderland people may have the IQ to spend on the extra concepts and power a language may provide.. but experience is antagonistic to this dream..
The cool thing about smart people making complex things simpler is that many more people are able to follow.. it's the democratization of computing.. this is totally at odds with the elitism we can see in some tech circles.. and I'm totally against it..
Rust and Haskell are made to reduce the amount of problems you have.
I've seen newbie programmers learn Haskell as a first language in under a term. So I don't believe the marketing from the Google people that their language is worse because it is simpler.
Choosing something because marketing told you it was easier is just as silly as choosing something because of ego. Calling people egoists when they choose a tool because of reasoned arguments based on evidence is simply anti-intellectual, and rude.
"I've seen newbie programmers learn Haskell as a first language in under a term."
Even as a fan of Haskell, I have to say that if you spent one term on Haskell and one term on Go, the latter students are going to be far closer to a place where you could drop them into a real job and get real work out of them. (That said, as I sit here and look back on what "one term" really constitutes, it's not much, regardless of language.)
> Choosing something because marketing told you it was easier is just as silly as choosing something because of ego. Calling people egoists when they choose a tool because of reasoned arguments based on evidence is simply anti-intellectual, and rude.
I just don't know where in my post I said something like that.. marketing? egoists? where did I say that?
If you happen to be above average in, say, the English language.. and you know a lot of words and sentences, poems.. the thing is, if you choose to speak in the best English you know, you will communicate with fewer people.. or you can just drop to a lower level so everybody can understand..
Of course you can use the advanced English with elite people.. you can teach people the 'better English', but a lot of them won't care, because they care about other things.. and I don't blame them.. (they may be worried about math or trips)
My point was just something like that.. nothing more, nothing less
I just spent the weekend learning Go and writing a single writer multi-reader hashtable for threadsafe access. I picked it deliberately because it's against the philosophy of the language, which is to share by communicating instead of sharing data structures directly. It was painful to write:
// Do NOT reorder this, otherwise a parallel lookup could find the key but see an empty value
atomic.StoreInt32((*int32)(unsafe.Pointer(uintptr(unsafe.Pointer(slot))+4)), val)
atomic.StoreInt32((*int32)(unsafe.Pointer(slot)), key)
However, the non-volatile, non-unsafe parts of the code were an absolute joy. Testing was a joy, compiling was a joy, and benchmarking was a joy. I was impressed that it allowed me to bypass the type system completely and do nasty, nasty things in the pursuit of performance. I want a language that lets me do nasty things where I must, but that makes the other 95% of the program, and the job of compiling, testing, and maintaining that program, easy. Go excels here. Rust, C++, Haskell, and Scala will never be good at that because they're too damn complicated (although each of them makes the nasty parts a little less painful!)
The end result of my weekend's hacking? On an i7-4770K @ 3.5ghz
About 4x faster than Go's builtin map, for int32 key/value on both insert and lookup. And it allows any number of reader threads concurrent access with a writer without any synchronization or blocking, unlike Go maps. It doesn't allow zero keys, unlike go maps, and it doesn't allow deletes. Hardly apples to apples, but the performance of pure Go code is impressive nonetheless. 200 LOC, not counting tests.
> However, the non volatile, non unsafe parts of the code were an absolute joy. Testing was a joy, compiling was a joy, and benchmarking was a joy. I was impressed that it allowed me to bypass the type system completely and do nasty, nasty things in the pursuit of performance. I want a language that lets me do nasty things where I must, but that makes the other 95% of the program, and the job of compiling, testing, and maintaining that program easy. Go excels here. Rust, C++, Haskell, Scala will never be good at that
FWIW, I actually think that Rust positively excels at this sort of isolated low-level work due to explicit `unsafe` blocks. Furthermore, the type system is more expressive meaning the need for this is rarer[1].
In my experience, the rest of the language (i.e. non-`unsafe` things) works very well for maintenance and testing, also in part due to the more expressive typesystem, and things like algebraic data types with exhaustive matches by default (I've done some huge bug-free refactorings to the standard library and compiler, mostly due to the compiler automatically catching all the places that need updating).
On the other hand this comes with the cost of making the "job of compiling" more difficult: the compiler complains about more things.
Rust has `unsafe` blocks in which you're allowed to do all nasty hacks you want.
Rust actually isn't that complicated. Don't get discouraged by comparisons to Haskell — it's still a C-family language where you can play with pointers and mutable state.
To me Rust still feels like a "small" language (similar size as Go or ObjC, not even scratching complexity of C++). It's mostly just functions + structs + enums, but they're more flexible and can be combined to be more powerful than in C.
I think comparisons to Haskell are not too far off the mark. Haskell is not that complicated of a language either. You can do all sorts of complicated things with it, but the language itself is relatively simple. It just has a lot of stuff that you're likely to have never seen before (extensive use of higher-order functions, higher-kinded types, etc), and its type system produces error messages that seem obscure from the outside. Similarly Rust can have some rather obscure error messages that you're probably not going to have seen before during compilation - lifetime specifiers, errors about using things in the wrong contexts, heck, even "Bare str is not a type (what?)"
I'm much more familiar with Haskell than Rust, but having played around with Rust I think they're on a par with each other in terms of difficulty, depending on your background.
> Haskell is a research language; Rust is designed to be a practical language. It makes a lot of concessions to practicality, C-like syntax and imperative control flow being high among them.
Perhaps I wasn't clear in the post you're responding to: my point was that both Rust and Haskell are fairly "simple" programming languages which seem more complicated, because they introduce a lot of features which are likely to be new to those who are using it for the first time. I wasn't really comparing them as languages per se; that's a separate discussion.
> Common in any language, including Go.
Higher-order functions are common in most languages, but not in the way that Haskell does. Most languages use first-class functions as lists of instructions (do some stuff, and perform the steps in this argument). Haskell makes them truly first-class, such that they're positively ubiquitous: an example is currying, which is everywhere in Haskell and rare in most other languages; another example is monads, which are obviously a core part of Haskell and which require first-class functions (e.g. in >>=) to do anything useful. There are other examples.
> Rust does not have these.
I know; I was speaking about Haskell.
> This is because we want memory safety without performance tradeoffs (global concurrent garbage collection).
Right; it's a perfectly understandable thing to have, but it's not something that (to my knowledge) exists in any other mainstream language. It's an example of something in Rust which is obscure to newcomers.
> Could you elaborate?
I'd have to write some code and run the compiler to get the actual error message, but I recall getting errors about using some reference outside of a context or something. In my fuzzy recollection, it would be something like where I had written `match foo { a => b; c => d}` and I would get some error message which would be fixed by writing `let foo1 = foo; match foo1 {a => b; c => d}`. Unfortunately I don't remember the specifics, but long story short: compiling Rust code produces a lot of very strange error messages to someone unfamiliar with the language. :) In this way it's not dissimilar from Haskell.
> This is because dynamically-sized types are not yet implemented, but they will be for 1.0.
Great to know! If only there were a more helpful error message than "Bare str is not a type." :)
> In my fuzzy recollection, it would be something like where I had written `match foo { a => b; c => d}` and I would get some error message which would be fixed by writing `let foo1 = foo; match foo1 {a => b; c => d}`. Unfortunately I don't remember the specifics, but long story short: compiling Rust code produces a lot of very strange error messages to someone unfamiliar with the language.
Ah, sounds like you were using a value after the destructor on it ran, which was fixed by moving it to a separate variable (so that the destructor ran later). This kind of error is familiar to C++ programmers, so I wouldn't say it's unique to Rust, although it's a runtime error (actually, undefined behavior) in C++ and not in Rust. A good static analysis package for C++ would emit the same error that the Rust compiler did.
Yeah that's probably right. And yes for sure I'm not only talking about things which are unique to these languages, but just things for which there might be large groups of users who are not familiar with them. Similarly, users of ML (or category theoreticians) are going to be less confused by many of Haskell's idiosyncrasies. :)
I love Rust from a theoretical standpoint; it's more beautiful than Go. But Go is more practical: when it comes to getting things done with a team, it just makes more sense (maybe not for every case!)
> Rust, C++, Haskell, Scala will never be good at that because they're too damn complicated (although each of them make the nasty parts a little less painful!)
Why is Rust too complicated to allow you to do low-level hacking?
I don't think it's bad, but to me (and only to me), disappointing. I'm a long time Mozilla fan... heck, a Netscape fan really. Go was a pleasure to learn, there were no 'gotchas' initially... just a small, easy to reason about language. Rust, with its arrows, angle brackets, pattern matching, etc., seemed just too complex to fit in my brain.
I'd love to be proven wrong and try again, but the docs aren't the best. And I know Rust isn't 'released' yet. But if you are defending it, I feel like it should be at a very learnable state. Do you have plans for a 'tour', for a play by play like Go offers that can bring others up to speed?
Further (and this is really reaching while I've got you)... my favorite part is cross compilation and being able to deploy a single binary anywhere. I tried to compile something in rustc once, but when I moved it to an older box, it failed to run. Will Rust offer such stable static compilation in the future?
I'm not saying you're wrong nor will I argue about it as in the end it really comes down to taste I feel. To me, and perhaps to others, these things make a language harder to read/use, regardless of how pervasive...
While some of it can come to taste, I think having higher abstractions (as long as they don't leak) is beneficial to reading/using a language. Like, objectively.
And foreach vs for is not a leaky abstraction. Nor is pattern matching vs traditional checking and extracting of values. So their value is not getting lost on corner cases that the equivalent "based on primitives" code wouldn't have. If anything, they make the operation even more explicit.
So I think a lot of it comes down to getting familiar with them.
While something made of "first principles" might be more instantly familiar, it will be harder to reason about because of its low-levelness as the code size increases.
Case in point, assembly. It's quite a primitive language with very few constructs, so it's easy to learn to use. But an actual high level "if" or function call is much easier to grasp at once than checking 10-20 lines that implement the same thing in assembly (even if you have trained yourself to see the assembly "pattern" for a function call, etc).
To expand on this: isn't what Go does with built-in channel and parallelism primitives similar to that?
They sure didn't exist in C. But people see the value of having these higher-level abstractions.
So, why draw the line on Channels and Goroutines, and not add pattern matching in there? Just because Go already came out offering the former?
If anything, pattern matching is even more applicable to the work we do day-to-day, than channels and goroutines are.
Millions of people write programs without parallelism/concurrency every day, but nobody writes programs without pattern matching. They just do it by hand with lots of if/else and various manual extraction techniques, if their language doesn't offer it.
This week is week two of my being contracted by Mozilla to write docs full time. First up, a new tutorial. You can see my work from last week http://doc.rust-lang.org/guide.html , and my first task after I finish breakfast is to clean up https://github.com/rust-lang/rust/pull/15229 , which got some review over the weekend.
So, you're right (at least about the docs) but I'm on it.
Steve - thanks for your reply and work on the docs. Is there currently a place - or will there be one - that precisely describes build/deployment options? This really, in the end, is the most important thing to me. I've searched unsuccessfully for this wrt Rust. It seems that as it sits, items built on a newer machine won't run on an older machine due to libc library mismatches. Will this always be a potential issue? Or will I be able to generate a single binary a la Go?
Basically, as of right now, when you link statically, Rust will not build in glibc (and jemalloc, IIRC). So, you'll need to make sure that your glibc versions line up. My understanding is that glibc isn't able to be statically linked in without breaking things.
You can use `objdump -T` to see these dependencies. On my system, compiling 'Hello world,' I get symbols for glibc and gcc.
(Go gets away with this by reimplementing the world, rather than relying on glibc. The benefit is a wholly-contained binary, as you've seen. The downside is compatibility bugs, like https://code.google.com/p/go/issues/detail?id=1435)
I mean too complicated for writing, reading, and maintaining and generally just working with from day to day. It costs too much time to do the same thing.
I ask because this has not been my experience writing several hundreds of thousands of lines of Rust code, nor has this been the experience of anyone I have helped get up to speed. Moreover, I believe there is no feature in Rust that is not necessary to achieve safety without sacrificing performance.
Just a very concurrent browser engine ;) One that does everything in parallel. Basically every task that can be parallelized becomes parallel (rendering, JS execution, CSS matcher, etc.).
Also, you can: Servo (said parallel browser engine project by Mozilla) is on GitHub.
Yes, concurrent with a writer. If there is more than one writer, the writers must use a mutex, but the single writer and readers don't block each other. I updated the text to make it clear.
There are some batshit crazy "solutions" offered from Go fans, like "no biggie, just use a templating engine to generate the code for the types you want and compile it".
Not quite, given that C++ templates do higher-level type-checking than just expanding and checking.
Also, they support higher-order kinds (a template parameterized by a template). And specialization (specifying special cases). And automatic instantiation based on the static types at the call site. Etc.
It's not like you will be creating a collections library every time you write a program... yes, generics are great for code reuse, but I think some people just overreact, turning it into something that will make the language unusable, or claiming you will have to copy-and-paste just because of the lack of generics. This is far from the truth.
Reading this article and the HN comments made me realize how people want to use the same language for everything. Unfortunately, this is not possible, since each language was designed with certain use cases in mind. I, too, am guilty of wanting a language to do everything: to be fast, memory efficient, and also easy to program in.
Perhaps our ultimate quest in terms of designing languages is to design one smart enough that can be used to program toasters and clusters alike. Until then, we might as well use the right tool for the job.
> to be fast, memory efficient and also easy to program in.
I think the issue Go detractors have is that it is none of those things, nor is there any pair of those things for which there is no better alternative than Go. If you want fast and memory efficient, you could pick C++ or C. If you want something a little easier to program in, you could pick C#, which dominates Go in all three categories. If you want something even easier to program in, there are plenty of languages like Python that are more powerful than Go.
I'd make the case for Scala for almost anything (nothing so low-level that you need to avoid GC, and I guess not command-line utilities (JVM startup time), but other than that). Haskell or OCaml could probably make a case for being suitable for just about anything. There are big advantages to using a single language in terms of code reuse, deployment tooling and so on.
This was a good read. Can anyone comment on whether they find the problems outlined in the article to really be painful in day-to-day go development?
From my initial dabblings with the language, it feels like its constraints may not actually be a big deal in practice, and may even be more of a help than a hindrance in large projects. It would be nice to get some commentary from more experienced go users.
I chose between Go and Haskell for a project some time around 2012. I was a beginner to both, but came from a background of imperative languages (C, C++, Java, etc.)
Initially I felt the same as you: Go was much easier to get things done in, and I could be reasonably productive quite quickly (more so than with Haskell, which I found very difficult to learn).
However, after some time I found many of the same problems mentioned in this article. Particularly, in many cases I had to fall back to the kind of nasty unsafe code mentioned in this article (like using interface{}). Often, I felt that my code was needlessly verbose. I would frequently write code and feel that the language was preventing me from doing what I wanted directly. Ironically, this is exactly how I felt with Haskell at first (not anymore).
Ultimately, I ended up switching to Haskell, and although it was significantly harder to learn, I felt like it has a lot more flexibility, safety, and importantly lends a clarity to thinking when designing a program.
This sounds strikingly similar to my experience! Though my imperative language experience was mostly with dynamic and/or scripting languages aside from C#.
In practice, Go has caused me less frustration than any other language I've used. I feel like the author's complaints here aren't really grounded in much experience, or maybe he's trying to use the wrong tool for the job.
The author's conclusion:
· Go doesn't really do anything new.
· Go isn't well-designed from the ground up.
· Go is a regression from other modern programming languages.
is hardly sustainable. Go was production-ready in 2011 with a stable version 1.0. It has a surprisingly mature tool chain and vibrant community. Go cross-compiles from my 64-bit Mac to a 32-bit Raspberry Pi or ARM Android phone on a whim. I can deploy my app by copying a single, self-contained binary. Tell me again that Go does nothing new for us.
Go makes concurrent programming safe and easy (with a nice syntax) -- something that we frankly should have done 30-40 years ago when we first started thinking about multiprocessing. Go was invented by folks like Ken Thompson (who created Unix) and Rob Pike (who created the Plan 9 operating system and co-created UTF-8). Tell me again that there isn't good engineering behind Go.
Finally, Go attacks the needs of modern programming from a different paradigm than we have been using for the last 10-20 years. From the first paragraph of Effective Go:
> ... thinking about the problem from a Go perspective could produce a successful but quite different program. In other words, to write Go well, it's important to understand its properties and idioms.
So of course it's different than a lot of other aged languages. Go tackles newer problems in a newer way. Tell me again that Go is a regression from other programming languages.
Ok, I'll point out again that it emphasizes both concurrency and mutability, which is a match made in hell, and has a type system that's constantly subverted by null pointers and casts to interface{}, which drastically reduce safety. It has a static type system released in the 2010s that doesn't have generics, and deploying static binaries is not a new technology.
Mutability & concurrency, nils, interface casts -- these things all go against safe.
> Tell me again that there isn't good engineering behind Go.
You seem to think that a language that has baked in syntax for concurrency, or that has famous people behind it necessarily has "good engineering" behind it. I don't understand how one leads to the other.
When so many mistakes and regressions go into a language, one shouldn't care that famous names are behind it.
> Go tackles newer problems in a newer way
Go is essentially Algol 68 with baked-in concurrency syntax.
> Tell me again that Go is a regression from other programming languages
Losing null safety, sum types & pattern matching, parameteric polymorphism and type-classes, all form a huge regression in PL design from the state of the art.
You're thinking wrong. You're also proving the grandparent's point.
You're thinking in terms of "here's this set of bullet point features that I think a language has to have to be a proper, modern language." But the grandparent was asking you to consider that a different set of features might have value for some real-world problems that Go's authors had really bumped into. You reply, "Nope, couldn't have - it doesn't have my bullet point features!"
There are more things in programming than are dreamt of in your philosophy of what a programming language should be.
He said Go had something new to offer, and listed old things.
He said Go makes concurrency safe & easy, when Go emphasizes features that contradict safety and ease.
He said Go tackled problems in a newer way, when Go is really Algol 68 + coroutines.
He denied Go regressing from other languages, when it throws away decades of PL research.
In none of this did he say "Here's an alternate set of features that ...". No. He said concrete, specific, wrong things.
What you are saying is a different thing -- and I also disagree with you.
These "bullet list" features weren't invented for the lols. They were created to solve real problems. Problems that Go doesn't have an alternate solution to.
Go programs dereference null. That is a bad thing. Languages with my "bullet point features" do not dereference null.
Go programs duplicate code to handle different types. Ditto.
Go programs can (and thus do!) mutate some data after it was sent to a concurrent goroutine. Ditto.
Go programs can cast down to incorrect types. Ditto.
The "bullet point features" are important. There are alternatives, but Go doesn't sport any of them.
I do agree with the author here: Go the language does nothing new. Go the platform, on the other hand, is a really pleasant new experience when compared with other languages.
The language is a regression in features compared to what other languages can do, but that is totally understandable when you look at what Go is aimed at.
Hmm, so some people are bent out of shape because of feature regression and over the fact that Go doesn't have the newest, shiniest gadgets. However, fans keep saying that their overall experience is great. Reminds me of something else...
The newest, shiniest gadgets? Like generics? You're kidding right?
Like putting words into people's mouths as a discussion tactic? You're kidding, right? So it also lacks some not-new "old dependable" gadgets as well. So what? I don't see what point you're making. Did you write that comment just to lay an indefensible position on me?
As an exercise: Name a language feature Go doesn't have that's newer than generics.
Go packs together a lot of nice things that previously existed in other languages. It still has room for improvement though, as IMHO the language is pretty basic ATM - but exhaustive enough to cover most needs (whether they require stuff like generics or not) in a very painless way.
This. I love Go's simplicity. Coming back to Go code I wrote months ago, I can immediately understand what it does virtually every time, which required lots of discipline I didn't always have in other languages. Lots of languages have obscure corners that allow you to do really cool things that aren't obvious, but for the most part, Go doesn't have these; what you see is what's happening.
Are there things that would make Go a better language? Sure! Should the type system be improved? Yup! One thing that makes me cringe is when I open up library code and see interface{} and calls to the reflection package all over the place, but general solutions often require that in Go, and that's a problem. In practice, though, this is almost a feature: if you see that stuff in code you're reading, it's a giant red flag that this code is tricky and possibly slow, and care is needed.
As much as I like Go, in many ways Go is not what I would consider "beautiful" in an esthetic sense.
Go does, however, have a very pragmatic feel to it. The creators, in general, seem to take a very measured look at things before adding them, and are very careful to keep the compatibility promise for 1.x. The overall result feels very "engineered" (especially when using the tooling).
Go clearly isn't perfect, but yet it feels rather robust for such a young language.
I'm an experienced Go user. I'm also a lover of Haskell and Hindley-Milner type systems, and in practice these complaints are not that big of a deal. Generics may or may not get added in the future, but in practice you can go a long way with just slices and maps.
And while the Hindley-Milner type system is a wonder to behold, and I love working in languages that have it, sometimes those same languages introduce a non-trivial amount of friction to development.
Go's single best feature and the one around which almost every decision in the languages is centered is an almost total lack of developer friction. If Go has a slogan that slogan is "Frictionless Development". It's easily the simplest, least annoying, and most "get out of your way" language I've ever used.
If Go has a slogan that slogan is "Frictionless Development". It's easily the simplest, least annoying, and most "get out of your way" language I've ever used.
I suspect this is where many philosophical differences in these discussions originate. I appreciate the value of having quick and easy tools, but for production software where I care about quality, I don't want the language to get out of my way if I'm doing something silly, like treating data as if it's a different type to what it really is or assuming there is data to work with at all if a null value is a possibility. The web is plagued by security problems, so it seems odd to me that anyone would promote a new language for writing web-based software that retains obvious and entirely avoidable ways to introduce vulnerabilities.
I doubt that null pointers lead to security vulnerabilities. Panics, yes; worse performance, yes; vulnerabilities, unlikely. Null pointers are not dangling or wild pointers, which are the problematic ones.
I suppose that depends on how broadly you define a security vulnerability. Almost anything that can crash a process on a server -- either literally at OS level or figuratively by requiring something to reset itself before it can continue to do its job -- is probably a DoS attack waiting to happen. That might or might not be as dangerous as something like a remote root vulnerability, but I would argue that it is a serious security issue just the same.
(I'm sure it also goes without saying that having code that can crash with the equivalent of a null pointer dereference is still highly undesirable in a public-facing web server, even if it doesn't actually risk things like data leakage/loss.)
Interesting response... Do the multiple people downvoting not realise that even in Go a null dereference might trigger some sort of automatic reset in your server process (if the unexpected panic is only recovered by some high-level generic error handling logic) or even crash the program entirely (if you didn't have anything up the stack that recovered at all)? Or maybe not realise how these kinds of behaviours might allow denial of service attacks?
What if e.g. a programmer calls some normalize function on data before hashing it? Then if the normalize function sometimes returns null and the programmer hasn't handled this case, an attacker could use this to generate collisions.
You might want to google "null pointer dereference code execution". Go regresses language design because it allows constructs that have been proven to fail and are already fixed in other languages.
This has nothing to do with shiny features of the newest language or whether language A or B is someone's favorite. This has to do with program correctness.
Those have to do with the undefined nature of null pointer dereference in C. In Go, accessing a null pointer is guaranteed to produce a panic on every architecture.
>Go's single best feature and the one around which almost every decision in the languages is centered is an almost total lack of developer friction. If Go has a slogan that slogan is "Frictionless Development". It's easily the simplest, least annoying, and most "get out of your way" language I've ever used.
Unless you want to do generics. Or extend the language to have custom operators, for things like scientific computing. Or tons of other things.
The generics thing turns out to not really be a problem. Yes, writing 100% Abstract Data Types is unsatisfying. However, when I'm actually writing custom data structures, I generally have an interface that I want to use because I want the data structure to take advantage of a peculiarity of the data being stored.
>can anyone comment on whether they find the problems outlined in the article to really be painful in day-to-day go development?
Author here. Depends on what I'm doing. For most simple programs, the things I listed in the article don't really get in the way. So writing my web server in Go was not that bad at all. It was pretty good, in fact.
It's when I start making larger programs that I start feeling the constraints of the things I listed in the article. Having a strong, capable type system really helps me keep track of large projects.
Been writing Go at least weekly for a while now - no, none of these things are painful in daily usage - they seem like items that are painful in theory but not so much in everyday use, especially in the realm it is intended for.
I've been using Go in a large web application for a few months now. I only skimmed the article, but I didn't notice anything that's caused me problems. I quite enjoy working with it, although I like the idea of adding generics (not that I've needed them yet)/operator overloading/range extension.
I would not say that I am particularly experienced, but I will tell my point of view from a not particularly sophisticated programmer. I am interested in seeing what objections others may have to what I write.
Compared to other popular programming languages aimed at web programming, such as Python, Ruby and PHP, Go provides more type safety. Comparable to Java's, but with less code.
Go's runtime and development tools are very lightweight, which is important to me as I use multiple, often dated computers with limited memory.
It is very easy to learn, a low investment. This means that it is conceivable for a student previously only exposed to Java to get on board with your software project on short notice.
I am completely in support of Haskell and functional languages in general. There are some gaps in Go and definitely some glaring problems. But this comparison also only lists the bad. Go is good for what it was intended for, which is concurrent programming and server/web applications.
>Go is good for what it was intended for which is concurrent programming and server/web application.
I absolutely agree that Go is nice for writing web servers! However, there is no reason that it can't still be nice for that and also a well-designed language in general.
For example, Haskell has awesome concurrency features, and writing a web server in Haskell is reasonably nice (not quite as nice as in Go, IMO).
`const` is for compile-time constants where something like `let` in Rust can be used for values generated on the fly.
The latter is immensely more powerful and actually provides the described benefits like guaranteeing that data is not changing under your feet.
Edit: While `const PI = 3.1415…` is threadsafe, it's not very useful in comparison to runtime immutability :)
Yes, the guarantees provided by true immutability are not met by "const". I wrote the note loosely using the author's expectations and advantages of immutability, not the exact definition. But thank you for letting me clarify.
Also I just noticed with Go's Unicode support we can write `const π = 3.14..` if we really wanted.
Yeah, this post seems to be most about general purpose language features, and not so much about stuff that is relevant to the niche Go is trying to carve for itself.
So he wants Haskell. Haskell already exists and has all the features he wants. He should have written his blog in Haskell, but he didn't, and I know why: because a language, which throws all these features together is no longer a practical language. He only sees the benefits of features, not the cost they introduce.
>He should have written his blog in Haskell, but he didn't, and I know why: because a language, which throws all these features together is no longer a practical language.
Actually, it's because I like Go's standard library HTTP server implementation!
I'm not claiming that Haskell or Rust are magic bullets, or that Go is useless. Far from it!
Hindley-Milner inference, operators-as-functions, and certain other features, I guess I can see arguments against (although I think they're overstated). However, I don't see any reason why generics, no null pointers, pattern matching, and certain other features would be problematic, let alone not be useful, in a language like Go. I'd be curious to hear your justification of the claim that "a language, which throws all these features together is no longer a practical language."
I do think that Meijer and Griesemer agree somewhere in this interview that generics are hard to get right (co- and contravariance) and that forgoing generics is a valid design choice: http://channel9.msdn.com/Blogs/Charles/Erik-Meijer-and-Rober...
It's been a while and it might have been another interview though.
> A Good Solution: Constraint Based Generics and Parametric Polymorphism
> A Good Solution: Operators are Functions
> A Good Solution: Algebraic Types and Type-safe Failure
> A Good Solution: Pattern Matching and Compound Expressions
People have tried this approach. See languages like C++ and Scala, with hundreds of features and language specifications that run into the thousands of pages.
Go was created by the forefathers of C and Unix. They left out all of those features on purpose. Not unlike the original C or the original Unix, Go is "as simple as possible, but no simpler".
Go's feature set is not merely a subset of other languages'. It also has canonical solutions to important practical problems that most other languages do not solve out of the box:
* Testing
* Documentation
* Sharing code and specifying dependencies
* Formatting
* Cross compiling
Go's feature set is small but carefully chosen. I've found it to be productive and a joy to work with.
You seem completely ignorant of the things you're attempting to talk about. Scala doesn't have "hundreds" of features, nor is the language specification thousands of pages. It's just an outright fabrication to say so.
>Go was created by the forefathers of C and Unix.
Yeah, and it's obvious (and sad) they ignored the last twenty years of PL research and progress.
>They left out all of those features on purpose
Did they? I don't believe this is the case, as I've heard from the creators many times that they want to add generics but haven't figured out the details yet.
Are you really going to sit here and argue that static typing is important EXCEPT for when working with collection? That parametric polymorphism doesn't make things simpler?
Of the 4 you mentioned (Constraint based generics and parametric polymorphism, operators as functions, algebraic types and type-safe failures, and pattern matching/compound expression) C++ really only has 1 (operators as functions).
>with hundreds of features and language specification that run into the thousands of pages.
This describes neither Rust nor Haskell.
>Go is "as simple as possible, but no simpler".
It has mandatory heap usage, garbage collection, green threads. It's more than generous to call that "as simple as possible".
Of the 5 features you mention that Go has "canonical solutions" to (in the form of external tools), I know off the top of my head that Haskell's Cabal takes care of at least 4 of them. I'm not sure about formatting. Rust probably has similar tools, or if it doesn't, they can certainly be added without changing the language.
The newly released 'cargo': http://crates.io/ and https://github.com/rust-lang/cargo/ (alpha, but quickly improving). This will be Rust's cabal equivalent, almost certainly with support for generating documentation and cross-compiling (it already has basic support for running the tests described above).
(Well, to be precise, the compiler has the '--pretty normal' option, but it's not so good. https://github.com/pcwalton/rustfmt is the work-in-progress replacement.)
> * Cross compiling
Already supported, although it requires manually compiling Rust with the appropriate --target flag passed to ./configure, to cross-compile the standard library.
The Scala specification is two hundred and something pages, around a third the length of the Java specification. (Largely because Scala has, in some sense, fewer features than Java: Java has lots of edge cases with their own special handling, whereas Scala has a smaller number of general-purpose features. The complexity comes because it's easy to use all of them at once.)
I get what you are saying, but why conflate C++ and Scala? One is a horribly designed, complex language with macro-like templates rather than modular parametric polymorphism. The other is much better designed but, as you say, still complicated. You could have stopped at Scala without degrading into a comparison with C++, which is a universally kicked dog anyway.
It is possible to do type parameters in a way that is simple yet effective. But I can understand why it wasn't done this early in Go's lifetime, especially since Rob Pike isn't exactly enthusiastic about generics (vs. Odersky's experience with Java/Pizza).
This article presumes that everybody wants an elaborate type system. I'm not sure that is the case. I still see an elaborate type system as incidental complexity. I may be in the minority and I may not have worked in domains which benefit from such modeling. Maybe I'm stuck in a blub paradigm.
Here's my reasoning. I'm a fan of human language & domain ontologies. Word definitions are quite flexible & do not have an elaborate type system. I don't feel the need to have a provably correct logical system to have a useful conceptual tool (i.e. analogies). I enjoy ambiguity. Ambiguity can lead to paradoxes, which in turn leads to exploration & novelty.
Strongly typed systems, by default, give me the impression that the domain ontology is figured out on a highly precise level. That is never the case. You can almost always go deeper. Domain language precision is tough to model & express.
I prefer data structures to drive operations. I suppose that a schema is often useful, however I don't feel like I need the programming language to enforce the schema.
I also like to evolve the design, using tests. Tests are really examples to exercise the program's API with expected I/O.
People often equate an evolved programming language/paradigm with being better. In the case of Javascript, they point out that it was created in a few days as evidence that it's "bad". The thing about an evolved language/paradigm is that it has evolved down a certain path. That often means restricting the freedom of the programmer to evolve the program down another path. I'm not claiming one way is better than the other. However, I do see tradeoffs to both approaches. I personally prefer a more flexible language. It can be evolved, as long as the evolution does not restrict my freedom to evolve the program.
I recommend Dijkstra's paper: "On the foolishness of "natural language programming" [1].
It explains how natural language is a very poor way to express programs, and how it held back science and progress for many centuries.
Strong types describe your code precisely. If your code doesn't match the model yet, that's fine.
But your code has invariants in it. Things like: "this variable can never be nil, that variable can". These invariants can relatively easily be encoded as static types. Not doing so is just throwing away safety and documentation for virtually no benefit.
Other invariants are similar. Instead of documenting them or keeping them in your head, you let your compiler worry about them. And then when you break those invariants, the compiler is your friend! He helps you go and find all the other pieces of code that relied on the broken invariant, so you can fix them.
You say you prefer a more flexible language, but Go is extremely inflexible. It has a very primitive set of tools to do everything, clumsily. A language like Haskell, for example, lets you have Go-like coroutines and channels. But it also lets you program with software transactional memory, or use other parallelism constructs.
Also, the inability to specify invariants of your program isn't flexibility. The ability to specify them or opt out of specifying them, which is what strong type systems give you, is.
> It explains how natural language is a very poor way to express programs, and how it held back science and progress for many centuries.
I'm not interested in using natural language to implement the software (write code). I'm interested in using natural/technical language to create an ontology for the architecture. This is where things get gray.
When I think of an architecture, I think of something that evolves over time. I think of architecture as a tool to facilitate communication between programmers, designers, project management, domain experts, & users.
This ontology evolves over time as the software, understanding of the domain, & the domain itself changes.
I see this fuzziness as an accurate model of the conceptual domain, which is ultimately based on the understanding of multiple humans. This understanding is fuzzy and heavily dependent on context. And yet, the ontology attempts to corral this fuzziness into more strongly defined concepts, which map to the implementation. The implementation should not be fuzzy at all.
> Other invariants are similar. Instead of documenting them or keeping them in your head, you let your compiler worry about them.
I usually use guard clauses to protect against nulls. I rely on my tests & production monitoring systems to prove that the implementation is incorrect. When there is such proof, I correct the system. In environments that support rapid feedback & deployment, like the web, this works well. In environments that don't support rapid deployment, iteration, and where lives are at stake, not so well.
> But Go is extremely inflexible. It has a very primitive set of tools to do everything, clumsily. A language like Haskell, for example, lets you have Go-like coroutines and channels. But it also lets you program with software transactional memory, or use other parallelism constructs.
That sounds good to me.
> Also, the inability to specify invariants of your program isn't flexibility.
That mostly sounds good. I would want invariants to be optional, which sounds like is the case.
> I usually use guard clauses to protect against nulls.
An interesting (to me) insight was that in a language with a flexible type system, the types are effectively just a set of assertions at the start and return of every function, that say that the inputs and outputs have certain properties. With a compact syntax and zero runtime overhead, which is nice.
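The parent's "types as assertions" insight can be sketched in Go. The two `head` functions below are invented for illustration: the dynamic-style version re-checks its assumptions at runtime on every call, while the typed version pushes the same assertion into the signature, where the compiler checks it once (though Go's type system can't express "non-empty", so that part still needs a runtime check either way).

```go
package main

import "fmt"

// Dynamic style: the "type" is a runtime assertion repeated on every call.
func headDynamic(xs interface{}) interface{} {
	s, ok := xs.([]string)
	if !ok || len(s) == 0 {
		panic("headDynamic: want a non-empty []string")
	}
	return s[0]
}

// Typed style: the signature itself is the assertion, checked at compile
// time with zero runtime overhead (emptiness is still unchecked here).
func headTyped(xs []string) string {
	return xs[0]
}

func main() {
	fmt.Println(headDynamic([]string{"a", "b"})) // prints "a"
	fmt.Println(headTyped([]string{"a", "b"}))   // prints "a"
}
```

Passing an `int` to `headTyped` fails at compile time; passing it to `headDynamic` only fails when that code path actually runs.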
I do think that some strongly typed languages make it too difficult to step outside the type system; in Scala I have to do something like (x.asInstanceOf[{def foo(): String}]).foo() whereas in Python I can just write x.foo(). But once I started seeing the type system not as a fixed piece of the language but as a framework for encoding my own assertions, it became useful enough that I can't stand to live without it.
It's much more than just at the start and return of every function. But OTOH, it's much less than assertions, since they're restricted to a subset that can be proven (usually automatically).
Note the Scala verbosity here is Scala-specific. In HM-style languages, type inference works much better and you don't have to do such things. You might still have to explicitly "lift" a value from one type to another (e.g: Wrap a value in "Just", or use "lift"), but that's a much more minor price to pay.
> I do think that some strongly typed languages make it too difficult to step outside the type system; in Scala I have to do something like (x.asInstanceOf[{def foo(): String}]).foo() whereas in Python I can just write x.foo()
Have to make an explicit cast with a structural type? Surely you can do better, like, say, a trait.
> I'm talking about casting, calling a method that the type system doesn't know is present.
I think a larger example is necessary to see how you ended up in such a situation, but I suppose it's off-topic...
> This ontology evolves over time as the software, understanding of the domain, & the domain itself changes.
So what does this have to do with Go or type systems?
> I usually use guard clauses to protect against nulls. I rely on my tests & production monitoring systems to prove that the implementation is incorrect.
And that's a bad thing. There are better tools for this job. What's the downside of encoding nullability into the types?
> That mostly sounds good. I would want invariants to be optional, which sounds like is the case.
Every good type system lets you "opt out".
Therefore, it is a bit silly to look at dynamic typing, where you cannot "opt in", as more flexible.
> This ontology evolves over time as the software, understanding of the domain, & the domain itself changes.
> So what does this have to do with Go or type systems?
I've found type systems that don't utilize duck typing to be restrictive & a cause of incidental complexity when evolving the design. I don't really care if something is a categorization of something else. I usually (> 98% of the time) only care if that something adheres to an interface.
I don't like to label people in life either :-)
> I usually use guard clauses to protect against nulls. I rely on my tests & production monitoring systems to prove that the implementation is incorrect.
>And that's a bad thing. There are better tools for this job. What's the downside of encoding nullability into the types?
For the web, it's not that bad. The design is constantly evolving, so most of the development period is spent completing something that is not finished.
There's no downside in encoding nullability, unless extra syntax & incidental complexity is added. It's not a big problem for me so I'd rather not have to do extra work for this feature.
> That mostly sounds good. I would want invariants to be optional, which sounds like is the case.
> Every good type system lets you "opt out".
Explicitly or implicitly? I'd rather be opted out by default and opt in when I want to. Again, I don't want to do extra work or have incidental complexity.
> I usually (> 98% of the time) only care if that something adheres to an interface.
Well, whether something is "nil" or not definitely matters to whether it adheres to the interface, doesn't it?
> There's no downside in encoding nullability, unless extra syntax & incidental complexity is added
You just need sum types and pattern-matching, which are a straight-forward addition to the language -- and very fundamental to computation, so not quite "incidental complexity".
> I'd rather be opted out by default and opt in when I want to
Then why do you use Go and not a fully dynamically typed language? In Go you opt out explicitly with "interface {}".
Why would you rather opt in? Opt out makes sense because types are so cheap 99% of the time.
> I usually (> 98% of the time) only care if that something adheres to an interface.
>Well, whether something is "nil" or not definitely matters to whether it adheres to the interface, doesn't it?
True. Though we are discussing nil/null as being a potential state of data. I actually like & utilize Javascript's notion of falsy (false, "", undefined, null, 0). It's not precise, but most of the time, precision is not needed. Just the general notion that there is a value to operate on or not. Optimizing toward brevity supersedes precision in many cases.
> I'd rather be opted out by default and opt in when I want to
> Then why do you use Go and not a fully dynamically typed language? In Go you opt out explicitly with "interface {}".
I mostly do use dynamic languages. Though, the notion of the Go interface makes the api explicit, yet remains decoupled from the rest of the type system, which seems ok. I've seen proponents use these interfaces to later "discover" types.
> Why would you rather opt in? Opt out makes sense because types are so cheap 99% of the time.
I would rather opt in if I would otherwise have to think about it every time the situation comes up.
Take a collection as an example. Most of the time, I really just want to put a bunch of objects (data) into the collection. I don't want to be a bookkeeper of what type of data is going into the collection. I trust that the data "works" with the rest of the program and will utilize other mechanisms to prove that it doesn't work. I don't want to have to fight compile errors and have to craft a type system just to put an object into a collection.
---
As a general notion, I like to evolve the design from a simple understanding to a more precise & intricate understanding. My ideal programming language would be forgiving of my initial simplistic domain understanding and facilitate the growth of precision as time goes onward.
This is a good argument in favor of dynamic typing, but Go is not dynamically typed. It has a static type system that its critics consider insufficiently powerful.
Ambiguity is not something that's generally, if ever, desirable in a program, so if a simpler type system has that to offer, I'm not sure it's a benefit. You could also argue that a more elaborate type system allows you to be more expressive in the design of your programs, rather than restricting you.
Regarding generic data structures, the author should consult the sort package, which has typed generics; much the same approach can be used for generic data structures.
More complex type inference requires a more complex (and hence buggier) compiler.
Finally, the author should investigate the unsafe package; I believe code using it can do what he wants.
There is a subtle yet key difference between range and for. Ranges will only start execution on input of known size and guarantee termination. Regular for loops do not require a terminating state.
Go is designed to be a niche language. A very big niche, but essentially still a niche: service components in clouds, where computing efficiency financially trumps development times.
Go is more productive than C++, but less so than Python or some other alternatives. Go's tooling, linking and libraries make it useful in the Cloud, less so on mobile or personal computers. And the lack of third party libraries, combined with a relatively slower speed of developing these libraries (again compared to Python, javascript and others) means that Go will have a hard time going beyond cloud services.
To be honest, I feel a lot of people are missing the point of Go, and I think none of the points made are important. The single most important thing about Go is its simplicity:
For me it is a language that feels like a hybrid between a scripting language and a 'real' programming language. Simple syntax with some powerful, easy to use features, impressive library support for being only a few years old, but compiles (static) to native code.
That fills a gap that Haskell and Rust don't. These more advanced languages sacrifice simplicity in an attempt to be perfect. Go makes the clear statement of being simple above everything else.
Give an average python/ruby/<insert scripting language here> coder the link to "A 30-minute Introduction to Rust" ( http://doc.rust-lang.org/intro.html ), and he/she will give you a strange look and not understand half of what is being said there. In the end they'll conclude it's not something for them. Give it to a C++ coder and he'll say 'oh nice, but I can do that in C++, use Boost<whatever>, because C++ is superior to all!' - and that coder there is their target audience. A decent C++ coder will have invested too much time to learn another language to solve problems he already learned to solve for himself in C++ a long time ago. Rust might be better and would make his life easier in a lot of cases, but still the majority won't make the switch.
Give the same <insert scripting language here> coder the Go documentation, and he'll be off in no time, writing better code than he used to do, producing a single binary which will not be an absolute nightmare to deploy. And that's what every coder of scripting languages has always dreamed of - being able to make programs in a simple straight-forward way, with as little dependencies as possible, without needing a <language X> runtime. On top of that, Go makes cross-compilation dead-easy.
There are a LOT more <insert scripting language here> coders out there than there are C++ programmers. Giving them Go makes running the stuff they write more efficient, and confronts them with Git/source control (you would be surprised how many don't know about SCM).
I agree with you partially, but OTOH Go, I don't know exactly why, feels pragmatic and ready for production.
Maybe it's because of the creators, Google backing it, or the promise that 1.x remains compatible, or that it ships with a standard library good enough to write useful server stuff.
So despite all those flaws (I miss generics the most), I think it will become the static Python replacement for the next 10 years.
(Its like how Factor handled 3rd party contributions: one library for some particular task is blessed and shipped w/ Factor. Of course it doesn't scale..)
Go, Rust and Haskell come from different ways of thinking about how to solve problems using programming languages..
Go aims to be more simple and concise; in the end you write less code to do the same thing than you would in C++, Haskell or Rust.. because those 3 languages decided to "cover everything" and are worried about other things, creating more burden for the programmer, but with something else to gain.
Go is more of a productivity language.. it reminds us that we have better things to do in life than spend all our time coding, like enjoying that extra time with your family, for instance..
Therefore Go is good.. it's only MAYBE "not good" for the same things that Rust or Haskell are..
Besides.. this is the wrong way to market some technology or idea.. the best way would be "Why Rust or Haskell are Good",
instead of envying the success of others..
It's all about tradeoffs.. and I think this article misses the point.. it cares only about some things that will obviously make some languages more fit.. like, if you care more about memory control, type systems and safety.. it's obvious that Rust and Haskell will look good and "correct".
But that's not all there is to it.. there's teamwork, productivity.. it's a balance.. and it always depends on the problem domain.. some languages are more fit than others.. there's no need for bashing.
You made the claim that "in the end you write less code to do the same thing, as you would in C++, Haskell or Rust". Can you provide any examples where the Haskell equivalent isn't more concise than the Go equivalent?
As a data point, here are links to the Haskell and Go implementations of the TechEmpower benchmarks:
Anyone can encapsulate or "hide" the inner workings so the resulting code will look small.. how much of that code in the frontend is already implemented in the standard library of each language??
To see the reality of it.. a better example would be something without any support library...
Picking something without a library will give an advantage to the other language in most cases because Haskell is very library based.
I didn't cherry-pick btw, I just went to the first benchmark which included community-written snippets of both Go and Haskell I could think of. Cherry-picking would be me intentionally skipping over examples such as the one you provided and instead posting my example.
I've provided the links as an example of cherry-picking on my part.. showing that it's easy to come up with something that looks good..
FP languages tend to be more expressive, and make you feel more powerful.. but they also tend to be more complex and more verbose.. that's the price of power.. it's the trade-off.
I think it's not enough to show something simple when the real complexity is tucked away in hidden layers..
Also, if lines of code were the measure of simplicity, the Brainfuck language would win everything.. but that doesn't mean you can read what the code expresses with less effort..
Nothing against Haskell in particular, only that it doesn't shine when the matter is simplicity..
Expressiveness.. abstraction power.. sure.... but that was not the original point I was making in the original article.
The problem of 'summing any kind of list' is not a problem that is solved in Go via the proposed kind of parametric polymorphism. Instead, one might define a type, `type Adder interface{ Add(Adder) Adder }`, and then a function to add anything you want is fairly trivial, `func Sum(a ...Adder) Adder`; put anything you want in it, then assert the type of what comes out.
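A rough sketch of that pattern, with a hypothetical `Money` type standing in for "anything you want"; note that the type assertions are where static safety is traded away:

```go
package main

import "fmt"

// Adder is the interface proposed above: anything that can
// add itself to another Adder.
type Adder interface {
	Add(Adder) Adder
}

// Money is an invented concrete type implementing Adder.
type Money int

func (m Money) Add(other Adder) Adder {
	return m + other.(Money) // runtime assertion; panics if mixed types are summed
}

// Sum folds any number of Adders together.
func Sum(vals ...Adder) Adder {
	if len(vals) == 0 {
		return nil
	}
	total := vals[0]
	for _, v := range vals[1:] {
		total = total.Add(v)
	}
	return total
}

func main() {
	// Assert the concrete type of what comes out, as described.
	total := Sum(Money(1), Money(2), Money(3)).(Money)
	fmt.Println(total) // prints "6"
}
```

This compiles and works, but it is exactly the trade the rest of the thread argues about: mixing element types is caught by a panic at runtime rather than an error at compile time.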
When it comes to iteration, there is the generator pattern, in which a channel is returned, and then the thread 'drinks' the channel until it is dry; for example `func (m myType) Walk() <-chan myType` can be iterated over via `for v := range mt.Walk() { [...] }`. Non-channel based patterns also exist; tokenisers usually have a Next() method which can be used to step to the next token, etc.
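A minimal version of that generator pattern, with an invented `myType` holding ints for simplicity: the producer goroutine closes the channel when done, which is what lets `range` terminate.

```go
package main

import "fmt"

type myType struct {
	items []int
}

// Walk launches a goroutine that feeds each element into a channel,
// then closes it; the caller "drinks" the channel until it is dry.
func (m myType) Walk() <-chan int {
	ch := make(chan int)
	go func() {
		defer close(ch) // without this, range would block forever
		for _, v := range m.items {
			ch <- v
		}
	}()
	return ch
}

func main() {
	mt := myType{items: []int{10, 20, 30}}
	for v := range mt.Walk() {
		fmt.Println(v) // prints 10, 20, 30 on separate lines
	}
}
```

One caveat worth knowing: if the consumer abandons the loop early, the producer goroutine blocks on its next send and leaks, which is why some APIs prefer the Next()-style iterators also mentioned above.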
The Nil pointer is not unsafe as far as I know, from the FAQ: http://golang.org/doc/faq#no_pointer_arithmetic
The writer seems to believe that calling methods on nil pointers crashes the program; this is not the case. It's a common pattern in lazy construction to check whether the receiving pointer is nil before continuing.
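A sketch of that pattern, with a hypothetical `cache` type: the method call itself is fine on a nil pointer, because the receiver is just an argument — it's only dereferencing fields of a nil receiver (i.e. omitting the guard below) that panics, which is the other side of the thread's argument.

```go
package main

import "fmt"

type cache struct {
	data map[string]string
}

// Get is safe to call on a nil *cache: it checks its own receiver
// before touching any fields, a common lazy-construction guard.
func (c *cache) Get(key string) (string, bool) {
	if c == nil {
		return "", false // nil cache simply reports a miss
	}
	v, ok := c.data[key]
	return v, ok
}

func main() {
	var c *cache              // nil pointer, never constructed
	_, ok := c.Get("missing") // no crash: the method guards its receiver
	fmt.Println(ok)           // prints "false"
}
```

Replace the guard with an unconditional `c.data[key]` field access on a nil receiver and the program panics at runtime, which is exactly the "and what happens when you don't check?" objection raised earlier in the thread.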
Go is not flawless by any means, but it warrants a specific style of simplistic but powerful programming that I personally enjoy.