
Rust and Go - mperham
https://medium.com/@adamhjk/rust-and-go-e18d511fbd95
======
Animats
The article is a lightweight analysis by someone who writes small programs. He
does get that, for Rust, "If the compiler accepted my input, it ran — fast and
correctly. Period." That was a common experience with the very tight
languages, such as Ada and the various Modulas. It's been a while since a
language that tight was mainstream. We need one now, badly.

Go isn't bad for writing routine server-side web stuff that has to scale and
run fast, which is why Google created it. Go is a modern language with a dated
feel. No user-defined objects, just structs. No generics or templates. It was
designed by old C programmers, and it looks it. Go has generic objects - maps
and channels - and syntax for creating instances of them - "make". Only the
built-in generics are available, though; you can't write new ones.
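
A minimal sketch of that asymmetry (pre-generics Go): maps and channels are type-parameterized by the compiler, but user code has no way to declare its own parameterized type.

```go
package main

import "fmt"

func main() {
	// The built-in generic types, created with the built-in make():
	scores := make(map[string]int) // a map parameterized on string and int
	scores["rust"] = 1

	ch := make(chan int, 1) // a channel parameterized on int
	ch <- scores["rust"]
	fmt.Println(<-ch) // prints 1

	// There is no syntax for a user-defined equivalent, e.g. a
	// hypothetical `type Stack<T>`; a user container must pick a
	// concrete element type or fall back to interface{}.
}
```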

Go's "reflection" package thus tends to be overused to work around the lack of
generics. This means doing work for each data item at run time for things that
could have been done once at compile time. "interface{}" (Go's answer to type
Any from Visual Basic) tends to be over-used.
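
A hypothetical sketch of the pattern being criticized: a function that accepts interface{} has to recover the concrete type at run time, per value, where generics would have let the compiler check it once.

```go
package main

import "fmt"

// describe takes Go's catch-all interface{} type and must inspect
// the value at run time with a type switch.
func describe(v interface{}) string {
	switch x := v.(type) {
	case int:
		return fmt.Sprintf("int: %d", x)
	case string:
		return fmt.Sprintf("string: %q", x)
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(describe(42))   // int: 42
	fmt.Println(describe("hi")) // string: "hi"
}
```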

Go (especially "Effective Go") has a lot of hand-waving about parallelism.
Go's mantra is "share by communicating, not by sharing", but all the examples
have data shared between threads. The channels are just used as a locking
mechanism. Race conditions are possible in Go, and there's an exploit which
uses this. (That's why Google AppEngine limits Go programs to single threads.)
Go doesn't use immutability much, which is a shortcoming in a shared-data
parallel language with garbage collection. If you can make data immutable, you can
safely share it, which is a way to avoid copying without introducing race
conditions.
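
A sketch of that idea: a slice built once and then only read can be shared by many goroutines with no locks and no copies. Nothing in Go enforces the "only read" part; it is purely a convention here.

```go
package main

import (
	"fmt"
	"sync"
)

// sum only reads its argument, so any number of goroutines may
// call it on the same slice concurrently without synchronization.
func sum(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	// Built once, then never written again: immutable by convention.
	primes := []int{2, 3, 5, 7, 11}

	var wg sync.WaitGroup
	results := make([]int, 4)
	for i := range results {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = sum(primes) // shared read-only data, no lock
		}(i)
	}
	wg.Wait()
	fmt.Println(results) // [28 28 28 28]
}
```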

Rust, like Erlang, takes a much harder line on enforcing separation and
locking. I haven't used Rust myself yet, so I can't say more on what it's like
to use it. My hope is that Rust will provide a solution to buffer overflows in
production code. After 35 years of C and its discontents, it's time to move
on. I really hope the Rust crowd doesn't fuck up.

~~~
tptacek
Like you†, I've had the pleasure of working with some fairly large concurrent
codebases and the character-building experience of tracking down deadlocks,
random memory corruption bugs that turn out to be race conditions, and (my
most favorite of all) unexpected serializations that randomly bring programs
to a halt. Most of that experience has been in C++, with a little C and a
little Java mixed in there.

Over & over I see language aficionados ding Golang for not taking advantage of
immutability and for allowing shared data --- or, in your case, going a step
further and reducing all communication among processes in Golang to instances
of synchronized sharing.

What I'd like to know is: why don't all those hundreds of thousands of lines
of concurrent Golang code out there, including all the library code I can just
"go get" and whose authors have been encouraged by Rob Pike to use, basically,
threads with near total abandon (watch his video about designing a lexer!) ---
why don't all those libraries and programs randomly deadlock and corrupt
themselves all the time?

Because my experience is that Golang code is quite a bit more reliable than,
for instance, Python code.

What am I missing? The "share by communicating" model in Golang seems to work
pretty darn well, especially given the extent to which Golang begs programmers
to make programs concurrent.

† _OK probably you more than me but still._

~~~
Jweb_Guru
> why don't all those libraries and programs randomly deadlock and corrupt
> themselves all the time?

The simplest answer would be "they do." In aphyr's recent presentation on
Jepsen, where he tested etcd (a Go database implemented on top of Raft), he
noted that when he started using it he encountered a ton of easily
reproducible races and deadlocks (which he sarcastically noted was surprising
because he thought goroutines were supposed to make concurrency issues a thing
of the past).

I am not saying that Go channels don't help the situation at all--and the
inclusion of a race detector doesn't hurt either--but you still have plenty of
ways to shoot yourself in the foot. The thing that probably helps most is that
GOMAXPROCS is 1 by default, since data races are a multicore phenomenon in Go.
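
The GOMAXPROCS point is easy to reproduce. A deliberately racy sketch (names hypothetical): on multiple cores the unsynchronized counter below usually loses increments, and `go run -race` flags it; with a single OS thread the interleavings that expose it are much rarer, so the bug is hidden rather than fixed.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// racyCount increments a shared counter from several goroutines
// with no synchronization at all -- that is the bug.
func racyCount(workers, iters int) int {
	counter := 0
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < iters; j++ {
				counter++ // unsynchronized read-modify-write
			}
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	runtime.GOMAXPROCS(4) // opt in to real parallelism; the old default was 1
	// On multiple cores this usually prints less than 8000.
	fmt.Println(racyCount(8, 1000))
}
```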

~~~
tptacek
Distributed systems programming is its own special concurrency problem, and
distributed systems also exhibit deadlock, races, and serialization, no matter
what language they're implemented in. I'm not sure what finding a race
condition in a distributed commit implementation says about a language; at the
very least, it's nothing you couldn't say about Rust as well, which is also
not a language that solves distributed systems concurrency problems.

Maybe I'm wrong and etcd was riddled with concurrency problems _between the
goroutines of a single etcd process_?

In any case: as anyone who has worked on a large-scale threaded C++ codebase
can tell you: Golang programs simply do not exhibit the concurrency failures
that conventional threaded programming environments do. It would be one thing
if Golang code only used concurrency for, say, network calls. But goroutine
calls are littered throughout the standard library, and throughout everyone's
library code.

~~~
rdtsc
It is probably not a black-and-white situation. Golang is better because it
has built-in channels and encourages users to take advantage of them. It also
has garbage collection. So those two things right off the bat help.

But there are better things out there -- isolated heaps (Erlang), borrow
checkers (Rust), stronger type systems and immutability (Haskell) etc. There
are no magic unicorns so those things often come at a price -- sequential code
slowdown.

Getting back to Go. One can of course say, "Oh, send only messages. We are all
adults here. Let's just agree to be nice. Stop sharing mutable memory between
goroutines!" But all it takes is "that guy" or "that library", doing it "that
one time" and then there are crashes during a customer demo or during some
critical mission. It crashes and then good luck trying to reproduce it.
Setting watchpoints in gdb (or the equivalent Go tool), asking customers "Can
you tell me exactly what you did that day. Think harder!" and so on.

Also, as others have pointed out, Golang is often run with just one OS thread
backing all the concurrency, so many potential races could simply be hidden.

There is also some confirmation bias involved. When something is broken, often
authors don't write blogs about it, don't advertise. They fix it, and move on.
So maybe a lot of programs are full of concurrency bugs but nobody is blogging
about it. They've invested time and energy into learning a new ecosystem, and
blogging about its flaws is hard to do.

Another observation is that after spending a lot of time debugging and handling
segfaults, pointer errors, use-after-free errors, and concurrency issues, that
becomes the default and expected view of how programming works. It becomes
hard to imagine how it could work another way. It becomes obvious that weeks
would be spent tracking one concurrency bug or having to add cron jobs to
watch for crashed programs and restart them, because the system is so complex
and non-deterministic that replicating the bug is too hard.

~~~
jaytaylor
Just out of curiosity - is there a problem with sharing data across goroutines
when access to/mutation of said data is controlled by a mutex?

    
    
        func (*Mutex) Lock
    
        Lock locks m. If the lock is already in use,
        the calling goroutine blocks until the mutex is
        available.
    

[http://golang.org/pkg/sync/](http://golang.org/pkg/sync/)

It seems to me this is a valid alternative to [rigidly] sticking to pure
message passing.
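
For concreteness, the usual shape of that alternative is a hypothetical struct whose methods take the lock around every access to the guarded field:

```go
package main

import (
	"fmt"
	"sync"
)

// Counter groups the mutex with the data it guards.
type Counter struct {
	mu sync.Mutex
	n  int
}

func (c *Counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func (c *Counter) Value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.n
}

func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); c.Inc() }()
	}
	wg.Wait()
	fmt.Println(c.Value()) // always 100: every access holds the lock
}
```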

~~~
Animats
In most languages, the language says nothing about what data is protected by
the mutex. Modula and Ada did, and Java has "synchronized" objects, but
C/C++/Go lack any syntax for talking about that. This typically becomes a
problem as a program is modified over time, and the relationship between mutex
and data is forgotten.
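
A tiny sketch of the hazard (names hypothetical): the compiler has no idea that the field belongs to the mutex declared next to it, so an accessor that forgets the lock compiles without complaint.

```go
package main

import "sync"

type account struct {
	mu      sync.Mutex
	balance int // guarded by mu -- but only by convention
}

// deposit follows the (unchecked) convention.
func deposit(a *account, amt int) {
	a.mu.Lock()
	a.balance += amt
	a.mu.Unlock()
}

// leak forgets the lock entirely; nothing ties balance to mu,
// so this racy read is accepted without a compiler complaint.
func leak(a *account) int {
	return a.balance
}

func main() {
	a := &account{}
	deposit(a, 10)
	_ = leak(a)
}
```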

~~~
masklinn
> In most languages, the language says nothing about what data is protected by
> the mutex.

Or the other way around, what mutex protects a piece of data (or even that a
piece of data should be protected at all), so it's easy to forget it and just
manipulate a bit of data without correctly locking it.

I was pleasantly surprised to discover that Rust's sync::Mutex owns the data
it protects, so you can only access the data through the mutex (and the
relation thus becomes obvious).

------
freyr
It's not clear how much experience the author really acquired with each
language, and whether that experience was sufficient to justify his statement:

 _Go felt that way to me — it was good at everything, but nothing grabbed me
and made me feel excited in a way I wasn’t already about something else in my
ecosystem._

He's apparently using each language to write relatively small command-line
utilities. If Go is "amazing" at anything, it's usually cited as a language of
choice for (1) networked systems, and (2) large yet maintainable systems. I'm
not sure his initial foray into the language would have provided enough
experience to accurately assess those merits one way or the other.

Rob Pike once expressed surprise that people migrating to Go weren't C++
programmers, but Ruby/Python/etc. programmers who needed more performance.
That leads you to wonder: (EDIT: removed pejoratives) if a programmer desired
to switch from C/C++ to another language but hasn't by now, why not?

1. They require the performance benefits of C/C++ (and as humanrebar pointed
out, manual memory management).

2. They're tied to legacy code, with too little incentive to switch.

3. They have an organizational mandate.

Any programmer who wasn't subject to the above constraints and wanted to
switch could have done so before Go showed up on the scene. And if a
programmer uses C or C++ solely because of the above constraints, Go isn't likely
to change that.

Rust may have a better chance of converting C++ programmers, if it offers the
performance and control demanded by programmers who are using C++ by
necessity. It will be interesting to see if people migrating from Python/Ruby
to a higher performance language will choose Go or Rust in the future. Kind of
like the OP, I like Go but I'm excited about Rust.

~~~
threeseed
>(2) large yet maintainable systems

I have never seen anyone suggest Go for large systems or seen any open source
code that even comes close to enterprise system size. I would argue that Go is
inadequate for large systems compared to the JVM languages. The absence of
operational tooling, exceptions, declarative annotations, runtime management,
etc. all makes it much harder to support and scale to large numbers of
developers.

Go seems perfect for micro services, command line utilities and single purpose
applications. Which is where it seems to have gained a lot of traction in
companies to date.

~~~
e12e
>> (2) large yet maintainable systems

> I have never seen anyone suggest Go for large systems or seen any open source
> code that even comes close to enterprise system size.

I might be moving the goal posts a bit here, but I think Go's C heritage, and
focus on message passing -- possibly coupled with something like protobuf or
similar -- encourages breaking large systems into small services. So if you
view a system as a "sum of functionality" -- I think one might still use Go
for "large, yet maintainable systems".

Now, it is of course possible to write micro services in both C++ and Java --
but historically, at least in the Java world, you end up throwing everything
into a massive JBoss container, exposing yourself to thousands upon thousands
of lines of code.

With go, you deploy (relatively) small binaries, and a service can live as 20
binaries on one box, or as 20 binaries (along with some load balancers like
haproxy or what not) across 300 machines. Or something in between.

I'd argue that some of the saner Java frameworks and projects also revolve
around simplicity and separation of concerns -- typically leading to micro
services. But a lot of people seem to end up working with large, poorly
architected beasts. That's probably more of a culture thing than a language
thing -- so I think people will make huge swaths of unmaintainable Go code as
well...

~~~
waps
> encourages breaking large systems into small services

That's as much of a curse as it is a blessing. To some extent, small services
are handy for dev/ops type folks, as they can quickly see which specific part
of an application is misbehaving with memory or cpu or diskspace, so they like
it.

But small services mean that you lock down the interface between parts of the
system by using another language to specify the communications protocol (e.g.
protobuf, json, ...), and two different codebases have to understand it. And even
if you manage to get the code to change, now you have the problem of migrating
the running program.

In other words, the interface is now set in stone. Nobody will ever touch it
again. This is exactly what you do not want to happen. Small services are the
enemy of large, flexible programs.

Contrast this to Java/C# (and, somewhat less perfectly, C++) and their
refactoring tools. What a difference. Changing an interface is something that
is mainly done by computer code, not by a programmer, and all parts are
modified and all problems identified.

There are points where this is not a problem, like a file system interface, or
a socket interface, that sort of thing (and even there you may change your
mind ...). Places where flexibility is not needed or wanted (I would argue,
looking at linux file systems, that the POSIX API is not, in fact, a good API
for quite a few file systems, but looking at the kernel I can see why this is
not going to change. Of course, half the distributed file systems are user
space libraries, partly for this reason). This is exactly the sort of thing C
programmers deal with.

~~~
NateDad
One of Go's strengths is how easy it is to refactor. Implicit interfaces mean
that you can change a function to take an interface, and the caller who is
passing in a concrete type doesn't have to get updated at all.

Also, there's a relatively recent tool called gorename that does 100%
type-safe renaming.

Plus there's been gofmt and gofix for forever which you can use to
automatically rewrite your code.

Finally, because almost all go code is formatted with gofmt, you can often do
simple find and replace changes because all the code is completely regular.

~~~
waps
> One of go's strengths is how easy it is to refactor

Give me a minute to collect my jaw off the floor here. Nope ... need some more
time. Unless you mean in the same way as C and Pascal are easy to refactor, I
disagree in the strongest possible way. I may concede to a very limited
extent. While Go and its tools don't allow refactoring, due to the static
nature of Go it's actually possible, through careful design and constantly
putting extra effort in, to make sure that it's reasonably easy to refactor. As long
as you stay away from using interfaces, use long and unique enough names for
your variables, make sure no variable names are substrings of other variable
names, have a convention for polymorphic method names (ie.
Matrix.MakeWithFloatArray(), Matrix.MakeWithIntArray(),
Matrix.MakeWithZeroes(), ...), ...

> Implicit interfaces means that you can change a function to take an
> interface, and the caller who is passing in a concrete type doesn't have to
> get updated at all.

Yes because that's what refactoring is ... what you're showing here is called
"polymorphism", and Go "doesn't support it" (except when it does, like as you
point out here, in interfaces, oh and in range, make, new, append, close,
copy, delete, imag, len, print, println, real, go, defer, most of which are
also generic and polymorphic in really, really bad ways (some have completely
unrelated and surprising behavior when passing different types to them), and I
doubt I've got all of them).

> also, there's a relatively recent tool created called gorename that does
> 100% type-safe renaming.

> Plus there's been gofmt and gofix for forever which you can use to
> automatically rewrite your code.

> Finally, because almost all go code is formatted with gofmt, you can often do
> simple find and replace changes because all the code is completely regular.

I have tried that tool. It only does a single file. Again, that makes it not
refactoring. Just so we're clear, here's the definition of refactoring:

    
    
      Code refactoring is the process of restructuring existing computer code –
      changing the factoring – without changing its external behavior.
    

Which is not what those tools do. Change the name of a method ... _boom_ 5
objects don't satisfy the interface they did 5 seconds ago anymore. Change the
name of an interface ... doesn't change in all other parts of the code. Change
an exported variable ... everything fails to compile.

Next major point of criticism of the Go tools: when do you want to do
refactoring? Well, during development. Of course, in order to refactor during
development, when 2-3 of your program's files don't compile, you obviously
cannot use a normal compiler to refactor, since it won't understand the
program. While this is not technically part of the definition, it frustrated
me to no end the first, and last, time I used gofix to attempt to refactor
something. Vim and I are faster at refactoring a 10,000-line Go codebase than
gofix + cleaning up after it is. Gofix knows a cute trick with symbol tables
that is 1% complete (because making it functional will require a full rework
of the Go compiler), which is not refactoring (since it doesn't look at the
full source tree), and it will require a rework of Go itself (I'm not yet
positive, but I think that because Go works with implicit interfaces, it is
not actually possible to refactor anything related to object methods or
interfaces correctly).

~~~
robfig
According to the announcement [1], gorename is able to rename just about any
identifier (function, method, exported variable, local variable) throughout
your entire GOPATH, not just a file. I tried it out and it seemed to work
fine. Additionally, it seems to detect at least some cases where the rename
would cause the resulting code to not work, although you can -force it to
apply the changes anyway.

I happen to agree that Go could use more refactoring tools, but I think it's
in good shape considering how young it is.

[1] [https://groups.google.com/forum/#!topic/golang-
nuts/96hGPXYf...](https://groups.google.com/forum/#!topic/golang-
nuts/96hGPXYfqsM)

------
alkonaut
If you are considering Go, or just want a good laugh, read the discussions of
higher-order functions. Or, for that matter, of generics.

Here is a gem: [https://groups.google.com/forum/#!topic/golang-
nuts/RKymTuSC...](https://groups.google.com/forum/#!topic/golang-
nuts/RKymTuSCHS0)

There's a chance you'll laugh at the people dismissing higher order functions
as nonsense, in which case Go might not be for you. This is a good test of
whether you want to try it out or not.

~~~
tptacek
Let me save the reader of this comment a long and unproductive session of
reading tea leaves out of mailing list posts:

* Golang doesn't have generics. This comes up so often on the mailing list that it's in the FAQ:

[http://golang.org/doc/faq#generics](http://golang.org/doc/faq#generics)

* Golang has a similar attitude to idiomatic functional programming tools as Python does; it doesn't have map() and it's not easy to write a general-purpose map.

(I'll take Golang's lack of generics and map over Python's busted closures,
and so call it a draw).

If you want to write programs in functional style, you don't want to use
Golang. Pretty simple.
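
To make the map() point concrete: the closest a general-purpose map gets in pre-generics Go is a sketch like this, which erases the element type to interface{} (idiomatic Go just writes the for loop instead).

```go
package main

import "fmt"

// mapSlice is as general as pre-generics Go allows: the element
// type is erased, so callers must assert it back at run time.
func mapSlice(xs []interface{}, f func(interface{}) interface{}) []interface{} {
	out := make([]interface{}, len(xs))
	for i, x := range xs {
		out[i] = f(x)
	}
	return out
}

func main() {
	nums := []interface{}{1, 2, 3}
	doubled := mapSlice(nums, func(v interface{}) interface{} {
		return v.(int) * 2 // run-time assertion the compiler can't check
	})
	fmt.Println(doubled) // [2 4 6]
}
```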

~~~
nightpool
Umm... Am I just interpreting you wrong? Python has map in the builtin
namespace:

    
    
      >>> map(lambda x: x*5, range(5))
      [0, 5, 10, 15, 20]

~~~
tptacek
Python obviously has those primitives (that's what I meant by the tradeoff
between it and Golang) but Guido infamously discourages their use.

~~~
nightpool
Hmm, sorry I didn't see this earlier, but I personally feel that Python
supports a healthy mix of imperative, functional and object-oriented styles
(especially with the (over?) use of the "operator" library I tend to see
around nowadays...)

Certainly when I picked up a purely functional language (Racket), after having
mainly Python and Java experience, I didn't feel too disoriented or out of
touch.

~~~
burntsushi
The parent is referring to the fact that Guido van Rossum (the Benevolent
Dictator For Life of Python) is pretty down on functional programming in
Python. You can read some of the history (from the horse's mouth) here:
[http://python-history.blogspot.com/2009/04/origins-of-
python...](http://python-history.blogspot.com/2009/04/origins-of-pythons-
functional-features.html)

TL;DR - "I have never considered Python to be heavily influenced by functional
languages, no matter what people say or think."

Python, IMO, has _flutters_ of functional programming in it. But its broken
closures, lack of uncrippled anonymous functions, and lack of tail-call
optimization are pretty damning strikes against calling Python's support for
functional programming similar to its support for imperative or OO paradigms.
It just isn't. Yeah, we get `map` and `filter`, big whoop. :-)

~~~
masklinn
> But its broken closures

They were "unbroken" in Python 3, though because of the way scoping works in
the language, it requires marking variables (with `nonlocal`).

~~~
burntsushi
I'm aware. The presence of nonlocal and global still makes them broken to me,
even if it's a result of how scoping works. Lua and Javascript both manage to
have unbroken closures.

~~~
masklinn
> Lua and Javascript both manage to have unbroken closures.

Because they use explicit local declaration (and implicitly declared variables
are global in both)…

~~~
ufo
Yes, and I would say that requiring explicit local declarations is the right
thing to do. Having a "default" variable scope is very error prone (typos are
treated as new variables and closures don't work right) but at least with
"global by default" you can use a linter to enforce that all your globals are
explicitly declared. In Python it's impossible to do something similar.

~~~
masklinn
> Yes, and I would say that requiring explicit local declarations is the right
> thing to do.

And I could hardly agree more with that. I simply disagree that Python's
closures remain broken; they're simply fixed within the constraints set by
previous design decisions.

> at least with "global by default" you can use a linter to enforce that all
> your globals are explicitly declared

Still, there's really no excuse for global-by-default: there's no convenience
justification, and it could just as easily be an error (more easily, really).

~~~
burntsushi
> I simply disagree that Python's closures remain broken, they're simply fixed
> within the constraints set by previous design decisions.

The fact that I have to distinguish between `global`, `nonlocal` and `default`
scope makes them broken. `nonlocal` is a _hack_.

Saying that it's a result of previous design decisions is a sound technical
reason for why `nonlocal` is necessary to make closures work. But as an
abstraction, at least, they are broken.

With that, I will concede that they are broken in two different ways between
Python 2 and Python 3. This is IMO.

------
eridius
Not a bad write-up. The Rust code snippets can be slimmed down very slightly
though. Here's main():

    
    
        fn main() {
            let args = os::args();
            let washed_args = args.iter().map(|arg| arg.as_slice()).collect::<Vec<_>>();
            match washed_args.as_slice() {
                [_, "review", opts..] => review(opts),
                _ => usage()
            }
        }
    

although I might actually suggest the alternative approach:

    
    
        fn main() {
            let mut args = os::args().into_iter();
            args.next(); // skip program name
            match args.next().map(|s| s.as_slice()) {
                Some("review") => review(args.collect::<Vec<_>>()),
                _ => usage()
            }
        }
    

(this approach requires `review()` to take a `Vec<String>` instead of a
`&[&str]`, but that's not a difficult change, and we could fix it using a
second line of code if we wanted but at the cost of introducing a new
allocation, like the original code does)

For review() I'd suggest changing the original code:

    
    
        let cwd = os::getcwd();
        let have_dot_git = have_dot_git(cwd.clone());
    
        let dot_git_dir: &Path = match have_dot_git.as_ref() {
            Some(path) => path,
            None => { panic!("{} does not appear to have a controlling .git directory; you are not in a git repository!", cwd.display()) },
        };
    

to the following:

    
    
        let cwd = os::getcwd();
        let dot_git_dir = have_dot_git(&cwd).expect(format!("{} does not appear to have a controlling .git directory; you are not in a git repository!", cwd.display()).as_slice());
    

This actually leaves `dot_git_dir` as a `Path` instead of a `&Path`, but I
think that's better anyway. It also requires `have_dot_git()` to take a
`&Path` instead of (what I assume it takes now,) a `Path`, which is an
appropriate change as there's no need for cloning the path.

~~~
dbaupp
Your expect version changes behaviour: it does the allocation and string
formatting unconditionally even if `have_dot_git(&cwd)` is `Some`. It is
probably not a problem for this since the other operations are significantly
more expensive, but it can be a gotcha if used inside a loop.

~~~
eridius
You're right, that's what I get for doing this fast. In that case I'd say

    
    
      let dot_git_dir = have_dot_git(&cwd).unwrap_or_else(|| panic!("{} does not appear to have a controlling .git directory; you are not in a git repository!", cwd.display()));

------
peterevans
I really like the idea of testing a language by writing a small command-line
utility with it, even one that -- as the author mentioned -- already exists.

Way back when, when I first was learning C, I didn't comprehend it very well.
I was OK with it. A friend gave me some CDs with FreeBSD on it, one with the
OS, and one with program sources. It was that source code which really opened
my eyes, and you could digest small programs (like chmod) and see, you know,
this is working, production code, and it's not hard, and you can do this.

------
codezero
I was worried when I saw "I decided to write a little Rust and, because
everyone in my world seems swoony over it, Go."

That's a pretty bad reason for using a language and usually leads to some
pretty ridiculous criticism.

This post was not that; I think they nailed a lot of the good and bad things
about Go. In fact, they could have been a lot more harsh. There is a depth
lacking in just checking out a language this way, though, as there's no
evaluation of some of the larger reasons Go exists, like concurrency and
fast compile times. Then again, most people probably don't even need/care
about these.

~~~
ryandvm
Hmm. Learning a language/framework that is exploding in popularity is probably
one of the best things a dev can do to stay relevant (read: employed).

Hell, very few of us would be using Javascript if it weren't for its
ubiquity/community/popularity. I sure as hell am not using it because it's a
well designed language.

~~~
bad_user
Quite the contrary, learning a language _because_ it is exploding in
popularity is a nice way to ensure that you'll end up being abused, poorly
paid and irrelevant. You can't possibly find a worse reason for learning a new
language.

On Javascript you're missing the causality. Some people are learning
Javascript because it is popular, of course, because they've read on some forum
that learning popular things gets them hired, but those people are totally
uninteresting next to the pool of developers that learned Javascript because
they had shit to do and things to build, and Javascript was the answer.

There's a big difference between learning X because you want to make your job
easier, because you want to build things in a new way, because you want
exposure to new ways of thinking, because there isn't an alternative to what
you're trying to do, etc... and learning X because that's fashionable. In the
former case, it's pretty much a gamble but you might actually get something
out of it. In the latter case all you're earning is the ability to slap another
keyword on a resume that only mediocre HR people care about, which is actually
a waste of time as it's keeping you from learning things that might actually
be of interest for the things you're trying to accomplish (other than getting
a shitty job of course).

Just as people that learned PHP or Java back in the day found out, learning
for the sake of getting a job is a sure way to land a shitty and boring
job in which you are a replaceable cog. Managers love popular languages and
frameworks, of course they do. And the interesting jobs that are out there,
that make people happy and that pay well - well, let me tell you, those
companies aren't looking for keywords on your resume, but for things that
you've actually built.

And after some years from now, when you'll be over 40, you'll find yourself
either a manager that does LinkedIn searches for keywords, or an obsolete
developer that can't find a job, because everything you've learned is obsolete
and you wasted your time learning syntax instead of learning new
abstractions, algorithms and mathematics - you know, stuff that never gets
old.

~~~
LunaSea
By learning new programming languages or frameworks you stand out from the
crowd.

Guess who's going to get a job:

A) The fool who just learned JavaScript and Angular.js

B) The university student who only knows Java

The answer will be (A) every single time because companies don't have the time
to train people for months.

~~~
bad_user
Depending on context, it is going to be (C) the developer who has something to
show on GitHub or (D) the developer that knows about algorithms, networking,
computer architecture and that can solve concurrency issues.

Thanks for the example BTW - you've highlighted the problem precisely - the
question is do you want your competition to be college students that only know
Java? Because that says something about the company in question. And yes, it's
a choice.

------
burntsushi
> A good (trivial) example was a great command parsing library just doesn’t
> exist yet.

There is a Docopt implementation in Rust[1], which is used by Cargo. It tracks
master and is regularly updated.

Interestingly, I've found people either love or hate Docopt, so maybe you knew
about it but don't like it. :P

/shameless plug

[1] -
[https://github.com/docopt/docopt.rs](https://github.com/docopt/docopt.rs)

~~~
Ygg2
Yeah, that whole part rubbed me the wrong way. Docopt is, as far as I know, the
best. Also, Rust comes with a simpler getopt.

It is broken now, but with the collection reform landing, what isn't broken.

~~~
burntsushi
It was only broken because I merged a PR that updated it to Rust master
_before_ the nightly was updated on Travis. :-)

It's passing now that Travis updated to the new nightly.

(But yup, it was collection reform.)

------
programminggeek
Having spent a little time with Go, I ended up feeling like it was both better
and worse than Ruby. In many ways it's many of the things that I want from a
language - static typing, a fast compiler, pretty sensible defaults and so on. A
lot of things I wish Ruby did, Go does great.

I think Go does really well in tooling, but it doesn't feel as great in syntax
or language features that I would really like. The two big ones I wish Go had
were named parameters on functions, and immutable data structures.

It's probably a blub paradox thing, but my ideal language has immutable data
structures and named function parameters. Kotlin, Scala, and Swift all kind of
nailed these features, but Go has not.

For me, named parameters and a little verbosity goes a long way to allow me to
communicate intent in my code. Immutability allows me to have a built in sense
of safety that once the data is set, it stays that way.

In my experience, a whole class of problems goes away when you have those two
features (alongside the many other features we'd expect from a language like
Go).
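For what it's worth, Rust does take immutability as the default - a minimal
sketch (illustrative, current syntax):

```rust
fn main() {
    let greeting = String::from("hello"); // bindings are immutable by default
    // greeting.push_str(" world");       // compile error: `greeting` not declared `mut`

    let mut log: Vec<&str> = Vec::new();  // mutability is an explicit opt-in
    log.push("started");

    assert_eq!(greeting, "hello");
    assert_eq!(log, vec!["started"]);
}
```

Once `greeting` is set, the compiler guarantees it stays that way unless the
binding is marked `mut`.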

What I felt a year ago, when I was playing with Go, was that they didn't care
much about either of those two features, which is fine, but it keeps Go from
being my ideal language.

I wish Swift were a bit more general-purpose, Kotlin and Scala don't compile as
fast as Go, and Ruby doesn't have a compiler to do some of the checking I wish
it would.

Go's fast compile times with something like Ruby's syntax and a feature set
similar to Kotlin/Swift would be darn near perfect.

For me, there is no perfect language.

~~~
zem
the thing that has got me excited about learning go some day is that it's
supposed to be really good at cross-compiling code into small, standalone
binaries for the three major platforms (linux, windows, osx). haxe is another
language on my radar for much the same write-once-deploy-all-over-the-place
reason.

~~~
untothebreach
Rust has that too. You can cross-compile binaries that will run on any
platform that LLVM supports.

~~~
zem
nice :) already excited about rust for other reasons, but that's going to be a
great plus

------
twtwtaway
Comparing Go and Rust doesn't feel right. They are obviously designed for
solving different kinds of problems. Go is a simple language, maybe even too
simple for my taste. But simplicity is its greatest strength. And I can
understand people who would prefer Go as their go-to language for dealing with
specific kinds of problems. Go is a boring language, but it gets you where you
want to be in a short time and without many surprises. Rust, on the other hand,
is designed for systems programming. It's got some nice features, but it's also
a much more complex language than Go. I don't want to fight the compiler all
the time. Sometimes I don't need that kind of safety.

~~~
bjz_
I agree that it doesn't feel quite right, but this post seems to be more of a
personal exploration of going from some ignorance of both languages, to having
a better understanding of where each language sits. His concluding section is
nicely written.

------
exacube
I think to properly write a language comparison, you need to have extensively
used both languages and with multiple use cases.

For example: I've recently attempted writing a small service in Go and it only
took a few hours for me to figure out how weak a language could be without
some sort of type-abstraction or generics: I had to implement a
FindValueInArray() twice for two different types. This _should_ be a big issue
in any respectable language. (I mostly like Go otherwise though!)
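To make that concrete: with generics, one definition covers both cases. A rough
Rust sketch, using `find_value` as a stand-in for the hypothetical
`FindValueInArray` I had to write twice:

```rust
// One generic definition, written once for any element type that supports
// equality comparison.
fn find_value<T: PartialEq>(items: &[T], target: &T) -> Option<usize> {
    items.iter().position(|item| item == target)
}

fn main() {
    // The same function covers the int case and the string case.
    assert_eq!(find_value(&[10, 20, 30], &20), Some(1));
    assert_eq!(find_value(&["a", "b"], &"c"), None);
}
```

In Go (pre-generics), each element type needs its own copy of this function or
a detour through `interface{}` and run-time type assertions.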

~~~
alkonaut
True, but you can assess how approachable a language is by simply approaching
it. This is useful information. You could even argue that experience would
disqualify you from reviewing using that angle!

~~~
exacube
hm.. but why would anyone want to read how approachable Go is for beginners on
HN? Do all negative experiences disqualify anyone from publishing a
comparison? which negative experiences qualify?

------
bsaul
I'm surprised at the paragraph about Rust having Erlang-style actor-based
parallelism... From what I've read, parallel programming was still heavily a
work in progress in Rust (I had a very recent discussion on HN about using Rust
for HTTP server-side coding, with people confirming this to me).

Go's goroutines let me build a standalone binary with an embedded HTTPS server
and websocket support. Would Rust be able to do that, even at the 1.0 release,
without relying on low-level C library wrappers?

~~~
steveklabnik
Yeah, "Erlang-style actor" isn't exactly accurate anymore. A while ago this
was true, but it's not exactly true today.

That said, Rust does encourage message passing by default, but also gives you
the ability to safely do shared-memory concurrency if you need.

~~~
bsaul
Ok. I'm not a language designer, so correct me if I'm wrong, but doing anything
remotely resembling actors implies being able to do M:N threading in some way,
which Rust decided not to do recently. Correct?

~~~
steveklabnik
I am not clear that "Actors" directly implies M:N threading. Rust's spawn()
makes a 1:1 thread, but you still talk between threads with channels, and you
can't get shared memory without going through an Arc<Mutex<T>> or something
similar.

> which rust decided not to do recently.

It's more subtle than this. Rust's I/O libraries are being re-done to remove
the M:N stuff, yes, but Rust is low-level enough that I/O is just a library;
anyone can implement alternate I/O stuff, it's not privileged other than coming
with Rust. See mio as an example of an in-progress alternative.
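A rough sketch of both styles in current syntax (the exact thread and channel
APIs have moved around, so treat this as illustrative):

```rust
use std::sync::mpsc::channel;
use std::sync::{Arc, Mutex};
use std::thread;

// Message passing: workers send results over a channel instead of sharing state.
fn channel_sum(n: i32) -> i32 {
    let (tx, rx) = channel();
    for i in 0..n {
        let tx = tx.clone();
        thread::spawn(move || tx.send(i * 2).unwrap());
    }
    drop(tx); // drop the last sender so the receiving iterator terminates
    rx.iter().sum()
}

// Shared memory: the counter is only reachable through Arc<Mutex<T>>, so the
// compiler won't let any thread touch it without taking the lock.
fn locked_count(n: usize) -> usize {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || *counter.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(channel_sum(4), 12); // 0 + 2 + 4 + 6
    assert_eq!(locked_count(4), 4);
}
```

Either way, the safety comes from the type system: there's no way to reach the
shared counter except through the mutex.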

------
jeorgun
Does 'the language prevents errors at compile time' really mean 'better
function signatures in the standard library'? Because that's all I'm getting
out of the regex example given. So far as I know the fail! macro still exists
in Rust.

Edit: Evidently I should have put a /s at the end of that first sentence. I
know what the idea behind compile-time checking is, but I don't see that the
given example actually illustrates it.

~~~
Dewie
> Does 'the language prevents errors at compile time' really mean 'better
> function signatures in the standard library'? Because that's all I'm getting
> out of the regex example given.

Of course any function can be forced to crash by inserting some crash-inducing
code into it. You can always write a function that assumes a value is of a
certain form and crashes the program if it is not (like unwrap, which extracts
a value from an Option, or crashes if there is no value). In order to have
"code that works if it compiles" (comparatively speaking; it's often an
exaggeration), you have to have the discipline to use the facilities that the
language provides you.

I guess a language would need some kind of totality checker, like the one
Idris has, in order to make sure that you couldn't make a function diverge in
some way.

Rust also has the `!` type, aka bottom, for marking functions that diverge.

> So far as I know the fail! macro still exists in Rust.

What should that do? Crash the program? If so, it should have been renamed to
something like `panic!` by now, since "fail" is now associated with Error
types, while "panic" is associated with crashing the program/exceptions.
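To illustrate the discipline point above with a sketch (current syntax,
illustrative only): both versions compile, but only the `match` is forced to
say what happens when there is no value.

```rust
fn first_word(s: &str) -> Option<&str> {
    s.split_whitespace().next()
}

fn describe_first(s: &str) -> String {
    // Disciplined: the compiler insists the None arm exists.
    match first_word(s) {
        Some(w) => format!("first word: {}", w),
        None => String::from("no words"),
    }
}

fn main() {
    assert_eq!(describe_first("hello world"), "first word: hello");
    assert_eq!(describe_first("   "), "no words");

    // Undisciplined: this also compiles, but panics at run time on no match.
    // first_word("   ").unwrap();
}
```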

~~~
masklinn
> What should that do? Crash the program? If so it should have been renamed to
> something like `panic!` now

It crashes the current task (thread/process), not the whole program, and has
in fact been renamed panic! recently:
[https://github.com/rust-lang/rust/pull/17894](https://github.com/rust-lang/rust/pull/17894)

------
amelius
How mature is Rust and its compiler actually at this moment? Is it in a state
ready to replace C++?

Edit: Also, I missed a good overview of the features present in one language
and lacking in the other. In that respect, I find the Wikipedia page [1]
deeply broken, but that aside.

[1]
[http://en.wikipedia.org/wiki/Comparison_of_programming_langu...](http://en.wikipedia.org/wiki/Comparison_of_programming_languages)

~~~
grayrest
Depends on what you mean by ready. It uses LLVM for all the codegen and perf
is generally in line with C++. The core ideas around
traits/borrowing/inference/mutability have been stable for a while.

What's not stable is everything around the core ideas. Many things are in
flight at the moment in anticipation of 1.0 stability. The core collection
libraries had a major refactor land last night. How error handling works got
changed earlier this week.

If you're interested in writing libraries for a new ecosystem, now is the time
to get into the language. I've tried to get into Rust a couple times since the
0.4 timeframe and I couldn't keep up with the pace of change. My current
attempt started at 0.11, and while there are steady changes, I've found them
to be manageable.

If you want to have idioms and ecosystem relatively settled, hold off until
the 1.2 or 1.3 timeframe (6 week cycles). I say this because a lot of the
current RFCs are split between things that need to happen before 1.0 and
things that need to happen after. This generally means the base functionality
will be in place for 1.0, with the sugar/convenience stuff to come later. I expect
it'll take a few cycles after 1.0 before Rust settles into what most people
are thinking when they think stable.

~~~
StevePerkins
_" What's not stable is everything around the core ideas. Many things are in
flight at the moment in anticipation of 1.0 stability. The core collection
libraries had a major refactor land last night. How error handling works got
changed earlier this week."_

Huh? I'm seeing comments in this thread about people using Rust right now in
_production_. How... why... what?!?

~~~
steveklabnik
There are two big deployments and some smaller ones. The two big ones are
OpenDNS and Skylight.io.

We don't recommend it currently, but some people are just eager. :)

~~~
grey413
I would venture to guess that those organizations feel that the pains of
working with a prototype are worth the payoff of shaping the final product (not
to mention the immediate advantages of Rust's already-stable safety features).

~~~
steveklabnik
Yup. I know much more about Skylight's case, but this is exactly true.

------
toomanymike
I wonder what actor library the author is using with Rust. Given the status of
GitHub issues like
[https://github.com/rust-lang/rust/issues/3573](https://github.com/rust-lang/rust/issues/3573)
I thought there weren't any real options.

~~~
steveklabnik
It's more that Rust _used_ to have that style of concurrency, but it has
changed over the last year.

That said, Rust does encourage message passing by default, but also gives you
the ability to safely do shared-memory concurrency if you need.

------
agapos
I have seen in many cases how Rust's strictness (no null pointers, among other
things) has a positive effect on safety, but I am curious: does this also
change reliability or stability for the better?

I would like to know the thoughts of someone experienced with Rust.

------
fideloper
tl;dr: Guy who knows Rust better than Go prefers Rust over Go.

~~~
oldmanjay
Since it's clear to anyone who read the post that this is an incorrect
kneejerk summary, I am dying to know why. Do you feel an emotional attachment
to Go and have a deep-seated need to preemptively defend it even when no one
is attacking it?

------
jff
"Where line 9 there just blindly assumes the regex found a match, and causes
quite the run-time error message."

You blindly assume the regex found a match, because you ignored the part of
the docs where they tell you how to check for a match:
[http://golang.org/pkg/regexp/#Regexp.FindStringSubmatch](http://golang.org/pkg/regexp/#Regexp.FindStringSubmatch)

~~~
sswezey
The article didn't say there was no way to check, just that the compiler does
not enforce checking. Rust's type system requires that all possible paths are
accounted for when trying to extract the value from the Option.
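A sketch of what that enforcement looks like, using a hypothetical
`find_number` in place of the regex crate's captures so no external dependency
is needed:

```rust
// Stand-in for a regex capture: Some(number) on a match, None otherwise.
fn find_number(s: &str) -> Option<u32> {
    let digits: String = s.chars().filter(|c| c.is_ascii_digit()).collect();
    digits.parse().ok()
}

fn order_id(s: &str) -> u32 {
    // find_number(s) + 1        // rejected: Option<u32> is not u32
    // find_number(s).unwrap()   // compiles, but the possible panic is explicit
    find_number(s).unwrap_or(0) // every path accounted for
}

fn main() {
    assert_eq!(order_id("order 42"), 42);
    assert_eq!(order_id("no digits here"), 0);
}
```

The contrast with Go's `FindStringSubmatch` is that nothing stops you from
indexing the possibly-nil result slice blindly; here the wrapper type makes
the no-match case impossible to ignore.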

