
Rust via its Core Values - pcwalton
http://designisrefactoring.com/2016/04/01/rust-via-its-core-values/
======
Animats
This should be titled "A Ruby programmer looks at Rust". There are now a very
large number of programmers who have zero experience with anything below the
level of a scripting language, and have never had to think about memory
allocation. To them, Rust is rather intimidating, and this writer is trying to
make it simpler.

Unfortunately, it looks like he doesn't really understand borrowing. The key
idea of borrowing is that a borrowed reference can't outlive the thing it came
from. It's a lifetime thing, not an immutability thing.
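
To make that lifetime rule concrete, here is a minimal Rust sketch (the
identifiers are invented for illustration). A borrow is fine for as long as
its owner lives; a borrow that tries to escape its owner's scope is rejected
at compile time:

```rust
fn main() {
    let owner = String::from("hello"); // `owner` owns the String
    let borrowed = &owner;             // a borrow: no ownership transfer
    assert_eq!(borrowed.as_str(), "hello"); // fine: the borrow ends before `owner` does

    // The compiler rejects any borrow that would outlive its owner:
    // let escaped = {
    //     let short_lived = String::from("temp");
    //     &short_lived // error[E0597]: `short_lived` does not live long enough
    // };
}
```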

Go is a hard-compiled language compatible with the mindset of scripting
language programmers. That's just what Google needs. They have a lot of server
side code to write, and it has to go reasonably fast or they have to build
additional acres of data centers. With Go, they can put programmers from the
scripting language community on the job.

~~~
nickpsecurity
Your analysis is good except for that last part. Google hires tons of geniuses
who could handle Rust easily with a bit of training. The extra efficiency would
reduce datacenter requirements even more. Most of their tools are also
internal, where the scripting community has minimal input.

So, I don't see a clear benefit for them to choose Go over Rust except saving
face. They'd be better off investing in both for various reasons with critical
stuff like F1 RDBMS in Rust and rapid dev in Go.

~~~
arto
"The key point here is our programmers are Googlers, they're not researchers.
They're typically, fairly young, fresh out of school, probably learned Java,
maybe learned C or C++, probably learned Python. They're not capable of
understanding a brilliant language but we want to use them to build good
software. So, the language that we give them has to be easy for them to
understand and easy to adopt."

\-- Rob Pike ([http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Fro...](http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/From-Parallel-to-Concurrent))

~~~
jholman
Rob Pike has his ambitions. Great. Meanwhile, almost no one at Google uses Go.
They use C++ and Java almost exclusively. Not Python either, outside of
Youtube. (There's a reason GvR left.... all of the stuff he was proud of
writing in Python got re-written, e.g. Mondrian.)

Pike is right that a significant fraction of Googlers are young and fresh out
of school, and aren't 'researchers' (like maybe 33%?). They're also crazy
smart, and can learn whatever language you need them to. Note that Google
makes new grads learn a new storage model (BigTable etc), learn a bunch of
custom infrastructure (everything except compilers and VCS is custom-built),
learn all sorts of crazy shit. What Google does with these facts about their
developers is that it uses C++ for huge-scale things (underlying
infrastructure like BigTable, or massive-scale products like Search) and Java
for the less-incredibly-demanding. Not Go. Unless you're Rob Pike and someone
asked you if you could take care of something that needed a rewrite anyway.

It only makes sense to talk about Go as a Google language if you are under the
mistaken impression that there are about 40 engineers at Google.

Look at
[https://golang.org/doc/faq#Is_Google_using_go_internally](https://golang.org/doc/faq#Is_Google_using_go_internally)
That's basically an admission of defeat. "We eat our own dogfood, and also a
few tiny projects use it (one of which is tiny but super important)." Note
that "scaling MySQL" is not exactly a long-term priority in a company that
built three or four alternatives to MySQL, all superior.

It's possible that, if Google were starting from scratch, ideas like "write
the fast stuff in Rust and the less-urgent stuff in Go, which is still fairly
fast" would be good choices. Sounds like a good idea to me. But the
value proposition of Go over Java is, I suspect, not big enough to bother
turning the enormous ship.

I'd _love_ to hear any current Googlers chime in about a Google product
written in Go that actually matters and took more than a hundred engineer-
hours to write.

~~~
jzelinskie
The linked section of the FAQ was written a few years ago. I'm not a Googler,
but I can tell you some things visible from the outside.

>Note that "scaling MySQL" is not exactly a long-term priority in a company
that built three or four alternatives to MySQL, all superior.

It might be more valuable than you think since GCE sells a hosted MySQL
service. I'm speculating here; they could be using something else that's
MySQL-compatible, but I doubt it.

>I'd love to hear any current Googlers chime in about a Google product written
in Go that actually matters and took more than a hundred engineer-hours to
write.

Kubernetes is a non-trivial Go project largely done by Google that's important
to the future of GCE.

~~~
jholman
Thank you, those both seem like convincing evidence that, at minimum, it's not
_quite_ as bleak as I had believed.

~~~
TheDong
Note, however, that kubernetes is _not_ used within Google (rather, they use a
C++ alternative).

Part of why kubernetes uses Go is that it's in a docker-centric container
ecosystem. Docker picked Go for its own reasons, not because of Google, so in
a way, Kubernetes is picking Go because of the outside community, not because
of Google or the language itself.

~~~
jzelinskie
GCE operates GKE which is a hosted Kubernetes as a service. I'm not saying
that Google is using Kubernetes instead of Borg, but that doesn't mean they
don't use Kubernetes at all.

~~~
nickpsecurity
The parent's explanation makes more sense. Note that the key use case they
mentioned for that was an offering for the Docker ecosystem:

[http://www.infoq.com/news/2014/11/google-cloud-container-eng...](http://www.infoq.com/news/2014/11/google-cloud-container-engine)

So, people are using Kubernetes in the Docker community. Google offers a GCE
service for that. This seems incidental to Go with community and demand really
driving its use. That they avoid it internally where possible is still a
strong point given the large ecosystem developing around Go. Assuming it's
true that they avoid it for C++ and Java of course.

------
abritishguy
I like this a lot. It's always tempting to think that we can create a perfect
language that would be the best in all situations, but we can't.

Rust is great for projects where its values are valued:

\- Speed

\- Memory Safety

\- Concurrency

Go is great for projects where its values are valued:

\- productivity (from being able to immediately understand a new code base,
thanks to the small and simple language, to the toolchain it provides)

\- concurrency

Likewise, Ruby and Python are great choices for some projects.

~~~
Pyxl101
> it's always tempting to think that we can create a perfect language that
> would be the best in all situations but we can't

Why can't we design a language that would be the best in all situations? I see
no necessary reason to believe that's impossible, though I recognize that it
has not yet been done.

It may be impossible to create a car that is also an excellent submarine and a
great airplane, but software is not subject to the same kind of physical
limitations. I see no reason in principle why a perfect language could not be
malleable enough to accommodate all situations optimally. I realize that my
writing here is not a convincing assertion that such a language _can_ exist. I
recognize that. I'm just saying that there doesn't seem to be strong evidence
that it _can 't_ exist or that we should give up trying to design it.

A person who wishes to argue that the perfect language can't exist should
perhaps give an example of two different problems that cannot be solved well
by any single known language, and that can be solved much better by two
programs in two different languages. I would like to dissect that example and
see if we can indeed find such examples of sets of problems that cannot be
solved well by one language. If we cannot find such problem sets, then that
may suggest that we can indeed "unify" programming under one perfect language,
by tweaking its syntax and semantics appropriately.

~~~
jerf
Tacking on another example to kibwen, while a language can be multiparadigm,
where one paradigm is in conflict with another, one of them must win. Further,
the language generally must choose some default values (as in philosophical
values, not variable values) which will become ingrained into the core library
and shape the entire rest of the ecosystem.

If you agree on the utility of multiple paradigms, then there cannot be one
language that does them all. If you're going to be a logic language like
Prolog, you're going to have to privilege syntax and semantics to make that
work. If you're going to be a query language like SQL, you're going to have to
privilege syntax and semantics to make that work.

If you agree with the utility of multiple different language values, you can't
have one language that does them all equally well. If your language permits
mutability, it will work its way throughout all the standard language code and
all the library code, and once you have that you basically can't write
immutable programs anymore. If your language is based on immutability, all the
library code will be based on that and it will be difficult to drop that last
"log n" factor that immutable code often imposes on your runtime, plus some of
the other characteristics it has that can cause trouble with things like cache
coherency. In both cases you can layer something else on top of the core
language that may recover the functionality, but the second layer addition
will always come with a lot of caveats, sharp edges, and poor interaction with
the rest of the standard library. There are many of these dimensions where a
language must pick one place on the spectrum, and even if you pick "in the
middle" that never ends up meaning "the best of both worlds with none of the
drawbacks!"... it's just a point in the middle.

"A person who wishes to argue that the perfect language can't exist should
perhaps give an example of two different problems that cannot be solved well
by any single known language"

I want high reliability code that I would literally trust my life to, that can
be verified both by the compiler and by any other arbitrary external tool that
may assert proofs of things that may be desirable, such as "it never crashes
as long as the hardware operates correctly" and "there is never a null pointer
exception".

I had a cool idea last night, and I want a website up that implements it by
next week.

You will never be able to bridge the gap between someone who wants to slap
something together quickly and someone who wants to write provably correct
code. I don't think we're at the Pareto optimality frontier yet, and there are
still some improvements to be made in general, but even after those
improvements, the two ends of that spectrum will forever be separated by a
huge gulf, and no one language could possibly straddle it. You can't
practically script in Idris or Coq and you can't practically write provably-
correct code in a dynamic scripting language. In either case, by the time you
made the changes that might permit it, you would no longer have the original
language. For example, TypeScript takes many steps towards letting you write
reliable JavaScript, but it's no longer the same language anymore; it's a new
one with a JS compile target.

~~~
nickpsecurity
"I want high reliability code that I would literally trust my life to, that
can be verified both by the compiler and by any other arbitrary external tool
that may assert proofs of things that may be desirable, such as "it never
crashes as long as the hardware operates correctly" and "there is never a null
pointer exception".

I had a cool idea last night, and I want a website up that implements it by
next week."

We're closer to that than you think. The pieces of it are just scattered in
quite a few CompSci subfields. Some are done, some getting closer, and a ton
of integration work will be necessary to pull it all together. There will
still be a gap between the two. Yet, it's nowhere near as big as people think
with the right tooling available.

An example of the first such tool, from the same woman who founded robust
software engineering:

[http://htius.com/Product/Product.htm](http://htius.com/Product/Product.htm)

A NASA evaluation indicated difficulty with the notation, a performance hit,
and something else. The thing is, modern work in each of the areas that report
griped about has gotten to the point where those gripes should be eliminated.
There's also more automation available in some areas, with better results.
Good DSLs and tools like Cyc showed that the body of knowledge necessary for
semi-automating the dev process can be developed. Hell, so did StackOverflow's
cut-and-paste-driven development.
So, the dream is a dream but a similar reality isn't far off if the labor is
put in. :)

~~~
jerf
Like I said, I don't think we're at the optimal point yet.

However, when it really comes down to it, writing reliable software and
bashing some libraries and bits and pieces together require fundamentally
different _mentalities_. To see that clearly, note how you can learn in just a
few weeks to bash libraries together to do some nifty things, but to learn how
to write high-quality provable code is always going to be a multi-year
enterprise, even for very, very smart people. And you're never going to be
able to get rid of that "bashing together" use case... you can offer Bob the
manager a spreadsheet that will only produce provably correct spreadsheets for
some suitable definition of "provably correct", but Bob ain't gonna use it.
It's way more work than he's interested in doing.

A programming language can't force the user to care. If the programming
language requires a level of caring in excess of what the user wants, they
will use another programming language.

~~~
nickpsecurity
That's true. The mentality is more important. There will consistently be
people who barely care.

------
catnaroek
No complaints about the meat of the article, just with two introductory
remarks:

> For example, Ruby famously values Developer Happiness and that value has
> impacted Ruby’s features.

Really? Every time I give Ruby another chance (admittedly, not too often), I
end up feeling angry. I can stare at five lines of code for several minutes
and have no idea what it will do. Not even Haskell does that to me.

> Ruby also protects you from segmentation fault errors. But to do so it uses
> a garbage collector. This is great, but it has a big impact on your
> program’s speed.

Is garbage collection the main culprit? Not duck typing? Not dynamic
metaprogramming? I reckon performing a hashtable lookup during every single
method call has a bigger effect on performance than garbage collection.

~~~
pcwalton
> I reckon performing a hashtable lookup during every single method call has a
> bigger effect on performance than garbage collection.

Well, a high-performance Ruby VM will perform inline caching to mitigate this.
And if the VM isn't a high-performance VM (i.e. it's interpreted), then the
overhead of the interpreter dispatch loop dominates everything.
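
For readers unfamiliar with the technique, here is a toy sketch of a
monomorphic inline cache in Rust. The object model and all the names are
invented for illustration; a real VM bakes the check into generated code. The
idea is that a call site remembers the last class it dispatched on and the
method that lookup resolved to, so repeated calls on same-class objects skip
the hashtable lookup entirely:

```rust
use std::collections::HashMap;

type Method = fn(i64) -> i64;

// Toy dynamic object model: every object carries a class id, and every class
// has a hashtable of methods -- the lookup a naive interpreter does per call.
struct Class { methods: HashMap<&'static str, Method> }
struct Obj { class_id: usize, value: i64 }

// One call site's inline cache: the last class seen plus the resolved method.
struct CallSite { cached: Option<(usize, Method)> }

impl CallSite {
    fn call(&mut self, classes: &[Class], obj: &Obj, name: &str) -> i64 {
        let method = match self.cached {
            // Cache hit: same class as last time, no hash lookup needed.
            Some((class_id, m)) if class_id == obj.class_id => m,
            _ => {
                // Cache miss: do the slow hashtable lookup, then fill the cache.
                let m = classes[obj.class_id].methods[name];
                self.cached = Some((obj.class_id, m));
                m
            }
        };
        method(obj.value)
    }
}

fn double(x: i64) -> i64 { x * 2 }

fn main() {
    let mut methods = HashMap::new();
    methods.insert("double", double as Method);
    let classes = vec![Class { methods }];
    let obj = Obj { class_id: 0, value: 21 };
    let mut site = CallSite { cached: None };
    assert_eq!(site.call(&classes, &obj, "double"), 42); // miss: lookup + fill
    assert_eq!(site.call(&classes, &obj, "double"), 42); // hit: skip the lookup
}
```

Production VMs extend this shape to cache several classes per site (a
polymorphic inline cache), but the structure is the same.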

~~~
catnaroek
I'm admittedly not very familiar with compiler backends - especially not JIT
compilers. But how does this optimization even work if I'm constantly
modifying the method tables of objects at runtime? If I understand correctly,
ORMs and Web frameworks for dynamic languages do this sort of thing all the
time.

~~~
cwzwarich
In a JIT for a dynamic language, you generally have to predicate your more
aggressive optimizations on some runtime condition, e.g. "this variable is
actually an integer", or "this object has the same method layout as other
objects did before". These conditions are checked before relying on the
optimization, and if they fail, the generated code jumps into a runtime that
may decide to use an interpreter, generate deoptimized code, etc.
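
That guard-then-bail-out shape can be sketched in Rust (names invented; a
real JIT emits machine code for the fast path rather than a function):

```rust
// A toy dynamic value, as a JITed language runtime might represent one.
#[derive(Debug, PartialEq)]
enum Value { Int(i64), Str(String) }

fn as_text(v: &Value) -> String {
    match v {
        Value::Int(x) => x.to_string(),
        Value::Str(s) => s.clone(),
    }
}

// The generic, always-correct slow path (what the interpreter would do).
fn generic_add(a: &Value, b: &Value) -> Value {
    match (a, b) {
        (Value::Int(x), Value::Int(y)) => Value::Int(x + y),
        _ => Value::Str(as_text(a) + &as_text(b)), // e.g. concatenation
    }
}

// "Speculatively optimized" code: guard on the runtime condition ("both
// operands are actually integers"); if the guard fails, deoptimize by
// falling back to the generic path.
fn speculative_add(a: &Value, b: &Value) -> Value {
    if let (Value::Int(x), Value::Int(y)) = (a, b) {
        return Value::Int(x + y); // fast path: a plain machine-integer add
    }
    generic_add(a, b) // guard failed: bail out of the optimization
}

fn main() {
    assert_eq!(speculative_add(&Value::Int(2), &Value::Int(3)), Value::Int(5));
    assert_eq!(
        speculative_add(&Value::Str("a".into()), &Value::Int(1)),
        Value::Str("a1".into()) // guard failed; the slow path ran
    );
}
```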

~~~
catnaroek
This actually sounds more complicated than thinking ahead about how to define
your abstractions in more precise terms. (So that you don't need to perform
optimistic optimizations that you might later have to roll back.)

~~~
icebraining
I don't understand, who should define such abstractions? If you're the JIT
developer and you define a value encoding that supports all possible values
(so that the JITed function never has to be rolled back), then you can't
really optimize.

For example, if the JIT has the code "function(a, b) { return a+b; }", and it
sees that the function has been called 50 times with a and b as integers, it
can generate a machine code implementation that is just a couple of
instructions adding two registers.

If you keep the abstractions wide enough to support other values, you can
never achieve this efficiency.

~~~
catnaroek
> I don't understand, who should define such abstractions?

The language designer.

> If you're the JIT developer and you define a value encoding that supports
> all possible values (so that the JITed function never has to be rolled
> back), then you can't really optimize.

Obviously, the solution is to know the types of values ahead of time. This is
precisely where a language with more precise abstractions can make the lives
of implementors easier.

~~~
icebraining
Well, yes, but the whole point is to make the lives of the end programmers
easier, not of the language implementors. If you want to avoid forcing people
to manually tag each variable, the implementors either need to write an
optimistic JIT or a really good type inference engine.

~~~
catnaroek
More precise abstractions benefit end programmers too:

(0) They increase the effectiveness of code as a communication medium between
programmers. To use an analogy: If I tell you a story, you don't need to hire
actors to enact it, just to understand the plot. You can just use your
knowledge of the English language, right? Imagine if we could do the same with
code: programmers conversing in code, because they understand it natively,
without having to “interpret” it (in a REPL, using a test suite, etc.).

(1) They decrease the burden of anticipating, preventing and handling things
that may go wrong. Again, to use an analogy: Defensive programming in a
language with imprecise abstractions is like spending 75% of a construction
project's budget on buying insurance.

The fact that precise abstractions usually lend themselves to simpler and more
elegant implementations than imprecise abstractions, well, it is just a nice
additional benefit.

~~~
icebraining
Clearly there are millions of end programmers who disagree with you, since
they could be using a language with more precise abstractions and yet they
choose not to. As one of them, my claim is that you need to eliminate the
burden of manually tagging each value, because that effort usually isn't worth
it. And eliminating that burden while keeping precise abstractions is not an
easy task for implementors.

------
vaibhavkul
Maybe this is off topic, but while reading this article it occurred to me:
wouldn't it be useful if ownership transfer were syntax-highlighted?

e.g. in

      let new_owner = original_owner;
      println!("{}", original_owner);

we could have original_owner on the second line have a different color
signifying it doesn't own anything.

Or, does such a syntax highlighter already exist?

~~~
kam
Atom with linter-rust highlights compile errors:
[http://i.imgur.com/ac7FTpW.png](http://i.imgur.com/ac7FTpW.png)

Or do you have a more specific visualisation for moves/ownership in mind?

~~~
vaibhavkul
Didn't know about linter-rust. Thanks for sharing.

It didn't occur to me that ownership transfer means the variable cannot be
used, so if it's used there's an error, and checking for the error itself
gives you the information that you cannot use it. So highlighting the _error_
can be used instead of highlighting the specific _ownership transfer_.

I don't know of any other specific cases where there's no error, but it would
be helpful to have a different color for certain ownership semantics.

------
andrewvijay
"Of course, Rust developers also wants people to write programs in Rust. So we
can declare things to be mutable if we really need to." \- I laughed at this
harder than I should have :D Really a good one!

------
bogomipz
Could someone explain this - "Ruby also protects you from segmentation fault
errors. But to do so it uses a garbage collector."?

How does GC protect you from a segfault?

Surely that doesn't help you when you attempt to access the 5th element of an
array that has only two elements. Maybe it's just confusing that the author
used that as an example. I'm guessing that, aside from a dangling pointer, GC
doesn't offer much protection from a segfault?

~~~
catnaroek
Of course, garbage collection per se doesn't protect you from segfaults. For
example, there exist garbage collectors for C. They reclaim unused memory, but
you can still cause segfaults if you want to (or are careless).

Automatic memory management (whether done at runtime with a garbage collector,
or at compile time as in Rust) frees you from logical memory errors by making
them inexpressible. But automatic memory management doesn't guarantee that
segfaults won't happen - it only guarantees that they won't matter to the
programmer. For example, a garbage collector could deliberately trigger
segfaults (by `mprotect()`ing all managed memory) to forcibly pause the
mutator during a garbage collection cycle. Every managed thread must install a
`SIGSEGV` handler that waits until the collection process is done.
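
A bare-bones illustration of that trick in Rust, with the constants and
`extern` declarations hand-written for Linux (the `MAP_ANONYMOUS` value is
platform-specific, and the `SIGSEGV` handler itself is elided, so this is a
sketch of the mechanism, not a production barrier):

```rust
use std::os::raw::{c_int, c_void};
use std::ptr;

extern "C" {
    fn mmap(addr: *mut c_void, len: usize, prot: c_int, flags: c_int,
            fd: c_int, offset: i64) -> *mut c_void;
    fn mprotect(addr: *mut c_void, len: usize, prot: c_int) -> c_int;
}

const PROT_NONE: c_int = 0;
const PROT_READ: c_int = 1;
const PROT_WRITE: c_int = 2;
const MAP_PRIVATE: c_int = 0x02;
const MAP_ANONYMOUS: c_int = 0x20; // Linux value

fn main() {
    unsafe {
        // "Managed heap": one anonymous page.
        let page = mmap(ptr::null_mut(), 4096, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        assert_ne!(page as isize, -1, "mmap failed");

        *(page as *mut u8) = 42; // the mutator writes freely

        // Collection starts: revoke all access to pause mutators.
        assert_eq!(mprotect(page, 4096, PROT_NONE), 0);
        // *(page as *mut u8) = 43; // would now deliver SIGSEGV to this thread,
        //                          // trapping into the runtime's handler

        // Collection done: restore access; mutators resume where they trapped.
        assert_eq!(mprotect(page, 4096, PROT_READ | PROT_WRITE), 0);
        assert_eq!(*(page as *const u8), 42);
    }
}
```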

~~~
bogomipz
"For example, a garbage collector could deliberately trigger segfaults (by
`mprotect()`ing all managed memory) to forcibly pause the mutator during a
garbage collection cycle. Every managed thread must install a `SIGSEGV`
handler that waits until the collection process is done."

Interesting, is this actually a common implementation pattern for languages
that have a GC built into their run time?

~~~
samth
I don't think the exact thing you describe is common, but plenty of garbage
collectors use memory protection to implement the write barrier. This is
particularly useful when integrating with arbitrary code you don't control,
since it will also be affected by memory protection.

------
dschiptsov
Mixing some C++ into Standard ML is not necessarily a good idea, like mixing a
bit of shit into a bucket of honey.

