
Rob Pike: Simplicity Is Complicated [video] - sylvinus
http://www.thedotpost.com/2015/11/rob-pike-simplicity-is-complicated
======
knucklesandwich
I strongly disagree with this notion of "simplicity" as being attributable to
scarcity of language features. Some of the languages that I felt were the
easiest to use had quite a number of language features, but had simple
_semantics_. I think Rich Hickey nailed this in his "Simple Made Easy"[1]
talk. Complexity is _not_ about additivity, it's about entanglement.

[1] [http://www.infoq.com/presentations/Simple-Made-Easy](http://www.infoq.com/presentations/Simple-Made-Easy)

~~~
catnaroek
> Complexity is _not_ about additivity, it's about entanglement.

This. And nothing reflects entanglement better than a formal semantics.
English (or any other natural language) always lets you sweep it under the
rug. The only objective measure of simplicity is the size of a formal
semantics.

I expand on this here:
[https://www.reddit.com/r/programming/comments/3sstis/for_bet...](https://www.reddit.com/r/programming/comments/3sstis/for_better_or_for_worse/cx0cp0q)

~~~
pron
> The only objective measure of simplicity is the size of a formal semantics.

If we accept that, then simplicity alone is not a desirable goal. Something
may well be formally simple but at the same time incompatible with human
cognition. Indeed, that may not be objective, but since when do we value
things only by objective measures? That the only _objective_ measure of
simplicity may be the size of formal semantics does not mean that it is the
most _useful_ measure of simplicity (if we wish to view simplicity as
possessing a positive value that implies ease of understanding).

~~~
catnaroek
> If we accept that, then simplicity alone is not a desirable goal.

Agreed. Otherwise, Forth and Scheme would've taken over the world.

> Something may well be formally simple but at the same time incompatible with
> human cognition.

Do you have a concrete example?

> (if we wish to view simplicity as possessing a positive value that implies
> ease of understanding).

I don't particularly fetishize simplicity. What I want is the least-effort
path to writing correct programs. The following features help:

0. Simplicity - smaller formal systems have less room for nasty surprises.

1. Using the right tool for resource management - sometimes it's a garbage
collector, sometimes it's substructural types.

2. Typeful programming - it's an invaluable tool for navigating the logical
structure of the problem domain (see the sketch below).
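
A deliberately small illustration of that last point, sketched in Go for
consistency with the rest of the thread (the UserID/OrderID names are made up
for the example): distinct types encode domain distinctions, so the compiler
catches mix-ups.

    package main

    import "fmt"

    // Distinct named types give domain concepts their own identities, so
    // mixing them up is a compile-time error rather than a latent bug.
    type UserID int
    type OrderID int

    func cancelOrder(id OrderID) { fmt.Println("cancelled order", id) }

    func main() {
        var u UserID = 7
        cancelOrder(OrderID(9001)) // fine
        // cancelOrder(u)          // does not compile: UserID is not an OrderID
        _ = u
    }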

~~~
pron
> Do you have a concrete example?

Off the top of my head, and since we're talking about computation, I'd say SK
combinator calculus. Or Church numerals.
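
For readers who haven't met them, a rough sketch of Church numerals,
transliterated into Go just for consistency with the thread: the whole formal
system is tiny (a number n is "apply f to x exactly n times"), yet even basic
arithmetic gets awkward fast.

    package main

    import "fmt"

    // A Church numeral n is the function that applies f to x exactly n times.
    type Church func(f func(int) int, x int) int

    var zero Church = func(f func(int) int, x int) int { return x }

    func succ(n Church) Church {
        return func(f func(int) int, x int) int { return f(n(f, x)) }
    }

    func main() {
        three := succ(succ(succ(zero)))
        inc := func(x int) int { return x + 1 }
        fmt.Println(three(inc, 0)) // prints 3: decode by counting applications
    }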

> Typeful programming

It is, but it can also be a hindrance. Finding the sweet spot is a matter for
empirical study.

~~~
catnaroek
> I'd say SK combinator calculus. Or Church numerals.

They're a PITA to use, but not because they're hard to understand.

~~~
AnimalMuppet
But for writing actual programs, the complexity of using matters as much as
the complexity of understanding.

(I recognize that this doesn't invalidate the point you are trying to make in
the parent post. They aren't incompatible with human understanding. They're
incompatible with writing programs in a reasonable amount of time, though.)

------
chubot
From reading the abstract, it sounds like this addresses precisely the topic
of "Worse is Better" [1].

There are frequent misunderstandings of this essay -- the argument isn't as
coarse as "crappy software wins", or "release early and often".

The tradeoff is: do you want a simple interface (MIT style) or a simple
implementation (NJ style)? If you want a simple interface, you have to hide a
bunch of complexity underneath. If you want a simple implementation, you punt
on some hard things and expose them to the user.

Despite its authorship, and marketing as a better C (which is the epitome of
NJ style), Go is MIT style! The concurrency model hides a lot of stuff under
the covers: sync vs async syscalls, segmented stacks, etc. And GC is probably
the ultimate MIT-style feature, in that the interface is very simple (just
pretend you have infinite memory), but implementations are tremendously
complex -- basically each GC is unique and its own research project.
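
A minimal sketch of what that interface looks like in practice (illustrative
only): the program below allocates roughly a gigabyte's worth of slices over
the course of the loop and never mentions memory management at all.

    package main

    import "fmt"

    func main() {
        var last []byte
        for i := 0; i < 1000000; i++ {
            last = make([]byte, 1024) // the previous slice becomes unreachable garbage
        }
        fmt.Println(len(last)) // the collector did all the work; the code says nothing about it
    }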

[1] [https://www.jwz.org/doc/worse-is-better.html](https://www.jwz.org/doc/worse-is-better.html)

EDIT: From paging through the slides, this seems to be basically the gist of
the last half, but I don't see that he mentioned "Worse is Better" or MIT vs
NJ style...

~~~
Peaker
Go is a simple language, with 1970's features.

It is very easy to implement, and leaves a lot of the complexity (null
pointers, generics, etc) to the user instead of solving it in the language.
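
To make the null-pointer point concrete, a small hypothetical sketch: nothing
in the types below says whether a *Node may be nil, so the burden of checking
falls entirely on the user's code.

    package main

    import "fmt"

    type Node struct{ next *Node }

    // Every *Node may be nil; the type system doesn't distinguish "list" from
    // "possibly-missing list", so the nil check is the user's responsibility.
    func length(n *Node) int {
        count := 0
        for n != nil {
            count++
            n = n.next
        }
        return count
    }

    func main() {
        fmt.Println(length(&Node{next: &Node{}})) // 2
        fmt.Println(length(nil))                  // 0; forget the check and you panic at runtime instead
    }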

This is a classic example of worse-is-better.

An MIT-style language would be Rust, which indeed struggled to get to v1.0 and
Go vs. Rust plays out quite similarly to what worse-is-better would predict.

~~~
catnaroek
> This is a classic example of worse-is-better.

C won over Lisp, not because “worse is better”, but because C's advantages
over Lisp (performance on cheap machines) were more pronounced than the other
way around (“safety” achieved by means of lots of runtime checking - by no
means was it possible to statically rule out errors). Lispers fancy their
language of choice the pinnacle of computer science, but the falsity of this
claim becomes evident under close scrutiny.

> Go vs. Rust plays out quite similarly to what worse-is-better would predict.

I see a brighter future for Rust in the long run, for the following reasons:

First, Go doesn't do enough for the programmer: Haskell and Erlang are safer
and easier to use, and have much better parallelism and concurrency stories.
What kind of silly value proposition is “safer than C” or “less verbose than
Java”? Rust offers something genuinely new: half-assed attempts at “better
C++s” have existed for a while, but “safer C++s” haven't.

Second, Rust plays much more nicely with other languages. Rust is no more
difficult than C to call from, say, Python. On the other hand, Go seems to be
more of an all-or-nothing proposition: “If you want to use Go libraries, use
them from applications written in Go.”

~~~
themartorana
"...Go doesn't do enough for the programmer"

Seems like a "boo Go" echo chamber in here, but as someone that moved their
entire backend infrastructure to Go, the offering was so compelling it was
worth going against common wisdom and actually rewriting huge pieces of
infrastructure code. And I'm hardly alone - many huge companies felt the same
value proposition and made the same decisions.

Go isn't perfect for many things, but where it's placed itself - among often
headless networking infrastructure projects and backends, it's exceedingly
good at what it offers.

Edit: had repeated myself.

~~~
BurningFrog
> as someone that moved their entire backend infrastructure to Go

What did you move from?

~~~
themartorana
Mostly Python. It's a similar story to Dropbox, CloudFlare, etc.

------
threatofrain
I think that code readability is different from system comprehensibility, and
that readability is merely a subfactor of comprehensibility. Readability often
means the time it takes for me to grok a function, module, or unit of code,
and often what fits on my eyeball.

System comprehensibility is how long it takes for me to learn the minimal set
of atoms in order for me to comprehend an entire system.

Rob Pike criticizes map and reduce and other functional idioms as not machine
simple. And although Rob Pike doesn't say this, I might argue on his behalf
that functional code has a mild reputation for less readability.

But I would argue that functional idioms permit tasteful tradeoffs between
machine simplicity and system comprehensibility. I would also argue that when
a system gets to a certain size, I would sometimes be okay with sacrificing
some readability and machine simplicity for system comprehensibility. And
although I might lose some performance advantages with a functional approach,
the improvement to system comprehension might leave the human with more
cognitive capacity to optimize for distributed multi-core contexts, which
might offset the losses from machine simplicity.

~~~
pcwalton
> I would also say that although I might lose some performance advantages with
> a functional approach, the improvement to system comprehension means better
> human optimization of distributed multi-core contexts.

You don't even need to bring in parallelism for functional approaches to gain
in performance. Here's an example (in Rust) of how you'd implement collecting
an iterator into a vector using the "machine simple" approach:

        let mut array = vec![];
        for element in some_iterator {
            array.push(element);
        }

Now for the functional, less "machine simple" approach:

        let array: Vec<_> = some_iterator.collect();

Which is faster? Without knowing the details of the compiler, you might think
the former is faster, because there are fewer function calls and less
indirection. But this is wrong, as the compiler will trivially inline
"collect" for you and eliminate any indirection.

In fact, it turns out that the second approach is often _significantly_
faster, because the standard library contains the equivalent of:

        let (lower_bound, _) = some_iterator.size_hint();
        let mut array = Vec::with_capacity(lower_bound);
        for element in some_iterator {
            array.push(element);
        }

That is, it uses the size_hint() method to avoid reallocating during the
collection operation, reducing the number of memory allocation and array copy
operations to O(1) instead of O(log n).

What are the chances that you'd do this important optimization manually? Not
high: it adds noise, it's esoteric, and it's easy to forget. But if you use
the functional idiom, you benefit from the optimized approach every time.
_That_ is the true benefit of abstraction, which is really the key point here.
"Machine simple" and "fast" are frequently in _opposition_ , and too often
people think that the former implies the latter.

~~~
lomnakkus
An excellent example. This is also one of those things that is hard to profile
away, because it leads to a very flat profile: you're probably forgetting to
use with_capacity()/reserve() _all over the place_, not just in single
very-costly cases[1]. So you end up wasting, say, 1% of CPU time all over the
place... add up enough of these time-wasters and soon you're going to be
wasting significant amounts of CPU time for no good reason.

Obviously, this kind of thing isn't a magic bullet, but in general I'd say the
key for any kind of program is to express your _intent_ at the highest level
possible. This benefits both optimization _and_ human readers! (That obviously
means that the programming language must _support_ such abstraction in a non-
leaky way.)

[1] Which _would_ show up prominently on a profile.

~~~
tbrownaw
Not really... the wasted time is still all malloc() / realloc() / whatever
under array.push(), and a half-decent callgraph profiler will catch that.
array.push() will score far higher than it ought to, and the bulk of its time
will be the reallocation.

Source: I've used oprofile, and later perf, to hunt exactly this kind of
performance bug.


Of course, _fixing_ a pervasive issue like that is much more tedious than just
_finding_ it. :-(

------
i_s
This talk has a lot of very weak points. He made the claim that more features
hurt readability (~6:10), because when you are reading you have to waste time
thinking about why the programmer chose the set of features he did to write
the code. To make that kind of claim without qualifications is just ridiculous
- if it were true, why add any feature to a programming language at all?
Especially if one believes readability is the most important thing in a
programming language, which Rob Pike says he does. It may be true that _some_
features hurt readability, but it is obviously not the case that all features
hurt readability for all people all of the time.

It is also strange that he makes so many claims about readability, considering
it is the most subjective attribute of a programming language there is. For
example, some code that would have been completely unapproachable to me a
couple of years ago is perfectly readable to me today, because I've learned
new concepts.

~~~
kevinr
> because when you are reading you have to waste time thinking about why the
> programmer chose the set of features he did to write the code.

It seems to me that what Pike is getting at here is how easy it is to build
mental models of the code, the language, the underlying system, down through
all its layers, and, yes, the code's original author and any subsequent
authors.

I share your frustration at the subjectivity of readability claims. There's
actually starting to be good academic research on code readability, for
instance this paper:

[https://www.cs.virginia.edu/~weimer/p/weimer-tse2010-readability-preprint.pdf](https://www.cs.virginia.edu/~weimer/p/weimer-tse2010-readability-preprint.pdf)

~~~
kevinr
Oh, or this investigation into programming language syntax for learnability!

> Recent studies in the literature have shown that syntax remains a
> significant barrier to novice computer science students in the field. While
> this syntax barrier is known to exist, whether and how it varies across
> programming languages has not been carefully investigated. For this article,
> we conducted four empirical studies on programming language syntax as part
> of a larger analysis into the, so called, programming language wars.

[http://neverworkintheory.org/2014/01/29/stefik-siebert-syntax.html](http://neverworkintheory.org/2014/01/29/stefik-siebert-syntax.html)

------
tbrownaw
I'm trying to think how the simple/complex from this talk compare to the
simple/complex from "Simple Made Easy".

The part about how features should be orthogonal does seem similar to the "not
entangled" (and to DRY / SRP).

The part about visible surface area -- garbage collection is "simple" because
you can't explicitly talk to it -- seems to be a closer match for "easy" tho,
as do the examples (smell test: anything with "and" in the name probably means
there are entangled concepts).

...huh. Which means that his concept of "simple" _does_ in fact contain
multiple things entangled together, or in other words is complicated. ;-)

------
lIlIlIlIIlI
I can't even begin to measure the amount of time I've wasted trying to make Go
code look as concise as what I can have trivially in C, where things like
macros and do { } while() are available.
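
For anyone who hasn't hit this: a small hypothetical example of the kind of
rewrite I mean, since Go has no direct do { } while() equivalent and the
bottom-tested loop becomes an infinite for with an explicit break.

    package main

    import "fmt"

    func main() {
        n := 982451653
        var digits []int
        // C: do { digits = append(digits, n%10); n /= 10; } while (n > 0);
        // Go's rendering: an infinite loop with an explicit bottom exit.
        for {
            digits = append(digits, n%10)
            n /= 10
            if n == 0 {
                break
            }
        }
        fmt.Println(digits)
    }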

Instead of "simplicity" saving me time as Mr. Pike suggests, it leaves me
dissatisfied with the results after wasting my time in attempts to prevent my
dissatisfaction.

This probably isn't an issue for someone new to programming without the
expectations experience with other languages may bring, but for a seasoned C
programmer Go often feels like a regression.

IMHO, what largely makes Go useful over C is the language-level support of
coroutines - not because of the simplicity of the "go" keyword or channels,
but because it eliminates the need to design and integrate with snowflake
event loops or the specifics of blocking/non-blocking file descriptors.

As a result, packages are written without concern for how the caller's event
loop will be integrated: you just write blocking code everywhere, and it can
be used by every Go program with significantly less potential for impedance
mismatch. The garbage collection and overall elimination of the need for
allocating and freeing memory also contribute to this; no longer do you have
to understand the resource life-cycle strategy of some C library you've
decided to use, or whether it's using the C library's allocator or its own, or
whether it's capable of using your allocator via some elaborate and unique
initialization scheme.
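
As a concrete (if toy) sketch of that style, an echo server: every line reads
as blocking code, and nobody ever integrates with an event loop.

    package main

    import (
        "io"
        "log"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept() // reads as blocking; the runtime parks the goroutine
            if err != nil {
                continue
            }
            go func(c net.Conn) {
                defer c.Close()
                io.Copy(c, c) // a blocking echo: no callbacks, no event-loop plumbing
            }(conn)
        }
    }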

Unfortunately, Golang's concurrent execution model is also what makes it a
pretty terrible language for systems programming. Golang's inescapable and
largely opaque underlying use of threads is a constant source of pain for
anyone programming operating system facilities which influence only the
calling thread's state, particularly state inherited by child threads.

I often encounter misuse of runtime.LockOSThread() in attempts to overcome
this problem, but as of now this is completely inadequate, because the Go
scheduler creates new OS threads on an as-needed basis from whichever OS
thread happens to be executing at the time. This leaks whatever unique state
the thread may have into a newly created, unlocked OS thread, which may then
go execute any goroutine - likely the very thing you were attempting to
prevent by using runtime.LockOSThread().
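
For concreteness, the commonly attempted mitigation looks something like this
sketch: funnel the thread-state-sensitive work through a single goroutine that
locks its thread for life. The problem described above still applies, though -
the runtime may clone a new thread from the tainted one while it runs.

    package main

    import "runtime"

    func main() {
        work := make(chan func())
        done := make(chan struct{})

        go func() {
            runtime.LockOSThread() // pin this goroutine to one OS thread, and never unlock
            for f := range work {
                f() // calls that mutate per-thread state (e.g. namespaces) go here
            }
            close(done)
        }()

        work <- func() { /* thread-state-sensitive syscall would go here */ }
        close(work)
        <-done
    }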

There's plenty more to whine about, but I'll stop here.

~~~
zaphar
As someone who reads far more C code than I write, those macros you seem to be
missing infuriate me. They hide far too much from me when I'm trying to
understand the code I'm reading.

And as far as do and while go, why would you hide part of the state
determining when your loop exits at the _end_ of the loop, just to save a few
lines at the top? I am going to naturally try to read your code from top to
bottom, not top to bottom to top. The more you make me skip around, the longer
it will take me to understand what you are doing. 90% of my job is maintaining
code, not writing new code. Programmers who write as if that isn't the case
just make the long-term cost of maintaining the software they write larger.

------
aikah
I absolutely agree with all Rob Pike says. But in practice the problem is
still the trade-off. How limited should a language be? What are the features a
language should absolutely have to make the life of the developer easy, yet
still give him some room to express himself the way he wants? I think that Go
failed if one considers that trade-off. Often with Go, a developer ends up
doing the job of the compiler, because its flat type system doesn't allow a
certain degree of expressiveness (or worse, forces the developer to use
reflection, i.e. putting aside the type system altogether and using a
"generic" 'interface {}' type).

While C++ is one extreme in terms of complexity, the other extreme, Go, is
problematic as well. I've yet to find a language that has a good balance
between the two. But Go certainly showed that the tooling is as important as
the language itself, and that's a great part of its success.

"Philosophically", no question, I agree with Rob Pike. In practice, things are
a bit more nuanced.

------
theGimp
I love the vector space analogy.

I have been thinking about the same thing: that the simplest languages have a
minimal set of "orthogonal" features; though it never occurred to me to
describe them in those terms.

But hey, it's Rob Pike. I was bound to walk away with something after
listening to him talk :)

------
infogulch
The _mere presence_ of choice incurs a tangible mental cost, _even if that
choice is not exercised_.

In the case of programming languages, this is not only a development cost,
it's a maintenance cost as well. But this concept seems to apply to many more
things than just programming.

~~~
Ace17
Choices are made at the time of writing the code. And complex code has proven
to be very easy to write. So the mental cost of all the existing choices must
be minimal.

------
jnpatel
For another interesting talk on "extracting simplicity" vs. "mastering
complexity", listen to Scott Shenker motivate Software Defined Networking:

[https://youtu.be/WVs7Pc99S7w?t=375](https://youtu.be/WVs7Pc99S7w?t=375)

~~~
euyyn
Nice point; thanks for the link!

------
cronjobber
Some quibbling here in the comments about whether Pike uses the right
definition of "simple". There is of course a context to Pike's notion of the
word:

> The key point here is our programmers are Googlers [...] They’re not capable
> of understanding a brilliant language but we want to use them to build good
> software. So, the language that we give them has to be easy for them to
> understand and easy to adopt.

[http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/From-Parallel-to-Concurrent](http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/From-Parallel-to-Concurrent)

~~~
knucklesandwich
That quote is a hilariously denigrating claim to make about your coworkers.

Even if it were true (which I don't believe for a second), you've created a
language released to the community at large. Your users are not just Google
engineers. If you want to encourage broader acceptance of your language,
you're going to need to make a good-faith attempt to listen to their requests.

~~~
emmelaich
I think it would be taken in the spirit in which it was intended. Programming
is hard (even when you take into account
[https://en.wikipedia.org/wiki/Hofstadter%27s_law](https://en.wikipedia.org/wiki/Hofstadter%27s_law)).

Also, I'm pretty sure the "brilliant" is meant somewhat sardonically.

~~~
knucklesandwich
If this is all tongue-in-cheek, then what does it really amount to? A
good-natured ribbing of Google employees? I'm in total agreement that
programming is hard, but I don't think it's hard because of languages that
pack in features; it's hard because of languages that don't ensure reasonable
semantics or provide you the tools to enforce them. Programming is hard
because it's inherently powerful... languages that allow you to circumscribe
that power are what keep me sane, not particularly the ones you can fit on an
index card.

------
merb
I think most of his talk is wrong.

Or at least wrong when we talk about general programming.

Go isn't a 'bad' language. Go is simply another tool for one job, and it does
its job well because of its simplicity.

However, not every problem is so easy to solve with this simplicity. Some
problems don't fit well with the Go approach. And I always hate it when people
talk about "why x is superior because of fewer/more features" - mostly, things
emerge when the problem emerges.

------
brightball
I agree with him in principle although I agree with a lot of the other
counterpoints made here too.

The biggest reason that features get added to languages to make them
ubiquitous is that in MANY cases companies marry their infrastructure to a
particular language. With that infrastructure commitment you end up with an
investment in doing everything with that particular language, so if another
language has a feature or methodology that would be good for a particular use
case, your existing investments prevent you from being able to EASILY
introduce a new language just for that feature.

Microservice architecture goes a long way toward countering those investments.
Heroku actually does a great job of removing language specific infrastructure
investments from your process that make it easier to introduce language-per-
purpose approaches.

Wrote a short article on it for Codeship a couple of months ago.
[http://blog.codeship.com/exploring-microservices-architecture-on-heroku/](http://blog.codeship.com/exploring-microservices-architecture-on-heroku/)

------
LaPingvino
There are several points where he deliberately doesn't name the language.
Which languages do you think he meant?

~~~
catwell
Probably C++ or Java (Go was written to replace them at Google).

~~~
slowmovintarget
The first mention sounded like C++. For verbosity (the second mention), I'd
guess Java. The third mention sounded like JavaScript, where he said something
to the effect of unexpected tangling, but those are guesses based on what
Google uses and Rob's brief description.

------
trymas
[https://youtu.be/rFejpH_tAHM?t=23s](https://youtu.be/rFejpH_tAHM?t=23s)

haha, that's a good one :)

