
Blue. No, Yellow - nikbackm
http://blog.cleancoder.com/uncle-bob/2016/05/21/BlueNoYellow.html
======
aftbit
I think this misses a very key point - subject matter. Some languages are
well-suited to do particular things and comparatively poorly suited to other
tasks.

For example, Erlang is great when you need to do lots of concurrent operations
with guarantees about safety and uptime, but not really what I'd write a CSV
parsing program or email client in.

C is very easy to link against on almost every platform and language, so basic
libraries are often written in C.

Javascript (in the form of nodejs) is great for writing simple web servers
because most of its standard library is asynchronous by default. Compare
Python (with Tornado or similar) where lots of popular libraries cannot easily
be used in an event loop.

I could go on forever, and you might disagree with me about particular
details, but the general point stands. If you took an expert Erlang programmer
and gave her a day to write a spreadsheet application, you'd get a worse
result than the same day spent by a C# programmer.

~~~
capitalsigma
Yeah, the effect of libraries is _huge_ and ignored by this analysis. There's
a good reason we see so many web apps written in ruby/python/node and so few
in C++.

~~~
FeepingCreature
Also C++ still has atrocious cycle time with medium to large projects.

If you look at feedback times, PHP is the new Smalltalk. (Yuck.)

~~~
informatimago
[https://root.cern.ch/cling](https://root.cern.ch/cling)

------
ysleepy
What a load of hot air.

Reducing the differences between programming languages to their
performance/productivity and then claiming the choice is just bikeshedding.

Have fun with your ruby pacemaker, js air traffic control, and life without
static typing in 10-million-LOC codebases.

Why do I feel like I have to apologize for my negative view on this? He has a
commercial interest in maintaining his guru status.

~~~
SkyMarshal
Was my first thought too. He's judging "workload" by how much time it takes up
front to write a program, which may be an important metric for "move fast,
break things" social/mobile/local/web apps and the like. But he doesn't
mention error rates, maintenance costs, LoC, or any other metric that
mission/safety/life critical systems prioritize.

The world view underlying this post is one of the reasons for shoddy software
and all the breaches, data thefts, etc. that have successfully exploited
system flaws in recent years.

~~~
lightcatcher
I also think his upfront time estimates were off. He estimated Ruby to be ~15%
more productive than C. I don't write Ruby, but I do write a lot of C and
Python. I'd estimate that it takes 5x less time for me to write a program in
Python than C. Even if the 5x number is off, it's still a far larger gap than
15%!

------
chrismonsanto
1. This post would have been more interesting if the person he was having
this dialog with was a PL researcher/designer.

2. Uncle Bob is trying to speak with authority, but this subject matter is
outside his area of expertise. At least in his previous post, he was nice
enough to state this up front:

> Learning swift has been an interesting experience. The language is very
> opinionated about type safety. Indeed, I don't remember using a language
> that was quite so assiduous about types.

3. I see that Rust, Haskell, and OCaml were conveniently left out. He hand-
waves away other modern languages with "negligible," which is hilarious
because he is so obnoxiously wordy in the rest of this post. Not very
convincing.

~~~
nickpeterson
I like a lot of Uncle Bob's posts, but his opinion on functional languages is
bizarre. He seems to like them (especially clojure), and talks about how they
tend to force programmers to stick closer to patterns pretty universally
regarded as better (immutability for instance), but then sort of punts when
asked to compare them to OO languages. I think, to some degree, his belief in
TDD, SOLID principles, and short cycles means that functional languages have
less to offer developers than many think because he's comparing Java executed
with excellent practices to stock functional languages. I think that C#, Java,
and OO-design just leave too much room for developers to go astray. Something
like F# results in better code not because of the features of the language,
but because of what it doesn't allow.

~~~
wpietri
Interesting! I agree entirely that Java and OO leave a lot of room for
developers to go astray. And I've enjoyed my limited coding in functional
languages. But is something like F# really better in practice?

I ask because every functional-language code base I've been able to look at
myself has two advantages: 1) it's not very large, and 2) it was made by very
smart, experienced developers with a passion for functional languages. When I
look at similarly sized OO code bases produced by people with similar levels
of passion and expertise, the code base is also very good.

The terrible Java code I've seen, on the other hand, mostly comes from less
experienced people whose priorities are things like "keep boss happy", "put in
all required hours", and "comply with project plan". Which is, sadly, the
average case in the industry.

So I'm wondering: What happens if some functional language becomes mainstream?
Will junior programmers at EnormoBank's latest death march produce better
software?

~~~
iopq
If Haskell becomes mainstream, there will be some absolutely horrifying
Haskell code.

But Haskell allows people who know what they're doing to also check that their
code doesn't violate certain constraints. That means the compiler does more
work, which yields more guarantees about correctness.

It's not about making the worst code less bad, it's about making the median
project take less time and have fewer defects.

~~~
tinco
There already is some very horrifying Haskell code written by masters of the
language. Well written Haskell in my opinion can be readily understood by
anyone who has studied the language for a week or two (provided they are
experienced programmers). The depth of potential for abstraction in Haskell
is so tempting, however, that clarity is often lost in dense semantics.

------
raverbashing
And this is a good example of why a lot of people can't stand this so-called
"guru"

The whole text reads more like one of those "reality shows" on TV with a load
of fluff, lots of recaps, and very little substance.

Then you scroll down and his brilliant conclusion is that Java and Ruby have
the same coding efficiency when you do it his way. What a load of crap.

I stand by my assertion: I wouldn't trust Uncle Bob with anything bigger than
a Hello World. But if you have an over-budgeted corporate project, you may
look nice to your boss if you contract him.

~~~
radicalbyte
You just have to look at some of the horrible software that he has worked on.

He does, however, know his audience: mediocre managers of dysfunctional teams.

For those groups - people who cannot themselves articulate the benefits of
keeping code organized and testing the important stuff - his talks and books
are invaluable.

~~~
mwcampbell
> You just have to look at some of the horrible software that he has worked
> on.

Can you give an example?

~~~
radicalbyte
FitNesse and Jenkins come to mind. I don't think that he has worked on any
significant project for some time now (which is to be expected; he's a
trainer / consultant / blogger nowadays).

------
GnarfGnarf
Here's a benchmark: in 1970, I had a full-time job maintaining a 4,000-line
assembler program on punched cards. Today I manage a 400,000-line system in
C++. That's 100:1.

~~~
santoshalper
That's an interesting point also - scale. I don't think programming a simple
"hello world" in C or C++ would be 100x faster than in assembly (assuming you
were equally familiar with both languages), but I think it would be nearly
impossible, or at least _incredibly_ expensive to maintain 400 kloc of
assembly. I honestly shudder at the thought.

~~~
Someone
You would use a macro assembler, and if you have good programmers, they would
effectively develop a DSL for writing your program in it.

That would postpone the point at which that assembly program reaches 400kloc.
In the end, it would probably grow as fast as a C program of the same
functionality.

Of course, they also would have to develop their own tooling around that DSL.
That, I think, is where the problem lies with assembly. Higher-level languages
imply that more stuff gets shared between projects (even across companies;
everybody will use the same C language and standard library), and that means
time spent developing tooling around, and libraries on top of, the shared
stuff is useful for more people. Hence, tooling and libraries will generally
be of higher quality. That home-grown GUI library built on top of a home-grown
set of assembler macros using a home-grown ABI may be better than using, say,
Qt on top of X11, but it isn't that likely.

On the other hand, the smaller the system, the more important memory usage
becomes, and if the system is small enough, that can and will swing the
advantage to the home-grown system. That happens less and less, though.

------
mmatants
It feels like the early language wins were due to the languages themselves
introducing new _concepts_, not just syntax changes. Allowing higher-level
reasoning.

Then, past a certain point, most of the new concepts started coming in via
frameworks and libraries (although of course things like functional
programming and interesting typing approaches are still language-driven
tools).

Thus, these days, I would look at libraries and frameworks as sources of new
productivity unlocks. E.g. in the Web world, jQuery saved millions of work-
hours and qualitatively unlocked some new things. Then Angular (and eventually
React) started realizing huge savings from declarative UI definitions. My
pick for the next innovation in this area would be a higher-level framework
for user input modeling (a huge source of bugs these days).

~~~
skybrian
The language still matters, but indirectly, because it constrains which
frameworks can easily be built. It's doubtful that jQuery and React would have
been invented using Java. Even JavaScript is nicer with a language extension
(JSX).

~~~
kuschku
The thing is, Java has (with some tricks) a fully pluggable compiler. And even
without tricks, compile time processing is possible.

But Java is "uncool". That’s the big issue.

~~~
skybrian
The issue is that if you want to do template-like things using language
constructs (internal DSL), the syntax is quite limiting.

There are tricks using annotations and strings, and Java 8's lambda
expressions do help. But it's still awkward to define a React-like mini-
language within Java.

~~~
kuschku
Well, anything that is valid syntax can then be modified by the annotation
processor.

Add the fact that you can do blocks, and label blocks, and you get practically
s-expressions.

------
chadcmulligan
Actually I would argue the converse, that many "modern" improvements have
taken a step backwards. My first job was using SQL*Forms (an oracle product)
on a Sun, it was for making CRUD apps on a tty (a terminal). We had a metric -
small form (just some data fields) 1/2 day, medium form 1-2 days (perhaps a
master detail), large form (master detail detail perhaps with some popups) 2-5
days.

Next was VB and Delphi - drag and drop stuff on the screen, you could write
whole systems in a few weeks. Had wonderful screen painters and so on.

Then the browser - a disaster in productivity, writing things in a tag
language - no visual IDE support (still).

Now the maximum complication of programming with MVC and languages with so
many features no one person can know them all (C++ I'm looking at you, C# and
Java are up there though).

I look forward to the next innovation.

~~~
derefr
Productivity has to be measured with the implemented use-case held stable.

Visual FoxPro was probably the pinnacle of "easy to make CRUD apps", but the
result is, at most, a program that runs on a few computers on a corporate
intranet, each of which must use a specific version of Windows with a pre-
installed runtime.

A modern LAMP app, despite being composed of layers upon layers of kludges,
can be immediately interacted with from any computer in the world, with no
sysadmin experience, by entering a URL, and—with no particular effort into
architecture or scaling—allows for thousands of simultaneous users to submit
changes to the same data in a perfectly consistent fashion with no "whole
document is locked for modification"-type problems.

The LAMP app also has a number of modular boundaries that the FoxPro app
doesn't: the backend server treats its database as a black box, so it could
actually be a replicated cluster, or a different DBMS the next week; the
frontend JS treats the backend as an opaque HTTP API, so either of the two
could be rewritten or outsourced, and the API could be repurposed by new
clients with no additional effort put in on the backend team's behalf; the
frontend is built up from layers of specifications (e.g. CSS features) that
are each optional, such that you can visit the site from a copy of IE7 on a
Windows XP machine in a dusty Korean net-cafe and it'll still let you _use_
the app, just in a less pretty fashion; and so forth.

(And this is ignoring the 10x accessibility advantage that markup gives blind
users over old pixel-blitting GUIs. Modern GUIs are "markup-based" in a
similar sense - Cocoa NIBs, Microsoft XAML, etc. - because it's far easier to
accessibility-enable a frontend when it exists in the form of a UA-
inspectable DOM, rather than an opaque framebuffer.)

If all you need is the paired-Windows-98-machines-in-your-office use-case,
sure, use Visual FoxPro (or whatever the modern equivalent is. FileMaker Pro?)
You can go even simpler, if you like, and just use green-screen terminals
pointed at an ncurses app running on your office machine. But if you need the
Internet use-case—where _everyone_ with _any tech experience level_ can use
your app _from anywhere_ on _any old device, all at the same time_—nothing
less than the stack of kludges we've got is actually up to solving that
problem.

~~~
chadcmulligan
I never mentioned Visual FoxPro; it's not comparable, and I would never say it
was the pinnacle of easy-to-make CRUD apps, which is why I mentioned Delphi
and VB. Delphi in particular I'd say was (and still is, in some ways).

I don't know where to begin with the rest - "whole document is locked for
modification" - what on earth is that, and why does the web have anything to
do with it? If you mean optimistic locking, then many apps use that.

All the products I mentioned produced a product comparable to a web product -
client server enterprise level applications, which could be run over the
nascent web (dial-ups even - it was done regularly). The complification of
modern software stacks has reduced productivity; perhaps your argument is that
the use cases addressed by modern applications (web in particular) differ
from those of earlier systems?

~~~
derefr
> "whole document is locked for modification" - what on earth is that

It's what happens when everyone in the office is trying to edit the same Excel
spreadsheets over SMB/CIFS at the same time. Most "easy CRUD app maker"
solutions didn't produce something much better, architecturally, than that
baseline. I mentioned FoxPro because a lot of people give it as an example of
such an "easy CRUD app maker." Hypercard and Lotus Notes are others. These are
the points clustered on one side of the "easy design vs. architecturally-sound
product" spectrum.

Delphi/VB are closer to the middle of that spectrum; not quite as easy, but
they at least have a concept of "a database" that doesn't refer to a
proprietary file format built into the runtime.

The web is on the right of that spectrum: not at all easy, but the product is
something that _works_, in a sense that other apps can hardly hope for. A
correctly-architected web-app will work on Chrome Canary, IE6, w3m, a Palm
Pilot's WAP browser, the Wayback Machine, several web spiders, IFTTT, as an
offline cache, through corporate proxies, with Cloudflare, through Tor, with
screen-readers, when printed, with UA CSS overrides for e.g. contrast-
enhancement, with any sized viewport, with any sized fonts, and even with
people who disable Javascript.

 _That_ is the use-case. As you go to the right on the spectrum, you approach
being able to solve for that use-case—but it takes a while longer to
build.

~~~
chadcmulligan
OK.

> It's what happens when everyone in the office is trying to edit the same
> Excel spreadsheets ...

That really isn't solved by the web either, so let's forget about that.

> Delphi/VB are closer to the middle of that spectrum...

I disagree - and have evidential support - I have worked on cross platform
Delphi products (iOS/android/OSX/Windows) that have a REST backend and what
would be called 'web scale' products

> The web is on the right of that spectrum...

Well this is where the argument gathers some steam - when people say 'the web'
they usually mean a large blob of technologies from Web browsers to LAMP
stacks etc. So can we differentiate these into two components?

1. The web browser

2. The backend stack

------
FunnyLookinHat
It's not quite as simple as preferring one color over another - some
languages, by (their own) definition, lend themselves to certain tasks more
willingly than others. Rather than expecting a specific gain from a language
(e.g. 20% faster development), you have to think in terms of avoiding the
language that might be 20% slower for the specific problem being solved.

For example, a seasoned engineer would likely not choose to use PHP for a
stateful, socket-based messaging system to connect a dozen or so users. Why?
Because PHP is not designed to do that task well. It could do it - you could
poll an HTTP end-point and use a cache for really fast persistence to wire it
all up - but you'd likely start having to write code around the problems you'd
encounter for all of the nuances to your specific implementation.

Yet another example: The main reason Go looks attractive to a certain set of
developers is that it solves a problem with describing and handling
concurrency that they've had with a lot of other languages. They would be dumb
to say that Go is carte-blanche better than PHP (or Ruby, or Java, or even
C/C++), but that doesn't mean they won't see a potentially significant
improvement in using it.

------
xentronium
This post is a great oversimplification. Normally something like that could be
ignored to get the bigger point across, but it kinda _is_ the point, so I call
BS.

Have 10000 experienced ruby programmers and 10000 experienced java programmers
solve different tasks. I'm pretty sure that rubyists are going to solve their
problems using 50% less time with at least 30% less LOC just by the virtue of
having a more expressive language. Now I'm not trying to argue that ruby is
strictly better than java, I know both languages' deficiencies. However, the
metric the author uses is development time, and by that metric more expressive
languages will always win, hands down.

~~~
pka
And now have those 10k programmers work on a single project.

I bet the 10000 Ruby guys are gonna create a big, unmaintainable, broken mess.
Even Java's primitive type system would be a big help when it's about software
development at scale.

Yes yes, _TDD_ :)

~~~
krisdol
I felt like I spent roughly the same amount of time working around Java's type
system with copious amounts of boilerplate as I did wrapping my head around
type-related bugs in ruby, especially at scale, because of how inflexible
every Java architecture seems to become after 5 years. Java's types give you
so, so little additional real-world "safety", but easily require 10x the
verbosity to express the same logic as a dynamic language. You still end up
getting hit with null pointers and invalid argument errors.

I think Rust and Swift and Elm strike the right balance between developer
productivity and a strict type system that actually improves the quality of
production software.

~~~
twblalock
> You still end up getting hit with null pointers and invalid argument errors.

Those are not the kinds of problems a type system is intended to solve. Those
are problems of data, not of types.

The amount of boilerplate in a Java program is pretty low if you aren't trying
to abuse or work around the type system. It certainly is not 10x what you end
up with in Ruby or Python. Java 8 has made significant improvements, but it
wasn't that bad before in many cases.

Instead of trying to work around Java's type safety, you should learn to use
it properly. Good programmers think about types whether or not they are forced
to by the language, so the restrictions imposed by Java should not be
burdensome -- in general, you should already be mentally applying some of
those restrictions to your code so you can avoid passing the wrong types
around.

~~~
hibikir
They are problems type systems are intended to solve: it just happens that
Java's is too weak to deal with them well. The best you can do is use an
option type, but even then, Java's Optional type is far weaker than in other
languages. Languages where null doesn't exist are far nicer.

Java's boilerplate comes from lacking type inference and from having a type
system that is too weak, not one that is too strong. I'd argue that people's
love for dynamic languages without type systems is caused by the fact that
their idea of what types are, and what they do for you, comes from Java and
old C++, as opposed to something more powerful.

~~~
twblalock
Nulls are not a problem type systems are intended to solve -- they are a
feature of type systems that was created on purpose. You might not like them,
but that doesn't mean they constitute an oversight or omission on the part of
the designers of the type system -- they wanted them there. They are useful.

Many people misunderstand the option type in Java. It was created to make
stream processing easier. It was not intended for general use, as in other
languages.

Regarding illegal arguments, they are unavoidable in any type system. Suppose
you defined a function that splits a string into an array of n-grams, to which
you pass the string as well as int n. If you pass an n which is greater than
the length of the string, no type system will help you figure out what to do.
It's just an invalid/illegal argument, and you will either have to decide what
to return for that case (maybe an empty array, or even... null), or throw some
kind of exception.
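For concreteness, the scenario described above can be sketched in Python (the function, its name, and its conventions are invented for illustration, not taken from any library under discussion):

```python
def ngrams(s: str, n: int) -> list[str]:
    """Split a string into its contiguous n-grams.

    The type system checks that n is an int, but it cannot say whether
    n makes sense for this particular s; that is a runtime decision.
    """
    if n <= 0:
        raise ValueError("n must be positive")
    if n > len(s):
        return []  # one possible convention for the out-of-range case
    return [s[i:i + n] for i in range(len(s) - n + 1)]

print(ngrams("hello", 2))  # ['he', 'el', 'll', 'lo']
print(ngrams("hi", 5))     # [] -- n longer than the string
```

Whether the out-of-range case should yield an empty list, a sentinel, or an exception is exactly the judgment call described; no mainstream type checker makes it for you.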

~~~
jestar_jokin
Regarding nulls, have a look at how Haskell deals with nulls. The basic idea
is that any nullable value gets wrapped in a "Maybe", and you use pattern
matching to handle the case of missing/empty values. This is much more
explicit than null values in Java, where any object could be null at any time,
and it actually leads to cleaner code (i.e. to avoid lots of "Maybe"
boilerplate, you can match on "Nothing" and return a default value as early as
possible).

Regarding illegal arguments, have a look at something like Idris, which has a
type system with "dependent types". The Wikipedia page has an example[0];
types can have values, so you can express a "pairAdd" function that accepts
two vectors, each vector requiring the same length.

I guess "types" and "compile-time checks" are sometimes used interchangeably.
I absolutely think tools (such as compilers) should be leveraged to provide as
much assistance as possible; whether that comes in the form of types, or some
other mechanism that resembles types.

[0]
[https://en.wikipedia.org/wiki/Idris_(programming_language)#D...](https://en.wikipedia.org/wiki/Idris_\(programming_language\)#Dependent_types)

~~~
twblalock
So instead of checking if a variable is null, I have to check if the variable
is a Maybe type, and if it contains a value.

I hope you can see how this is pretty much the same as checking if a variable
is null. It takes just as much work, and has the same outcome.

~~~
jestar_jokin
Not as such; functional languages use "pattern matching", so you are forced to
match each possible outcome. This way, it explicitly brings the issue of nulls
out into the open; if you have a Maybe it might be null, if you have a String,
or Number, or anything else, it is never null!

This means the language allows you to define areas of your program that are
null-safe, and areas where nulls are expected.

Compare to languages where you must remember to check null on every use of a
null variable. I guarantee I can pick any Java codebase and find a method
where arguments to that method, or a class's properties, are not checked for
null on every use. How do I know that the use is safe? Usually by coding
conventions; maybe immutable instances represented by values set only in a
constructor, although what's to stop null being passed in?

------
mynegation
Small nitpick, especially given that all numbers are "guesstimates", but if "X
is 30% more efficient than Y" it is not "10 programmers can do the work of
13"; it is 13 * 0.7 = 9.1 ~= 9 programmers.

~~~
sokoloff
Since you've taken us to nitpick land, I'd argue that you've expressed "Y is
30% less efficient than X", and that 10 can indeed do the work of 13 if
instead "X is 30% more efficient than Y".

To me, the reference standard is assigned unity.

In the first case, X is 1.00 and Y is 0.70.

In the second case, Y is 1.00 and X is 1.30.
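Spelled out numerically, assuming the reference standard is assigned unity as described:

```python
# Case 1: "Y is 30% less efficient than X" -> X = 1.00, Y = 0.70.
# 13 Y-programmers produce the output of roughly 9 X-programmers.
case1 = 13 * 0.70
print(round(case1, 2))  # 9.1

# Case 2: "X is 30% more efficient than Y" -> Y = 1.00, X = 1.30.
# The work of 13 Y-programmers needs 13 / 1.30 = 10 X-programmers.
case2 = 13 / 1.30
print(round(case2))     # 10
```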

------
Confiks
> How enormous? Give me a number.

> I need a number

> The number?

The article pays annoying attention to quantifying "workload estimates",
whatever that even means. Because the written interview is so verbose, we know
that the author just brushes away the interviewee's remark that 'other
important factors affect that workload', and then proceeds to cryptically
dance around those factors with very opinionated questions.

------
sdenton4
When writing Python these days I almost always use IPython notebooks for
prototyping, before throwing things into modules. When I switched over to Java
and Go a while back, the increased overhead of compiling was pretty
noticeable.

But even more noticeable was the loss of the ability to play around with each
little block of code, with data in memory, until I was completely happy with
it. It's a bit like interactive debugging, but with a great deal more freedom.
As a result, I get a huge reduction in mental load when writing Python in a
notebook; my attention is extremely focused on the little bit of code I'm
working up in that moment. This is a real win for productivity, beyond what
you get from the zero compile time.

(Also, Java compile times are not zero. Even waiting a minute for a large
compile is enough to create a break in concentration.)

~~~
PaulHoule
That's why Java IDE's usually have a visual debugger that actually works.

~~~
reitanqild
I am sometimes (i.e. I think this happens a lot) a bit puzzled when looking
at very intelligent people who don't know about the debugger, or who know
about it and don't use it anyway.

If you program Java then please feel free to use the excellent tooling that is
available.

~~~
jjoonathan
It's just habit. Code in a language with bad tooling and you too would develop
habits to work around the lack of tools. It just takes time (and sometimes a
bit of convincing) to adjust. And to adjust back, of course.

"Type A" coders are especially vulnerable to this kind of habituation on
account of often being interested in esoteric languages, systems programming,
or other "extreme" environments with tooling restrictions.

~~~
TeMPOraL
It helps also to not make the type of bugs that need to be solved with the
heavy and laggy Java debugger. I've been doing quite a lot of Java over the
years, especially in the last year, working on medium and large projects. I've
compared my workflow to that of my cow-orkers, and I sometimes look at the way
they debug things. For most cases, I can find and fix the bug with occasional
logging / print statements faster than my cow-orker can pause the program on
breakpoint and then click through stuff to find the right element in the right
container in the right variable that's maybe causing the problem.

The trick usually is to test often and don't write something if you don't
understand how it works - that also includes interacting with parts of your
application that you didn't write - and to not proceed if you feel you don't
understand how the code you wrote works. 99% of the time when I have a bug in
my Java code I realize where the bug is before my window manager switches over
from the program to the IDE. That comes simply from understanding what one
wrote (+ a bit of experience).

That said, debugging multithreaded programs is a pain in the arse, and I take
any help I can get there. Though I haven't seen many debuggers that would be
helpful in those cases.

Interactive debugging is fun in Lisp, where you basically code interactively
all the time and code/debug phases are pretty much blended together (just like
compile/load/runtime is).

------
koenigdavidmj
I think it was the GoF Design Patterns book that also spoke to this. They
said that every level of language would have its own patterns, and that lower-
level languages' patterns might be language features with first-class support
in higher-level languages. Examples:

* In assembly language, the design pattern might be the function (a published interface for a common block of code, and a well-defined protocol for getting data into and out of it).

* In C, it might be the basics of C++'s object-oriented programming features: single inheritance at first (have your parent class be the first member of the structure, then cast to whatever level you need), followed by vtables.

* In C++, it might be 'interpreter' or 'visitor'. Lisp macros make building DSLs a lot easier (see something like LOOP for a basic example), and CLOS' multiple dispatch nearly obviates the visitor pattern.

I'll agree with the article, though, that most languages are settling on the
same norms, mostly borrowed from the functional world. Swift, Rust, C++, and
Java are all gaining most of these: making it easy to avoid nulls safely,
pattern matching, parameterized types, method chaining over explicit loops,
and a non-dogmatic preference for immutability over in-place modification.

~~~
hythloday
I would suggest that pattern matching over ADTs is what has subsumed the
visitor pattern.

------
dahart
I'm many times more productive in python than I am in C++ for lots of reasons,
but I feel like the clerical parts and compile times only add up to a small
minority of the reason why. The respective standard libraries included with
the language, the ease of finding existing code you can build on, the
ecosystems surrounding the languages are all things that lead to meaningful
differences in productivity.

Interpreted vs. compiled is indeed a major difference, but I feel like
focusing on the compile-time difference doesn't account for it or adequately
summarize it; there are significant mentality and workflow changes.

Bob might be right that we're approaching negligible differences in
programming languages, but I hope not, and I'm not at all convinced. It seems
like the introduction of new useful concepts into mainstream languages is
accelerating right now. And I expect to see huge advancements in programming
soon that leverage today's huge advancements in AI & natural language
processing. I think it's within our reach to be able to describe to a computer
what our end goals are and have it figure out how to put together the pipeline
to get there.

------
robbrown451
I enjoyed reading it, I particularly liked the interview style.

I'm surprised he didn't mention, in addition to compile times changing, that
editors and environments improve dramatically. (I guess he alludes to punch
cards, but still). And they can affect whether one language or another is an
improvement (for instance, Java with its stricter types really benefits from
an editor that can do autocomplete, in my opinion).

I do think that most technologies follow this sort of progression. I mean, if
I go buy a new computer now, it is going to make me incrementally more
productive. I'm 52, so there was a day when I was buying a computer to replace
a dedicated word processor (i.e. a typewriter with a tiny amount of memory and
a small LCD display), and it was a dramatic improvement. As was the word
processor compared to a plain old typewriter. I don't expect that kind of
drama when buying something to type on now. Things are slightly more dramatic
with touch screen devices, but that is starting to slow down. Other things,
like a dishwasher, even less so.

I hope something interesting will speed things up again, but I expect it will
be like what happened with phones and tablets coming in and replacing many
functions of old-school computers for so many people. That is, a new way of
automating computer behavior that doesn't mostly come down to editing text
files.

As an example, a technical person could train a humanoid robot arm to wash
dishes by talking to it while the robot mirrors the motions of the trainer's
arms. Is that programming? I'm not sure, but when we start seeing more of that
sort of thing, and it increases in sophistication, I would expect more of
those big jumps like moving from binary to assembly.

~~~
bloaf
This is one thing that bugs me. The editors and environments _haven't_ been
improving dramatically. Emacs is 40+ years old at this point, and still as
hard to learn as ever. The 1970s Unix-like OS is still at the core of most
dev-environments, despite people having better ideas since then (e.g.
Inferno.) Sure, there have been incremental improvements to the editors and
OSs since their inception, but no major revolutions.

It has always seemed to me that the story of improved programming environments
is driven more by improved hardware than improved software, and that companies
like Microsoft/JetBrains are the only ones who've even bothered to wonder if
it's possible to create a better environment for programming than "arcane text
editor."

------
AstroJetson
In my opinion it's not the language, it's the libraries. Let's take Java vs
Fortran and look at the language basics. They pretty much have the same
basics: assignments, control structures, function calls. What makes Java rock
isn't the language, it's all the library code that comes with it.

People are always saying "Look at me I can write a web server in six lines of
code." No, you can call a web server library that someone wrote for you in six
lines of code.
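
Concretely: in Python, for instance, the famous few-line web server is a thin
wrapper around `http.server`, where all the socket and HTTP work actually lives
(a sketch; port 0 asks the OS for any free port):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# The "six lines of code" are glue; socket handling, HTTP parsing, and
# file serving all come from library code someone else wrote.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
host, port = server.server_address
print(f"would serve on {host}:{port}")  # server.serve_forever() would block here
server.server_close()
```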

aftbit (some place in this thread) has a good point: some languages are better
with some subject matters. I'd like to also posit that some languages have
better libraries written for a subject matter which makes it appear they are
better for that subject. We've seen lots of people port libraries to a
different language to help them.

While I do agree that some languages give you a boost up because the compiler
is doing some heavy lifting in the background, it's the huge collections of
libraries that we can call that lets us stand on the shoulders of giants.

~~~
derefr
True; it's a crying shame, though. It's 2016: why are libraries still limited
to the language-runtime they were written for? Why must there be more than one
library ecosystem? Why can't I import Javascript libraries from Ruby, Python
libraries from Java, Erlang libraries from Haskell, Go libraries from Rust?
Why can't they all just be "libraries", fullstop?

(Right now, we have to explicitly embed one runtime into another, creating
programmer's turducken, if we want anything close. This, though, is an
artifact of the way we think about efficiency as requiring address-space
cohabitation. And while it's easy to set up a runtime-heterogenous melange
using IPC, the default for _that_ is inefficient serialized streams on
sockets. Where's my zeromq-like zero-copy message-passing IPC as a batteries-
included part of every runtime? Where's my binary wire-type-encoding standard
aimed at producing "toll-free-bridged" native types in multiple runtimes?
Where's my "managed" OS with malloc-time kernel-side type-tagged memory-
buffers[1] that all runtimes for that OS support loading? Where are my multi-
runtime application servers with jail/lxc-like application domains to isolate
mutually-untrustworthy clients?)

[1] Speaking of, whatever happened to capability-based operating systems?
Hardware support for capabilities would basically let us get rid of the
"process" abstraction altogether, and just have a big OS-wide heap with
various units of concurrent execution holding capabilities on various memory-
objects.

~~~
mwcampbell
capnproto seems to get the closest to what you want.

------
kordless
As soon as AI kicks in, we'll have another order of magnitude increase in the
ease of programming.

"Alexa, write me an API for this $5 wifi enabled light on my desk that I
soldered together yesterday."

~~~
andrewstuart2
"Alexa, this is crap. I told you two spaces, not four. And everything's in the
global scope."

It was then that Alexa began to plot our demise.

~~~
fizx
I wish there was an IDE that would present code in your preferred format (e.g.
4 space tabs, curly braces on independent lines, etc), while still keeping the
code on disk and in scm the same as the existing style (so diffs stay small,
and your teammates are happy).

It shouldn't be too hard to learn a bijective mapping between your style and
everyone else's style. If we're still using IDEs in twenty years, I'd expect
this to have been done by then.
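
For a single style axis the mapping is genuinely easy. A toy sketch
(hypothetical helpers, assuming the repo style is tabs and my preference is
4-space indents; a real tool would need a full formatter per axis, which is
where the hard part lives):

```python
def to_editor_view(repo_text: str, width: int = 4) -> str:
    # Expand each leading tab into `width` spaces for display.
    out = []
    for line in repo_text.splitlines():
        body = line.lstrip("\t")
        depth = len(line) - len(body)
        out.append(" " * (width * depth) + body)
    return "\n".join(out)

def to_repo_form(editor_text: str, width: int = 4) -> str:
    # Invert the mapping so the file on disk (and the diff) is unchanged.
    out = []
    for line in editor_text.splitlines():
        body = line.lstrip(" ")
        spaces = len(line) - len(body)
        out.append("\t" * (spaces // width) + " " * (spaces % width) + body)
    return "\n".join(out)

code = "def f():\n\tif x:\n\t\treturn 1"
assert to_repo_form(to_editor_view(code)) == code  # round trip is lossless
```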

~~~
alexandercrohde
This 100 times over. I cannot estimate the amount of time otherwise brilliant
engineers have wasted over aesthetics that really aren't a meaningful part of
the code and should be rendered distinctly on a per-developer basis.

------
spraak
The dialog format of that is very pleasing to read

~~~
lukevers
Definitely. It's probably the best formatted dialog blog post I've seen in a
long time. I know it's so simple, just using blockquotes, but that's what
makes it so great.

------
guard-of-terra
This reminds me that at the beginning of the 20th century, physicists talked
about how they had mostly completed the grand building of Physics, figured all
the important things out, but maybe just overlooked a few details.

~~~
YuriNiyazov
He keeps on pressing his imaginary partner in this conversation to just
concentrate on the number of equivalent programmers rather than any other
measure like performance or probability of bugs making it into production and
the cost of fixing them, and then when they decide there's no more gains to be
made from programming speed, all of a sudden we are at the pinnacle.

No, we are not at the pinnacle, because programmer speed (as he very well
knows, because he keeps on putting those objections aside) is not the only
thing that matters.

------
spion
Another post contributing towards the already high likelihood that Uncle Bob
is completely oblivious to typed pure functional languages like Haskell (or
Idris, etc)

~~~
spion
Forgot to add: Uncle Bob, please try Haskell. But not just for a tiny program,
because it does take some time to get used to it.

------
fiatjaf
So a Python programmer today can only make things 25 times faster than someone
writing binary code in 1950? Really?

~~~
emeraldd
That's only true for a small subset of applications. Beyond the code that the
programmer is actually writing themselves and direct language features, Python
(like most modern languages with good library support) provides a standard
library and many third-party libraries. The library of external code that
developers don't have to think about wasn't really covered in the article, but
it is one of the things that multiplies that 25 significantly higher.

------
skybrian
Nice blog post. Of course it doesn't take into consideration the time to
understand and fix bugs, to add new features, to run tests, to review changes,
to make large-scale changes, and to on-board new programmers. The arguments in
[1] still apply.

[1]
[http://martinfowler.com/bliki/CannotMeasureProductivity.html](http://martinfowler.com/bliki/CannotMeasureProductivity.html)

------
lowbloodsugar
I've published video games in 100% assembly language, and others 98% C++ (with
some tiny bits in ASM). The latter games were easily more than 10x as complex,
and were done in comparable time. Games now are done with tens of programmers,
a lot of it written in DSLs or programmed by game designers using visual
programming languages. ASM doesn't scale. It can be joyful though.

------
alkonaut
Misses the point that _languages_ aren't as important as tools and ecosystems.

With JS the stdlib train has already left. With C++ I doubt we'll see a nice
module system and package repo similar to Rust's (because we staple more legs
to the dog in the name of backwards compatibility instead).

We invent new languages not merely because we want a better language but
because we want a better ecosystem, and each language gets just one or two
shots at getting it right.

Rust is aimed at C++ (and others) and offers a saner module and dependency
system. It might seem like a crazy idea to offset C++ in the systems
programming space - until you realize that it seems easy compared to adding
modules to C++.

------
dunkelheit
I was waiting for something like "and then people invented TDD and that gave
another 10x improvement" but it didn't come. A surprisingly balanced post for
Uncle Bob.

------
partycoder
After C things start becoming very subjective.

C++'s standard library goes beyond C's. Java's standard library goes beyond
C++'s.

Beyond that, having a standard library matters for compatibility. E.g., in C++
you can go and use something like Boost or POCO, but then they might not play
well with each other and you will need some glue code.

Those "glue code" problems are extremely time consuming.

------
lalaithion
I dunno, I'd still take a 5% improvement.

Let's say I work exactly 40 hours a week and 50 weeks a year. Then a 5%
increase in efficiency means that I can do 2 extra weeks of work per year.
That's not a lot, but on a team of 26 programmers that's essentially the
equivalent of hiring a new employee (without the downsides of a bigger team
and the cost to bring someone up to speed).
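
Spelling out the arithmetic (the gain actually comes to closer to two and a
half weeks, which only strengthens the point):

```python
weeks_per_year = 50
speedup = 0.05

# Doing the same work 5% faster frees 1 - 1/1.05 of the year.
weeks_freed = weeks_per_year * (1 - 1 / (1 + speedup))  # about 2.4 weeks
team_size = 26
person_weeks = weeks_freed * team_size  # about 62 person-weeks

print(round(weeks_freed, 2), round(person_weeks, 1))  # more than one 50-week person-year
```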

~~~
liw
Alternatively, you could give yourself and your employees a decent amount of
vacation each year.

Two weeks per year. Sheesh.

------
YeGoblynQueenne
Weeell, we're kind of taking liberties here, aren't we? How is Java to C++ what
C is to assembler? How is C++ to C what C is to assembler for that matter?

C, C++, Java, Smalltalk and Ruby are all high level languages. Assembler is,
well, assembler and machine code is machine code. Those are three steps in the
evolution of programming languages, not seven. The differences in - whatever
metric - between high-level languages are negligible compared to their
differences to assembler using the same metric.

I guess that's what the post concludes in the end but it gets there in a bit
of a strange way.

Also - does anyone seriously consider those vague metrics when making a
decision of what language to choose? "How much does my language improve
productivity over Java"? Pragmatically speaking, either you're in charge of
your project so you choose the language that seems to offer some advantage for
the targeted platform, or you're not so you suck it up and code in whatever
your shop uses.

------
mbfg
Debuggers in managed languages are probably the biggest workload reducers as a
percentage since binary -> assembly.

------
scotty79
How much workload language like awk saves when compared to java? On text
processing tasks, nearly all of it.
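
For the classic example, summing a column, awk needs one line:
`awk '{ s += $2 } END { print s }'`. Even a Python sketch of the same job is
short, where idiomatic Java would want a class, a main, and explicit stream
handling:

```python
import io

def sum_second_column(stream) -> float:
    # Same job as: awk '{ s += $2 } END { print s }'
    total = 0.0
    for line in stream:
        fields = line.split()
        if len(fields) >= 2:
            total += float(fields[1])
    return total

data = io.StringIO("alice 3\nbob 4\ncarol 5\n")
print(sum_second_column(data))  # prints 12.0
```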

------
paulus_magnus2
> compile times for Java are effectively zero

Compiling a single class, yes, but most J2EE projects I worked on take 10-20
minutes to compile, including tests. Sometimes longer, a lot longer...

Writing code in some languages is faster than in others, but probably by less
than a factor of 2 (provided you're using the right IDE and know what you're
doing).

The increase in productivity we can achieve is "standing on the shoulders of
giants": ie. assembling / wiring together pre-existing bigger and bigger
pieces of code we can trust and know how to use. Also more and more problems
are solved and in public domain.


------
kdazzle
If Uncle Bob is talking strictly about development cycles, running tests in
Python - even in a heavy framework like Django - is pretty fast compared to,
say, Java with the Play framework, which is excruciatingly slow, or even
Swift.

------
rhythmvs
But then again, by what number did opensource and package managers decrease
the clerical workload?

— Sure, if we’d not only consider machine architecture, language design and
compile times, but add social factors to the equation, like ecosystems,
business models, and network effects, well okay, then maybe a single,
apprentice developer might do the work of a 100k MSc programmers, and at least
as much as he `require`s modules. :-p

------
jbverschoor
Java is very verbose, although many languages are moving towards the same
constructs.

The BIG differentiator is how a _framework_ handles the write-compile-run-
debug cycle.

Rails is very nice in that respect.

When I was doing a lot of Java I'd run the server in debug mode, so Eclipse
could hot-swap code. The cycle was instant compared to a few minutes.

As for the "productivity" examples of 5% and 10%.. That's way too little.

~~~
TeMPOraL
Yeah. It's sad that most languages can't handle write-compile-run-debug cycle
right. That is, it's sad they have an explicit cycle like that at all.

I tend to switch back and forth between Java (at work) and Lisp (after work),
and the workflow in Java annoys me to no end. It's so much easier and faster
to write when you can compile-in new functions, methods and classes (or hot-
swap existing ones) in a running program, while inspecting it at the same
time. It's hard to go back from the convenience of having write-compile-run-
debug cycle all blended into one.
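
A pale imitation of that workflow, sketched in Python (late-bound globals give
you a sliver of it; Lisp images go much further, redefining classes and
updating live instances):

```python
# A "running system" keeps calling whatever `handler` currently is.
def handler(x):
    return x + 1

def running_system(value):
    return handler(value)  # looked up at call time, not frozen at definition

print(running_system(10))  # 11

def handler(x):  # redefine at the REPL: takes effect immediately, no restart
    return x * 2

print(running_system(10))  # 20
```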

~~~
carterehsmith
Did you try something like JRebel? It does allow you to
add/change/rename/remove methods and fields, on the fly. I've been using it
for some years and it does cut on restarts big time. Commercial license (that
your employer pays for) is ~$500/year/developer.

Or you can try something like
[https://github.com/HotswapProjects/HotswapAgent](https://github.com/HotswapProjects/HotswapAgent),
that is a modified JVM that supposedly (I did not try it yet) does the same,
and it is open source.

------
EugeneOZ
The last point is bullshit - new languages sometimes contain new ideas and
possibilities (like Rust), sometimes they are for specialized scopes (like R),
sometimes they are just the evolution of existing languages.

Just unfollowed him on Twitter - it's not the first time I've read such low-
quality narcissistic "dialogues" on his blog. I only followed him because in
the past he wrote a high-quality book about clean code.

------
tardo99
Part of why Java is slow is that it requires so much typing. I think Ruby
winds up taking maybe 50% or less of the time of Java just because of this.

~~~
douche
Type three or four letters and hit enter, and have the IDE autocomplete the
rest of your SpringBeanFactoryAdapterFactoryFactory type declaration? I think
you're putting up a strawman.

------
jpolitz
Another important metric: a handful of C programmers can write arbitrarily
more buffer overflows than thousands of Java or Ruby programmers.

------
kstenerud
If I were in this conversation, it would go a bit like this:

"It's kind of complicated..."

"I need a number."

"I'm not giving you a number."

"Why not?"

"Ask me an intelligent question and I'll give you an intelligent answer."

"What??"

"On the day we use horsepower as the sole measure of a car, I'll give you a
sole measure of a programming language."

------
venomsnake
I think that we have solved how to write programs. The biggest gains will be
for solving how to write correct programs. Haskell is pushing hard in that
direction.

------
zaro
Well, he kind of has a point. If you think about it, with all these tools that
make us so much more productive, we actually work more than people did in the
50s.

------
tschellenbach
My estimate is that Python & Ruby are about 4 to 5 times more productive than
Java for the case of web/API development. The benefit grows with the size of
the codebase as Python & Ruby are also easier to read and maintain.

The point the author raises about types is only partially true. Both in Java
and Python you should be unit testing your code. If you have good test
coverage and CI then it really doesn't matter that Ruby and Python don't check
your types.

So blue is my color :)

~~~
twblalock
> If you have good test coverage and CI then it really doesn't matter that
> Ruby and Python don't check your types.

People say this a lot, but it's not true unless you have 100% coverage, 100%
control over the data that gets input into your program, and you can think of
every possible test case that covers every possible situation that can occur.

Many of the tests you have to write for programs in dynamic languages are
simply not necessary for strongly typed languages.

Furthermore, compilers for static languages catch (or warn about) a lot of
problems that would otherwise have to be reproduced by the kinds of test cases
nobody is likely to think of -- this results in bugs making it to production
when you use dynamic languages.
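
A small illustration of that last point (Python with type annotations as a
stand-in for a static language; the function is made up): a type error on a
branch no test exercises slips through, while a checker like mypy would flag
the bad return statically.

```python
def describe(n: int) -> str:
    if n >= 0:
        return f"{n} is non-negative"
    # Bug: returns an int where the signature promises a str. A static
    # checker reports this line; a test suite only notices if some test
    # actually exercises a negative input.
    return n

assert describe(3) == "3 is non-negative"  # the only case the tests cover
print(type(describe(-3)).__name__)  # the latent bug, found only at runtime
```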

~~~
winstonewert
> People say this a lot, but it's not true unless you have 100% coverage, 100%
> control over the data that gets input into your program, and you can think
> of every possible test case that covers every possible situation that can
> occur.

This is different from static typing, how?

> Many of the tests you have to write for programs in dynamic languages are
> simply not necessary for strongly typed languages.

Actually, dynamic typing users write very few additional tests. Users of
static typing seem to imagine a lot of additional tests being needed, but
dynamic typing users don't write them. The tests they do write are usually for
scenarios that are important enough to test anyway.

> Furthermore, compilers for static languages catch (or warn about) a lot of
> problems that would otherwise have to be reproduced by the kinds of test
> cases nobody is likely to think of -- this results in bugs making it to
> production when you use dynamic languages.

Not my experience. Almost all bugs caught by static compilers are ones that a
single execution of the relevant code would also catch.

~~~
twblalock
Your experience must be limited.

~~~
winstonewert
Sure, instead of answering my questions or points try to dismiss me as not
having experience.

As a matter of fact, I have developed applications in Python, C++, Java,
Javascript, GWT, and Coffeescript, for myself, at small companies,
and at Google. I don't think you can simply dismiss me as having limited
experience unless you've got more extensive experience shipping apps written
in both dynamic and static languages.

Now, I'm not saying that dynamic typing is better: there are tradeoffs to
either approach. But I am saying that your original post gives an overly
simplistic appraisal. The pros and cons of static and dynamic typing are much
more complicated than that.

------
jfoutz
I disagree. If all you have is switches on the front panel, symbolic assembler
isn't really much help. You just memorize that 0101 0000 0001 = register move
from r0 to r1. It may take a week or a year, but eventually you sling that
binary like it's nothing. I suspect the same is true of punch cards.

I built a computer with TTLs back in the early 90's, with a whopping 256 bits
of RAM, but it was programmed with switches. It really doesn't take that long
to memorize the states. To be fair, I never had to toggle switches for work;
there might be other factors I'm neglecting.

The problem is, like Carmack said, errors are depressingly statistical. You'll
inevitably fat-finger one of those switches, and everything sucks.

Now, if I have a terminal, and can type the program into a file, things are a
little different. But mostly, I just don't want to wear out the 0 and 1 keys.
With a 4-bit encoding, I don't think there's really _that_ much win with mov
vs 0f. With a 5-bit encoding I'd have 32 keys to play with, and could probably
type my intentions very quickly, with some practice. Heck, you can sling bf
pretty quick if you spend a week encoding something hard. The big thing is
variable names are chosen for you - instead of 'money' you have 0x0128 or
whatever.

A symbolic assembler buys you variable names. That's a pretty big win when you
get more than, say, 100 variables and functions. Small programs, no biggie; you
can keep it in your head. As things get bigger it gets tough to memorize
everything. So clearly the benefit grows as programs get larger. But I'd be
skeptical of even 30% on front panel switches. In an editor? If I can use
emacs, I can probably set up syntax highlighting for instructions and
arguments. Play some goofy game with variable names in comments and have emacs
auto-manage syncing the comment and memory address. So I'm not sure it would
be a huge win then.

The big wins (for me) with C are total insulation from register <-> memory
moves and stack discipline. This is a WAY bigger deal than binary vs symbolic.
You can call any function you want without having to remember what registers
this function will stomp. But I've never tried to toggle in C on switches. I
can say I get more done in C than assembler, but that's not claiming a whole
lot.

C++ vs Java? Well, valgrind helped me a lot with C++. Java means never running
valgrind. That's not to say I haven't had my share of NPEs in Java. Java just
tells you where it happened rather than crashing.

Once you know what you want to say, you want to say it concisely, to minimize
those statistical errors. IMHO I can be more concise with a higher-level
language, so those statistical errors don't bite as often. If I have 1 bug per
100 lines of code, I'd rather write in the more concise language. But I have
upper limits to concision. APL is too hard.

------
source99
Regardless of my opinion this was a very entertaining read.

------
agumonkey
NaN, try to prove anything in assembly.

------
ajdlinux
cf Fred Brooks, "No Silver Bullet - Essence and Accident of Software
Engineering", 1986

------
Bromskloss
What is the message here?

~~~
alexandercrohde
Reading between the lines, the author actually feels this: "Stop making a new
language every month. The ones we have are good enough, and it fractures
inter-developer cooperation when we don't know the same languages and rebuild
all the same libraries over and over for different languages."

Which is one valid viewpoint. But he couches it in the socratic method against
a strawman (presumably himself) which ultimately leaves the reader feeling
dirty.

~~~
Retra
The problem of handling an environment with richly diverse hardware and
languages is itself an interesting problem, and probably much more analogous
and appropriate to the problems of managing human society.

So if someone were saying to me "stop developing new languages and
environments; it makes things hard", it'd be like saying "stop developing new
culture or human relationships, because life is easier when we're all bland
and homogenous." I find it a little bit offensive, and at least a failure to
recognize the important problems (due to obsession over the easy ones).

~~~
alexandercrohde
Well, that's one analogy. How about this analogy, what would you say to me if
I said "Hey I invented a new spoken language, it's more efficient than
English?" And what if I did this in a climate where dozens of people
independently were already doing this too?

So, to simplify, I think the reason a lot of developers roll their eyes at
another new language is because the problems the languages are solving are
less important than the fracturing they cause in the engineering community.

If you think a language is missing something, why not contribute to the
language with an RFC? C++, Java, and PHP have all been advancing significantly
over the years. Or why don't you get together with 100 other language-makers
and come up with a unified solution?

~~~
Retra
I would say "It's wonderful that your human brain is capable of learning and
designing such a language."

The problem is still in understanding what the words mean. It's easier if you
have standard languages that everyone speaks, but if you want to have
computers do amazing things, they'll have to be able to handle that problem,
regardless of how many languages exist.

And it would be wonderful if computers could learn to understand languages
regardless of who designed them and why. That problem isn't going to be solved
by running away from it.

