
Don't Be Distracted by Superior Technology - BruceIV
http://prog21.dadgum.com/168.html
======
tikhonj
Meh, this is unfortunate advice because everyone tends to act this way _too
much_. Almost everybody errs towards stable technologies; the exceptions are
people with a genuine interest in the novel. They are disproportionately vocal,
so judging from the internet it looks like there are a lot of them, but in
reality they're a minority.

This makes sense psychologically. The cost of spending an extra day setting up
a build system? It's obvious, hard to miss, and annoying. A 10% benefit to
productivity thanks to a superior programming model? Sure, that'll add up to a
day in just two weeks of use, but most people wouldn't notice it at all! This
is doubly true if the benefit is delayed: for example, if you win out on less
time spent on maintenance and debugging.

I've seen far more people ignore great technologies for fairly limited and
superficial reasons than I've seen use superior technology in the face of true
practical concerns. Too often it's something like "well, yes, it's much
better, but it doesn't have a JSON parser built in". Of course, writing a JSON
parser should take less than a day, so the cost is essentially negligible.
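To give a rough sense of the scale involved, here is a toy recursive-descent
JSON parser in Python. It is a sketch only: a production parser also needs
\uXXXX escapes, a strict number grammar, depth limits, and real error
reporting, which is where the extra time goes.

```python
# Toy recursive-descent JSON parser. Illustrative, not robust: it assumes
# well-formed input and handles only the simplest string escapes.

def parse_json(s):
    value, _ = _parse_value(s, _skip_ws(s, 0))
    return value

def _skip_ws(s, i):
    while i < len(s) and s[i] in " \t\r\n":
        i += 1
    return i

def _parse_value(s, i):
    c = s[i]
    if c == '{': return _parse_object(s, i)
    if c == '[': return _parse_array(s, i)
    if c == '"': return _parse_string(s, i)
    if s.startswith("true", i):  return True, i + 4
    if s.startswith("false", i): return False, i + 5
    if s.startswith("null", i):  return None, i + 4
    return _parse_number(s, i)

def _parse_string(s, i):
    i += 1  # skip the opening quote
    out = []
    while s[i] != '"':
        if s[i] == '\\':  # only the easy escapes; no \uXXXX
            i += 1
            out.append({'n': '\n', 't': '\t', '"': '"', '\\': '\\'}[s[i]])
        else:
            out.append(s[i])
        i += 1
    return ''.join(out), i + 1

def _parse_number(s, i):
    j = i
    while j < len(s) and s[j] in "-+.0123456789eE":
        j += 1
    text = s[i:j]
    return (float(text) if any(c in text for c in ".eE") else int(text)), j

def _parse_array(s, i):
    i = _skip_ws(s, i + 1)
    items = []
    if s[i] == ']':
        return items, i + 1
    while True:
        value, i = _parse_value(s, i)
        items.append(value)
        i = _skip_ws(s, i)
        if s[i] == ']':
            return items, i + 1
        i = _skip_ws(s, i + 1)  # skip ','

def _parse_object(s, i):
    i = _skip_ws(s, i + 1)
    obj = {}
    if s[i] == '}':
        return obj, i + 1
    while True:
        key, i = _parse_string(s, i)
        i = _skip_ws(s, i)
        i = _skip_ws(s, i + 1)  # skip ':'
        value, i = _parse_value(s, i)
        obj[key] = value
        i = _skip_ws(s, i)
        if s[i] == '}':
            return obj, i + 1
        i = _skip_ws(s, i + 1)  # skip ','
```

Getting this far really is an afternoon's work; the long tail of correctness
and efficiency is the part people argue about below.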

That example, coincidentally, is taken from a talk by Bryan O'Sullivan about
running a startup with Haskell. (I don't have a link to it handy, I'm afraid.)
Essentially, he found that the productivity benefits outweighed having to
write some basic libraries (including a JSON one) himself. And he shared those
libraries with the community, so everyone is now better off.

So my advice is completely the opposite: don't be distracted by superior
polish.

~~~
dllthomas
> Of course, writing a JSON parser should take less than a day, so the cost is
> essentially negligible.

Writing a JSON parser that parses my examples correctly is a couple hours at
_most_, yes. Writing a JSON parser that I am confident is correct, robust,
and efficient takes somewhat longer.

~~~
ams6110
It's really more of a case of _porting_ a JSON parser because many working
examples of JSON parsers already exist, as do complete sets of test data that
your parser must be able to handle. There's really not a lot of original
thought that has to go into recreating something that's already well-
established elsewhere.

~~~
dllthomas
Fair point. There can still be a lot of work to ensure robustness and
efficiency in the new setting (especially so if the execution model is
particularly different).

------
jseliger
Or: Worse is better: <http://www.jwz.org/doc/worse-is-better.html> .

Or, in a different way, you need to be in fire and motion:
<http://www.joelonsoftware.com/articles/fog0000000339.html> .

(Note: this is reinforcing the original submission, not belittling it.)

~~~
thebear
I would also recommend Jeff Atwood's (second) blog post on the subject:

[http://www.codinghorror.com/blog/2008/01/is-worse-really-bet...](http://www.codinghorror.com/blog/2008/01/is-worse-really-better.html)

Jeff Atwood just blows me away with the way he combines information,
entertainment, and insight in his articles. What I like most about this one is
that he brings in Steve Martin's take on good vs. great, thus showing us that
the whole issue is by no means limited to the world of software development.
(Just like the poster of the parent of this reply said, this is to reinforce
and complement the original submission.)

------
RougeFemme
I'm not a programmer, but I have witnessed (and mediated) a lot of these
arguments. Some are very esoteric while others are pretty practical. But some
programmers forget that when you're trying to implement a "large" project
(relatively large user base and relatively long expected life), the
programming language is _just_ a tool. You can tinker forever with a
technologically superior language, enjoying the intellectual challenge, and
never implement your project. Or you can _settle_ for a less sophisticated
language and implement your project in a reasonable amount of time, though not
as elegantly as you might like.

~~~
jiggy2011
Or you could implement the project in the superior language in a reasonable
amount of time.

~~~
RougeFemme
Of course, this is the preferred option. But when the arguments ensue, it's
usually because the developers are saying they can't deliver the project in
the superior language in a reasonable amount of time. So if a choice must be
made between the superior language and a reasonable amount of time, project
managers usually choose the reasonable amount of time.

~~~
lelele
> So if a choice must be made between superior language and reasonable amount
> of time, project managers usually choose reasonable amount of time.

Sure. Of course project managers usually choose deadlines over people; then,
when talented people burn out and leave, they wonder why talent is so
difficult to keep. We know what the safe choice is: hire average programmers
who see programming languages as interchangeable, and are themselves
interchangeable. No worries about competitors, for they are doing the same.
Nothing wrong with that, but please let's not pretend things are any
different.

------
miga
Times change. Interestingly enough, LLVM has breathed new life into "fringe"
languages, and Haskell is no longer a research project, so there are industry
job offers there too.

And we know what happened with the "winners" within the wave of dynamic
languages: JavaScript, Python, and Ruby developers are quite often paid higher
wages than old-style C programmers.

~~~
coldtea
> _Haskell is no longer research project, so there are industry job offers
> there too._

Well, there are far fewer Haskell job offers than Haskell programmers.

> _And we know what happened with "winners" within a wave of dynamic
> languages: JavaScript, Python and Ruby developers are quite often paid
> higher wages, than old-style C programmers._

Are they? I'm not so sure a competent C programmer earns less than a
JavaScript/Python/Ruby guy.

~~~
miga
An example HN article about that: [http://firstround.com/article/The-inside-story-of-how-382-re...](http://firstround.com/article/The-inside-story-of-how-382-recruiters-pursued-an-imaginary-engineer)

------
hakaaaaak
The point that the OP is making is not to avoid new technologies, but to avoid
technology that isn't ready for prime time (and higher-ed efforts are a great
example of this). That is "ok" advice in many cases, but don't avoid things
because they are new. Few get great jobs and salaries for failing to take
risks.

------
fus
There is a nice short story on this subject: "Superiority" by Arthur C.
Clarke. You can find a summary and a full text link here:
[http://www.kareemamin.com/post/5060094569/3-lessons-for-star...](http://www.kareemamin.com/post/5060094569/3-lessons-for-startups-from-arthur-c-clarkes)

~~~
saurabh
That was a good read. Thanks.

------
chipsy
I believe the pace of change has actually increased recently. As Hague himself
has noted, we have a lot of computing power to throw around now. 30 years ago,
serious usage of academic languages was still restricted to "big iron"
environments; today you can expect any language to do some useful work even on
a smartphone. As a result we can make "softerware" that benchmarks less well,
but is massively cheaper to make and maintain.

As well, there's been a cultural shift related to the hardware changes. In
webdev it's become routine in some circles to make polyglot systems, or to
cross-compile between different languages or runtimes. DSLs are increasingly
slipping into "everyday" environments. There's a mindshare war going on there,
but architecting against one language, one environment, and one toolchain is
increasingly seen as the "old way". Similarly, developers who don't know how
to architect towards performance increasingly get away with it - it goes hand
in hand with the idea of software becoming softer, since that process moves
the optimization burden towards the people making runtimes or libraries,
rather than the app dev.

All of this favors a faster transition from academia.

~~~
jacquesm
> There's a mindshare war going on there, but architecting against one
> language, one environment, and one toolchain is increasingly seen as the
> "old way".

That old way is there for a reason: maintenance. If you write throw-away
software (and most websites, especially the front-end stuff, are) then this is
not a problem. But if you're supposed to support your creation for the next 15
years then having a stable toolset really pays off.

The smaller the project and the shorter lived it is the more corners you can
cut.

~~~
chipsy
No, that's still making the assumption that all worthwhile solutions are balls
of mud, leading to "freeze it in time" as the only sane option for
preservation. That's cultural, not inherent to the technology.

If every part of the stack is tiny and connected via common, documented
protocols you have ample room for maintenance. That's easily validated by the
Internet as a whole (and not the Web, which was burdened by its early design).
A present-day "throw-away" that can do more than in the past is, more likely
than not, leveraging a bigger and more decomposed software ecosystem, where
bits of infrastructure can get remixed more readily. Pooh-poohing the results
by saying "they solved a trivial problem" dismisses this underlying trend -
tools and services that are currently "weekend hack" or "prototype only" have
a habit of turning into tomorrow's "production-grade" if they gain substantial
adoption.

There's no particular reason we can't achieve Internet-scale maintainability
for all problems, other than it taking time to build protocols of the right
size/complexity for every problem domain.

------
ams6110
Ah Modula-2. Used it in one of my undergrad CS classes, ca. 1986. Never
touched it anywhere since. Back then, every semester used a different
language: at least C, PL/1, Pascal, Modula-2, Scheme, and assembler. The
language itself was not covered in the lectures; you were supposed to figure
that out on your own time.

------
BruceIV
A good reminder of the engineering virtue of using tech which is commonly used
and thus well-supported (by the creator, by third parties, by Stack Overflow,
etc.)

------
malandrew
Can someone point us to a reference explaining Modula-2's module system and
what made it a superior module system to other module systems?

~~~
mahmud
Modules can be implemented in any programming language, if the programmer is
disciplined enough[1]. However, Modula-2 originated modules as language
constructs, not only for grouping & namespacing, but also as compilation
units.

Very important to repeat that namespacing bit; Modula-2 modules allowed for a
way to group & scope names. Other block structured languages allowed for name
shadowing within a block, but with modules, one has access to shadowed names
in enclosing blocks.
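
A quick illustration, in Python rather than Modula-2 (the mechanism differs,
but the idea carries over): a local binding shadows a name inside a block, yet
the module-qualified form still reaches the outer binding.

```python
import types

outer = types.ModuleType("outer")  # stand-in for an enclosing module
outer.x = 10

def f():
    x = 99              # shadows any unqualified `x` inside this block
    return x, outer.x   # the qualified name still sees the outer binding

print(f())  # (99, 10)
```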

At the implementation level, you can think of pre-module block-structured
languages as having a stack of hash-tables for environment/symbol-table. As
each name is declared, a fresh entry is created in the hash-table and the
initialization value for the variable stored as value for the key. When
processing enters a new scope, say BLOCK, BEGIN, LET, new function declaration
or similar name _hiding_ construct, a fresh hash-table is created and pushed.
When the block is exited, the stack is popped and we return to previous
definitions.

Except the environment stack is actually implemented as a list, to allow non-
shadowed names to be available without popping. The compiler can walk up and
down the stack list to look up identifiers, often assigning a stack-depth
number to each nested environment. Say, a top-level global variable might
actually be internally represented as {env: 0, name: x, val:3.0, type: float}.

Programmers don't have access to that numeric environment ID. Once a variable
is shadowed, we lose all access to it, if we don't keep a copy, and even that
is useless with side-effects.
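
A toy version of that lookup, sketched in Python (the field names mirror the
{env: 0, ...} representation above; real compilers differ in the details):

```python
def lookup(env_stack, name):
    # env_stack[0] is the global scope (env 0); the last entry is innermost.
    # Search from the innermost scope outward, so inner bindings shadow outer.
    for depth in range(len(env_stack) - 1, -1, -1):
        scope = env_stack[depth]
        if name in scope:
            return {"env": depth, "name": name, "val": scope[name]}
    raise NameError(name)

envs = [
    {"x": 3.0, "pi": 3.14159},  # env 0: top-level globals
    {"y": 2},                   # env 1: an enclosing block
    {"x": 99},                  # env 2: innermost block, shadows the global x
]

print(lookup(envs, "x"))   # {'env': 2, 'name': 'x', 'val': 99}
print(lookup(envs, "pi"))  # walks up the list all the way to env 0
```

Note that once the inner `x` is in scope, nothing in the source language lets
you name the env-0 `x`: the numeric environment ID exists only inside the
compiler.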

Modules change this in one important way. The hash-tables have names! They are
not just anonymous values to be pushed around (eh? ;-) but named entities that
we can look up. Why settle for environment IDs when you have glorious,
descriptive, human readable names?

If you substitute a graph for the environment stack, you get yourself a more
interesting structure. One that allows for module composition and structure
sharing, so that two or more environments can have their common bits factored
out.
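
Sketching that "named hash-tables" idea in Python (the module contents here
are made up, purely to show qualified lookup):

```python
# Environments with names: resolve "Module.name" directly instead of
# walking anonymous scopes by depth.
modules = {
    "Geometry": {"pi": 3.14159, "unit": "radian"},
    "Physics":  {"pi": 3.14159, "g": 9.81},  # shared bits could be factored out
}

def qualified_lookup(modules, qualified_name):
    module_name, name = qualified_name.split(".")
    return modules[module_name][name]

print(qualified_lookup(modules, "Physics.g"))  # 9.81
```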

To allow for separate compilation of modules, we need to know in advance what
services they offer (i.e. what keys are in their environment table.) If we can
find that out quickly, without processing the module itself, we can move along
faster. This is very important. A common pattern in language compilation is
name resolution. Some languages force programmers to declare all names before
use. Other languages are more forgiving, and try to resolve the names
themselves, often by processing input code in multiple passes, and only then
signaling errors for yet still unbound names. For modules, we can help the
compiler discover names by abstracting out the _keys_ ahead of time. So break
the module definition into signature declaration, and actual module body known
as structure. The signature is a compact, high-level view of the map that
tells code processors and other modules what names they can expect from the
module. (A primitive form of signature/structure separation is C & C++'s
header files, but those have nothing to offer us, intellectually.)
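
A minimal sketch of the signature/structure split in Python (names only; a
real ML-style signature also carries types, which is where the contractual
power comes from):

```python
# The signature lists the names a module promises to export; a checker can
# validate a structure against it without processing the module body itself.
JSON_SIG = {"parse", "render"}

def check_structure(signature, structure):
    missing = signature - structure.keys()
    if missing:
        raise TypeError(f"module is missing exports: {sorted(missing)}")
    return True

json_module = {"parse": lambda s: s, "render": lambda v: str(v)}
check_structure(JSON_SIG, json_module)  # passes: clients can rely on these names
```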

Like any form of cooperation, contractual agreements between modules will have
to be in place. We need a certificate of authenticity of sorts. The addition
of types to module definition is a pillar of modern software engineering.

Modules make software _serious_.

For the full story on modules, see Standard ML.

\--

[1] David L. Parnas (1972). _On the criteria to be used in decomposing
systems into modules_.

~~~
malandrew
Can you give an example (or link to an example), in either real code or
pseudo-code, where there is a reference in a sub-block (sub-module) to both a
shadowed variable in that sub-block and the same variable in the parent scope,
by way of named hash-tables? I'm just curious to
see how this is done in practice and some real examples of where a module
system that permits access to a shadowed variable proves more useful, flexible
and generally more robust.

Also, how does Modula-2 (or Standard ML's) module system enforce contracts
beyond what a C/C++ header file offers us? Why does C/C++'s primitive system
offer us nothing intellectually?

Lastly, if you were to try adding a more robust module system to a language
like JavaScript, how would you go about doing it? (assuming such a thing is
possible).

------
martinced
A different take on superior technologies, by someone some here may have heard
of:

<http://www.paulgraham.com/power.html>

<http://www.paulgraham.com/diff.html>

<http://www.paulgraham.com/avg.html>

