
What's Worked in Computer Science - cjbprime
http://danluu.com/butler-lampson-1999/
======
silverpikezero
The claim about RISC being a No is bizarre. RISC has definitely won, for 2 big
reasons: 1. Intel redesigned their x86 processors to execute using a RISC
model internally, and AMD is using the same idea. A processor architecture is
defined by its execution model (not its external encoding). 2. ARM is more
ubiquitous than Intel, and ARM is a RISC architecture. In fact, ARM may be the
dominant processor architecture in the next 10 years.

RISC is resoundingly a Yes.

~~~
thristian
The author addresses this:

> _It’s possible to nitpick RISC being a no by saying that modern processors
> translate x86 ops into RISC micro-ops internally, but if you listened to
> talk at the time, people thought that having an external RISC ISA would be so
> much lower overhead that RISC would win, which has clearly not happened.
> Moreover, modern chips also do micro-op fusion in order to fuse operations
> into decidedly un-RISC-y operations._

~~~
wodenokoto
The success criteria are not mentioned at all, so there is no clear way of
settling this.

Yes, the brand "RISC" failed, but does that mean the idea behind RISC, as
presented by the computer science community, failed?

I'm no processor expert, but it sounds to me like the paradigm behind RISC is
both theoretically and practically sound, and is a key component of most
modern CPUs.

~~~
qznc
I would disagree. Intel adds more and more special instructions for niche uses
(crypto, HPC, virtualization, etc.) for the sake of efficiency. This is
becoming increasingly necessary for performance improvements, because
frequency scaling has hit its limits for now and manycore is about to (see
Dark Silicon).

------
jwatte
We write web services in Haskell every day, and it's much, much easier to
refactor, and runs much faster, than the PHP that came before it. This is at
large scale for a mature tech company, too.

We've also had good success with Erlang for messaging, although its standard
libraries have annoying bugs.

Based on this, I'd say: Functional programming: yes. Fancy type systems:
maybe.

The reason for "maybe" is that we use ByteString a little too happily and
don't newtype enough to prevent all bugs ahead of time...
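
For instance, here's a minimal sketch of the newtype discipline in question
(hypothetical names, not our actual code): both wrappers are ByteStrings at
runtime, but the compiler now refuses to mix them up.

    import Data.ByteString (ByteString)

    -- Hypothetical wrappers: same runtime representation as ByteString,
    -- but distinct types, so confusing them is a compile-time error.
    newtype UserId = UserId ByteString deriving (Eq, Show)
    newtype Email  = Email  ByteString deriving (Eq, Show)

    lookupEmail :: UserId -> Maybe Email
    lookupEmail (UserId _) = Nothing  -- stub

    -- lookupEmail (Email "x")  -- rejected before it can ship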

Then again, C++ templates are definitely fancy type systems, and we ship lots
of important code on that every day, too!

~~~
rifung
If you don't mind me asking, what company is this that uses Haskell every day?

~~~
codygman
I have a video in my watch queue that shows Facebook is one:

[https://youtu.be/sl2zo7tzrO8](https://youtu.be/sl2zo7tzrO8)

------
uint32
This article discusses "today", yet it has no publication date. Well, it
mentions "2015" in the conclusion, but that's it.

This trend of publishing without a date is ill-conceived.

------
vezzy-fnord
Author is slyly redefining "what's worked" to "what's widely used".

~~~
chubot
What are some examples of things that "worked" but are not "widely used"?
Erlang?

~~~
Jtsummers
Functional programming, even when not in strictly functional programming
languages (MLs, Haskell, Lisps, Erlang), has worked. It's moving more and more
into mainstream languages, either as a sublanguage (LINQ) or by piecemeal
incorporation of its concepts (pattern matching, anonymous functions, TCO so
recursion can be used penalty-free, etc.).
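
For concreteness, a minimal Haskell sketch of two of those concepts together,
pattern matching and a tail call that runs in constant stack space (toy code,
nothing more):

    {-# LANGUAGE BangPatterns #-}

    -- Pattern matching on the list's shape; the recursive call is the
    -- last thing 'go' does, so it compiles to a loop rather than
    -- growing the stack.
    sumList :: [Int] -> Int
    sumList = go 0
      where
        go !acc []     = acc
        go !acc (x:xs) = go (acc + x) xs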

~~~
pron
> Functional programming, even when not in strictly functional programming
> languages (MLs, Haskell, Lisps, Erlang), has worked

How do you know? By "worked" the author means "has unambiguously turned out to
be a good idea with very significant benefits". That some people really like
something and are able to write good, working programs with it doesn't mean
it's "worked" in the sense discussed.

First, each of these languages is "functional" in a very different way, so
much so that even their inclusion in the same category is tenuous. Second,
none of these (except maybe Erlang) has been used extensively enough in large-
scale projects to give us a definitive answer regarding the actual benefits.

That functional concepts are finding their way into the mainstream is very
true, but some of these have also been associated with OO for a very long time
now (in fact, they've been a part of OO longer than three of the four FP
languages you list have existed). Smalltalk, the most OO of OO languages, had
anonymous functions from the beginning, and would tell you that anonymous
functions are just as much OO as functional. TCO is not very mainstream, and
as Guy Steele demonstrated, is just as relevant for OO as it is for
functional. Also, adoption of some concepts doesn't mean FP as a paradigm has
worked (the bigger problem is that FP doesn't even have a definition; it is
continuously redefined to mean whatever it is people associate with languages
that call themselves FP).

In short, we're not sure quite yet, hence "maybe".

~~~
markatkinson
I agree with you that, from the perspective of the article, functional
programming is still a "maybe", as it doesn't have the same sort of following
or use in large-scale projects as OO, but I suppose it might be the
perspective of the article that is incorrect.

WhatsApp has demonstrated the power of FP with Erlang, and other programming
languages like Elixir are starting to gain traction as they improve on the
older frameworks and make development more "comfortable".

Whether they will get enough traction and garner enough of a following to
become a "Yes" from this article's perspective is difficult to say, but I am
confident people will see the light of FP sooner than they think. I feel it
won't be long till FP is the new cool kid on the block; I'm just not sure who
will be responsible for leading that.

~~~
pron
> WhatsApp has demonstrated the power of FP with Erlang

WhatsApp demonstrated the power of actors and lightweight threads with Erlang.
FP had little to do with that.
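
The essence of that style is a lightweight thread owning a mailbox. A minimal
sketch in Haskell (standing in for Erlang, and obviously not WhatsApp's actual
code):

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.Chan (newChan, readChan, writeChan)
    import Control.Monad (forever)

    -- A toy "actor": a cheap thread that owns a mailbox and reacts to
    -- messages, in the style Erlang popularized.
    main :: IO ()
    main = do
      mailbox <- newChan
      _ <- forkIO $ forever $ do
        msg <- readChan mailbox
        putStrLn ("echo: " ++ msg)
      writeChan mailbox "hello"
      writeChan mailbox "world"
      threadDelay 100000  -- crude: let the actor drain its mailbox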

~~~
markatkinson
Good point... I suppose that is also what Elixir is bringing to the party.

------
andrewflnr
Speaking as someone currently in the "enamored with capabilities" phase, I'd
like to know more about how people get disillusioned with them and whether it
can be prevented. Is it something we just need to keep trying until we get it
right, or are they somehow fundamentally unworkable despite their elegance?

Also, does the increasing importance of power consumption affect the RISC vs
CISC issue? Naïvely I would expect that to work in favor of RISC.

~~~
bascule
I would put the author in the "doesn't know what he's talking about" camp in
regard to capabilities. The same could probably be said regarding the claims
that distributed computing works and that maybe we're OK at security now. All
of these feel like fields in their infancy.

Capabilities are among the few systems that can solve authorization decisions
involving three or more principals. Ambient authority systems, e.g. systems
based on ACLs, are inherently broken when dealing with three or more
principals (see
[http://waterken.sourceforge.net/aclsdont/](http://waterken.sourceforge.net/aclsdont/)):

"The ACL model is unable to make correct access decisions for interactions
involving more than two principals, since required information is not retained
across message sends. Though this deficiency has long been documented in the
published literature, it is not widely understood."

If you're looking for a place this arises in practice, look no further than
the same-origin policy in web browsers, and the complexity of three-principal
interactions where one principal is a user, another is a web site, and the
third is a malicious web site.

That's not to say we should abandon the same-origin policy, but we need
authorization primitives that seamlessly span multiple principals.
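
To make that concrete, here is a minimal sketch of the capability idea
(hypothetical names, not Capsicum's or CapTP's actual API): authority is an
unforgeable value, and holding it _is_ the authorization, so a deputy acting
for several principals can wield exactly what each of them handed it and
nothing more.

    module FileCap (FileCap, withFileCap, readVia) where

    -- The constructor is not exported, so a FileCap is unforgeable: the
    -- only way to get one is to be handed it by code that already had
    -- the authority.
    newtype FileCap = FileCap FilePath

    -- Trusted code mints the capability...
    withFileCap :: FilePath -> (FileCap -> IO a) -> IO a
    withFileCap path act = act (FileCap path)

    -- ...and possession is the only access check. There is no ambient
    -- identity for a confused deputy to be judged by.
    readVia :: FileCap -> IO String
    readVia (FileCap p) = readFile p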

Projects like Capsicum are adding capabilities to OSes like Linux and FreeBSD.

Cap'n Proto is demonstrating what's possible with a modern implementation of
CapTP.

I think the need for authorization-centric (as opposed to identity-centric)
access control systems is becoming increasingly clear. Capabilities
(particularly in the CapTP sense) are one realization of this idea but there
are others.

~~~
pron
I would put your comment in the "doesn't understand what the author is saying"
camp. When he says something has "worked" he doesn't mean it has promising
working prototypes or has the potential to improve things. He means that the
benefits have been very, very clearly demonstrated to be very significant and
widely applicable. And "clearly demonstrated" means their impact has been
shown in numerous production settings.

As this is the case for distributed systems but not for capabilities, the
author's categorization seems spot on, even if current work on distributed
systems is "broken", when "broken" has its common modern connotation of "has
(possibly much) room for improvement".

~~~
bascule
To reiterate, because you missed my original point: The author thinks we're
doing okay at security and that capabilities failed. We aren't doing okay at
security, and capabilities are still a promising solution.

The solutions where capabilities are truly most promising have no embedded
competitors. I'm looking at things like SELinux here. Few other solutions
(except e.g. Macaroons) are actually capable of making correct authorization
decisions in scenarios involving 3+ principals: the competition is broken,
and, vicariously, so are most authorization systems which try to solve the 3+
principal authorization problem correctly.

You may as well be arguing that memory-safe / garbage-collected languages lost
to C circa 1995. C is broken, and programs written in C will always be full of
holes compared to equivalent programs in memory-safe languages, in just the
same way as authorization systems not built to handle 3+ principals will make
wrong decisions. This is why it's 2015 and we're still dealing with CSRF.

If you implement SELinux at your job like I do, and have some informed
criticism about how Capsicum is unneeded because SELinux is great, I'd love to
hear it! But I'm guessing that isn't the case... I am also guessing it's an
area the OP is not particularly informed about.

~~~
pron
> you missed my original point

I think you have simply misunderstood what the author is actually saying. He
is not saying what you think he is, and I don't think you disagree with him at
all.

> The author thinks we're doing okay at security

Where does he say that? He says, "security still isn’t a first class concern
for most programmers", and puts it in the "No" column.

> capabilities are still a promising solution.

The author doesn't dispute that. In fact, he says nothing about the promise
certain technologies hold; on the contrary: "I’m much more optimistic about
research areas that haven’t yielded much real-world impact (yet), like
capability based computing and fancy type systems. It seems basically
impossible to predict what areas will become valuable over the next thirty
years."

The article, however, is not concerned with possible solutions, even those
that show great promise. It is only and solely concerned with solutions that
have been conclusively shown to work well in the field by having a wide
applicability and usage in numerous production projects. As capabilities -- so
he says and you don't seem to dispute -- aren't there _yet_ (he emphasizes the
"yet"), they belong in the "no" column. That they show great promise -- or
even maybe contain the only solution to problems we haven't been able to
tackle -- bears absolutely no relevance to the issue of whether they "have
worked" or not. It is a statement of fact that they haven't yet, and you don't
seem to dispute that.

Similarly, putting something in the "no" column doesn't mean that something
has lost. The author makes that abundantly clear. A no may well turn out to be
a yes in time. A no simply means "not yet" (while "maybe" means that it may in
fact be "working" now, we just don't have enough information to conclusively
say).

The article therefore voices no criticism on the validity of certain
technologies at all. That, too, the author makes abundantly clear. It is
nothing more than a list of those technologies that to this day have (or
haven't yet) been conclusively proven to provide significant benefits and wide
applicability in the field. Those that only show promise belong in the "no"
category. It is an inventory, not a critique.

You have no argument (so it seems) either with me or the author. In fact, I
have no knowledge on this subject at all: I have no idea what capabilities
are, I have never heard of Capsicum (or SELinux for that matter) and have
never even worked on a security-related issue. It was just clear to me that
you are responding to criticism that is simply not there, and responding with
great force by dismissing the author (who is very well versed in technology),
which is rarely a good idea, especially if you don't carefully read what he
has to say.

~~~
bascule
> Where does he say that? He says, "security still isn’t a first class concern
> for most programmers", and puts it in the "No" column.

He put it in the "Maybe" column for 2015. Try searching for "Security".

So Capsicum patches landed in the Linux kernel mainline, Kenton Varda is
shipping Cap'n Proto and Sandstorm.io, and capabilities get a "no" but
security gets a "maybe"? K.

> I have no knowledge on this subject at all: I have no idea what capabilities
> are, I have never heard of Capsicum (or SELinux for that matter) and have
> never even worked on a security-related issue.

Well that explains a lot...

~~~
pron
You still fail to say where you disagree with what the article says as opposed
to disagreeing with what you think the article says. It's not a matter of
knowing the subject matter but of text comprehension. Sorry, I missed the
"maybe" on security, but he explains that "a handful of projects with very
high real world impact get a lot of mileage out of security research". With
all due respect to Cap'n Proto and Sandstorm.io, neither have "very high real
world impact". If you think capabilities are widely used in production, in
multiple domains, and have had a significant and conclusive contribution to
the industry, then you'd be in disagreement with the author, and I'm sure he'd
gladly change his description in light of the new information. Like I said,
it's an inventory, not a critique.

------
cm2187
I am not sure I would categorize type systems as a maybe. Dynamically typed
languages succeeded only because they are simple and popular with beginners
(Python, PHP, JavaScript), and hence are popular by number of programmers, or
because they are the only way to be cross-platform (JavaScript). But I am not
convinced that dynamic languages "work". They are great for simple things but
completely break down on large projects.
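
As a small illustration of the large-project point (a toy sketch with
hypothetical names): after a refactor changes a signature, a static checker
flags every stale call site at compile time, whereas a dynamic language defers
the failure until that code path actually runs.

    data Currency = USD | EUR

    -- Suppose a refactor added the Currency argument.
    price :: Currency -> Int -> Int
    price USD cents = cents
    price EUR cents = cents * 2  -- toy rate

    -- A stale call like 'price 100' is now a compile-time error; in a
    -- dynamic language the same mistake surfaces only at runtime.
    total :: Int
    total = price USD 100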

------
vseloved
First of all, this isn't so much about CS as about Computer Engineering. Next,
as pointed out already, this selection is quite random and subjective.

------
chvid
What a strange random list of things big and small loosely related to
computers.

So "bitmaps" work? "Software engineering" does not.

Sure things change in this industry. But you really need a bit more focus to
capture what is going on and what drives it.

~~~
brudgers
In 1995, "Software Engineering" would usually have implied CASE tools and
heavyweight UML. Its current "success" is largely based on a bit of
redefinition toward testing, team interactions, and common values. They are
rather different notions.

~~~
chvid
Yes. Whereas "bitmaps" here probably refers to pixel-level raster graphics,
"software engineering" is an entire field of its own.

If anything, we are moving away from raster graphics toward vector graphics,
declarative approaches, and display schemes that adapt to the device and its
resolution.

------
sklogic
RPC has won indeed, in many different ways. Classic RISC made its way into the
embedded, low power devices and clearly won, at least by the sheer numbers of
chips produced. Formal methods settled strongly in the hardware design [1].
Distributed computing made a very strong comeback (map-reduce, clouds, all
that). So, in general, it's very outdated. It was already outdated even in
1999.

[1]
[https://www.cl.cam.ac.uk/~jrh13/slides/nasa-14apr10/slides.pdf](https://www.cl.cam.ac.uk/~jrh13/slides/nasa-14apr10/slides.pdf)

------
toolslive
What about Objects/Subtypes? A 'Yes' in 1999, but these days we've learned the
hard way that there are at least 2 contexts where objects don't quite work:

      - relational databases
        (objects & classes are organized in trees,
         and a tree is a sin in RDB land)
      - concurrency & parallelism

so a No? (although some people keep on trying)

------
guard-of-terra
I won't say RPC won.

We don't use Remote Procedure Calls much; we do Remote Endpoint Calls. Every
endpoint still has to be considered, specified, secured, and protected against
DoS. None of that is automated; for example, endpoint specification systems
famously didn't work. It's not that we fire random procedures on remote
machines. We use protocols in the end, not procedures.

------
lambdalambda
> _" DEC started a project to do dynamic translation from x86 to Alpha; at the
> time the project started, the projected performance of x86 basically running
> in emulation on Alpha was substantially better than native x86 on Intel
> chips."_

I've heard this several times, but would love to read more about it. Anyone
know where to find more information?

~~~
topkekz
[https://en.wikipedia.org/wiki/FX!32](https://en.wikipedia.org/wiki/FX!32)

(page 7)
[http://web.stanford.edu/class/cs343/resources/fx32.pdf](http://web.stanford.edu/class/cs343/resources/fx32.pdf)

note that it was a 500 MHz DEC Alpha vs a 200 MHz Pentium

------
rbanffy
> In retrospect, the reason RISC chips looked so good in the 80s was that you
> could fit a complete RISC microprocessor onto a single chip, which wasn’t
> true of x86 chips at the time.

This is simply not true. In fact, the first RISC I saw, the 88000, would not
fit on one chip.

------
peter303
Software Engineering is a loaded question. It covers the range from formal
planning bureaucracies to seat-of-the-pants extreme programming. Any software
company that makes it into its second decade has probably hacked together
something that works.

------
tikhonj
As usual, read this as "what's popular in Computer Science" more than "what's
worked". Lots of things listed in the Maybe and No category (both the original
table and in the article) _do_ work but aren't super-popular and some things
listed as "Yes" like bitmaps and subtypes are bad but ubiquitous.

Apart from my usual spiel on how popularity is not particularly meaningful[1],
this is also an example of a fascinating idea from poker: "results oriented
thinking"[2]. It's a terrible name, but a crucial concept: you can do
everything right and still fail or you can do things wrong and still succeed.
In poker it means you can bet correctly and still lose to the luck of the draw
and vice-versa; you can't generalize from a single example or even a handful
of examples. Of course, this is exactly what so many people in industry do.
_Results oriented thinking is endemic in practical software engineering._

We even have our own name for it, or at least part of it: the IBM effect.
"Nobody ever got fired for buying IBM."[3] If you do the same old thing and
fail, well, things happen. If you try something new and fail, well that thing
you tried has to be terrible! Never do it again! Of course, in reality, those
"things that happen" still dominate, but that's not how people make decisions.
Then it's all magnified through institutional memory and "common wisdom".

It's also magnified because there's a certain kind of personality that values
pragmatism above all else and categorically opposes anything new, idealistic
or academic. (To be fair, pretty much the opposite of me :P.) The tricky thing
is that they have a point... some of the time. But they'll use any sort of
failure to rationalize their view, and because decision makers tend to be so
results-oriented this sort of rationalization has an outsized influence.

Putting this all together, both this article and Butler Lampson's original
argument tell us a lot more about _the social dynamics of software
engineering_ than they do about _computer science_. There are still
interesting insights to be had, but _certainly not the ones the article is
arguing for_.

[1]:
[https://news.ycombinator.com/item?id=10567962](https://news.ycombinator.com/item?id=10567962)

[2]: Unfortunately, I haven't found a single good description of "results
oriented thinking" as the phrase is used here. This blog post is okay, but the
first part plays dubious semantic games with words that you may as well
ignore: [http://randomdirections.com/why-being-results-oriented-is-actually-bad/](http://randomdirections.com/why-being-results-oriented-is-actually-bad/)

[3]: Hilariously, this was the first Google result for the phrase:
[https://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt](https://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt)

~~~
pron
> As usual, read this as "what's popular in Computer Science" more than
> "what's worked"

I don't think so. By "worked" he means that the benefits have been very, very
clearly demonstrated to be very significant and widely applicable. And
"clearly demonstrated" means their impact has been shown in numerous
production settings.

> Lots of things listed in the Maybe and No category (both the original table
> and in the article) do work but aren't super-popular

Not in the sense he means, which doesn't mean popular, but rather "clearly
demonstrated to provide great benefits in the field with wide applicability".

> and some things listed as "Yes" like bitmaps and subtypes are bad but
> ubiquitous.

I don't know what you mean by "bad". If by _bad_ you mean "have some serious
drawbacks" or "can be possibly replaced by something better", then sure, but
that doesn't mean they haven't worked. If by "bad" you mean that their
benefits haven't been clearly demonstrated, then I'd disagree about your
conclusion on bitmaps and subtypes.

> It's also magnified because there's a certain kind of personality that
> values pragmatism above all else and categorically opposes anything new,
> idealistic or academic.

Or is simply wary of incorporating ideas that haven't been sufficiently proven
into products with billions of dollars on the line. OTOH, there is another
kind of personality that confuses pure research with battle-tested ideas, and
thinks that something that is shown to have benefits in research settings will
surely have those benefits in the field without worse side-effects.

> decision makers tend to be so results-oriented

Shouldn't they be? Would you rather the people making the decisions on the
software used to run your power stations, airports, banks and government
embrace the "new" and be "idealistic or academic"?

Research is research, and the industry is the industry, and ideas have a very,
very long maturation process of going from the research stage to the
"demonstrably works at a large scale in the industry" phase. That's a good
thing, and that's what it's like in all disciplines. Software is rather unique
in that it has people who think this shouldn't be the case. Thank God they're
not the decision makers.

> We even have our own name for it, or at least part of it: the IBM effect

And now, ironically, you're "spreading FUD" against an approach that is simply
skeptical of things touted to be "the next big thing". Some of us have been in
this industry long enough to know that most great ideas turn out not to be.
That's not "nobody ever got fired for buying IBM", but "let's not bet billions
of dollars and possibly people's lives on the flavor of the week just yet
until we gather more evidence".

~~~
tikhonj
> Shouldn't they be?

No. And the people who make nuclear power plants and planes certainly _don't_
engage in results-oriented thinking: you could easily take out half the safety
features they build and not run into any catastrophes, but that's not an
acceptable risk. Software companies, on the other hand, are perfectly happy
taking on long-term and long-tail risk if it's more or less worked in the
past, in large part because failure is far more forgivable. (Which, to be
clear, is a good thing in and of itself, but does incentivize some unfortunate
kinds of behavior.)

The whole point is that good decisions lead to bad outcomes and bad decisions
lead to good outcomes all the time, but politics pushes people to react
_strongly_ to results, ignoring this fact. This is quite different from other
fields in engineering which by necessity, tradition, culture or even
regulation operate differently.

As I said, "results oriented thinking" is a terrible name which leads to
confusion, but it's what we have. Unfortunately I can't think of a better one
and if I did it probably wouldn't catch on. (Which, amusingly, is an example
of [2] above :).)

~~~
pron
> And the people who make nuclear power plants and planes certainly don't
> engage in results-oriented thinking: you could easily take out half the
> safety features they build and not run into any catastrophes

As someone who's once been there (not power plants but ensuring the safety of
systems whose failure may endanger the lives of many people), it's much more
complicated than that. If by "results oriented" you mean "look at nothing but
the bottom line", I think you're disrespecting a lot of people by assuming
(with little evidence) that this is indeed the process. I can tell you that it
isn't, and not just in safety-critical software. Of course, there may be some
bad people as in every field, but I've seen no evidence to suggest that
"decision makers" are on average any worse at their job than anyone else. We
_were_ result-oriented in the sense that we preferred not to let anything that
didn't have _industry-proven_ benefits into our system.

Ironically, it is sometimes the safety-critical systems (well, some classes of
them) that can afford to take _more_ risk, because they have the resources to
build a new system and then run it in production alongside the old one only
its outputs are not piped to real actuators (we called it a "shadow system"),
and do that for a _long time_ (a year if not more) before flipping the switch.
"Plain" software usually can't afford to be so patient.

> The whole point is that good decisions lead to bad outcomes and bad
> decisions lead to good outcomes all the time

Of course. Our disagreement is on what constitutes a good decision.

> politics pushes people to react strongly to results, ignoring this fact.

I don't think so. I've been a decision maker (though not alone, and sometimes
didn't have the final say) in both safety-critical settings and less risky
ones, and it's not politics but often the lack of a better metric. I can
assure you that we've extensively studied the reasons for each failure or
success _to the best of our abilities_ and never looked _just_ at the bottom
line if we had any more information. We took quite a few calculated risks
(changed our numerical algorithms, chose the yet-unproven-at-the-time real-
time Java over C++ for a safety-critical, hard-realtime system and more), but
always opted for things that had been proven _in the industry_ (or tried to
prove them ourselves). It's not politics that led us to that, but lots of
experience. That's because things that work in the lab fail in the field more
often than not. Ignoring _that_ fact is just stupid (or, rather, shows a lack
of experience in the field), especially when the stakes are high (and they're
always high to some degree in the industry).

Besides, _why_ risk anything on something that doesn't have strong evidence of
significant benefits? We look at people like you to show us those benefits,
and if you don't -- or you think you've found clear benefits, but those
benefits simply aren't significant enough for us -- why take any risk? We
gladly risk quite a bit when there's good reason to believe the payoff would
be large enough. Even that (let alone battle-testing) is something some of the
things you'd put in the "works" category have yet to demonstrate.

That is something we in the industry tell researchers all the time. If you're
content doing pure research, do it and don't whine about us not adopting your
work. If not, and you would like us to use your stuff, there's a lot more you
need to do than your pure research, like extensive applied research with
empirical results, and, most importantly, it requires understanding the
industry's needs (e.g. in the industry something is not adopted because it is
"better"; it will only be adopted if it is better enough to offset the
adoption cost plus associated risk). But what you can't do is have it both
ways -- do only the pure research _and_ whine about us not using it. Doing so
shows a lack of understanding of what the industry is, and reflects badly on
those researchers much more than on the industry (which, BTW, has made amazing
achievements in software).

The industry adopts brand new stuff _all the time_ when the researchers do
their job properly (well, to be fair, for some it's a lot easier). Suppose I'm
a decision maker and someone says to me, my new sorting algorithm would be 20%
faster for the data you're sorting. Well, the adoption cost is nearly zero, I
can test the payoff rather easily, there is risk involved but it's mitigated
by my ability to switch back, so I find the 20% payoff to be high enough and I
go for it. This happens _all the time_. In fact, millions of people will soon
switch to a brand new -- and rather ambitious -- GC algorithm (granted, one
that's been tested for nearly a decade). But then some other guy shows up and
says, my new programming language would make your development better. How much
better? I ask. Much! he says. Talk to me in numbers, I say. By how much would
my development cost be reduced? Hmm, he says. I'm not sure; 30%... maybe? So I
look at the adoption cost (huge), risk (very high), and find that 30% to be
rather low, but let's suppose it's borderline good-enough. Only he's not even
sure about the 30% because he has no experience with large systems and he's
never even tested his claim. Maybe it's 5%, maybe 70% and maybe -20%! So let
me tell you that in that case, I will be _very_ results oriented, look around,
see that practically no one else uses that language and those who do see
payoffs nowhere near high enough, so I pass[1]... and that's when you say I'm
motivated by politics, hostile to academia and afraid of new stuff.

[1]: True, it isn't fair. Technologies with high adoption costs incur a much
higher testing burden _and_ are required to yield much higher payoffs than
those with lower adoption costs. But that's no one's fault, and just the way
it is. Researchers in those fields should keep on working, realizing that
their payoffs would be high enough to be adopted by the industry every 20
years instead of every 5 as in low-adoption-cost cases.

------
robotresearcher
Bad title. The list of topics is a very small fraction of the activity in
computer science. The original Lampson title was fine. Why change it?

------
Olshansky
Dependency injection.

- No one ever

