
What's worked in Computer Science (2015) - wheresvic1
https://danluu.com/butler-lampson-1999/
======
zellyn
I'd love to see an updated version of this. After reading the stories from the
Rust/Servo/Stylo developers, it feels like Rust might just be pushing both
"fancy type systems" and "parallelism" to the left.

Also, if you count GPU programming under "parallelism" then it's gotta be a
huge YES.

~~~
hyperpape
He agrees that both of these points are reasonable in 2017
[https://twitter.com/danluu/status/935878538623901696](https://twitter.com/danluu/status/935878538623901696)

(though I guess _you_ know that, since you're the one who asked him...)

~~~
zellyn
:-) Yes, I thought of twitter after posting here.

------
theandrewbailey
I don't understand what is meant by "capabilities". My understanding of what
capabilities are is so abstract that asking whether they worked doesn't seem
like an answerable question. I might have missed something, or this guy is an
architecture astronaut (both?).

~~~
pjc50
[https://linux.die.net/man/7/capabilities](https://linux.die.net/man/7/capabilities)

In some ways the Android/iOS permissions system is a form of "capabilities".
Granularity is a problem: too granular and they're tedious, not granular
enough and you end up leaking your contacts to your torch app.

~~~
chubot
Linux capabilities aren't capabilities in the sense of capability-based
security! They don't follow that security model.

It's just an unfortunate pun. Capability-based security wasn't mainstream at
the time they were invented. It arguably isn't now either. You can argue that
Android/iOS are capability-based, but the model is pretty watered down.

Capability-based security for end users (rather than sys admins) is an
unsolved problem as far as I can tell.

See this page:

[https://en.wikipedia.org/wiki/Capability-based_security](https://en.wikipedia.org/wiki/Capability-based_security)

 _A capability (known in some systems as a key) is a communicable, unforgeable
token of authority._

Linux capabilities don't meet this definition. They're just permissions.
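A toy sketch of the distinction (all names invented; this is an illustration, not any real API). Ambient authority checks a global table against a name the callee supplies; a capability is a token that itself carries the authority, so holding it is the permission:

```java
import java.util.Set;

class CapabilityDemo {
    // Ambient-authority style: the callee names the resource and the
    // system consults a global permission table (as Linux "capabilities"
    // and classic Unix permissions do).
    static final Set<String> PERMITTED = Set.of("/tmp/report.txt");

    static String readAmbient(String path) {
        if (!PERMITTED.contains(path)) throw new SecurityException("denied");
        return "contents of " + path;
    }

    // Capability style: holding the token *is* the permission; the callee
    // cannot name or reach any resource it was not explicitly handed.
    interface ReadCapability {
        String read();
    }

    // Only a trusted broker mints capabilities.
    static ReadCapability grant(String path) {
        return () -> "contents of " + path;
    }

    static String readWithCapability(ReadCapability cap) {
        return cap.read(); // no global check needed
    }
}
```

The capability version also shows why the token must be unforgeable: if the callee could construct its own `ReadCapability`, the model collapses back into ambient authority.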

The sibling comment is also correct:

[https://news.ycombinator.com/item?id=15807241](https://news.ycombinator.com/item?id=15807241)

------
geophile
Functional programming and GC are slowly moving up. Looking forward to the
(pure) Lisp resurgence of the 2020s.

~~~
Sammi
Lisp languages aren't really growing. The most hyped Lisp-style language is
Clojure and even it isn't growing. Here's a Google Trends comparison of
Clojure, Clojurescript, Lisp, Racket, and Scheme:

[https://trends.google.com/trends/explore?date=2007-01-01%202...](https://trends.google.com/trends/explore?date=2007-01-01%202017-11-29&q=clojure,clojurescript,%2Fm%2F04kyw,%2Fm%2F0974fb,%2Fm%2F06zrb)

Things aren't looking so hot for ML style languages like F#, Ocaml, and
Haskell either:

[https://trends.google.com/trends/explore?date=all&q=%2Fm%2F0...](https://trends.google.com/trends/explore?date=all&q=%2Fm%2F03j_q,f%23,%2Fm%2F09wmx)

The only functional programming language that is showing any real growth is
Scala, but that is also modest compared to the ones that are already much
bigger or growing much faster:

[https://stackoverflow.blog/2017/09/06/incredible-growth-pyth...](https://stackoverflow.blog/2017/09/06/incredible-growth-python/)

Where functional programming is growing is in existing large OO languages. All
of them now have lambdas. Most have mature immutable and composable data
structure libraries. And other functional style syntaxes like pattern matching
are being introduced in many of them.
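A small sketch of what that looks like in Java (invented names): lambdas as arguments, an immutable list, and a composable stream pipeline, all in a mainstream OO language:

```java
import java.util.List;
import java.util.stream.Collectors;

class FpInJavaDemo {
    static List<Integer> doubledEvens(List<Integer> xs) {
        return xs.stream()
                 .filter(x -> x % 2 == 0)   // lambda as predicate
                 .map(x -> x * 2)           // lambda as transformer
                 .collect(Collectors.toList());
    }
}
```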

~~~
charlysl
"other functional style syntaxes like pattern matching are being introduced in
many of them"

The Interpreter pattern and its dual, the Visitor (both GoF patterns), have
made what amounts to pattern matching possible in OO languages for a few
decades now [1].

The whole point of these patterns is to write recursive functions following
recursive data types, which is one of the main use cases for pattern matching
(ok, more generally these are used to break data structures apart).

In particular, if you look at an example of the Visitor pattern, you will see
that it has exactly the same structure as a recursive function that separates
cases in an FP language [2].

Here is a detailed explanation of how to do it [3]; it is the "little
language" technique (which I suspect is the reason why it is called the
Interpreter).
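As a sketch (invented names), a Visitor over a tiny expression type reads exactly like a two-clause recursive function in an ML-style language: each `visit` method is one pattern clause, and the recursion follows the shape of the data type:

```java
class ExprDemo {
    interface Expr { int accept(Visitor v); }

    interface Visitor {
        int visit(Num n);   // case Num(value)
        int visit(Add a);   // case Add(left, right)
    }

    static final class Num implements Expr {
        final int value;
        Num(int value) { this.value = value; }
        public int accept(Visitor v) { return v.visit(this); }
    }

    static final class Add implements Expr {
        final Expr left, right;
        Add(Expr left, Expr right) { this.left = left; this.right = right; }
        public int accept(Visitor v) { return v.visit(this); }
    }

    // The visitor plays the role of the pattern match; double dispatch
    // selects the clause that corresponds to the runtime case.
    static int eval(Expr e) {
        return e.accept(new Visitor() {
            public int visit(Num n) { return n.value; }
            public int visit(Add a) { return eval(a.left) + eval(a.right); }
        });
    }
}
```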

[1] [https://eli.thegreenplace.net/2016/the-expression-problem-an...](https://eli.thegreenplace.net/2016/the-expression-problem-and-its-solutions/)

[2] [http://blog.higher-order.com/blog/2009/08/21/structural-patt...](http://blog.higher-order.com/blog/2009/08/21/structural-pattern-matching-in-java/)

[3] [https://ocw.mit.edu/courses/electrical-engineering-and-compu...](https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-005-elements-of-software-construction-fall-2008/lecture-notes/MIT6_005f08_lec15.pdf)

------
lph
RISC: "If this was a Maybe in 1999 it’s certainly a No now."

My cell phone begs to differ.

~~~
mastax
The Acorn RISC Machine is pretty CISCy nowadays.

~~~
tmccrmck
I tend to agree. Group loads and stores aren't very RISCy to me. But then
again, you could make the case that any x86 machine is RISC under the covers.

------
zackmorris
I learned to program sometime around 1988 or 89 when I was 12 and have been
doing it for 28 years. I saw several of these rise and fall, and conventions
have changed greatly even though most of us are still using C-style languages.

The highest leverage I've found comes from spreadsheets and relational
databases because they require little or no code to accomplish so much. So I
think that declarative or 4th generation programming languages deserve an
honorable mention in the "yes" category.

The lowest leverage has been on the object-oriented side, especially with C++.
Java is the least abstract language I've used in recent memory, due to its
obsession with staying as far from functional programming as it can (closures
were accomplished with anonymous objects until version 8!). There is simply no
way to infer where a value was set without tracing the entire code, which
slows development time by a factor of 10 to 100 over more functional
techniques that use methods to derive state instead of values. So I think the
"No" category needs a mention of highly imperative code and its tendency to
become spaghetti.

The single most damaging practice I encounter daily is the breaking of
convention and the loss of programming context that we've built up in our
minds over the years. So while most functional programming languages are
conceptually brilliant, they stray so far from mainstream languages like
Javascript that I think they complicate syntax unnecessarily. Heavy-handed
patterns also urge us to segregate code into parts when the logic doesn't
require it, which I often find unintuitive. Mobile development especially,
with its obsession with delegation, event broadcasting, observers and
nondeterministic behavior of partially uninitialized views (especially
RecyclerView and UITableView) feels like always having to remember 10 or 20
digits when we can only comfortably manage 7. Types for the sake of types will
likely expand code size and complexity by an order of magnitude when a simple
compiler would have accomplished nearly as much. I have low confidence that
any of these will survive the next tech bubble.

I view parallelism and AI as a great uncharted sector of computer science that
was stifled by Microsoft, Apple, Intel and NVIDIA. Had FPGAs or open source
graphics cards taken off after the 90s, we would have a very different view of
things like the Actor model (Erlang, Go, etc). So shared threads are a huge
"no", but the private address spaces/streams/message passing and copy-on-write
model of UNIX is a huge "yes" and we'll likely see these techniques adopted at
the language level in the next decade. It would also be nice to have the
numerical computing techniques from MATLAB so we can operate on matrices
instead of primitives and saturate the hardware with less effort. Under these
tools and conventions, AI loses much of its mystery because you realize you
can always brute force a solution and evolve more elegant ones later to find a
better local optimum.

~~~
charlysl
"closures were accomplished with anonymous objects until version 8!"

Java has had closures through inner classes since the beginning, albeit
admittedly in a verbose way. This is a quote from page 540 of "Concepts,
Techniques and Models of Computer Programming": _Full lexical scoping. Full
lexical scoping means that the language supports procedure values with
external references. This allows a class to be defined inside the scope
of a procedure or another class. Both Smalltalk-80 and Java support procedure
values (with some restrictions). In Java they are instances of inner classes
(i.e., nested classes). They are quite verbose due to the class syntax (see
the epigraph at the beginning of this section)._

But I think this is already enough to do higher-order programming, given that
it effectively allows having functions as values (and hence higher-order
functions), and supports all four basic techniques that higher-order
programming builds upon: procedural abstraction (which relies on closures),
genericity, instantiation, and embedding [1].
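A minimal sketch of that pre-Java-8 style (invented names): an anonymous inner class acting as a closure over a variable in the enclosing scope, plus a higher-order use of the resulting procedure value:

```java
class ClosureDemo {
    interface IntFn { int apply(int x); }

    // Verbose, but a genuine closure: the anonymous class closes over n,
    // an external reference from the enclosing scope.
    static IntFn makeAdder(final int n) {
        return new IntFn() {
            public int apply(int x) { return x + n; }
        };
    }

    // A higher-order function: takes a procedure value as an argument.
    static int applyTwice(IntFn f, int x) { return f.apply(f.apply(x)); }
}
```

Since Java 8 the same closure is just `n -> x -> x + n` written with lambdas, but the inner-class form above was available from the beginning.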

I have to admit that I am not sure if I understand what you mean by "There is
simply no way to infer where a value was set without tracing the entire code,
which slows development time by a factor of 10 to 100 over more functional
techniques that use methods to derive state instead of values". My (possibly
wrong) interpretation is that you are referring to lazy execution, given that
this technique associates a variable with a function that creates its value
but whose execution is delayed until triggered when the value is actually
needed.

Is this correct? If it is, then explicit lazy execution can certainly be
accomplished in Java, given that, as I said before, it allows embedding
procedure values in a data structure. Of course, this is not as convenient as
implicit lazy execution in Haskell with its non-strict evaluation, but it
achieves the main purpose of laziness all the same: to optimize resources by
letting consumers rather than producers drive value creation, so that data
structures are generated incrementally (as they are needed) rather than
eagerly (as in simpler data-driven techniques); this is stream programming.
Another advantage of laziness is purely functional persistent data
structures, in some cases with good performance ([3] and [4]).
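A minimal sketch of such explicit laziness in Java (invented names): a procedure value (a thunk) embedded in a data structure and forced on demand, with the result cached so the computation runs at most once:

```java
import java.util.function.Supplier;

class LazyDemo {
    static final class Lazy<T> {
        private Supplier<T> thunk;
        private T value;
        Lazy(Supplier<T> thunk) { this.thunk = thunk; }
        T force() {
            if (thunk != null) {   // first demand: run and discard the thunk
                value = thunk.get();
                thunk = null;
            }
            return value;
        }
    }

    static int runs = 0;

    // The consumer, not the producer, decides when (and whether) the
    // expensive value is actually computed.
    static Lazy<Integer> expensive() {
        return new Lazy<>(() -> { runs++; return 42; });
    }
}
```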

"It would also be nice to have the numerical computing techniques from MATLAB"

I couldn't agree more with this. For instance, if possible, for an ML project
I think it's better to do as much research/experiments/development as possible
in Matlab/Octave, and then translate it to Python or whatever the production
language is (I would love to hear opinions on this). It would be so nice if
such translation could be avoided (maybe a preprocessor could automate it, but
still ...).

I believe the reason is that Matlab is just a much more natural language for
mathematical programming. I did the wonderful MOOC [2] in Matlab, and much
preferred it to the Python solutions I saw.

[1] [https://ocw.mit.edu/courses/electrical-engineering-and-compu...](https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-005-elements-of-software-construction-fall-2008/lecture-notes/MIT6_005f08_lec15.pdf)

[2] [https://www.edx.org/course/learning-data-introductory-machin...](https://www.edx.org/course/learning-data-introductory-machine-caltechx-cs1156x-0)

[3] [http://lambda-the-ultimate.org/node/2665](http://lambda-the-ultimate.org/node/2665)

[4] [https://www.cs.cmu.edu/~rwh/theses/okasaki.pdf](https://www.cs.cmu.edu/~rwh/theses/okasaki.pdf)

~~~
charlysl
OK, apologies for making such a long post. Here is my point (which has turned
out to be pretty long too):

1) I have realized that systematically learning programming concepts, models
and algorithms first, and how to apply them through programming techniques is
much more important than just keeping on learning new languages and
frameworks. For this, I am finding "Concepts, Techniques and Models of
Computer Programming" invaluable; it covers an amazing amount of ground, and
has an excellent bibliography to know where to learn more of a particular
aspect. It is a successor to SICP, which is very good too and which, I suspect
as a tribute, is the first reference in CTM.

2) Learn how to do lightweight design upfront, based only on the
problem/requirement and independently of language; the book above helps
enormously, but I also really like the approach in the 2008 version of the MIT
course 6.005 (later versions are watered down IMHO) [1]. It reinforces this
design approach, and has, AFAIK, the best approach to using Design Patterns
properly and systematically: to bridge the gap between design and code, which
this way just flows almost automatically from the design. It may not be
immediately obvious, but this course has a lot in common with SICP. For this
reason I am highly sceptical of TDD, although I love to write the tests first
and incrementally.

3) Once you know 1), many aspects of learning a new language and using it well
are a piece of cake, because what you learned in 1) transcends any particular
language and is timeless; I am now starting to enjoy mapping what I learned in
1) to different languages.

4) if you have to or want to use a language that is lacking direct support for
anything in 1), if the language's computing model allows it (and many do, it
is amazing how adding just one little concept to a language opens a whole
range of possibilities but also problems, e.g. destructive assignment,
threads, exceptions, triggers, ports, dataflow variables, etc.), you can just
build an abstraction or find a mature library that implements it. It is
equally amazing how much you can still accomplish even if you remove one of
those concepts from a language, like destructive assignment, if you know
the right techniques. It may not be as efficient and convenient as a built in
abstraction, but will very likely still be much better than the alternative of
implementing a design that is not the right one for the problem/requirement.

5) there is no need at all to be intimidated by a language or system with an
enormous amount of features/libraries. Most of the time you only need to
select a subset of the language to accomplish your goal efficiently. From 1)
you will learn exactly how to do this and exactly why it is crazy to try to
use the full power of a language; you should actually do the exact opposite,
use as few concepts of a language as you can get away with while still writing
a program that has a good balance of all required good properties.

[1] [https://ocw.mit.edu/courses/electrical-engineering-and-compu...](https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-005-elements-of-software-construction-fall-2008/)

------
bogomipz
The author states:

>" DEC started a project to do dynamic translation from x86 to Alpha; at the
time the project started, the projected performance of x86 basically running
in emulation on Alpha was substantially better than native x86 on Intel
chips."

I'm really curious how an Alpha emulating x86 could be faster than native
x86. Can someone shed some light on how this would be possible?

~~~
draven
And a bit further in the article: "By the time DEC finished their dynamic
translation efforts, x86 in translation was barely faster than native x86 in
floating point code, and substantially slower in integer code."

So perhaps they were overly optimistic.

------
pyrale
It appears that the author uses adoption as his main metric of what works.

It's also quite hilarious that the author lists fancy type systems as a
definite No, but formal methods as a maybe.

------
raiflip
I'm confused as to why he doesn't put functional programming as a yes. He
writes "Functional languages are still quite niche, but functional programming
ideas are now mainstream, at least for the HN/reddit/twitter crowd."

While purely functional languages are definitely not mainstream, as he puts
it, functional programming ideas are pretty mainstream. At the same time,
major OO languages like Java have incorporated functional programming ideas,
so arguably even Java is not purely object-oriented anymore. It seems to me
that by his criteria OO would be a no too.

If he comes from OO languages, I think this is a bias: it is easy to take OO
for granted while putting strict requirements on foreign functional
programming before accepting it as successful. However, it seems to me the
only criterion needed to call either paradigm successful is whether its ideas
are mainstream, which is true for both.

Basically, if functional programming ideas have gained currency, then it
should be regarded as successful.

~~~
Tomis02
I think it's important to make the distinction between a successful idea and a
good one. For example OO as a paradigm in mainstream languages (C++, Java) has
been very successful, despite it being more often than not a bad idea (e.g.
horrible inheritance hierarchies are still implemented in this day and age,
with dire consequences for code quality).

In OP's article, OO is filed under "what works", which I strongly disagree
with. If anything, software development works DESPITE OO, which in my
experience constantly hinders progress because of rigid architectures.

~~~
raiflip
As much as FP has made my life easier, have to disagree with you on OO. Any
language/paradigm is capable of producing bad code. In my first job a high up
manager brought in Clojure to a department filled with Java and OO acolytes.
While the language was amazing, the Clojure code base was filled with gigantic
let statements simulating imperative code, functional spaghetti, and, given
the lack of a good IDE, it was difficult to debug (we had to throw exceptions
just to see variable values... _shudders_ ). The point is not that FP is bad, just that
any tool in unskilled hands can produce bad code.

Not sure what it is like for the industry at large, but from what I've seen a
dominant paradigm in Java is to separate out model classes that are basically
just structs with Service or Workflow classes that manipulate the data. It's
questionable how much of that is still OO (at least when I started learning it
the narrative was to put your code where your data is. POJOs are the opposite
of that). Say you have a high-level service that calls low-level services and
external systems while returning a result: that produces code that's very
easy to trace and provides a solid abstraction to the calling code. Also, if you've read
Uncle Bob's data structure vs. Object dichotomy you'll see how Objects are
objectively better in some scenarios, and OO languages give you powerful tools
for objects.

That said 100% agree with you on horrible inheritance hierarchies. What I
think would be best is some course on how to combine OO and FP paradigms to
get the best of both worlds (though tbh it'd probably be 90% FP with some OO).

~~~
charlysl
"What I think would be best is some course on how to combine OO and FP
paradigms to get the best of both worlds"

If you want to find out, read "Concepts, Techniques, and Models of Computer
Programming". It's mindblowing. It answers that question exactly.

This is a dense but exceptionally well-written 1000-page book, but maybe you
can just read section 4.8.5 to get an idea [1]. However, I think it will take
more than just reading this bit to understand the different models (many more
than the two everybody talks about).

Section 4.8.7 is actually "Using different models together", with the key
"impedance matching" concept.

The author sort of summarizes it here [2].

There is also a MOOC on edX, but it just scratches the surface of what's in
the book [3].

[1] [https://books.google.com.vn/books?id=_bmyEnUnfTsC&pg=PA322&l...](https://books.google.com.vn/books?id=_bmyEnUnfTsC&pg=PA322&lpg=PA322&dq=picking+the+right+model+concepts+techniques&source=bl&ots=nCktxEdEEl&sig=G00yeodibw3ZSxOGoRdjF8x1cUQ&hl=en&sa=X&ved=0ahUKEwiLiOKOr-TXAhULabwKHVdIB7wQ6AEIJzAE#v=onepage&q=picking%20the%20right%20model%20concepts%20techniques&f=false)

[2] [http://lambda-the-ultimate.org/node/4698](http://lambda-the-ultimate.org/node/4698)

[3] [http://lambda-the-ultimate.org/node/4842](http://lambda-the-ultimate.org/node/4842)

~~~
raiflip
Cool, thanks! Checking it out.

------
kyberias
What are the new Nos in 2017?

