
Programming Paradigms for Dummies: What Every Programmer Should Know (2009) [pdf] - tiniuclx
https://www.info.ucl.ac.be/~pvr/VanRoyChapter.pdf
======
charlysl
I have read CTM, the author's book. He did in fact dislike the word "paradigm"
and preferred "computation model" instead.

A model is a set of concepts. A concept is an orthogonal language feature,
like closures, concurrency, explicit state (which he now calls named state),
exceptions, etc.

His approach is not so much that you should select one language that supports
a paradigm that seems the most suitable for a given project. Rather, what he
advocates is that you should use a language that supports multiple cleanly
separated concepts (close to what is called multiparadigm), and then you
select the simplest set of concepts for each program component.

This is the principle of least expressiveness: you should choose the simplest
model (simple meaning that it is easy to reason about and to get right) that
keeps the code natural (meaning there is little code unrelated to the problem
at hand, little plumbing).

Each model has an associated set of programming techniques, like accumulators
for FP or transactions for stateful concurrency.
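
For instance, the accumulator technique: instead of mutating a variable, you
thread the partial result through the calls. A minimal sketch (Python here just
for illustration; CTM itself uses Oz):

    def reverse(xs, acc=None):
        # Thread the partial result ("accumulator") through the recursion
        # instead of mutating a variable.
        acc = [] if acc is None else acc
        if not xs:
            return acc
        return reverse(xs[1:], [xs[0]] + acc)

    assert reverse([1, 2, 3]) == [3, 2, 1]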

It is possible to mix and match components that are written in different
models by using impedance matching, which consists of creating an abstraction
in the more expressive model that wraps the other component. An example would
be a serializer, which allows you to plug a non-concurrent component into a
concurrent program.
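
A rough sketch of that serializer idea (Python for illustration; the class and
method names are made up): all calls from concurrent clients are funnelled
through a single worker thread, so the wrapped component never sees
concurrency.

    import queue
    import threading

    class Counter:
        """A non-concurrent component: unsafe to call from many threads."""
        def __init__(self):
            self.n = 0
        def incr(self):
            self.n += 1
            return self.n

    class Serializer:
        """Wrap a sequential object so all calls run one at a time."""
        def __init__(self, obj):
            self._obj = obj
            self._requests = queue.Queue()
            threading.Thread(target=self._serve, daemon=True).start()

        def _serve(self):
            # The only thread that ever touches the wrapped object.
            while True:
                method, args, reply = self._requests.get()
                reply.put(getattr(self._obj, method)(*args))

        def call(self, method, *args):
            reply = queue.Queue(maxsize=1)
            self._requests.put((method, args, reply))
            return reply.get()

    shared = Serializer(Counter())   # now safe to share between threads
    workers = [threading.Thread(target=shared.call, args=("incr",))
               for _ in range(10)]
    for t in workers: t.start()
    for t in workers: t.join()
    assert shared.call("incr") == 11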

Basically, you should use FP as much as you can, but you can't if your program
needs, say, visible non-determinism, like a server in a client-server
application. But then you can add just one concept, ports, to the declarative
concurrent model, and you have a new model that allows a whole constellation
of new programming techniques, the concurrent message-passing model
(Erlang-like). The non-determinism in this model is restricted to the ports,
the only place where it's required; the rest of the program can still be
functional.
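
A sketch of the port idea (again Python rather than Oz, and the function names
are made up): senders merge their messages onto one queue, a single thread
folds a pure handler over the stream, and the only non-determinism left is the
arrival order on that queue.

    import queue
    import threading
    import time

    def spawn_port(handler, state):
        # The port merges messages from any number of senders; one thread
        # folds the pure handler (state, msg) -> state over the stream.
        port = queue.Queue()
        def loop():
            s = state
            while True:
                msg = port.get()
                if msg is None:        # shutdown sentinel
                    return
                s = handler(s, msg)
        threading.Thread(target=loop, daemon=True).start()
        return port

    def handle(count, msg):
        print(count, msg)              # e.g. log the request
        return count + 1

    server = spawn_port(handle, 0)
    server.put("request from client A")
    server.put("request from client B")
    time.sleep(0.1)                    # sketch only: let the port drain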

Other situations where FP might get stretched are when modularity or
performance is a priority.

This book helped me realize the whole FP vs OO debate is sterile; the ideal is
to have a language that allows you to use FP in a natural way, and to switch
to, say, OO in an equally natural way in some cases.

~~~
tabtab
The problem with "choose the best paradigm/computational-model for the job" is
that mastering each of the possibilities takes a lot of time for the average
programmer. It may be better to master a few than spend so much extra time
mastering them all.

I have accused some academics of "promoting ideas that require more education"
so as to line their wallet. It didn't go over well and I got counter-accused
of "promoting mediocrity" so that "my type" don't have to learn. (I don't
believe such bias is intentional, just human nature. We are all biased in ways
we don't know just by the fact we only live one life.)

Rather than delve back into that bitter debate, I ask that people consider the
economics of it: is it better on a macro-economic scale to spend extra
education to master many paradigms/techniques, or to settle on a few to get
through school/training faster? (There's always going to be niches that need
specialized training/skills.)

The average programming career is relatively short-lived: you either have to
move into management, analysis, project planning, etc. or be subject to ageism.
For good or bad, the industry doesn't "like" old programmers. RSI (wrist
problems) is also common with seasoned programmers. Thus, I believe the
shorter-education approach is the economically logical one. You are welcome to
disagree.

~~~
ergothus
> The average programming career is relatively short-lived: you either have to
> move into management, analysis, project planning, etc. or be subject to
> ageism.

I won't dispute the problems in the industry, but I'm not sure you can really
say the career is relatively short-lived. I've been at it for 20 years, and
while there are far fewer of my age-peers than there are of younger coders,
the number is also far from zero. The "requirement" that you move to a related
field doesn't seem nearly as certain as when I was younger (I've avoided all
efforts at moving away from coding itself with only minimal effort on my
part), and I just landed a new job pretty effortlessly.

In terms of the topic, I agree that newer coders should tackle a bunch of
problems in a few different approaches rather than trying to master them all
from the start (I certainly haven't, even as an old fogey), but the bigger
obstacle I've run into has been lack of free time/energy to do so, not no
longer being a programmer. Then again, the problem I
see most among younger coders is not trying to master everything and failing,
it's treating everything as a nail for their single hammer.

My anecdotal experience is not data, but it is all I have right now: do you
have a stronger source supporting the claim that the "average programming
career is relatively short-lived"? I've been at this for 20ish years and currently
expect another 20 more, but if that's foolish I'd like to know for
(reasonably) certain.

~~~
tabtab
I've seen several articles over the years which suggest a programming career
is shorter than the average career, not to mention experiencing ageism myself. I
can't re-find the prior articles at the moment, but here's an interesting
article about average programmer age:

[https://www.businessinsider.com/silicon-valley-age-programme...](https://www.businessinsider.com/silicon-valley-age-programmer-2015-4)

~~~
ergothus
Interesting. I'm certainly not disputing the ageism of the industry, but having
fewer prospects is not equal to being forced out, and I remain skeptical of the
"shorter career" - then again, I believe women are discriminated against in
both hiring and opportunities, so I can't think the same of age and expect
different outcomes. The StackOverflow survey is likely not a great data
source, but it's a data source. Also, one must account for a growing industry -
if we have notably more developers now than 20 years ago, of COURSE they will
be younger (and drag down the average age).

The data has lots of issues, but I'm forced to conclude that... yeah, ageism
likely reduces the average length of careers in coding.

~~~
tabtab
The industry has been good for coders and IT in general for the last decade or
so, but if the past tells me anything, it may not last. For example, our Web
"standards" are poor for many things they are being used for. The HTML browser
has been over-stretched. If better or more industry-specific standards appear,
then less IT labor could be needed to create and manage a good many systems
and apps, putting a lot of IT workers on the street.

------
randcraw
Relevant references...

A 2004 textbook by the OP:

Concepts, Techniques, and Models of Computer Programming

[https://www.amazon.com/Concepts-Techniques-Models-Computer-P...](https://www.amazon.com/Concepts-Techniques-Models-Computer-Programming/dp/0262220695)

And his 6-week edX course:

Paradigms of Computer Programming – Fundamentals

[https://www.edx.org/course/paradigms-of-computer-programming...](https://www.edx.org/course/paradigms-of-computer-programming-fundamentals)

------
pvanroy
As the author, I must say I really enjoyed your detailed comments and Biblical
exegesis on my paper. FYI, your guess on the big OOP failure is correct. Also,
the Baskin Robbins footnote is a joke between myself and a physicist friend.
The paper is a chapter in a book on computer music published by IRCAM that is
worth reading too. Keep up the good work!

------
sevensor
> We end our discussion of inheritance with a cautionary tale. In the 1980s, a
> very large multinational company initiated an ambitious project based on
> object-oriented programming. Despite a budget of several billion dollars,
> the project failed miserably. One of the principal reasons for this failure
> was a wrong use of inheritance.

Who did that, exactly?

~~~
ummonk
I would have guessed Plan 9 just from the timing, but I'm not sure the failure
there was due to the wrong use of inheritance.

~~~
1024core
My guess would be Ada. "Billions of dollars" => Government/DoD is involved.

~~~
jonathanstrange
Highly unlikely. Ada has had object-oriented programming only since 1995 (in
the form of tagged types).

It's also extremely well suited to very large teams, and it's much harder to
mess up a project in Ada than in many other languages.

------
atilaneves
"Popular mainstream languages such as Java or C++ support just one or two
separate paradigms"

Java, ok. But C++??? That language has everything and the kitchen sink,
including pure functional programming (templates).

~~~
pjmlp
Even Java is quite debatable after the introduction of lambdas and immutable
data structures.

~~~
rydel
Having lambda doesn’t mean to be FP. One of the core features missing in Java
(JavaScript also) is TCO(tail call optimization)

~~~
13415
Tail call optimization is more or less unimportant, because you can express
any recursion with iteration, and most of the time the explicitly iterative
version is even safer and better. TCO only makes recursion based on tail calls
zero-cost; that's nice to have, but recursion is a bit of a hobby-horse of CS
professors anyway. Recursion only really makes sense in languages that have
their own stack, i.e., no hard stack limit short of main memory; otherwise you
will run out of stack space soon. Iterative versions of functions are often
easier to understand, too.
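
To make the stack-limit point concrete, here is a small Python illustration
(Python, like Java and JavaScript, does no TCO):

    def sum_rec(n, acc=0):
        # Tail call in form, but not eliminated, so deep inputs exhaust
        # the default ~1000-frame recursion limit.
        return acc if n == 0 else sum_rec(n - 1, acc + n)

    def sum_iter(n):
        # The same computation as a loop: constant stack space.
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    try:
        sum_rec(100_000)
    except RecursionError:
        print("recursion limit hit")
    print(sum_iter(100_000))   # 5000050000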

~~~
Jtsummers
Recursion is great, IMHO, as a non-professor type. It allows for much clearer
expressions of intent for my methods and functions than the iterative version
often achieves.

There are some recursive structures that are just much more natural than their
iterative counterparts. Parsing (as lysium suggests in the sibling post), for
example.

But also things like graph and tree traversals, and many search algorithms
related to those same structures. If you attempt a tree traversal iteratively,
you have to maintain the return stack manually, rather than permitting the
language to do it for you (assuming a full traversal and not a search; a
search could be done iteratively without much trouble).
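
For example, a pre-order tree walk in both styles (a Python sketch, with the
tree kept as plain dicts):

    def visit_rec(node, out):
        # The language's call stack remembers where to return.
        if node is None:
            return
        out.append(node["value"])
        visit_rec(node["left"], out)
        visit_rec(node["right"], out)

    def visit_iter(root):
        # The same traversal with the return stack maintained by hand.
        out, stack = [], [root]
        while stack:
            node = stack.pop()
            if node is None:
                continue
            out.append(node["value"])
            stack.append(node["right"])   # pushed first, visited last
            stack.append(node["left"])
        return out

    tree = {"value": 1,
            "left": {"value": 2, "left": None, "right": None},
            "right": {"value": 3, "left": None, "right": None}}
    rec = []
    visit_rec(tree, rec)
    assert rec == visit_iter(tree) == [1, 2, 3]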

~~~
13415
I'd say the opposite. Mutually recursive functions are notoriously hard to get
right and debug, and many languages have stack limits. Often it's better to
maintain the stack manually.

~~~
Jtsummers
That has not been my experience, but I know many people who agree with you.
IME, the difference has been that I came into CS with a more mathematical
(formal) approach to programming, and they tended to come at it with a more
mechanistic approach (especially when I hear it from non-CS developers, often
EEs).

------
tiniuclx
An interesting point raised by this paper is that when a program requires
pervasive modifications (e.g. checking the error code returned by C
functions), adding a new concept to the language (e.g. C++ exceptions) can
systematically make these changes unnecessary, thereby simplifying the
program.
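
A small illustration of the difference (a Python stand-in for the C/C++ case;
the function names are invented):

    # Error-code style: every call site has to remember the check.
    def parse_port_checked(text):
        if not text.isdigit():
            return None            # callers must test for None everywhere
        return int(text)

    # Exception style: the check disappears from the callers; failures
    # unwind to whichever level actually knows how to handle them.
    def parse_port(text):
        if not text.isdigit():
            raise ValueError("not a port number: " + repr(text))
        return int(text)

    def load_config(lines):
        return [parse_port(line) for line in lines]   # no plumbing here

    try:
        load_config(["8080", "oops", "443"])
    except ValueError as e:
        print("config error:", e)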

Perhaps this is how programming language designers ought to vet language
ideas: do the proposed changes make certain patterns redundant? Looking at
Rust through this lens makes it clear that it aims to eliminate C-style manual
memory management.

However, it does make me wonder: How do languages such as JavaScript and
Python stand out from their predecessors, and what problems do they uniquely
solve?

~~~
pjc50
The Wikipedia summary of the JavaScript design rationale is decent; it also
kind of explains why it's so odd:

> In 1995, Netscape Communications recruited Brendan Eich with the goal of
> embedding the Scheme programming language into its Netscape Navigator.[11]
> Before he could get started, Netscape Communications collaborated with Sun
> Microsystems to include in Netscape Navigator Sun's more static programming
> language Java, in order to compete with Microsoft for user adoption of Web
> technologies and platforms.[12] Netscape Communications then decided that
> the scripting language they wanted to create would complement Java and
> should have a similar syntax, which excluded adopting other languages such
> as Perl, Python, TCL, or Scheme. To defend the idea of JavaScript against
> competing proposals, the company needed a prototype. Eich wrote one in 10
> days, in May 1995.

It had to look vaguely like Java (hence ALGOL-derived brace delimited blocks
rather than Scheme-derived prefix S-expressions, and the confusing name), and
it had to be implemented quickly (hence the lack of typechecker and other
advanced features), and it was intended to be beginner-friendly (hence all the
truthiness/falsiness stuff).

Javascript only succeeds because it's the only scripting language supported by
web browsers, and the brief attempt at getting VBScript into browsers was even
worse.

~~~
pier25
> and it had to be implemented quickly (hence the lack of typechecker and
> other advanced features)

Since then there has been more than enough time to implement optional static
types.

In fact, the EcmaScript 4 proposal had those, before it was trashed in 2007 or
so and the TC39 started EcmaScript 5 from scratch.

~~~
tabtab
JavaScript is fine for the job of hooking light-duty events to HTML, but
people try to write _entire_ GUI/graphics engines in it. It's the wrong tool
for that job, just as you don't write an OS in TCL. The whole Web UI
"standards" issue needs a big overhaul in my opinion, but Web UIs are probably
off topic. (I've ranted about it in other topics.)

------
Const-me
Didn't like it much.

> object-oriented programming is best for problems with a large number of
> related data abstractions organized in a hierarchy

In OO books, maybe. In practice, OO is the way to compose very large systems
out of big components. For hierarchies of data abstractions, very often OO is
far from best.

> Popular mainstream languages such as Java or C++ support just one or two
> separate paradigms.

They support most of them, esp. modern C++ and C#. E.g. quite recently I was
programming C++ in the monotonic dataflow paradigm, because of MS Media
Foundation.

~~~
tabtab
I generally agree about OO. It's good for small-to-medium-sized abstractions,
but scales poorly compared to, say, an RDBMS. When your OO diagrams start to
look like ER diagrams, you are probably outside of OO's comfort zone. For
large domain models, an RDBMS is superior to OOP in my opinion. OO is lousy at
many-to-many relationships, for one, and lacks a visible identifier (primary
key) for troubleshooting data/state easily. OO may be helpful for modeling
subsets of a domain model, but if you try to do the whole thing in an OO
model, you'll either reinvent a database the hard way, turn grey, or do both.

------
doanguyen
With articles like this, I just want to thank HN for all of these useful
discussions. There are always enlightening moments reading you guys' comments.

~~~
tabtab
Re: _" always enlightening moments [reading your] comments."_

Not _always_: I'm a jerk 22.7% of the time.

~~~
jplayer01
It's possible to be a jerk _and_ enlightening at the same time. As easy as it
is to forget in today's culture of "everybody's a victim" and "feelings over
every other concern", a good argument doesn't suddenly become invalid just
because the author is rude.

~~~
tabtab
I'd be interested in specific examples to learn from. I'm always striving to
make myself a more useful and enlightened jerk. I've already upgraded from
a__hole.

------
dahart
The footnote in section 2.1 caught my eye:

> Similar reasoning explains why Baskin-Robbins has exactly 31 flavors of ice
> cream. We postulate that they have only 5 flavors, which gives 2^5 − 1 = 31
> combinations with at least one flavor. The 32nd combination is the empty
> flavor. The taste of the empty flavor is an open research question.

I honestly can’t tell if this is a good joke or serious & bad logic. The 31
flavors are obviously not a mix of 5 base flavors. If that were the case, each
base flavor would be in 16 of the mixed flavors, and you wouldn’t have any
unique flavors at all, like mint or cookie dough. It’s just a coincidence that
the longest months in the year are 2^5-1 days.
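
The counting itself is easy to check (Python, with the base flavors invented):

    from itertools import combinations

    base = ["vanilla", "chocolate", "strawberry", "coffee", "pistachio"]
    mixes = [set(c) for r in range(1, 6) for c in combinations(base, r)]

    print(len(mixes))                          # 31 = 2**5 - 1 non-empty mixes
    print(sum("vanilla" in m for m in mixes))  # 16 = 2**4 contain a given base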

~~~
de_Selby
>I honestly can’t tell if this is a good joke...

I'll help out: It's a joke.

~~~
dahart
For sure? I skimmed the whole thing and couldn’t immediately find any other
jokes. Seems an odd choice to throw in one random deadpan comment about the
coincidence of 31 being near a power of two.

~~~
wellpast
> Seems an odd choice to throw in one random deadpan comment...

But then again it gives the joke a special nonplussing hilarity that it
wouldn’t acquire otherwise.

------
dang
Discussed in 2010:
[https://news.ycombinator.com/item?id=1118597](https://news.ycombinator.com/item?id=1118597).

------
revskill
If you can't do FP properly, most of your OOP code is a code smell. Why? At
its root, OOP is born from FP.
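
One way to read that (roughly how CTM derives objects, from higher-order
functions plus an explicit state cell) is that an object is just a record of
closures sharing hidden state. A Python sketch:

    def make_counter(start=0):
        state = {"n": start}          # hidden state cell captured by closures
        def incr():
            state["n"] += 1
            return state["n"]
        def value():
            return state["n"]
        return {"incr": incr, "value": value}   # the "object"

    c = make_counter()
    c["incr"](); c["incr"]()
    print(c["value"]())   # 2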

------
mozareli
I have been using programs from here.. for many years
[https://www.filehorse.com/software-developer-tools/](https://www.filehorse.com/software-developer-tools/) all the best!

------
iofiiiiiiiii
What utter rubbish. This type of paper has no place in a professional
environment.

~~~
AnimalMuppet
Could you give some reasons for your statement? Why do you think it's rubbish?

~~~
lbriner
Like many discussions, this seems to demonstrate that a Programmer != a
Developer != an Engineer != an Architect.

Of course, many programmers would never need this stuff, and many have no
formal qualifications, but they still produce what they need to without
problems. They certainly do not need to know about paradigms vs. concepts vs.
models etc., even if it is interesting.

Of course, there ARE people who need to know this stuff to do their job well,
but they are quite a way up the food chain compared to most of us. They might
also be corporate; who among us sits down and thinks, shall I do this in OO or
functional? Which paradigm fits? Most of us know a few languages and use what
we know.

~~~
macintux
You're saying uneducated programmers are more useful than educated ones? I
don't understand your point.

~~~
AnimalMuppet
I believe that lbriner is saying that uneducated programmers are still useful
- not more useful, just useful.

