Programming Paradigms for Dummies: What Every Programmer Should Know (2009) [pdf] (ucl.ac.be)
504 points by tiniuclx 43 days ago | 163 comments



I have read CTM, the author's book. He did in fact dislike the word "paradigm" and preferred "computation model" instead.

A model is a set of concepts. A concept is an orthogonal language feature, like closures, concurrency, explicit state (which he now calls named state), exceptions, etc.

His approach is not so much that you should select one language that supports a paradigm that seems the most suitable for a given project. Rather, what he advocates is that you should use a language that supports multiple cleanly separated concepts (close to what is called multiparadigm), and then you select the simplest set of concepts for each program component.

This is the principle of least expressiveness: you should choose the simplest model (simple meaning that it is easy to reason about and to get right) that keeps the code natural (meaning there is little code unrelated to the problem at hand, little plumbing).

Each model has an associated set of programming techniques, like accumulators for FP or transactions for stateful concurrency.

It is possible to mix and match components that are written in different models by using impedance matching, which consists of creating an abstraction in the more expressive model that wraps the other component. An example would be a serializer, which allows you to plug a non-concurrent component into a concurrent program.
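
To make the impedance-matching idea concrete, here is a rough Java sketch (not from CTM; Histogram and SerializedHistogram are invented names): a component written with no concurrency concept is wrapped by a serializer that funnels every call through one thread, so it can safely be plugged into a concurrent program.

  import java.util.HashMap;
  import java.util.Map;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.Future;

  // Hypothetical component written without any concurrency concept.
  class Histogram {
      private final Map<String, Integer> counts = new HashMap<>();
      void add(String key) { counts.merge(key, 1, Integer::sum); }
      int get(String key)  { return counts.getOrDefault(key, 0); }
  }

  // The "serializer": all calls go through a single thread, so the
  // wrapped component never sees concurrent access.
  class SerializedHistogram {
      private final Histogram inner = new Histogram();
      private final ExecutorService single = Executors.newSingleThreadExecutor();

      Future<?> add(String key)       { return single.submit(() -> inner.add(key)); }
      Future<Integer> get(String key) { return single.submit(() -> inner.get(key)); }
  }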

Basically, you should use FP as much as you can, but you can't if your program needs, say, visible non-determinism, like a server in a client-server application. But then you can add just one concept, ports, to the declarative concurrent model, and you have a new model that allows a whole constellation of new programming techniques: the concurrent message-passing model, Erlang-like. The non-determinism in this model is restricted to the ports, the only place where it's required; the rest of the program can still be functional.
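
A minimal sketch of what a port could look like in Java (invented names, not Oz's actual Port API): any thread may send messages, and one server loop folds them with a pure step function, so the non-determinism stays confined to the mailbox.

  import java.util.concurrent.BlockingQueue;
  import java.util.concurrent.LinkedBlockingQueue;
  import java.util.function.BiFunction;

  class Port<M> {
      private final BlockingQueue<M> mailbox = new LinkedBlockingQueue<>();

      // May be called from any thread; this is where non-determinism enters.
      void send(M msg) { mailbox.offer(msg); }

      // One thread folds the messages with a pure function, Erlang-style.
      <S> void serve(S initial, BiFunction<S, M, S> step) throws InterruptedException {
          S state = initial;
          while (true) state = step.apply(state, mailbox.take());
      }
  }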

Other situations where FP might get stretched are when modularity or performance are a priority.

This book helped me realize the whole FP vs OO debate is sterile; the ideal is to have a language that allows you to use FP in a natural way, and to switch to, say, OO in a natural way in some cases.


The problem with "choose the best paradigm/computational-model for the job" is that mastering each of the possibilities takes a lot of time for the average programmer. It may be better to master a few than spend so much extra time mastering them all.

I have accused some academics of "promoting ideas that require more education" so as to line their wallet. It didn't go over well and I got counter-accused of "promoting mediocrity" so that "my type" don't have to learn. (I don't believe such bias is intentional, just human nature. We are all biased in ways we don't know just by the fact we only live one life.)

Rather than delve back into that bitter debate, I ask that people consider the economics of it: is it better on a macro-economic scale to spend extra education to master many paradigms/techniques, or to settle on a few to get through school/training faster? (There's always going to be niches that need specialized training/skills.)

The average programming career is relatively short-lived: you either have to move into management, analysis, project planning, etc. or be subject to ageism. For good or bad, the industry doesn't "like" old programmers. RSI (wrist problems) is also common among seasoned programmers. Thus, I believe the shorter-education approach is the economically logical one. You are welcome to disagree.


> The average programming career is relatively short-lived: you either have to move into management, analysis, project planning, etc. or be subject to ageism.

I won't dispute the problems in the industry, but I'm not sure you can really say the career is relatively short-lived. I've been at it for 20 years, and while there are far fewer of my age-peers than there are of younger coders, the number is also far from zero. The "requirement" that you move to a related field doesn't seem nearly as certain as when I was younger (I've avoided all efforts at moving away from coding itself with only minimal effort on my part), and I just landed a new job pretty effortlessly.

In terms of the topic, I agree that newer coders should tackle a bunch of problems with a few different approaches rather than trying to master them all from the start (I certainly haven't, even as an old fogey), but the bigger obstacle I've found is lack of free time/energy to do so, rather than no longer being a programmer. Then again, the problem I see most among younger coders is not trying to master everything and failing; it's treating everything as a nail for their single hammer.

My anecdotal experience is not data, but it is all I have right now: Do you have a stronger source of information that the "average programming career is relatively short-lived"? I've been at this for 20ish years and currently expect another 20 more, but if that's foolish I'd like to know for (reasonably) certain.


I've seen several articles over the years which suggest a programming career is shorter than the average career, not to mention experiencing ageism myself. I can't re-find the prior articles at the moment, but here's an interesting article about average programmer age:

https://www.businessinsider.com/silicon-valley-age-programme...


Interesting. I'm certainly not disputing the ageism of the industry, but having fewer prospects is not the same as being forced out, and I remain skeptical of the "shorter career". Then again, I believe women are discriminated against in both hiring and opportunities, so I can't think the same of age and expect different outcomes. The StackOverflow survey is likely not a great data source, but it's a data source. Also, one must control for a growing industry: if we have notably more developers now than 20 years ago, of COURSE they will be younger (and drag down the average age).

The data has lots of issues, but I'm forced to conclude that... yeah, ageism likely reduces the average length of careers in coding.


The industry has been good for coders and IT in general for the last decade or so, but if the past tells me anything, it may not last. For example, our Web "standards" are poor for many things they are being used for. The HTML browser has been over-stretched. If better or more industry-specific standards appear, then less IT labor could be needed to create and manage a good many systems and apps, putting a lot of IT workers on the street.


In Java, I can assume that other developers took an OO approach, even if that isn't ideal. In Clojure, I can assume that they took a functional approach. In Scala, it is hard to be sure.

So, is the increased expressiveness worth the decreased predictability? For one developer, it probably is worth it. For a large team distributed over space and time, it probably isn't, even if everyone is comfortable with each individual paradigm.


Perhaps if most everything an industry uses is learned in college, you're setting that industry up for a low skill cap and thus maybe ageism. Anyways, if someone can get more done by using skills XYZ, maybe they'll outcompete firms not using it, and now we all have to learn XYZ or be relegated to more junior positions ¯\_(ツ)_/¯


Re: if someone can get more done by using skills XYZ, maybe they'll outcompete firms not using it...

Some orgs might be doing just that, but they are better off hiding their secret from competitors, and thus we are not likely to hear about it.

Warren Buffett often expresses shock that universities and investment organizations ignore his well-published techniques, following BS and fads instead. He and other "value investors" have decades of big returns to prove their approach works; the fad-followers don't. They only have silver words and catchy-sounding theories. IT is the same, I'm afraid to say.

The GREAT LIE of IT is that "computer science" is a science: there's not a lot of real science in it beyond machine efficiency. It's much easier to do science and math to test and measure machines than it is the human mind, but code is meant for the human mind as much as for machines. Humans make, read, and modify software, not machines: computers are just dumb automatons (hopefully) following instructions verbatim.


For people like me that didn't know: CTM stands for Concepts, Techniques, and Models of Computer Programming.

More info here: https://www.info.ucl.ac.be/~pvr/book.html

Seems like a very interesting read!


I'd say the book itself is on par with SICP (if you can deal with the Oz IDE).


> Rather, what he advocates is that you should use a language that supports multiple cleanly separated concepts (close to what is called multiparadigm), and then you select the simplest set of concepts for each program component.

Working in mostly Java shops the last 10 years has shown me that most programmers only know how to add complexity, and not restrict themselves to any strict subset of features. Spring is a good example of this - statically defined, decoupled configuration has morphed into everything configured with the full expressiveness of the host language.

I feel like a curmudgeon when no one seems to understand my preference for well defined boundaries.


CTM has a wonderful premise; and unlike SICP, it does not aim to be a freshman textbook, and hence can jump headlong into the depth of the matter. Though I couldn't progress past a certain point, when they started to use a difference-list method to achieve a semblance of stateful behaviour while using a declarative model. I couldn't understand that until I later studied some Prolog.


Relevant references...

A 2004 textbook by the OP:

Concepts, Techniques, and Models of Computer Programming

https://www.amazon.com/Concepts-Techniques-Models-Computer-P...

And his 6 week edX course:

Paradigms of Computer Programming – Fundamentals

https://www.edx.org/course/paradigms-of-computer-programming...


As the author, I must say I really enjoyed your detailed comments and Biblical exegesis on my paper. FYI, your guess on the big OOP failure is correct. Also, the Baskin Robbins footnote is a joke between myself and a physicist friend. The paper is a chapter in a book on computer music published by IRCAM that is worth reading too. Keep up the good work!


> We end our discussion of inheritance with a cautionary tale. In the 1980s, a very large multinational company initiated an ambitious project based on object-oriented programming. Despite a budget of several billion dollars, the project failed miserably. One of the principal reasons for this failure was a wrong use of inheritance.

Who did that, exactly?


"Two main errors were committed:

"• Violating the substitution principle. A procedure that worked with objects of a class no longer worked with objects of a subclass. As a result, many almost-identical procedures needed to be written.

"• Using subclasses to mask bugs. Instead of correcting bugs, subclasses were created to mask bugs, i.e., to test for and handle those cases where the bugs occurred. As a result, the class hierarchy was very deep, complicated, slow, and filled with bugs."

That's a good question. I can think of several candidates, but none that match the specific problems.


Based on the footnote I'm fairly certain it was Ellemtel, a joint venture between Ericsson and Televerket (the Swedish national telecom before it was privatised), and the project was AXE-N:

The AXE-N venture was to be the most expensive industrial project in Sweden after Saab’s JAS fighter. One calculation estimates that it cost Ericsson SEK 10 billion. The project has often been described as a total failure.

https://www.ericsson.com/en/about-us/history/changing-the-wo...

Swedish wikipedia has more information: https://sv.wikipedia.org/wiki/AXE-N

Some people here might know Ellemtel from their C++ style guide, which was a byproduct of the AXE-N project. In Emacs "ellemtel" is one of the built in choices for CC Mode style.


This is the most plausible candidate I've seen suggested.


One of the oddities of early OOP was that it was often touted as being "more natural" than the "old way". But when designers used OOP and inheritance "wrong", experts said, "well, you did it wrong; you need more training." If it takes special training to do it "right", it's not "natural" by definition.

There is a right time and place to use OO (and inheritance) and wrong places and times to use it, and it takes training AND experience to know the difference. Further, a lot of it depends on the language; some languages have poor OO models, forcing one to use lambdas etc. instead. In my opinion, a better OO language reduces the need to use lambdas. I know this is a controversial statement, but I stand by it.


Apple (later joined by IBM), with Pink/Taligent?

Microsoft (Cairo) seems to fit the bill almost exactly, but it was only initiated in 1991.


If ever there were a case of, citation needed.


I would have guessed Plan 9 just from the timing, but I'm not sure the failure there was due to the wrong use of inheritance.


My guess would be Ada. "Billions of dollars" => Government/DoD is involved.


Highly unlikely. Ada has had object-oriented programming only since 1995 (in the form of tagged types).

It's also extremely well-suited for very large teams and much harder to mess up projects in Ada than in many other languages.


If ever a forum that could draw a comment from somebody involved, it's this one. I was hoping this was a well known catastrophe somebody could fill me in on, but it sounds like the failure wasn't widely reported. I did enjoy this article on Wikipedia: https://en.wikipedia.org/wiki/List_of_failed_and_overbudget_...

Unless a survivor of the project can chime in, I'm going to indulge in a little idle speculation and agree that defense sounds like a good candidate for losing a billion dollars. Maybe not DoD itself, but a big contractor?


Almost certainly not Plan 9, because that was written in C. Also, there's no way they gave them a billion dollars for development of a research OS. :)


That most certainly wasn't the case. The source for Plan9 is pretty plainly not hierarchical OOP.

Plus, the failure of Plan 9 was more an early 90s thing.


And I suspect the budget never got near billions of dollars. Plan 9 was a research project.


IBM: OS/2?


"Popular mainstream languages such as Java or C++ support just one or two separate paradigms"

Java, ok. But C++??? That language has everything and the kitchen sink, including pure functional programming (templates).


Keep in mind that the author is also one of the authors of Concepts, Techniques, and Models and of the Mozart language, which supports logic programming and a variety of concurrent programming natively.

Edit: Oz is the language. Sorry.



Even for Java, after the introduction of lambdas and immutable data structures, it is quite debatable.


Higher-order functions are extremely constrained given that lambdas boil down to fixed interfaces, and there are very few tools for e.g. partial application and converting between function types. A lot of that is probably due to primitive variants and checked exceptions causing genericity to be extremely limited, so they capped the utility of lambdas in their design.


What is relevant is that lambda calculus can be expressed.

Everything else is just variations on "what is FP" and sugar coating.


> What is relevant is that lambda calculus can be expressed.

That's a criterion that's so broad as to be almost meaningless, though. The lambda calculus can be expressed in any Turing-complete language.

I wouldn't personally consider a language to support a paradigm unless programming in that paradigm feels natural in that language. Java supports using a few functional techniques. The experience of trying to write in a truly functional style, though, is painful.


Thing is, what does "the experience of trying to write in a truly functional style" actually mean in practice?

FP Lisp/Scheme, FP ML, FP Haskell/Miranda, FP Idris, FP Scala, FP Kotlin, FP OCaml, FP ATS, FP .... ?

All of them express different views of what Functional Programming is supposed to be like.


True. But I think that you can generally break them down into a few broad categories of functional style, and none of them can be followed comfortably in Java.


Comfort is not a programming paradigm.

Just because one might need a bit more boilerplate for currying or partial application doesn't prevent writing FP libraries in modern Java.


But since it's all Turing complete... this can be said of anything. There's nothing stopping you from doing OOP in Haskell, or functional in C, or procedural in Prolog, if you're willing to put in the effort.

Disregarding “comfort”, or rather, how much the language lends itself to a style, makes the notion of programming paradigms meaningless, and we’re still left with something we all know exists, but have no way to express.


Different programming paradigms are not about what can be computed; they are about what can be expressed in the syntax and semantics of the language. This is an important distinction. Brainfuck is capable, technically, of computing anything that Haskell can compute. However, it would be foolish to call it a functional (paradigm-sense) language. It lacks higher-order functions and many other elements. You could construct a lambda calculus interpreter in Brainfuck, but then that new language would be functional, not the underlying implementation language. By the same measure, Haskell implemented in C does not make C a functional (paradigm-sense) language.


There's a whole lot of stuff you can do, short of writing interpreters, to construct library-level support for a paradigm in a language that lacks language-level support for that paradigm.

For example, I would say that GTK+ is written in an object-oriented style. But, despite that, I would not say that OOP is a member of C's repertoire of language paradigms.


That is true, some languages are sufficiently expressive to allow other paradigms to be used within them even though they aren't baked in. But it's not true that all languages can support all paradigms (either directly in the language as designed, or indirectly via library support).

OO languages and functional languages support each other's paradigms more easily than a pure imperative language would support either. And both OO and FP languages support declarative styles (like relational or logic languages) better than C would.

EDIT: It's also worth pointing out that C++ really achieved OO (initially) by using macros on top of C. So having a sufficiently expressive meta-language is also important to this. Via such a meta-language, you can achieve many more paradigms in a language than using the language alone.

But many languages lack a meta-language or don't have a standard meta-language which people can rely on.


> It's also worth pointing out that C++ really achieved OO (initially) by using macros on top of C.

And this is extremely relevant to the point. If the primary complaint against function programming in Java is "comfort" and "boilerplate"... Macros address both those problems very well. If Java had macros, it would be very simple to isolate and minimize that boilerplate.


Java emulates macros via annotations and compiler plugins.


Annotation-based methods are crippled by the limitations of annotation placement. For instance, you can't "annotate" an expression, which is where you will tend to see boilerplate of the type needed for partial application. (Because partial application is most easily emulated as a lambda which routes to the target function while providing fixed values to parameters.)
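
For illustration, this is the kind of hand-written "partial application" boilerplate being described (volume and the variable names are made up):

  import java.util.function.BiFunction;
  import java.util.function.Function;

  class Partial {
      static int volume(int length, int width, int height) {
          return length * width * height;
      }

      public static void main(String[] args) {
          // "Partially apply" length = 2 by routing through a lambda:
          BiFunction<Integer, Integer, Integer> base2 = (w, h) -> volume(2, w, h);
          // Fix one more argument the same way:
          Function<Integer, Integer> base2width3 = h -> base2.apply(3, h);
          System.out.println(base2width3.apply(4)); // prints 24
      }
  }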


Having lambdas doesn’t mean a language is FP. One of the core features missing in Java (JavaScript also) is TCO (tail call optimization).


First of all, not all FP languages have TCO; Scheme is probably the only one that actually requires it in its language specification.

Secondly, stuff like LINQ was already available in Smalltalk.

The same goes for all those map/filter/fold/... constructs from lambda calculus, which Java now enjoys.

Then if we apply the modern notion that only Haskell is FP, there are a couple of FP languages that won't meet the classification.

Ah, and Haskell does not require TCO in its language specification, so it isn't an FP language according to your arbitrary definition.


There are many definitions of FP. IMHO the "no side effects" one is the best. Even Clojure is functional only partially, IMHO, because it is for the JVM, which has not been designed for FP. And my definition is not an arbitrary one; it is the most broadly used one, I think. But when you use the same word in different contexts, the word might have different meanings.


Which means OCaml, Common Lisp, Scheme, F#, SML are out of the game as FP languages according to you.


As I wrote: IMO these languages are not fully functional. TCO/TCE is crucial for operations on tree-like data structures. And also as I wrote: it all depends on which context we are talking in. The Scheme specification makes it clear that TCO/TCE is required. If I am not wrong, F# is the same kind of thing as Scala, but in the .NET world(?). Scala cannot be treated as purely functional, because it is for the JVM, which was not designed for functional programming. Surely, many languages can have more or less functional features, but having only a subset of the properties which define what is functional cannot be treated as functional in the full sense.


A Java class can be functional also. Depends how you look at it. If a class doesn't operate via side effects, but keeps mutation only inside the class, then the class can be defined as functional. But at the method level it might not be.


It depends. As we know, FP is all about not having side effects. Having a "for" loop requires mutating the loop variable on each iteration.


Oh, and I thought all these years that OCaml was a FP language, go figure!

    open Printf;;

    printf "After all OCaml isn't a FP language\n";
    for idx = 1 to 10 do
        printf "%d\n" idx
    done
Same goes for Common Lisp, F#, Scala, Clojure.


According to the pdf's author, the crucial feature of FP is that there is no visible non-determinism. This means that every time you call a function with the same arguments it is guaranteed that you will get the same result. The other key feature is that there are no visible side-effects when calling a function.

Tail recursion of course is great to have, but you can certainly do FP without it, even in a language that supports it. I mean, what if you don't put the recursive call in tail position in a function written in a language that supports tail recursion? It would still be FP.


The no-non-determinism feature is a very good example of why TCO/TCE is crucial: each time you call a recursive function the state is changing, because another stack frame is created. Therefore the same function operating on arguments of unknown size (list, tree) can have different behavior: a stack overflow might happen or not, and we don't know when.
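
A small Java illustration of that point (invented names; Java has no TCO, and whether the recursive version actually overflows depends on the JVM's default stack size):

  class SumTo {
      // Tail-recursive in shape, but Java allocates a frame per call anyway.
      static long recursive(long n, long acc) {
          return n == 0 ? acc : recursive(n - 1, acc + n);
      }

      static long iterative(long n) {
          long acc = 0;
          for (long i = n; i > 0; i--) acc += i;
          return acc;
      }

      public static void main(String[] args) {
          System.out.println(iterative(1_000_000));    // fine
          System.out.println(recursive(1_000_000, 0)); // likely StackOverflowError
      }
  }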


You can get that in Java as well.

Just create a static class without member fields where the class plays the role of a poor man's ML module, with all static functions only interacting with their parameters.

Then static import it into the client package.
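
A minimal sketch of that style (ListOps is an invented name; the client snippet in the comment assumes the class lives in some importable package):

  import java.util.List;
  import java.util.stream.Collectors;

  public final class ListOps {
      private ListOps() {}  // no instances, no fields, no hidden state

      public static List<Integer> doubled(List<Integer> xs) {
          return xs.stream().map(x -> x * 2).collect(Collectors.toList());
      }

      public static int sum(List<Integer> xs) {
          return xs.stream().mapToInt(Integer::intValue).sum();
      }
  }

  // Client code, after `import static somepackage.ListOps.*;`:
  //   int s = sum(doubled(List.of(1, 2, 3)));  // 12, no visible state anywhere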


This is not the way to do FP in Java though. Even before lambdas were introduced, you could do FP in Java. This is actually the whole point of GoF design patterns such as Interpreter and Visitor, which together with Composite are the way to write recursive data definitions (and functions that operate on them) in Java (or C++). The whole idea of "Little Languages" is FP, representing operations as data, as an AST.

How to do FP in Java is very well explained in the MIT OCW course 6.005 "Elements of Software Construction", 2008 [1], in particular in lectures 10, 11, 13, 14 & 15.

Remember that you can create closures with inner classes.
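
For instance, here is a closure over a captured value via an anonymous inner class, pre-lambda style (a throwaway example, not from the course; it uses java.util.function.Function for brevity, where pre-Java 8 you would define your own single-method interface):

  import java.util.function.Function;

  class Closures {
      static Function<Integer, Integer> adder(final int n) {
          // The anonymous class closes over n.
          return new Function<Integer, Integer>() {
              @Override public Integer apply(Integer x) { return x + n; }
          };
      }

      public static void main(String[] args) {
          System.out.println(adder(3).apply(4)); // 7
      }
  }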

As for the fact that in Java you could have state inside, say, a Visitor, the program can still be FP, if you know what you are doing. The point is that when doing FP in a language like Java, you are adopting a definitional approach to FP, which is totally legit: any operation is FP if it is deterministic and has no visible side-effects (a corollary is that then it wouldn't have any observable state of its own, it would be reactive). This is sort of a "if it walks like a duck ..." approach to FP.

In fact, this is exactly what is going on when you are doing reactive programming in a non FP language like Javascript, where there are no restrictions to doing destructive assignment anywhere. This doesn't stop you from doing reactive programming, as long as you follow certain guidelines (because the language won't stop you from infringing them and having state).

How is this possible? The reason is that the OO computation model subsumes FP: anything you can express in an FP language, you can express in an OO language, and then some (although not as naturally, there is more plumbing in the way).

The reverse is not true, and this is a good thing, it is exactly what allows FP to have all those desirable properties.

[1] https://ocw.mit.edu/courses/electrical-engineering-and-compu...


Sounds good in theory but is never going to work in practice. Even static functions can store state and access global singletons, disk or the internet. Java doesn't have a way to specify constness of arguments either so preventing mutating an argument is very difficult.

Is time and memory usage a side effect btw? :)


Just wanted to point out that JS has TCO coming, just a question of if/when engines have it implemented. Looks like Safari/JSCore is pretty much the only one so far though: https://kangax.github.io/compat-table/es6/#test-proper_tail_...


Yes. I am well aware about that. I look forward to TCO/TCE be supported on other platforms.


I think the concept you describe is better named tail call elimination.

“Optimization” makes it sound as if you could turn it off or on for performance reasons. Instead, programs rely on tail call elimination being in place or otherwise they would not work.


Julia is missing TCO and it is for sure functional.


Tail call optimization is more or less unimportant, because you can express any recursion with iteration and most of the time the explicitly iterative version is even safer and better. TCO only adds zero-cost for recursions based on tail calls, that's nice to have but recursion is a bit of a hobby-horse of CS professors anyway. It only makes sense in languages that have their own stack, i.e., have no hard stack limit except for your main memory, otherwise you will run out of stack space soon. Iterative versions of functions are often easier to understand, too.


> TCO only adds zero-cost for recursions based on tail calls

TCO isn't about recursion; it applies to any function call in tail position. Some toolchains only manage to avoid over-allocating stack frames for self-recursive tail calls, but that isn't full TCO, and it doesn't help for other interesting cases like mutual recursion, state machines, or continuation-passing style. Compilers which compromise on full support for TCO emit programs with built-in memory leaks; they fail to free data which is no longer needed, and thus unnecessarily exhaust their stack allocation.

Programs which use built-in iterative constructs are still recursive; all that "recursion" means is that the control flow folds back on itself. A traditional "while" loop looks like: (1) stop if a condition is false; (2) otherwise do something; (3) do it again. The "it" in (3) is a recursive reference to the loop.

Now, unstructured explicit recursion is barely better than unstructured "goto", so I'm not advocating that everyone start using tail calls in place of loops. However, bare loops are in much the same position with respect to higher-order primitives such as non-strict folds—and those higher-order primitives are much easier to implement in environments which properly support TCO. Languages where TCO is not customary tend to suffer from a proliferation of built-in constructs—iteration, generators, list comprehensions, coroutines, and the like are all implemented as language features requiring custom code generation, where another language where TCO is customary might relegate such primitives to a library.


Recursion is great, IMHO, as a non-professor type. It allows for much clearer expressions of intent for my methods and functions than the iterative version often achieves.

There are some recursive structures that are just much more natural than their iterative counterparts. Parsing (as lysium suggests in the sibling post), for example.

But also things like graph and tree traversals, and many search algorithms related to those same structures. If you attempt a tree traversal iteratively, you have to maintain the return stack manually, rather than permitting the language to do it for you (assuming a full traversal and not a search, a search could be done iteratively without much trouble).


I'd say the opposite. Mutually recursive functions are notoriously hard to get right and debug, and many languages have stack limits. Often it's better to maintain the stack manually.


That has not been my experience, but I know many people who agree with you. IME, the difference has been that I came into CS with a more mathematical (formal) approach to programming, and they tended to come at it with a more mechanistic approach (especially when I hear it from non-CS developers, often EEs).


It depends. As we know, FP is all about not having side effects. Having a "for" loop requires mutating the loop variable on each iteration.


As replied on another thread, you just killed a couple of FP languages with that definition.


Those are not FP languages. They are, at best, "functional-first" multi-paradigm languages. (Common Lisp and Scheme are in this category.) Unfortunately, FP is more about what is excluded (side effects) than what is included (closures, continuations, sum types, etc.), so mixing FP with any amount of imperative/procedural code tends to result in an awkward and less efficient form of imperative programming without the primary benefit of FP (referential transparency).

The "function" in "functional programming" refers to mathematical functions, which are fixed mappings from inputs to results with no side effects. If your "functions" can have side-effects then they're not functions, they're procedures. Programs composed of effectful procedures are imperative, not functional.


Did you consider mutual recursive functions in your answer? For example implementations of parsers?


Another thing about recursion is that it is really powerful for operating on tree-like data structures/objects.


Purely Functional programming via templates is a gimmick, not a productive style. In general you can't write a whole useful program in templates.


Depends. If you define FP as the manipulation of immutable data (i.e. math-like), then it's quite feasible in C++. Gimmick or no, templates aren't even necessary.


Does C++ support lazy evaluation?


templates are lazily evaluated


> templates are lazily evaluated

At runtime? like Scheme?

http://www.shido.info/lisp/scheme_lazy_e.html


no, at compile time (though there are plenty of ways to implement run-time lazy evaluation, with e.g. expression templates)


An interesting point raised by this paper is that when a program requires pervasive modifications (e.g. checking the error code returned by C functions), adding a new concept to the language (e.g. C++ exceptions) can systematically make these changes unnecessary, therefore simplifying the program.

Perhaps this is how programming language designers ought to vet language ideas: do the proposed changes make certain patterns redundant? Looking at Rust through this lens makes it clear that it aims to eliminate C-style manual memory management.

However, it does make me wonder: How do languages such as JavaScript and Python stand out from their predecessors, and what problems do they uniquely solve?


The wikipedia summary of the JavaScript design rational is decent; it also kind of explains why it's so odd:

> In 1995, Netscape Communications recruited Brendan Eich with the goal of embedding the Scheme programming language into its Netscape Navigator.[11] Before he could get started, Netscape Communications collaborated with Sun Microsystems to include in Netscape Navigator Sun's more static programming language Java, in order to compete with Microsoft for user adoption of Web technologies and platforms.[12] Netscape Communications then decided that the scripting language they wanted to create would complement Java and should have a similar syntax, which excluded adopting other languages such as Perl, Python, TCL, or Scheme. To defend the idea of JavaScript against competing proposals, the company needed a prototype. Eich wrote one in 10 days, in May 1995.

It had to look vaguely like Java (hence ALGOL-derived brace delimited blocks rather than Scheme-derived prefix S-expressions, and the confusing name), and it had to be implemented quickly (hence the lack of typechecker and other advanced features), and it was intended to be beginner-friendly (hence all the truthiness/falsiness stuff).

Javascript only succeeds because it's the only scripting language supported by web browsers, and the brief attempt at getting VBScript into browsers was even worse.


This seems like an extreme case of “assume your hack will be used in production for years to come.”


> and it had to be implemented quickly (hence the lack of typechecker and other advanced features)

Since then there has been more than enough time to implement optional static types.

In fact, the EcmaScript 4 proposal had those, before it was trashed in 2007 or so and the TC39 started EcmaScript 5 from scratch.


JavaScript is fine for the job of hooking light-duty events to HTML, but people try to write entire GUI/graphics engines in it. It's the wrong tool for that job, just as you don't write an OS in TCL. The whole Web UI "standards" issue needs a big overhaul in my opinion, but Web UI's is probably off topic. (I've ranted about it in other topics.)


> exceptions > simplifying

Erm, no. Some consequences: non-obvious control flow, RAII, constructors, move semantics, exception safety, efficiency, stack unwinding...

A better way to deal with the "problem" of unchecked error codes is to have the compiler check that something happens to the result value. This is what Chandler Carruth suggests.

An even better (but orthogonal) way is to structure the code so it does only one thing at a time, and does any one thing only in one place (ideally).


You do have a point, adding exceptions to C++ does impact the language in many ways. They must be used responsibly: being able to jump to a completely different point of the program at any time is dangerous.

However, exceptions do simplify some situations, such as when an unrecoverable error happens at the bottom of a deep call stack. Exceptions make it simple to inform the user/system that an error has occurred without adding error checks to every function in the call stack. I believe this use-case justifies the feature, especially if you can tolerate the performance loss which might not even be that bad [0].
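
A toy Java sketch of that use-case (names invented): one try/catch near the top replaces an error check at every level of the call stack.

  class DeepError extends RuntimeException {
      DeepError(String msg) { super(msg); }
  }

  class Pipeline {
      static void level3() { throw new DeepError("unrecoverable at the bottom"); }
      static void level2() { level3(); } // no error plumbing needed here
      static void level1() { level2(); } // ...or here

      public static void main(String[] args) {
          try {
              level1();
          } catch (DeepError e) {
              System.err.println("giving up: " + e.getMessage());
          }
      }
  }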

Having the compiler check error codes is one of the reasons I enjoy Haskell so much: the type system makes you handle failure (amongst other things).

[0] http://www.open-std.org/jtc1/sc22/wg21/docs/TR18015.pdf - Section 5.4.1.2 - the table approach to exception handling has no run-time cost during normal (non-exceptional) program flow.


The efficiency problem need not be the exception handling in itself (I can't say a lot about that. There are various claims and it probably depends on the tradeoffs of the language implementation).

The actual problem is the ramifications on the code structure. A deep call stack with a lot of implicit context is a problem in itself. Exceptions imply a temporal coupling from error occurrence to error handling. This is a subtle but severe problem. It might start out as a noticeable maintainability problem but it's likely to quickly become a performance problem as well...


> Exceptions imply a temporal coupling from error occurrence to error handling.

Could you expand on this? I understood your statement as "Exceptions sort-of force you to handle an error the moment it occurs", but I'm having trouble seeing why this would be specific to exceptions and not the case with other error handling solutions.

(I don't mean to sound like a big exception-defender – I prefer ML/Rust-style Result types)


In some cases immediate action might have to be taken (like calling abort()) but very often that is not the case. Another strategy is to collect errors, then decide what to do with them at another point (no temporal or physical coupling!). After all errors are collected, you can group them, sort them, or aggregate them. Whatever is appropriate.

Looking at errors as just data avoids the "exceptional vs non-exceptional" hair splitting, and gives a lot of flexibility for code structure. I don't necessarily disagree with the Result type viewpoint as a return type from functions, but I also find it pretty pointless. Very often the best action is to separate out errors from successes into different tables immediately. A built-in Result type couples them, and encourages keeping them coupled.

Now you could argue that you can do that with Exceptions, too, by catching them immediately and treating them as data. In which case I want to ask "what's the point then?" and also refer to my topmost comment. Exceptions have a significant cost in infrastructure even if you don't actually use them...
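
As a rough Java sketch of "errors are just data" (hypothetical types, assuming a recent Java with records): errors are accumulated where they occur, then sorted and reported at one central point.

  import java.util.ArrayList;
  import java.util.Comparator;
  import java.util.List;

  record Problem(String file, int line, String message) {}

  class ErrorCollector {
      private final List<Problem> problems = new ArrayList<>();

      // No control transfer happens here; the error is just recorded.
      void report(String file, int line, String message) {
          problems.add(new Problem(file, line, message));
      }

      // Decide what to do with the errors later, at a single point.
      void dump() {
          problems.stream()
                  .sorted(Comparator.comparing(Problem::file)
                                    .thenComparingInt(Problem::line))
                  .forEach(p -> System.err.printf("%s:%d: %s%n",
                                                  p.file(), p.line(), p.message()));
      }
  }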


What are you going to do with the errors once you have collected them?

What is the point of sorting or grouping them?


A simple example from the compiler I'm working on right now: I output type errors of expressions sorted by their location in the source file, instead of in the order they are detected. I also don't error out on the first one, but find all at once. I could also group warnings and errors together, output only the first N errors, or... whatever I fancy. Errors are just data.


Well, this example seems somewhat contrived to me because these errors are not the errors happening in the compiler, they are diagnostic messages describing the errors in the input data.

Any chance you have another example?


"An error happening in the compiler". What's that? If you mean a bug detected, then it should be handled with abort().


An error related to the file system, for example.


A file system error isn't an error happening in the compiler. The file system is an external system. So this error isn't really any different than a syntax error or type error. I would not treat it differently except making a different error message. Of course you cannot parse a file you cannot read, so there is a dependency which means that sooner or later you need to "act" on the error.


Well, grouping errors in your example makes sense when all of them happened during processing a particular input which is expected to be invalid sometimes.

Could you share another example where accumulating errors makes sense?


You can do it with anything. Analyze all the files in a directory, for instance. Do all the IO in parallel; some files might be corrupt or whatever. Merge the results at a central point. Only then present them to the user in a sensible order.

A real life example: Asynchronous webservers usually have worker threads for I/O. I/O errors need to be routed to the originators of the I/O requests (which are in other threads). Exceptions cannot do that.

Here's the general point: Whether any given thing is an "Error" is highly subjective and context dependent. But clearly the error is data, so why take away the possibility to process it like any other data?


This is an interesting conversation for me, but I still don't quite understand your reasoning.

IMO, using exceptions doesn't preclude treating errors as data.

In your example with parallel processing, you could throw an error as exception, let it unwind the whole stack, catch it and return as the result of the task.

This way you don't couple the code that does the useful work with the code that handles errors.

In other words, I don't understand why you dislike exceptions.

IMO they are a good tool for any task that doesn't require different actions to handle different errors.


Exceptions are not just data, at least not if you do not always catch them immediately (in which case they are just pointless).

They introduce additional control paths which are hard to reason about. In many languages these are not even explicit. Exceptions require an additional syntax to handle them. And having exceptions requires significant language infrastructure and comes with a huge toll on the structure of software projects. See my topmost comment.

> In your example with parallel processing, you could throw an error as exception, let it unwind the whole stack, catch it and return as the result of the task.

Which would catch only the first encountered error per thread. And not catch all the other errors that we could encounter and report after that.


Well, if you don't have exceptions, stack unwinding, RAII, you've got to manually code the same sort of logic using repetitive code. Or you would have to change your design to minimise the pain of hand-coding stack unwinding, most likely paying a price for that in form of damaging other aspects of design. In other words, NOT having exceptions, IMO, results in having to write a huge amount of code accomplishing a trivial task - aborting some unit of functionality upon unexpected error.

"And not catch all the other errors" I think, in most cases after an unexpected error happens it doesn't make sense to continue the task and report more errors, most likely induced by the first error.

Don't get me wrong, there are plenty of cases when returning errors as values is the best fit and using the exceptions would be a disaster, but my point is that, IMO, the situation when errors abort tasks and are handled in more or less the same way is much more common.


A rather odd example with Result type systems: Say you have a function that returns the results of three other functions together in a tuple and the first one errors. The other two can still run and their result can be returned in the tuple along with an Error for the first function.


Ah, so using Result types you can return

  (Optional<A>, Optional<B>, Optional<C>)

– each can fail or succeed independently. But you can't do that if you're using exceptions to signal errors – it's like returning

  Optional<(A, B, C)>

– you get either all the results or nothing. Good example, thanks!


You can still get something like that using exceptions; it depends on where the exceptions are caught. If you have a function which has 3 calls:

  function f (...) {
    val a = something.a();
    val b = something.b();
    val c = something.c();
    return (a,b,c);
  }
If the exceptions can occur in the calls assigning to a, b, or c and are handled locally (here, in f) then you can construct that first case. It's only if you throw the exception up one level higher that you end up in the latter case. As written, it is more like your latter example. But with a modification:

  function f(...) {
    val a = default_a;
    val b = default_b;
    val c = default_c;
    try {
      a = something.a();
    } catch {
      a = error_value_a;
    }
    ...
  }
You can get the former case.


Compared to error codes, exceptions are simpler IMO. Many issues with exceptions come from using them in non-exceptional circumstances, i.e. for control flow.


Don't mix up simple and easy; they are orthogonal concepts.

Exceptions are in no way simpler than C-style error codes, but in many cases they might be easier to work with.


simple to use versus simple to understand. I was referring to the former.


Look up the definition of "simple".

How exactly do you distinguish "exceptional" from "non-exceptional"?


The best description of exceptions that I've seen is Bertrand Meyer's exposition of them in _Object Oriented Software Construction_.

His definition depends on the notion of Design by Contract. An exception is an event that causes a function/method to fail because it is unable to satisfy its contract. The caller is then responsible for cleaning things up (so it can satisfy its contract) or it also triggers an exception, resulting in its failure.

In the context of Design by Contract, the list of things that can cause an exception include:

  * hardware/OS errors
  * calling a method on a null reference
  * calling a method that itself fails
  * discovering a pre-condition isn't true
  * discovering that a post-condition isn't true
  * discovering that a class-invariant isn't true
  * loop invariant failure or lack of progress
  * assertion failures
  * explicit triggering of an exception
I highly recommend Object Oriented Software Construction.
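
Not Eiffel's built-in contracts, but a rough Java approximation of the idea (an invented example; assertions need the -ea flag): a violated precondition or postcondition surfaces as an exception, i.e. the method fails to meet its contract.

  class Account {
      private long balance; // class invariant: balance >= 0

      void withdraw(long amount) {
          if (amount <= 0 || amount > balance)   // precondition
              throw new IllegalArgumentException("precondition violated");
          long old = balance;
          balance -= amount;
          assert balance == old - amount && balance >= 0 // postcondition + invariant
              : "postcondition violated";
      }
  }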


>I highly recommend Object Oriented Software Construction.

I do, too. I had read most of the book [1] some years ago. It is really good. But readers should know that they need to be ready to put in enough time to read, understand and digest it, not just because it is a thick book, although it is that, but because, true to Bertrand Meyer's style, it is detailed, systematic, thorough, etc. It's not one of those books where you can read it in a few days and then start applying its stuff to your work, nor is it for casual programmers who are only into the field to make some quick bucks.

[1] https://en.wikipedia.org/wiki/Object-Oriented_Software_Const...

BTW, the book may have an Easter egg in it.


I've written an article on error handling some point and ended up with: "An error is when the program is operating outside the intended path of execution."

What counts as intended is a question of definition, but in concrete examples of local reasoning within a function this question is usually easier to answer. Thinking about it in terms of (explicit or implicit) contracts, like Bertrand Meyer, sounds like a smart idea.

https://blog.gnoack.org/post/error_handling/


> Warning: Textual logging is not an error handling strategy. Logs are not meant for machine consumption so it’s hard to have automated monitoring based on them.

I'd beg to disagree.

As for the general statement of your post, I like how you're talking about stakeholders and (specification) bounds, but I don't think it's an actionable viewpoint. The idea of "bubbling up errors" inside a program is caught up in the software-reuse mindset, but it's not possible to take action on encountering a true error (here "true" means "outside specification"). That's because the state of your whole program is undefined after the encounter. Which means that the best you can do is to call abort() and hope that the program will end with some info for debugging.

(Unless we are talking about a VM or sandboxed code, in which case again you could just call some equivalent of abort() from the code inside and "catch" that in the host code and restart the VM or something).


Thanks for the feedback. :)

To clarify, in the context of the article, "making errors observable from the outside" is not meant at the per-function level, but meant at the level of the whole program, which includes abort(). I should improve the wording there.

abort() is perfectly legitimate if you can't recover within the same process. It's not at odds with routing to the right stakeholder, as long as the parent process catches the crash and does appropriate error handling (e.g. "Send a crash report" dialogs or other watchdogs).

Regarding textual logging, I'm not sure exactly which part you disagree with. :) I'm aware that people commonly write regex-based extractors, and I've done the same, but having the choice I'd always go for a more schema-ful error reporting channel. That's more lightweight both for reading and for writing it, and avoids a whole class of regex bugs.


OK I had probably skimmed too much but now I think we're just talking about different things here. I like your post, so let's give it another chance: https://news.ycombinator.com/item?id=18390030 :-)

Regarding textual logging, I think it works wonderfully and I don't think it's at odds with schema-ful reporting. One good way is to include error codes. Informal messages are at least as important. They are ergonomic to humans and their meanings are easy to look up with a web search.

Of course textual logging can always be supported by other means, like crash dumps.


In Java, your true error dichotomy is done using checked versus unchecked exceptions. Checked exceptions can usually be handled while unchecked exceptions usually can’t be, so they aren’t usually caught and instead just bubble all the way up.


I think this is a hair-splitting distinction and it does not work reliably. Sorry. Things that truly "cannot" happen => you do not know how to recover => you terminate the program. That's it.


That is why the definition is recursive. If every failure triggers a new exception all the way up the call stack, then your program fails.


Yeah, so what was the point to begin with?


Do you understand Design By Contract? That is the theory that provides the distinction between a function/method finishing with success or with failure. The notion of contract is what distinguishes an exception from a glorified GOTO.


In re-reading, my response comes across as much more flippant than I intended. I was just trying to re-emphasize that the definition of an exception is an integral component of Meyer's design by contract methodology and not a stand alone concept.


> Look up the definition of "simple".

There's no need to be dismissive. I refer to simple as simple to use, not simple to conceptualise.

I would say exceptional is a situation that is rare and unexpected. When saving a record to a table with a unique constraint, I would expect the constraint to prevent saving, but I would not usually attempt to handle/recover from the app running out of memory.

So it's situational and I'd say the distinction is between whether you will be explicitly handling the event as part of normal behaviour. The more stable the app needs to be, the more situations need to be handled.


You're not being dismissed. You're being corrected.

The GP, along with many other people, are trying to get software professionals to standardize on a definition of simple that means something like "composed of a single element; not compound" or "easy to reason about".

Under such a standard, it makes no sense to talk about "simple to use" vs "simple to understand". You can talk about "easy to use" vs "easy to understand". The latter might mean "simple". But "simple" never means the former. And you never say it's "simple to X" for any value of X.

Under such a standard. The degree to which people are converging on this standard in actuality is not clear to me.


no-one told me about this standard when I entered the industry. If it comes from some blog post that I haven't read, how am I meant to know that some small selection of developers prefer "simple/easy" as a dichotomy? It's not a correction if it's just a different opinion.


There is a very popular talk by Rich Hickey (creator of Clojure), "Simple Made Easy". Maybe that was what popularized the distinction among many programmers.

It's really not an arbitrary definition if you think about it etymologically. Very freely, sim-ple is "unfolded" (cf. pliers; French plier quelque chose, to fold something), whereas com-plex is "folded together". "Easy" relates more to a state of ignorance: it's about the user of the thing, not about the thing itself. (NB I'm just making this up. I'm not a linguist so I might be wrong.)


it makes sense as a distinction and I'll likely use it going forwards. The only thing I objected to was the idea that this distinction is something I should already know as a programmer. I guess it comes down to a case of separate bubbles/backgrounds, as I've never heard the distinction before in my time in industry and I'd guess it's considered standard jargon within your environment?


I would assume most HN folks have heard of Rich Hickey and his talks. Surely not all of them have. Learning never ends :-)


I knew about him, but haven't seen / heard of his talks.


I agree and I think the conclusion is that it's up to the caller to decide what is exceptional. But unfortunately it's the callee who decides how the result is returned.

And it's not only the caller, but also the caller of the caller, and so on, who needs to be prepared for exceptions. This all has severe ramifications on the code structure.

> There's no need to be dismissive. I refer to simple as simple to use, not simple to conceptualise.

Sorry. I need to control myself. It's not about you. As another commenter stated there is an agreed upon distinction between simple and easy. The distinction is important especially among programmers, and what you described might be easy but it's not simple.


agreed, there are better ways to handle such circumstances (such as an Either/Validation monad). But I still think exceptions are better than error codes. If you don't handle an error code, you can proceed through your application in an invalid state. If you don't handle an exception, the application crashes out. I think the latter is preferable in most circumstances.


Yes, and this is the reason why I'm happy with exceptions for short scripts, e.g. Python. It's a different story for larger projects where I make all the primitives by myself. There I start by explicitly aborting on cases that are not handled yet (no exceptions needed to do that), and then I gradually change the structure of the code to handle more and more "exceptional" cases.


I suspect you're the exceptional case though (ha ha) because error codes don't cater to inexperienced developers as well as exceptions do. They are much more liable to cause your application to fail silently unless you explicitly prevent it, as you do. Good tools are all about enabling average or inexperienced developers to not fall into those pitfalls so from that perspective, exceptions are better. Usually those tools aren't very nuanced so they can get in the way of experienced or skilled developers and you end up battling the framework instead of getting productive work done. That sucks, but it's a trade-off most companies (consciously or unconsciously) accept.


Case in point: Python’s StopIteration exception, which is raised to indicate iterator exhaustion. A lot of efficiency can be gained by not invoking Python’s heavy exception handling machinery for this.


I think it's also how you should pick languages for a project. What is the worst yak-shaving aspect of the code you're going to have to write for this particular project? What language will go the furthest toward making that yak shaving go away? Pick that language.


I pick the languages based on the platform stack I have to use and not the other way around.

Basically the first class languages on a given platform, or having official bindings for a specific set of libraries that must be used.

My experience has proven it is the best path for lowest attrition.

Every time I decided to do otherwise I repented later on.


Didn't like it much.

> object-oriented programming is best for problems with a large number of related data abstractions organized in a hierarchy

In OO books, maybe. In practice, OO is the way to compose very large systems out of big components. For hierarchies of data abstractions, very often OO is far from best.

> Popular mainstream languages such as Java or C++ support just one or two separate paradigms.

They support most of them, esp. modern C++ and C#. E.g. quite recently I was programming C++ in the monotonic dataflow paradigm, because of MS Media Foundation.


I generally agree about OO. It's good for small-to-medium-sized abstractions, but scales poorly compared to say an RDBMS. When your OO diagrams start to look like ER diagrams, you are probably outside of OO's comfort zone. For large domain models, an RDBMS is superior to OOP in my opinion. OO is lousy at many-to-many relationships, for one, and lacks a visible identifier (primary key) to trouble-shoot data/state easily. OO may be helpful for modeling sub-sets of a domain model, but if you try to do the whole thing in an OO model, you'll either reinvent a database the hard way, turn grey, or do both.


With this kind of article, I just want to thank HN for all of these useful discussions. It's always an enlightening moment reading you guys' comments.


Re: "always enlightening moments [reading your] comments."

Not always: I'm a jerk 22.7% of the time.


It's possible to be a jerk and enlightening at the same time. As easy as it is to forget in today's culture of "everybody's a victim" and "feelings over every other concern", a good argument doesn't suddenly become invalid just because the author is rude.


I'd be interested in specific examples to learn from. I'm always striving to make myself a more useful and enlightened jerk. I've already upgraded from a__hole.


I'm curious where the 22.7% comes from?


I'm not telling you because today I happen to be a jerk.


The footnote in section 2.1 caught my eye:

> Similar reasoning explains why Baskin-Robbins has exactly 31 flavors of ice cream. We postulate that they have only 5 flavors, which gives 2^5 − 1 = 31 combinations with at least one flavor. The 32nd combination is the empty flavor. The taste of the empty flavor is an open research question.

I honestly can’t tell if this is a good joke or serious & bad logic. The 31 flavors are obviously not a mix of 5 base flavors. If that were the case, each base flavor would be in 16 of the mixed flavors, and you wouldn’t have any unique flavors at all, like mint or cookie dough. It’s just a coincidence that the longest months in the year are 2^5-1 days.


>I honestly can’t tell if this is a good joke...

I'll help out: It's a joke.


For sure? I skimmed the whole thing and couldn’t immediately find any other jokes. Seems an odd choice to throw in one random deadpan comment about the coincidence of 31 being near a power of two.


> Seems an odd choice to throw in one random deadpan comment...

But then again it gives the joke a special nonplussing hilarity that it wouldn’t acquire otherwise.


Definitely a joke. The "open research question" kicker makes that clear. There's a similar jokey footnote on page 33.


Months are 31 days because each day is made up of 5 base elements that can be present or absent. The empty day is theorized to exist but is impossible for humans to experience and report.


I thought that was funny, I'm bummed it got downvoted. I tried imagining what a base element of time is. This is like some kind of complex or imaginary unit time quaternion or something.



If you can't do FP properly, most of your OOP code is a smell. Why? At its root, OOP was born from FP.


I have been using programs from here.. for many years https://www.filehorse.com/software-developer-tools/ all the best!


What utter rubbish. This type of paper has no place in a professional environment.


You realize that the book it's excerpted from is considered one of the more important computer science texts?

You're going to have to offer a great deal more useful critique than "utter rubbish" to gain any meaningful agreement here.


Actually, the chapter is based on CTM, but seems to be from a more recent work.

"This chapter is partly based on the book [50], familiarly known as CTM, which gives much more information on many of the paradigms and concepts presented here. But this chapter goes further and presents ideas and paradigms not covered in CTM."

I poked around a bit on Van Roy's website, but couldn't find the source. It would be interesting to know what it is.


It's from here: https://www.amazon.com/New-computational-paradigms-computer-...

As attested by this: http://lambda-the-ultimate.org/node/3465 and the author's own intervention in these comments.


Thanks, didn't notice that.


Could you give some reasons for your statement? Why do you think it's rubbish?


Like many discussions, this seems to demonstrate that a Programmer != a Developer != an Engineer != an Architect.

Of course, many programmers would never need this stuff and many have no formal qualifications but they still produce what they need to without problems. They certainly do not need to know about paradigms vs concepts vs models etc. even if it is interesting.

Of course, there ARE people who need to know this stuff to do their job well, but they are quite far up the food chain compared to most of us. They might also be in corporate roles; who among us sits down and thinks, shall I do this in OO or functional? Which paradigm fits? Most of us know a few languages and use what we know.


You're saying uneducated programmers are more useful than educated ones? I don't understand your point.


I believe that lbriner is saying that uneducated programmers are still useful - not more useful, just useful.


He's got a point. 99.999% of programmers are never going to need to know this, and the time wasted reading it would better be spent inventing a new JavaScript UI framework.

/s.


This type of comment has no place in a professional environment.



