Hacker News
Seven Deadly Sins of Introductory Programming Language Design (1996) [pdf] (monash.edu)
150 points by breck 14 days ago | 110 comments

> The premier example of the adverse effects of cleverness in programming language design (and one which is obvious to programmers at all skill levels) must surely be the C/C++ declaration syntax [10]. On the surface, it seems like an excellent notion: let declarations mirror usage. Unfortunately, the very concept undermines the principle of visually differentiating semantic difference.

Does it ever. From today's point of view, C's declaration syntax is just bizarre, especially if you have spent some time with a "modern" type system[1]. The result of this declaration-mirrors-usage cleverness is not just the larger insanity like the emergence of the "spiral rule" [2]. (Isn't it telling that that site promises to enable "any C programmer to parse in their head any C declaration!"? You know you have a problem when that's a novelty.)

It's also the little paper cuts, like the fact that it should be "char* c", with the * hugging the char, because the * modifies the type and so is part of it; but because of the "mirror usage" concept you had better actually write "char *c", since that's how the parser groups it: as part of the variable.

In the same vein, multiple declarations in one statement must have confused many novices: writing something like "char* p, q" surprisingly yields a "char *p" and a plain "char q".

> Students have enough trouble mentally separating the concepts of declaration and usage, without the syntax conspiring to blur that crucial difference.

Yeah. I learned C in the 90s (and fortunately by now know exactly where to distribute my "volatile"s and "const"s in a complex type), but I still remember how confusing it was that the * practically did the opposite in declaration vs. usage. In a declaration it makes a pointer, in usage it dereferences it.

Consider the very important difference between these two statements:

    char *p = q;
    *p = q;


[1] "Modern" in quotes, because one prime example, Haskell, is old enough to appear in this 1996 article.

[2] http://c-faq.com/decl/spiral.anderson.html

The problems caused by "*" in C/C++ declarations, and also in expressions, stem mostly from the mistake of making "*" a prefix operator instead of a postfix one.

Dennis Ritchie recognized this mistake a very long time ago, but it could not be corrected without breaking backward compatibility.

In 1965, the language EULER by Niklaus Wirth & H. Weber was the first high-level language with explicit pointer variables (previous languages, e.g. LISP I, FORTRAN IV and CPL, used pointers only implicitly) and with the two operators "address-of" and indirection, the ancestors of C's "&" and "*".

However in EULER the indirection operator was postfix, as it should be. Niklaus Wirth kept this operator as postfix in his later languages, e.g. in Pascal.

With a postfix indirection operator, an access into a hierarchical data structure, which may combine any number of the three operations (array indexing, structure/class member selection, and pointer indirection), is written and executed strictly left to right, without any parentheses and without any ambiguities. (The C operator "->", taken from PL/I, is also no longer needed, as it has no advantage over writing the postfix dereference followed by ".".)

This simple change would have avoided most pitfalls of the C declarations.

Rust copied this mistake. Rust would have been neater if taking references and dereferencing were postfix operators. At least they made .await a postfix operator.

To Rust's credit, 90% of the time I find I don't need to dereference much at all. For example, when you make a method call on a reference, you don't have to dereference first, and there is no -> operator in Rust. The only time I ever really have to dereference manually in my experience is if I'm matching on some struct reference, and I want to take some field and put it in a new place.


    match &foo {
        Foo { bar } => Baz { quux: *bar },
        _ => (),
    }
(not valid Rust, but you get the point)

I don't think the prefix usage of the * operator is a big flaw. I learned C/C++ in high school and never had an issue with it. I understood that the operator is applied to the variable, not to the type, and that declaring a pointer is not the same thing as dereferencing it.

Even if someone may have made a mistake when declaring a pointer in C or C++ it is something they can get over pretty fast.

Dereference syntax in C++ is very, very fixable.

C++ could easily adopt a postfix operator ^ as a synonym for the prefix op*, even at this late date. Then (*p)->m could be written p^->m, or p^^.m. (The last is clearly best.)

Rust, too, could do the same, for the same benefit. And C could, too, if there were any point.

The only people to complain would be IDE vendors.

If you were then allowed to declare a pointer argument like "char p^", or even "char^ p", you could begin to dispense with use of "*" as a unary operator. Declaring "char^ p, q" would mean both p and q are pointers, eliminating another source of beginners' confusion.

I didn't understand it either. Then I learned assembly for a few months for a project; when I switched back to C, everything came naturally.

Note that in C at least the '->' is completely redundant and the '.' alone would suffice. (C++ is a somewhat different story.)

Not without parentheses, which is why -> was introduced.

Only because of arbitrary precedence rules, though. They didn't have to make . bind more tightly than *.

*foo.bar() could be read as "dereference foo and then evaluate bar() on the result".

I think this was only considered acceptable because in early C you couldn't initialise automatic variables. So you would never be confused by an association like `char *p = q; *p = q;`, because you couldn't write that first part. And declarations were all separate at the front of the function.

The bigger mistake was to mix a prefix pointer operator in with the postfix [i] and (x). You end up with the spiraling-out parsing even for expressions, not just declarations. With Pascal's postfix ^ everything reads left to right.

While you are not wrong, the two major problems in C I would consider to be array-to-pointer decay (arrays should really have been a separate type), and assignments being expressions instead of statements.

Array decay means that pointers need the ability to point to individual array elements, thus requiring pointer arithmetic - something that wouldn't have been needed at all without array-to-pointer decay.

And since assignments are expressions, they must return something, meaning they can be used in statements like 'if (a=b)...'. If they were statements they wouldn't return anything, so this mistake would be impossible. You'd lose assignment chaining, but the win here would be much greater than the loss.

If I had a third choice, it would be the mess around 'char'. At the very least there should have been two types: one for characters (where signedness is irrelevant), and one for the smallest type of integers (that can exist in signed or unsigned form). And in these modern times an additional type, something like 'byte', would be needed to deal with aliasing issues, so char (and things like uint8_t) can exist without that, allowing better optimisation for those types.

Assignments being expressions is frequently very useful for avoiding writing the same expression twice, which can itself be a source of errors.

You are right about the errors caused by unintended assignments, but the root cause of these errors is not the fact that assignments are expressions; it is the fact that B and C replaced the operators that BCPL inherited from ALGOL (":=" for assignment and "=" for equality test) with "=" and "==". With the original operators, such typos would have been very unlikely.

This misguided substitution was justified by less typing, but a much better solution would have been to leave the syntax of the language alone and just modify the text editor to insert ":=" when pressing ":". Deleting the "=" when it is not wanted would have been needed only seldom, as ":" in labels or "?:" occurs much less frequently than assignments and equality tests.

If you have the same expression twice as part of a compound assignment, there's nothing stopping you from assigning the variable instead of repeating the expression:

    a = b = f();

    a = f();
    b = a;
And tangential, but editors really shouldn't be mangling text as you type. Whenever I install Visual Studio anywhere, it's the first thing I turn off...

I also dislike array-to-pointer decay, but I'm not sure why you say that it is one of the two major problems.

Could you expand on "Array decay means that pointers need the ability to point to different array elements, thus requiring pointer arithmetic"? I don't see what you mean.

A point that comes to mind is that if array-to-pointer decay were really such a bad thing, the "struct of array" and "pointer to array" types would be used much more than they are (they are almost never used).

If arrays are separate types, pointer arithmetic becomes redundant: you can't add one to a pointer to get to the next element anymore. That would remove an entire class of problems.

Pointer to array would still be a thing, of course, but it would always only point to the _array_, without the ability to manipulate it to point to another part of the array.

Pointer arithmetic is not needed any more only when the programming language provides high-level operations with arrays, like APL, J, K and similar languages.

A very good optimizing compiler could generate code for accessing the arrays that would use either indices or pointers, depending on the target CPU and on the desired operation.

Otherwise, if you have to write explicit loops and the compiler is not extremely clever, for optimal performance you might need to use pointers instead of indices in many cases. The optimal choice can differ between target CPUs, e.g. for Intel/AMD CPUs indices are often better than pointers, while for ARM or POWER CPUs pointers are more frequently better.

You can have both, as proven by languages like Modula-2 or Object Pascal.

Arrays are a separate type.

Only in the exact scope in which they were declared; once they are passed as an argument to any function, they decay to a simple pointer.

Not from pointers, though.

> Students have enough trouble mentally separating the concepts of declaration and usage

I wonder if this is an argument in favour of Python’s implicit declarations (“there are no declarations, only usages”) or against them (“there are still declarations, but they’re invisible”)?

Python is dynamically typed, though, so most explicit declarations would be boring and pointless (hence the easy omission).

More to the point is probably type inference in statically typed languages, e.g. in Haskell you can often omit the type, or in C++ you can just call it "auto".

I'd say to learn a language it's better to explicitly specify your types in order to understand them, but I'm not sure it's necessarily "confusing" to omit them. I.e., "it has a type but I don't know it" is much easier to deal with than "what in the world is that type supposed to be".

> most explicit declarations would be boring and pointless

Not pointless - in similar languages (e.g. JavaScript's `let` and Lua's `local`), although declarations don't indicate a variable's type, they do indicate its scope.

Without explicit declarations, Python needs to make an assumption about scope - specifically, that each variable's scope is the innermost containing function. This makes the language easier for beginners, but more complex overall because it requires special features to support closures.

It's also a lot more error prone in large programs because misspellings introduce new variables instead of causing an error.

The fact that you can “declare” variables inside logic branches also leads to runtime exceptions when the code flow doesn’t touch any of those branches, but still expects the variable to exist later.

Couldn’t agree more. Every time I need to write/read C I’m forced to refer to some documentation just to recall what the right syntax is depending on whether or not I’m declaring or using pointers.

I think Go is a good example of how to turn "cleverness" into clear conventions, which usually yields the benefits of cleverness without the headaches.

Semantic cleverness becomes problematic too at times. There’s a fine line between developing a useful abstraction and doing something so obtuse your users are scarcely able to discern what’s going on once they run into a problem and run in circles trying to debug weird underlying language design decisions.

Makes me wonder if using keywords like

    var char foo;  // foo is a char pointer
    char boo;      // pointer to a boo type

or something is what I want. Also:

    alias baz = boo.foo;  // baz is actually just an alias for boo.foo

I was so confused as a student because I had roughly the same objections as you have here, but I thought it was I who was missing something.

Also: In an hour I'm teaching pointers to novices haha

Haskell does not really have pointers, but let's say it does (or that IORef fulfills that role). How would you then, as an example, declare an array of pointers to functions accepting an int and returning a float in Haskell?

    Array Int (IORef (Int -> Float))

The first "Int" is the type of the array's index, which in C is always "int" and hence not specified.

(As you note yourself, pointers, or even just arrays, are not very much of a Haskell concept, but I still think this is pretty readable and very obviously parseable if you know the involved types "Array i e" and "IORef a".)

In Rust, an array with 10 elements that are function pointers whose functions receive i32 and return f32 is like this:

    [fn(i32) -> f32; 10]

That's maybe cheating, because this type doesn't use Rust raw pointers; C's char* in Rust is something like *const char or *mut char. But, if you wanted a raw pointer (and were ok with a double pointer: an array of pointers to function pointers), it would be like this:

    [*mut (fn(i32) -> f32); 10]
Which I think we can drop the (), since there is no ambiguity

    [*mut fn(i32) -> f32; 10]

ok, so HN formatting ate a couple * here and there. I meant that

    char* something;
in C is like

    let something: *mut char;

    let something: *const char;
In Rust. So complex pointers are easily expressible: they are just *mut something complex

For a C-like take, D: float function(int)[], or if you genuinely wanted an array of pointers to function pointers, float function(int)*[].

Thanks. Yes I wanted array of pointers. So then how would you do a pointer to an array of functions in D? * float function(int)[] ? Edit: or float function(int)[] *

float function(int)[]*, yes.

Like how Rust does it? Via Box or using &?

It would be `[fn(i32) -> f32; N]` for the reference (equivalent to `float (*[N])(int)` in C). Box or & would be only required for also allowing function-like values, as in `[Box<dyn Fn(i32) -> f32>; N]`.

I generally line up with this. Syntactic confusion is a huge barrier to "thinking" in a new computing language, now add to it by having to "think in new concepts" which lies in the semantics.

It's a kind of combinatorial explosion of meaning and power of ideas.

The expressive qualities of lambda notations can die in the inherent confusion "am I parsing this sentence-in-lambda left-to-right or right-to-left" -And I really do mean parse, the internal brain-model of "reading the words and symbols" informs how people construct a mental model of what they see.

Yes, a <- b and b -> a could both mean "b is transformed into a", but one of them is actually read as "a is derived from b", which is different, syntactically and in comprehension terms.

(I don't mean any computing language does this specific notation. I just mean, that how we comprehend this is subtly different, and in the case of first/new programming language, its a higher-order problem. New concepts, new notation)

Learning programming isn't so much about the language as it is about human beings and motivation. While language enthusiasts are very excited about type systems and technical details like syntax, what beginners need is to get over that initial threshold and to be able to self-motivate.

The first question to ask when choosing a language is: is it useful for practical programming? Can the students do something that feels like an actual accomplishment within the first week? If it isn't, then you can't use it as a beginner's language because it has a weak reward mechanism. The reward is to be able to solve practical problems that mean something to the student. And this is critical in the initial weeks and months.

Printing a Fibonacci sequence is not rewarding for most students. It is for some, but in my experience, not most. Printing a Fibonacci sequence with tail recursion even less so, because a student will have no idea why this may be important.

Being able to write a program that meaningfully communicates with its surroundings to do work IS inspiring. Whether it is consuming input from the outside and doing something with it, blinking a LED, making an HTTP request, or sending a message. (Or in the case of my nephew: write a Discord bot that honks a horn when his friends are online and try to get his attention. Which didn't just solve a practical problem: it gave him status among his peers, which is a strong motivator)

For instance: I tend to recommend Python as a first language. I don't like Python myself, but I recognize that it a) allows you to do practical things, b) it has a relatively simple syntax, and c) it doesn't really force a lot of fluff on you that "this will make sense later". It also allows you to postpone topics that initially just confuse students.

Java, JavaScript, C, C++, Lisps and perhaps more esoteric languages are for later. Yes, I know people think some of these are "the right way" to start, but that's often the teachers focusing on their own interests and not focusing on the student. Bad teacher.

Let's get people over the first couple of hurdles first, and then we can talk about what's next. But you always have to remind yourself that introductory programming is about humans.

I am a big fan of showing two languages at the start. Statically typed Java/something so that the idea of data types is ingrained. Then rapidly switching to Python after the first lesson.

A real problem I've seen (and had) with absolute beginners doing Python, is they really struggle with the types of abstract variables for the first little while when they're "getting stuff done."

"Knowing" to use type() to debug your programs isn't something that comes intuitively to a novice who's been programming for all of 25 minutes. They have absolutely no idea that the builtin is there or that they need to use it.

A quick Hello World in something with static types, and a "tour of data types" can go a long way. Also helps sell them on Python.

I wonder if Typescript is suitably fun for this. Because I'm convinced that a significant part of the attraction of Python to beginners is pure aesthetics.

I am not entirely convinced you need to worry too much about types in the sense we think of types. Sure, people have to have a basic understanding that there are different kinds of data, and that multiplying banana with poem doesn't make sense (well, actually, in Python it has an interpretation, so that was probably a bad example :-)).

If your program has to figure out what type some variable is before operating on it, I think the aspiring programmer has already overcome the first few barriers and is already tackling a more advanced subject.

I think initially the understanding that there are different classes of values you can assign to variables and that getting these confused can produce unwanted results is a very good first insight. And when that settles in the students mind, they might be receptive to "then there's this family of languages that get really particular about what goes where".

(I also think it is important to try to not involve too much technical terminology early on. Have a look at Richard Feynman's lecture about the double-slit experiment and note how he uses absolutely no technical language. It is so well done it didn't even occur to me until after I saw the talk)

Probably biased here, but back in the days before frameworks I found plain PHP with HTTP a good vehicle for explaining this, out of necessity. Every variable arrives as a string anyway, and you need to check whether it's actually an integer if you want to use it as such (let's ignore weird autocasting).

On the other hand I'm not sure it's teaching a good lesson here, because you're simply taking the un-typedness of HTTP parameters as an example, whereas in other environments you would simply explain there are number data types and strings and others...

I recommend JavaScript in the context of plain HTML as a language for the first steps in programming (especially for kids). Being able to write a simple HTML file and then see things happen immediately on a refresh is priceless.

I wouldn't say that JavaScript is necessarily an "easy" language. I think what you're after can easily be accomplished in most languages that have a REPL or equivalent (including interfaces such as https://play.golang.org/)

1) The subset of JavaScript that you need to know at first is extremely easy to get a grip on.

2) You already have everything you need to start playing with it (a web browser and a text editor).

Not having used JS for teaching someone how to program I'll take your word for it. It may be that JS only gets complicated when you try to make non-trivial software (and then things get complicated in a hurry).

I'm not sure how important 2 is. Installing VS Code and a handful of languages isn't all that hard. And it is just relatively quick setup.

But C/C++, Ada, and Haskell were never intended to be introductory programming languages! Complaining about them violating rules for introductory languages is judging them by the wrong standard. (Yes, I know, instructors have tried to use them as introductory languages. That isn't the fault of the languages, though...)

Haskell was intended as an academic language on which lazy evaluation could be researched.

It was designed to a small extent to be used during CS studies. Witness some of those functions in the Prelude that will crash the program on incorrect input but could be trivially improved to total versions: they were defined the way they are because it eases teaching.

I was not there, so I can not tell whether the Haskell committee put any thought into teaching younger audiences.

However, Haskell was never, ever meant for teaching The Programming. It was always about lazy evaluation and a static type system.

Indeed, herein lies the trap. A Python programmer expects a different foundation to be laid during a programming course than an R programmer, a Prolog programmer, a Haskell programmer, or a Clojure programmer.

There is no generic introductory programming course. We specialize from the get-go, and it's only through learning multiple languages and/or advanced concepts that we generalize ;)

These are still problems developers face when learning that language for the first time, even if they have experience with another language

I don't know why this is still being done with new languages, but it really peeves me when I see syntactically privileged data structures. You get them in practically all languages, whether they're records or dicts or arrays or lists.

The only language that I've seen that even gets close to the right approach is Scala. You don't use special syntax to create an array that you wouldn't also use for a list or a map or a set. There's no special bracket type, no master data structure that all others must bow to. And the result is that you don't mindlessly default to the wrong data structures for your problem.

I like the way Kotlin does it: listOf(1, 2, 3) setOf(1, 2, 3) arrayOf(1, 2, 3)

I'm an eternally novice programmer: I only do prototypes. 15y ago I wrote my first program. Since then I learned a few languages, but never did I want to become an expert: I was always trying to do relatively easy stuff like a website, a numeric algorithm to take some decision depending on some data, or a macro inside some third-party software. It was all very unintuitive, filled with stuff "I would grasp later" - which I never did, because I didn't have the need. I still have zero clues on what a memory pointer is, for instance, and do not intend to learn it - nevertheless, I have successfully built a few working programs.

I absolutely hated working with html+css+javascript, and the most intuitive language I've used is Python, but I had a few big surprises/headaches while learning it (to my level) by doing. Off the top of my head, the whole scope business was completely non-intuitive (actually I didn't learn it: I just made sure things worked by extensive testing and experimentation), as well as how assigning and modifying lists work.

    numbers = [1, 2, 3, 4]
    my_numbers = numbers
    my_numbers[0] = 999

I would never expect, at first, numbers[0] to print 999 after these statements, etc.

Something else that bothered me was how some methods worked. If I want to make a string uppercase, why wouldn't upper(strvar, other arguments if any) work, instead of strvar.upper(), if that's the way the user will implement his own function calls?

Of course I don't want to dispute how it should be, because I'm not an expert. But it seems to me a much better introductory programming language than those currently available is possible: all it has to do is limit itself to only novices. And also they shouldn't take "novices" as people with PhD in biology trying to get into datascience - these are not novices at all: they know a lot about math, for instance. The focus should be on simple operations, solely.

Hackernews breakline editing behavior is also unintuitive btw.

> I would never expect, at first, numbers[0] to print 999 after these statements, etc.

You are not alone! I've always disagreed with the idea that imperative programming is "more intuitive", more familiar to many perhaps. In a functional programming language, modifications to lists/collections result in new copies (with maximal sharing) and do not mutate the original. Such "persistent" collections have to be designed and implemented to make these operations cheap. Copying an array is typically not cheap and so Python just creates a new reference (pointer) for my_numbers.

Many languages target 'novice' audiences. Smalltalk is a classic example https://squeak.org ; a more recent example is Pyret https://www.pyret.org

Lest we forget —

"Ubiquitous Applications: Embedded Systems to Mainframe"


FTA: “A while loop doesn't execute "while" its condition is true, but rather until its condition ceases to be true at the end of its associated code block.”

For me, the canonical version of “a while loop” is “while f do … od”, not the “do … while f” that that assumes, and which, I assume, they would like to replace by something like Pascal’s “repeat … until not f”

I also disagree somewhat about the use of finite precision numbers. I think people (¿still?) will be familiar with them from using calculators, so having finite precision with propagating error values, IMO, would be fine (for signed int, I would use 0xFFF…FFF for that. As a bonus, it would mean restoring symmetry between negative and positive integers)

Syntactic synonyms and homonyms (from 3: Grammatical Traps) are some of my all-time peeves in programming languages. I find they're usually bound up with 6: Excessive Cleverness.

A very insightful article. Others have commented on its merits, but I have a different question - one I constantly return to: what about languages for professionals? Are there guidelines for design of languages meant for people who already know programming?

Our industry has an obsession with lowering barriers to entry, making everything easy for the novice at the expense of making it difficult for the seasoned professional. It's, like, the opposite of other professions, where tools are optimized to empower those doing the work, not for ease of learning. So are there any guidelines to design of powerful programming languages? Tools that sacrifice the learning curve to make them more powerful and better at managing complexity?

> Tools that sacrifice the learning curve to make them more powerful and better at managing complexity?

That's the value proposition of Haskell, as far as I can tell. Lisp might be a more powerful language, but it doesn't help so much in managing complexity. Haskell's type system is arguably the best at tackling complex systems, but the learning curve is tough.

Both the power and simplicity of a program come from its abstractions. Any language that makes new abstraction easy allows for a seasoned professional to easily make simple and powerful programs.

Beyond that, the language itself cannot do this work for you beyond providing libraries where other programmers have done it already, and shared their work.

And the reason for this is simple. To make an abstraction, we must think, and no language can think for us. The power and simplicity of our code is a product of our thinking. The rest is just technical implementation based on specs.

With languages that make abstraction hard, programmers have created tools to help. HTML and CSS have horrible abstraction features. Server-side Perl and PHP and client-side javascript has been used to abstract HTML. Sass, Less, and SCSS are tools invented for CSS.

The learning curve for abstraction itself is language independent.

The way that I think about it is in terms of N-dimensional spheres.


If you look at the formulas for the volume of an n-sphere, you may notice that as the number of dimensions increases, a greater share of the sphere's "volume" ends up right next to its surface. For high-dimensional spheres, the majority of the "volume" is right at the edge. Or, poetically: the further you swim into the ocean, the deeper the water gets.

So, my metaphor is that if you have a field which has many different orthogonal aspects (like programming does) then the beginners (living in the exact center of the sphere) will have a relatively shallow space to swim around in. This experience can be understood and optimized. However, the further you get from the center, the greater the possible volume of space that you have to explore. At the very beginning of a programming journey you're swimming in a puddle, but at the end you're swimming in a gas giant.

I suspect that there's so much space in programming alone (not even counting software engineering) that each individual programmer on earth has the opportunity to end up in a unique ocean. Coming up with any guidelines for THAT feels like a failing venture.

Although that didn't stop me from trying. You'll notice that there isn't actually any way for us to describe bad code. We've just got best practices and code smells. Best practices just being "someone else was successful once and also doing this" and code smells being "this code makes my tummy feel bad ... no I can't quantify that".

I think I've got a metric that can explain why code is bad regardless of language and domain. Although there's still a lot of work to do. Also, this is only one aspect of what you would need to do in order to design a powerful programming language for experts. I suspect we're currently in the very beginning of an effort that's going to take easily a hundred years (at least) for humanity to figure out.

I believe that there are enough examples in various niches.

One of the most obvious is APL and languages inspired by it.

While the original APL would not be a good choice today, because it lacks various features for managing program structure and more complex data organization, even the first version of APL had much more powerful methods for handling arrays/tensors than almost all popular programming languages in use today.

When using a set of high-level APL-like operators, someone familiar with them can avoid wasting a lot of time writing the low-level redundant code, i.e. loops, that most programming languages force on you.
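For a taste of the difference even in plain Python (a pale imitation of APL, where the whole thing would be the single inner-product operator +.×), compare a loop with an operator-style one-liner:

```python
# Total cost of an order: the loop version vs. the "whole array at once" version.
prices = [3.0, 4.5, 2.25]
qty = [2, 1, 4]

# Loop version: the "low-level redundant code" the comment describes.
total = 0
for p, q in zip(prices, qty):
    total += p * q

# Operator version: one expression, no explicit loop bookkeeping.
total2 = sum(p * q for p, q in zip(prices, qty))

assert total == total2 == 19.5
```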

Professionals (meaning anyone who makes a living programming for at least part of their job) suffer from exactly the same sins listed in the article.

If anything, pros suffer from even MORE sins of the stuff that surrounds the language... tooling, builds, deployment, environment concerns. One big one is configurational dumpster fires-- dealing with reams of brutally unpleasant YAML.

Indeed, the tooling is the most important aspect on which (a good) developer's productivity hinges these days. As to the languages, according to my observations (and conversations) more experienced professional developers care less about languages and think little about switching from one to another when necessary. (Also, they stop being fans of any language in particular.) Their focus is simple: getting stuff done in the most straightforward and coherent way possible given the constraints imposed on the various aspects of their work.

As an experienced dev, I agree and disagree with what you said. I do care less about languages, although I do have preferences, that part is true. However, I hate switching between them, and vastly prefer to stay in one language. The syntax is not the problem (although it is annoying trying to remember how to do an if-then statement when switching a bunch). It is the libraries and tooling that drives you crazy, plus the paradigms and idioms that are often used. I don't enjoy developing in Java, not because of the language, but because Java codebases are almost always an ornate mess of abstractions and layer after layer of tiny functions that make it incredibly difficult to hold enough of the codebase in your head to get things done.

> making everything easy for the novice at the expense of making it difficult for the seasoned professional.

This would be a valid concern if there were only a single programming language in the world. In reality there are thousands, some aimed at absolute beginners and some extremely complex and demanding, aimed at specialized professionals.

This is true in a sense, but in practice the usage of languages follows a power-law-like distribution. There are maybe a handful of languages used by maybe 80% of people, and then maybe a dozen or so used by another 10%, with the thousands of obscure languages used by a sliver of people. Many of those thousands have at best a single user, the creator of the language.

And there is a cost to supporting lots of languages, because a language isn't just a compiler that takes up no room or resources. For a language to thrive it has to have a community, a development team, good documentation, integration with other languages and development tools etc. These things take money and power to bring them into existence, and there really only are so many good PL people in the world. It's not a super huge community of people, and we're not exactly churning PL experts out of colleges. These days the intro to AI will be waitlisted, but intro to compilers will be sending out e-mails reminding students the course still exists.

And it's not like there are a lot of paid positions for PL devs anyway. If you want to work on a language you can find jobs at Apple, Google, Microsoft, etc., but if you want to work on your own idea then there's not a lot of money to be had. A couple have gotten lucky with startup capital (Eve, Dark, Enso), others have gotten corporate patronage (Elm), and others have gotten user patronage (Zig). But I don't know how deep those dollars go, and how many languages they can support. I'm not sure but I have a feeling it's not thousands of languages.

I've been building a "programming" language for UI designers, and I had no idea how hard it would be to make a language feel simple. I have no experience in this area, so I'm just trying to make it feel as if it could follow naturally from verbal speech:

  style box
    fill: red
    width: when $size
      is small: 10px
      is medium: 20px
      is large: 30px
    height: when $size
      is small: 10px
      is medium: 20px
      is large: 30px

"if (x=0 || -10<y<10)"... it just feels so right, but nope: smackdown. Your first lesson in how programming languages will conspire against you and your logical thought processes. I think this paper is a great attempt to shake off the curse of knowledge, and I'd gladly read another dozen things that show the language getting in the way of the process. Is there a name for this kind of higher level analysis of programming language?

I presume that if you are learning something, you follow an instructor, a textbook or some kind of tutorial. Before learning about conditionals you should already know that = means assignment, not a test for equality.

I'd rather blame the quality of the learning resources, or the student's lack of attention, than the language design if someone mistakes = for the equality operator.

Bad language design is not the fault of the student. After all, languages exist which don't have that problem.
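Python is one such language: where a condition is expected, = is simply not valid syntax, so the classic C slip fails loudly at compile time instead of silently succeeding. A quick demonstration:

```python
# Compiling the C-style slip "if x = 0:" fails loudly in Python.
src = "if x = 0:\n    pass"
try:
    compile(src, "<example>", "exec")
    outcome = "accepted"
except SyntaxError:
    outcome = "rejected"
print(outcome)  # rejected
```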

> Before learning about conditionals you should already know that = means assignment, not a test for equality.

Depends on the language. F# and CL, for example, would like a word.

Maybe if it were a single student, but this is a problem that has persisted across generations of students. I'm a fan of blaming tools for being poorly designed rather than the sad student who has to suffer using them. All their lives these students are told that "=" means that the left hand side and right hand side are equivalent. Then we give them a tool that throws the well-understood semantic meaning of the symbol out the window (for no good reason, mind you), and then you say we are to blame the student for being confused?

> I'd rather blame the quality of learning resources or student lack of attention here than the language design if someone mistakes = for equality operator.

I see you don't have much experience with other programming languages, or you would have known that "=" is the equality operator in BASIC, Pascal, SQL and Excel... just to name a few of the most common occurrences where beginners -or- large-scale deployments would encounter this.

That is the entire point of this paper, it uses C as an example, but the concepts are the same.

The other symbol is also behaving in an unexpected way here. I can't think of any language that allows 10 < y < 20.
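C, for what it's worth, does accept the expression; it just doesn't mean what it looks like, since it parses as ((10 < y) < 20), comparing the 0-or-1 result of the first comparison to 20. A simulation of that reading (in Python, with int() standing in for C's integer comparison result):

```python
def c_reading(y):
    # How C parses 10 < y < 20: ((10 < y) < 20), with a 0/1 intermediate.
    return int(10 < y) < 20

# Both 0 and 1 are less than 20, so the C reading is true for EVERY y.
assert c_reading(50) is True   # 50 is not between 10 and 20, yet...
assert c_reading(15) is True
assert c_reading(0) is True
```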

My experience learning programming languages (starting in grade school):

1980. BASIC (AppleSoft an TRS-80, but later DEC, Waterloo and QBASIC added to the mix)

1982. 6502 Assembler

1982. Pascal (this became my primary high-level programming language in 1986–1992)

1984. C

1985. Rexx (still the best scripting language I've worked with)


1986. TeX (programming in a macro-expansion language is its own bizarro world)

1987. PostScript (the joys of coding in a stack-based Forth derivative)

1988. Modula-2

1988. Metafont (Another Knuth macro-expansion language, but one with fewer quirks than TeX)

1994. C++ (it didn't really click until the STL came on the scene; there were some awful C++ books published in the 90s)

1995. Perl

1997. Java

1999. PHP

1999. JavaScript

2007. Ruby (I had a diversion from 2002–2006 where I tried to leave programming behind for mathematics)

2009. Groovy

2009. Erlang

2009. Scala

2011. Objective C

2016. Swift

2018. Kotlin

2018. Clojure

2021. Rust

I'm by no means an expert in all of these languages (I can really only claim current competency in the JVM languages these days, although my Rust is getting better, and JavaScript is not so much a language anymore as a foundation for other languages built on top of it through various libraries), but I do think that BASIC and Pascal are still great languages for beginning programmers, although it's hard to go from, say, UCSD Pascal to dealing with OOP or FP, which are the dominant development paradigms these days.

It's also somewhat interesting to see that my picking up new languages tended to come in clumps, with things piling on fairly quickly after my return to programming work in 2006.

TurboPascal had a lovely IDE which made it really nice for teaching. Whereas teaching Java was a struggle initially for novices because of the relative difficulty in configuring the development environment. This is probably the most important thing PHP also got right for beginners.

I also learned programming in college with TurboPascal and absolutely loved it. Learning Smalltalk was also a pleasure due to not only the beauty of its concepts but also the development environment. The closest thing in Java for teaching the language is BlueJ, which gives the student a good intuition on the difference between class affordances (constructors, static methods and variables) and object affordances (instance methods) based on what's available when you right-click on them.

TurboC and Borland C had the same kind of IDE.

What does this have to do in the slightest with the content of the paper?

AppleSoft on a TRS-80? is that right?

d'oh, and, not an

One of the decisions that paid off the most in my later programming endeavors was deciding to learn C and C++ in high school, after Basic and Pascal. That taught me a lot about how computers work and how programs are supposed to run once in memory.

When I learned higher-level languages later, it helped to know what was under the hood. So, if I'm supposed to teach someone programming, I'll start with C or a sane subset of C++ rather than Python.

The other decision that paid off a lot was learning algorithms and data structures.

So you learned Basic and Pascal before you tackled C and C++, but you think other students should start directly with C or C++?

I've given this advice before and it may sound really weird, but I stand by it. Beginner programmers should read this book: http://www.z80.info/zaks.html

Yes, it's old. Yes, it's for a non-current CPU. But it explains how a CPU _really_ works, how binary numbers work, etc. It's a fantastic primer for understanding what's really going on inside the machine, and because it is old, it isn't yet burdened by all the complications of modern CPUs. At the same time, learning those will be easy after you understand what was written here.

Of course when I read it, I was 15, and sitting behind a Z80-based machine, writing assembly programs by writing them out on a piece of paper, and translating to hex codes by hand... Ah, fun times :-) The biggest thing I ever wrote that way was an RLE cruncher which I had designed myself. You take care of details when you program that way, I can tell you...

Sounds like you think everyone should learn programming the exact same way you did?

Wasn't there some rule about not immediately assuming the worst here?

Yes. Basic and Pascal did not help me much.

>One of the decisions that paid off the most in my later programming endeavors was deciding to learn C and C++ in high school

I started with C++ in high school, but probably not $newest_version_at_the_time, and it was a mess for me.

I felt like I was fighting the language/compiler instead of solving problems. "wtf? I remember this working fine a week ago! <googling again some basic thing like array length>" or asking the teacher about compiler errors.

Somebody then showed me C# and I instantly switched

> C.. C++ ... That taught me a lot about how computers work and how the programs are supposed to run, once in memory.

Both langs are terrible ways to learn how computers work. They only help you learn how to interface with computers whose OSes, sadly, are made with C/C++.

I'm unsure whether there actually exists a lang that ACTUALLY teaches how computers work.

I can't recall a language that has explicit ways to code against the CPU cache, or the CPU registers, and the context switches, and the traps, etc. C is not a model of how computers work. It is yet another abstraction, and not a good one.

I much prefer Pascal, because it is more sane, and Rust today. But not even Rust teaches you much about computers.

Instead, you NEED to learn how computers work, THEN map that onto how Rust/C works, and PROBABLY it will go well. (In Rust, I have one word: async. Nope, that is not how computers work!)

For example:

    a  // if String: heap, if i64: stack
Different if:

This is more in line with the idea.

> but fails to take into account the apparently inability of many students to master the concept and practice of consistent indenting

I've seen seasoned programmers struggle with this.

Interesting to contrast this paper’s opinion of significant indentation (“detrimental cleverness”) with “I will never again voluntarily use a language without mandatory indentation for teaching novice programmers”.


the point about synonyms increasing learning time is huge. i used scala for over a year before i understood that for comprehensions are equivalent to nested flatmaps.
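python has the same kind of synonym, for what it's worth: a nested comprehension is just sugar for nested loops appending to a list, much like scala's for comprehension desugars to nested flatMaps. nothing tells you that up front:

```python
xs, ys = [1, 2], ["a", "b"]

# Comprehension form.
pairs = [(x, y) for x in xs for y in ys]

# The desugared, nested-loop form it is equivalent to.
pairs2 = []
for x in xs:
    for y in ys:
        pairs2.append((x, y))

assert pairs == pairs2 == [(1, "a"), (1, "b"), (2, "a"), (2, "b")]
```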

i also don’t like when people refer to languages as “simple.” this pdf is great it calls out the ones with less rules have more complex logic. brainfuck is simple

i think the most important part of learning to program is not the language but having a good teacher

Yeah, "simple" is one of the most abused words. Everyone agrees simple = good when it comes to programming, but what does it actually mean? Often it means "easy to understand", but what is easy to understand depends on prior knowledge.

For example, plus(a, b) is simpler in the sense that it doesn't special-case arithmetic and requires fewer language primitives, but a + b will seem simpler to most students.

The best definition of simple I've found is "one that takes the least cognitive load to use/comprehend."

With programming it's actually hugely complicated, because are you trying to lower the mental effort to read, or the mental effort to write?

a + b is simple to write because it reminds you of high school mathematics.

However, I'd argue

  from operator import add, mul

  def square(x):
      return mul(x, x)

  def sum_squares(x, y):
      return add(square(x), square(y))

  result = sum_squares(5, 12)
is an excellent way for newer programmers to read code and learn what is happening.

If the sins really were deadly, the languages guilty of them would be, y'know, dead.

But instead, all of the most successful languages ever are guilty of all of them, while any language guilty of none of them is dead.

So, maybe it would be better to avoid the sins anyway, but there are very clearly overwhelmingly more important considerations in making a successful language. Luck seems to play a role, especially in the first year, but other things seem to matter later.

Very insightful paper, thanks for sharing.

Firstly, I'm surprised that having no garbage collection did not make it onto the list of the seven deadly sins.

Secondly, someone here also mentioned the very useful Turbo Pascal IDE, and having an excellent IDE can really help with the visual learning approach that all learners need [1]. It would be great if any programming language had the intuitive capability for real-time visual interaction described by Bret Victor [2].

Thirdly, the paper failed to mention that a beginner-friendly programming language has a very high chance of being adopted by the industry and becoming extremely popular. This is self-evident from the recent exponential popularity of Python. When the paper was written back in 1996, Python was just another "jack of all trades, master of none" scripting language. At the time it was playing second fiddle to popular languages including Perl, PHP and Tcl! The same can be said of D now, but as they always say, time will certainly tell.

Personally I think D does not tick most of the seven deadly sins boxes, and it has made most of the seven significant steps towards a more teachable language mentioned in the paper. It has the most Pythonic feel of the compiled languages. It also has garbage collection by default, which makes it an excellent introduction to programming. Of course you can exercise bit manipulation like a madman in D (also cautioned against in the paper), but only when you really want to, after you've become considerably comfortable with the language [3]. The only crucial thing currently missing is an IDE that is intuitive and beginner-friendly.

[1] The Biggest Myth In Education:


[2] Bret Victor - Inventing on Principle:


[3] Bit packing like a madman (DConf 2016):


Visual D[0] for VS and Code-D[1] for VS Code are quite good developer experiences for D.

D is what Java and C# 1.0 should have been, I still hope that it eventually finds its place on the mainstream.

As for Python, in the context of the paper, I would say it has won BASIC's place. It even comes as standard on Casio and TI programmable calculators now.

However, Basic still has an edge over Python: it was designed from the beginning to be compiled to native code (8-bit systems being the exception), something that Python still needs to improve on.

[0] - https://rainers.github.io/visuald/visuald/StartPage.html

[1] - https://marketplace.visualstudio.com/items?itemName=webfreak...

In 1996, it wasn't yet clear that GC languages were going to sweep the field.

Yes, Perl was at its zenith. But Microsoft treated VB6 like a red-headed stepchild in spite of its popularity. Tcl was ... okay. However, Lisps were on the downswing. Java and Javascript were just getting started. Python was considered one of the many Perl wannabes.

Computer memory was just becoming cheap, and overprovisioning of RAM is what allows GC languages to flourish.

I learned with QBASIC because you could put text and pixels on screen immediately. It is beyond useless to start with syntax, fibonacci and prime numbers.

I don't believe in the usefulness of a language designed for teaching, unless the student is 7 years old.

I think that someone will get much better dividends in the long run if they start with a language designed to be useful, to solve problems.

Do you have experience teaching programming to beginners?

Languages are not equally easy to learn. Both Python and C++ are designed to solve problems, but Python is much more approachable for beginners - perhaps because it was heavily inspired by the teaching language ABC.

I wonder if there was any followup, trying to assess languages explicitly addressing programming education (e.g., Logo, Pyret) against the sins and recommendations of this paper?

Besides all the C-bashing, I think it's hard to take the paper seriously when it first notes that some syntax spreads "memetically" between languages, such as square brackets for indexing, and then turns around and complains about that, since it makes it harder to use knowledge you already have, such as subscripting for indexing (as in math).

At least some leeway for the differences in typography would be sensible there, in my opinion. Source code is "linear" basic text, and expressing subscripting which is a typographical feature is not directly possible, so I think it's rich to judge language designers based on that.

EDITS: Spelling, grammar.

If simplicity is good for a learning language, then Brainfuck wins hands down.

Everyone's first language should be Perl.

Why do you say so?
