I completely failed to recognize its significance when I first read that essay. Thanks for bringing it to my attention.
"This practice is not only common, but institutionalized. For example, in the OO world you hear a good deal about "patterns". I wonder if these patterns are not sometimes evidence of case (c), the human compiler, at work. When I see patterns in my programs, I consider it a sign of trouble."
For write-only code, communication with humans doesn't matter. But if someone else is going to read it, or if you are going to read it yourself (for example, to debug it), then it matters.
Boilerplate - repeated pattern - helps orient a reader. The more familiar it is, the easier it is to understand. Every layer of abstraction increases the difficulty of understanding. The writing becomes like a dense mathematical article instead of a newspaper article.
I believe, without proof, that this attitude is one of the reasons FP has not become popular, despite being older than Java, older than C, and older than COBOL: it encourages a programming style that is hard to read, because it strives to eliminate redundancy.
That said, my comp sci masters was about non-redundancy. Absolute non-redundancy is beautiful; beautiful in the sense that it is truth. That's all you need to know.
At the code snippet level, 40 lines of redundant code may well be easier to read than 2 lines of, say, function composition. And 100 lines of familiar redundancy will feel much easier to work with than a small amount of strange-looking symbolic compactness. But anyone who believes that a million lines of redundant code are easier to work with than ten thousand lines of compact code (I'm making these numbers up) is deeply deluded. This is how software projects end up with hundreds of programmers frozen in concrete. (People think of code as a commodity, as if it comes in sheet rolls that you cut enough of until your project is complete. This is a fundamental mistake.)
The root issue is that the number of programmers content to crank out redundant code is at least an order of magnitude (maybe two) greater than the number who are capable of working effectively with abstraction.
If that's correct, then there's a market opportunity in it. A small startup with programmers who do know how to program in more powerful languages (FP or otherwise) ought to be able to achieve very ambitious things compared to large teams working with weaker tools. At some point, the quantitative advantage becomes a qualitative one. Small teams are able to do things that large teams simply can't, and small codebases can be worked with in ways that large codebases simply can't.
This market opportunity is traditionally exploited by creating abstractions for other developers to use: a database; a language; a library; an OS.
The abstraction is sold many times, with each sale enabling the recipient to create more ambitious applications. This SOTSOG leverages those few who are capable of working effectively with abstraction.
I really, really think that there's something about learning imperative languages that breaks our brains in some way. Because I've heard people argue that

    theSum = 0
    theArray.each do |n|
      theSum += n
    end

is more readable than

    theSum = theArray.sum
Even worse, I've had people take correct functional-style code and make it "more readable" by expanding it out into a bunch of boilerplate, and fail to notice the typos and logic errors they introduced in the process. People don't actually read boilerplate code; they do a visual pattern match on it. If the pattern is close enough, they accept the code as correct even if there's a small error that they didn't notice.
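A contrived sketch of what that expansion failure looks like (the names and the specific bug are mine, not from any actual review): the compact version leaves no room for a typo to hide, while the "more readable" expanded version carries exactly the kind of shape-preserving error that pattern-matching eyes skim past.

```python
# Compact, functional-style version: nothing to mistype.
def total_compact(xs):
    return sum(xs)

# The same logic "expanded for readability" -- with a subtle bug
# (`=` instead of `+=`) that survives a casual visual pattern match.
def total_expanded(xs):
    total = 0
    for x in xs:
        total = x   # BUG: should be `total += x`, but the shape looks right
    return total

print(total_compact([1, 2, 3]))   # 6
print(total_expanded([1, 2, 3]))  # 3 -- silently wrong
```

The expanded version compiles, runs, and even returns a plausible number, which is precisely why nobody notices.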
Your point makes me think that Java's boilerplate + modern IDEs are really onto something. They automatically add the boilerplate for you, so it's there for you to pattern-match on, without your having to write it correctly. This helps me understand their extreme popularity.
Code that has repetition will not be as compact as code in which every symbol is new and original. The latter will require much more thought to both write and understand, and so is not necessarily desirable.
If we use the term pattern loosely, any programming involves a series of nested patterns with variations. That is how it should be. That is essentially how good writing works as well.
So, actually, patterns and repetition are good if they make the original parts clear. So, as far as language goes, it seems like a language which allows you to include what you need to include and exclude what is irrelevant would be desirable.
Any program that expresses your ideas in the most compact fashion possible will be incomprehensible to anyone else, and incomprehensible to you next week.
The problem with your argument about clarity is that clarity is very much in the eye of the beholder. To someone who doesn't know about first-class functions, code that defines a class to hold a function and passes instances of a "strategy pattern" object around is probably going to be clearer. To someone who does know about first-class functions, such code isn't "clear" at all, it's silly, and the name "strategy pattern" is ridiculous.
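To make that contrast concrete, here's a sketch (all names mine, purely illustrative) of the same "parameterize the comparison behavior" idea written both ways:

```python
# The "strategy pattern" version: a class whose only job is to hold a function.
class DescendingStrategy:
    def key(self, item):
        return -item

def sort_with_strategy(items, strategy):
    return sorted(items, key=strategy.key)

# The first-class-function version: just pass the function.
def sort_with_function(items, key):
    return sorted(items, key=key)

print(sort_with_strategy([3, 1, 2], DescendingStrategy()))  # [3, 2, 1]
print(sort_with_function([3, 1, 2], key=lambda x: -x))      # [3, 2, 1]
```

To the first kind of reader, the class version is "clear" because it names the concept; to the second, it's three lines of ceremony wrapped around a lambda.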
Okay here I go:
Every line of code introduces complexity, every abstraction is leaky, and every interface encodes certain assumptions. As you work your way up into application level functionality, reusability of code falls off a cliff.
At some point some possible abstraction will be apparent that will reduce some redundant code. What are some reasons you might not want to go ahead and refactor?
Reason #1 - It might not be worth the time to parameterize the method. Even if it's just a couple parameters you will need to write that code vs a copy and paste job. Depending on the circumstances this may or may not be worth it--what's the likelihood of reuse? How much code are you actually removing?
Reason #2 - Maybe it won't be more readable. Just having less code is not more readable per se. Obviously this is somewhat subjective, but hey, we're all human. The new abstraction may not be conceptually useful in the wider context of the application. At a minimum it is going to require the incoming developer to look in one additional place to trace the code flow, and that holds even if the next programmer has all the skills of the first programmer (or is the same person).
Reason #3 - There might not be sufficient information to craft the right abstraction. Even if you know you are likely to reuse some element, if you don't build it with the right parameters then it will need to be refactored later, possibly even scrapped entirely. If you have a good instinct about this you can hedge your bet by simply duplicating some code for now.
Reason #4 - Several elements may be similar yet unrelated. Even if you have many things that are exactly the same, they may be implementations of different ideas that are moving in different directions. An abstraction of these things ends up being a form of coupling. This issue comes up in a lot of testing, mainly because testing is much more open ended than business logic since the sky is the limit when you are deciding what and how to test.
Reason #5 - The variability may just be too much. Having a parameter or two is the backbone of efficient abstraction. However what if a common task has much more variability requiring 10 or 20 parameters, or maybe just 5 parameters with complex interaction? Obviously there is going to be a line somewhere, and real business logic can get infinitely close to either side of the line.
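A contrived illustration of Reason #5 (the function and its parameters are invented for this sketch): one "reusable" helper straining to cover every variation, next to two duplicated-but-obvious functions for the cases that actually exist today.

```python
# One abstraction trying to absorb all the variability...
def format_name(first, last, *, upper=False, initials=False,
                last_first=False, separator=" "):
    if initials:
        first = first[0] + "."
    name = (last + separator + first) if last_first else (first + separator + last)
    return name.upper() if upper else name

# ...versus plain duplication for the two cases the application has.
def display_name(first, last):
    return first + " " + last

def index_name(first, last):
    return last + ", " + first

print(format_name("Ada", "Lovelace", last_first=True, separator=", "))  # Lovelace, Ada
print(index_name("Ada", "Lovelace"))                                    # Lovelace, Ada
```

Every call site of `format_name` now has to be read against four keyword arguments and their interactions; the duplicated versions can each be understood at a glance.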
As skilled hackers I think it's too easy to see the mistakes of amateurs and beginners who miss obvious opportunities for abstraction. LISPers in particular are keen to notice when a blub language like Java requires especially obtuse constructs and duplication. However those are just strawmen. A language like Java is just too easy a target.
The truth is that a balance must be struck lest we become architecture astronauts.
Now, all that said, I think the beginning of the OA is completely right. The notion of a pattern being unabstractable is nonsense in higher level languages. I think that's an artifact of too much Java causing people to internalize false dichotomies about what code can do. Meta-programming opens all doors.
On the other hand, the idea that a design pattern is a hint that a language is not powerful enough is equally ridiculous. It only makes sense if you look at patterns that emerged for blub languages and then observe that their typical implementations don't make any sense once you have high-level capabilities like macros. Even if you are coding in the theoretically most powerful programming language, you are still going to run into patterns of things that don't have a good abstraction, for the aforementioned reasons.
I've spent most of my consulting career crash-course hacking (no time/budget for a proper rewrite, incurring more technical debt... I do what I can to improve things structurally) around terrible, terrible pattern-heavy libraries that sought to remove every single line of duplicate code and ended up abstracting out all the wrong things. I have literally spent man-years dealing with bad object<->HTML mappers.
Code duplication can be overwhelmingly preferable to generalizations that are created too early. I say it is often good to duplicate code until you have a better perspective on what it really makes sense to abstract, and have a better perspective on what a good implementation of generalized abstractions will look like to allow maximum flexibility later.
Design pattern junkies have given me some of my worst days. I don't even know if this addresses your post, but I agree with it. Hacking some terrible code right now.
Serious question, not a troll -- I've always felt like I'm missing some key point here.
Or getting the result of an expression into another expression, like
x = a+b
y = x*c
$300: $(A) $(B) ADD $(C) MUL
And the result would be at the top of the stack.
Not all assembly languages are created equal. ;-)
In the end, I wrote an incomplete Forth compiler for it, but I don't know where the floppies I wrote them to ended up.
C to C++: virtual methods, scope-delimited allocation, dynamic typing.
Of course, if a function definition is a pattern, does that mean that Lisp (chock-full of function definitions) is a lower-level language than Prolog (which doesn't need functions at all and therefore abstracts away that whole idea)?
There's a reason there's no way to express loops and variables in HTML, for example. That's the principle (or rule) of least power at work:
Take for instance context management (Common Lisp's `unwind-protect`, Python's `with`, C#'s `using`; it's also possible to express it using first-class functions, which is the approach taken by Smalltalk and Scheme, or using your objects' lifecycles, which is what C++'s RAII does). It's essential to ensure that a resource you're using for a limited time (a lock, a file, a transaction, …) is cleanly released when you don't need it anymore, even if an error occurs during usage.
In Java, most of the time this is done with stacks of try/catch/finally blocks which you're going to write again and again and again.
How do you abstract it? You can't, really. Well, you could use anonymous classes to emulate first-class functions in theory, but that has limitations of its own and it's not really supported by the wider Java community. And note that before Python introduced `with` or C# introduced `using`, they were pretty much in the same situation. (Theoretically, C# 3.0 could deprecate `using`, since it has anonymous functions; Python, on the other hand, can't, given its still-crippled lambdas.)
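The first-class-function approach mentioned above (the Smalltalk/Scheme style) can be sketched in a few lines of Python; the function name and argument shape here are my own invention, but the point is that the try/finally pattern gets written once instead of at every call site:

```python
def with_resource(acquire, release, body):
    """Acquire a resource, run body(resource), and release it even on error."""
    resource = acquire()
    try:
        return body(resource)
    finally:
        release(resource)

# Toy usage: the "resource" is just a string, and a log records the lifecycle.
log = []
result = with_resource(
    acquire=lambda: log.append("open") or "handle",
    release=lambda r: log.append("close"),
    body=lambda r: len(r),
)
print(result, log)  # 6 ['open', 'close']
```

Java before closures couldn't express this cleanly precisely because `body` has to be a value you can pass around.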