Code that has repetition will not be as compact as code in which every symbol is new and original. The latter requires much more thought to both write and understand, and so is not necessarily desirable.
If we use the term pattern loosely, any programming involves a series of nested patterns with variations. That is how it should be. That is essentially how good writing works as well.
So patterns and repetition are actually good if they make the original parts clear. As far as language goes, then, a language that lets you include what you need to include and exclude what is irrelevant seems desirable.
Any program that expresses your ideas in the most compact fashion possible will be incomprehensible to anyone else, and incomprehensible to you next week.
The problem with your argument about clarity is that clarity is very much in the eye of the beholder. To someone who doesn't know about first-class functions, code that defines a class to hold a function and passes instances of a "strategy pattern" object around is probably going to be clearer. To someone who does know about first-class functions, such code isn't "clear" at all, it's silly, and the name "strategy pattern" is ridiculous.
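To make that contrast concrete, here's a minimal sketch in Python (all names are my own invention, not from any particular codebase): the same behavior written as a strategy-pattern class hierarchy and as a plain first-class function.

```python
# Strategy-pattern style: a class exists only to carry one method.
class DiscountStrategy:
    def apply(self, price):
        raise NotImplementedError

class TenOff(DiscountStrategy):
    def apply(self, price):
        return price - 10

def checkout_with_strategy(price, strategy):
    return strategy.apply(price)

# First-class-function style: just pass the function itself.
def ten_off(price):
    return price - 10

def checkout(price, discount):
    return discount(price)

print(checkout_with_strategy(100, TenOff()))  # 90
print(checkout(100, ten_off))                 # 90
```

Which version reads as "clearer" depends entirely on which idiom the reader already knows.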
Okay here I go:
Every line of code introduces complexity, every abstraction is leaky, and every interface encodes certain assumptions. As you work your way up into application level functionality, reusability of code falls off a cliff.
At some point a possible abstraction will become apparent that would reduce some redundant code. What are some reasons you might not want to go ahead and refactor?
Reason #1 - It might not be worth the time to parameterize the method. Even if it's just a couple of parameters, you will need to write that code vs a copy-and-paste job. Depending on the circumstances this may or may not be worth it--what's the likelihood of reuse? How much code are you actually removing?
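A toy sketch of that trade-off (hypothetical functions): two duplicated one-liners versus the parameterized version that replaces them. The parameterized form is barely shorter, and every caller now has to know what `sep` means.

```python
# Duplicated versions: trivially quick to write, readable in isolation.
def report_header_csv(title):
    return f"{title},generated\n"

def report_header_tsv(title):
    return f"{title}\tgenerated\n"

# Parameterized version: one fewer function, one more thing to understand.
def report_header(title, sep=","):
    return f"{title}{sep}generated\n"
```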
Reason #2 - Maybe it won't be more readable. Just having less code is not more readable per se. Obviously this is somewhat subjective, but hey, we're all human. The new abstraction may not be conceptually useful in the wider context of the application. At a minimum it is going to require the incoming developer to look at one additional place to trace the code flow, and this applies even if the next programmer has all the skills of the first (or is the same person).
Reason #3 - There might not be sufficient information to craft the right abstraction. Even if you know you are likely to reuse some element, if you don't build it with the right parameters then it will need to be refactored later, possibly even scrapped entirely. If you have a good instinct about this you can hedge your bet by simply duplicating some code for now.
Reason #4 - Several elements may be similar yet unrelated. Even if you have many things that are exactly the same, they may be implementations of different ideas that are moving in different directions. An abstraction of these things ends up being a form of coupling. This issue comes up a lot in testing, mainly because testing is much more open-ended than business logic: the sky is the limit when you are deciding what and how to test.
Reason #5 - The variability may just be too much. A parameter or two is the backbone of efficient abstraction. But what if a common task has much more variability, requiring 10 or 20 parameters, or maybe just 5 parameters with complex interactions? Obviously there is going to be a line somewhere, and real business logic can get infinitely close to either side of it.
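As an illustration (again with invented names), a "general" formatter whose flags interact forces every caller to reason about the whole parameter space, while two boring duplicated functions are each trivial to trust:

```python
# One "general" renderer: header and quote interact with sep,
# so readers must consider every combination.
def render(rows, sep=",", header=None, quote=False):
    out = []
    if header is not None:
        out.append(sep.join(header))
    for row in rows:
        cells = [f'"{c}"' if quote else str(c) for c in row]
        out.append(sep.join(cells))
    return "\n".join(out)

# Two duplicated functions: more lines, but no parameter space to reason about.
def render_csv(rows):
    return "\n".join(",".join(str(c) for c in row) for row in rows)

def render_tsv(rows):
    return "\n".join("\t".join(str(c) for c in row) for row in rows)
```

Scale the first version up to 10 or 20 interacting parameters and the duplicated versions start looking very good.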
As skilled hackers, I think it's too easy for us to see the mistakes of amateurs and beginners who miss obvious opportunities for abstraction. LISPers in particular are keen to notice when a blub language like Java requires especially obtuse constructs and duplication. But those are straw men; a language like Java is just too easy a target.
The truth is that a balance must be struck lest we become architecture astronauts.
Now, all that said, I think the beginning of the OA is completely right. The notion of a pattern being unabstractable is nonsense in higher level languages. I think that's an artifact of too much Java causing people to internalize false dichotomies about what code can do. Meta-programming opens all doors.
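As a tiny illustration of that point (Python standing in for richer macro systems, with made-up names): the one-getter-per-field boilerplate that a blub language forces you to hand-write can simply be generated, so the "pattern" stops existing as repeated code.

```python
# Generate accessor methods instead of hand-writing one per field.
def with_getters(*fields):
    def decorate(cls):
        for f in fields:
            # Default argument pins the current field name in the closure.
            def getter(self, _f=f):
                return getattr(self, "_" + _f)
            setattr(cls, "get_" + f, getter)
        return cls
    return decorate

@with_getters("name", "price")
class Product:
    def __init__(self, name, price):
        self._name = name
        self._price = price

p = Product("widget", 9.99)
print(p.get_name())  # widget
```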
On the other hand, the idea that a design pattern is a hint that a language is not powerful enough is equally ridiculous. It only makes sense if you look at patterns that emerged in blub languages and then observe how their typical implementations become unnecessary when you have high-level capabilities like macros. Even if you are coding in the theoretically most powerful programming language, you are still going to run into patterns of things that don't have a good abstraction, for the aforementioned reasons.
I've spent most of my consulting career crash-course hacking (no time/budget for a proper rewrite, incurring more technical debt... I do what I can to improve things structurally) around terrible, terrible pattern-heavy libraries that sought to remove every single line of duplicate code and ended up abstracting out all the wrong things. I have literally spent man-years dealing with bad object<->HTML mappers.
Code duplication can be overwhelmingly preferable to generalizations created too early. It is often good to duplicate code until you have a better perspective on what it really makes sense to abstract, and on what a good implementation of the generalized abstraction will look like, so that you keep maximum flexibility later.
Design pattern junkies have given me some of my worst days. I don't even know if this addresses your post, but I agree with it. Hacking some terrible code right now.