
One of the things that originally caught my attention in a PG essay is a point he made about design patterns: if you're writing the same code over and over again, you're being a human compiler. I distinctly remember the shock of reading that; it was like someone poured a bucket of ice water over my head. It wasn't just that I recognized it as true. I recognized it as indicating the existence of an alternate programming universe that I knew nothing about.



It's the concluding paragraph of "Revenge of the Nerds" http://www.paulgraham.com/icad.html

I completely failed to recognize its significance when I first read that essay. Thanks for bringing it to my attention.

"This practice is not only common, but institutionalized. For example, in the OO world you hear a good deal about "patterns". I wonder if these patterns are not sometimes evidence of case (c), the human compiler, at work. When I see patterns in my programs, I consider it a sign of trouble."

-----


This is one of his essays that I often reread, as there is a lot in it. Thus, it's easy to miss that particular aspect of the last paragraph.

-----


Just a counter-point: in human communication, redundancy is good. There's the saying "Tell 'em what you're gonna tell 'em; tell 'em; tell 'em what you told 'em" and the concepts of "topic sentences" and "signposting" in essays (such as pg's).

For write-only code, communication to humans doesn't matter. But if someone else is going to read it, or if you are going to read it (for example, to debug it), then it matters.

Boilerplate - repeated pattern - helps orient a reader. The more familiar it is, the easier it is to understand. Every layer of abstraction increases the difficulty in understanding. The writing becomes like a dense mathematical article, instead of a newspaper article.

I believe, without proof, that this attitude is one of the reasons that FP has not become popular, despite being older than Java, older than C, and older than COBOL: it encourages a programming style that is hard to read, because it strives to eliminate redundancy.

That said, my comp sci master's was about non-redundancy. Absolute non-redundancy is beautiful; beautiful in the sense that it is truth. That's all you need to know.

-----


I think you make a good psychological point. Many people prefer to read and write redundant code. This may well explain why FP is not more popular. But this attitude to writing complex systems leads to disaster, which is why most complex systems are so unbelievably bad.

At the code snippet level, 40 lines of redundant code may well be easier to read than 2 lines of, say, function composition. And 100 lines of familiar redundancy will feel much easier to work with than a small amount of strange-looking symbolic compactness. But anyone who believes that a million lines of redundant code are easier to work with than ten thousand lines of compact code (I'm making these numbers up) is deeply deluded. This is how software projects end up with hundreds of programmers frozen in concrete. (People think of code as a commodity, as if it comes in sheet rolls that you cut enough of until your project is complete. This is a fundamental mistake.)

The root issue is that the number of programmers content to crank out redundant code is at least an order of magnitude (maybe two) greater than the number who are capable of working effectively with abstraction.

If that's correct, then there's a market opportunity in it. A small startup with programmers who do know how to program in more powerful languages (FP or otherwise) ought to be able to achieve very ambitious things compared to large teams working with weaker tools. At some point, the quantitative advantage becomes a qualitative one. Small teams are able to do things that large teams simply can't, and small codebases can be worked with in ways that large codebases simply can't.

-----


Nice point that redundancy, despite its readability, doesn't scale; I agree that abstractions are the answer, and that this schism creates a market opportunity.

This market opportunity is traditionally exploited by creating abstractions for other developers to use: a database; a language; a library; an OS.

The abstraction is sold many times, with each sale enabling the recipient to create more ambitious applications. This SOTSOG (standing on the shoulders of giants) leverages those few who are capable of working effectively with abstraction.

-----


> Boilerplate - repeated pattern - helps orient a reader. The more familiar it is, the easier it is to understand. Every layer of abstraction increases the difficulty in understanding.

I really, really think that there's something about learning imperative languages that breaks our brains in some way, because I've heard people argue that

    theSum = 0
    theArray.each do |n|
      theSum += n
    end
is easier to read than

    theSum = theArray.sum
which seems utterly crazy.

Even worse, I've had people take correct functional-style code and make it "more readable" by expanding it out into a bunch of boilerplate, and fail to notice the typos and logic errors they introduced in the process. People don't actually read boilerplate code; they do a visual pattern match on it. If the pattern is close enough, they accept the code as correct even if there's a small error that they didn't notice.
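A contrived Python sketch of that failure mode (both function names are invented): the expanded version scans as the familiar summing loop, and the off-by-one hides right inside the boilerplate.

    def total_expanded(values):
        # Reads like the standard accumulate-in-a-loop boilerplate...
        the_sum = 0
        for i in range(1, len(values)):  # ...but it silently skips values[0]
            the_sum += values[i]
        return the_sum

    def total_direct(values):
        # Nothing boilerplate-shaped here for a bug to hide behind
        return sum(values)

    print(total_expanded([1, 2, 3]))  # 5, wrong, and easy to skim past
    print(total_direct([1, 2, 3]))    # 6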

-----


Yes, exactly right! Have you ever read a book for the nth time, and noticed a typo for the first time?

Your point makes me think that Java's boilerplate + modern IDEs are really onto something. They automatically add the boilerplate for you, so it's there for you to pattern-match on, without your having to write it correctly. This helps me understand their extreme popularity.

-----


The "human compiler" bit never made sense to me. When I start to write something for the second time, I stop and make it a function (or a class, interface, library, or whatever's appropriate). "Don't repeat yourself" is good advice in any programming universe.

-----


That was the point of the article. A design pattern is a sign that your programming language lacks an abstraction, leaving you no choice but to repeat yourself.

-----


Could you give me a concrete example? Say, a pattern that can't be abstracted away in assembly language, but can be abstracted away in C++?

Serious question, not a troll -- I've always felt like I'm missing some key point here.

-----


An example of patterns gone insane http://ws.apache.org/xmlrpc/apidocs/org/apache/xmlrpc/server...

-----


WTF, I cannot believe that's not a joke.

-----


For example, the "there aren't enough registers so I put some data on the stack" pattern. You just don't have that in C++.

Or getting the result of an expression into another expression, like

    (a+b)*c
You have to write that as

    x = a+b
    y = x*c
In C#, people regularly play the human compiler when defining getters and setters (or maybe their IDE does it, but you still have the noise code on your screen), or when mentioning types everywhere the compiler could infer them (although it's better now with the var keyword).

-----


A minor point, but C# will generate simple getters and setters for you automatically with syntax like "Type Property { get; private set; }". As soon as you want to do anything interesting on get or set, however, you're back to playing human compiler.

-----


During college I designed a processor that could do that. It would read like this (assuming A, B and C were constants representing memory addresses pointing to integers):

$300: $(A) $(B) ADD $(C) MUL

And the result would be at the top of the stack.

Not all assembly languages are created equal. ;-)

-----


That has more in common with Forth than most assembly languages though.

-----


That's what I liked about the machine. When I realised that, if I designed a microcoded stack machine, I could make it run something very close to Forth in the 4 months I had to finish it, I knew I had to do it that way instead of a more conventional design (I took some implementation ideas and the assembly syntax from the 6502).

In the end, I wrote an incomplete Forth compiler for it, but I don't know where the floppies I wrote it to ended up.

-----


Assembly to C: procedures, loops, switch-case, expanded numeric constants (e.g. seconds in a week as 60*60*24*7 rather than 604800).

C to C++: virtual methods, scope-delimited allocation, dynamic typing.

-----


I think that's the key to my confusion here -- I wasn't thinking of language constructs like functions or loops as patterns, I was only including things like "factory objects" or "visitors".

Of course, if a function definition is a pattern, does that mean that Lisp (chock-full of function definitions) is a lower-level language than Prolog (which doesn't need functions at all and therefore abstracts away that whole idea)?

-----


Aren't all modern programming languages Turing complete?

-----


Turing completeness is actually 1. easy to achieve (even accidentally) and 2. undesirable in many circumstances (it makes code hard for machines to analyse and process, and the semantics, what you really meant, get lost in the noise).

There's a reason there's no way to express loops and variables in HTML, for example. That's the principle (or rule) of least power at work:

http://www.w3.org/2001/tag/doc/leastPower.html

Also see:

http://en.wikipedia.org/wiki/Turing_tarpit

-----


Turing machines are Turing complete.

-----


Yes. But that's irrelevant. You need to think carefully about how a UTM runs other TMs' programs, about interpreters/compilers, and about what one is doing when one writes out design patterns by hand instead of having the language handle them.

-----


Yes, but if your language doesn't provide a way to abstract over the repeated code, you have no choice but to write it over and over again. (Not all repetitions can be eliminated by defining a function or a class.) This is the problem that design patterns "solve".

-----


Yes, but in most languages there's a point at which you cannot "make it a function". So you have to write it again and again and again.

Take context management, for instance (Common Lisp's `unwind-protect`, Python's `with`, C#'s `using`; it's also possible to express it using first-class functions, which is the approach taken by Smalltalk and Scheme, or using your objects' lifecycles, which is what C++'s RAII does). It's essential to ensure that a resource you're using for a limited time (a lock, a file, a transaction, …) is cleanly released when you don't need it anymore, even if an error occurs during usage.

In Java, most of the time this is done with stacks of try/catch/finally blocks, which you're going to write again and again and again.

How do you abstract it? You can't. Well, you could use anonymous classes to emulate first-class functions in theory, but they have limitations of their own and the approach isn't really embraced by the wider Java community. And note that before Python introduced `with` and C# introduced `using`, they were pretty much in the same situation. (Theoretically, C# 3.0 could deprecate `using` since it has anonymous functions; Python, on the other hand, can't, given its still-crippled lambdas.)
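To make that concrete, here's a rough Python sketch of the same idea (the `held` helper and `do_work` are made-up names): the try/finally block is the shape you would be re-typing at every call site in Java, while a context manager lets you write that shape once.

    import threading
    from contextlib import contextmanager

    lock = threading.Lock()

    def do_work():
        print("working under the lock")

    # The shape you hand-write at every call site in a language without
    # with/using: Java's stacked try/finally blocks look like this.
    lock.acquire()
    try:
        do_work()
    finally:
        lock.release()

    # Define the acquire/release shape once...
    @contextmanager
    def held(l):
        l.acquire()
        try:
            yield l
        finally:
            l.release()

    # ...and every call site shrinks to the part that varies.
    with held(lock):
        do_work()

(Python's threading.Lock already supports `with lock:` directly; the `held` wrapper is only there to show how the abstraction gets defined once.)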

-----


Using a pattern is writing more or less the same thing over-and-over, with variations. That isn't necessarily bad or good in and of itself.

Code that has repetition will not be as compact as code in which every symbol is new and original. The latter will require much more thought to both write and understand, and so is not necessarily desirable.

If we use the term pattern loosely, any programming involves a series of nested patterns with variations. That is how it should be. That is essentially how good writing works as well.

So, actually, patterns and repetition are good if they make the original parts clear. As far as language goes, then, it seems like a language which allows you to include what you need to include and exclude what is irrelevant would be desirable.

Any program that expresses your ideas in the most compact fashion possible will be incomprehensible to anyone else, and incomprehensible to you next week.

-----


I never thought I'd see someone argue that writing the same thing over and over "isn't necessarily bad". Of course you did hedge by saying "with variations". But that's the whole point about design patterns vs. higher-level languages. If a language allows you to factor out the sameness and deal only with the variations, it's a bit perverse to want to keep doing it the lower-level way.

The problem with your argument about clarity is that clarity is very much in the eye of the beholder. To someone who doesn't know about first-class functions, code that defines a class to hold a function and passes instances of a "strategy pattern" object around is probably going to be clearer. To someone who does know about first-class functions, such code isn't "clear" at all, it's silly, and the name "strategy pattern" is ridiculous.
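A small Python sketch of that contrast, with invented names: the strategy class exists only to carry a single function, which a language with first-class functions lets you pass directly.

    # The strategy-pattern version: a class whose only job is to hold one function.
    class DiscountStrategy:
        def apply(self, price):
            raise NotImplementedError

    class TenPercentOff(DiscountStrategy):
        def apply(self, price):
            return price * 0.9

    def checkout_with_strategy(price, strategy):
        return strategy.apply(price)

    # With first-class functions, the "strategy" is just the function itself.
    def ten_percent_off(price):
        return price * 0.9

    def checkout(price, discount):
        return discount(price)

    print(checkout_with_strategy(100, TenPercentOff()))  # 90.0
    print(checkout(100, ten_percent_off))                # 90.0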

-----


I never thought I'd see someone argue that writing the same thing over and over "isn't necessarily bad"

Okay here I go:

Every line of code introduces complexity, every abstraction is leaky, and every interface encodes certain assumptions. As you work your way up into application level functionality, reusability of code falls off a cliff.

At some point a possible abstraction will become apparent that would reduce some redundant code. What are some reasons you might not want to go ahead and refactor?

Reason #1 - It might not be worth the time to parameterize the method. Even if it's just a couple of parameters, you will need to write that code versus doing a copy-and-paste job. Depending on the circumstances this may or may not be worth it: what's the likelihood of reuse? How much code are you actually removing?

Reason #2 - Maybe it won't be more readable. Just having less code is not more readable per se. Obviously this is somewhat subjective, but hey, we're all human. The new abstraction may not be conceptually useful in the wider context of the application. At a minimum it is going to require the incoming developer to look at one additional place to trace the code flow, and this applies even if the next programmer has all the skills of the first programmer (or is the same person).

Reason #3 - There might not be sufficient information to craft the right abstraction. Even if you know you are likely to reuse some element, if you don't build it with the right parameters then it will need to be refactored later, possibly even scrapped entirely. If you have a good instinct about this you can hedge your bet by simply duplicating some code for now.

Reason #4 - Several elements may be similar yet unrelated. Even if you have many things that are exactly the same, they may be implementations of different ideas that are moving in different directions. An abstraction of these things ends up being a form of coupling. This issue comes up in a lot of testing, mainly because testing is much more open ended than business logic since the sky is the limit when you are deciding what and how to test.

Reason #5 - The variability may just be too much. Having a parameter or two is the backbone of efficient abstraction. However, what if a common task has much more variability, requiring 10 or 20 parameters, or maybe just 5 parameters with complex interactions? Obviously there is going to be a line somewhere, and real business logic can get infinitely close to either side of that line.

As skilled hackers I think it's too easy to see the mistakes of amateurs and beginners who miss obvious opportunities for abstraction. LISPers in particular are keen to notice when a blub language like Java requires especially obtuse constructs and duplication. However those are just strawmen. A language like Java is just too easy a target.

The truth is that a balance must be struck lest we become architecture astronauts.

Now, all that said, I think the beginning of the original article is completely right. The notion of a pattern being unabstractable is nonsense in higher-level languages. I think that's an artifact of too much Java causing people to internalize false dichotomies about what code can do. Meta-programming opens all doors.

On the other hand, the idea that a design pattern is a hint that a language is not powerful enough is equally ridiculous. It only makes sense if you look at patterns that emerged for blub languages and then observe that their typical implementations make no sense once you have high-level capabilities like macros. Even if you are coding in the theoretically most powerful programming language, you are still going to run into patterns of things that don't have a good abstraction, for the aforementioned reasons.

-----


Woe unto him who has to work with the other guy's high-level abstractions that are inflexible, internally complex, only fit the limited set of uses encountered during the initial development, and are not easily extensible for additional features.

I've spent most of my consulting career crash-course hacking (no time/budget for a proper rewrite, incurring more technical debt... I do what I can to improve things structurally) around terrible, terrible pattern-heavy libraries that sought to remove every single line of duplicate code and ended up abstracting out all the wrong things. I have literally spent man-years dealing with bad object<->HTML mappers.

Code duplication can be overwhelmingly preferable to generalizations that are created too early. I say it is often good to duplicate code until you have a better perspective on what it really makes sense to abstract, and on what a good implementation of those generalized abstractions should look like to allow maximum flexibility later.

Design pattern junkies have given me some of my worst days. I don't even know if this addresses your post, but I agree with it. Hacking some terrible code right now.

-----


Defining abstractions is a form of optimization: it is the optimization of code readability. "Never optimize without profiling." "Premature optimization is the root of all evil."

-----


You haven't shown any of the things you've claimed to show; all you've given are (reasonable) reasons why not every-single-abstraction-is-a-good-idea-no-matter-what, a position nobody intelligent would ever take and nobody here actually took. So it's a little ironic that you blithely declare the arguments in the rest of the thread "strawmen".

-----



