
The "human compiler" bit never made sense to me. When I start to write something for the second time, I stop and make it a function (or a class, interface, library, or whatever's appropriate). "Don't repeat yourself" is good advice in any programming universe.



That was the point of the article. A design pattern is a sign that your programming language lacks an abstraction that would let you avoid repeating yourself.


Could you give me a concrete example? Say, a pattern that can't be abstracted away in assembly language, but can be abstracted away in C++?

Serious question, not a troll -- I've always felt like I'm missing some key point here.



WTF, I cannot believe that's not a joke.


For example, the "there aren't enough registers, so I put some data on the stack" pattern. You just don't have that in C++.

Or getting the result of an expression into another expression, like

    (a+b)*c
In assembly you have to write that as

    x = a+b
    y = x*c
In C# people regularly play the human compiler when defining getters & setters (or maybe their IDE does it for them, but the noise code is still on your screen). Or when writing out types everywhere the compiler could infer them (although that's better now with the var keyword).
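For illustration, here's the same boilerplate in Java (a made-up Point class), where there's no auto-property shorthand at all, so you type it every time:

    // Hand-written accessors: the "human compiler" at work.
    public class Point {
        private int x;
        private int y;

        public int getX() { return x; }
        public void setX(int x) { this.x = x; }

        public int getY() { return y; }
        public void setY(int y) { this.y = y; }
    }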


A minor point, but C# will generate simple getters and setters for you automatically with syntax like "type Property { get; private set; }". As soon as you want to do anything interesting in the getter or setter, however, you're back to playing human compiler.


During college I designed a processor that could do that. It would read like this (assuming A, B and C were constants representing memory addresses pointing to integers):

    $300: $(A) $(B) ADD $(C) MUL

And the result would be at the top of the stack.
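For anyone who hasn't used a stack machine: here's a minimal Java sketch of the same evaluation, with explicit push/pop calls standing in for the ADD and MUL opcodes (the operand values are made up):

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Postfix evaluation: operands are pushed, each operator pops its
    // arguments and pushes the result, so "a b ADD c MUL" computes
    // (a + b) * c without any named temporaries.
    public class StackMachine {
        public static void main(String[] args) {
            Deque<Integer> stack = new ArrayDeque<>();
            int a = 2, b = 3, c = 4;                 // made-up values
            stack.push(a);
            stack.push(b);
            stack.push(stack.pop() + stack.pop());   // ADD
            stack.push(c);
            stack.push(stack.pop() * stack.pop());   // MUL
            System.out.println(stack.peek());        // prints 20
        }
    }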

Not all assembly languages are created equal. ;-)


That has more in common with Forth than most assembly languages though.


That's what I liked about the machine. Once I realised that a microcoded stack machine could run something very close to Forth within the 4 months I had to finish the project, I knew I had to do it that way instead of a more conventional design (I took some implementation ideas and the assembly syntax from the 6502).

In the end, I wrote an incomplete Forth compiler for it, but I don't know where the floppies I saved it to ended up.


Assembly to C: procedures, loops, switch-case, expanded numeric constants (e.g. seconds in a week as 60*60*24*7 rather than the magic number 604800; see the snippet below).

C to C++: virtual methods, scope-delimited allocation, dynamic typing.
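On the constants point, the C-family idiom (Java shown here) is just a named, self-documenting expression the compiler folds for you:

    public class Time {
        // The compiler folds 60*60*24*7 to 604800; the source keeps
        // the derivation readable instead of a bare magic number.
        static final int SECONDS_PER_WEEK = 60 * 60 * 24 * 7;

        public static void main(String[] args) {
            System.out.println(SECONDS_PER_WEEK);   // 604800
        }
    }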


I think that's the key to my confusion here -- I wasn't thinking of language constructs like functions or loops as patterns; I was only counting things like "factory objects" or "visitors".

Of course, if a function definition is a pattern, does that mean that Lisp (chock-full of function definitions) is a lower-level language than Prolog (which doesn't need functions at all and therefore abstracts away that whole idea)?


Aren't all modern programming languages Turing complete?


Turing completeness is actually (1) easy to achieve, even accidentally, and (2) undesirable in many circumstances: it makes code hard for machines to analyse and process, and the semantics (what you really meant) get lost in the noise.

There's a reason there's no way to express loops and variables in HTML, for example. That's the principle (or rule) of least power at work:

http://www.w3.org/2001/tag/doc/leastPower.html

Also see:

http://en.wikipedia.org/wiki/Turing_tarpit


Turing machines are Turing complete.


Yes. But that's irrelevant. You need to think carefully about how a UTM runs other TMs' programs, about interpreters/compilers, and about what you're really doing when you write out design patterns by hand instead of having your language handle them.


Yes, but if your language doesn't provide a way to abstract over the repeated code, you have no choice but to write it over and over again. (Not all repetitions can be eliminated by defining a function or a class.) This is the problem that design patterns "solve".


Yes, but in most languages there's a point at which you cannot "make it a function". So you have to write it again and again and again.

Take for instance context management (Common Lisp's `unwind-protect`, Python's `with`, C#'s `using`; it can also be expressed with first-class functions, which is the approach Smalltalk and Scheme take, or with object lifecycles, which is what C++'s RAII does). It's essential to ensure that a resource you're using for a limited time (a lock, a file, a transaction, …) is cleanly released when you no longer need it, even if an error occurs during usage.

In Java, most of the time this is done with stacks of try/catch/finally blocks, which you're going to write again and again and again.
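Concretely, the ceremony looks something like this at every use site (a sketch; the file name is made up):

    import java.io.FileReader;
    import java.io.IOException;

    public class ReadOnce {
        public static void main(String[] args) throws IOException {
            // Acquire, use, release: everything below this comment is
            // the "pattern" you re-type by hand at every use site.
            FileReader reader = new FileReader("data.txt");
            try {
                System.out.println(reader.read());
            } finally {
                reader.close();   // runs even if read() throws
            }
        }
    }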

How do you abstract it? You can't. Well, you could use anonymous classes to emulate first-class functions in theory, but that has limitations of its own and it's not really supported by the wider Java community. And note that before Python introduced `with` and C# introduced `using`, they were in pretty much the same situation (theoretically, C# 3.0 could deprecate `using` since it has anonymous functions; Python, on the other hand, can't, given its still-crippled lambdas).
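The anonymous-class workaround would look something like this (the With/Body names are made up for the sketch):

    import java.io.FileReader;
    import java.io.IOException;

    // Hypothetical helper that owns the try/finally once; callers
    // pass the body in as an anonymous class.
    public class With {
        interface Body {
            void run(FileReader reader) throws IOException;
        }

        static void file(String path, Body body) throws IOException {
            FileReader reader = new FileReader(path);
            try {
                body.run(reader);
            } finally {
                reader.close();
            }
        }

        public static void main(String[] args) throws IOException {
            With.file("data.txt", new Body() {
                public void run(FileReader reader) throws IOException {
                    System.out.println(reader.read());
                }
            });
        }
    }

It works, but every call site pays the anonymous-class syntax tax, which is part of why it never caught on.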



