Long but great (like most of Yegge's essays, as far as I'm concerned)! This paragraph stood out for me:
"The problem is that types really are just a classification scheme to help us (and our tools) understand our semantic intent. They don't actually change the semantics; they just describe or explain the (intended) semantics. You can add type information to anything; in fact you can tag things until you're blue in the face. You can continue adding fine-grained distinctions to your code and data almost indefinitely, specializing on anything you feel like: time of day, what kind of people will be calling your APIs, anything you want."
edit: and later:
" Here's an example of what I mean by being able to tag things until you're blue in the face: Take a for-loop. It's got a type, of course: for-loop is a type of loop. It can also be viewed as a type of language construct that introduces a new scope. And it's a type that can fork program control-flow, and also a type that's introduced by a keyword and has at least two auxiliary keywords (break and continue).
You could even think of a for-loop as a type that has lots of types, as opposed to a construct like the break statement, which doesn't exhibit as much interesting and potentially type-worthy behavior. A type is just a description of something's properties and/or behavior, so you can really get carried away with overengineering your type declarations. In extreme cases, you can wind up with a separate Java interface for nearly every different method on a class. It's pretty clear why Python and Ruby are moving towards "duck typing", where you simply ask an object at runtime whether it responds to a particular message. If the object says yes, then voila — it's the right type. Case closed. "
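Just to make the duck-typing bit concrete, here's a minimal Python sketch of what "ask the object whether it responds to a particular message" looks like in practice (the Duck class and make_it_quack function are made up purely for illustration):

    class Duck:
        def quack(self):
            return "Quack!"

    def make_it_quack(obj):
        # No interface declaration, no isinstance check: just ask at runtime
        # whether the object responds to the message we care about.
        if hasattr(obj, "quack") and callable(obj.quack):
            return obj.quack()
        raise TypeError("object does not respond to 'quack'")

    print(make_it_quack(Duck()))  # Quack!
    # make_it_quack(42)           # TypeError: object does not respond to 'quack'

Anything that happens to define quack() passes; everything else fails at the call site rather than at a type declaration.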
There are many Lisp-praising articles out there, and each one gives essentially the same reasons for Lisp being the perfect language. This one, however, despite being very long, struck me as well above average. I think he was very convincing in pointing out precise shortcomings in the main languages he cited, something that is often only vaguely alluded to.
"The machine interprets your program's symbols according to the language rules. Hence, all machines are interpreters. Some of the interpreting goes on in hardware, and some in software."
I remember having a similar epiphany: that all languages are always interpreted. I believe my next thought was "so what?". I suppose it might be a good thing to remember if implementing a new language.
Ugh, why the gratuitous dig at Guido? I think he's done a fine job leading Python, and I've found the Python community to be particularly friendly. Sounds like there's some beef here.
As far as I'm concerned, when he says, "The one issue on which TCO advocates seem to agree with me is that TCO is a feature, not an optimization.", it's complete bollocks. When your program blows up because you don't have TCO, that's a semantic failure--not a lack of a little optimization.
I think that's what he was implying. As in, TCO is usually a major decision that goes into a language's design, not just some sprinkling of compiler optimizations.
I'm claiming that using self-recursion just makes sense. The inability to have a recursive implementation of fib(n) is a loss in both expressiveness and readability. Recursion is fundamental to computer science, and is often the best way to express many types of operations. So, in my opinion, if Guido really wants Python to be simultaneously readable, expressive and pragmatic, TCO is necessary, because recursion just plain rocks sometimes.
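For what it's worth, here's a hedged Python sketch of why the lack of TCO bites (the fib helper and its argument names are just illustrative): the function is written in tail-call form, but CPython keeps a stack frame per call, so a large n hits the recursion limit instead of running in constant stack space.

    import sys

    def fib(n, a=0, b=1):
        # Tail call in form only: CPython does not eliminate it, so each
        # recursive step still consumes a stack frame.
        if n == 0:
            return a
        return fib(n - 1, b, a + b)

    print(fib(30))                  # 832040
    print(sys.getrecursionlimit())  # typically 1000
    # fib(100000)                   # RecursionError without TCO

With TCO this recursive version would run in constant stack space, just like the hand-written loop; without it, it's a ticking stack overflow.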
Has anybody coined "Yeggism" yet? Exaggerated claims with a sliver of truth and a whole lot of hand-waving to back them up?
It's a great article, and this is not a troll. But those are some big statements in there about, it seems, anyone and everyone. Glad no one treats Steve's rants as gospel... oh, wait...
Might as well call it an "Internetism". With Steve, though, you at least get style, articulation, a sliver of grace and as much verbosity as you can stand.
"The problem is that types really are just a classification scheme to help us (and our tools) understand our semantic intent. They don't actually change the semantics; they just describe or explain the (intended) semantics. You can add type information to anything; in fact you can tag things until you're blue in the face. You can continue adding fine-grained distinctions to your code and data almost indefinitely, specializing on anything you feel like: time of day, what kind of people will be calling your APIs, anything you want."
edit: and later:
" Here's an example of what I mean by being able to tag things until you're blue in the face: Take a for-loop. It's got a type, of course: for-loop is a type of loop. It can also be viewed as a type of language construct that introduces a new scope. And it's a type that can fork program control-flow, and also a type that's introduced by a keyword and has at least two auxiliary keywords (break and continue).
You could even think of a for-loop as a type that has lots of types, as opposed to a construct like the break statement, which doesn't exhibit as much interesting and potentially type-worthy behavior. A type is just a description of something's properties and/or behavior, so you can really get carried away with overengineering your type declarations. In extreme cases, you can wind up with a separate Java interface for nearly every different method on a class. It's pretty clear why Python and Ruby are moving towards "duck typing", where you simply ask an object at runtime whether it responds to a particular message. If the object says yes, then voila — it's the right type. Case closed. "