Scheming is Believing (2005) (googlepages.com)
50 points by DTrejo on Aug 21, 2009 | 20 comments


Long but great (like most of Yegge's essays, as far as I'm concerned)! This paragraph stood out for me:

"The problem is that types really are just a classification scheme to help us (and our tools) understand our semantic intent. They don't actually change the semantics; they just describe or explain the (intended) semantics. You can add type information to anything; in fact you can tag things until you're blue in the face. You can continue adding fine-grained distinctions to your code and data almost indefinitely, specializing on anything you feel like: time of day, what kind of people will be calling your APIs, anything you want."

edit: and later:

" Here's an example of what I mean by being able to tag things until you're blue in the face: Take a for-loop. It's got a type, of course: for-loop is a type of loop. It can also be viewed as a type of language construct that introduces a new scope. And it's a type that can fork program control-flow, and also a type that's introduced by a keyword and has at least two auxiliary keywords (break and continue).

You could even think of a for-loop as a type that has lots of types, as opposed to a construct like the break statement, which doesn't exhibit as much interesting and potentially type-worthy behavior. A type is just a description of something's properties and/or behavior, so you can really get carried away with overengineering your type declarations. In extreme cases, you can wind up with a separate Java interface for nearly every different method on a class. It's pretty clear why Python and Ruby are moving towards "duck typing", where you simply ask an object at runtime whether it responds to a particular message. If the object says yes, then voila — it's the right type. Case closed. "
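
The duck-typing bit is easy to see concretely. A minimal Python sketch (the class and function names are just made up for illustration):

    import sys

    class LogFile:
        """Not a real file, but it quacks like one."""
        def write(self, text):
            print("logging:", text)

    def report(sink):
        # No interface declaration: just ask at runtime whether the
        # object responds to the one message we actually send it.
        if not hasattr(sink, "write"):
            raise TypeError("sink does not respond to write()")
        sink.write("report complete\n")

    report(LogFile())    # our own class is "the right type"
    report(sys.stdout)   # so is a real file object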


Yes. However, static typing is quite interesting in Haskell. And signalling intent with types is a form of machine-checkable documentation.
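
Roughly what I mean, sketched with Python's optional annotations and an external checker such as mypy standing in for Haskell's compiler-checked signatures (the function is made up):

    def parse_port(raw: str) -> int:
        """The signature documents the intent: text in, integer out."""
        return int(raw)

    parse_port("8080")   # fine
    parse_port(8080)     # a type checker flags this call: int is not str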


Didn't know this one...

There are many Lisp-praising articles out there, and each one gives essentially the same reasons for Lisp being the perfect language. This one, however, despite being very long, seemed well above average to me. I think he was very convincing in pointing out precise shortcomings in the main languages he cited, something that is often only vaguely alluded to.


The machine interprets your program's symbols according to the language rules. Hence, all machines are interpreters. Some of the interpreting goes on in hardware, and some in software.

I remember having a similar epiphany: that all languages are always interpreted. I believe my next thought was "so what?". I suppose it might be a good thing to remember if implementing a new language.


Yes. But some languages get compiled before they get interpreted. For example Python (in the CPython implementation).

And there's a continuum between interpretation and compilation. Threaded interpreters, like those commonly used for Forth, are quite close to compilers.
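
You can watch the compile step happen in stock CPython with the standard dis module (exact opcode names vary by version):

    import dis

    def add(a, b):
        return a + b

    # CPython has already compiled the function body to bytecode; the
    # interpreter loop then executes these opcodes one by one.
    dis.dis(add)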


Well, maybe "reality" isn't interpreted :)


Ugh, why the gratuitous dig at Guido? I think he's done a fine job leading Python, and I've found the Python community to be particularly friendly. Sounds like there's some beef here.


Recently GVR said 'no' to tail-call optimization. Why? Check out his blog post on it:

http://neopythonic.blogspot.com/2009/04/final-words-on-tail-...

As far as I'm concerned, when he says "The one issue on which TCO advocates seem to agree with me is that TCO is a feature, not an optimization," it's complete bollocks. When your program blows up because you don't have TCO, that's a semantic failure, not the absence of a minor optimization.
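
Concretely, a minimal sketch against stock CPython (the function name is made up):

    import sys

    def countdown(n):
        if n == 0:
            return "done"
        return countdown(n - 1)   # a tail call, but CPython keeps every frame

    print(sys.getrecursionlimit())   # typically 1000
    print(countdown(100))            # fine
    print(countdown(100000))         # raises RecursionError (RuntimeError on
                                     # older Pythons): the program's behavior
                                     # changes, it doesn't just run slower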


I think that's what he was implying. As in, TCO is usually a major decision that goes into a language's design, not just a sprinkling of compiler optimizations.


So you'd call him frigid, distrustful, arrogant and unfriendly? That doesn't really follow for me.

Edit: You and Guido are in agreement. It's a "feature" in that it visibly affects behavior, whereas an "optimization" would merely make things faster.


I said his opinion is bollocks.

I'm claiming that using self-recursion just makes sense. The inability to have a recursive implementation of fib(n) is a loss in both expressiveness and readability. Recursion is fundamental to computer science, and is often the best way to express many types of operations. So, in my opinion, if Guido really wants Python to be simultaneously readable, expressive and pragmatic, TCO is necessary, because recursion just plain rocks sometimes.


What are you responding to? TCO could be the second coming of Christ and I'd still think Yegge was being too harsh.


I agree. Though fib(n) is among the worst examples to cite for your cause: the naive recursive fib isn't tail-recursive, so TCO wouldn't help it anyway.


This is from 2005. Please put old article dates in the title.


Why?


It's useful to know an article is four years old before you read a dozen pages of material and mentally file it all as current information.


HN


New to me.


Has anybody coined "Yeggism" yet? Exaggerated claims with a sliver of truth and a whole lot of hand-waving to back them up?

It's a great article; this is not a troll. But those are some big statements in there about, it seems, anyone and everyone. Glad no one treats Steve's rants as gospel... oh, wait...

;-)


Might as well call it an "Internetism". With Steve, though, you at least get style, articulation, a sliver of grace and as much verbosity as you can stand.



