Danny Hillis tells us why Computer Science is no good. (longnow.org)
16 points by rglovejoy on May 16, 2008 | 15 comments



The idea of there being an indigenous "higher-level" theory of computation is an important one. But I doubt very much it would look like physics, or have the benefits of physics that Hillis thinks are important.

It's much more likely to look like biology, e.g. nature's computers. Most models in physics are not inherently based on the idea of information and computation; in fact, the functions they are based on are merely primitive recursive, making them unsuitable for describing fully general recursive computational processes.
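A standard illustration of that gap (my example, not from the article): the Ackermann function is fully general recursive -- total and computable -- yet provably not primitive recursive, since it eventually outgrows every primitive recursive function. A direct Python transcription:

    import sys
    sys.setrecursionlimit(100_000)  # the naive recursion gets deep quickly

    def ackermann(m: int, n: int) -> int:
        # Total and computable, but not primitive recursive.
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m - 1, n - 1))

    print(ackermann(2, 3))  # 9
    print(ackermann(3, 3))  # 61; ackermann(4, 2) already has 19,729 digits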

Of course, biology is also noted for its lack of higher theory, so when I make the above claim, I mean that when such a theory exists it will address both of these issues.


Yeah, but even biological computers have to observe the physical laws of nature, right? The hardware of my brain is basically a parallel computer with electricity flowing through it. Maybe hardware should be influenced by physics, and software (neural networks, machine learning, etc.) by biology/cognitive science.


It's really refreshing to see someone say 'computer science is no good 'cuz it's not engineering', instead of 'computer science is no good 'cuz it didn't help me get a job as a website-janitor.'


'...memory locations, which are just wires turned sideways in time'

I hope I'm not the only one who finds this hilarious.


It struck me as quite clever (perhaps too clever, which seems to be a common thread in these recent Thinking Machines articles...).

His point, if I'm reading it correctly, is that we treat wires as an ideal and instantaneous transfer medium. In reality, however, wires take up physical space, so there are physical limits to the number and length of wires you can actually have. The more wires you use, the longer they need to be to route around each other, hence more time needed to transfer the data, breaking the abstraction.

Analogously, a memory location ideally represents something you can dump data into at one time and extract instantaneously at a later time. In reality, however, processors need to address memory via the bus. With a single bus, processors have to queue their requests -- resulting in longer "time wires" and a slower computer. The alternative is to have multiple memories/buses (more "time wires"), but since buses are constrained by physical limits, the number of "time wires" you can have is constrained just like the number of real wires. So our abstraction of memory is broken -- the time it takes to access a memory location depends on how many locations there are.
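A back-of-the-envelope version of that argument (my sketch, not the parent's): pack N memory cells at bounded density into d-dimensional space, and the farthest cell sits at a distance r where r^d ~ N, so r = Theta(N^(1/d)). With signal speed bounded by c, worst-case access time is Omega(N^(1/3) / c) in 3-space -- not the O(1) the random-access abstraction promises.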

You can see a similar result in today's machines, in the behavior of cache locality: if your algorithm tries to access too much data (many "time wires"), it overflows the cache and you're forced to wait (use longer "time wires"). Algorithms designed to exploit cache effects are sometimes an order of magnitude faster, illustrating the leakiness of our memory abstraction.
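If you want to see that effect on your own machine, here's a quick sketch (numpy is just my choice for the illustration; exact numbers will vary by hardware): summing the same ~128 MB array along cache-friendly rows versus cache-hostile columns.

    import time
    import numpy as np

    n = 4096
    a = np.random.rand(n, n)  # ~128 MB of float64, far bigger than any cache

    t0 = time.perf_counter()
    row_sum = sum(a[i, :].sum() for i in range(n))  # sequential, cache-friendly
    t1 = time.perf_counter()
    col_sum = sum(a[:, j].sum() for j in range(n))  # strided, cache-hostile
    t2 = time.perf_counter()

    print(f"row-wise:    {t1 - t0:.3f}s")
    print(f"column-wise: {t2 - t1:.3f}s")  # typically several times slower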


It struck me as quite clever (perhaps too clever, which seems to be a common thread in these recent Thinking Machines articles...).

Just to put "recent" into perspective, this article is from 1981 :)


Not to mention you still have to locate the things in 3space.


Up until "connection machine," I was thinking that this seemed naive even for an MIT AI researcher.


I feel obligated to say that Computer Science is not about the machines that do the computation but about the computation process itself.

If we are talking about the physical devices that do the computation, it's within the field of Engineering.

Daniel Hillis is an incredibly clever guy, and the Connection Machines were probably the most beautiful computers ever built. Not only that, they were perhaps the last computers ever built where you could tell what they were doing just by looking at them.

I would love to see something like that on a desktop PC. Maybe not tasks or threads (and, certainly, not processors), but memory blocks being accessed, with the access patterns translated into blinking lights.


I'm far from an expert, or even a semi-amateur, in computer design, but I think Hillis is arguing that physical realities and constraints should deeply influence the theoretical computational process itself, regardless of implementation details. I don't see this as tainting or complicating the pure, theoretical aspects of Computer Science with messy implementation details that will change in two years anyway (presumably, hard realities observed through Physics will stick around for a long, long time, until/unless we discover them to be different). Rather, I see this as another place where we can use Nature as a design guide.


Compsci is closer to Math than to Engineering. While you can study nature to find useful algorithms and approaches to solving real-world problems, the fact that something is not useful or solves no real problem has never stopped mathematicians from exploring an idea. The very distinction between real problems and less real ones is totally foreign to mathematicians. ;-)

It usually works out that the useless math of one day becomes very useful some later day, often in an incredibly unexpected way. It is also common for physicists to discover a use well before any engineers start poking around with it.

As for observing nature, it's also common for engineers to find ways to control and use something well before physicists have any clue as to how or why it works.


And, BTW, the essay is dated May, 1981.


1981. What is that in Dogbert years?



Eh. This isn't really very interesting, anyway. 27 years of screwing around with physics still hasn't produced any offspring (quantum computing doesn't count yet).

yawn




