Hacker News
Why "Build to last" is bullcrap consultants suck and how Digg failed to innovate (joopdorresteijn.com)
6 points by joop on Sept 17, 2008 | 8 comments


Not a great article. Very poor English ("Today we live in a world that exponentially innovating world." << first sentence? C'mon!), and the point is very poorly made and, on the whole, incorrect!

I think there's probably some interesting point hidden somewhere in this article, but I can't put my finger on it - it's certainly nowhere near the surface.


The three main points I was able to extract were reflected in the title:

Innovate or die. Consultants suck. Digg sucks because they didn't cash out... and, er, tried to innovate but failed?

I'd agree with 2 of the 3.


I get the point of the article, but I stubbed my toe on this quote within it:

"By 2023, a $1,000 computer will exceed the computation capability of the Human Brain."

How do you quantify the 'computation capability' of the human brain? If we knew the transformations the brain makes to the information it receives, we would understand the brain, which we do not.
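
For what it's worth, the arithmetic behind headline claims like that is usually just a Moore's-law extrapolation. A minimal sketch in Python, where every number is an illustrative assumption of mine, not a sourced figure:

    # Hypothetical Moore's-law arithmetic behind the "by 2023" claim
    # (every number here is an illustrative assumption, not a sourced figure).
    ops_per_1000_usd_2008 = 1e11   # assumed ops/s of a $1,000 machine in 2008
    doubling_years = 1.5           # assumed doubling time for ops/s per dollar

    def projected_ops(year):
        """Projected ops/s of a $1,000 computer in a given year."""
        return ops_per_1000_usd_2008 * 2 ** ((year - 2008) / doubling_years)

    print(projected_ops(2023))     # ~1e14 ops/s, the low end of the
                                   # brain estimates discussed downthread

Hitting 10^15 by the same date needs a noticeably faster doubling time or a bigger starting figure, so the headline year is quite sensitive to the assumptions.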


The transformations performed by some regions of the brain have been modeled.

The estimates of the brain's equivalent computational capacity I have read use functional models of regions of the brain (e.g. audio, visual, ...) to obtain an estimate per neuron and extrapolate that to the entire brain. Estimates are 10^14 to 10^15 operations per second.

See http://books.google.com/books?id=88U6hdUi6D0C&pg=PA122...
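
The extrapolation step itself is just arithmetic. A minimal sketch of the Moravec-style version, with the retina figures as I recall them (treat both numbers as assumptions):

    # Moravec-style extrapolation from one region to the whole brain
    # (both figures approximate, from memory).
    retina_ops = 1e9            # ops/s to match retinal edge/motion detection
    brain_to_retina = 75_000    # rough brain-to-retina ratio by neural volume

    print(retina_ops * brain_to_retina)  # ~8e13 ops/s, i.e. order 10^14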


That was interesting, thanks for the link.

However -- the neural functions cited by Kurzweil for which 'operations per second' have been estimated are:

1) edge and motion detection in the retina

2) computation of time delay between a sound arriving at one of our two ears versus the other

3) silicon computations required to model 10^4 cerebellar neurons

Let's throw out (3), since it's hard to know from the quick mention he gives what 'model' means in this context.

Now consider (1) and (2): they represent specialized, if experimentally accessible, computations. The relationship of operations in the retina to operations in the cortex is unknown. (It seems, actually, that the cortex reimplements certain retinal functions, e.g. directional selectivity, for unknown reasons.) Similarly, IIRC, the time difference between a sound arriving at the two ears is thought to be computed by the olivary nucleus (http://en.wikipedia.org/wiki/Superior_olivary_nucleus), a specialized brainstem structure.
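
(To make (2) concrete: in silicon you'd typically estimate that time difference with a cross-correlation. A toy sketch, and not a claim about the olivary nucleus's actual mechanism:)

    import numpy as np

    # Toy interaural time difference (ITD) estimate via cross-correlation.
    fs = 44_100                          # sample rate, Hz
    rng = np.random.default_rng(0)
    source = rng.standard_normal(fs)     # 1 s of noise from the source

    delay = 20                           # true lag in samples (~0.45 ms)
    left = source
    right = np.concatenate([np.zeros(delay), source[:-delay]])

    # Lag at which the two ear signals line up best.
    corr = np.correlate(right, left, mode="full")
    print(corr.argmax() - (len(left) - 1))   # 20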

So what do the retina and the brainstem have to do with the cortex? Even if you accept (3), you can ask the same question about the cerebellum.

In the link you give, Kurzweil says that these studies show that "it is clear that we can emulate the functionality of brain regions with less computation than would be required to simulate the precise nonlinear operation of each neuron and all of the neural components (that is, all of the complex interactions that take place inside each neuron)."

I hope he's right; but the evidence he gives is pretty thin.
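
One way to see how thin: the whole-brain figure swings by orders of magnitude with the assumed per-neuron rate. A quick sketch (the ~10^11 neuron count is the standard figure; the per-neuron rates are pure assumptions):

    # Sensitivity of the whole-brain estimate to the per-neuron assumption.
    neurons = 1e11   # rough neuron count for the human brain

    for ops_per_neuron in (1e2, 1e3, 1e4, 1e6):
        print(f"{ops_per_neuron:.0e} ops/neuron/s -> "
              f"{neurons * ops_per_neuron:.0e} ops/s total")

The quoted 10^14 to 10^15 range corresponds to assuming roughly 10^3 to 10^4 ops per neuron per second, which is exactly the kind of number these region studies are supposed to pin down.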

[ edit to make each item on the list have its own line ]


For those of you confused like I was: that's 10^14 to 10^15 per second, not 1014 to 1015.


I was particularly annoyed at that point as well, especially since many of the problems the brain solves depend not so much on its "processing speed" as on the "software" that AI attempts to emulate.


What is meant by this:

"We are currently preparing students for jobs that don’t yet exist using technologies that haven’t been invented"



