

Computer Science: "an Alzheimer's patient who has forgotten more than he knows." - asciilifeform
http://lupoleboucher.livejournal.com/29014.html

======
jerf
While I'm not sure I can disagree with the conclusion, I disagree with the
arguments leading up to it. Some of the most interesting progress I see today
is occurring in the field of computer vision, and the reason it is occurring
today and not 15 years ago is that 15 years ago you had a Pentium/MMX 133
with 16 MB of RAM. All of the algorithms that today run in just-barely-real
time, or run in batch mode on a timeframe that lets a single grad student do
many compile-run-analyze-try-something-else cycles per day, would have taken
weeks on that hardware.

I think general AI will see some stuff like this too: algorithms that could
have been conceived of 20, 30, 40 years ago but couldn't feasibly be run on
even a supercomputer of the time, yet run fairly comfortably on a modern
<$1000 laptop from Best Buy. (And of course they run even better on a
dedicated performance server box; my point is that even low-grade consumer
hardware is pretty damn amazing nowadays.)

When I took a grad-level evolutionary computation course about eight years
ago, one of the two professors related a story about one of his earliest runs
of an evolutionary algorithm. He literally carried around a stack of
punchcards as he moved between several schools, and set up the computation so
that he could run it for a few hours as there was time, then suspend it again.
It literally took him years. This same computation of course runs in a few
seconds on something as powerful as an iPhone. (Which isn't too dissimilar in
spec from the laptop I used in that class, an 800 MHz Duron PoC (even at the
time, admittedly) which was proud to have 256MB of RAM, IIRC.)
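A toy example of the kind of run he describes: a minimal evolutionary algorithm on the classic OneMax problem (evolve a bitstring toward all ones). The problem, parameters, and function names here are illustrative assumptions, not taken from the professor's actual punchcard computation; the point is only that such a run finishes in well under a second on any modern machine.

```python
import random

def evolve_onemax(length=50, pop_size=40, generations=200,
                  mutation_rate=0.02, seed=0):
    """Minimal evolutionary algorithm for the OneMax toy problem.

    Fitness is simply the number of 1-bits in an individual.
    """
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)

    # Random initial population of bitstrings.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals wins.
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b

        next_pop = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]            # per-bit flip mutation
            next_pop.append(child)
        pop = next_pop

    return max(pop, key=fitness)

best = evolve_onemax()
print(sum(best), "ones out of", len(best))
```

Even this unoptimized pure-Python version does 200 generations of 40 individuals nearly instantly, which is the whole contrast with the punchcard era.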

It's not just the raw speed of the algorithms, it's the speed of the analysis
loop. Where computation time once dominated the research of even the best
minds, soon the best minds of our generation will actually be able to get
feedback about their AI explorations on reasonable time scales. The fat lady
has not sung yet.

(This still explains a lot in the non-AI cases too. It took a while before
garbage collection was not a big performance hit, then it took a while longer
for popular languages to pick it up. It was never _forgotten_; it's just that
there is often a larger gap between "a good idea" and "a good implementation"
than ideas-people realize. Clearly, UNIX is old and out-of-date, but creating
an entire ecosystem that is clearly superior and also _ready to go_ is not
something you hand to a grad student to write a thesis on. And so on.)

------
timf
> _Really, whatever can be done now with a computer was in principle doable in
> 1965_

Why are people so attached to this line of reasoning... it's like saying
whatever can be done now with an automobile was in principle doable in 1920
because the laws of mechanical engineering haven't changed. Cars now are
"just" faster and have a few more doohickeys on them.

It's sort of a tautology once you make the paradigm in question
all-encompassing enough. I would love to see mind-erasing advances in CS, too...
but the lack of them does not mean that smaller and continually useful
advances aren't happening and aren't worth discussing/praising.

~~~
msie
_Really, whatever can be done now with a computer was in principle doable in
1965_

Yes, but in 1965 we couldn't read your snooty LiveJournal entries.

~~~
dan_the_welder
And you expect people to take your opinion seriously when you are posting it
on Live Journal while wearing a Horned Viking Helmet.

I've worn a Horned Viking Helmet, sometimes it is really the only option,
however I've never posted on Live Journal as it is never the only option.

Live Journal exists only as a punchline and a repository for the musings of
tween goths. Only to be replaced by Twitter in the fullness of time.

------
ianbishop
> Many such authors think "object oriented" or other fancy chinese words like
> polymorphism mean something which it doesn't, really.

I don't know that 'polymorphism' is a 'fancy chinese word'; it's really just
an attempt to bridge to a definition that had already existed for a hundred
years and had a common contextual meaning.

The number of fallacies and blind ignorance in this post is absolutely
astonishing.

~~~
mahmud
Polymorphism, and really, all of Object Orientation is a bit older than "a
hundred years".

<http://en.wikipedia.org/wiki/Theory_of_Forms>

Plato was a Smalltalker.

------
GTanaka
As a student working in AI, I have to say that I both agree and disagree with
this writer. I agree with his assertion that AI, in its current state, is not
nearly as generalizable as we hope it to be, but just because it is not now
doesn't mean we should give up on it entirely and wait for some breakthrough
to come in from another field. Better hardware has been a great boon to AI
researchers, but it's in times like these that we need not only to verify or
dismiss the mathematical theories put forth 50-odd years ago but to enhance
them with experimentation.

For example, perhaps descriptors are truly the wrong way to go about object
recognition in images, or perhaps segmentation is the wrong way to look at a
scene, but they are ideas and they are worth testing. We won't find out unless
we try, because unlike the equations proven decades before, there are no
constraints we can simply apply.

------
javanix
Does anyone know what the 'Ford paradox' is? He refers to it in regards to the
physical implications of real number representation.

~~~
ianbishop
I looked for it earlier, when I originally read the article, and ended up
with some Henry Ford quote which is way out of context.

------
sev
> _despite their inability to write decent high level compilers (like, say,
> natural language coding), or come up with a methodology which prevents
> common software bugs, they think they can reproduce the functions of the
> human brain with a tin can._

A horrible premise to start with. Just because one hasn't achieved perfection
in a particular field, should one not aim high in other related fields?

Who's to say focusing on a broader spectrum won't help with the overall
picture, or help find solutions to challenges that haven't yet been solved?

Little substance is right.

------
StrawberryFrog
Who is lupoleboucher.livejournal.com and why are their trolls showing up on HN
now?

------
hc
this is fantastic trolling but there doesn't really seem to be much substance
to it.

~~~
eplawless
This seems to be the point. "How to get noticed by passively trolling internet
communities." He's doing an excellent job, I've certainly noticed him and
devoted a portion of my day to being irritated by his continued presence.

