
Artificial Stupids - Charlie's Diary - tejaswiy
http://www.antipope.org/charlie/blog-static/2011/07/artificial-stupids.html
======
PaulHoule
I'd expect better from Charles Stross.

"Superhuman intelligence" is like "Superhuman strength"; in some ways a
bulldozer has superhuman strength, and in some ways a pocket calculator has
superhuman intelligence.

Computers can play chess better than people today -- there was a day when that
was a pipe dream; now it's a reality. We'll have attained "Skynet" when
computers can play "the game of life" against people and win.

The route of simulating a single human intelligence has been overdone by sci-
fi writers and others who speculate about the singularity. We'll be able to
simulate a statistical model of a person, derived from the output of millions
of people, much sooner than that. The "human memome" required to do this
already exists in everything we've written about ourselves -- it's a matter of
learning how to mine it.

The simulation of human faculties, on the other hand, is a business that's
steadily progressing because it's useful. No business really wants to hire
thousands of call center workers or expensive legal research aides. If we can
simulate human faculties, we can replace workers with machines, so it's an
entirely practical affair.

I don't care about consciousness, souls or any of that. If a system can
perform, if a system can pass, that's good enough. It's good enough for
engineering and it's good enough for business.

As for critics of A.I., their main tactic is the conceit of moving the
goalposts over time. Years ago, Hubert Dreyfus would have said that the
ability of humans to beat computers at chess proved that computers are stupid.
Now he can point out that chess is just simple combinatorial search and
nothing special. Once anything becomes practical, it's no longer magical or
A.I. -- it's just simple.

I can think of three directions of "superhuman".

One obvious way to improve on humans is to be faster... It's clear that this
is useful. Another is to have a larger capacity, in terms of both short- and
long-term memory. It's not so clear what the limits are here, because a
human-like system might not scale well to a larger capacity.

Another way to improve on humans is to be better at statistical reasoning.
People have systematic biases that lead them to make bad decisions. This
matters because poker is a good analogy for the "game of life" -- ultimately
you've got to make guesses about what you can't see, what will happen in the
future, what other people think about it, and how they'll react.
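A classic example of such a systematic bias is base-rate neglect. A minimal
Bayes-rule sketch (the test numbers are illustrative, chosen by me rather than
taken from the thread):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

# A 99%-accurate test for a condition with 1% prevalence: people
# typically guess a positive result means ~99% odds, but the low
# base rate drags the true posterior down to 50%.
p = posterior(prior=0.01, p_e_given_h=0.99, p_e_given_not_h=0.01)
print(round(p, 2))  # 0.5
```

A machine that applies this arithmetic consistently simply doesn't make that
mistake -- which is the sense in which a "better Bayesian" wins a game of
incomplete information.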

So don't laugh, because in a few decades better Bayesians will be taking your
job, eating your lunch, and hopefully being better stewards of the world than
we've been.

------
ctdonath
"[Sci fi] is also crammed with clichés that are superficially plausible but
which don’t hang together when you look at them too closely."

This seems the most valuable bit of the essay. There are so many things that
we "want", until we get them.

This recalls a recent article (anyone have a link? I've been looking for it)
in which the future of human space travel is deemed nonexistent, because
lobbing humans to another place is so hard and so few places are worth
expending that effort on. Do we all want inter-planetary/stellar/galactic
space travel? Of course we say yes, but faced with expending a lifetime (or a
thousand) to get there and burning all of Earth's resources to do it, we'd
say no.

Closer to home, there are so many products and services we want, yet they go
nigh unto unused once we shell out hard-earned money for them. Yeah, I want
XYZ, but upon acquisition I move right on to acquiring PDQ instead of using
XYZ anywhere close to its full potential. I've got a thousand unread books as
a testament to this.

Not quite sure where I'm going with this or why, but his point did seem
salient.

~~~
PaulHoule
This isn't the case for A.I.

Better A.I. means we can automate more things and make more profit. You can
take that to the bank. It's the kind of thing where you can make a 10%
improvement and get 10% extra returns.

Space travel has the issue that a 10% improvement doesn't get you anywhere.
It's also a mistake to conflate travel to Earth orbit, the Moon, other
planets, and interstellar travel.

A mission to Mars, for instance, is plausible with the technology we've got.
Difficult and expensive, yes, and maybe not worth it, but it could be done.

To go to another star you need to find a planetoid's mass of antimatter -- and
that's just the beginning of your trouble.

I was working on a story about an interstellar war and came to the conclusion
that, if you want to attack a world around another star, there's no need to
put an atomic warhead on your missiles, because the energy of fission or
fusion is nothing compared to what it takes to accelerate something to 10% of
light speed.
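That back-of-the-envelope claim holds up. A quick sketch comparing the
relativistic kinetic energy of a projectile at 10% of light speed with rough
textbook energy yields for nuclear fuel (the yield figures are approximations
I'm supplying, not numbers from the comment):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ke_per_kg(beta):
    """Relativistic kinetic energy, in joules, of 1 kg moving at beta * c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * C ** 2

ke = ke_per_kg(0.1)    # ~4.5e14 J for each kilogram at 10% of light speed
dt_fusion = 3.4e14     # ~J released per kg of deuterium-tritium fusion fuel
u235_fission = 8.2e13  # ~J released per kg of U-235 completely fissioned

# The projectile's kinetic energy already exceeds the yield of fusing or
# fissioning an equal mass of fuel outright -- and a real warhead converts
# only a small fraction of its total mass into fuel energy.
print(f"{ke:.2e} J/kg, {ke / dt_fusion:.1f}x fusion, {ke / u235_fission:.1f}x fission")
```

So at 0.1 c the kinetic energy per kilogram is already larger than the total
yield of an equal mass of nuclear fuel; adding a warhead barely changes the
impact energy.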

