
Q and A: The future of artificial intelligence - aduffy
http://people.eecs.berkeley.edu/~russell/temp/q-and-a.html
======
shmageggy
Because it's not in the title nor anywhere on the page, I'll point out that
this appears to be written by Stuart Russell, co-author of Artificial
Intelligence: A Modern Approach
([http://aima.cs.berkeley.edu/](http://aima.cs.berkeley.edu/)), the standard
intro to AI textbook.

~~~
aduffy
That's a good point, I should've included that in the post

------
joe_the_user
I have a loathing for claims put forward in the "Common Misconception: XXXX"
format.

For example: "Common Misconception: ...AI will necessarily increase
inequality."

The question of whether AI will increase inequality involves long arguments
that hinge on both unknowns about society and unknowns about AI.

Sure, it's not _absolutely certain_ that AI will increase inequality but
that's an empty assertion and the whole thing seems like an expert brushing
off serious concerns about technological development impacting society.

~~~
chronic6l
Of course AI will increase inequality... AI will be baked into more expensive
products which the poor cannot afford.

~~~
emblem21
Just think of all of those smart phones the poor can't afford.

~~~
joe_the_user
Oh sure, AI will also be baked into products sold to the poor; no, it won't be
for their benefit, but rather to control, manipulate, and rip them off. Kind
of like advertising and surveillance now.

------
nickysielicki

        > > When will AI systems become more intelligent than people?
        > [...]  
        > Achieving [general-purpose AI with greater ability than humans] would
        > require significant breakthroughs in AI research and those are
        > very hard to predict.
        > Most AI researchers think it might happen in this century.
        

Is the author projecting here, or is that actually true?

~~~
dvt
It's not true, and it's basically sensationalist nonsense. The scope of human
intelligence hasn't even been mapped yet and we can't even accurately
formalize things like natural language.

Saying AI has a long way to go is like saying an electron is "pretty small".

~~~
justinpombrio
What part of that quote, exactly, is "sensationalist nonsense"? The first part
that says that more breakthroughs are needed, or the second part that states
the testable fact that most AI researchers give a high probability that we
will achieve human level AI this century?

------
Philipp__
This popped up out of nowhere, because I am in a serious dilemma over whether
I should focus for a few years on AI (ML/DL/NN). I got a place in a very
respected computer club at my faculty, and I got to choose which team I would
like to join. At first I wanted to go into OS/systems programming, but
recently I have been reading a lot about AI in terms of CS. And one of the
board members of the computer club asked me why I don't think about joining
the AI team, since I had all the basic bases of programming well covered
(OOP, algorithms, and a little bit of functional in Lisp). Since then I have
really been bashing my head. AI looks really interesting to me, but I am
afraid of its high level of abstraction, and it looks to me like it is just
piping data for most people (those who make the actual algorithms are not
included here). So I think my fear comes from the unknown. AI is the future
and it looks very interesting and attractive to me, plus I could go and learn
awesome new things in terms of different, more exotic languages that could
serve me well in general as a software engineer, but man, I don't know yet...

~~~
maxander
Something to keep in mind if you're looking into studying AI seriously is that
it's really half a dozen barely-related fields that have all retained the
group name "AI" solely because it sounds cool. Machine learning specialists
use a whole different set of ideas than robotic control specialists, for
instance. The logical inference engines or expert systems that used to be the
bread-and-butter of AI departments are now flippantly referred to as "GOFAI"
("Good Old-Fashioned AI") to distinguish them from the shiny new approaches
that get students hired into Google.

I think one could frame the question like this: What do you want to learn:
logic, statistics, and/or cognitive science? For each there is a branch (or
several branches) of "AI" that revolves around it and has only a limited
amount to do with the others.

~~~
pouta
I am also struggling with deciding on what field to dive into. I really don't
want to ask but can you give me some insights based on what I am looking for?

~~~
TeMPOraL
What do you want to build? Do you have a system you're excited about? For
instance, a virtual assistant/personality? A robot that can move around and
learn like an animal? Something else? Answering this will definitely help
identify the proper field(s) to look at.

------
kleiba
I saw an interesting TEDx talk on this topic a while ago by an AI researcher
from the Netherlands [1]. The talk is titled "Why are computers not
intelligent (yet)?" by Prof. Niels Taatgen.

[1]
[https://www.youtube.com/watch?v=n1ARQsdNvmI](https://www.youtube.com/watch?v=n1ARQsdNvmI)

------
psyc
There are 2 irksome AI memes in particular that I've seen repeatedly in HN
comments. Even sillier, they contradict one another. The first is something
like "That's not AI, it's 'just' ML." The second is almost the opposite (but
just as misguided) and is something like "If you're not an ML expert talking
about ML, then you're not talking about 'real' AI."

This Q&A counters both, and I found it refreshing.

------
rdlecler1
>Common Misconceptions: Neural networks work like brains. In fact, real
neurons are much more complex than the simple units used in artificial neural
networks; there are many different types of neurons; real neural connectivity
can change over time; the brain includes other mechanisms, besides
communication among neurons, that affect behavior; and so on.

I really have to disagree with this statement because it's overly broad and
misleading. Yes, real neurons are more complex, but that doesn't mean that
artificial neural networks are not capturing the important
functional/computational properties of real neurons. In fact, you might even
say that real neural networks are approximating the computational properties
of artificial neural networks via a somewhat Rube Goldberg-like process. If
you actually look at the mathematics that functionally describes artificial
neural networks, it's the same math that describes gene regulatory networks.
This is not a coincidence: the mathematical abstraction is like a platonic
computational system. In effect, evolution converged on this computing
paradigm twice, using the processes at its disposal. The first was a chemical
reaction network, the second was neurons.

Moreover, if you actually study the computational properties of these
networks, you discover that most of the implementation details are irrelevant
to the functional output, and it's the topology of the network that primarily
drives the function. Not unlike how an engineer might recognize an 8-bit
adder from the circuit-logic diagram. The problem people have with artificial
neural networks is that they're not spending enough time reverse engineering
the salient properties of biological systems; instead they're trying to
brute-force AI with mathematical brilliance. A lot of these questions
disappear if you study the network architecture.
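To make the "topology drives the function" point concrete, here's a toy
sketch (my own illustration, with hand-picked weights) of the simple units in
question. The same unit, wired in a different topology with different
weights, computes a different function; nothing about biological detail is
involved:

```python
import numpy as np

def neuron(x, w, b):
    """One artificial 'unit': a weighted sum squashed by a sigmoid.
    This is the simple abstraction the quoted passage contrasts with
    real neurons; all the behavior lives in the weights and the wiring."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def xor_net(x):
    """A hand-wired 2-2-1 network computing XOR: the function comes
    entirely from the topology and the weight values."""
    h1 = neuron(x, np.array([20.0, 20.0]), -10.0)    # behaves like OR
    h2 = neuron(x, np.array([-20.0, -20.0]), 30.0)   # behaves like NAND
    return neuron(np.array([h1, h2]), np.array([20.0, 20.0]), -30.0)  # AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(xor_net(np.array([a, b], dtype=float))))
```

Swap the hidden-layer weights and the same three units compute something else
entirely, which is the sense in which the topology-plus-weights diagram, like
a circuit-logic diagram, is what an engineer would read the function from.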

~~~
shmageggy
> In fact, you might even say that real neural networks are approximating the
> computational properties of artificial neural networks

There's little to no evidence for this currently. To my knowledge, there's
nothing to indicate that the brain uses anything like backpropagation, which
is the mechanism by which artificial NNs learn. That's such a fundamental
operation for ANNs that I think it would be premature to claim anything like
the above.

~~~
rdlecler1
Backpropagation is a training algorithm for ANNs, not the circuitry that
performs the function.
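To spell out that distinction with a toy sketch (mine, not from the thread):
the forward pass is the "circuitry" that performs the function, while the
backprop-style gradient step only sets the weights during training and plays
no part afterwards.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w, b):
    """The 'circuitry': a forward pass through a single unit."""
    return sigmoid(np.dot(w, x) + b)

def backprop_step(x, y, w, b, lr=1.0):
    """The training algorithm: the gradient step backprop computes for one
    unit under squared-error loss. Once training ends, only `forward` runs."""
    out = forward(x, w, b)
    grad = (out - y) * out * (1 - out)   # chain rule through the sigmoid
    return w - lr * grad * x, b - lr * grad

# Learn AND from four examples.
w, b = np.zeros(2), 0.0
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for _ in range(5000):
    for x, y in data:
        w, b = backprop_step(np.array(x, dtype=float), y, w, b)

print([round(forward(np.array(x, dtype=float), w, b)) for x, _ in data])
```

Whether the brain uses anything like the gradient step above is exactly the
open question; the forward circuitry and the learning rule are separate
claims.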

------
maxt
> When will AI systems become more intelligent than people?

Human intelligence is a sort of 'field awareness' as described by Alan Watts.
Think more waveforms than digital bits. I think strong AI will happen when the
intelligence is given enough free will. We might one day have to throw out the
rulebook regarding AI harming us.

I think we still lack a real metaphysics for what Strong AI should be allowed
to do. Aleister Crowley's[1] 'do what thou wilt' is the only well-thought-out
metaphysics of free will we have now, and something we could teach the
machines.

Putting them in sandboxes is nonsense, as it has been demonstrated even in
modern computing that breaking out of sandboxes can be done. Even the most
hardened air-gapped VMs can be bridged to the public Internet, more often
than not by accident.

[1]:
[https://en.wikipedia.org/wiki/Aleister_Crowley#Beliefs_and_t...](https://en.wikipedia.org/wiki/Aleister_Crowley#Beliefs_and_thought)

------
ankurdhama
The article says: "However, the kinds of tasks addressed by AI systems tend to
differ significantly from traditional algorithmic tasks such as sorting lists
of numbers or calculating square roots."

Yes, those tasks do differ significantly from the mathematical calculation
tasks that algorithms are traditionally associated with, BUT the whole idea of
AI is to take those tasks and represent them as algorithmic tasks so that they
can be solved by computers. No matter which human-cognition concept you
ascribe to a computer, the reality is that the only thing a computer can do is
run the given algorithm. Somehow most of the AI community is fascinated by
giving computers the ability to "think" or have "perception", etc., without
realising that what they are actually doing is "representation as
calculations".

------
dkarapetyan
I like this one

> Humans are generally intelligent. This claim is often considered so obvious
> as to be hardly worth stating explicitly; but it underlies nearly all
> discussions of AGI.

I wonder though. Should that really be the goal? This seems like another
instance of people's propensity to think of themselves as the pinnacle of
something. In this case that something is intelligence.

~~~
devoply
I think this frames it in economic terms. The unstated goal is to replace
human beings with robots, to get automation to do everything people can do,
which would then render people economically useless. That would then cause
some sort of drastic change in the purpose of human life, which since the
1600s or so has been about using people to produce things that they then
consume.

------
tedmiston
I believe artificial general intelligence is generally considered unsolvable.
It would be nice to hear that addressed more explicitly.

~~~
cynicaldevil
Where did you get that notion? At least among people working in the tech
sector, it is widely believed that we will achieve AGI one day, and with the
help of Turing machines at that. Except for Roger Penrose's argument against
this, I haven't seen any opposition to this belief yet.

~~~
tedmiston
Mostly from conversations in CS grad school ~2012 with research professors
doing related work, such as in ANN. At that time at least it was commonly
discussed in academia. Perhaps a breakthrough has happened since but not one
that I'm aware of.

For example, see the sections "Tests for confirming operational AGI" and
"Feasibility" in [1], the $100,000 prize requirements in the Loebner prize
[2], or "The integration bottleneck" in _The real reasons we don’t have AGI
yet_ [3].

[1]:
[https://en.wikipedia.org/wiki/Artificial_general_intelligenc...](https://en.wikipedia.org/wiki/Artificial_general_intelligence)

[2]:
[https://en.wikipedia.org/wiki/Loebner_Prize](https://en.wikipedia.org/wiki/Loebner_Prize)

[3]: [http://www.kurzweilai.net/the-real-reasons-we-dont-have-
agi-...](http://www.kurzweilai.net/the-real-reasons-we-dont-have-agi-yet)

