
What you wanted to know about AI - chriskanan
http://fastml.com/what-you-wanted-to-know-about-ai/
======
jallmann
When I started school, my dream was to figure out a theory to underpin a grand
unified model of artificial intelligence. Imagine my disappointment once I
started studying the subject in detail.

Most functional AI nowadays consists of algorithms that are carefully tuned to
solve a very specific problem in a narrowly defined environment. Most
research today is pushing the boundaries of a local optimum. Right now,
true AI is a pipe dream without a fundamental shift in how we approach AI-type
problems. And no, machine learning/deep learning is not that shift; it is just
another flavor of the same statistics that everybody already uses.

What concerns me is not Skynet; what concerns me is the exasperating over-
confidence that some people have in our current AI capabilities, on Hacker
News and elsewhere. Too often, we discuss such technology as a miracle tonic
to various economic or social woes, but without acknowledging the current
state of the technology and its limitations (or being completely ignorant of
such), we might as well be discussing Star Trek transporters. And usually, the
discussion veers into Star Trek territory. Proponents of self-driving cars: I
AM LOOKING AT YOU.

Take self-driving cars: at least with humans, our failure modes are well-
known. We cannot say the same for most software, especially software that
relies on a fundamentally heuristic layer as input to the control system. To
that mix, add extremely dynamic and completely unpredictable driving
conditions -- tread lightly.

~~~
foreigner
The key to self-driving cars is to realize that they don't have to be perfect
- they just have to be better than us. It's not that the AI driver is so good
- it's that human drivers are SO BAD! I agree with you that AI is a pipe
dream, but I do think self-driving cars will succeed. I don't think the
computers will ever match our judgement, but it's trivially easy for them to
beat us on attention span and reaction time, which will make them better
drivers.

~~~
jallmann
> The key to self-driving cars is to realize that they don't have to be
> perfect - they just have to be better than us.

Again, that's skirting the issue. Do you have any idea how close self-driving
cars are to being "better than us"? As someone who's done computer vision
research: not close at all.

> I don't think the computers will ever match our judgement

That is exactly the problem.

> it's trivially easy for them to beat us on attention span and reaction time

Attention span and reaction time are not the hard parts of building an
autonomous vehicle.

This kind of comment beautifully illustrates the problem with casual
discussions about AI technology. Humans and computers have very different
operating characteristics, and discussions all focus on the wrong things:
typically, they look at human weaknesses and emphasize where computers are
obviously, trivially superior. What about the converse: the gap between
where computers are weak, and where humans are vastly superior? More
importantly, what is the actual state of that gap? That question is often
completely ignored, or dismissed outright. Which is disappointing, especially
among a technically literate audience such as HN.

~~~
Retric
I suspect that the current google car is already safer than the overall
average driver.

Don't forget some people speed, flee from the cops, fall asleep at the wheel,
get drunk, text, look at maps, have strokes, etc. So, sure, measured against
peak human performance the cars have a long way to go. However, accidents
often happen in the worst cases, and computers are very good at paying
attention to boring things for long periods of time.

PS: If driverless cars on average killed 1 person each year per 20,000 cars
then they would be _significantly_ safer than human drivers.
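
A rough sanity check on that number (back-of-envelope only; the baseline
figures below are approximate US numbers I'm assuming, not anything from this
thread):

    # Rough comparison against the human-driver baseline (assumed US figures).
    us_road_deaths_per_year = 33_000       # approximate annual traffic deaths
    us_registered_vehicles = 250_000_000   # approximate registered vehicles

    cars_per_death = us_registered_vehicles / us_road_deaths_per_year
    print(round(cars_per_death))  # ~7576, i.e. about 1 death per 7,600 cars/yr

    # At 1 death per 20,000 cars per year, driverless cars would be roughly
    # 20000 / 7576 ~= 2.6x safer than that human baseline.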

~~~
jallmann
> Don't forget some people speed, flee from the cops, fall asleep at the
> wheel, get drunk, text, look at maps, have strokes, etc.

Again you are falling into the same trap: a nonsensical comparison of human
and computer operational/failure modes. Of course computers can't have
strokes. And yes, they are good at "paying attention to boring things". That
is trivially true. And that's not where the discussion should be focused.

I do hope self-driving cars will be generally available sooner rather than
later. What's not to like about them? But what I'm really curious about is how
that availability will be qualified. Weather, road, visibility conditions?
Heavy construction? Detours? Will this work in rural areas or countries that
don't have consistent markings (or even paved roads!)? Will a driver still
have to be at the wheel, and to what extent will the driver have to be
involved?

What is really annoying are breathless pronouncements about a technology
without critically thinking about its actual state and implementation. We
might as well be talking about Star Trek transporters.

~~~
Retric
A car that can get sleepy or drunk people home 80% of the time would be a
monumental advantage and would likely save thousands of lives a year.

Basically, an MVP that flat out refuses to operate on non-designated routes,
in bad weather, or even at highway speeds could still be very useful.

PS: Classic stop-and-go traffic is another area where speeds are low and
conditions are generally good. But because people can't pay attention to
boring things, you regularly see traffic accidents there, which create
massive gridlock and cost people days per year sitting in traffic.

------
davmre
A lot of respected AI researchers and practitioners are writing these "AIs are
really stupid" articles to rebut superintelligence fearmongering in the
popular press. That's a valuable service, and everything this article says is
correct. Deepmind's Atari network is not going to kick off the singularity.

I worry that the flurry of articles like this, rational and well-reasoned all,
will be seen as a "win" for the nothing-to-worry-about side of the argument
and lead people to discount the entire issue. This article does a great job
demonstrating the flaws in current AI techniques. It doesn't attempt to engage
with the arguments of Stuart Russell, Nick Bostrom, Eliezer Yudkowsky, and
others who are worried, not about _current_ methods, but about what will
happen when the time comes -- in ten, fifty, or a hundred years -- that AI
_does_ exceed general human intelligence. (refs:
[http://edge.org/conversation/the-myth-of-ai#26015](http://edge.org/conversation/the-myth-of-ai#26015),
[http://www.amazon.com/Superintelligence-Dangers-Strategies-N...](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111))

This article rightly points out that advances like self-driving cars will have
significant economic impact we'll need to deal with in the near future. That's
not mutually exclusive with beginning to research ways to ensure that, as we
start building more and more advanced systems, they are provably controllable
and aligned with human values. These are two different problems to solve, on
different timescales, both important and well worth the time and energy of
smart people.

~~~
AndrewKemendo
_It doesn't attempt to engage with the arguments of Stuart Russell, Nick
Bostrom, Eliezer Yudkowsky, and others who are worried, not about current
methods, but about what will happen when the time comes -- in ten, fifty, or a
hundred years -- that AI does exceed general human intelligence._

That's because it would be arguing against a straw man, so there is no reason
to engage the argument.

~~~
davmre
I'm not sure I understand what you mean by "straw man" here. The usual meaning
is that to attack a straw man means to argue against a position that no one
actually holds, but which is easier to attack than your opponent's actual
position. The concerns about the long-term future of AI are real, actual
beliefs held by serious people. At this point there's a respectable literature
on the potential dangers of unconstrained, powerful optimizing agents; I
linked a couple of examples. These arguments are well thought through and
worth engaging.

By contrast, this article and many others are implicitly arguing against an
_actual_ straw man position, the position of "let's shut down AI research
before it kills us all in six months", which no serious person on either side
of this debate really holds (though it's understandable how someone who only
read the discussion in the mainstream press could come to think this way).

~~~
AndrewKemendo
_at this point there's a respectable literature on the potential dangers of
unconstrained, powerful optimizing agents, of which I linked a couple of
examples._

There isn't really, though. Superintelligence, while an interesting book,
doesn't do much more than pontificate on sci-fi futures based largely on
writing from Yudkowsky. Bostrom gives no clear pathway to AGI. Neither does
Yudkowsky in his own writings, nor Barrat (Our Final Invention). By the way,
Superintelligence is kind of an offshoot book from Global Catastrophic Risks.

All of them take these interesting approaches -- WBE, BCI, AGI, etc. -- and
just assume they will achieve the goal without looking realistically at where
the field is or how it would get there.

So what they do is say: we are here right now; in some possible fantasy world
we could be there. The problem is they can't connect the dots between the two.

For example, find me someone who can tell me what kind of hardware we need for
an AGI. Can't do it, because no one has any idea. What about interface, what
is the interface for an AGI?

Even better (this was my thesis project): what group would have a strong
enough requirement that an AGI would have to be built? Industry, academia,
the military? OK, great: can they get funding or resources for it?

etc...

Note, I am not saying that there is no potential that AGI could be a problem.
The point here is that nobody has firm footing to say that it will definitely
be a huge risk and that we need to regulate its development.

There are actually people calling for legislation to prevent/slow AGI
development - see Altman's blog post and many of the writings on MIRI. So that
is what I am railing against.

We need more development into AGI not less.

~~~
maaku
I'm more in your camp than MIRI's but...

Could you argue against the position directly? You kinda took a swerve in the
middle there. Bostrom, Yudkowsky, and Barrat's positions are basically
"superintelligence is possible, it will eventually happen in some way, and
when it does ..." It's not within the scope of the quoted works to elaborate a
technical roadmap or provide firm dates for the arrival of superhuman machine
intelligence.

So you would like to see unencumbered research in this area. They argue that
the long term outcome of this research is existentially risky and therefore
should be tightly regulated, and the sooner the better. Do you have any points
regarding this particular difference of opinion?

~~~
csallen
I'm not worried about the immediate future at all. We definitely haven't
reached the point where hastily-implemented regulations will do more good than
harm.

That said, you put it exactly right. The arguments about the potential long-
term risk are persuasive _regardless_ of where we are today. It bothers me to
see this argument ignored time and time again.

~~~
maaku
> That said, you put it exactly right. The arguments about the potential long-
> term risk are persuasive regardless of where we are today. It bothers me to
> see this argument ignored time and time again.

Actually I am almost completely unpersuaded by the arguments of Bostrom,
Yudkowsky, et al, at least in their presented strong form. Superintelligent AI
is not a magical black box with infinite compute capacity -- those are the two
assumptions which really underlie the scary existential risk scenarios if you
look closely. It is not at all clear to me that there will not exist
mechanisms by which superintelligences can be contained or their proposed
actions properly vetted. I have a number of ideas for this myself.

In the weak form it is true that for most of the history of AI its proponents
have more or less ignored the issue of machine morality. There were people in
the 90's and earlier who basically advocated for building superintelligent AI
and then sitting back and letting it do its thing to enact a positive
singularity. That indeed would have been crazy stupid. Kudos to Yudkowsky and
others for pointing out that intelligence and morality are (mostly?)
orthogonal.
But it's a huge unjustified leap from "machines don't necessarily share our
values" to "the mere existence of a superintelligent amoral machine will
inevitably result in the destruction of humanity." (not an actual quote)

~~~
eli_gottlieb
I just want to point out: the history of software, just regular software, has
been typified by the New Jersey approach and the MIT approach. The former
consists in just hacking together something that kinda-mostly works, releasing
fast, and trying to ameliorate problems later. The latter consists in
thoroughly considering what the software needs to do, designing the code
correctly the first time with all necessary functionality, (ideally) proving
the code correct _before_ releasing it, putting it through a very thorough QA
process, and then releasing with a pre-made plan for dealing with any
remaining bugs (that you didn't catch in the verification, testing, and other
QA stages).

Only the latter is used for programming _important_ things like airplanes,
missile silos, and other things that can kill people and cost millions or
billions of dollars when they go wrong -- the things in which society knows we
_cannot afford_ software errors.

I don't think it's even remotely a leap to say that we ought to apply only
this thorough school of engineering to artificial general intelligence,
whether it's at the level of an idiotic teenager or whether it's a
superintelligent scifi _thingy_.

Now, if we think we can't really solve the inherent philosophical questions
involved in making a "world optimization" AI that _cannot_ go wrong, then we
should be thinking about ways to write the software so that it simply doesn't
do any "world optimization", but instead just performs a set task on behalf of
its human owners, and does nothing else at all.

But either way, I think leaving it up to New Jersey kludging is a Very Bad
Idea.

~~~
maaku
No one is making "world optimization" engines. The concept doesn't even make
sense when the wheels hit the road. No AI research done anytime in the
foreseeable future would even be at risk of resulting in a runaway world
optimizer, and contrary to the exaggerated claims being made, there would be
plenty of clear signs something was amiss if it did happen and plenty of time
_to pull the plug_.

~~~
eli_gottlieb
I think you missed the last sentence: your software doesn't _need_ to be a
"runaway world optimizer" to be a very destructive machine merely because it's
a bad machine that was put in an important job. Again: add up the financial
and human cost of _previous_ software bugs, and then extrapolate to consider
the kind of problems we'll face when we're using _deterministically buggy_
intelligent software instead of _stochastically buggy_ human intellect.

At the very least, we have a clear research imperative to ensure that "AI",
whatever we end up using that term to mean, "fails fuzzily" like a human being
does: that a _small_ mistake in programming or instructions only causes a
_small_ deviation from desired behavior.
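
One way to make that last requirement precise (a gloss of my own, offered as
an assumption rather than anything proposed in the thread) is a
Lipschitz-style continuity condition on the map from specifications to
behavior:

    \[
      d_B\big(f(s),\, f(s')\big) \;\le\; K \cdot d_S(s, s')
    \]

Here f takes a specification s to the realized behavior f(s), d_S and d_B
measure distances between specifications and between behaviors, and a bounded
constant K is exactly the guarantee that a _small_ specification error
produces only a _small_ behavioral deviation.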

------
aamar
There are some good comments here about new AI tools; it's a shame that the
article's premise is a straw man.

 _The fears of machine superintelligence are based on the belief that true AI
is just around a corner. After all, we’re so advanced and the progress is only
accelerating, it’s probably a few years away, at most._

Analogously, public figures started warning us about climate change, so it
must be just a few years before the earth is uninhabitable. It should be
pretty obvious that if we start worrying about significant threats only a
"few years" before they kill us, that will, as a rule, be too late.

Carbon-driven global warming ("the greenhouse effect") was gathering support
in the 60s, followed by consensus in the 80s. Yet we are just now experiencing
the first years of growth without increased carbon output.

The claim by the concerned parties with respect to harmful AI is that it may
be a threat in the next 25 to 50 years: 2043 and 2065 are common estimates. "A
few years" isn't a good way to characterize that, to interpret Gates, etc., or
to frame a reasonable understanding of when we should start worrying.

~~~
pqomdv
The global warming comparison isn't very good. We had the capabilities for
carbon output 50 years ago and it has slowly been increasing. But we don't yet
have an AI. So there is no need to warn anyone, except out of irrational fear.
Once we actually get it, it would make sense to start warning about its
applications, so it doesn't get out of control.

If I apply your analogy correctly, then warning about AI now is the same as
warning about global warming in the 19th century would have been. Not very
logical, and certainly very paranoid.

~~~
one-more-minute
If people had started to worry about the long-term effects of carbon output
_before_ it was already widespread, a lot of the damage it has caused could
have been limited.

I don't see what's irrational about trying to solve problems before they
become problems, rather than trying to do damage control after the fact.

~~~
AndrewKemendo
You aren't getting the comparison.

It's akin to someone back then worrying about the long-term effects of
steam/water vapor.

You don't know if vapor is actually a problem, or will become one, and you
don't have any data in any field to back up your concerns.

~~~
Thrymr
Arrhenius predicted global warming in 1896:
[http://en.wikipedia.org/wiki/Svante_Arrhenius#Greenhouse_eff...](http://en.wikipedia.org/wiki/Svante_Arrhenius#Greenhouse_effect)

~~~
AndrewKemendo
Ah, fantastic! He led the creation of the field of climatology, if I recall
correctly.

If someone can do forecasting of AGI with similar empiricism that would be
revolutionary and welcomed.

~~~
aamar
Arrhenius probably did no empirical research in this field. He devised a
generalized formula based on theoretical considerations and indirect
measurements (of the moon's appearance). It would take ~60 years before a
quorum of researchers believed his formula could provide accurate real-life
predictions.

Moravec's 70s papers are roughly comparable to Arrhenius's formula. The basic
computational power predictions (a minor extension of Moore's law) still seem
to be basically correct. There's a reasonable debate to be had about whether
more interesting predictions made by Moravec/Kurzweil/etc. have been validated
or not.
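
For concreteness, that "minor extension of Moore's law" is just an exponential
extrapolation. A minimal sketch (the brain figure is Moravec's oft-cited
estimate; the baseline year, baseline speed, and doubling period are
illustrative assumptions, not his exact numbers):

    import math

    # Moravec-style extrapolation: when might hardware match the brain?
    brain_ops_per_sec = 1e14           # Moravec's oft-cited brain estimate
    base_year, base_ops = 2000, 1e10   # hypothetical desktop-class baseline
    doubling_period_years = 1.5        # assumed Moore's-law doubling time

    doublings_needed = math.log2(brain_ops_per_sec / base_ops)
    print(base_year + doublings_needed * doubling_period_years)  # ~2019.9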

------
moyix
Argh. From the article:

> The fears of machine superintelligence are based on the belief that true AI
> is just around a corner. After all, we’re so advanced and the progress is
> only accelerating, it’s probably a few years away, at most.

Versus MIRI [1], quoting Nick Bostrom, whose book is arguably what most
recently sparked all of the current discussion of AI risk:

> If what readers take away from language like “impending” and “soon” is that
> Bostrom is unusually confident that AGI will come early, or that Bostrom is
> confident we’ll build a general AI this century, then they’ll be getting the
> situation exactly backwards.

> [...]

> > My own view is that the median numbers reported in the expert survey do
> not have enough probability mass on later arrival dates. A 10% probability
> of HLMI [human-level machine intelligence] not having been developed by 2075
> or even 2100 (after conditionalizing on “human scientific activity
> continuing without major negative disruption”) seems too low.

> > Historically, AI researchers have not had a strong record of being able to
> predict the rate of advances in their own field or the shape that such
> advances would take. On the one hand, some tasks, like chess playing, turned
> out to be achievable by means of surprisingly simple programs; and naysayers
> who claimed that machines would “never” be able to do this or that have
> repeatedly been proven wrong. On the other hand, the more typical errors
> among practitioners have been to underestimate the difficulties of getting a
> system to perform robustly on real-world tasks, and to overestimate the
> advantages of their own particular pet project or technique.

I agree with the meat of the article, though – recent AI progress has surely
been oversold by the media. It's really frustrating to see people continually
arguing against strawmen when there are some perfectly lovely arguments to
grapple with instead!

Aside: is there some kind of law we could coin that if a rebuttal to AI risk
starts off by referencing Skynet and Terminator, we can safely assume it's not
going to be worth reading?

[1] [https://intelligence.org/2015/01/08/brooks-searle-agi-voliti...](https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/)

~~~
0xdeadbeefbabe
The smart guys (professors and such) have brought this on themselves by
pretentiously labeling their field Artificial Intelligence. Excitable fools
would move on to the next thing if it were called Computational Rationality or
something similarly boring.

------
foobarqux
> The fears of machine superintelligence are based on the belief that true AI
> is just around a corner.

No, the fear of AGI is that the time between when we first start to see
promising progress in AI and the point where AI is out of our control would be
so short that we would not be able to react to prevent it.

~~~
maaaats
But the fear and thought should instead be directed to how one can solve the
economic and social challenges that arise when AI removes a lot of jobs. That
is the issue here, not some Skynet-thingy destroying humanity.

~~~
foobarqux
They are both legitimate problems.

------
blixt
I would agree with this article that no independent solution today has the
slightest chance of being iteratively improved into some kind of real
intelligence. Even the "generic" game-playing AI is very limited outside the
scope of a pixel buffer and some digital output signals.

However, I don't understand how the arguments correlate to "there is no need
to worry about AI". The real worry should come from the fact that there are
now many, many more people looking at the problem of AI than before, and it's
a competition that now has serious money behind it.

The article mentioned the infinite monkey theorem as a counterargument, but
ironically that is exactly how I think we will stumble upon generic
intelligence. The more people that work on it and have the confidence, belief,
and money to actually try out random things, the more likely it is that
someone out there will discover the step change that will, in a day, take AI
from deep prediction algorithms to eerily deliberate behavior.

Even if no one manages to shortcut our way to generic intelligence, we sure as
hell will brute force it. While we're extremely early in the world of
simulating organic brains, they exist. People have constructed virtual organic
bodies controlled by virtual electric currents from virtual neurons in virtual
brains. It's happening today, and as capacity, domain knowledge, and most
importantly awareness increases, more and more people will work on the problem
and bigger and bigger brains will be simulated.

I really don't see how we can question whether we will eventually reach human
level intelligence running on computers. The only way I see generic machine
intelligence _not_ happening is by making people believe it is so impossible
that no one ever tries to do it. And maybe that was the purpose of this
article.

~~~
lsz9
No doubt increased funding and interest will increase the odds of stumbling
upon "cheap" strong AI - a faster-than-brute-force solution. But consider the
only strong intelligence we know: say intelligence relied only on the brain
(which might well not be the case). This organ consumes a large fraction of
the energy available to the individual. I'm not saying evolution produces
optimal solutions, but I guess it's fair to say that a vastly superior
algorithm would have had a good chance of conquering the world by now. So
imitating the biological processes of the brain in silicon would be rather
costly no matter the gain. And that would in the end (at best) be a
one-in-seven-billion individual, i.e. no magic powers. I guess funding will
flow towards practical systems that augment some consumer who will in the end
provide a return on the investment. Just saying we are not short on
intelligence, and it's not likely to be cheap to run massive-scale super-AI in
silicon.

~~~
blixt
This is all very vague theorizing of course, but I believe that once we reach
a level of artificial intelligence that resembles biological intelligence,
scaling it will be quite feasible on a silicon platform. You've also removed
the element of maintaining a body (exercise, eating, hygiene) as well as
expanded potential interfaces (direct connection to the internet) so that even
a 1:1 silicon brain would presumably be more efficient.

The next thing you can do is optimize the time step element, so that for every
1 second of thinking in a silicon brain, it has performed the equivalent of 10
seconds of thinking in a human brain. At that point, you've surpassed human
intelligence. It's also safe to assume that if we do reach this point, we've
learned a lot more about how biological intelligence works, and may be able to
use to our advantage the beneficial intelligence properties of savants.

The really big downside with all these approaches is that the intelligence
will be built from a blueprint that includes emotions, and we may do something
that is very morally wrong towards the intelligence that we created.

------
pyrrhotech
TL;DR: current AI capabilities are exaggerated by cherry-picked sample data,
and the fear fueled by famous smart people is unwarranted.

Do any of those cited give specific timelines? Even if we are very far away,
do you really doubt that one day machines will have superhuman intelligence? I
take that as pretty much a given, whether it's 50 or 500 years from now. What
I'm not so sure of is whether fear is an appropriate response.

Altman's bacteria handwashing analogy doesn't hold up. We don't care about
bacteria because they have no central nervous system or consciousness on any
level. However, we go out of our way to protect animals that can feel pain and
experience emotions because it's what we've decided through our intelligence
and higher reasoning is the moral thing to do. Stats show that the more
intelligent and educated the human, the more likely he is to behave morally as
our greatest moral philosophers define it. Why would super intelligent
machines buck this trend?

~~~
dannnn
> Do Any of those cited give specific timelines? Even if we are very far away,
> do you really doubt that one day machines will have superhuman intelligence?
> I take that as pretty much a given, whether it's 50 or 500 years from now

Why? If you extrapolate from the amount of progress we have made toward AGI in
the last 50 years (ie, none), then it's reasonable to argue that we still will
have made no progress 50 and 500 years from now.

There are intellectual problems that humans aren't capable of solving; it
wouldn't make any sense to talk about "superhuman intelligence" if that wasn't
the case. The currently available evidence suggests that "constructing an AGI"
might very well be one of those problems.

~~~
weavejester
> If you extrapolate from the amount of progress we have made toward AGI in
> the last 50 years (ie, none)

That's an odd way of defining progress.

> There are intellectual problems that humans aren't capable of solving; it
> wouldn't make any sense to talk about "superhuman intelligence" if that
> wasn't the case.

A superhuman intelligence doesn't necessarily have to come up with solutions
humans would _never_ think of, it just needs to come up with a solution in
less time, or with less available data, or with fewer attempts.

------
themgt
I've begun to think about AI more in terms of "artificial will" than
"artificial intelligence". Intelligence and will seem mostly orthogonal, in
that advanced computational reductionism appears capable of extremely
intelligent behavior without anywhere near the amount of self-determinism
shown by insects.

Whether artificial will is something that can be implemented in a Turing
machine and run on silicon seems an open question. I believe the concept, when
deeply considered, is almost precisely antithetical to the goal of programming
languages. The goal of artificial will is to give control to the program, not
the programmer. Perhaps that's possible in a Turing machine, but I have a
feeling the "natural language" in which to express such a program would be
like an inside-out LSD trip.

~~~
tmerr
If you knew how the atoms were arranged in an animal, simulating the animal's
behavior would be as simple as simulating each individual atom. In practice
that's not tractable, but it shows that in theory, since the behavior of atoms
is computable (blah blah quantum physics, shut up), anything an animal can do
can be simulated by a Turing machine, including will. Just because it's not
simple to express in a programming language doesn't mean it's not possible.

------
mnembrini
Nice article. I'd like to point out that in the Pacman example the agent is
only receiving a partial picture of the environment (details are in the video
description), so it's unfair to criticize it for lack of planning.

As to why this is the case you'd have to ask the researcher, but I think it's
because the observation space would be too big for the machine running the
agent (in both memory and run time).

------
dataker
As a contributor to several ML projects, I am happy the mainstream media and
'thought leaders' haven't found out about and picked on early projects like
OpenCog or DeepDive. We could've seen a tremendous amount of pseudoscience BS
that would've undermined important initiatives.

------
pauletienney
Indeed we are far, far away from true AI (to me it implies
self-consciousness). The point is that even if it happens in 100 or 200 years,
it will be a _huge_ change in human history.

I guess Gates, Hawking and Stark are talking about a far-future AI creation;
they are kind of long-term-thinking guys.

~~~
mikejholly
Agreed. As someone who works in AI (and knows its limitations) it's
frustrating to see the layperson getting nervous about AI after watching
Chappie and hearing some of these quotes from Gates and the like. Human level
AI is way way way out.

~~~
codezero
I think it's reasonable for the layperson to get a bit nervous. Even if it's
50 or 200 years, advanced, human-brain-like-or-better AI will happen.

Imagine the public sentiment if mass media made people aware of the
inevitability of the nuclear bomb 50 to 200 years in advance and people (not
just the government) were actively working on developing it.

~~~
NhanH
In the nuclear bomb analogy, it's more likely than not that public awareness
of nuclear power would have prevented the _nuclear plant_ (the good thing);
the _nuclear bomb_ (the bad thing) would almost certainly have been developed
anyway.

~~~
codezero
Maybe, it's hard to know for sure, just like it's hard to know now :)

I think that had nuclear physics been more obvious 50-200 years earlier, it
may have led to a lot more practical private-sector development. In the case
of power generation, this is seriously beneficial.

How will private-sector AI turn out? Will they boot up sentient AIs in Docker,
then discard them? Is that OK? Who even knows?

------
codyh1
Human-level AI seems to be a far reach, but AI in specific domains seems to be
the approaching intersection. What I mean by this is the self-driving car:
very discrete skills, but not anywhere near human level. Let's not forget
Google's "Find Mittens the Cat" AI on videos.

~~~
simonh
Computers have always been very good at performing very specific tasks in
highly circumscribed circumstances. What we're learning to do is expand the
circumstances within which the computer can operate, but as you say, the tasks
are still very highly specific.

We could probably have programmed a computer to drive a car back in the 70s,
within an extremely specific configuration of roads and with no traffic or
pedestrians. Now self-driving cars have very advanced sensors and can cope
with other traffic, pedestrians, and a much wider range of road geometries.
But it's still only useful for driving a car; you couldn't take the same
program and teach it to even control a boat or a plane, let alone play
Jeopardy, predict the weather, or trade on the stock market. Systems like this
are going to be very useful, but they are not taking us on a path to develop
strong general AI.

~~~
ghaff
>But it's still only useful for driving a car

And it's still only useful for driving a car in a rather specific
configuration of roads and a somewhat limited set of conditions. I'd argue
that a car's ability to operate autonomously and rather reliably under those
circumstances is leading a lot of folks to be _very_ optimistic about how long
it will be before I can summon a robo-taxi with my smartphone.

------
jcoffland
I'm always glad to see someone cutting through the hype. Way too many people
gobble up the misinformation pumped out by academia and corporations whose
main goal is to increase their own funding. Add to that the wild speculative
fiction of the likes of Ray Kurzweil and pop-science "news", who again are
mostly interested in entertaining and shocking you to increase their own
popularity and thus revenue. There's just not enough incentive to think
critically, and too few of the readership are inclined to cut through the
crap. It's not even socially acceptable to think critically. It's such a
downer. You're better off just agreeing and not pointing out your friend's
gullibility if you want to continue to have friends.

------
kamaal
I'm not sure why this is so difficult. AI is simply going to be a highly
context-sensitive answering machine which will pick its answers from a massive
knowledge graph (which it can grow as it spends more time interacting with
assisted learning). If there is no context, it will just pick the most
commonly expected answer based on weights and priorities. This is exactly what
the human mind does as well.

Everything else is parsing the sensor inputs into a language that makes sense
to the AI answering machine.

So it's knowledge, category, and context. Form a big enough and comprehensive
graph of interconnected elements, and traversing that graph is what AI will do
eventually, apart from growing the graph and using the knowledge within the
graph to improve itself.
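
A toy sketch of that weighted, context-sensitive lookup (every name and the
storage scheme here are hypothetical illustrations, not a real system):

    from collections import defaultdict

    class ToyAnswerGraph:
        def __init__(self):
            # topic -> context -> list of (answer, weight) edges
            self.edges = defaultdict(lambda: defaultdict(list))

        def learn(self, topic, context, answer, weight=1.0):
            # Growing the graph stands in for "assisted learning" above.
            self.edges[topic][context].append((answer, weight))

        def answer(self, topic, context=None):
            by_context = self.edges.get(topic, {})
            if context in by_context:
                candidates = by_context[context]
            else:
                # No usable context: fall back to the most commonly
                # expected answer across all contexts, by weight.
                candidates = [c for cs in by_context.values() for c in cs]
            if not candidates:
                return None
            return max(candidates, key=lambda pair: pair[1])[0]

    g = ToyAnswerGraph()
    g.learn("jaguar", "animals", "a big cat", weight=2.0)
    g.learn("jaguar", "cars", "a British carmaker", weight=3.0)
    print(g.answer("jaguar", "animals"))  # -> a big cat
    print(g.answer("jaguar"))             # -> a British carmaker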

------
ilaksh
Study AGI approaches like OpenCog, HTM, etc. and look at mainstream deep
learning objectively. Lose your supernatural beliefs about the mind or human
exceptionalism.

What do people do? Advanced pattern-based behavior generation and unsupervised
hierarchical spatial temporal pattern learning and abstraction. Logic and
reasoning. Attention and goals.

I believe we will start to see somewhat convincing human-like conversational
interfaces, speaking and movement from machines in less than five years. I
don't even believe these things require real breakthroughs necessarily -- we
can probably mostly combine existing techniques.

------
ccvannorman
The most common misconception about AI is that it will "mimic human level
intelligence or better". Human intelligence is an infinitesimally small sliver
of possible conscious entities with agency, and whatever "wakes up" enough for
headlines to declare AI is real will almost certainly be worlds apart from us.

------
ikeboy
>It is telling that among people who actually do machine learning most aren’t
afraid of superhuman AI (even if they believe it’s possible).

[http://futureoflife.org/misc/open_letter#signatories](http://futureoflife.org/misc/open_letter#signatories)

------
mbrock
To me, algorithmic trading and investing is already a pretty big scary AI
proposition. And it's happening all over, for real profit.

Privatization and algorithmification (or whatever) of large scale human
decision making seems like an enormous change. And the computers don't have to
"think" in order to do this.

The next step in this scenario would be policy decision making based on AI
techniques. Statistical measuring, machine learning, etc in order to decide on
political details, gerrymandering, etc.

In other words, computers don't need to grow sentient and godlike. We could
just delegate power to them anyway. Maybe they'll be a bit stupid; so are we,
just in different ways. And they won't understand human concerns in any deep
way. But they'll be efficient and profitable.

You could already look at the international market as a kind of machine or
distributed algorithm. With human "computers." Replace these computers with
robots, per standard capitalist efficiency procedures, and voila, the world is
run by machines.

~~~
fiatmoney
"policy decision making based on ... statistical measuring" is > 2000 years
old. The modern version based on a mathematical understanding of statistics &
sampling, as opposed to collecting aggregates large enough that you can just
treat the error as 0, is ~80 years old.

~~~
mbrock
Yeah, it's quite fascinating. Like the "Cybersyn Project" in Chile. [0]

 _Project Cybersyn was a Chilean project from 1971–1973 (during the government
of President Salvador Allende) aimed at constructing a distributed decision
support system to aid in the management of the national economy. The project
consisted of four modules: an economic simulator, custom software to check
factory performance, an operations room, and a national network of telex
machines that were linked to one mainframe computer._

[0]:
[http://en.wikipedia.org/wiki/Project_Cybersyn](http://en.wikipedia.org/wiki/Project_Cybersyn)

~~~
fiatmoney
Cybersyn was kind of a joke; it was a Star Trek set hooked up to telexes where
aides manually aggregated information in the same way leaders get their
briefings the world over.

------
sushirain
“A woman holding a teddy bear in front of a mirror”. Hilarious.

The jokes will not last long.

------
smilencetion
Why does SHE still keep us alive?

