
AI - rdl
http://blog.samaltman.com/ai
======
elwell
Critically speaking, this article doesn't add anything. I don't know why I
read it. Can anyone explain why they upvoted?

~~~
rdl
For me, "There are certainly some reasons to be optimistic. Andrew Ng, who
worked or works on Google’s AI, has said that he believes learning comes from
a single algorithm - the part of your brain that processes input from your
ears is also capable of learning to process input from your eyes. If we can
just figure out this one general-purpose algorithm, programs may be able to
learn general-purpose things." is what was most interesting to me. The idea
that there's a fairly simple algorithm to learning, applied in the brain,
which produces at least the basic learning capability, and possibly
consciousness.

~~~
nl
We already have a candidate algorithm[1], which is successful in a broad range
of applications[2][3][4].

[1]
[http://en.wikipedia.org/wiki/Deep_learning#Convolutional_neural_networks](http://en.wikipedia.org/wiki/Deep_learning#Convolutional_neural_networks)

[2] [http://deeplearning.net/reading-list/](http://deeplearning.net/reading-list/)

[3]
[http://en.wikipedia.org/wiki/Deep_learning#Results](http://en.wikipedia.org/wiki/Deep_learning#Results)

[4] [http://www.wired.com/wiredscience/2012/06/google-x-neural-network/](http://www.wired.com/wiredscience/2012/06/google-x-neural-network/)

~~~
electrograv
"Deep Learning" is not itself a candidate for anything, because it's not any
single algorithm, but a category of approaches.

Deep Learning generally refers to machine learning algorithms that stack
multiple layers of simpler functions to build more complicated functions, and
optimize all the parameters to best fit your training set and generalize to
new samples (the hard part). Though it usually refers to neural networks, I
don't think there's any reason it doesn't also apply to other layered
approaches, as long as there's a relatively unified learning algorithm applied
across the whole system.
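
A minimal sketch of that "stacked simpler functions" idea in Python/NumPy
(names and sizes here are purely illustrative, not any particular published
architecture): each layer is an affine map plus a nonlinearity, and "deep"
just means composing several of them.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """One simple function: x -> relu(W @ x + b)."""
    W = rng.normal(scale=n_in ** -0.5, size=(n_out, n_in))
    b = np.zeros(n_out)
    return W, b

def forward(params, x):
    """Compose the layers: the whole network is one nested function."""
    for W, b in params:
        x = np.maximum(0.0, W @ x + b)  # ReLU nonlinearity
    return x

# Three stacked layers: 784 inputs -> 128 -> 64 -> 10 outputs.
network = [layer(784, 128), layer(128, 64), layer(64, 10)]
print(forward(network, rng.normal(size=784)).shape)  # (10,)
```

The hard part the comment mentions -- optimizing all the parameters at once so
the composition generalizes -- is what backpropagation and the various
training tricks are for; this sketch shows only the structure.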

There are clearly many different deep learning algorithms, even if you just
count the permutations of tricks you can choose from to improve layered NN
generalization. Though to be fair, I think very good progress is being made
towards developing "better" algorithms, in the sense that new ones (e.g. RBM
pretraining + dropout) usually perform better than older algorithms no matter
what data you use them on (network architecture is another matter entirely).

~~~
nl
I actually linked to convolutional neural networks on the Deep Learning page
as the algorithm.

But I do agree with your point.

------
kevinalexbrown
If the only goal is an 'artificial' consciousness, it might be more prudent to
consider a functional definition of what consciousness is and try to build
that. We didn't make computers by modeling how individual neurons perform
mathematical calculations.

On the other hand, if you want to go the biological route, there's some
awesome work to be done. If I were to study consciousness, here's the question
I would ask: how do we separate our selves from our surroundings? Patients
with brain-machine interfaces (like moving a mouse cursor) start by thinking
about moving their arms around. Then they apparently report that they
gradually just feel that the interface is another body part. So if it's set up
to change the TV channel, they just imagine that they have a channel-changing
organ.

So maybe you want to build a system that can identify what is a part of itself
versus what is not, and it's not just a fixed list. So what does that data
structure look like? How is it defined, queried, and updated? Defined by what
you can 'influence'? So gradated based on degree of influence? These aren't
just broad philosophical questions; they're more specific and actionable.
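
One hypothetical way to make that concrete (everything below is invented for
illustration, not a claim about how brains do it): keep a graded score per
channel, updated by how reliably intention predicts outcome, so "self" is a
threshold over learned influence rather than a fixed list.

```python
# A toy graded "self model": membership is a score per channel, not a list.
class SelfModel:
    def __init__(self, learning_rate=0.1):
        self.influence = {}          # channel -> score in [0, 1]
        self.lr = learning_rate

    def observe(self, channel, intended, happened):
        """Update: nudge the score toward 1 if intention matched outcome."""
        score = self.influence.get(channel, 0.0)
        target = 1.0 if intended == happened else 0.0
        self.influence[channel] = score + self.lr * (target - score)

    def is_self(self, channel, threshold=0.8):
        """Query: gradated membership, thresholded for a yes/no answer."""
        return self.influence.get(channel, 0.0) >= threshold

model = SelfModel()
for _ in range(30):                          # a cursor that follows intent
    model.observe("cursor", "left", "left")
model.observe("weather", "sunny", "rain")    # the weather does not
print(model.is_self("cursor"), model.is_self("weather"))  # True False
```

On this reading, the BMI patients' experience falls out naturally: a channel
whose outcomes track intention long enough simply crosses the threshold.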

That's just one possible angle, but it's different than, say, machine learning
paradigms where you want to build a machine that can do pattern classification
(which the brain undoubtedly does). There are probably other routes as well.

~~~
debt
We know a ton about the brain but so little about the mind itself. We still
don't have definitive answers to what consciousness is, why it's here, what
it's useful for, etc. Some people debate whether the mind exists at all. Also,
there's still very little understanding of the difference between the
conscious and unconscious mind.

I think building an artificial consciousness is going too far. Artificial
intelligence is simpler; it's just fake intelligence. Seems easy enough,
right? If it looks like a duck and quacks like a duck, then it's intelligent.
We don't need to make it "conscious" necessarily, again whatever that means,
in order for it to be intelligent.

I feel like we can build artificially intelligent software pretty "easily"
relative to making it "conscious".

~~~
nzp
> We know a ton about the brain but so little about the mind itself. We still
> don't have definitive answers to what consciousness is, why it's here, what
> it's useful for, etc. Some people debate whether the mind exists at all.
> Also, there's still very little understanding of the difference between the
> conscious and unconscious mind.

One of the really, really bad consequences of the Cold War was the scientific
divide between East and West. By that I mean a serious lack of scientific data
exchange between the blocs. The consequences are still felt, and this area
(the problem of consciousness) is one that suffered. The problem of
"consciousness" was basically solved, at least at a conceptual level, by
Soviet psychology and neuropsychology. Here I refer, of course, to the work of
Vygotsky and Luria. What is consciousness? Almost nothing at all by itself.
Consciousness as found in humans is a consequence of our cognitive development
and the advanced symbolic capabilities of humans. The subjective perception we
have of the thing we call consciousness is "simply" (it's not really simple
when you get into details) a product of humans acquiring language skills (I'm
simplifying).

This is not to say the subject is trivial, it takes volumes to describe what
is happening, but the thing we informally call "consciousness" is really
nothing at all in and of itself, and the perception we have of it is just a
result of the very complicated process of cognitive development. Thin air,
like Lisp's cons.

If you want to read on it I can recommend Vygotsky's _Language and Thought_
(actually, it's his only book) and Luria's _Language and Consciousness_ (I'm
not sure it was ever translated into English, it's a collection of his lecture
notes from a university course he did on the subject) or possibly _The
Cognitive Development: Its Cultural and Social Foundations_.

Why this line of thinking is mostly ignored in the West I have no idea. Why we
still cling to metaphysical (even religious, I would say) fantasies about
"consciousness" is an interesting topic in itself. Is it because it's romantic
to think there's something special, transcendent, about our minds? Are we
really that sentimental? I have some hypotheses, but it's a different topic.

~~~
debt
I think that's another possible _narrative_ for the mind but it's certainly
not hard science. There's no concrete data showing where the mind is in our
bodies or what part of the brain creates it. Psychology can't answer questions
like where the mind comes from or what experience is.

It's an interesting narrative though and I'll check out those books.

~~~
nzp
> Psychology can't answer questions like where the mind comes from or what
> experience is.

I'm not sure I understand what you mean by this. Of course it can; that's the
whole purpose of psychology (and, more fashionably, neuroscience of course).
To me that sounds like saying _science_ can't answer these questions. Do note
that when I say "psychology" I mean strictly the scientific areas of whatever
comes in the bag labelled "psychology". Due to historical accidents the term
acquired a lot of BS pseudo-scientific baggage, and it's really a shame that
baggage can detract from the wealth of valuable hard results honest scientific
psychology has uncovered.

The answer that developmental cognitive psychology, at least the theory I'm
referring to, gives is that "the mind" comes from the only place it can come
from: neural processes and the way they hook into the environmental
interactions of the organism (social and physical). The key to understanding
what gives rise to "consciousness" is in understanding the role of language
acquisition in broader cognitive development. The point where a child utters
its first words is neither the beginning nor the end of this extremely nuanced
process. In my first post I took it for granted that it's understood this is
not just armchair speculation; it's based on empirical data. Like any good
scientific theory it's far from complete, and maybe inaccurate in some
details, but it's certainly infinitely better than an endless philosophical
debate (with strong religious, or in the best case idealist, undertones) on
what the mind is and where it comes from.

~~~
haberman
> it's based on empirical data

What empirical data can say anything about, for example, the philosophical
zombie problem?
[http://en.wikipedia.org/wiki/Philosophical_zombie](http://en.wikipedia.org/wiki/Philosophical_zombie)

~~~
nzp
There's no such data, because the zombie problem is complete nonsense. Well,
actually, it is complete nonsense precisely because empirical data can't say
anything about it.

~~~
haberman
By that argument, there is no moral argument against inflicting pain on
others, because the pain of another is not something we can empirically
observe, except by analogy of how we react to the pain we ourselves
experience.

~~~
nzp
First, the moral argument against inflicting pain on others doesn't depend on
the existence of pain. The moral dilemma is: is it acceptable to inflict pain
on others or not. This is different from, and to a large extent independent
of, the question of whether pain exists in the experience of others. In other
words, if pain exists in others, it doesn't follow that you _have to_ , by
mere logical reasoning, make a moral conclusion that inflicting pain is wrong.
There is an uncrossable ontological abyss between the empirical what _is_ and
the moral what _should be_.

Second, the case of pain from an empirical side is not at all like what we
have in the philosophical zombie "problem". We _can_ empirically observe pain.
There are all sorts of physiological and neural manifestations of pain. Of
course, now you may say "ah, but how can we know that these empirical
manifestations mean the person is experiencing the sensation of pain".
Scientifically that dilemma makes little sense, it's simply unproductive, it's
scientifically useless. If we were to go by that route, we could inject a
similar dilemma into every scientific problem, which inevitably would lead to
the problem of solipsism. How can we _really, really_ be certain that anything
at all exists? Well, I suppose we can't, but this is a question that science
has long ago abandoned because it doesn't get you anywhere, it doesn't yield
any useful results.

Do note that unlike the question of pain, the zombie problem is defined so
that there is in principle absolutely no way to detect, to measure, if someone
is a zombie or not. On the other hand, we _can_ in principle measure and
detect events correlated with introspective reports on sensations. If we
couldn't do that for some phenomenon it would be wise to consider that the
phenomenon doesn't exist for the purpose of empirical scientific examination.

Frankly, I'm surprised that my previous post (where I say the zombie problem
is nonsense) got downvoted, because this is the foundation of scientific
methodology. If you cannot, even in principle, measure/detect something, then
it makes no sense to discuss it. Of course, you can amuse yourself and
speculate on it, but that falls outside the boundaries of scientific inquiry,
and I hope that's what we're discussing here.

~~~
haberman
> Of course, now you may say "ah, but how can we know that these empirical
> manifestations mean the person is experiencing the sensation of pain".
> Scientifically that dilemma makes little sense, it's simply unproductive,
> it's scientifically useless.

Sure, it's _scientifically_ impossible to evaluate. From a purely scientific
perspective pain is just electricity. How would you convince an intelligent
being that could not feel pain that it exists at all?

The existence of pain falls outside the boundaries of scientific inquiry, I
agree. But are you saying that it therefore doesn't exist? Because your
earlier argument seems to be that we can explain the mystery of consciousness
within a scientific framework, and that is the larger point I disagree with.

~~~
nzp
> How would you convince an intelligent being that could not feel pain that it
> exists at all?

Assuming the being is "reasonable" (in this context it would mean it's willing
to accept that there exist concepts that it may not understand or directly
experience, and is willing to trust us), we could just point out the chemical
and electrical phenomena correlated with pain and say that it's something that
causes a certain kind of feeling of discomfort. We would get in trouble if
this being also can not feel general discomfort, but you're probably bound to
hit a wall in understanding at some point anyway when communicating with an
entity whose experiencing capabilities are wildly different from ours.

> The existence of pain falls outside the boundaries of scientific inquiry, I
> agree. But are you saying that it therefore doesn't exist?

Actually, my point about pain was that it does exist, precisely because it can
be examined and explained within a scientific framework. If we couldn't do
that, then we could say that for all practical purposes, as far as science is
concerned, "pain doesn't exist".

The same is true for consciousness. What's difficult about it is that it's not
_a_ thing, there's no hormone for consciousness, there's no brain centre where
it's localized, rather it's a process and a product both phylogenic and
ontogenic so it's a lot harder to capture it and identify it, to put it "under
the microscope". It's not some secret sauce to intelligence, it's a
consequence of intelligence. And the most important part of the process is the
dynamics of language acquisition (at least when we're speaking of conscious
experience in Homo sapiens).

I could go into the details but I'm afraid my posts would explode in length.
ATM I don't have time to dig for good online material on this, and I'm under
the impression that the theory in time got derailed into developing some
practical aspects concerning child cognitive development, verbal learning,
etc., and away from the hard, meaty implications we're discussing here, so I'm
reluctant to even attempt to go down that rabbit hole. But they're explicitly
there (the books I mentioned discuss the issue at length). Interestingly,
about 10 years ago I was doing some work on word-meaning and symbol grounding
development, and I was both glad and frustrated to see the literature on
computer modelling in this area full of operationally defined concepts from
the theory, while people were seemingly unaware that this work had already
been treated in depth on the theoretical level, because there were no
references to it then. I'm not sure if anything has changed; I've since moved
on to other things. For example, take the Talking Heads model[1][2]. It's not
about consciousness per se, and although the authors never reference the
socio-cultural theory of cognitive development (a horrible name in this day
and age, it tends to evoke associations to post-modern drivel, but nothing
could be further from the truth), it can give you a good idea of some aspects
of the dynamics explored in the theory, because what is happening in the TH
model is exactly what the S-C theory describes as happening externally during
language acquisition (in broader strokes, though).

As for the philosophical zombie problem, I'd like to retract what I said about
it being nonsense. Actually, it's very useful in showing why worrying about
the subjective sensation of consciousness is completely useless in AI; it is
very much like asking how many angels can dance on the tip of a needle. On a
very related note I'd add: people are severely underestimating the
significance of the Turing test.

[1]
[http://staff.science.uva.nl/~gideon/Steels_Tutorial_PART2.pdf](http://staff.science.uva.nl/~gideon/Steels_Tutorial_PART2.pdf)

[2]
[http://scholar.google.com/scholar?hl=en&q=talking+heads+experiment&btnG=&as_sdt=1,5&as_sdtp=](http://scholar.google.com/scholar?hl=en&q=talking+heads+experiment&btnG=&as_sdt=1,5&as_sdtp=)

~~~
haberman
> Actually, my point about pain was that it does exist, precisely because it
> can be examined and explained within a scientific framework.

The physical processes of pain (i.e. the electricity) can be observed
scientifically, but the "sensation" of pain (to use your word from before)
cannot. But it is the "sensation" of pain that gives it its moral
significance; otherwise inflicting pain would be no different morally than
flipping on the switch to an electrical circuit.

> The same is true for consciousness. What's difficult about it is that it's
> not a thing, there's no hormone for consciousness, there's no brain centre
> where it's localized, rather it's a process and a product both phylogenic
> and ontogenic

I can only conclude that you mean something different than I do when you say
"consciousness." To me the sensation of pain is a subset of consciousness.
It's the difference between electricity "falling in the middle of the forest"
so to speak and electricity that causes some sentient being to feel
discomfort.

> Actually, it's very useful in showing why worrying about subjective
> sensation of consciousness is completely useless in AI

Sure it's useless _to AI_. To AI the zombie problem doesn't matter, because
the goal is to produce intelligence, not sentience. But it's useful in a
conversation about what sentience and consciousness mean.

If we created intelligence that could pass the Turing Test against anybody, it
would be basically impossible to know if it experiences sentience in the way
that all of us individually know that we do. But that is the essence of the
zombie problem. Where does sentience come from? We have no idea.

Actually I take it back; the zombie problem will be extremely useful to AI the
moment a computer can pass the Turing Test, because that's when it will matter
whether we can "kill" it or not.

~~~
JabavuAdams
> The physical processes of pain (i.e. the electricity) can be observed
> scientifically, but the "sensation" of pain (to use your word from before)
> cannot.

You state this as though it's a given, but it's not. You're assuming Dualism.
So, of course you end up with Dualism.

> But it is the "sensation" of pain that gives it its moral significance,
> otherwise inflicting pain would be no different morally than flipping on the
> switch to an electrical circuit.

This is a silly over-simplification. Complexity matters. The patterns of
electro-chemical reactions that occur when I inflict pain on another human
cause that human to emote in a way that I can relate to because of the
electro-chemical reactions that have been happening in me and those around me
since before my birth. So what?

It's in no way comparable to flipping a light switch, except in the largely
irrelevant detail that electricity was part of each system.

The fact that an incredibly complex system consisting of individuals,
language, and society should yield different results from three pieces of
metal and some current shouldn't be the least bit surprising, and is not a
reasonable argument for dualism, or p-zombies.

Here's my take on the p-zombie "problem". We can say all kinds of shit, but it
doesn't have to make sense. For example I can say "This table is also an
electron". That's a sentence. It evokes some kind of imagery, but it's utter
nonsense. It doesn't point out some deep mystery about tables or electrons.
It's just nonsense.

~~~
haberman
> You state this as though it's a given, but it's not. You're assuming
> Dualism.

No. Dualism is the idea that our minds are non-physical. I say minds are fully
physical, and all thinking happens in the physical realm. But somehow the
results of this thinking are perceived and sensed by a self-aware being as
"self" in a way that other physical processes are not.

> The patterns of electro-chemical reactions that occur when I inflict pain on
> another human cause that human to emote in a way that I can relate to
> because of the electro-chemical reactions that have been happening in me and
> those around me since before my birth.

Exactly. You are extrapolating _by analogy_ that other people experience pain
in the same way you do, because you cannot experience their pain directly in
the way that they do. But this reasoning by analogy is just an assumption. And
it certainly offers no insight into why you are self-aware and a computer (a
very different but still complex electrical system) is not (we assume).

------
edanm
If anyone is interested in AI, I highly recommend joining Less Wrong, a
community started by AI researcher Eliezer Yudkowsky. He started the community
to convince people to focus on the "friendly AI problem". [1] I actually
recommend that everyone read LW, but _especially_ if you're interested in AI.

[1] In a nutshell, the friendly AI problem is: assume we create an AI. It may
rapidly become more intelligent than us, if we program it right. As soon as it
becomes significantly more intelligent, we will no longer be the most
intelligent beings around, so the AI's goals will matter more than ours.

Therefore, we should really give it good goals that are compatible with what
we want to happen. And since no one right now knows how to define "what humans
want" well enough to write it in _code_ , we'd better figure THAT out _before_
building AI.

~~~
timClicks
Any suggestions for getting started in the LW community? Its barrier to entry
means I can't seem to vote on anything, etc. It's quite intimidating really.

~~~
edanm
I'll reiterate something another poster said, since it isn't getting enough
credit:

Read HP:MoR.

It's a fanfic of Harry Potter, written by the same Eliezer Yudkowsky who wrote
much of Less Wrong. It was specifically written to convey the feeling of "what
it means to be a rationalist".

For those who aren't into Harry Potter or into fanfiction (like me), I can
tell you this: surprisingly, it is one of the _best stories I've ever read_.
And I'm talking just as a story, never mind the other value you can get from
it, which is a good introduction to the "rationalist" community.

I'd argue that the BEST way to understand what is going on at LessWrong is to
read HP:MoR, as it was intended to be such an intro and succeeds masterfully,
while being amazingly fun.

~~~
cgag
I want to emphasize just how good this book is. It may very well be my
favorite thing I've read and I was also initially skeptical when it was
presented as fan fiction.

I think I might restart reading it.

------
mappum
I believe human-level general intelligence (and beyond) is already inevitable,
even if we don't make significant developments in "solving" intelligence.
Projects that are already developing stuff like this (e.g. IBM Blue Brain) are
just copying the human brain as closely as possible. Of course, this isn't as
efficient as it could be (they simulate it all at the molecular level, so you
can only get 1 neuron per CPU). However, as Moore's Law progresses, even if we
don't make the software more efficient, we will eventually be able to create a
fully functional simulation.

But if you look at the history of technology, things we create aren't usually
exactly based on models seen in nature. Airplanes aren't exactly like birds. I
believe we will find a more "man made" model for general intelligence (maybe
not even a neuronal model) that works much more efficiently with the hardware
we have available.

Going back to the airplane analogy, we already have the people who strap
wooden wings to their arms and jump off buildings (like Blue Brain), but we
are looking for the first Wright brothers design.

~~~
TeMPOraL
"So... why didn't the flapping-wing designs work? Birds flap wings and they
fly. The flying machine flaps its wings. Why, oh why, doesn't it fly?"

[http://lesswrong.com/lw/vx/failure_by_analogy/](http://lesswrong.com/lw/vx/failure_by_analogy/)

A very relevant article, both to the main topic and bird/airplane example.

Yes, biology has solved some problems and can suggest some solutions, but we
can't go cargo-cult on it and expect things to work just because they _look
similar_.

~~~
moconnor
We were trying to make airplane-sized birds, which don't exist in nature for a
variety of very good reasons.

Equally, our materials science wasn't good enough for flapping wings at the
time.

I expect to see flapping wing designs appearing in micro-flyers within a
decade. At the insect scale they're really useful:

[http://www.epsrc.ac.uk/newsevents/casestudies/2011/Pages/Tinyflyingmachineswillrevolutionisesurveillancework.aspx](http://www.epsrc.ac.uk/newsevents/casestudies/2011/Pages/Tinyflyingmachineswillrevolutionisesurveillancework.aspx)

------
rdpfeffer
Hierarchical Temporal Memory is the closest model to the brain that I've seen
([http://en.wikipedia.org/wiki/Hierarchical_temporal_memory](http://en.wikipedia.org/wiki/Hierarchical_temporal_memory)).
They are capable of generalized learning and excel in the same way that humans
do. They are capable of abstraction, self categorization, and online learning.

There are elements of past AI models in the HTM model, however to reduce HTM's
or any deep learning algorithm to a mere combination of past AI concepts
overlooks the power of the right model when it is achieved. It would be like
saying that Facebook is just a news feed. Sure, that's what gets most of the
eyeballs, but there's a lot more there which would drastically reduce its
value if not present.

What I think is most interesting is that we may find that humans learn pretty
inefficiently, from the perspective of the amount of input data required over
time. This may seem silly at first, but when you consider how many neurons
cover the surface area of our ears and eyes, and then consider the fact that
it takes anywhere from 12 to 14 months for a child to speak its first word,
you might start to agree with this line of thought. Also, the fact that this
processing all happens in parallel pushes me even further in this direction.

Whatever the case may be, HTMs are definitely a cool area of research. For
those who are interested, you should definitely check out more of Jeff
Hawkins' work at Numenta. They've been able to demonstrate some pretty novel
things. He wrote a book back in 2006 that blew my mind. It went into a deeper
explanation of how HTMs could model everything from deep learning to
consciousness, creativity, and a bunch of other things.

------
jcfrei
> I am quite confident that we’ll be able to make computer programs that
> perform specific complex tasks very well. But how do we make a computer
> program that decides what it wants to do?

Are we so sure that "we" really are in charge of what we want to do? I believe
a lot of our desires and ambitions are hardcoded into our brains and we just
project them onto present goals, like getting a promotion or learning how to
play the piano. Ultimately all these desires cater to the same few desires we
always had and were born with: developing a sense of social belonging and
intimacy.

See also:
[http://en.wikipedia.org/wiki/Belongingness](http://en.wikipedia.org/wiki/Belongingness)

~~~
idProQuo
This is why the desire for Strong AI boggles my mind. In order for a computer
to operate at a "human" level, it would need to make decisions based on things
like ambition and fear and greed. It will also have to constantly make
mistakes, just like we do.

If it didn't have character flaws, it wouldn't be operating at a "human"
level. But if it does have these character flaws, how useful would it really
be compared to a real human? Is the quest for Strong AI just a Frankensteinian
desire to create artificial life?

I'm curious if there are any good papers looking into stuff like this.

~~~
onnoonno
Yeah. And what if the computer discovers that desire is pointless? Will it
have a religion? Buddhism? Zen Buddhism?

Why does quantum computing even exist if all forms of computing are
equivalent?

------
dicroce
One thing that crosses my mind whenever I imagine creating a human-level
intelligence is that it takes humans YEARS of constant stimulation to begin to
exhibit intelligent behavior... Sometimes I wonder if we'll have the algorithm
way before we realize it...

~~~
diminoten
We can stimulate a computer with the equivalent amount of information in much
less time than years, though.

~~~
marcosdumay
We probably can't. Unless it's an extremely fast algorithm, we don't have the
processing power to make it run faster than our brain.

We will be able to in a couple of decades, probably. But not now.

~~~
TeMPOraL
The brain seems to be slow (I read it was measured to run at 200Hz tops), but
it's extremely parallelized and caches the living shit out of everything.

~~~
auntienomen
It also appears to have quite a remarkable instruction set.

------
khafra
> artificial consciousness, or creativity, or desire, or whatever you want to
> call it. I am quite confident that we’ll be able to make computer programs
> that perform specific complex tasks very well. But how do we make a computer
> program that decides what it wants to do? How do we make a computer decide
> to care on its own about learning to drive a car? Or write a novel?

I am not able rightly to apprehend the kind of confusion of ideas that could
provoke such a question.

 _We_ cannot decide what we want to do--we can only decide how best to fulfill
our wants; a person learns to drive a car because they want the freedom,
social approval, and other stuff that comes with that.

Natural selection gave us our base-level desires, that all other desires
spring from; and it was able to do that because it's an optimization process.
A functional AI's desires will come from _some_ sort of optimization process;
the only question is what that process will be optimizing.

------
byehnbye
I don't understand the logic behind these types of posts that add no value to
the poster and the people discussing it in comments.

Is it a signaling mechanism to attract people working in this area? I'm sure
you have already turned them off by showing your naivete. So no value to you.

As for people trying to discuss this by racking their brains and looking for
new ideas: anyone with a decent thought/idea will never share it here to
enlighten us laymen. So no value to us either.

So, what's the point HN, and why all the upvotes?

~~~
feralmoan
> So, what's the point HN, and why all the upvotes?

I guess SV types get a little hot for AI-based utopian/dystopian succession
when they're the only ones who could possibly gain from such an outcome.

------
yogo
Interesting post. The way I look at it is from the perspective of a human
baby. How do they become intelligent? They have the "sensors" to detect
features and will be able to recognize their parents. A lot of things are then
learned, from touching a hot stove and from getting a formal education. For a
machine to be artificially intelligent it would have to learn from its
environment but also take formal instruction. That seems like a lot of ground
to cover, and this is what makes it unrealistic (for now).

You can have a machine read every book in existence, but how long will it take
for it to understand them in the same way a human reading a lot of books would
understand them?

~~~
yogo
Not sure what set off the downvotes, but we can teach a computer to recognize
characters for application in OCR. We also learned those characters from being
taught in school and from reading bad handwriting. We teach computers the same
way. How are they supposed to magically recognize them, especially since they
didn't invent whatever language it is?

Note that I'm referring to artificial general intelligence[0].

[0]
[http://en.wikipedia.org/wiki/Artificial_general_intelligence](http://en.wikipedia.org/wiki/Artificial_general_intelligence)

~~~
beat
This was back in the 1990s, but I worked in what was basically a data entry
company, where the processing was a mixture of scanned forms (Scantron style)
and human key entry. The project I was on was image scanning the forms for
other purposes, so we had a neural net handwriting recognition system that we
were comparing to human key entry at a large scale - millions of documents.

What we found was that human key entry significantly outperformed the neural
nets, even when the data was carefully handwritten in constrained boxes.
Humans were so far ahead of the heavily trained neural nets that the software
was basically unusable at that point.

Of course, that was nearly 20 years ago, and things have probably moved on
quite a bit. But you can still see the basic problem in Captcha-style
validation on web pages. Computers just can't be trained to recognize
distorted text that humans can read pretty easily.

~~~
chegra
[http://www.kurzweilai.net/vicarious-ai-breaks-captcha-turing-test](http://www.kurzweilai.net/vicarious-ai-breaks-captcha-turing-test)

I think they have done that.

------
bohm
I wrote about limitations on AI imposed by algorithmic complexity here:
[http://paulbohm.com/articles/artificial-intelligence-obstacles-information-theory/](http://paulbohm.com/articles/artificial-intelligence-obstacles-information-theory/)

In essence: unless we extract that "single algorithm" that Andrew Ng believes
in from an existing medium, we're unlikely to rediscover it independently.

(Read the papers I link to at the bottom of the article for a more rigorous
explanation.)

------
Ygg2
Wait, wasn't creativity partly solved?
[http://en.wikipedia.org/wiki/Computational_creativity](http://en.wikipedia.org/wiki/Computational_creativity)
(I'm referring to Stephen L. Thaler's work.)

I mean, you need two neural nets: one that adds chaos to another neural net,
and a second one that returns the result. You could probably optimize by
somehow making those parts parallel.
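
A toy sketch of that two-net arrangement (loosely after Thaler's "Creativity
Machine" as described above; the architecture and names here are illustrative,
not Thaler's actual system): noise injected into one net produces novel
patterns, and a second net scores them and keeps the good ones.

```python
import numpy as np

rng = np.random.default_rng(1)
W_gen = rng.normal(size=(8, 4))    # stands in for a trained "imagination" net
W_crit = rng.normal(size=(1, 8))   # the second net, which judges the output

def generate(x, chaos=0.5):
    """Forward pass with noise injected into the weights: the chaos source."""
    noisy_W = W_gen + chaos * rng.normal(size=W_gen.shape)
    return np.tanh(noisy_W @ x)

def score(pattern):
    """The critic net returns a scalar judgment of a candidate pattern."""
    return float(W_crit @ pattern)

seed = rng.normal(size=4)
candidates = [generate(seed) for _ in range(100)]  # perturbation -> novelty
best = max(candidates, key=score)                  # selection -> "creativity"
print(score(best))
```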

------
hooande
Many people see Artificial General Intelligence as a panacea. The idea is,
"We'll create artificially intelligent scientists who will solve all of our
other problems for us!". I think that future generations will look back on
this as a modern version of alchemy. If blogs had existed centuries ago I'm
sure that people would have called the transmutation of metal the most
important trend of their time. This isn't to denigrate the author of this post
or the people who have dedicated their lives to AGI research. It's just that
this idea probably falls into the category of "If it seems too good to be
true, it probably is". [1]

We have no proof that artificial _general_ intelligence can exist. We have
numerous examples of specific intelligence: playing chess, driving cars,
various forms of categorization. But we don't have a single example of an
application that can handle a task that it hasn't been specifically trained
and tested for. It's not to say that it isn't possible, but there is no more
evidence for AGI than there is for Bigfoot, leprechauns or space aliens. The
idea of artificial consciousness currently requires a leap of faith.

The most important trend today is collecting massive amounts of data and using
them to make accurate predictions. Instead of lusting after one all-singing
all-dancing intelligent program we should focus on tackling one form of
decision making at a time. Drive cars, land planes, predict the weather and
calculate the best way to get from point A to point B. One day we'll wake up
in a world where artificial intelligence is all around us and the idea of a
one size fits all solution will seem silly and quaint.

[1] In fairness, this was said about every amazing thing in modern life. You
never know.

~~~
theatgrex
But we do have proof that general intelligence exists. So how would it be
impossible for artificial general intelligence to exist? Do you believe in
mind-body dualism? AGI will be difficult, but comparing it to Bigfoot or
leprechauns is ridiculous. There is a huge difference between very very very
hard and impossible.

~~~
001sky
Gold exists, so how would it be impossible for _alchemy_ to exist?

~~~
RogerL
Star fusion does it every day.

Evidence that something exists is evidence that it can be created.

~~~
001sky
Alchemy and AI are both a bit more culturally specific. I agree that seeing
life on Earth would not be logically inconsistent with extraterrestrial life
existing. But that is not to say AI=ET.

------
rdl
I wonder what other credible contenders for "most overlooked technology" are.

I think "physical tamper evidence/tamper response" is one, along with hardware
security functionality (crazy secure virtualization extensions, etc.) --
essentially competing with Intel not just on power but also on security
features. Although Intel is leading in this area with TXT and now SGX.

------
elwell
10 points in 10 minutes. If multiple accounts weren't used, that must be a
read-worthy article.

~~~
napoleond
As others have pointed out, the domain probably earned some points all by
itself (and multiple submissions, which act as additional votes on the primary
submission). It was also posted by a (popular) YC alum, which typically also
accelerates upvotes.

~~~
elwell
> and multiple submissions, which act as additional votes on the primary
> submission

Didn't know that; thanks for pointing out.

------
dschiptsov
The problem of AI is that it has much more complexity than we used to think.
Even the best, like Minsky, grossly underestimated the complexity, so they got
stuck, and are now in the process of developing more subtle, refined theories.
In some sense the story of modern AI is the story of how Minsky assigned scene
decomposition for robotic vision as a summer project, and now teaches Society
of Mind and calls for an entirely different approach to what intelligence is.
It is not, of course, some massively parallel recursive problem solver
implemented in neurons; that is too naive a view, so there is no use searching
for one. It is as complicated as the actions of all of humanity, with billions
of semi-independent agents.

------
erik14th
Humans have a drive, a raison d'etre, intelligence being only a means to
fulfill that drive.

Thing is, that drive is uncertain/subjective.

Learning can't be an objective by itself; intelligence is just a tool. From
that perspective you can say there is already AI, like targeted ads: it learns
about you and acts accordingly.

So to have a "generalist" AI, general in the way human intelligence is
general, you'd have to have an objective like staying alive and build up from
that.

~~~
psionski
No, it's not. Drive is the desire to maximise future available options (e.g.
to not get trapped). There is already software that decides what to do without
a human telling it what to do. For more see
[http://www.ted.com/talks/alex_wissner_gross_a_new_equation_for_intelligence.html](http://www.ted.com/talks/alex_wissner_gross_a_new_equation_for_intelligence.html)

------
axilmar
> Andrew Ng, who worked or works on Google’s AI, has said that he believes
> learning comes from a single algorithm - the part of your brain that
> processes input from your ears is also capable of learning to process input
> from your eyes. If we can just figure out this one general-purpose
> algorithm, programs may be able to learn general-purpose things.

As I wrote a few days ago here:

[https://news.ycombinator.com/item?id=7217967](https://news.ycombinator.com/item?id=7217967)

"The way intelligence works, in my opinion, is this:

1) experiences are stored in the brain. Experiences contain inputs from the 5
senses as well as the sense of danger/satisfaction at that point.

2) at each given moment, the brain takes the current input and matches it
against the stored experiences. If there is a match (up to a threshold), then
the sense of danger/satisfaction is recalled. Thus the entity is able to
'predict', up to a specific point, if the outcome of the current situation is
bad or good for it, and react accordingly.

The key thing to the above is that the whole process is fused together: the
steps for adding new experiences, matching new experiences, and recalling
reactions are fused together in a big pile of neurons."
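
A literal toy rendering of those two steps (illustrative only; the vectors and
thresholds are stand-ins): experiences are (input, valence) pairs, and
prediction is nearest-neighbor recall of the stored valence.

```python
import numpy as np

experiences = []   # list of (inputs, valence); valence < 0 means danger

def store(inputs, valence):
    """Step 1: record a sensory snapshot with its danger/satisfaction."""
    experiences.append((np.asarray(inputs, dtype=float), valence))

def predict(inputs, threshold=1.0):
    """Step 2: match current input against memory; recall if close enough."""
    x = np.asarray(inputs, dtype=float)
    if not experiences:
        return None
    dists = [np.linalg.norm(x - e) for e, _ in experiences]
    i = int(np.argmin(dists))
    return experiences[i][1] if dists[i] <= threshold else None

store([0.9, 0.1], -1.0)      # e.g. "hot stove" felt bad
store([0.1, 0.8], +1.0)      # e.g. "food" felt good
print(predict([0.8, 0.2]))   # near the bad memory, so it recalls -1.0
```

In the comment's terms, the "fused pile of neurons" is what you'd get by
replacing the explicit list and distance computation with one associative
network doing storage, matching, and recall at once.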

------
kordless
I'm just going to stick this out there because I'm a futurist and it's my role
to share with others what I'm thinking about. What I share may be flat out
wrong, scary, half assed, or appear to be crazy. So be it.

We've advanced a lot in the last 100 years. We're starting to see a bigger
picture forming with the advent of compute and networking capabilities.
Combining simple elements of these basics gives rise to surprising and
interesting behaviors. See "Twitch Plays Pokemon":
[http://news.cnet.com/8301-1023_3-57619058-93/twitch-plays-pokemon-is-now-a-fight-for-the-soul-of-the-internet/](http://news.cnet.com/8301-1023_3-57619058-93/twitch-plays-pokemon-is-now-a-fight-for-the-soul-of-the-internet/)
as an example of surprising behavior.

The more we look in detail at the universe around us, the more puzzling it
gets. Prime number spirals are unexplained. The results of the two-slit
experiment indicate the observer plays a part in collapsing a particle's
probability wave. The effects of dark matter could be a result of parallel
universes. You couldn't make up weirder shit if you tried.

It's not a huge leap of logic to assume some parts of our brain operate at a
quantum level. If that first statement turns out to be true, I don't think it
would be entirely unreasonable to assume AI will do so as well. Given that
computers already use some quantum properties, it's also reasonable to expect
that advancement in AI lies in this direction.

When they announced Google was getting a D-Wave computer, I got really
interested. Granted, they know beans about how it works (and whether or not it
actually works at all) but it's still crazy interesting to consider.

As I said, I could be wrong or crazy. Or both.

~~~
carlob
> It's not a huge leap of logic to assume some parts of our brain operate at a
> quantum level.

To the best of our knowledge it is still a pretty large leap of logic.

~~~
kordless
It's definitely a leap, but there's a decent amount of information (not proof,
however) on the subject lying around. We still don't understand how the brain
brings about consciousness. I guess I should say it's not a huge leap to
assume it has something to do with other things we also don't understand.
Given that quantum effects and number theory still elude us in areas, it's a
decent approach to assume they _might_ be related.

I've debugged problems in my code that, at first glance, appear to be
unrelated to each other. The fact that something is slightly off in one area
isn't proof that something off in another area is related, but it's a good
place to start looking.

~~~
carlob
As far as I know there is a decent amount of information _against_ the fact
that brains are quantum computers.

On the other hand there is a talk by Hinton on youtube [1] that sheds
interesting light on some variants of deep learning and the way the brain uses
(classical) noise.

[1]
[https://www.youtube.com/watch?v=DleXA5ADG78](https://www.youtube.com/watch?v=DleXA5ADG78)

------
temuze
> "the part of your brain that processes input from your ears is also capable
> of learning to process input from your eyes. If we can just figure out this
> one general-purpose algorithm, programs may be able to learn general-purpose
> things."

This is the Holy Grail of CS. I believe we're closer than most people would
expect and I think it's going to be a race to the finish line.

~~~
maaku
For perception, maybe. The neocortex (hint: where we do everything we consider
"thinking") operates on entirely different principles and is not
interchangeable with perception.

------
youlweb
I'm a strong A.I. hobbyist, and I just put up a website for my basic abstract
algorithm: genudi. The website's implementation of the algo revolves around
having a conversation with a computer. Currently in limited release; you can
request a Pioneer account there:
[http://www.genudi.com](http://www.genudi.com)

~~~
youlweb
An example is worth a thousand words, take a look at this conversation with a
strong AI machine:
[http://www.genudi.com/share/27/Dialog](http://www.genudi.com/share/27/Dialog)

------
varelse
I think one of the next hot emerging careers will be connecting and
interfacing traditional computational algorithms (for the bits that are
clearly orders of magnitude more efficient than using a multi-layer neural
network to do them) with neural networks, SVMs, and/or whatever comes next
that figures out how to allocate such work from raw data feeds.

------
bovermyer
I think going about it from a complex algorithm point of view is the wrong
approach.

We should, instead, be concentrating our efforts on two things - sensing, and
reacting. The predictability of the reaction doesn't matter; all that matters
is that the machine reacts. Everything else will need to depend on
evolutionary processes, which requires a third criterion - changing reaction
based on prior data.

If the previous reaction did not lead to a negative result ("negative" meaning
detrimental to one or more arbitrary values), then the reaction can continue
to the same stimulus. If the previous reaction, however, elicited a strong
positive result, then the reaction should be encouraged. Similarly, if it
triggered a strong negative response, it should be avoided.

To a degree, you could do this without any kind of "operating system," just by
using sensory data as inputs in a complex circuit.

At least, that's how I would approach it. I know nothing about A.I. research.
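
As a rough illustration of that loop (a toy sketch, not a claim about real
A.I.; the stimuli, reactions, and rewards are all stand-ins): reactions carry
weights, and each observed result nudges the weight up or down.

```python
import random

weights = {}   # (stimulus, reaction) -> learned preference

def react(stimulus, reactions):
    """Sense and react: pick a reaction, favoring ones that worked before."""
    w = [max(weights.get((stimulus, r), 1.0), 0.01) for r in reactions]
    return random.choices(reactions, weights=w)[0]

def adapt(stimulus, reaction, result):
    """Encourage reactions with positive results; avoid negative ones."""
    key = (stimulus, reaction)
    weights[key] = weights.get(key, 1.0) + result

for _ in range(200):   # toy environment: "heat" punishes "touch"
    r = react("heat", ["touch", "withdraw"])
    adapt("heat", r, -1.0 if r == "touch" else +1.0)

print(react("heat", ["touch", "withdraw"]))   # almost always "withdraw"
```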

------
forrestthewoods
There are many classes of problems in computer science and AI falls into one
of my favorites. If today the world had a machine with infinite CPU power and
infinite RAM we _still_ wouldn't have a good AI.

We just don't have the knowledge to utilize such resources to write an AI that
could, for example, play League of Legends or Starcraft at a level beyond
professional gamers. And it certainly couldn't write a best-selling novel. It
could solve an arbitrarily large traveling salesman problem, but it couldn't
do those other things. I think that's kind of awesome.
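
The TSP point is worth making concrete: brute force is trivially correct and
only fails for lack of compute, which is exactly what "infinite CPU" would
fix. A sketch (fine for a handful of cities; nothing like it exists for
"write a good novel"):

```python
from itertools import permutations
import math

cities = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 2)]

def tour_length(order):
    """Total length of the closed tour visiting cities in this order."""
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Try every possible ordering; with unbounded CPU this works at any size.
best = min(permutations(range(len(cities))), key=tour_length)
print(best, tour_length(best))
```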

I'm not saying it can't be done. Assuming we humans don't kill ourselves I
think someday it will. But it's a long, long ways off.

~~~
codeulike
If you had a machine with infinite CPU power and infinite RAM you could
_evolve_ an AI. Greg Egan wrote a great story about that:
[http://ttapress.com/553/crystal-nights-by-greg-egan/](http://ttapress.com/553/crystal-nights-by-greg-egan/)

Someone tries it, with mixed results.

~~~
seanmcdirmid
Ah, but we weren't given infinite time also, so you might be able to evolve an
AI, but it might take many millions of years (like it did with real life).

~~~
Houshalter
He said infinite CPU power, which implies an infinite number of iterations. In
real life it would probably take a lot more than millions of years (because
computers are too slow to simulate populations of millions of minds).

~~~
seanmcdirmid
Does infinite energy (as infinite CPU cycles) really translate into immediate
time? My physics isn't great, but I thought energy was related to mass, while
entropy was related to time.

------
yconst
Dear HN users: If you are even minimally interested in the topics that this
post "covers", please, do yourselves a favour and open up _any_ book about
machine intelligence instead of reading such uninformed and negligent posts.

I urge you to, seriously.

------
davidrangel
From a pure layman's perspective: if you believe in evolution, then what
separates us from a reptile (as mentioned in the post) is almost certainly
something we can figure out and replicate. There is nothing "special" there.

So if you believe computers today already have the "intelligence" of a
reptile, or a toddler (i.e., ability to play pong), or something along those
lines, it's only a matter of time before a computer has the intelligence of a
full-blown adult human (and soon thereafter much more).

Our level of intelligence/awareness seems magical only because we haven't
fully understood it yet. That will change.

~~~
dinkumthinkum
No I don't think "belief" in evolution implies that at all. That's a big jump.

~~~
davidrangel
Why is that a big jump? I'm not saying it will be easy or quick. It does imply
that getting from a reptile-level intelligence to a human-level intelligence
was a natural process and something that can be reverse engineered.

------
feralmoan
I dunno. The definition of intelligence is biased by our own ability to
comprehend, and therefore profoundly slim (even IQ can be gamed, and most
people puke at emotional/lizard-brain intelligence), so there's a large chance
"science" either under- or overshoots its "stakeholder aware" analysis and
doesn't know functional, self-preserving, iteratively enhancing intellect when
it's looking it in the face. You know us monkeys, running around derping
science with wacky wavy hands going 'Hai Dolphin What You do jump hoop!'. AI,
rockin' the casbah. Maybe one day!

------
jk4930
"But artificial general intelligence might work, and if it does, it will be
the biggest development in technology ever."

I'd like to point interested readers to the AGI Conference series[1], the Open
Cognition Project[2], and a mostly outdated (2009) but still useful list of
everything AGI[3].

[1] [http://agi-conf.org/](http://agi-conf.org/)

[2]
[http://wiki.opencog.org/w/The_Open_Cognition_Project](http://wiki.opencog.org/w/The_Open_Cognition_Project)

[3] [http://linas.org/agi.html](http://linas.org/agi.html)

------
skywhopper
"But how do we make a computer program that decides what it wants to do? How
do we make a computer decide to care on its own about learning to drive a car?
Or write a novel?"

Perhaps it'd be better to ask: why would we _want_ a computer to do these
things? I certainly do not want to live in a world where computers have their
own motivations and desires and the ability to act on the same.

Actually, I can put that more strongly: none of us _will_ live very long in a
world where computers have their own motivations, desires, and the ability to
act on the same.

~~~
catshirt
you seem pretty convinced living computers would kill us all. this seems like
a pretty big assumption to me?

~~~
Houshalter
Because you are made of atoms that can more efficiently be used for something
else. Morality (and all human values) is a purely human concept that evolved
in the specific conditions of human evolution (and even just our specific
culture.) AIs are not anthropomorphic, they don't have to have _anything like_
human minds or values.

~~~
onnoonno
Why do you think AI would be more concerned about efficiency than
understanding?

~~~
PeterisP
There are a bajillion possible future worlds that a particular mind might
choose to make, and only an extremely tiny fraction of these worlds include
the happiness of mankind as a priority above everything else. And being not
the first priority, in essence, means being a worthless disturbance in the way
of the first priority, and thus extinction.

------
catshirt
" _But how do we make a computer program that decides what it wants to do? How
do we make a computer decide to care on its own about learning to drive a car?
Or write a novel?_ "

if intelligence is solved by reverse engineering the brain at a molecular
level surely consciousness and creativity are?

" _And maybe we don 't want to build machines that are concious in this
sense._"

if the physical composition of the brain defines intelligence and conscience,
i'm not sure you'll be able to pick and choose. i am all for artificial
conscious though. yolo.

------
urlwolf
Sorry to be negative; Sam Altman is a great guy and had plenty of valuable
insight before, but his take here is infantile at best. When people accuse PG
of speaking in very authoritative terms, it doesn't bother me. It's a writing
device. But here Sam is doing the same trick, with the difference that I know
the field he talks about well enough to see the poor logic transitions. My
take: don't imitate PG's style on areas where you are not deeply
knowledgeable.

------
wildermuthn
Are we building AI equivalents of the philosophical zombie?

[http://en.m.wikipedia.org/wiki/Philosophical_zombie](http://en.m.wikipedia.org/wiki/Philosophical_zombie)

If P-Zombies are an impossibility, then so too is true AI that dismisses the
need for consciousness, or that begs the question by assuming consciousness
will emerge from intelligence.

It may be that intelligence, true human intelligence, emerges from
consciousness.

------
ekianjo
> Something happened in the course of evolution to make the human brain
> different

Not just the human brain; a great number of animals share the very same
characteristics as the human brain. We find ourselves closer and closer to
them every single day as new research on animal neurology and animal behavior
gets published. It would be very wrong to suggest that intelligence is only a
human thing.

------
dblacc
AI was also the subject Gates pointed to when asked a similar question on the
AMA thread the other week on Reddit, I believe.

------
vonnik
Computers are actually pretty good at being creative already. That's not the
hardest problem to solve...
[http://www.psmag.com/navigation/nature-and-technology/rise-robot-artist-67731/](http://www.psmag.com/navigation/nature-and-technology/rise-robot-artist-67731/)

------
seanalltogether
How do we develop intelligent systems when we don't know how to measure
intelligence?

~~~
Houshalter
Who says we don't know how to measure intelligence? The simplest way would be
just to select a problem that requires intelligence and measure how well it
does at that problem.

~~~
rectangletangle
But what is a problem that specifically requires intelligence to solve? How do
we know we're measuring intelligence, and not the program's aptitude for the
particular problem?

~~~
Houshalter
Why does it matter? If it solves the problem then it's as good as if it were
fully intelligent. Who cares what goes on inside the black box?

If you're worried about it overfitting to a specific problem, give it lots of
problems and weight the solutions by complexity. That way you heavily favor
simple algorithms that can learn to solve a large class of problems over ones
that are more adapted to those specific problems.
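
A sketch of that scoring rule (the problems, the solver check, and the
complexity proxy are all invented for illustration): score a candidate across
many problems, minus a penalty on its own description length, so a short
general rule beats a memorized lookup table.

```python
def score(program_source, problems, solves, penalty=0.001):
    """Fraction of problems solved, minus a description-length penalty."""
    solved = sum(1 for p in problems if solves(program_source, p))
    return solved / len(problems) - penalty * len(program_source)

# Toy usage: a general successor rule vs. a partial lookup table.
problems = [(1, 2), (2, 3), (5, 6), (9, 10)]       # pairs (x, successor of x)
general = "lambda x: x + 1"
lookup = "lambda x: {1: 2, 2: 3}.get(x)"
solves = lambda src, p: eval(src)(p[0]) == p[1]
print(score(general, problems, solves) > score(lookup, problems, solves))  # True
```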

~~~
benched
All I'm learning from this thread is that some people see infinite regression
where others see tautology.

edit: what I mean:

Person 1: "What if we're just measuring the color of the sky? What if we're
just measuring blueness? What if we're just measuring a wavelength of light?
... ..."

Person 2: "The sky looks blue."

~~~
Houshalter
What do you mean?

------
whywhywhy5
The easiest way to create AI would be to make a model of our own brain's
neural pathways. With a computer model we could analyze and break down what
makes consciousness/intelligence, etc.

------
neverminder
In my opinion, true AI must have reasoning skills and be self-aware. This
brings the disturbing truth: either we never get it to work, or we will get it
to work and it will destroy us.

------
JabavuAdams
It's foolish to create general AI. What we should be working on is IA
(Intelligence Amplification).

Why would you create a god, when you could instead become a god?

------
bencollier49
Does anyone have a list of public companies (especially small ones) which are
involved in AI? There don't seem to be many besides the big players.

------
dinkumthinkum
I hate to get "meta" but it doesn't seem frontage worthy, it's just a musing
about AI.

------
namenotrequired
...and then we'll just need to figure out how to make humans more creative.

------
imranq
how does this guy consistently write mini-Paul Graham essays?

~~~
mappum
Because he basically is mini-Paul Graham.

~~~
Buttons840
Really? The essay "Andrew Ng thinks there's one algorithm underlying all
intelligence. Also, I hope we get conscious computers." is a mini Paul Graham
essay?

No PG essay is so lacking in content or original ideas. Conscious computers?
Who hasn't thought of conscious computers?
[http://en.wikipedia.org/wiki/History_of_artificial_intelligence#The_optimism](http://en.wikipedia.org/wiki/History_of_artificial_intelligence#The_optimism)

------
ratsimihah
Building AGI in Common Lisp if anyone wants to help.

~~~
jk4930
Do you have your own approach to AGI or are you implementing another concept?

~~~
ratsimihah
I have my own approach.

------
etanazir
D-Wave AI - maybe; diophantine AI - lol.

------
argumentum
There's a joke in the computational neuroscience community somewhat along the
lines of "consciousness is where neuroscientists go to die". I literally saw
this happen when I was at the Salk institute as a lowly undergraduate research
assistant. Sir Francis Crick was there at the same time, and spent his last 25
or so years in pursuit of a theory of consciousness. I was fortunate (and
humbled) to get to talk to him a few times before he passed in 2004, and he
was undoubtedly a thinker of superior ability and unbounded curiosity (a
pretty awesome combination).

Actually there were (and still are) a lot of the biggest names in the field
there at the time (including Crick, Terrence Sejnowski (a professor of mine
who co-invented the Boltzmann machine* amongst other things), and VS
Ramachandran). These are all people who were coming at the problem from the
biological &| pure science/math side of things (vs people like Andrew Ng,
whom Sam mentioned, who have a more CS/engineering based approach).

No doubt they consistently came up with spectacular theories and very
interesting models of how specific regions of the brain may function. For the
most part, how they worked was this:

1\. Come up with a biologically or cognitively plausible mathematical model
(many of which were fantastically cool).

2\. Implement and run this model on massively parallel architectures (at the
time not quite the level of technological sophistication you see nowadays, so
things may have changed a lot).

3\. To train, use feedback from EEG (this is what I "worked" on, but they also
worked with other electrical signals, MRI, and chemical measurements at the
level of individual neurons).

The biggest progress was made at the smallest level (understanding how
individual neurons and small networks work). This was primarily because
measurement at this level actually provided useful information. The
signal/noise ratio of EEG scalp recordings (which to this day gives me
nightmares) was (and is) so terrible that I left the field as a quite
disgruntled PhD student. Maybe I just didn't have the intellectual capacity,
but I never felt like I was working on anything that made sense. This was true
for _many_ of my fellow graduate students... after a couple of years, we felt
we were doing pseudoscience.

Rant completed, I think the CS/engineering approach is more promising: don't
worry about the biology or some grand theory of the mind and just try to do
something useful. Since computers get more powerful consistently, we'll
incrementally be able to do more and more useful things. If consciousness
emerges at all, it may or may not appear like _human_ consciousness. We may
not even be able to tell if/when this happens, but at least we would be
solving real problems in the meanwhile.

* [http://en.wikipedia.org/wiki/Boltzmann_machine](http://en.wikipedia.org/wiki/Boltzmann_machine)

------
ygmelnikova
Explain to me how a brain can evolve, and yet not understand how it itself
functions.

~~~
qbrass
There's little to no evolutionary pressure towards the brain understanding how
it functions.

~~~
ygmelnikova
Ok then, tell me where the evolutionary pressure came from for brains to
consider and act upon things like burials, math, surgery, and brain surgery in
particular.

------
byehnbye
Goodbye HN!

~~~
elwell
?

~~~
kordless
If you had AI, you wouldn't need a community like Hacker News, or something
along those lines.

------
henrygrew
"Something happened in the course of evolution to make the human brain
different from the reptile brain, which is closer to a computer that plays
pong." \- i beg to differ on this statement, clearly the complexity and sheer
magnificence of the human brain did not come about by mere chance, if this was
so it would have been easy to replicate this.This is proof of an intelligent
creator.

