
Zuckerberg, Musk Invest in Artificial-Intelligence Company Vicarious  - pmcpinto
http://blogs.wsj.com/digits/2014/03/21/zuckerberg-musk-invest-in-artificial-intelligence-company-vicarious/
======
mattj
The AI winter is very much over, and we're back to the good old days of
selling the future. I bet this team is very sharp, but there's still merit to
"over-promise, under-deliver."

"Phoenix, the co-founder, says Vicarious aims beyond image recognition. He
said the next milestone will be creating a computer that can understand not
just shapes and objects, but the textures associated with them. For example, a
computer might understand “chair.” It might also comprehend “ice.” Vicarious
wants to create a computer that will understand a request like “show me a
chair made of ice.”

Phoenix hopes that, eventually, Vicarious’s computers will learn how to
cure diseases, create cheap, renewable energy, and perform the jobs that
employ most human beings. “We tell investors that right now, human beings are
doing a lot of things that computers should be able to do,” he says."

~~~
samstave
Funny to think that instead of curing diseases or making cheap renewable
energy, we'd instead try to spend resources to invent a computer to do that
for us...

~~~
johnrob
Why do you think our creators made the simulated universe we live in? There's
an infinite-loop bug, however: each simulation tries to solve the real problem
by creating a sub-simulation.

~~~
0003
Now you have infinite problems.

------
trekky1700
When Zuckerberg and Ashton Kutcher invest in things, it doesn't really grab my
attention. But when Musk does, it really does sound promising.

The company's tech sounds really awesome; being able to perceive texture from
photos and interpret objects from it would be useful in so many real-world
applications.

~~~
speeq
I think Elon Musk also invested in Stripe, which I found interesting.

~~~
skadamat
In an interview he emphasized that he was a tiny initial investor and had no
idea what they were doing. (I'm guessing Peter Thiel convinced him to, since
Stripe was trying to help make their initial vision for PayPal a reality.)

------
kriro
I read and marked up "On Intelligence" on my train commute to work and have
scribbled a bunch of notes in the book. Pretty interesting, and I like the
basic idea of the memory-prediction framework, invariant representations,
"melodies of patterns", the focus on the neocortex, and the whole same general
algorithm for all senses.

I haven't had the time to research how far the general idea has gone or if it
is relevant at all, but the sketched examples were pretty interesting.

I also found the random remark of "consciousness = what it feels like to have
a neocortex" interesting.

Glad to see that some smart money is being bet in this general direction.

~~~
eurleif
>I also found the random remark of "consciousness = what it feels like to have
a neocortex" interesting.

So there's a way it feels to not have a neocortex? Doesn't feeling anything
imply you're conscious, which means you don't need a neocortex to be
conscious?

~~~
Lambdanaut
An insect "feels" pain. It doesn't, however, feel retrospective pain, and it
doesn't feel the past in the same way we do.

Having a neocortex is like having a 6th sense.

Our taste, smell, hearing, sight, and feeling neurons are all indirectly fed
into our brain. With a neocortex, that input is also fed in. It's our
"consciousness" feeling.

I hope that clears up the confusion.

~~~
eurleif
Thanks, that makes sense. I was thinking of a different definition of
"consciousness", the "hard problem" definition. [0] For an ant to feel pain,
it would have to be conscious in the hard problem sense.

[0]
[http://en.wikipedia.org/wiki/Hard_problem_of_consciousness](http://en.wikipedia.org/wiki/Hard_problem_of_consciousness)

------
wikiburner
Does anyone know why Dileep George left Numenta? Wasn't he a co-founder there
with Jeff Hawkins?

Is there much difference between the goals of Vicarious and Numenta?

~~~
ScottBurson
I can only speculate, but as you may recall, Numenta abandoned their original,
belief-propagation-based design, replacing it with a new one based on sparse
distributed memory. Dileep had done a lot of work on the original design, and
I recall reading that's what Vicarious is using, having licensed it from
Numenta. So I think you can put it down to a difference in technical direction
between Dileep and Jeff. As far as I know, the split was amicable.

~~~
otoburb
Their marketing and fund-raising approaches also seem to be completely
different. I think this was good for both of them, especially if Dileep is
licensing from Numenta.

Win-win scenario.

------
pweissbrod
"Phoenix hopes that, eventually, Vicarious’s computers will learn to how to
cure diseases, create cheap, renewable energy, and perform the jobs that
employ most human beings."

This line made me laugh. Which of the three goals is the most likely and
desired outcome? (I'll give you a hint: it isn't curing diseases or finding
energy.)

That's like saying: 'my robots will cure cancer, bring world peace and replace
most manual human jobs.'

~~~
jaibot
Agricultural technology already replaced most manual human jobs (from the era
when most human jobs were agricultural). Humans found other things to do, and
we found ways to use the surplus.

If it gets to the point where there isn't any unskilled labor left to do, we
can always choose as a society to vastly expand the welfare state and divvy up
at least part of the accumulated surplus to everyone. We have already moved in
this direction a bit, and I expect to see more things along the lines of
guaranteed minimum income in the future.

~~~
netcan
"vastly expand the welfare state and divvy up at least part of the accumulated
surplus to everyone"

That's a technology we haven't had much luck with so far. We've had economies
where "distribution" was related to direct wealth creation (make a sandwich
and eat it myself), property & labour (you make a sandwich with my bread, we
eat half each), and thievery (gimme your sandwich). We've done sharing in
small groups, which may have been the paleo-economy. We've done bits of
charity, welfarism, redistribution and centralization, but never really
succeeded at making those work well at a large scale, especially for those
supposed to be protected by them.

~~~
jaibot
We disagree about how well redistribution can work and has worked at scale. My
ultra-compressed (lossy) take is "pretty inefficient, but somewhat effective
at improving the lives of the less-wealthy".

One of the most dramatic and promising examples of redistribution,
GiveDirectly, is actually doing some followup research on the effectiveness of
their redistribution, and it looks pretty good so far: (pdf warning)
[http://web.mit.edu/joha/www/publications/haushofer_shapiro_u...](http://web.mit.edu/joha/www/publications/haushofer_shapiro_uct_2013.11.16.pdf)

That's an extreme example - a relatively-small scale transfer from wealthy
donors to a much poorer country - but it speaks well to the principle.

~~~
sirkneeland
I think there are pretty strong correlations between smaller social/political
units and more effective, efficient, and non-corrupt welfare/redistribution
systems.

There isn't a Nordic country with a population greater than that of the New
York metro area alone (let alone New York state, let alone the United States
as a whole).

~~~
netcan
Even in Nordic countries "normal" is working for a living. They have big
social and governmental institutions that have a lot of money passing through
them and they manage to do that relatively efficiently. But, they don't have a
complete disconnect between wealth creation by normal means (owning productive
property and/or working) and consuming that wealth. The government is just
more involved in the process.

If most people work, pay taxes and use the "free" public transport you still
have a situation where most people are both funding the transportation and
using it. Consumers & producers of stuff.

These futuristic ideas about AI doing all the work while most people are
unnecessary create a completely different jar of pickled fish.

------
SuperChihuahua
This fits Elon Musk's vision. He named three main and two smaller things that
will most affect the future of humanity.

Main: the Internet, the transition to a sustainable energy economy, and space
exploration, particularly extension of life to multiple planets

Smaller: artificial intelligence and biology

~~~
Houshalter
_If_ AI succeeds, then none of those other things will matter at all.

~~~
r00fus
> If AI succeeds, then none of those other things will matter at all.

Define success. Now define for whom - the investors? mankind? The AIs
themselves?

~~~
Houshalter
As in an Intelligence Explosion ([http://intelligence.org/ie-
faq/](http://intelligence.org/ie-faq/)). "Success" is really a bad way of
wording it; I just mean that if it happens, none of those things will matter,
regardless of whether it is friendly or not. Either we go extinct or the AI is
so far beyond us that our present progress doesn't make much difference.

------
loup-vaillant
If they do pull that off, I hope they will be very, _very_ careful. You know:
intelligence explosion, Friendly AI, taking over the world, that sort of
thing.

[http://intelligenceexplosion.com/](http://intelligenceexplosion.com/)

~~~
lukifer
We're at the middle of the process, not the beginning:
[http://omniorthogonal.blogspot.com/2013/02/hostile-ai-
youre-...](http://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-
soaking-in-it.html)

"Information has been running on a primate platform, evolving according to its
own agenda. In a sense, we have a symbiotic relationship to a non-material
being which we call language. We think it's ours, and we think we control it.
It's time-sharing a primate nervous system, and evolving towards its own
conclusions."

\- Terence McKenna

~~~
loup-vaillant
I'm not sure what your link has to do with your quote… Anyway, this blog post
is not quite right.

While I agree capitalism is a more pressing problem than AI right now, it
won't kill us all in 5 minutes. A self-improving AI… we won't even see it
coming. There is also much more brainpower dedicated to "fixing" industrial
capitalism than to addressing existential risks such as AI. And industrial
capitalism doesn't need fixing; it needs to be abolished altogether.

Corporations are even less autonomous than the author thinks. Sure, kill a
CEO, and some other shark will take its place. On the other hand, those sharks
are all from the same families. Power is still hereditary.

If people were truly informed about how the current system works, it would
collapse in minutes. To take only one example, fractional reserve banking is
such a fraud that if everyone suddenly knew about it, there would be some
serious "civil unrest", to put it mildly.

The same does not apply to an AI. It's just too powerful. Picture how much
smarter _we_ are than chimps. Now take an army of chimps and a small tribe of
cavemen (and women), which somehow want to exterminate each other. Well, the
chimps don't stand a chance if the humans have any time to prepare. The humans
have fire, sticks, lances… Their telepathy has unmatched accuracy (you know,
speech), and they can predict the future far better than the chimps. Now
picture how much more intelligent than _us_ an AI would be.

It's way worse.

\---

Now, this new-agey talk about information taking on a life of its own… It
doesn't work that way. Sure, there's an optimization process at work, and it
is not any particular human brain. But this optimization process is nowhere
near as dangerous as a fully recursive one (that is, an optimization process
that optimizes _itself_). And for _that_ to happen, we need to clear a few
mathematical hurdles first, like Löb's theorem.

But that's not the hard part. The hard part is to figure out what goals we
should program into the AI. Not only do we need to pin them down to
mathematical precision, but we don't even know what humanity wants. We don't
even know what "what humanity wants" even _means_. Hell, we don't even know if
it has any meaning at all. Well, we're not completely blind; we have
intuitions, and a relatively common sense of morality. But there's still a
long road ahead.

~~~
lukifer
The connection between the Hostile AI link and the McKenna quote is this: the
informational barrier between humans, institutions and technology is highly
permeable, and creates a perfect petri dish for natural selection in
informational life (you can model them as "memes", although the analogy to
genes isn't a perfect one).

Yes, it breeds far less rapidly than a Kurzweilian AGI, and one day we will
face that music for better or worse. But what I'm driving at is that it will
not come as a singular moment when SkyNet gets the switch flipped; it will be
a gradual evolution from the pre-existing emergent intelligence of the
"human+institution+technology" informational network. (Even if you had a day
where you flipped the switch on an infinitely accelerating AI, that life form
would still inherit the legacy data of humans and their institutions, which
would inevitably shape its consciousness, infecting it with any memes sticky
enough to cross the barrier.)

See also: the coming wave of Distributed Autonomous Corporations.
[http://www.economist.com/blogs/babbage/2014/01/computer-
corp...](http://www.economist.com/blogs/babbage/2014/01/computer-corporations)

> On the other hand, those sharks are all from the same families. Power is
> still hereditary.

Too true. Just because new evolutionary cycles are happening powerfully at
higher layers of abstraction, it doesn't mean the old ones disappear.

------
tremols
Have we cracked the brain's "programming language" yet? I am afraid that
until now research has focused on the biological side of it, and it makes
more sense to me to replicate the logic instead of replicating the brain's
biological structure.

I believe that dataflow/reactive programming is the answer and the direction
to follow, as its principles are pretty close to how neurons work; plus it can
be made to work on top of von Neumann architectures.
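To sketch the dataflow idea (a minimal, illustrative toy; all names are made up): nodes recompute and notify their dependents when an input changes, loosely the way activation propagates between neurons.

```python
# Minimal dataflow/reactive sketch: each node recomputes when an input
# changes and pushes the change to its dependents. Illustrative only.

class Node:
    def __init__(self, compute, *inputs):
        self.compute = compute          # function of the input nodes' values
        self.inputs = inputs
        self.dependents = []
        for node in inputs:
            node.dependents.append(self)
        self.value = None

    def refresh(self):
        new_value = self.compute(*(n.value for n in self.inputs))
        if new_value != self.value:
            self.value = new_value
            for dep in self.dependents:  # change ripples through the graph
                dep.refresh()

class Source(Node):
    """A leaf node whose value is set from outside."""
    def __init__(self, value):
        super().__init__(None)
        self.value = value

    def set(self, value):
        self.value = value
        for dep in self.dependents:
            dep.refresh()

# A tiny graph: c = a + b, d = 2 * c
a, b = Source(1), Source(2)
c = Node(lambda x, y: x + y, a, b)
d = Node(lambda x: 2 * x, c)
c.refresh()
a.set(10)        # updating a source ripples through c and d automatically
print(d.value)   # prints 24
```

The point is that control flow is driven by data changes rather than by a sequential program, which is (very loosely) the neuron-like property the comment is gesturing at.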

~~~
quomopete
The brain having a programming language would have to be based on a lot of
assumptions that not everyone is willing to make.

~~~
Vardhan
I don't think he meant that in the literal sense, hence the quotes.

The brain's "programming language" refers more to the idea of what makes
the brain's biological structure work to produce human perception.

~~~
quomopete
I realize that; I made my comment toward this being a metaphor, and I still
stand by it.

------
spinchange
Replicating the neocortex is Kurzweil's vision / approach also:
[http://en.wikipedia.org/wiki/How_to_Create_a_Mind](http://en.wikipedia.org/wiki/How_to_Create_a_Mind)

~~~
ape4
neocortex would be a cool company name

~~~
aswanson
It would. Too bad it's taken by a lame Oracle consultant-type outfit:
[http://neocortex.com/](http://neocortex.com/)

------
giardini
"Replicating the neocortex, the part of the brain that sees, controls the
body, understands language and does math. Translate the neocortex into
computer code and 'you have a computer that thinks like a person,” says
Vicarious co-founder Scott Phoenix.'"

Do you? Other than the language part, it sounds like you may instead have an
electronic lizard or cow. Add language and you might have an electronic parrot
or dolphin (they can do some language processing).

Something's missing - the ability to reason: deduction, induction and
abduction. The ability to set goals and to find a path to those goals. These
are the magic that everyone has been seeking and not finding for a long time.

The pieces the Vicarious founder speaks of are available today. We have
exquisite computer vision, pretty good language understanding and fair robots,
but no strong AI and certainly no embodied AI. The promises above are hollow.
But it will make some people a lot of money.

~~~
Houshalter
Well, it's not clear where such high-level functions come from, but it's
certainly progress. Making an AI as smart as an animal would be an incredible
advancement, btw.

~~~
giardini
"Making an AI as smart as an animal would be an incredible advancement btw."

Only if your goal was to make an artificial (non-human) animal. But would that
be a step toward making an artificial human?

~~~
Houshalter
Of course it would be. Human brains are only slightly different from animal
brains. Most of the work goes into getting to chimps; then it's just a short
distance to humans. Animal brains have incredible pattern recognition and
reinforcement learning. We don't _have_ to take the path evolution did of
course, but it would be progress.

------
sirkneeland
"Elon Musk made the electric car cool.

Mark Zuckerberg created Facebook.

Ashton Kutcher portrayed Apple founder Steve Jobs in a movie."

Which one of these things is not like the others?

------
arfliw
First time I've ever heard of Mark Zuckerberg investing in anything separate
from Facebook. He has always cited 'focus' as the reason why he never does.

~~~
skadamat
He also invested in Panorama Education -
[http://www.crunchbase.com/person/mark-
zuckerberg](http://www.crunchbase.com/person/mark-zuckerberg)

------
peterhunt
Vicarious seems similar to Numenta. Which makes sense since they share the
same cofounder.

------
z3phyr
Computers based on neuromorphic design are the best bet for intelligent
Machines.

The question is, how to control analog computations with a programming
language?

~~~
TeMPOraL
> _Computers based on neuromorphic design are the best bet for intelligent
> Machines._

I wouldn't go that far. We don't understand enough about the nature of
intelligence and the way the brain works; right now, saying that "the best bet
for AI is for a computer to look like a brain" is like saying "the best bet
for heavier-than-air flight is for a machine to flap wings like birds", which
was a stupid idea for reasons we now understand well.

~~~
z3phyr
Neuromorphic computers do not look like a brain. They just borrow some of its
so-called 'features'.

I am not saying that we should copy the brain. But at least we could copy the
design, just like we did for aeroplanes. Neuromorphic sensors could act like
our cerebellum, which acts during unforeseen incidents. They are typically
error-tolerant.

~~~
enupten
In that case, we should probably move to a noisier (/faster/cheaper) floating
point processor.

~~~
Houshalter
I wonder if you could make it completely analog. Find functions that can be
done fast/cheaply in silicon, and then design learning algorithms that can
take advantage of them.
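A toy sketch of what I mean, assuming the analog imprecision is just multiplicative noise on every multiply (purely illustrative, not a real analog model):

```python
# Pretend our hardware's multiply is cheap but noisy, the way analog
# circuits are, and check that a simple learning rule still converges
# despite the imprecision. Illustrative only.
import random

random.seed(0)

def noisy_mul(a, b, noise=0.05):
    """Stand-in for an imprecise analog multiplier (+/- 5% error)."""
    return a * b * (1 + random.uniform(-noise, noise))

# Learn y = 3x via stochastic gradient descent, with every multiply noisy.
w = 0.0
for _ in range(2000):
    x = random.uniform(-1, 1)
    y_true = 3 * x
    y_pred = noisy_mul(w, x)
    err = y_pred - y_true
    w -= 0.1 * noisy_mul(err, x)   # gradient step, also through noisy hardware

print(round(w, 1))   # ends up hovering near 3 despite the noisy arithmetic
```

Since the noise is zero-mean, the gradient steps average out and the weight still settles near the right answer, which is the kind of error tolerance a learning algorithm could exploit on cheap analog hardware.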

------
klrr

       * First Law: A robot must never harm a human being or, through inaction, allow any human to come to harm.
       * Second Law: A robot must obey the orders given to them by human beings, except where such orders violate the First Law.
       * Third Law: A robot must protect its own existence unless this violates the First or Second Laws.

~~~
deletes

      * Zeroth Law: A robot must never harm humanity or, by inaction allow humanity to come to harm.
    

Every other law gets an _unless this interferes with the zeroth law._ suffix.

I encourage anyone to read the Robots series, specifically (in that order)
_The Caves of Steel_, _The Naked Sun_ and _The Robots of Dawn_, where the
three laws are used in the story, and even the zeroth law is implied in the
third book.

~~~
otikik
And don't assume that since you watched the "I, Robot" movie you don't need to
read the series.

~~~
MBlume
The movie was really more of a deconstruction than an adaptation of the books.

------
adamio
'Seeing' & understanding unstructured textual data is a huge step towards
replacing manual human work.

Captcha appears to be a good place to start. It'd be awesome to feed some
software a messy Excel document and ask it to answer questions about it.

------
dsugarman
privatization of research is real and awesome!

~~~
loup-vaillant
Probably not. If Noam Chomsky is to be believed (I believe him), most research
to date has been publicly funded. In the US, it has been mostly through
military expenditures. (To take only one example, ARPANET itself was funded by
the military.)

The actually awesome part is having huge investments on long term research.
Private or public, it doesn't make any difference.

~~~
throwawayaway
In response to your comment on another topic: You can run a "freedom box" as
follows: [http://freedombone.uk.to](http://freedombone.uk.to) The guide will
work for a raspberry pi or a beagle bone. It was created out of frustration
with progress of the freedom box project.

------
Punoxysm
I know someone who interviewed at Vicarious and came away unimpressed. That
said, any company with an investment by a guy who can make his company buy it
out is a good one to invest in.

------
kolbe
Does this company do research or implementation of existing research?

------
pmcpinto
I'm really looking forward to seeing what this team is building.

------
SimpleXYZ
I like how their contact form has a captcha.

------
secondForty
And it'll be great at selling ads!

------
chris_mahan
If they could make something with the intelligence of a common bee, they could
make awesome drones.

------
6cxs2hd6
Probably Larry Page already knew about this when he recently said he'd rather
invest in Musk biz than Gates charity?

(Not that I agree with him, but it helps explain why he uttered such a thing.)

~~~
raldi
Larry didn't actually utter that, BTW. Check the transcript:

[http://insideevs.com/google-ceo-larry-page-billions-go-
tesla...](http://insideevs.com/google-ceo-larry-page-billions-go-tesla-ceo-
elon-musk/)

~~~
6cxs2hd6
I agree that a paraphrase =/= a quote.

Are you saying my paraphrase significantly misrepresented him? If so, how?

(I'm not trying to be argumentative, I just genuinely don't understand your
point.)

~~~
raldi
He didn't mention Gates or charity at all.

It's like if you said, "I like ice cream" and I reported it as, "This person
thinks buying ice cream is more noble than giving a starving family a bag of
rice."

~~~
6cxs2hd6
OK I understand now. I read this sentence in the article:

 _...suggesting that when he passes away, he’d like for his billions to go to
Tesla’s Musk._

Although leaving your billions to X implies not leaving them to Y -- e.g.
Gates -- that's not necessarily true.

And more basically, this is the article writer's sentence -- not a quote from
Larry Page.

I was wrong. Thank you for helping me understand why.

------
graycat
Look, guys, sure, in some sense computing is part of the best promise for AI.
Fine. I'll even agree that at least for now computing is necessary.

But, note, nearly everything we've done in computing, especially in Silicon
Valley for the past 15 years, has been to apply routine software development
to work that we already well understood how to do manually. A small fraction
of the efforts have been some excursions into more, but these have been
relatively few and with rarely very impressive results. Net, what Silicon
Valley does know how to do is build, say, SnapChat (right, it keeps the NSA
spooks busy looking at the SnapChat intercepts from Sweden!).

But for anything that should be called AI, there is another challenge that is
very much necessary -- knowing how to do that. Or, if you will, how to write
the software design documents from the top down to the level of the
individual programming statements. Problem is, very likely and apparently, no
one knows how the heck to do that.

Given a candidate design, people should want to review it, and about the only
way to convince people (short of the running software passing the Turing test
or some such) is to write out the design in terms of mathematics. Basically
the only solid approach is via mathematics; essentially everything else is
heuristics to be validated only in practice, that is, an implementation and
not a design.

Thing is, I very much doubt that anyone knows how to write a design with such
mathematics. If so, then long ago there should have been such in an AI journal
or with DARPA funding.

Basically, bluntly, no one knows how to write software for anything real about
AI. Sorry 'bout that.

Why? We hardly know anything about how the brain works. We don't know more
about how the human brain works than my kitty cat knows about how my computer
works. Sorry 'bout that. And AI software will have a heck of a time catching
up with my kitty cat.

By analogy, we don't know more about how to program AI than Leonardo da Vinci
knew about how to build a Boeing 777. Heck the Russians didn't even know how
to build an SR-71. Da Vinci could draw a picture of a flying machine, but he
had no clue about how to build one. Heck, Langley fell into the Potomac River!
Instead, the Wright brothers built a useful wind tunnel (didn't understand
Reynolds number), actually were able to calculate lift, drag, thrust, and
engine horsepower, and had found a solution to three axis control -- Langley
failed at those challenges, and da Vinci was lost much farther back in the
woods.

We now know how our daughters can avoid cervical cancer. Before the solution,
"we dance 'round and 'round and suppose, and the secret sits in the middle,
and knows.", and we didn't know. Well, the cause was HPV, and now there is a
vaccine. Progress. Possible? Yes. Easy? No. AI? We're not close enough to be
in the same solar system. F'get about AI.

~~~
Houshalter
Well we do actually have a purely mathematical approach to AI worked out.
Granted it requires an infinite computer, and personally I don't think it will
lead to practical algorithms. But still, it exists. And from the practical
side of things, machine learning is making progress in leaps and bounds. As is
our understanding of the brain.

Remember that airplanes weren't built by Da Vinci because he didn't have
engines to power them. It wasn't that long after engines were invented that we
got airplanes. The equivalent for AI, computing power, is already here or at
least getting pretty close.

~~~
graycat
> Well we do actually have a purely mathematical approach to AI worked out.

Supposedly, with enough computer power and enough data, a one-stroke solution
to everything is stochastic optimal control, but that solution applies brute
force to, say, planetary motion instead of Newton's second law of motion and
law of gravity. Else we need to insert such laws into the software, but we
would insert only laws humans knew from the past, or have the AI software
discover such laws, which is not so promising. This stochastic optimal
control approach is not practical or even very insightful. But it is
mathematical.

> machine learning is making progress in leaps and bounds.

I looked at Prof Ng's machine learning course, and all I saw was some old
intermediate statistics, in particular, maximum likelihood estimation (MLE),
done badly. I doubt that we have any solid foundation to build on for any
significantly new and powerful techniques for machine learning. I see nothing
in machine learning that promises to be anything like human intelligence.
Sure, we can write a really good chess program, but no way do we believe that
its internals are anything like human intelligence.

> As is our understanding of the brain.

Right, there are lots of neurons. And if someone gets a really big injury just
above their left ear, then we have a good guess at what the more obvious
results will be. But that's not much understanding of how the brain actually
works.

It's a little like we have a car, have no idea what's under the hood, and are
asked to build a car. Maybe we are good with metal working, but we don't even
know what a connecting rod is.

> It wasn't that long after engines were invented that we got airplanes.

The rest needed was relatively simple, the wind tunnel, some spruce wood,
glue, linen, paint, wire, and good carpentry. For the equivalent parts of AI,
I doubt that we have even a weak little hollow hint of a tiny clue.

In some of the old work in AI, it was said that a core challenge was the
'representation problem'. If all that was meant was just what programming
language data structures to use, then that was not significant progress.

Or, sure, we have a shot at understanding the 'sensors' and 'transducers' that
are connected to the brain: Sensors: Pain, sound, sight, taste, etc.
Transducers: Muscles, speech, eye focus, etc. We know some about how the
middle and inner ear handles sound and the gross parts of the eye. And if we
show a guy a picture of a pretty girl, then we can see what parts of his brain
become more active. And we know that there are neurons firing. But so far it
seems that that's about it. So, that's like my computer: For sensors and
transducers it has a keyboard, mouse, speakers, printer, Ethernet connection,
etc. And if we look deep inside then we see a lot of circuits and transistors.
But my kitty cat has no idea at all about the internals of the software that
runs in my computer, and by analogy I see no understanding of the analogous
details inside a human brain.

Or, we have computers, and we can write software for them using If-Then-Else,
Do-While, Call-Return, etc., but for writing software comparable with a human
brain we don't know the first character to type into an empty file for the
software. In simple terms, we don't have a software design. Or, it's like we
are still in the sixth grade, have learned, say, Python, and are asked to
write software to solve the ordinary differential equations of space flight to
the outer planets -- we don't know where to start. Or, closer in, we're asked
to write software to solve the Navier-Stokes equations -- once we get much
past toy problems, our grid software goes unstable and gives wacko results.

Net, we just don't yet know how to program anything like real, human
intelligence.

~~~
Houshalter
I was referring to AIXI as the perfect mathematical AI.

The main recent advancement in machine learning is deep learning. It's
advanced the state of the art in machine vision and speech recognition quite a
bit. Machine learning is on a spectrum from "statistics" with simple models
and low dimensional data, to "AI" with complicated models and high dimensional
data.

>if someone gets a really big injury just above their left ear, then we have a
good guess at what the more obvious results will be. But that's not much
understanding of how the brain actually works.

Neuroscience is a bit beyond that. I believe there are also some large
projects like Blue Brain working on the problem.

I swear I saw a video somewhere of a simulation of a neocortex that could do
IQ-test-type questions and respond just like a human. But the point is we do
have more than nothing.

~~~
graycat
> I was referring to AIXI as the perfect mathematical AI.

At

[http://wiki.lesswrong.com/wiki/AIXI](http://wiki.lesswrong.com/wiki/AIXI)

I looked it up: His 'decision theory' is essentially just stochastic optimal
control. I've seen elsewhere claims that stochastic optimal control is a
universal solution to the best possible AI. Of course, need some probability
distributions; in some cases in practice, have those.

That reference also has

> Solomonoff’s theory of universal induction formally solves the problem of
> sequence prediction for unknown prior distribution.

Hmm? Then the text says that this solution is not _computable_ \-- sounds bad!

Such grand, maybe impossible, things are not nearly the only way to exploit
mathematics to know more about what the heck we are doing in AI, etc.

~~~
Houshalter
Approximations to AIXI are possible and have actually played Pac-Man pretty
well. However, I still think Solomonoff induction is too inefficient in the
real world. But AIXI does bring up a lot of real problems with building _any_
AI, like preference solipsism and the anvil problem, and designing utility
functions for it.
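To illustrate the Solomonoff-induction idea behind AIXI with a toy: keep a Bayesian mixture over hypotheses, weight each by 2^-complexity, and predict by weighted vote. The real thing mixes over all computable programs and is uncomputable; here the "programs" are just fixed repeating bit patterns, so this is purely illustrative.

```python
# Toy Solomonoff-style induction: hypotheses are "the sequence repeats
# this pattern forever", with a 2^-length complexity prior.
patterns = ["0", "1", "01", "10", "001", "011", "0110"]
weights = {p: 2.0 ** -len(p) for p in patterns}

observed = "01"
for t, bit in enumerate(observed):
    # Zero out hypotheses contradicted by the observed bit at step t.
    weights = {p: (w if p[t % len(p)] == bit else 0.0)
               for p, w in weights.items()}

# Posterior-weighted prediction of the next bit.
t = len(observed)
total = sum(weights.values())
p_zero = sum(w for p, w in weights.items() if p[t % len(p)] == "0") / total
print("P(next bit = 0) =", p_zero)   # about 0.57: "01" dominates
```

After seeing "01", the patterns "01", "011" and "0110" are all still consistent, but the simplest one ("01") carries the most weight, so the mixture leans toward predicting 0 next. That Occam bias is exactly what the complexity prior formalizes.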

------
erikpukinskis
The trouble with this kind of artificial intelligence is that I don't think
it's possible to think like a human without actually having the experience of
being human.

Sure, I think we could aim to build what is essentially a robot toddler, with
a sensory/nervous/endocrine system wired up analogously to ours. It would
basically be a baby, and would have to go through all the developmental stages
that we go through.

But I suspect we'll have a hard time modeling that stuff well enough to create
anything more than an "adolescent with a severe learning disability". People
underestimate just how carefully tuned we are after millions of years of
evolution. The notion that we could replicate that without having another
million years of in situ testing and iteration seems naive.

And even then, why would we expect the AI to be smarter than a human? There is
already quite a lot of variation in humans. Many people at the ends of the
bell curve have extraordinary processing power in ways typical humans don't.
But it turns out while those things are useful in some ways, they limit those
people in other ways. So it's not that we haven't tried out evolved designs;
it's that, on balance, they don't seem to function fundamentally better.

One cool thing about the robot is that you could have many bodies having many
experiences all feeding into one brain. But I'm not convinced that would
actually lead to "smarter". I mean, look at old people. Yes we get smarter as
we age. But age also calcifies the mind. All of that data slowly locks you
into a prison of past expectations. In the end, it's a blend of naive and
experienced people in a society that maximizes intelligence. And again, it's
not like societies haven't been experimenting with that blend. Cultures have
evolved to hit the sweet spot. It's not clear that adding 1000 year old
intelligences would help.

And anyway, we already have 1000 year old intelligences: books!

You could say that there is benefit to having all of that in one "head" but
then you have to organize it! Which experience drives the decisions, the one
from 2014 or the one from 3014?

Again, culture evolved explicitly to solve this problem. People write books
and the ones that work stick around.

I guess what I'm saying is the evolution of the human being is already here:
it's the human race, fed history via culture, connected by the internet, in
symbiosis with computers.

The idea that removing the humans from that system would make it smarter makes
no sense to me. Nor does the idea of writing programs to do the jobs that
humans do well. It's like creating a basketball team with 5 Shaquille
O'Neals. I don't think they'd actually be able to beat a good, diverse team
with one or two Shaqs.

Or think of it this way: if numerical/logical aptitude is such a huge
advantage in advancing capital-U Understanding, why do smart people bother
learning to paint? Why do we bother listening to children? Why do we bother
having dogs?

I would argue it's because intelligence is as multi-media as the universe is.
Sometimes a PhD has something to learn from a basset hound. And the human race
has just as good a handle on it as any AI ever will. We just have a different
view of the stage. We have the front row and they have the balcony.

------
JackFr
Long con.

------
pjbrunet
Call it what it is, an expert system, market research, a database of
decisions/observations. Real "artificial intelligence" only exists in science
fiction, in the minds of children playing with toys. Your computer (doll)
won't ever love you back or have any awareness or understanding, no matter how
badly you want it to. It's a cool-sounding buzzword for marketing, but if
there's any intelligence here, it's coming from a few developers hiding behind
their tricky algorithms.

A computer will never have intelligence, no matter how many factors and
randomizations you code in to give the illusion of intelligence. Calling a
collection of observations "intelligence" is an insult and severe
underestimation of what intelligence is. If you believe artificial
intelligence is possible, you're missing out on what life has to offer--or you
would never think a box of switches could come alive.

There's no hint of evidence you = your brain. It's safe to say the brain
processes information literally. But we have no idea where intelligence
originates. Sadly, some people never get beyond a literal interpretation of
things.

~~~
JamesArgo
Sadly, some people confuse their ignorance with knowledge and make all kinds
of embarrassing claims.

>There's no hint of evidence you = your brain.

Sign up for a lobotomy and we'll see how much "you" is left afterwards.

> If you believe artificial intelligence is possible, you're missing out on
> what life has to offer--or you would never think a box of switches could
> come alive.

You got me. Believing x would make me think life has less meaning. Therefore,
x is false. What an argument.

~~~
pjbrunet
"Sign up for a lobotomy"

Correlation does not imply causation. If I unplug your LCD, that doesn't imply
the application failed. If I pull out your CPU, that doesn't imply the cloud
app has stopped functioning. Possible examples of this: people pronounced
"brain dead" who have reached out to grab a scalpel as their organs were about
to be harvested, or who have come back to life after brain death, or who can
quote a conversation that happened in the room while brain dead, etc.

