

Interview with Stephen Wolfram on AI and the Future - cjdulberger
https://gigaom.com/2015/07/27/interview-with-stephen-wolfram-on-ai-and-the-future/

======
zekevermillion
The most interesting part of this interview is the discussion of what happens
when human agency is further removed from an individual bot. If a bot has no
owner, do we start viewing it as an animistic entity with rights and
responsibilities? Or do we return to a world of individual, as opposed to
corporate, responsibility? With decentralized currency and homomorphic
encryption, it seems like we are pretty close to this world right now. Very
exciting.

~~~
cLeEOGPw
I think there will be no such thing as no owner. In the worst case, it will be
the state that compensates for damages, much as you can seek compensation from
the municipality if you drive into a hole in the road. There will be some kind
of laws that assign agency, to regulate things and make sure there are no
wandering "entities" without an owner who is responsible for them.

------
yigitdemirag
What I wonder is: given that computation is universal, would it be easier to
achieve a human-like understanding of the environment using hierarchical
models, without any use of neural networks?

Could we achieve such a mathematical breakthrough without drawing on
neuroscience or related biological fields?

~~~
jpapon
If you accept that computation is universal, then yes, you could definitely
achieve human-like understanding through means that are completely different
from the actual human wetware.

It might not be very efficient... but it's certainly possible. If the universe
is isomorphic to a Turing machine, then the human brain certainly is. If that
is so, then it is possible in principle to create a human-like machine with
any other Turing-complete machine. It just might take you a billion years to
simulate one second of human brain activity.
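The "simulate one machine with another" idea above can be made concrete with a toy sketch: a Turing-complete host (here Python, as an illustration) stepping through an arbitrary Turing machine's transition table. The machine and the function names below are made up for this example; the point is only that the host's substrate is irrelevant to what it can compute, not how fast.

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    transitions: dict mapping (state, symbol) -> (new_state, write, move),
                 where move is -1 (left) or +1 (right).
    Halts when no transition applies; returns the tape contents.
    """
    cells = dict(enumerate(tape))  # sparse tape: index -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break  # no applicable rule: the machine halts
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A toy machine that inverts a binary string, one cell per step.
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_tm(flipper, "10110"))  # -> 01001
```

The simulation overhead here is tiny, but the same layering argument scales: each level of emulation multiplies the cost, which is exactly why a brain simulated on some arbitrary Turing-complete substrate could be correct in principle yet absurdly slow in practice.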

~~~
TheOtherHobbes
It's not obvious that either the universe or the human brain is isomorphic to
a Turing machine.

There's an epistemological tendency to assume the universe is isomorphic to
the most complex mechanical model available at the time.

Historically, those models seem to be reliably wrong; the universe always
turns out to be more complicated and subtle than we imagine it is.

~~~
cLeEOGPw
You don't need to replicate the human brain to achieve human-brain-like behavior.

