
New report examines how AI might affect urban life - ozdave
http://news.harvard.edu/gazette/story/2016/09/what-artificial-intelligence-will-look-like-in-2030/
======
swombat
This is obviously valuable work in general, because a lot of people don't read
science fiction or blogs, or dream about technology. I question how valuable
this is to the HN audience, though, who spend much of their time thinking
about these topics. Nothing stood out to me from a skim through this article.
Maybe the full report has more, but most of what it covers seems to be things
along the lines of "who will be liable when a self-driving car kills
someone?"...

~~~
sixQuarks
I'm always a bit shocked when a friend says they've never heard of Elon Musk,
but it happens quite often.

~~~
simonh
It's the same for me, but I suppose if you're not actively looking into buying
an electric car, he doesn't directly affect many people's lives. Indirectly,
yes, but not so much directly. Beyond that, space technology hasn't loomed
large in the popular consciousness for many decades now, and solar is boring
power-utility stuff.

Until the iPod, most people had never heard of Steve Jobs, or if they had, he
was swiftly forgotten as just another computer guy.

------
neom
We don't hear much talk about quantum computing and AI. Is this because
general-purpose quantum computing is considered unrealistic? Another
interesting claim in the podcast is that "doctors can't be replaced because
they are able to interact with a patient", yet Boston Dynamics has shown us
some pretty interesting progress in robotics, and we have some pretty amazing
imaging and recognition software. The final objection I find interesting is
that there supposedly isn't access to a good training set on humanity from
which to start understanding us; well, I'd suggest that the WWW is a good
collection of human knowledge.

~~~
AngrySkillzz
It's mostly because (to my knowledge) we don't currently know of any way in
which quantum computing would make artificial intelligence faster or more
feasible. If you read up on the relationship between quantum and classical
complexity classes, and on which effective quantum algorithms we've developed
so far, this becomes clearer.
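To make that concrete: the best known general-purpose quantum speedup for unstructured search, Grover's algorithm, is only quadratic in the number of oracle queries, which is one reason nobody expects quantum hardware to transform AI workloads on its own. A rough back-of-the-envelope sketch of the query counts (illustrative numbers only, not a simulation of any quantum hardware):

```python
import math

def classical_queries(n):
    # Classical unstructured search over n items needs O(n) oracle
    # queries in the worst case.
    return n

def grover_queries(n):
    # Grover's algorithm needs roughly (pi/4) * sqrt(n) oracle queries:
    # a quadratic speedup, not an exponential one.
    return math.ceil((math.pi / 4) * math.sqrt(n))

for n in (10**6, 10**12):
    print(f"n={n}: classical ~{classical_queries(n)} queries, "
          f"Grover ~{grover_queries(n)} queries")
```

So even over a trillion items, Grover turns ~10^12 queries into ~10^6, which is useful but a far cry from the exponential gains people often imagine.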

~~~
neom
Shall read up. Thank you!! :)

------
ilaksh
These types of reports seem overly conservative. Unless you think there is
something magical about human or animal brains (many people still believe
this sort of thing, or think there is some quantum (read: 'magic') feature),
you have to anticipate a strong possibility that we will be able to emulate
most human capabilities and traits in hardware.

When we get to that point, it is very likely we will be able to improve the
performance of these AIs by speeding them up or interconnecting them, etc. So
another likely possibility is that these AIs could be twice as smart as us.
Many speculate thousands of times smarter, but I don't think there is any
reason to assume that is feasible -- and twice as smart as us is significant
enough.

So these AIs could become much smarter than us and, because they are not
constrained by biology, could take any physical form.

I think that if you are trying to speculate cautiously, you should treat this
as very similar to an alien invasion. The one advantage we have is that we
will be training/programming the first generations of these things, so we had
better get that right. But after they become, say, twice as smart, normal
humans probably won't have much sway over them anymore.

I think that when you look at the amazing achievements of the last few years
-- Watson winning at Jeopardy, Deep Dream, DeepMind's WaveNet speech synthesis
and arcade-game learning, Atlas walking and picking up boxes, winning at Go,
etc. -- people are clearly serious about pursuing general AI again. So how do
we know how many more fundamental breakthroughs are actually required to get
to truly general, humanlike intelligence? Are we even sure that we don't
already have the techniques and just need to combine them in a certain way?

So my way of thinking is that, to speculate conservatively, you might guess
there are two major research breakthroughs still required, each on the same
order as deep learning. If we had not seen such an increase in AGI research,
that would console me somewhat. But because AGI funding has increased and the
belief that these goals are achievable is back, it seems we should not assume
it will take many decades to make these breakthroughs (if we actually need
more breakthroughs at all). How do we know that one or two of the dozens
(hundreds? thousands?) of literal geniuses with an adequate background working
in this area won't come up with some major new breakthrough in the next two,
five, or eight years?

So yes, my personal opinion is that we should start planning for an 'alien
invasion' of superintelligent AGI, which, if we are being cautious, could
pretty much come at any time now -- and it might well happen before 2030.

~~~
visarga
The rate of discovery certainly is very high (every week something amazing
pops up), but the more I read, the more I realize just how far we still are
from human-level intelligence. We don't even have a robotic body able to fold
a shirt or walk around on its own (the BD robots were remote-controlled), or
a chatbot able to pass as human. The good thing is that there is interest and
funding. Maybe we still need two or three fundamental discoveries; it could
take anywhere between 5 and 50 years.

~~~
ilaksh
If you google 'robot folding shirt', then yes, that's been done. And the BD
robots _do_ have autonomous modes.

~~~
visarga
I know it's been done, but only just barely, not brilliantly. Just like
walking and grasping (the Google robotic-grasping experiment), these are still
a long way off. Progress in robotics seems slow compared to deep learning,
which has been much more successful with vision, audio, and behavior
(reinforcement learning).

------
steavex
cool read, thanks for sharing!

