
DeepMind's Mustafa Suleyman says general AI is still a long way off - seycombi
https://techcrunch.com/2016/12/05/deepmind-ceo-mustafa-suleyman-says-general-ai-is-still-a-long-way-off/
======
dpandey
The fascinating thing about his answers is that he says nothing that wasn't
already known 25 years ago. 'General AI' is a really vague term and means
almost nothing.

It's well known in AI that common-sense AI systems are incredibly hard to
build and expert AI systems are relatively straightforward to model and
build. Given that the most 'common sense' type products we see, like Siri and
Alexa, still appear to be rules-based systems (that have no doubt been
enhanced with historical data), we all know the world is not suddenly going to
become a common-sense AI paradise unless a new Marvin Minsky makes some
breakthroughs.

I find that the term 'AI' is frequently used when a company has taken an
existing problem and enhanced the solution by using machine learning at one or
more places. Nothing wrong with that fundamentally, but calling that AI is
more of a marketing gimmick than anything else.

Google/Facebook build some of the best machine-learning-enhanced workflows,
ones that delight users. They usually refrain from branding them as AI.

~~~
StavrosK
Semi-offtopic, but I've always wondered why something like a medical diagnosis
AI doesn't exist. The doctor (or operator) would enter all your symptoms and
test results, and the software would use statistical data and research results
to give you a probability for each disease (e.g. a headache is overwhelmingly
likely to be flu, but if your test results for X are elevated, the probability
of brain cancer increases, etc.), as well as a list of tests the doctor could
run.
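
Roughly, I picture something like naive Bayes over the symptoms and test
results. A toy sketch in Python - every prior and likelihood below is a
made-up illustrative number, not real medical data:

    # P(disease) priors and P(finding | disease) likelihoods, all invented
    priors = {"flu": 0.20, "migraine": 0.05, "brain_cancer": 0.0001}
    likelihoods = {
        "flu":          {"headache": 0.6, "fever": 0.8,  "elevated_x": 0.01},
        "migraine":     {"headache": 0.9, "fever": 0.05, "elevated_x": 0.01},
        "brain_cancer": {"headache": 0.7, "fever": 0.1,  "elevated_x": 0.5},
    }

    def posterior(findings):
        """P(disease | findings), assuming the findings are independent."""
        scores = {}
        for disease, prior in priors.items():
            p = prior
            for f in findings:
                p *= likelihoods[disease].get(f, 0.001)
            scores[disease] = p
        total = sum(scores.values())
        return {d: p / total for d, p in scores.items()}

    print(posterior(["headache"]))                # flu dominates
    print(posterior(["headache", "elevated_x"]))  # cancer probability rises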

Something like this would be immensely valuable and would immediately improve
healthcare everywhere, since the software could be updated with new or obscure
knowledge right away. You wouldn't have to shop around for doctors until you
came across someone who had read about a very rare disease somewhere; the
software would already know everything you might have.

Is there something like that out there? It sounds like there must be, but I
don't know of any doctors using anything like this.

~~~
ska
These systems have been worked on since the 90s at least. The latest branding
for them is "Decision Support Systems", but there are lots of CAD systems too
(Computer-Aided Diagnosis, cf. Computer-Aided Detection). There is a lot of
interesting work that falls under the general heading of "evidence-based
medicine" that relates to this.

There are a lot of barriers to presenting it as you imagine, though: some
liability, some workflow, and some just the deep lack of appropriate datasets.
There are also adoption issues - clinicians are rarely going to be happy
with a black-box approach, but it can be difficult to explain/expose the
details of such analysis in a way that clinicians will accept.

A last thought: this is actually very hard to do well and dangerous to make
mistakes with - so your rather cavalier claim that there would be immediate
and widespread improvement of healthcare should be tempered a bit.

~~~
dpandey
Your last thought is key. Medical diagnosis demands extremely high precision
as well as 'judgement', because one little deviation can be highly correlated
with something serious. And there can be several of these deviations in
different places. Sometimes a doctor notices something is off while looking at
something else. It's not easy to model all of these while keeping up with the
changing knowledge in the field and still avoiding false negatives.

~~~
ska
Agreed - I think the potential value is quite high for unusual root causes.
It's going to be very hard to make a system better at diagnosing 'flu' than a
trained clinician. However, "flu-like symptoms" plus a couple of subtle things
in the history may point at a relatively obscure syndrome or whatever, which
can be very difficult for a clinician to find.

Another thing to consider in any work like this: you really need to think
about what your false positives mean, in a practical sense. If your numbers
are big enough (e.g. screening) and your positives result in a procedure (e.g.
biopsy), eventually someone is going to die because of a false positive. This
may still be the better option, of course, but you really need to demonstrate
that, and think about what your sensitivity and specificity tuning needs to be.
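
To put rough numbers on that: at screening-scale prevalence, even an excellent
test produces mostly false positives. A quick Bayes' rule sketch (illustrative
numbers only):

    def ppv(sensitivity, specificity, prevalence):
        """P(disease | positive test), via Bayes' rule."""
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # A 95%-sensitive, 95%-specific test for a 1-in-1000 condition:
    print(ppv(0.95, 0.95, 0.001))  # ~0.019, i.e. ~98% of positives are false

If every one of those positives triggers a biopsy, the harms add up fast,
which is exactly why the operating point has to be justified.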

------
denzil_correa
Anyone who has spent time in AI or ML would know this is true. Currently, AI
is very helpful for solving certain types of tasks. The recent advances in
Deep Learning etc. are improvements in AI's ability to solve some of these
specific tasks. A lot of people have confused this and assume there will be a
master AI system that can solve general tasks. I don't see that happening
anytime soon.

~~~
squeaky-clean
I have (or avoid) this conversation with family and friends all the time: the
idea that because we're hyper-successful/advanced in one area, we can easily
just apply it everywhere else, like some AI budget where we currently have a
surplus in Chess/Go/Jeopardy.

"Well, what if they took all the AI power that makes that Go thing beat the
best player, and converted it into regular intelligence? It would probably be
as smart as the average person!"

That's a real question I've been asked several times by friends. Of course, no
one wants to be friends with "that guy" at a party or family dinner, so I
usually just say "Yeah, that would be neat" and move on. But I really do enjoy
having serious conversations about speculative AI and geeking out over it.
It's hard, though, because most people just want to talk about the kinds of AI
they've seen in movies, which are unrealistic compared to what's possible and
practical.

~~~
somestag
I've gotten some decent mileage with family and friends by making the
distinction between "weak" AI and "strong" AI.

I tell them that any AI that's designed to only solve one type of problem (or
a finite set of problems) is "weak." An AI that can figure out how to solve an
arbitrary problem is "strong."

I then tell them that we've never been able to create a strong AI, and we've
never been able to convert a weak one to a strong one. They ask why not. I
tell them that every weak AI, at its core, has a mathematical way to evaluate
its outcome. Every time it does something, it does it by assigning some number
to the outcome, and it tries to get that "number" as close to the "good"
number as possible. All of our advances in weak AI have been in ways to come
up with better numbers, or to use computers to come up with good numbers for
situations that we understand very well but aren't good at attaching numbers
to. Finally, I say that with the way we make weak AIs, to turn one into a
strong AI we'd have to come up with a way to assign an accurate number to
literally every single decision the computer might be asked to make. But there
are infinitely many possible decisions, so we'd have to come up with
infinitely many ways to come up with a number. To get strong AI, we need some
"universal" way for the computer to think about the world, but we have no idea
what that would look like.
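
A bare-bones illustration of the "number" idea - score() below is a made-up
stand-in for any task-specific objective (board evaluation, classification
loss, and so on):

    import random

    def score(x):
        # The task-specific objective: the "good" number is the max, at x = 3
        return -(x - 3.0) ** 2

    x = 0.0
    for _ in range(10000):
        candidate = x + random.uniform(-0.1, 0.1)
        if score(candidate) > score(x):  # keep moves that improve the number
            x = candidate

    print(x)  # ends up near 3.0 - but only for this one objective

Swap in a different score() and the same loop "solves" a different problem;
take score() away and the machine has nothing to optimize. That's the
weak/strong gap in miniature.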

This explanation is hardly adequate for a true understanding of AI, but it
works pretty well for helping them understand the gist of the problem without
using some misleading metaphor that would just give them another bad
understanding.

------
sapphireblue
Is he the CEO, though? Wikipedia and other press articles say that Hassabis is
the CEO:
[https://en.wikipedia.org/wiki/DeepMind](https://en.wikipedia.org/wiki/DeepMind)
and that Suleyman is Chief Product Officer, the head of applied AI at DeepMind.

~~~
cybertronic
[https://deepmind.com/about/](https://deepmind.com/about/)

Hassabis is CEO

------
rectang
Step 1: State a problem. Any problem which has been articulated is vastly
closer to being solved than any problem which has not.

Step 2: Rescue the problem from necessitating general AI by developing an
algorithm. Any problem which has been split off from general AI is vastly
closer to being solved.

------
ilaksh
I thought Hassabis said he was seriously trying to build grounded AGI based on
some type of system with deep learning-ish stuff. I think it will work within
around a dozen years or so.

How do they know that they can't take their current deep learning-type
techniques or some variation/enhancement of them and apply them to more
demanding and general circumstances?

For example, they have the agent that goes around trying to score in a 3D
first-person shooter with the pixels as input. Has it been determined that
there isn't a way to gradually train that system to recognize words, phrases,
or even sentences that would give it hints about how to score better, maybe by
associating words with other (spatial/navigation) parts of the model, or
through more general associations between aspects of the learning system in
different domains?
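
As a very hand-wavy sketch of what "associating words with other parts of the
model" could mean architecturally, here is one way to wire word hints into a
pixels-to-actions policy. This is purely hypothetical - the shapes, the
fusion-by-addition, and everything else are my assumptions, not DeepMind's
actual agent:

    import torch
    import torch.nn as nn

    class GroundedAgent(nn.Module):
        # Hypothetical agent: pixels and word hints are projected into one
        # shared 256-dim space, so both modalities can influence the action.
        def __init__(self, vocab_size=1000, n_actions=8):
            super().__init__()
            self.vision = nn.Sequential(  # 84x84 RGB frame -> 256-dim features
                nn.Conv2d(3, 16, 8, stride=4), nn.ReLU(),
                nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
                nn.Flatten(), nn.Linear(32 * 9 * 9, 256),
            )
            self.language = nn.EmbeddingBag(vocab_size, 256)  # word ids -> 256
            self.policy = nn.Linear(256, n_actions)

        def forward(self, pixels, word_ids):
            h = self.vision(pixels) + self.language(word_ids)  # naive fusion
            return self.policy(h)                              # action logits

    agent = GroundedAgent()
    logits = agent(torch.rand(1, 3, 84, 84), torch.tensor([[4, 7, 42]]))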

To me it seems like some type of deep learning (or maybe a variant using
spiking neurons, since those can reportedly learn from fewer examples?), when
grounded by integrated sensory inputs of different types and incrementally
trained, may already be able to exhibit general animal/human-like
intelligence, or will be able to with one or two minor 'breakthroughs'.

I'm still going with 2029 (as suggested by Kurzweil I think).

------
roymurdock
DeepMind sounds more and more like IBM Watson by the day. Both created AI
engines for games (Jeopardy, Go) and are now focusing on the medical market.
The two examples of actual AI applications cited in the article focus on
DeepMind's healthcare efforts:

 _The company is working with the National Health Service (NHS) in the UK on a
project around the early detection of acute kidney injuries. Critics, however,
argue that the collaboration with DeepMind is wider than the company and the
NHS previously disclosed. In addition, it’s also working with the Moorfields
Eye Hospital on looking at how it can use the hospital’s eye scans to diagnose
eye conditions better and faster. The NHS, however, mostly used its own
algorithm in its project, with DeepMind focussing on the front-facing app.
Suleyman argued that this is due to the fact that it’s still early days for
this collaboration, which only started 12 months ago._

So in the health space they're currently doing anomaly detection and building
applications for their clients. Sounds very Watson/consulting. There's lots of
money in healthcare due to the amount of data generated and the complexity of
building regulation-compliant software.

Judging from their website, they've also built an algorithm for reducing
cooling energy usage in Google's data centers.

There are a ton of IoT platforms that are currently competing to connect a
bunch of industrial/medical/automotive/aerospace and defense devices and
servers, analyze that data with machine learning and statistical techniques,
make it easy for users to build their own cloud/edge applications, and eke out
every last drop of efficiency they can for their customers.

DeepMind is fortunate to have Google's brand, infrastructure, and reach. I'm
just not sure what else separates them from any other "AI" (read machine
learning/algorithm-building/efficiency consultancy) out there.

~~~
tfgg
AFAIK the health stuff at DeepMind is just one part; the research division is
the main bit. Not like IBM Watson's consultancy-pretending-to-be-AI-research.

> I'm just not sure what else separates them from any other "AI" (read machine
> learning/algorithm-building/efficiency consultancy) out there.

probably the amount of pure research they produce:
[https://deepmind.com/research/publications/](https://deepmind.com/research/publications/)

~~~
dpandey
Those are two very different things, as much as we'd like to believe they're
one because they're within the same company.

I wouldn't be surprised if Google sunsets the company and absorbs the
employees into its more profitable products.

~~~
deong
No way I could see that happening. I mean, they might conceivably decide to
fold the group into Google Research rather than keep the DeepMind name, but I
doubt they'd do even that due to the name recognition that DeepMind has
garnered even among the general public.

The idea that they'd take the people and put them to work making Blogger
better or something? Not a chance. AI research is pretty much Google's core
function. They'd no more break up that functional area than Apple would take
Jony Ive's team and put them on answering support calls.

~~~
dpandey
On the contrary, DeepMind is a subsidiary in the new Alphabet structure and is
looking at a stick-wielding CFO with a profitability question on her face.

If they don't figure out compelling products in a reasonable time, the
researchers will become part of Google Research (maybe retaining the DeepMind
name as a group within Google Research) and the engineers will be absorbed or
find new jobs. Also, DeepMind is not such a big name outside our little geek
community. Try it with someone :)

~~~
dpandey
I wasn't implying that DeepMind (or its research team) is going to be
dissolved. My point was: _I wouldn't be surprised_ if that happens if they
don't find compelling products to make money from, given Alphabet's revenue
focus. Also, it can go the consultancy-type route it currently seems headed
for, and consultancies can make a lot of money (look at IBM), so DeepMind
could be a great business. But that might leave the researchers unsatisfied
with their impact.

My point was that ideally we like to believe that research supports products,
but generally speaking, most Google Research or Microsoft Research papers are
rarely applicable to their products. And good researchers like being
independent (that's how you woo them - with the promise of independence), so
you can't usually dictate what they work on.

~~~
deong
> My point was that ideally we like to believe that research supports
> products, but generally speaking, most of Google research or Microsoft
> research papers are rarely applicable to their products.

I have no inside information here, but my hunch is that that's overly
pessimistic. From the outside, we hear about DeepMind finding cats in YouTube
videos or learning to play Atari games or Go, and it's tempting to say,
"Google doesn't sell Go programs, so what was the point."

However, Google does do incredibly good image labeling in Google Photos. Their
autonomous vehicles might eventually be a profit maker, potentially using deep
reinforcement learning methods developed to play Atari games. Certainly,
Google Translate and Inbox use ideas developed from their machine learning
groups.

You're right that there's often a pressure to make internal research groups
more profit-driven and more connected to the revenue stream that runs a
company. Microsoft Research saw this happen to some degree 10 or so years ago.
But Google's core competitive advantage is their ability to collect and
analyze data. Data _is_ the revenue stream that runs the company.

Also, I think the "independence" aspect of research is overblown to some
extent. Almost no one is truly independent. If nothing else, you're subject to
the needs of funding agencies. Typically, researchers just want the ability to
work on problems that interest them using ideas they have some control over
developing. You can do that inside a company, provided that company is
interested in the same types of problems you're interested in. That's been the
success of corporate AI/ML labs -- many of the best researchers want to go
there because it's a steady stream of interesting work free from the need to
do the constant dance for grant money.

~~~
dpandey
Let's keep in mind that DeepMind is a subsidiary of Alphabet and has to
justify its existence (not tomorrow, but over time) with revenue. Interval
Research Corporation is a good example of a very high-profile research entity
that couldn't produce revenue and shut down.
[https://en.wikipedia.org/wiki/Interval_Research_Corporation](https://en.wikipedia.org/wiki/Interval_Research_Corporation)

Retaining researchers who write papers is highly correlated with providing a
de facto academic environment for them to work in. Microsoft Research has
become perhaps the most interesting place for researchers to go in the last
10 years because they have made the place feel like a university department.
Look at their research papers (public). Any overlap with commercial products
looks more providential than intentional.

I'm obviously not talking about people like Jeff Dean, whose intent is to
build, not publish papers. The Bigtable paper was not the intent - building
Bigtable was the intent, and when they were done, they ended up writing a
paper about it.

If Google uses some of the DeepMind research itself, that's really good.
DeepMind can make revenue from Google that way (if they have that
arrangement). If they can't, it's hard to justify their existence as an
independent financial entity - they're better off just being part of Google.

------
mark_l_watson
I like his honesty in saying we are decades away from any general sort of AI.

I remember at AAAI 1982, the conference was handing out bumper stickers saying
"AI, it's for real" that my coworkers and I happily put on our cars. Yeah, a
little optimistic :-)

That said, I like the way DeepMind is run, especially the forays into health
science.

------
airesQ
People with high-profile jobs are severely constrained in what they can say
(even though this guy is not the CEO, contrary to what the title claims ATM).

And we already saw that Google doesn't want to touch the "uglier" aspects of
ML/AI/robotics with a ten-foot pole (e.g. see how they reacted to Boston
Dynamics's humanoid robot video).

So I'm not sure we should take this seriously. DeepMind's technical views are
mostly in its papers, not press statements.

~~~
dpandey
Really good point. He might just be following Google's PR directives. Also,
while their research team is ostensibly doing great work, maybe the product
team doesn't have much to show or claim after all. Which CEO/exec wouldn't
want to make a tall claim about their product or company while on stage (one
they can reasonably back up if questioned)?

------
cr0sh
I personally think that if "general AI" (that is, something akin to human-
level or beyond, and possibly sentient) ever happens, it'll happen by
accident, and will be an emergent phenomenon that we won't be able to explain.

It will surprise all of us, and will likely be "earth-shaking", akin to
extra-terrestrial alien contact. Perhaps even more so - because it would be a
hard example against the idea of dualism (unless it arises from a quantum
computing system - then humans can posit an "out", of course, regardless of
whether it is true or not).

All pure speculation, of course. I honestly don't think we'll be able to
"design" a general AI - we don't even understand how sentience and
consciousness (among other things in the philosophy of mind) even works or
exists in human brains. We understand the gears and wheels, but can't describe
the factory.

------
socmag
I totally agree with the general sentiment of the comments and the article
that AGI is a long way off... in the anthropomorphic sense we usually imagine.

That said, in many regards the sum total of the internet, social networks,
and automated systems that already exist is, to me, fast beginning to resemble
an emergent AGI/ASI, and furthermore is already out and about, busy doing its
business.

That it just happens to use networks, humans and machines as cells and DNA
seems kind of irrelevant.

Of course this is a silly POV thought experiment, but seriously, if that were
the case, would we even notice?

I'm not sure our own cells and DNA know they are part of a larger entity for
example.

Okay, enough weird thoughts... time for bed.

------
xianshou
FYI, Mustafa Suleyman is not CEO, but rather "Co-Founder & Head of Applied
AI," as described in the article. Demis Hassabis is the CEO.

~~~
sctb
Thanks, we've updated the submission title.

------
saycheese
>> "We founded the company on the premise that many of our complex social
problems are becoming increasingly stuck"

Real AI will not simplify the complexities of social problems — and to believe
so shows a complete lack of understanding of those problems and what real AI
will become.

~~~
maaaats
> _and to believe so shows a complete lack of understanding of those problems
> and what real AI will become_

When the CEO of one of the most well-known AI companies makes a claim you
disagree with, you should probably give a better reason for him being wrong
than "a complete lack of understanding".

~~~
leereeves
Being CEO of an AI company doesn't mean he has a greater understanding of
"social problems".

~~~
patrickmn
"Science talks about very simple things, and asks hard questions about them.
As soon as things become too complex, science can’t deal with them… But it’s a
complicated matter: Science studies what’s at the edge of understanding, and
what’s at the edge of understanding is usually fairly simple. And it rarely
reaches human affairs. Human affairs are way too complicated." — Noam Chomsky

------
curiousgal
Curious to know about his technical expertise; he's listed as an entrepreneur
who dropped out of college.

------
skocznymroczny
That's what he wants you to think!

------
patkai
It's quite shocking that even technical people think AI is "a" thing, and will
"come" at a certain time.

