
AI, Apple and Google - kawera
http://ben-evans.com/benedictevans/2016/6/23/ai-apple-and-google
======
peyton
Another big difference is Apple starts with how the product should feel and
works backwards.

This is tough to reconcile with AI because AI sometimes fails.

When your device doesn't do what you want, it feels bad.

When you read an article about your HooliPhone handing over data to government
agencies, it feels bad.

When you go to the Hooli Store to demo a HooliPhone, ask it to show you photos
of your dog before her last haircut, and get no results, you feel less
enthusiastic about owning a HooliPhone.

The solution Apple uses is to add constraints. Apple doesn't need AI to guess
where you want to save a file on iOS because there isn't an exposed file
system.

There are some areas where simplification isn't possible, like mapping and
voice assistants. These happen to be two of the most widely panned Apple
features.

The strategically tricky thing about Google's pivot to AI is that there's an
institutional bias against adding constraints. This may result in impressive
demos that just don't feel right in the real world.

~~~
askafriend
This is an incredibly insightful comment and I thank you for sharing your
thoughts.

You've put into words something I've been feeling and trying to express for a
while. "How does it make you feel?" is an incredibly important question and
unfortunately for Google, something you can't really quantify (see: Circles
and Google+).

Actually, Circles and Google+ are a great example of how the feel of a product
can help lead to its eventual downfall. When someone "friend requested" you on
Facebook, you had a simple decision to make: "Is this person a friend of
mine?" While everyone has their own definition of "friend", it's pretty easy
to make a decision, and sometimes you're gonna have to say "no", but that's
fine; it feels bad for a short amount of time.

With Circles and Google+, you had to ask yourself: "Do I know this person?",
"Is this person a friend of mine?", "But how close am I to them really?",
"Should I add them to the family circle? But they're not _really_ family...",
and so on. The worst part is, you had to do all of this upfront. When you have
to reflect on that many things about another human being, it starts to become
far more negative than it has to be. And if you're going through this
experience all the time, then good luck. Now, Facebook has its issues too, but
you can see how such a subtly negative "feel" to a product can be real
friction.

I recently heard someone make the argument that while a lot of Google's AI
tech is impressive, they haven't really been able to build something that
feels _that_ useful as a product in the real world. It makes for an impressive
tech demo at Google's conferences, but once the tech makes it to Android, it
doesn't really get used all that much. Take Now on Tap, for example. It was a
highly touted feature on Android: an incredible ability to rip context from
whatever you were looking at on your phone. Now you hardly see it mentioned,
and people don't really seem to use it much. Google Now stuff is cool too, but
besides flight reminders and calendar integration, is it really _that_ useful
day to day? I suspect most people would say no.

Look...all of the stuff Google does is very cool and _very_ impressive...but
it just lacks a certain human feel. A certain simplicity in how something is
useful. These AI experiences are not _predictably_ useful. They aren't
experiences you can _rely_ on. Now, I'm certainly generalizing a bit, and I
know someone will come up with counterexamples (please do! I'm curious), but I
think you understand the sentiment I'm trying to convey: Apple just
understands the user from a different vantage point than Google, simply by
asking how something makes them feel (and Jony Ive has emphasized this
question time and time again, though he sometimes gets made fun of for his
overzealousness).

~~~
peyton
Back in school I was part of a small group of students chosen to meet Eric
Schmidt.

He walked into the room and asked, "Who here is making something really
impressive?"

I thought a long time about what that says about Google.

Maybe it's about Google competing on talent: they already have a $70BN
revenue machine and simply need a way to attract people who can keep the
lights on.

Maybe it's just the personalities of those at the top.

Recently a friend showed me Google's Project Soli, a gesture interface that
uses radio waves to sense hand position. You click by touching your fingers
together and scroll by rubbing your fingers.

The project looks fantastic and is clearly produced by some smart people.

But the product video is a big call-to-action to developers: "We're excited to
see the gestures you come up with."

I feel bad for the talented people working on this, because the first time
somebody uses the product, they'll try a new gesture and it just won't work.
Creativity and curiosity yield frustration and a feeling of stupidity.

If you make something really cool, make it easy to love at first sight.

There are ways to use Project Soli's precise depth sensing so that there's
nothing to learn and no way to fail. There are applications that will make
people rethink what our devices can do and reward natural curiosity.
Unfortunately I can't go into details.

Instead, people will be waving at their Google watches and wondering why it
keeps taking selfies instead of scrolling through their notifications.

------
eridius
> After all, Apple Maps has 3x more users than Google Maps on the iPhone and
> Google Maps is _definitely_ better.

That's a pretty bold statement. When Apple Maps first launched, this was
certainly true. But Apple Maps has improved significantly since then, to the
point where whether Apple Maps or Google Maps is better depends on where you
are and what data you're trying to find.

~~~
sgslo
In San Antonio, Texas, last week, I used Apple Maps to search for 'two
brothers bbq'. The actual name of the establishment is 'two bros bbq' or
similar. Apple Maps decided to give me results from Europe as the first few in
the result set. Why in the world would I want directions to Europe while in
Texas? Google Maps, of course, picked it up on the first try.

One other thing: while in navigation mode with Apple Maps, try panning the
map around, or even zooming. Let me know how that goes.

~~~
addicted
I don't get how Apple Maps decides to show locations on other continents when
it is unable to find a match locally.

And Apple's inability to do any sort of decent fuzzy matching is legendary.

~~~
jolux
Apple Music can't resolve ampersands to "and" and vice versa. Disaster.
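
For anyone curious what "decent fuzzy matching" would even involve, here's a
toy Python sketch. The catalog and names are made up, and a real search stack
would use a proper index, but folding "&"/"and" before matching is about this
simple:

    import difflib

    def normalize(query):
        # Fold case and treat "&" and "and" as the same token, so
        # "Hall & Oates" and "hall and oates" normalize identically.
        tokens = query.lower().split()
        return " ".join("and" if t == "&" else t for t in tokens)

    # Hypothetical catalog, purely for illustration.
    catalog = ["Hall & Oates", "Florence + The Machine", "Of Monsters and Men"]
    index = {normalize(title): title for title in catalog}

    def search(query, cutoff=0.6):
        # difflib.get_close_matches does edit-distance-style fuzzy
        # matching with nothing but the standard library.
        hits = difflib.get_close_matches(normalize(query), list(index),
                                         n=3, cutoff=cutoff)
        return [index[h] for h in hits]

    print(search("hall and oates"))  # -> ['Hall & Oates']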

~~~
TeMPOraL
A wild guess - maybe someone else (Google) has some crucial patents on fuzzy
matching? I know it's probably not the case, but it wouldn't surprise me _at
all_ if it was true...

------
zeta0134
Serious question: is any group working on a voice activated digital assistant
for the privacy minded? One that can operate without a connection to a backend
network where it must necessarily transmit recordings of your voice to perform
queries?

Or is good speech recognition just so computationally intensive that common
consumer devices really can't handle the load?

~~~
dharma1
For an AI assistant, check out Mycroft. Note that there is a lot more to an AI
assistant than speech recognition.

For just speech recognition you can use Kaldi. It's not hard to deploy on your
own server. I've been meaning to package it for Ubuntu phones but haven't
gotten around to it yet.

Pretty good speech recognition is doable on current mobile hardware. The
trained models are large and there is still work ahead in shifting the
inference to mobile GPUs, but it's doable.

It also helps if it's integrated into the OS, if you want it to work invisibly
with multiple apps.
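
To make the fully-offline idea concrete, here's a minimal sketch using CMU's
pocketsphinx Python bindings rather than Kaldi (my substitution, purely
because it fits in a few lines; Kaldi is driven by shell recipes and takes
more setup). All decoding happens on the machine:

    # pip install pocketsphinx
    from pocketsphinx import LiveSpeech

    # LiveSpeech reads from the default microphone and decodes locally
    # with the bundled en-us models, so no audio leaves the machine.
    for phrase in LiveSpeech():
        print(phrase)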

~~~
walterbell
See also Protonet Zoe, [https://www.indiegogo.com/projects/protonet-zoe-start-your-secure-smart-home-now](https://www.indiegogo.com/projects/protonet-zoe-start-your-secure-smart-home-now), with some code at [http://experimental-platform.github.io](http://experimental-platform.github.io) and project background at [http://www.curbed.com/2016/3/28/11317418/zoe-smart-home-technology-hub-data-privacy](http://www.curbed.com/2016/3/28/11317418/zoe-smart-home-technology-hub-data-privacy).

------
sahaj
One stark difference between the companies is that Apple's idea cycle lives
around the release of the iPhone or other hardware, though that may change in
the future. Google, on the other hand, is the complete opposite: its idea
cycle is always ongoing (think core app updates), and hardware is an
afterthought or comes later. This will likely lead to different outcomes for
AI implementation at each company.

"the promise of extracting new insight from all sorts of data pools will not
always be met."

While certainly true, we can all agree that not having access to these data
pools is a disadvantage.

------
xbmcuser
Apple has more Maps users than Google Maps on iOS simply because you can't
change the defaults.

~~~
stephenr
Everyone on HN always rattles off the "perfect is the enemy of good" mantra
when it comes to getting something out there, so they can iterate.

A major fucking company takes the same approach to provide its massive user
base with additional functionality and improved privacy, improves the service
over time, and people still want to claim it's all about "no one uses it by
choice".

Nothing prevents people from going to the App Store to get the Google Maps app
or using the browser version, and yet for millions of people the Apple Maps
app is sufficient. It works, and it's easy to use.

If you don't want to use it you don't have to, but Jesus fucking Christ, don't
assume your experience and mindset is the same as every other person's on the
planet.

~~~
spot
But every application that wants to open maps is required to use the Apple
Maps application, so Google is locked out of the ecosystem. This is what bugs
people, not releasing early and improving.

~~~
stephenr
How many apps _open_ maps? I've always seen apps _embed_ a map, sometimes with
Apple's, sometimes with Google's.

------
dave2000
"For example, the error rates for image recognition, speech recognition and
natural language processing have collapsed to close to human rates, at least
on some measurements."

I've always laughed at how poor speech recognition is. I know it's probably a
hard problem, and I know that sometimes you can transcribe a whole sentence
without error. But how long will it be until it just works, until I can speak
normally, in English, and have it transcribed without error? Microsoft
recently put up a page with a demo and prices, but it was a lot worse than the
last one I tried.

------
pazimzadeh
Apple has a knack for choosing the right interface for the task at hand
(digital crown, multitouch, iPod wheel, mouse, etc.). Are there any examples
of Alphabet successfully coming up with an original interface to solve a real
problem other than google.com?

~~~
sangnoir
> Are there any examples of Alphabet successfully coming up with an original
> interface to solve a real problem other than google.com

I'd say Google Now cards are a pretty original interface, and they are
downright _magical_. Google already knows the time & place of an appointment I
have across town; it also knows that traffic is getting pretty bad on the
possible routes, so it _tells me_ (without me asking) to leave much earlier
than I had planned.

------
zatkin
I get the feeling that if AI gets sprinkled all over iOS, it's going to be
annoying, because the device will try to suggest actions that you may not
want. That turns users away, because the device is trying to predict too much.

------
Artlav
> computers don't ask them anymore:

> Where do you want to save this file?

> Which photos do you want to delete to save space?

I dearly hope not. These questions are too important to be decided by
computers at their current level.

------
bouchier
Nokia & RIM dismissed _touchscreens_, not smartphones, geez.

~~~
honkhonkpants
Nokia marketed ghastly resistive touchscreens running Maemo for years (at
least from 2005 to 2011).

------
YeGoblynQueenne
Hype harms the brain.

>> In the last couple of years, magic started happening in AI. Techniques
started working, or started working much better, and new techniques have
appeared, especially around machine learning ('ML'), and when those were
applied to some long-standing and important use cases we started getting
dramatically better results. For example, the error rates for image
recognition, speech recognition and natural language processing have collapsed
to close to human rates, at least on some measurements.

Let's see.

a) Magic? Really?

b) "The last couple of years" goes back to 2014. Machine learning's big break
into the mainstream happened at least in 2012, when a conv net won ILSVCR [1].
Did the op not look further back before reporting on all this "magic"?

c) Unfortunately no "new techniques" have appeared. Convolutional neural
networks date from 1988 [2]. LSTM Recurrent Neural Networks were first
described in 1997 [3]. Machine learning in general is pretty much as old as
GOFAI, with the earliest connectionist ideas detailed in the 1950s [4].

All of this is really well known and understood and not at all controversial:
Geoff Hinton himself is on record explaining that the recent boom is due to
more data and more processing power (I can dig up a link if there is any
doubt).

d) "the error rates for image recognition, speech recognition and natural
language processing have collapsed to close to human rates, at least on some
measurements"

In image recognition, the big success story is the ImageNet result mentioned
above. For speech processing I'm told there's been a big jump as well; perhaps
someone else can give an example.

For natural language processing: not in your wildest dreams. We're nowhere
near "close to human rates" in any way, shape or form, unless you count very
restricted results like how well this or that natural language parser does on
a one-million-word corpus like the Brown corpus. Unfortunately, that counts
for nothing.

Now, most of that is stuff you can find easily on Wikipedia with a bit of
reading. There's absolutely no reason to waste people's time with
disinformation and pointless overhyping.

[1]
[https://en.wikipedia.org/wiki/ImageNet](https://en.wikipedia.org/wiki/ImageNet)

[2]
[https://en.wikipedia.org/wiki/Convolutional_neural_network#History](https://en.wikipedia.org/wiki/Convolutional_neural_network#History)

[3]
[https://en.wikipedia.org/wiki/Recurrent_neural_network#Long_short_term_memory](https://en.wikipedia.org/wiki/Recurrent_neural_network#Long_short_term_memory)

[4]
[https://en.wikipedia.org/wiki/Perceptron#History](https://en.wikipedia.org/wiki/Perceptron#History)

