
Google I/O Developer Keynote [video] - LaurensLang
https://events.google.com/io/
======
yRetsyM
Some of my highlights:

\- The big move to on-device ML for things like speech recognition. This was
especially demonstrated from an accessibility angle, with system-wide live
captions available in any application, and an assistant feature that let you
answer phone calls and reply via text-to-speech, with the caller's speech
transcribed back to you.

\- Android Q has a big focus on privacy, promoting privacy to a top-level menu
item in Settings, along with changes in apps to provide ready access to
privacy features.

\- A large focus on security as well, including citing Gartner reports many
times, and claiming the most secure operating system in a number of tests and
the most secure device (Pixel).

\- Google Home products rebranded as Google Nest. Launch of the Google Nest
Hub Max ($229), with a camera enabling Duo calling, and a nice hand gesture to
silence the device when it's being loud (no more shouting).

\- Pixel 3a (XL) devices launched, at a lower price point of $399 ($479) with
a decent set of features. No longer Verizon exclusive.

\- Google wants to be "Helpful"

~~~
ilovecaching
Sounds like exactly what Apple said when they launched the X: all on-device
AI, a secure enclave, no shipping your data off to the cloud, for security.

The issue with Google is that they're an advertising company, so they're
always going to have an incentive that pushes them to market your data unlike
Apple.

~~~
SquareWheel
Then why are they releasing on-device voice recognition to replace their
cloud-based voice recognition?

~~~
cle
Because it's faster and works with spotty/no reception?

(I'm just guessing because I can't find details on this stuff by wading
through the feed.)

Also, just because it's doing local inference doesn't mean it won't be
uploading data back to Google periodically. As GP said, they have a strong
incentive to do this, so it's reasonable to just assume they will do this
until they can prove otherwise.

~~~
monocasa
And doing the inference locally, uploading just transcripts for the most part,
is way more cost effective. Google doesn't have to pay for the compute, since
it runs on users' devices.
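The cost argument above can be made concrete with some back-of-the-envelope arithmetic. The specific numbers below are assumptions, not from the thread: 16 kHz 16-bit mono audio (a common speech-recognition format) versus a plain-text transcript at roughly 150 words per minute and ~6 bytes per word including spaces.

```python
# Back-of-the-envelope: bytes/second to upload raw audio vs. a transcript.
# Assumed figures: 16 kHz, 16-bit mono audio; ~150 words/minute of speech,
# ~6 bytes per word of transcript text.

SAMPLE_RATE_HZ = 16_000
BYTES_PER_SAMPLE = 2
audio_bytes_per_sec = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE  # 32,000 B/s

WORDS_PER_MIN = 150
BYTES_PER_WORD = 6
transcript_bytes_per_sec = WORDS_PER_MIN / 60 * BYTES_PER_WORD  # 15 B/s

ratio = audio_bytes_per_sec / transcript_bytes_per_sec
print(f"audio: {audio_bytes_per_sec} B/s, transcript: {transcript_bytes_per_sec:.0f} B/s")
print(f"transcripts are ~{ratio:,.0f}x smaller to upload")
```

Under these assumptions a transcript is on the order of a couple thousand times smaller than the raw audio, before even counting the server-side compute saved by inferring on-device.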

------
regnerba
That live transcription built into Android Q, working on-device and in any
app, is incredible. I can see so many uses for it myself, and I can only
imagine how useful something like that would be for people who have difficulty
hearing.

------
pier25
On-device speech recognition is pretty great, but as a whole the keynote was
fairly conservative. Probably because they promised too much in previous years
that either never shipped or quickly failed.

------
jayd16
As models start fitting on devices, what keeps it a competitive advantage? Are
specific models patentable? Copyrightable?

What's stopping Apple or a foreign company from shipping these models
themselves?

~~~
novok
Creating & improving that model is the competitive advantage. It's like asking
what the competitive advantage of a compiled executable that runs on customer
devices is. I'd guess models are copyrightable.

~~~
jayd16
But any would-be copycat could simply push out the new model a day later.

I'm also struggling with whether such a model should fall under copyright.
Models would naturally converge.

------
Findus23
The on-device Machine Learning sounds pretty amazing. Does anyone know if this
will be a part of AOSP or be a closed-source extension (like digital
wellbeing)?

