
The UX of AI - dsr12
https://design.google/library/ux-ai/
======
nukeop
It looks like it's just a way of reassuring the "common people" that all AI
systems are good, under human control, and helpful. Google's AI is often
centered on surveillance, profiling, marketing, and exploitation. We are being
spammed with pictures and videos of happy people playing with children and
animals so that we learn to associate AI, and Google's AI in particular, with
happiness and innocence, while it is simply a very powerful and unimaginably
advanced tool of corporate oppression.

I found a lot of factually incorrect statements in that article. Such as: "If
a human can’t perform the task, then neither can an AI." Don't we create AI
systems all the time to solve problems humans cannot?

~~~
nunodonato
> I found a lot of factually incorrect statements in that article. Such as:
> "If a human can’t perform the task, then neither can an AI." Don't we create
> AI systems all the time to solve problems humans cannot?

Not really; I think we create AI systems to solve problems humans take way too
long to solve. But I can't think of anything AI does that a human couldn't,
given sufficient time and resources.

Regarding the rest, I fully agree with you. Things like this are nothing but
scary to me.

~~~
aesthetics1
I feel like the statement is fundamentally contradictory, seeing as humans
have invented and trained the AI systems. By definition, that is something
that humans are doing.

On the other hand, it may be more insightful to say that if humans cannot
perform the task (when time is not a factor), then neither can AI. For
example, assessing non-tangibles, or something abstract like identifying
sarcasm in text without context.

~~~
TeMPOraL
I think this is dancing around a tautology - if humans can't do something "in
theory", it means they can't describe it, which means they can't program it in
AI. Obviously.

------
TuringTest
Regarding the interaction of AI and user experience, I like to use the spell
checker as an example.

Think what it means that a black-box process looks over every one of your
actions, sometimes deciding to "correct" some of them, more or less silently.
Now think what it would be like if you couldn't see the list of alternatives
that the AI is considering, and that you couldn't fix the mistakes made by the
corrector.

Until we get person-like "hard" Artificial Intelligence, the desired model for
AI is having a way to inspect all the possible alternate decisions that could
be taken, and most important, having the possibility that a human overrides
any one of those automated decisions (to the point of safely disabling the
whole system in extreme cases).
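The inspect-and-override model described above can be sketched in a few lines. This is a hypothetical toy, not any real product's design: the tiny dictionary, the fuzzy matching via `difflib`, and the override store are all invented for illustration.

```python
# Toy spell checker implementing the two properties argued for above:
# (1) every candidate correction is inspectable, and
# (2) a human override sticks and is never silently re-corrected.

import difflib


class OverridableSpellChecker:
    def __init__(self, dictionary):
        self.dictionary = list(dictionary)
        self.overrides = set()  # words the user explicitly told us to keep

    def alternatives(self, word):
        """Expose every candidate the checker is considering."""
        return difflib.get_close_matches(word, self.dictionary, n=3)

    def correct(self, word):
        if word in self.dictionary or word in self.overrides:
            return word  # the user's decision sticks
        candidates = self.alternatives(word)
        return candidates[0] if candidates else word

    def keep_as_is(self, word):
        """Human override: safely disable correction for this word."""
        self.overrides.add(word)


checker = OverridableSpellChecker(["their", "there", "the"])
checker.correct("thier")     # silently replaced with a close dictionary word
checker.keep_as_is("thier")
checker.correct("thier")     # now left alone: the override is remembered
```

The same shape scales up: the point is not the matching algorithm but that `alternatives()` and `keep_as_is()` exist at all.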

~~~
TeMPOraL
Oh yes.

Reminds me of a recent observation by Scott Alexander:

 _Dear @apple - your OS has a global spellcheck that autocorrects names of
medications to names of different medications, eg "duloxetine" to
"fluoxetine", without telling the user. Some clinics inexplicably continue to
use MacBooks. Please fix this before someone gets hurt._

([https://twitter.com/slatestarcodex/status/944739157988974592](https://twitter.com/slatestarcodex/status/944739157988974592))

There's nothing more infuriating than a system that "knows better" and which
you can't override (and have the override _stick_ ). Today's spellcheckers
generally break whenever what you're writing isn't in a single language, using
common words. Which, for me, means pretty much all the time, since most of the
chats and documents I write involve mixed Polish/English with domain-specific
vocabulary. And even if we had General-AI-powered spellcheckers, there _still_
would be a need to tell them no - after all, my spelling might be bad, but I
might be making an explicit stylistic choice.

AI systems need not remove the user's agency or control from the process.

~~~
gregknicholson
macOS is a service provided by Apple, not a tool that you control.

AI could be a very useful tool, but it's a very scary service.

~~~
TeMPOraL
Which is why I hate the very concept of Software as a Service.

------
mark_l_watson
I liked the article. I have worked in the field of AI since the 1980s and this
article gave me a different perspective.

Of course the elephant in the room is privacy concerns. It concerns me that
Facebook and Google track our actions in ways that we can’t really opt out of.
Facebook is worse.

As a decades-long supporter of the FSF, ACLU, and EFF, my attitude has
changed. I still occasionally donate to these organizations because they push
back for our benefit, but I live and work in a digital world and I want to be
as effective as possible in that world.

Google Assistant helps me get stuff done in ways that Apple’s
privacy-respecting Siri can’t, because of the information Google has. I would
love to own the camera showcased in this article and, at family and other
social events, get a few good pictures at the end of the day without having to
think about taking pictures.

People get to make their own choices, and for myself I mostly use DuckDuckGo
and a VPN so Verizon does not have my web history, but I compromise by using
select Google services.

------
chris__butters
Are Google still trying to brainwash everyone into trusting them? I find it
funny, given they are the ones who, if they could, would track our every
movement to then try and sell us something.

I can understand that, while ML/AI are still fairly closed-off systems not
understood by the general population, having someone with UX skills could
better design the interactions and help people trust AI/ML products
(preferably ones not developed by Google).

------
deanCommie
I use Google Photos to back up all my cell phone camera photos.

It automatically generates galleries from trips I make, and selects the "best"
photos.

I still review them and decide if I agree with Google's ML algorithm's opinion
of "best" (from a series of snaps of someone's face or a landscape).

While it's really good at removing blurry shots, aligning crooked ones, and
discarding those where someone half-blinked, it cannot AT ALL tell the
difference between a weird interesting endearing facial expression and a weird
strange off-putting facial expression.

And that's because the real data is not in the image. It's in the personality
that I know of the person. A picture of a person smiling who I know is dour or
unhappy is the one I want. Meanwhile for a person who's incredibly photogenic
and always has a flawless smile - but is usually consistent and bordering on
insincere - I'd prefer a candid shot where they're not looking their best -
but the real personality that maybe only I know of them is shining through.

These are not objective truths. My favourite photos of my wife are ones she
hates. And it's not even about having a different SELF-image from those that
others have of you - we also like different photos of our dog, and of our
mutual friends.

No ML model will be able to pick that, and unfortunately this isn't just a
case of "let's give them more data and wait".

If we trust the algorithms to make these decisions for us, we will just
encourage the rounding of all the uneven edges of our world until we settle on
a bland average existence where everyone has the same idea of what a "cute
baby" looks like.

This is already happening with information bubbles on Facebook and Twitter.

The trend needs to be reversed, not doubled-down on.

~~~
wyager
> No ML model will be able to pick that

Everything you described actually sounds like a pretty straightforward
challenge for ML; do online learning for preferred subject/expression pairs on
a per-user basis.
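A minimal sketch of what that per-user online learning could look like. Everything here is an illustrative assumption, not Google's actual system: the feature names are invented, and the update is a plain logistic-regression SGD step on each keep/discard signal.

```python
# Per-user online preference learning over (subject, expression) features:
# every time the user keeps or discards a suggested photo, nudge that user's
# weight vector with one stochastic gradient step of logistic regression.

import math
from collections import defaultdict


class PerUserPhotoRanker:
    def __init__(self, lr=0.1):
        self.lr = lr
        self.weights = defaultdict(float)  # feature -> weight, one ranker per user

    def score(self, features):
        """Estimated probability this user would keep a photo with these features."""
        z = sum(self.weights[f] for f in features)
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, features, kept):
        """Online SGD step on a single (photo features, keep/discard) observation."""
        error = (1.0 if kept else 0.0) - self.score(features)
        for f in features:
            self.weights[f] += self.lr * error


ranker = PerUserPhotoRanker()
# This user keeps candid shots of "katie" and discards posed ones.
for _ in range(50):
    ranker.update({"subject:katie", "expression:candid"}, kept=True)
    ranker.update({"subject:katie", "expression:posed"}, kept=False)
# The ranker now scores candid shots of katie above posed ones for this user.
```

Whether such a model could ever capture the "personality only I know" signal from the parent comments is exactly the point under dispute; the sketch only shows the mechanism is straightforward.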

~~~
xg15
Nope. If Katie once made a funny face because some situation on their trip to
Paris was hilarious, that doesn't mean I always/only want pictures of her
where she makes that same funny face for the rest of my life.

~~~
wyager
This doesn’t sound like an AI problem; it sounds like a context problem that
no one (not even a human) besides yourself could resolve with the available
data.

~~~
deanCommie
Right. And we don't need or want them to.

I want the smartest software engineers in the world to work on other problems
than how to recommend better photos for me.

------
xg15
I'm actually kind of impressed by this bit:

> _What if we could build a product that helped us be more in-the-moment with
> the people we care about? What if we could actually be in the photos,
> instead of always behind the camera? What if we could go back in time and
> take the photographs we would have taken, without having had to stop, take
> out a phone, swipe open the camera, compose the shot, and disrupt the
> moment? And, what if we could have a photographer by our side to capture
> more of those authentic and genuine moments of life, such as my child’s real
> smile? Those moments which often feel impossible to capture even if one is
> always behind the camera. That’s what we set out to build._

They are really taking the growing worries around ubiquitous smartphone usage
and trying to spin them into an argument for _having your whole life
constantly recorded by a camera_.

That's some next-level PR skills right there.

~~~
PuffinBlue
I'm realising that The Circle was documentary not parody.

~~~
mark_l_watson
That was a great book (movie was good also). It will be “interesting” to see
how the world will change over the coming decades, good and bad effects of
technology. I am in my mid 60s. From my point of view, the acceleration of
technological improvements is geometric and life feels different as each new
wave of technology changes our lives. I started using computers by entering
programs in octal after hand assembly and now I occasionally run jobs using
thousands of CPU cores.

------
lustig
So they are basically creating that creepy life-recording product from S01E03
of Black Mirror?

~~~
TuringTest
No, they're planting a creepy artificial intelligence on top of Black Mirror's
life recording product.

------
Scea91
> If a human can’t perform the task, then neither can an AI.

False. I am applying machine learning to network security. The data are
basically incomprehensible to a human but AI is quite able to find patterns in
those huge amounts of data.
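As a generic illustration of that kind of pattern-finding (not the commenter's actual system; the flow features and thresholds are invented), even a crude unsupervised anomaly score separates traffic no human could eyeball at scale:

```python
# Unsupervised anomaly scoring over network-flow features: fit per-feature
# mean/stdev on normal traffic, then score new flows by summed squared
# z-scores. Flows far from the baseline get very large scores.

import statistics


def fit_baseline(flows):
    """Per-feature (mean, stdev) learned from normal traffic."""
    return {k: (statistics.mean(f[k] for f in flows),
                statistics.stdev(f[k] for f in flows))
            for k in flows[0]}


def anomaly_score(flow, baseline):
    """Sum of squared z-scores; large means far from normal traffic."""
    return sum(((flow[k] - mu) / sd) ** 2
               for k, (mu, sd) in baseline.items() if sd > 0)


# Baseline: small, short flows (feature values are made up for the example).
normal = [{"bytes": 1000 + i, "packets": 10, "duration": 1.0 + i / 100}
          for i in range(100)]
baseline = fit_baseline(normal)

# A hypothetical exfiltration flow: scores orders of magnitude above baseline.
exfil = {"bytes": 9_000_000, "packets": 12, "duration": 1.1}
anomaly_score(exfil, baseline)
```

Real deployments use far richer models, but the asymmetry is the same: the raw feature matrix is unreadable to a human, while the statistics fall out mechanically.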

~~~
ccozan
Indeed.

The human body has limitations; maybe only imagination is limitless.

Another use case: no human can sustain ~10G for more than a few seconds [0]
while piloting anything that flies. A sufficiently capable AI pilot has no
issue at any G.

[0]
[https://en.wikipedia.org/wiki/G-force#Human_tolerance](https://en.wikipedia.org/wiki/G-force#Human_tolerance)

------
chriszelazo
We are still in the nascent stages of AI UX. I’m excited to see where it goes.

------
ACow_Adonis
"If you aren’t aligned with a human need, you’re just going to build a very
powerful system to address a very small — or perhaps nonexistent — problem."

I'm guessing a desire to avoid advertising or a certain corporation's tracking
and holding data on me and my family isn't going to be deemed a sufficient
human need :P

Additionally, like Facebook, I have no choice because even if I opt out (which
I can't do without already opting in), someone else out there taking video and
photos and including me opts me in regardless of my feelings.

And yet, apparently cameras and video analysis with the convenient side effect
of tracking and identifying me and my loved ones to produce even more
Facebook-wall worthy images is a human need.

A cynic (realist) might observe these principles aren't enough to stop one
from working on a very powerful system for a nonexistent problem, one that not
only doesn't meet a real human need but might actually be, in overall
summation, detrimental :(

------
vadimberman
> If you aren’t aligned with a human need, you’re just going to build a very
> powerful system to address a very small — or perhaps nonexistent — problem.

If you have to "align with a human need" and that problem is not obvious, this
sounds like a solution looking for a problem.

------
austincheney
> If you aren’t aligned with a human need, you’re just going to build a very
> powerful system to address a very small — or perhaps nonexistent — problem.

A common theme in AI research from major software companies who are throwing
thousands of developers and billions of dollars at this. AI isn't a giant
monolithic application like a search engine. So long as the vision of AI
remains around single, closed, data-oriented, monolithic applications, AI will
continue to be only as impressive as it is now. It will get faster as the
hardware gets faster, but it won't be what tech evangelists are hyping it up
to be.

------
AIrabuYU
What if the people I'm familiar with are the entire population of the country?

~~~
H4CK3RM4N
Well then the AI will help you capture every moment _you_ care about.

------
Insanity
To be honest, whilst reading this article I was more "shocked" by this 'clip'
and it kind of took away from the information they were trying to convey.

This looks creepy to me, and I probably would not trust the device enough to
use it for important moments. And for the not-so-important moments I really
don't need pictures or videos.

I realise the point of this article was not really this specific product, but
damn, it did take my focus away from the content.

------
restlessmedia
Quite surprised at the use of gifs on this page; just the images weigh in at
92MB.

------
frostburg
Besides the PR, even the purported goal is less than worthless. Clearly what
photography needs is more banal snapshooting, now with editorializing AI
enforcing utter conformity.

------
JoeDaDude
Re: the kid with the basketball. How is this the best photo? The AI chose a
picture centered on the basketball, but which trimmed off the top of the
child's head.

------
titzer
I'm extremely cynical and distrustful of AI-first. I don't accept the three
"truths" FTA:

>>>>>

We developed the following truths as anchors for why it’s so important to take
a human-centered approach to building products and systems powered by ML:

1.) Machine learning won’t figure out what problems to solve. If you aren’t
aligned with a human need, you’re just going to build a very powerful system
to address a very small—or perhaps nonexistent—problem.

2.) If the goals of an AI system are opaque, and the user’s understanding of
their role in calibrating that system are unclear, they will develop a mental
model that suits their folk theories about AI, and their trust will be
affected.

3.) In order to thrive, machine learning must become multi-disciplinary. It's
as much–if not more so—a social systems challenge as it's a technical one.
Machine learning is the science of making predictions based on patterns and
relationships that've been automatically discovered in data. The job of an ML
model is to figure out just how wrong it can be about the importance of those
patterns in order to be as right as possible as often as possible. But it
doesn't perorm [sic] this task alone. Every facet of ML is fueled and mediated
by human judgement; from the idea to develop a model in the first place, to
the sources of data chosen to train from, to the sample data itself and the
methods and labels used to describe it, all the way to the success criteria
for the aforementioned wrongness and rightness. Suffice to say, the UX axiom
“you are not the user” is more important than ever.

<<<<<

My take:

1\. I like the "human-centric" AI goal, but in the end we're inevitably going
to move slow, faulty humans out of the loop. No matter how "thoughtful" we are
along the way, the inevitable result is that we always want to take humans out
of the loop to save money, avoid hard work, and automate it all away, because
of sheer economics. It's a mistake to think AI is not going to figure out what
problems to solve, or that we are not going to let AI make its own goals (it
already makes its own subgoals at multiple levels).

We are continually incapable of figuring out what problems to work on. While
the planet warms up, we waste energy on cryptocurrency bubbles. While the
ocean fills up with trash, we make chat apps and self-driving cars and better
smartphones. These decisions are dumb. We'll eventually realize that we suck
at focusing on the right problems and put AI into the decision-making
processes. Worse, we'll want to (stupidly, IMO) pursue artificial general
intelligence to do that. Each step we get closer, we'll just hope for more and
better outcomes from the AIs, and cede more and more decisions to the AIs. And
then one day we'll find it's terrifying to have an inscrutable AI develop its
own goals at a superhuman pace.

2\. AI's goals are always opaque, especially when the AI system is run by a
massive, opaque, for-profit corporation. We all know what such a corporation's
ultimate goal is: make as much money for shareholders as possible. It's great
that we tell ourselves stories about how awesome AI is for people, but make no
mistake, AI is just the latest weapon in the ever-escalating pursuit of
perpetual exponential growth. Who the hell knows what its goals will be in the
end.

3\. This is an unparseable smokescreen going in six directions at once.
Translation: we have no idea what we are doing or why, or whether it is going
to work. But we are committed to it, and we're full of blind faith in
technology. Maybe if we had more sociologists it would work?

------
realworldview
More ads from the ad company. _yawn_

------
Siecje
Is the Clips still for sale?

