
Google Duplex might make my design job redundant - xbmcuser
https://thenextweb.com/podium/2019/06/29/google-duplex-gui-ai-design/
======
danShumway
> _Duplex is making websites redundant. Designers like me are now faced with
> the possibility that we could ‘optimize’ the experience by simply removing
> it altogether and have the AI interact with the server instead._

This was one of the original points of HTML and the original design of the
web. The Semantic Web meant that 3rd party and automated assistants would be
able to control it. Berners-Lee described it like so:

> _I have a dream for the Web [in which computers] become capable of analyzing
> all the data on the Web – the content, links, and transactions between
> people and computers. A "Semantic Web", which makes this possible, has yet
> to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy
> and our daily lives will be handled by machines talking to machines. The
> "intelligent agents" people have touted for ages will finally materialize._

Everything comes full circle, and there is nothing new under the sun.

Where voice assistants are concerned, this is a largely unexplored UX space.
Voice assistants in the future are not going to be the same as they are today,
because our design trends are going to evolve as we learn more.

If those trends move towards predictable interactions over interpreted ones,
and if we decide that there are times we want to use assistants without
physically talking to them, will we see a resurgence of text-based assistants?
Will we eventually go full circle all the way back to command lines, just
under a different name?

~~~
haydn3
I think the technology is different today, there are more qualified people, and
the futurists and the like will help make it stick this time.

The push toward automation, and toward having everything 'cloud-based' and
'secure', will probably come to rely on time-saving, squared-away services
like this one.

Businesses had no idea about any of this back when HTML was invented. Now that
we understand how the web works, and interacting with it has become enough of
a chore that we'd benefit greatly from automating it away, who knows!

I think it's worth a shot to automate something like this, which is already
tech-focused and surrounded by talent, so that automating other things comes
closer within reach.

That's what it's all about, really: time-saving and automation.

------
hawaiian
That the author considers the command line an archaic mode of human-computer
interaction makes it difficult for me to take his commentary on Google Duplex,
let alone AI and HCI design, seriously.

Secondly, I have a hard time thinking design is anywhere close to obsolete.
Google Duplex sounds like an adapter that will make reservations for you upon
voice command. This will not obsolesce anything except a few forms on most
websites. Design encompasses _much_ more than how a form looks or functions.

~~~
9nGQluzmnq3M
They don't say archaic, they say "learning to code is beyond the reach of most
people", which seems like a fair statement.

My kids learned how to use touchscreens as babies. They learned how to
manipulate GUIs around the time they entered school. But they still regard my
terminal window as black magic, and I'm not sure how to even _start_
explaining what's happening when I punch in commands.

That said, I agree the article is overstating the impact this will have:
computers talking to computers has been a thing for a long time; they just use
APIs (which are carefully crafted to be as unambiguous as possible) instead of
attempting to parse ambiguous human speech and pipe it into arbitrary web
pages.
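
To make that contrast concrete, here's a hedged sketch (the names `Reservation` and `book_table` are illustrative, not any real API): a machine-to-machine call carries typed, unambiguous fields, while an assistant like Duplex has to start from fuzzy speech and reconstruct those fields itself.

```python
# Illustrative only: `Reservation` and `book_table` are made-up names,
# not a real booking API.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reservation:
    party_size: int      # exact integer, no guessing
    when: datetime       # exact timestamp, no "Friday evening-ish"

def book_table(r: Reservation) -> str:
    # Machine-to-machine: every field arrives typed and unambiguous.
    return f"Booked for {r.party_size} at {r.when.isoformat()}"

# The API call is trivially parseable...
print(book_table(Reservation(party_size=4, when=datetime(2019, 7, 5, 19, 30))))

# ...whereas a voice assistant must first resolve utterances like this
# into those same typed fields:
utterance = "table for a few of us sometime Friday evening-ish"
```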

~~~
_emacsomancer_
> and I'm not sure how to even start explaining what's happening when I punch
> in commands.

Though perhaps not trivial, it seems easier than, say, explaining what you're
doing when you fix/manipulate things on a car engine.

The terminal uses a form of language to give commands, and the result is the
execution of the command and/or some sort of printed output.

`ls -lah` could be explained as 'a fast way of inputting' the equivalent of
"Alexa, tell me in detail what files are in this directory". (Yes, you would
have to explain something basic about files and directories, but that still
seems reasonable.)

~~~
saagarjha
FWIW, I’ve given this talk to people unfamiliar with the terminal and they’ve
been able to get the gist of how basic things work in under an hour.

~~~
beenBoutIT
That's because finding people unfamiliar with the terminal that are willing to
spend an entire hour in front of a terminal learning how to use it is the hard
part.

~~~
_emacsomancer_
My original suggestion was envisioned as a 3–5 minute explanation.

------
zerotolerance
"Tony Aube - Design @ Google" This is an advertisement.

------
EugeneOZ
I absolutely hate using voice-controlled assistants. Not because they are
annoyingly dumb, and not because they are only good in the areas their
developers prepared them for, but because this way of communicating is much
slower and absolutely not private, at all. Maybe most of the population lives
alone these days, but I have a family, and I'm not ready to speak my web
surfing out loud in front of them. It would be even more awkward in an office,
or on a train or plane. The only good place for voice-controlled assistants is
a car, I think. And a spaceship.

~~~
hadsed
I'm not sure it's that slow. In fact, according to this chart it could
potentially be one of the fastest: [https://www.wired.com/2016/01/phil-
kennedy-mind-control-comp...](https://www.wired.com/2016/01/phil-kennedy-mind-
control-computer/amp)

(It seems I can't link the actual image, but the article has a chart that
breaks down the speed/information density of different types of
communication.)

------
Skunkleton
At each stage, the most expert remaining users are left behind. Many
professionals extensively use the CLI. The majority of the remaining
professionals use a traditional GUI on a traditional OS. Most people using
smart phone UIs are not professionals, but some of them are fairly expert in
their interactions. Those using AI UIs are generally just crossing their
fingers to avoid pulling their phone out.

Maybe someday AI will be a legitimate user interface, but I don't think we
know what that looks like yet.

~~~
Mirioron
I think we know what we want it to look like - like a conversation with
another human, but we don't know how to get there.

------
danShumway
This isn't necessarily on-topic, but how is Duplex going to deal with Captchas
-- particularly when it's doing 100% automated things like remembering to book
an appointment in the future?

Are we going to see an industry shift away from the belief that a physical
human needs to be at a website in the future? Is there going to be some kind
of back-door where Duplex won't need to solve Google Captchas?

It's hard for me to look at Duplex without thinking that it's something of an
admission that bots and automated assistants are a legitimate way to interact
with the web, and that we should be trying to block behaviors, not agents.

~~~
blazespin
Yes, on the last point. Captchas are meant to increase revenue by decreasing
fraud, not to block AI from helping customers spend money on your service.

~~~
visarga
Yes, but both AI and fraudsters use bots. How would the site operator know if
it's a benign bot or not?

------
krzat
Something seems off here: the Google AI was probably trained on these designed
forms in order to be able to fill them.

If designers stop creating these forms, what will the AI use? It seems like
some kind of API would be needed, but a typical REST API is not detailed
enough to support this.

So the API would somehow need to be created (a potential job for a UX person)
or inferred from something.
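
One plausible shape for such an API already exists in embryo: schema.org "potential action" markup, which a site can publish so an agent knows how to invoke a booking without scraping a form. A hedged sketch follows; the `@type` names (ReserveAction, EntryPoint, FoodEstablishmentReservation) are real schema.org vocabulary, but the URL and template parameters are invented for illustration.

```python
import json

# A sketch of schema.org-style "potential action" markup a restaurant
# site might publish for automated assistants. The URL template spells
# out exactly which parameters exist -- the detail a typical REST API
# description leaves implicit.
reserve_action = {
    "@context": "https://schema.org",
    "@type": "ReserveAction",
    "target": {
        "@type": "EntryPoint",
        "urlTemplate": "https://example.com/reserve?date={date}&partySize={partySize}",
    },
    "result": {"@type": "FoodEstablishmentReservation"},
}

print(json.dumps(reserve_action, indent=2))
```

An assistant that understands this vocabulary could fill in the template directly, with no GUI in the loop; that is roughly the machines-talking-to-machines Semantic Web machinery quoted upthread.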

------
nikkwong
Sensationalist headline. This totally ignores the fact that not all web
traffic is transactional; a sizable percentage of it is—surprise surprise—for
leisure, and in those cases there is no substitute for interacting with an
interface directly.

------
Animats
Wasn't 2016 supposed to be the year of the chatbot? Whatever happened to that?

These things have been around for years. There was an MIT Media Lab demo long
ago. It's basically Amazon Echo for services, right? Google might make this
work by insisting that services offer an API that they can call to get
business from Google users. Of course, Google will want a cut of the revenue.

They'll probably get it going for food delivery and car services, then bail on
anything that isn't basically online ordering. Airline reservations, maybe.
Doctor appointments, probably not. Appointments with important individuals,
unlikely.

------
mark_l_watson
An interesting article. An ‘existence proof’ of (reasonably) effective voice
UI interactions is the Apple Watch. I find it liberating to leave my phone at
home and still stay connected with my watch.

To use the example in the article, I can imagine making the request to Siri on
my watch, having Siri read back a reservation, followed by “is this OK?”

------
ProxCoques
The "punch card, command line, GUI, touch screen" chart makes no sense as a
progression in "facilitating HCI". Touch screens are simply an input method
for GUIs. Why not include voice input, eye tracking, or gestures in that case?

------
blazespin
This is why watches are a big deal: being able to talk to your watch and have
it order for you or show you images.

Too bad I hate wearing watches.

------
sametmax
"Our proprietary locking product will replace the now common standard and open
way of doing it"

No thanks

------
codesushi42
Right. Remember when the radio eliminated the need for books, magazines and
newspapers?

------
rootsudo
NLP is the future.

