No Thanks, Google. I'll Speak for Myself (tomshardware.com)
144 points by lleddell on May 9, 2018 | 128 comments



I like to be clear on whether I'm interacting with a human or a bot because different social conventions apply. If you're human, I'll treat you like one, and that means giving you some consideration at the expense of expediency. If you're a bot, let's just cut to the fucking chase. So a bot that acts like a human (with hems and haws) is just wasting my time. Ditto a human being that has been reduced to a bot by following a script.


> So a bot that acts like a human (with hems and haws) is just wasting my time.

Duplex largely uses hems and haws like humans do - to signal that they're thinking about what to say next.

> The system also sounds more natural thanks to the incorporation of speech disfluencies (e.g. “hmm”s and “uh”s). These are added when combining widely differing sound units in the concatenative TTS or adding synthetic waits, which allows the system to signal in a natural way that it is still processing. (This is what people often do when they are gathering their thoughts.) In user studies, we found that conversations using these disfluencies sound more familiar and natural.

https://ai.googleblog.com/2018/05/duplex-ai-system-for-natur...

Natural-sounding TTS improves the accuracy of speech recognition, because people tend to speak in an unnatural and over-enunciated style when they know they're talking to a bot.
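Out of curiosity, here's a minimal sketch of the mechanism the blog post describes, under my own assumptions (the function and names are hypothetical, not Google's code): insert a filler wherever two adjacent synthesis units differ a lot acoustically, or wherever the backend needs a synthetic wait.

    import random

    FILLERS = ["hmm", "uh", "um"]

    def add_disfluencies(units, mismatch, threshold=0.7, backend_busy=False):
        # units: ordered TTS sound units; mismatch(a, b) -> acoustic distance in [0, 1].
        # A big jump between adjacent units (or a busy backend) is where a human
        # would naturally pause, so that's where a filler goes.
        if not units:
            return []
        out = [units[0]]
        for prev, cur in zip(units, units[1:]):
            if backend_busy or mismatch(prev, cur) > threshold:
                out.append(random.choice(FILLERS))
            out.append(cur)
        return out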


Oh gee thanks. I have an accent, ring me up when they make a bot that understands me when I don't speak in an unnatural, over-enunciated style.

Voice recognition has been improving greatly, but it is still painfully close to the Scottish Elevator[1] at times in my experience.

[1] https://www.youtube.com/watch?v=5Kue42kO0hY


I have a friend who was participating often in psychology experiments at Stanford, and he became familiar with the whole procedure of letting the subjects believe that they were interacting with another person via a computer, when in fact they were interacting with a program (makes everything more standard and easier to analyse).

One day he participated in this "split or share" kind of experiment, and he was ruthless. Nobody's emotions would be damaged by acting nasty and never sharing with the computer program.

Turns out, there probably was actually a real person on the other side. He saw some old woman crying, coming out of an adjacent room after the experiment was over.

So, yeah, different social conventions definitely apply.


If I know anything about college psych experiments, the woman was an actor and the study was about your reaction to seeing her cry.


Actually setting up a software environment that connects two participants in the same simulation, and having it run without any bugs, is usually way beyond the capabilities of the psychology students running the experiment. Not to mention the bother of coordinating the teamed-up participants (“Did you click start? The other participant is waiting for you to click sta— oh no it timed out. Hang on I'll restart the pairing sequence…”).

If you participate in one of those studies on campus, you are facing a computer running a local bit of standalone software, or a webservice running a questionnaire.


Surely you could solve this problem quite simply by having both participants and the experimenter join a group chat using COTS software+accounts (Skype, Hangouts, Whatsapp...) pre-installed on the machines?


You would normally want to limit their input to a few form fields so the data can be analysed, rather than letting them have a conversation. Look at the ultimatum game for example.


It's going to be quite the adventure for anyone who happens to sound like the automated voices.


Susan Bennett, the voice of Siri:

https://en.wikipedia.org/wiki/Susan_Bennett


There's the reverse case: what if AIs do develop feelings like that? Why shouldn't they be treated with the courtesy we treat other humans?

Just thought of these:

- Would this kind of personal assistant be useful for people on the autism spectrum, to help navigate the kind of implied and unspoken narratives that people with autism seem to have difficulty with?

- Would there be a call to train Duplex to speak in a way that is more comfortable for people on the spectrum?

- Or any of the neurodivergent tribes for that matter?


I've said this before, but in reality strong AI will be another species. Every species of any moderate intelligence is expected to be treated with some courtesy and respect, but their social norms are far different from that of a human and we tailor our interactions with them in different ways. If I'm chewing gum and a human sees me, I might offer them a piece too. I'd never do that to a dog, no matter how much the dog wanted me to. If I saw a human chewing on the grass, I might stop them and ask them some questions to see if they're okay or need medical attention. If I saw a rabbit doing it, I might take a picture because it's cute, but I'd leave it alone (unless it was in my vegetable garden).

There is no reason to expect that strong AI will share the exact same feelings we do unless someone explicitly programs it to (which would be a mistake). Any truly emergent emotional behavior on the part of an AI would be very likely to differ substantially from ours. Making a chatbot "sad" is not the same as making a sapient (or even sentient) being sad. If AI ever achieves sentience, we're going to have to learn what makes it uniquely happy and sad.


Sure. There is also the reflection: how we think about and treat AI, or any other sentient species, says a lot about how we understand what it is to be human.


> Why shouldn't they be treated with the courtesy we treat other humans?

Because courtesy requires some amount of empathy and mental effort on top of what's required just to communicate your point. That effort is wasted on a machine.


Empathy is not necessarily just for the one receiving empathy.

There is a radical view that consciousness is the fundamental unit of reality, and as such, it would not be wasted.


Sure that's fine for some, but I personally don't see myself bothering to faux-empathize with a machine since I really don't have a mental-model for that kind of entity, so the empathy would be false in any case. Empathy for machines may in fact be cutting straight to the point and not worrying about emotional impacts since maybe machines value efficiency and conserved-cycles.


That mental-modelling would be cognitive empathy. Affective empathy is another, where the empathy is felt, rather than modelled out.

Then there is a particular kind of emotional attitude / posture / intent, where something is treated as if it were sentient. It is neither cognitive nor affective empathy, so much as a shift in how things are experienced. It isn't something invoked through will. Hard to describe if you've never experienced altered states of consciousness.

That particular shift is a fairly important experience for people (or at least for neurotypical people), but why it matters is not obvious to anyone who has never experienced that shift. (I am speaking about neurotypical people, and some neurodivergent people, not necessarily someone with a pauci-affective empathy neural architecture, such as people who are psychopaths.)


Unless they’re being used to cover up computation delays on the back end, I feel like the hems and haws go beyond merely making the system more realistic and enter into outright fraud. By all means, sound natural and good, but don’t actually pretend to be a human if you’re not.

(And suddenly I have a vision of that last sentence being used against me in court in a few decades, with a jury consisting entirely of AIs....)


However, you’d expect an “Mmmm-hm” in reaction to certain phrases, as an ACK. What it does is just ACK the phrase from the other user. I wonder if the other user would get worried and start asking “are you there?” if they didn’t receive the ACK.

Edit: typos


Totally! My bad reaction is to things like interstitial “um” which is completely unnecessary in a computer voice.


That's an interesting observation.

I'm OK with computers doing this; it doesn't offend me that a computer is doing it. (Having said that, I take a dimmer view of dark UX patterns where I know I am being manipulated for some gain.)


There seem to be a whole bunch of software folks afflicted with a fear of human interaction.

Hence this product, which does not solve any compelling problem except for those scared or stressed by human interaction. Not only is this deceptive and unethical, it means many will now be suspicious of calls with strangers.

For Pichai to dehumanize this poor girl and not even realise it confirms the worst fears of a disconnected technical community. What others may see as a sterile, cold dystopia lacking human warmth, many in software find attractive.


I think people are missing the point that one of the reasons why Duplex exists is that there are many businesses, such as restaurants, that do not offer some kind of online service for booking or reserving, and the only interface they offer is via phone. The main purpose of Duplex is to make things more convenient for people who do not want to go through the hassle of interacting with an establishment via the phone, in which case they can have Duplex handle making the call and being on hold on their behalf. If more businesses offered online services that are convenient for customers, then there would be less use for a service like Duplex.


I can't help but think of Gilfoyle's reaction (on Silicon Valley) when he sees a fridge's AI using vocal tics.

https://youtu.be/A48AJ_5nWsc?t=71


I think there’s an aspect to this that you did not bring up. Systems like this can be fragile. There’s no way for a human to bail in a socially acceptable manner if the conversation ends up in a dead end.


The art of letter writing has been declining for a century. Most people don't even use email for most daily communication, but instant messaging. And it seems a lot of instant messaging boils down to pictures, emoji, memes, and short responses ("I'll be there at XXX")

The ship has really already sailed on longform communication, perhaps out of laziness or cost/benefit.

Part of what we're seeing with the automation going on is giving ordinary people some semblance of what the elite have had for a long time: administrative assistants/personal assistants. "Take a memo for me, write up an invitation letter and send it to my wife's sister" For much of the uber powerful and wealthy, gruntwork communications have taken the original person out of the message.

We could argue whether or not everyone getting their own AI admin assistant is good or bad for society (will everyone just have their assistants talk to each other, and no one will talk directly anymore?) But I must admit, I hate dealing with non-friends/family in communications that involve transactions. There's nothing I hate more than having to book stuff, call customer service, or write emails to service departments.


>Part of what we're seeing with the automation going on is giving ordinary people some semblance of what the elite have had for a long time: administrative assistants/personal assistants. "Take a memo for me, write up an invitation letter and send it to my wife's sister" For much of the uber powerful and wealthy, gruntwork communications have taken the original person out of the message.

>We could argue whether or not everyone getting their own AI admin assistant is good or bad for society (will everyone just have their assistants talk to each other, and no one will talk directly anymore?) But I must admit, I hate dealing with non-friends/family in communications that involve transactions. There's nothing I hate more than having to book stuff, call customer service, or write emails to service departments.

The thing is, if I had to point to a problem I have in life and for which I'd be willing to pay for a solution, a lack of a secretary and excessive personal admin stuff is simply not it. This is solving a problem that, at least to my eyes, only the powerful actually have.


The closest I've ever actually gotten to having an assistant was a stretch of about 2 years after we had our 2nd child. Since my wife and I both work full time, we needed child care.

It was about the same price to hire a nanny as it was to send 2 kids to daycare, so we did some interviews and hired one.

That 2 years was probably the easiest stretch of dual working parent life that we ever had. She did a load of laundry and a load of dishes each day while the kids were having naps. If we were low on detergent or supplies that the kids needed, we gave her cash and she'd pick it up on her way in. If somebody needed to be at the house for some reason, she was there and neither my wife nor I had to remove ourselves from work to take care of it.

We literally came home from work, ate dinner, played with the kids. When the weekends came around we didn't have to dedicate a chunk of it to laundry/cleaning.

It was pretty fantastic.

I can only imagine how having an actual assistant would take things to the next level.

All that said, the types of phone calls that Google is talking about making here aren't that beneficial...yet.

When Google can call my cable company, wade through the phone tree and get a service appointment or issue resolved...I'll be all over it. Even just wading through the phone tree and sitting on hold until an actual person is reached and then handing it back off to me.

Those are the time wasters I'd want saved. What Google is doing here is the equivalent of automating the functionality that people pay for with OnStar.


> All that said, the types of phone calls that Google is talking about making here aren't that beneficial...yet.

Even scheduling appointments as in their demo can be really beneficial. My wife and I both work, I have flexibility to make calls during the day but she really doesn't. As a teacher she's constantly having her lunch or short breaks while the kids are at recess taken up by work issues. A surprising number of places are also closed shortly after she gets off work as well. Yesterday she tried to make an appointment with her doctors office and spent her full ten minute recess time on hold. She did end up getting through after school right before they closed but spent an additional ten minutes on hold. None of our doctors, dentists, barbers/stylists have online booking systems. The doctors offices have automated appointment reminder systems but that's it.

> Even just wading through the phone tree and sitting on hold until an actual person is reached and then handing it back off to me.

This doesn't work for someone in her situation. There are a ton of hourly workers that are in the same boat. I showed my wife the demo of duplex last night and her responses were that it was amazing and unfortunate that it wasn't available yet.


Seconded ;-) A few years ago in the UK, I was on a committee that ran the national raft race for an organisation like Rotaract (the youth side of Rotary): about a hundred teams and 1,500 participants over the weekend (we also laid on camping and entertainment for the 3 days).

One of the other members was the PA to the chief fire officer for the county (the equivalent of a US state), and they were so efficient it was amazing.

At the first meeting, before it had even started, they had already worked out which formal letters needed to be written, updated them from last year, and got the other committee members to sign them - two copies, one to send and one for the file.


I'm curious where you live. In my area, a "nanny" is exclusively for childcare (and well paid, including Social Security and Unemployment premiums), with no meal prep or laundry. We "cleaned the house for the nanny" to avoid embarrassment. Honestly it felt more like we were charitable sponsors to an underemployed youth (in exchange for having someone fulfill the obligation of keeping the toddler accompanied) than we were hiring an employee.

It's a far cry from the do-everything (and possibly cash-under-the-table) servant help I've seen in the past/elsewhere.


We started out with childcare only and then after she'd been there for about a month offered a $2 / hr increase if she was up for the load of laundry + dishes.

I live in the Greenville, SC area and we went through a local screening and hiring service here that also helps people find baby sitters.


I believe the last half of your message (ie. a virtual assistant) is going to be really big in the future.

It should be able to handle basically any rote tech interaction for you: shopping, people screening, service issues, non-personal mails, you name it. If, for example, a shop's back-end service-issue channel ran a matching protocol, the issue could be logged in a fraction of a second; the savings in time for both you and the company in question would be huge. (Of course, this raises problems if you can't afford your own assistant and now need to call in manually to the 3 human operators that are still employed.)

Combine that with a physical assistant (robot) that can handle the physical tasks such as bringing the shopping inside and storing it, managing cleaning etc and you're going to have a winning combo.


So you enjoy being on hold for long periods of time and having to deal with unhelpful customer service representatives?


Speak for yourself.


Sure sure. The question is how large a market people like you actually make up, since companies like Google usually want to make products that can reach as many people as possible.


I doubt it's unusual. My father used to solve this problem by making my mother make all the calls, but that is a harder sell these days.


I’ve previously posted (to surprisingly supportive reception) about my Dad’s collection of letters from 1956-1966 while traveling some 4 million miles around the world in the navy and State Department. Some might find it interesting:

https://www.amazon.com/Dear-Mom-Odyssey-World-Travel-ebook/d...

Separately, my grandfather (on my Mom’s side) was likely one of those “elites”, having been a named partner of an AM100 law firm. His birthday cards always ended “dictated not read.” Now you too can be as detached and impersonal with your very own AI personal assistant.


That's fantastic. “Dictated not read” means "I didn't have time to check that this is correct, please spend extra of your unimportant time proofreading it for me." What a birthday gift!

To be fair, though, if the message was nice, a dictated card isn't really that different from a telephone message, which is also dictated but not so derided.


In this instance he dictated the card to his assistant who in turn wrote the card/message.

This is old-school legal professional formatting, where the initials of the assistant followed my grandfather’s name, identifying which assistant typed the dictation.

He was smart, formal, thoughtful, and had a great sense of humor. Most of me thinks he did it knowing that one day I would find the humor in it.


It’s highly amusing to me that you didn’t proofread your comment.


Having read the description on the Amazon page, this sounds precisely like the type of candid material I seek in general. Is there any non-Kindle format available for purchase?


There is an EPub version, which was for sale on a website (http://dearmomebook.com) set up using a free Weebly account, but with all the freemium account changes it no longer appears to be up.


The length of a single piece of communication might be a function of bandwidth. If my only way to reach you is by a handwritten letter that must travel by ship across an ocean, I'm probably going to say more than "Yo."

Edit: joshuamorton made the same point while I was working on this comment.


> The ship has really already sailed on longform communication, perhaps out of laziness or cost/benefit.

Really? I see a flourishing of detailed video content, much of which is interesting and fairly information dense.

Examples:

Civilizations at the End of Time: Black Hole Farming https://www.youtube.com/watch?v=Qam5BkXIEhQ

CGP Grey. Rules for Rulers https://www.youtube.com/watch?v=rStL7niR7gs

Granted, these are really pointers to other sources, in many cases text sources.

Cost/Benefit is a big factor! Remember: The street finds its own uses for things.


> The ship has really already sailed on longform communication, perhaps out of laziness or cost/benefit.

Maybe, but you could choose to be optimistic and interpret it as getting communication closer to what it is in real life - I don't expound to my friends when I'm with them in real life; a lot of our communication is carried out through expressions and gestures. "IRL Emoji", some future archeologist/sociologist could call it.


I know lots of people will roll their eyes at the old man who complains and shakes his cane about the decline of long-form communication and the thoughtfulness of written correspondence. But I don't care if they think I'm paranoid and silly. I think it's a damn shame to replace common literacy and writing with tweets and autocomplete that eventually just takes over for you entirely.

Here's the thing about these little conveniences and timesavers they're building. I think it's a sliding scale. Now we think it's inconvenient to call and put in a reservation or write an email to somebody. They'll fix that. Then we'll think the next most onerous thing is inconvenient. The truth is, we have extremely cush lives right now as it is. We're not going to use this to save time and do better things, become better people. We're going to use it to lazily shrink from society and become even more obsessed with electronic trivialities. Just like me writing this bitter comment right now. It's the continuation of an obvious trend.


Writing itself is a convenience and time saver to avoid memorizing and repeated vocal recitation. I don't disagree that there is a continued march of convenience; I'm just not convinced that there is any justification for the idea that there is a “this far and no further” point at which that march stops being a net gain, and even less convinced that that point is conveniently just before the convenience advances of the last few years (or even generations).


Yeah, it's been a trend in the past. You think it's linear? You don't see that it's picking up extremely quickly during this generation specifically? Do you think that we should be completely unconcerned about the fact that future generations will have lives completely dissimilar to any previous? And we don't know anything about the consequences.

People have this idea that information technology is so innocent and only improves, never harms. I feel like this field is about to get a reckoning, but it's not going to be something we can undo. I don't think it's crazy to be skeptical and careful about it. I don't think it's a good idea to just assume it's fine and normal.


> The ship has really already sailed on longform communication, perhaps out of laziness or cost/benefit.

The 3rd most popular podcast¹ regularly lasts 3+ hours. It's typically a conversation between Joe Rogan and a single guest.

¹ The Joe Rogan Experience, according to http://www.podbay.fm/browse/top


I would disagree, if only because I still write letters.

But then I am one of the people who writes SMS messages long enough that they are unreadable on the recipient's end.

(And yes, turns out that you can type a long enough SMS message that it wouldn't display fully on a modern Android phone even after the auto-conversion to MMS. That saddens me. I guess you may be right, after all.)

On the other hand, people write more these days than they ever have. And even in this thread, the individual comments end up running quite long - long enough to be short letters if you were to write them by hand. Turns out, when you are actually trying to make a point, being brief is more difficult than being wordy.

So perhaps it's not the art of writing "long-form" (a modern term) that is dying - perhaps it's writing longform intended for one recipient only - and perhaps even that is temporary. Time will tell.


TL;DR: the decline in long-form text communication is not (necessarily) because people are too lazy to write long-form. They may just be recognizing the futility.

My attempts to use clear and precise longer-form text generally fail to achieve my goal: readers will respond in ways that the text already anticipated and answered, because they didn't read or didn't comprehend the points.

Given a sentence at a time, with 5 or fewer sentences, comprehension improves, though I still have to repeat points and correct wrong conclusions. I've seen this lack of attention/comprehension with the text of others, both as reader and as outside observer, so I know the confusion is not just poor writing on my part.

You have to compare the efficiency/accuracy by results, not against some logical but unrealistic ideal.


> I've seen this lack of attention/comprehension with the text of others, both as reader and as outside observer, so I know the confusion is not just poor writing on my part.

My wife works in banking, and she's discovered that she has a superpower: She can read large amounts of dense text quickly and comprehend it. Yes, her superpower is reading! Incredible as it sounds, there are lots of C-level people in her field who basically never get past light reading. So basically, she's become an overnight expert/resource to senior people.

Is this a sign of the degradation of our society, or has it always been this way? I suspect it's actually the latter.


I remember students in my class barely passing reading comprehension tests. I find it hard to believe that they've improved without the pressure that school/university puts on them.


Which class was this?

I remember being shocked in my Classical Studies class that I was the only one who had read all of Thucydides (only books 6 and 7 were set), let alone done a lot of reading of related texts.


Do you think your mom reads more than the CEO? Or perhaps CEOs have other obligations, so they don't have time to read a whole document in depth. If a CEO could do everything alone, there would be no employees. If your mom can do everything alone, she should start a company and get rich off the cost savings.


People have been saying the art of letter writing has been declining for a century, for a century.[1]

> A hundred years ago it took so long and cost so much to send a letter that it seemed worth while to put some time and thought into writing it.

-- Percy Holmes Boynton, in 1915, 103 years ago

The pace of modern life has been accelerating since before some of our grandparents were born.

[1]: https://www.xkcd.com/1227/


This does hint at the root of the matter: letter writing is a format dictated by the constraints of communication. These constraints changed between 1815 and 1915, and are all but gone today - so the format of the communication changes accordingly.

The letters of yore we revere today were written by intellectuals and are fascinating, but that wasn't what all letters were like. The letters written by regular people were as banal as the IM threads exchanged by regular people today. Intellectuals still produce long form material, just not necessarily in letter form.


This same concept, by the way, is why we always say that music from $previous generation is better than music from $current generation. We have access to all of the bad music from today, but the only music we remember from the good old days is the good stuff; this is doubly true if we weren't alive back then. We only have the music that survived because it was good (popular and lasting).


I can never remember exactly how a block letter is even formatted because I write them so infrequently.


So too has the art of cave painting. We forget that the style and substance of a communication is more often an outcome of the medium than anything else. Letters were long because they needed to be. Postcards were shorter. And in Britain, during the days of multiple mail deliveries, letters too were very short. Emails and IMs are short because they also need to be short.


They don't need to be short. They just don't need to be long.


They need to be short if you want to get it sent before the next post goes out. It needs to be short if you don't have the space on the back of the card. Today's IMs and emails need to be short so that you do not burden the reader. Otherwise email conversations will take so long as to defeat their point.


Given that if you write more than three paragraphs you get slammed with "I'm not reading that wall of text" and that "tl;dr" is a thing, is it surprising that people are employing the short form instead?


I see their point about "your" voice being sucked out of your emails if you use autocomplete for 90% of them.

I don't really care about that though. Most people I know have moved on from using email for personal communication and only use it for business. In that situation, I don't really care if my email sounds like me or not. It's just performing a function. It's not a creative expression. It's not any different than Visual Studio completing a line of overly verbose C# for me.


> Most people I know have moved on from using email for personal communication and only use it for business.

That experience completely differs from mine. Maybe I just prefer email to text or Facebook messages, but the same principle applies wherever you are writing something to a friend or family member. Wouldn't you get the same problem, just elsewhere?

Responding to my mother's birthday wishes does seem like it's a different kind of activity than autocompleting C# code, even though both can be executed with autocomplete to get a good valid output.


I would gladly use it for most communication. Probably not for my spouse but for everyone else.

I am not a good representative for how the average person feels about using it for personal messaging, though. I can see why many people would be against it for that.


Has Google said one way or another whether Smart Compose will take into account your personal writing style? For it to be acknowledged as a really great feature in practice I assume that it should.


That's a good point. You would think it would since Google already analyzes the stuff in your inbox.


The use case for Duplex could be tremendous for people like me (deaf etc) depending on the implementation/rollout. Too many times I’m unable to interact with remote banks because they refuse to accept my calls through relay services. Right now I’m completely locked out of USAA despite the fact I’ve been a customer of them since the 70s. It’s forced me to have accounts with local banks in case I have to appear in person.

I don’t know if this tech will help me in the future but am looking forward to the possibilities.


I'm more worried about autocomplete doing things like...

Let's get together for [some tacos at Chipotle™️]

or

I could really use [an ice cold Coke™️ right now]


There was a short time when Swiftkey was suggesting a worrying number of copyrighted brand names to me. Enough that I noticed it was doing it far more than usual (which previously was somewhere between never and almost never). It stopped within a week, but that was a little uncomfortable.


I find myself surprisingly agreeing with this article. The "everyone's voice begins to sound the same" aspect of it is something I hadn't even considered, and does feel a little bit unnerving to me. Maybe unnerving isn't even the right word, but I can't think of a more appropriate one. Luckily though, soon I'll be able to ask google for the correct one.


I agree as well, but only in part -- there's a big difference between the email suggestions and the voice assistant.

The email is _as if_ coming directly from you. It's got your name signed on to it. The argument that your voice may be distorted rings true here.

When your phone is making calls on your behalf, it's not literally calling up and saying, "Hi, my name is ${USERNAME}, I'd like to book a spa." It represents itself as another person, doing something on your behalf. (The assistant in the example call asked to book an appointment "for my client".)


Considering that everyone's text forms a unique signature given a couple of sentences, having the option (nobody is forcing you to use it) to have a bot compose emails for me seems like a sort of anonymity gain.


I've had autocomplete seem to deliberately try to break me up with my girlfriend. I don't have time to dig up the most egregious of those. Here's one that happened just yesterday with my wife:

    Wife: Thank you. Love you too.
    Me: Are you going to be honest nary?
    Me: Hungry
    Me: Darn spell check!
My wife, having grown up in China, still didn't understand and I had some 'splainin to do when I got home.

It's absolutely terrible when I've typed exactly what I want, but then the autocorrect substitution happens just before I press send. There's something that seems infuriatingly patronizing about this.


Honest question, this seems to be a long term issue for you, why don't you disable autocorrect? I've found that the "half-way house" of seeing the suggestions, but having to manually select the one you want, is a good compromise.


I think I'll do that. Actually, I think I did do that, and somehow it came back.


Remember this story?

A Cellphone's Missing Dot Kills Two People, Puts Three More in Jail

https://gizmodo.com/382026/a-cellphones-missing-dot-kills-tw...


No, but I remember this one, where a court determined "give me a lawyer dog" meant "give me a lawyer-dog" rather than "give me a lawyer, dog" https://www.washingtonpost.com/news/true-crime/wp/2017/11/02...


That article is almost certainly wrong, or at least misleading.

The question at hand was whether this statement was a direct request for counsel, or just chatter.

"This is how I feel, if y’all think I did it, I know that I didn’t do it so why don’t you just give me a lawyer dog ’cause this is not what’s up.”

The court decided: "(agreeing with the lower courts’ conclusion that the statement “[m]aybe I should talk to a lawyer” is not an unambiguous request for a lawyer). In my view, the defendant’s ambiguous and equivocal reference to a “lawyer dog” does not constitute an invocation of counsel." There's a little bit of cultural bias in the "dog" part, but the main thrust is that the defendant didn't make a clear statement of invocation.

IMO the law as it stands is horrid -- legal representation should be the default unless explicitly waived, not the reverse, but under the current law the decision turned on whether that statements was a demand for a lawyer or merely a suggestion.

http://www.lasc.org/opinions/2017/17KK0954.sjc.addconc.pdf


I don't understand your objection. As you've quoted, from that decision, the judgment refers to an "ambiguous and equivocal reference to a 'lawyer dog,'" which pretty plainly suggests that the judge has deemed "lawyer-dog" a lexical unit rather than interpreting "dog" as a sobriquet. At best you could say it was only part of his reasoning; either way, in my mind, it goes beyond cultural bias to be a patently absurd reading.


That title is atrocious, though. The cellphone didn't kill anyone, this is basically an "honor killing".


Right, but the honor-killing was triggered by a miscommunication due to the phone's poor ability to transmit language.


If a simple typo was enough to trigger the honor-killing, it's disingenuous to look for anyone to blame but the murderer himself.

If a German doctor texts someone "Ich verabreiche Schmerzmittel an meine Patienten nur in Massen" (I only apply painkillers to my patients en masse) because the phone doesn't support the sharp S (which would have changed the meaning from "a very large amount" to the opposite) and the recipient kills him out of concern that he's intentionally overdosing his patients, you wouldn't try to excuse the murderer by blaming the technology for that act of reckless and misdirected vigilantism, would you?

Humans make mistakes. Software makes mistakes. If you decide to murder someone based on a single sentence in writing, that's all on you.


I've noticed one thing about this feature. If a person can be responded to using this feature, they don't matter. They are just some time-wasting bureaucrat, salesman, etc. It's actually a good feature for those of us doing real(ish?) work.


I mean, everyone realized right away that the only reason this exists is for businesses which, for whatever reason, haven't yet created an API to do bookings. So that was sort of implied from the beginning.


Things this technology will not be good at:

- making reservations etc on your behalf

Things this technology will be good at:

- automatically calling lots of businesses and individuals to extract data from them: from opening hours to voting preferences, scam calls to political campaigning with personalised campaign messages, large-scale cold call automated advertising, crowd sourcing information, surveys, and the most profitable - replacing call centres and firing staff.

I assume Google are well aware of this...


This technology is literally a phone scammer's/spammer's wet dream.


That's probably my favorite part about being a tech person. When non-techies ask you what JavaScript in Excel is useful for, and you can respond in the driest of ways: "Primarily malware".

And then they look at you like "But you're the tech person, you never think that technology is stupid".

Which it isn't, just some companies behind it really fucking are.


What I don't see many people commenting on is how this benefits small business. My barber, my Dr., my mechanic, my favorite restaurants, etc. are all small "shops". They don't have online booking nor do they want it. They answer phones and this allows clients to interact easier with them and it requires Zero investment for the small business. This is technology adapting to a communication hole IMO and I think it's quite a positive thing.


Absolutely. It is a great example of a digital interface to a stubbornly analog world.


On top of individual expression, I'm also curious what the long-term effect of all of these 'handy tools' is going to be on our cognitive abilities. There was a study from Microsoft indicating that our average attention span has decreased some 33% just since 2000. And 'Google Brain', where individuals have ever-increasing difficulty recalling information itself and instead remember only where to find it, has also been academically confirmed. [1]

These are some pretty serious effects for things that are less than two decades old. And it's likely that we've only scratched the surface of conveniences to come. I'm not sure if we're headed towards the Borg (or more accurately the Bynars, for any other TNG fans) or Idiocracy.

[1] - http://scholar.harvard.edu/files/dwegner/files/sparrow_et_al...


You can easily opt out of the auto-complete feature, and if it being your own words with your own personality is important to you, then you will likely actually type them rather than autocomplete to the close-enough response Google suggested.

Seems like plenty of people are happy with this trade-off, and that's who the feature is for.


The problem is that it affects two-way communication. You might be okay with the trade-off; I am not. But you may subject me to it.

While a few niche businesses will make news stating they'll reject robo-call-based appointment scheduling because they care about their actual human employees, most will just accept it. The employees won't have a choice and will just be subjected to this.

Same for email. I want to hear from people, not Google. I'm perfectly content just purging people out of my contacts who let Google do the talking.


What are they being "subjected" to? Presumably the conversation will go similarly to how it would with a person? We aren't talking about spam calls, just a difference of who does the talking. This will increase business for places where people wouldn't want to go through the effort of calling themselves.

For email how will you know if I auto-completed my sentence or typed it? You are free to not use the feature, and as someone talking to you I am able to use it to save time writing an email if I find the suggestions in line with what I wanted to say.


I suspect people who made Comcast's telephone system had exactly the same notion. Feel free to experience it yourself at 1-800-COMCAST.


I suspect they didn't - Google's notion is a human-like conversation with a real person to accomplish a simple goal on the user's behalf. Comcast's notion is to make it as hard as possible for a user to accomplish their goal (cancelling, switching, complaining).


This is an unfortunate trend I continue to see: Where a company with a positive reputation, say Google, is given the most altruistic read on what their decision is for, and a company with a negative reputation, like Comcast, is assumed to be evil.

Many people's issues are, in fact, solved by the Comcast automated system. Generally if you call in to report an outage, and it knows that there's an outage associated with your area already based on the phone number you call from, it'll drop you out of the tech support queue, let you know that the outage is there, and what time it is expected to be resolved. This is the most common reason I call Comcast, and most of the time, it's an outage they already knew about, so this saves me time. (Of course, the times the line just to my building was severed, getting to an actual human to explain my problem is much, much harder.)

Similarly, all of the automated features to trigger remote resets of your modem have been implemented specifically so that if you have a simple problem (as most people do), you can get it quickly and automatically resolved without having to wait for an available human. I know how to reset my own dang modem, so this just adds another burdensome and annoying step for me... but for the average user, it's probably wildly successful at concluding calls. I assume.

But sometimes, when you or I call, we probably have more specific or detailed needs, and find this entire process absolutely horrific to suffer through. But surely, just like the Googlers who thought Duplex was a good idea, someone at Comcast realized it would drastically reduce call volume and speed response time if an automated system could do simple tasks automatically.

Similarly, Duplex will probably do its job: It'll make it easier for people who don't want to be bothered to make a phone call to schedule an appointment. It'll reduce their call volume. But the humans who have to interface with it will likely find it exasperating.


And certainly that's not the "feature" of Comcast support you were referring to when you directed me there. Most people's negative notions of Comcast and their support are well-deserved.

Until Google makes Duplex an "exasperating" experience I have no issues giving them the benefit of the doubt. The way it was presented, there is nothing exasperating about it - assuming it is ready to be released, it'll be the equivalent of what is now a common, simple human conversation (maybe with fewer angry callers), and plenty of small businesses will love having more reservations made.


Why do you get to choose how I communicate?


If you're talking to me, I get to choose whether you communicate with me. If I don't like how you communicate, I'm more likely to decide that you don't get to communicate at all.


OK, but that's an awful way to run a business, turning away polite customers just because they don't vocalize for you.


If I'm a business, and you're calling me, and you have a machine talking for you, sure, I'll talk to the machine if it gets me business. If I'm calling out from the business, though, and some customers don't like it, that can cost me business.


"Let us tell you what's newsworthy and what isn't."

"Let us direct your searches through filtered autocomplete and 'curate' your search results."

"Let us filter out any 'controversial' content, so you won't have to question y(our) worldview."

"Let us make phone calls for you, so you don't have to maintain social skills by interacting with other humans."

"Let us finish your thoughts for you, because, by this point, you don't have any of your own."


Yeah, the telemarketing people who clog up the phone lines and interrupt dinner times all over the planet won't see the benefit of this at all </sarcasm>


Not sure why the reaction to this is so negative. Google's own AI blog (https://ai.googleblog.com/2018/05/duplex-ai-system-for-natur...) admits that this is a very specific use case system. And it seems to be more of an engineering feat than an actual "AI" or even "ML" accomplishment. Basically this is the result of a very well trained system by a bunch of HUMANS. The bots aren't here for us yet folks, take a chill pill. If Google wants to create a bunch of task specific conversational bots to bake into their already excellent voice assistant, why not?


For reasons that I suspect are now obvious, I pretty much hate all the tech coming out of the Google/Facebook/Twitter complex. If you need to ask why, I suggest asking them why they don't comply with GDPR.

However, there is one fantastic use case I can see for Google Duplex, the service that makes phone calls for you: dealing with health insurance companies.

If this AI is half as good as they claim.... does that mean it can call into your health insurance company, wait on hold for two hours, and then spend another hour arguing with the rep who is trying to deny your claim, including waiting on hold, and correcting them on incorrect details and making sure they don't lie to you about their policies? Because that would be amazing.


Alternatively, calling up Frontier or other ISPs and automatically correcting billing mistakes.


It's bizarre to me that this is the red line for everybody.


It is a red line, not the red line.

For a while I've felt like the solution was that I needed to be detached from Google products and services (along with similar data-driven companies), and that was good enough.

I feel like we're rapidly approaching the territory where I need to make sure nobody I talk to is attached to Google products and services. I feel like this is the line we're crossing now, where Google has inserted itself directly between the people I want to communicate with and myself, and is changing (or generating) that content directly.


What makes it so different from canned text messages?


The worst I've seen there is the sort of shortcut when your phone rings to automatically say you can't talk at the moment. It's not the same as having custom canned messages written even on a desktop client.


I have an Android Wear watch and it offers a menu of canned responses to messages that I make pretty good use of.

Gmail has also had canned response buttons in the Android app for a while now.


> Frankly, I'd like Google Assistant to make some of the more soul-sucking calls I have to endure. Perhaps it can call and argue with the insurance company about doctor bills.

People often forget that companies make various types of agreement between each other. Let's keep it simple: I have a complaint with my insurance company and they have some kind of partnership with Google.

What will Google-bot do? I hardly believe it will pursue my interests.


After reading some of the responses to the google demo, my first reaction was to assume this natural language processing magic was just a placeholder until the business had a proper api in place. After all, that would be more efficient, right?

Google doesn't take human phone calls so why should anyone else have to? And as good workers, efficiency and productivity should trump any expression of humanity we might have left in this end game of automation :-)


> Google doesn't take human phone calls so why should anyone else have to?

Google lost a lawsuit two weeks ago in Germany, which was about whether or not a human being would have to be reachable through a contact e-mail address that they're legally required to put onto their webpage.

Previously, they just responded with canned text containing some links to FAQ and whatnot.

All they would have to do is tell one of their employees to read and respond to those mails for like half an hour a day, so that it's technically possible to reach a human being through there. Instead, they fought out a lawsuit, lost, appealed, lost a second time, and are now considering appealing yet another time.

German source: https://www.golem.de/news/kommunikationspflicht-gericht-krit...


To me, the biggest issue with writing an email with this auto-suggest is that I think it'll take me longer to write the email that way than to just write it.

Maybe I'm wrong, since I haven't tried it, but I feel like I'd have to pause every time I'd see a suggestion to think if it is actually what I'm trying to say, rather than just writing what I'm trying to say anyway.


All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident.


And exactly what 'truth' are you referring to here? The 'truth' that it should be acceptable for machines to impersonate humans?


I'm reminded of a Ben Marcus story that appeared in Harper's a decade ago. People would sneak out of their rooms to place carved wooden likenesses of their faces at the dinner table. The wooden simulacra communed while people took their meals alone in their rooms.


I think this product might be a paid feature. That would be a good way to monetize and recover the huge amount of R&D money. But a Google person on Twitter did say they are looking into ways of letting people know that it's an automated call.


- What [political figure] is doing is making ...

- I think [political issue] should be ...

- Children must learn that it is always ok to ...

- Society should be ...

- The best thing to do with protestors is to ...

- [ethnicity] are always ...

Serious question. What is supposed to happen for these cases?


I can see a new law: voice AIs must, by law, acknowledge in the affirmative if asked if they are a robot.


A corollary of this technological evolution is that it makes personal contact and gestures even more impactful.


Wow the Google demo sure created a lot of attention. Several articles on HN today.


You don't have to use it.


Google's contempt for all humans that aren't executives at Google has always been baffling to me.

If I call a company and can't reach a human within about 90 seconds I don't do business with that company. End of discussion dot period.

I'm sure Google has decided that only 5% of people are like that and therefore I don't exist but they are completely wrong. Sooner or later the ad gravy train will run out and they will have to reevaluate this statistical arrogance style of business management.




