People who answer phones to take bookings work from an extremely limited set of questions and responses, which is why in many cases they can be replaced even by dumb voice response systems.
In these cases, the human being answering the phone is themselves acting like a bot following a repetitive script.
Duplex seems trained against this corpus. The end game would be for the business to run something like duplex on the other side, and you’d have duplex talking to duplex.
Most people working in hair salons or restaurants are very busy with customers and don’t want to handle these calls, so I think the reverse of this Duplex system, a more natural voice booking system for small businesses, would immensely help free up their workers to focus on customers.
And looking even further into the future, we can imagine a day when the computers forgo natural speech and use a better-suited form of communication. Some kind of sequence of ones and zeros transmitted directly across the wire.
It's the lack of a universal API.
If a barber shop wants to make it possible for a 3rd party app to book appointments, then they have to release some API. But that's not the end of it. The 3rd party app has to first discover their API, someone has to understand it and write code to use it, and then deploy that code.
This is a problem today because there is no universal API that all services can use.
With Duplex, verbal speech becomes a universal API that every service can parse and use to communicate with each other. Also, discoverability is taken care of by using publicly cataloged phone numbers on services like Google Maps, Yelp, etc.
I recall a Wired article from the same era. “XML means your doctor’s system can just talk to the hospital system even though they’re different!”
Hasn’t happened yet... will it? Can it?
Nope. XML (or JSON, etc.) is just a "human-readable" presentation of data. It does not provide any semantics whatsoever.
So you need some semantics on top of that data. And a general-purpose, universal API is yet to be invented (hint: it is probably not feasible)
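A toy sketch of that gap between format and semantics (every field name here is invented): two services can emit perfectly valid JSON for "a booking" and still disagree about what the fields mean.

```python
import json

# Two hypothetical salons describe "the same" booking in valid JSON.
salon_a = json.loads('{"time": "2024-05-10T15:00", "len": 30}')       # len = minutes
salon_b = json.loads('{"time": "10/05/2024 3pm", "len": "half hr"}')  # len = free text

# Both parse fine: the format is shared, the meaning is not.
# Nothing in JSON itself tells a client which "time" encoding or
# which "len" unit to expect; that semantic layer has to live elsewhere.
assert salon_a["len"] == 30
assert salon_b["len"] == "half hr"
```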
Bots that automate UI tend to get banned.
Whatever natural language is, it’s not an API. Might have some overlap, but it’s different.
And similarly, a caller can assume the business they are calling is trying to lead them to an endpoint.
In both cases, a set of assumptions lets natural language act as an API, even if neither end could pass a Turing test with someone who wasn't interested in any of the endpoints.
edit: I read your point about language changing, and that is true. But if we only have machines using the language with each other (no more training with humans), we can also assume the language won't change.
I'm curious - what differences do you have in mind?
To simplify that and put it in more technical terms, an API is prescriptive and a language is descriptive.
If a bunch of coders decide to start capitalizing "Class" their code won't compile. If enough people start using the word "aint" it becomes a word, regardless of what the dictionary says (see "irregardless"). There is no single authority that can decide what is and isn't a canonical definition.
This is why spoken languages evolve so much. Even languages where we've explicitly tried to go the opposite direction (like Esperanto) have evolved into multiple dialects, where subsets of the community simply ignore the standards and still communicate with each other just fine.
Note that this is the opposite of what you want with a federated, universal API. The whole point of an API is to standardize between unfamiliar devices. Language is actually pretty bad at standardizing communication between unfamiliar people. Even in the US, different regions and communities use different euphemisms, terms, and definitions.
If you call up a hair salon and start reciting poetry you'll get an error response in the same way as you would if you had sent malformed JSON to an endpoint. If you stick to the expected script you'll achieve success almost all of the time.
I wouldn't be surprised if a majority of human communication works this way, especially when it involves individuals who do not know each other. We have agreed upon limits, key phrases and words, and expected responses that allow most of the unpredictable stuff to be ruled out. All of that favours automation.
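The endpoint analogy can be made literal. A minimal sketch (the schema and field names are invented): a narrow "booking endpoint" that rejects anything off-script, exactly like the receptionist's "huh?".

```python
def book_appointment(request):
    """A deliberately narrow 'endpoint': only the expected script succeeds."""
    required = {"service", "date", "time"}
    if not isinstance(request, dict) or not required <= request.keys():
        return {"status": 400, "error": "malformed request"}  # the "huh?" response
    return {"status": 200, "booked": request}

# Reciting poetry at the endpoint gets an error response:
assert book_appointment({"poem": "Shall I compare thee..."})["status"] == 400
# Sticking to the expected script succeeds:
assert book_appointment({"service": "haircut", "date": "Fri", "time": "3pm"})["status"] == 200
```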
I'm not sure I'd disagree, but it seems you're just describing a domain specific language in a more roundabout way. We have a ton of protocols that introduce a set of limits, key phrases, words, and responses. Java, Network protocols, XML, JSON, etc...
Assuming that you're correct, does it make sense to then assume that it'll be an improvement to standardize English rather than a set of IP headers? The agreed upon standards in English are (for the most part) informal, evolve constantly, and are hard to teach to computers. Automation favors predictability, and even the most generous interpretation of a natural language leaves me feeling like it's a step backwards.
We're going to standardize on an appointment booking API that exists only in our heads, that can only be taught using ML, and that is guaranteed to change over time in unpredictable ways? That seems wrong to me.
(Postel w/r/t API - Does that make any sense? Or is that a fancy way to say DWIM?)
On the web side of things, the W3C often describes their role as being partially descriptive.
From their doc on the Web of Things: "The Web of Things is descriptive, not prescriptive, and so is generally designed to support the security models and mechanisms of the systems it describes, not introduce new ones... while we provide examples and recommendations based on the best available practices in the industry, this document contains informative statements only."
This is exactly for the reason you mention - if browsers collectively decide to go in a different direction, what the W3C says doesn't matter. The web standard is what the browsers do.
However, two things to keep in mind:
Even where browsers are concerned, there is still an API and a canonical version of "correct" for each browser. What we're trying to do is get those APIs to be compatible and consistent with each other.
Many people believe that language even on an individual level doesn't directly map to an actual reality; in web standards that would be like the browsers themselves not having their own consistent API.
But assume those people are wrong for a sec. Let's assume that language is just a standardization problem between different communities and individuals. Well, the W3C should teach us that even in the realm of computing, standardization is really stinking hard.
So even in that scenario, we have to ask whether standardization becomes easier or harder when every single individual in a community has the ability to change norms or introduce more language. We can't even get 3-4 browser manufacturers to agree on a single API, now imagine if every single hair salon owner could increase divergence whenever they wanted just by answering phone calls differently.
It's not in the interest of those organizations to adopt a common API. Everyone wants to suck in data and be the platform; nobody wants to give data away.
Humans have settled on a de facto API for scheduling appointments. It uses the telephone as its interface, speech as its medium, and Duplex is exploiting it.
Which is what makes it very hard to define a new common API
However, they all already agree on the standard for natural language communication (in the context of a strict, well defined domain). That's the pre-existing common API which Duplex is using
WRT using English as a universal API, I think this is just dumb. You solve exactly zero problems by going that route, because the actual problems to solve (beyond businesses having no incentive to care) are exactly the same as you have with XML APIs, or any other APIs. The problems of discoverability and machine understanding are something the Semantic Web space has been dealing with for quite a while, and other people before that. Adding natural language to the mix only makes the job significantly more difficult, because you now have to deal with natural language parsing/understanding.
This sort of thing is exactly why the healthcare industry still uses faxes, even going electronic charts -> pdf -> fax -> pdf -> electronic charts in some cases.
electronic charts -> pdf -> fax -> fax machine as a service -> unsecured email -> pdf -> electronic charts
Compliance can sometimes help, but ultimately the data needs to flow, and people will do whatever it takes to make that happen. Until security is so easy that it's the default, these little loopholes will continue to be abused.
>> Until security is so easy that it's the default, these little loopholes will continue to be abused.
The simple way to think about this is that the government is more worried about unsecure email/email spoofing than it is about wiretapping.
I can, however, point you to the relevant section of HIPAA regulations on which it rests, the definition of “electronic media” at 45 CFR § 160.103, specifically this bit: “Certain transmissions, including of paper, via facsimile, and of voice, via telephone, are not considered to be transmissions via electronic media if the information being exchanged did not exist in electronic form immediately before the transmission.”
So you need to print them out before faxing? PDF->Fax wouldn't work with that definition.
I think this would be a better integration point for AI. It could look at the fields and learn to fill them out automatically (name, age) and prompt the user for anything missing. Then instead of the barber shop needing a universal AI users just need their personal AI (or a script) to interact with the API.
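A sketch of that integration point, with everything (the profile shape, the field names) invented for illustration: the personal agent fills in what it already knows and prompts the user only for the gaps.

```python
# Hypothetical personal-agent sketch: auto-fill known fields from a local
# profile, ask the user only for anything missing.
PROFILE = {"name": "Alex", "age": 34}  # data the agent already knows

def fill_form(fields, profile, ask_user):
    answers = {}
    for field in fields:
        if field in profile:
            answers[field] = profile[field]   # auto-filled from the profile
        else:
            answers[field] = ask_user(field)  # prompt only for the gaps
    return answers

form = fill_form(["name", "age", "preferred_stylist"], PROFILE,
                 ask_user=lambda f: f"<user-supplied {f}>")
assert form["name"] == "Alex"
assert form["preferred_stylist"] == "<user-supplied preferred_stylist>"
```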
Systems like that are much more expensive than paying a receptionist?
* Most businesses already have a receptionist.
* Most receptionists do not spend measurable fractions of their day answering phonecalls asking when the business is open.
* Taking bookings is really also not the majority of their day.
* Receptionists are capable of a bunch of things that your SaaS booking program is not. Like ordering catering and picking up office supplies.
* A SaaS booking program that is looked over by a human doesn't have to have AI-systems, because they just have I-systems. A human receptionist.
* The inevitable job-post catchall "Other duties as required."
* I had a receptionist bring me a beer once while I was waiting and I'm pretty sure none of your SaaS solutions will do that.
Then whenever a significant change is required, you need to call back your "expert". New location? xk$, etc.
But I think the biggest concern is that suddenly the owner does not understand how his reservation system works. He used to be able to call Joe and know what's going on...
Web systems are almost never built for productivity.
The issue isn't universal API, but universal data models which is probably impossible.
I disagree with that. We already have universal APIs. Adopting a newly established universal API is far more painful and has a slower adoption rate than using an existing, globally reachable one like the telephone. Google Duplex-like systems address a broader scope of computer verbal communication, and it feels like a step in the right direction.
It's the old Standards Proliferation problem:
Also, if it were unknown whether the other party was a bot, a bot could first send a common probe according to some protocol, some kind of sound signalling "I am a bot", to ask whether the other side was one too. In that case both would switch to sending machine-readable information to each other.
CORBA would generate RPC stub objects for you in various OOP languages, and potentially automate discovery, so you could say, give me an array of all the orderbooks of all the bitcoin exchanges, and ask each for the last price.
This gives a whole new meaning to "all of UI/UX is basically prettifying database queries".
Would love to read more about it if so-
I can give you 010101010101101011111 to a machine all I want, if they don't know how it's formatted, it's useless.
Conversational English is a format.
And as you said, format isn't enough. You need semantics.
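The same bits, two meanings. A small illustration of why the raw stream settles nothing without an agreed format:

```python
import struct

raw = b"\x42\x48\x00\x00"  # the same four bytes...

# ...decoded under two different assumed formats:
as_float = struct.unpack(">f", raw)[0]  # a big-endian 32-bit float
as_ints = struct.unpack(">2H", raw)     # two big-endian unsigned 16-bit ints

# Without agreeing on the format, the bits alone settle nothing.
assert as_float == 50.0
assert as_ints == (0x4248, 0x0000)
```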
If two applications know enough about the other side to know how to formulate their voice queries, they know at least enough to exchange those same queries as text, and skip the stupidly wasteful text->speech->text process.
(And if world wouldn't be so full of adversarial practices driving engineering stupidity, the developers would agree on an efficient binary format beforehand.)
In other words, if M2M handshake works, switch away from voice.
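That handshake-then-upgrade idea might look like the following sketch (the channel names and capability sets are all invented):

```python
def negotiate(peer_capabilities):
    """Pick the richest channel both sides support; voice is the fallback."""
    preference = ["binary", "text", "voice"]  # best to worst
    ours = {"binary", "text", "voice"}
    for channel in preference:
        if channel in ours and channel in peer_capabilities:
            return channel
    raise RuntimeError("no common channel")

assert negotiate({"voice"}) == "voice"             # a human or legacy line: stay on voice
assert negotiate({"text", "voice"}) == "text"      # two bots: skip the speech round-trip
assert negotiate({"binary", "voice"}) == "binary"  # both agreed on an efficient format
```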
Phone hardware (microphones, speakers) is only calibrated to detect 'useful' frequencies for human speech.
The sampling rates used by audio codecs tend to cut off _before_ the human ear's limits, e.g. at 8kHz or 16kHz. They aren't even trying to reproduce everything the ear can detect; just human speech at decent quality.
Codecs are optimized to make human speech intelligible. The person listening to you on the phone isn't receiving a complete waveform for the recorded frequency range. The signal has been compressed to reduce the bandwidth required, and the goal isn't e.g. lossless compression; it's decent-quality speech after decompression.
It's completely possible to play tones alongside speech that we won't notice, but in the general case, not tones that the human ear can't detect.
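The arithmetic behind that limit: a codec's sampling rate caps the highest representable frequency at half its value (the Nyquist limit), so a truly ultrasonic tone cannot survive a narrowband call at all.

```python
def nyquist_hz(sample_rate_hz):
    """Highest frequency a given sampling rate can represent."""
    return sample_rate_hz / 2

assert nyquist_hz(8_000) == 4_000   # narrowband telephony tops out at 4 kHz
assert nyquist_hz(16_000) == 8_000  # wideband "HD voice" tops out at 8 kHz
# Human hearing reaches roughly 20 kHz, so an "inaudible" 18 kHz tone
# simply cannot pass through an 8 kHz- or 16 kHz-sampled codec.
assert nyquist_hz(16_000) < 18_000
```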
But that is the far future. Realistically, I just don't see this as feasible any time soon.
If the two bots were to slip in some subliminal beeps and boops to recognize each other, then they could switch their speech to very quick binary communication.
I like to think it was a smile of renewed relevance due to unbelievably poor technical decisions.
Or just communicate "hey actually connect to this HTTP/XMPP/whatever address on the internet and we'll continue this from there"
1. Probably a bit slower, I've heard modern VoIP lines don't work well with traditional modems?
"Hello how can I help you? - Hi, beep, I like to reserve a table? - Ok, beep, beep, on second - Mhm-mm beep, beep, sceech, 011000101010...."
(Archive link because that blog now requires authorization to view for some reason.)
Good times !
However, while this is useful to bootstrap a new technology rollout, 10 years on it's just technical debt.
The amount of tech debt in the system behind credit cards is crazy, because originally charges were phoned in to the card issuer manually, and everything since then - magstripe, chip & PIN, online-only transactions, etc. - has been built on top, and the leaky abstractions show through in daily difficulties with the card system for end users, like lack of real-time balance (in some cases), lack of transaction metadata, etc.
I also think the tech debt is holding us back a long way. For example, why can't I see itemised receipts in my card statement? Paper receipts are on their way out, email receipts aren't linked to anything or structured data, but being able to see that I've spent $120 on shipping with Amazon in the last 12 months, so a Prime subscription would make sense, would be a great sort of financial tool to have. That isn't possible in the card network at the moment.
Duplex: <beep beep> (I'm available to chat)
Other bot: <boop boop> (Oh hai! Wanna get intimate?)
Duplex: <blaaaaaaaart> (Come find me on duplex://220.127.116.11)
<insert hack attacks and other nonsense here>
Anyways, this aspect is more amusing to just think about than anything else. That said, I really hope companies who produce these next-gen AI robo-callers actually have the courtesy of identifying themselves as such. I want to know if I am talking to a human or Duplex. Yes, I may hang up, but I feel uncomfortable being fooled into thinking I am talking to a human when I am not.
> (since that's probably filtered anyway)
Phone lines are optimized for frequencies humans can hear, though I'm guessing you could get enough bandwidth out of the edges to convince the other side you're a machine without bothering a human too much.
It is the universal greeting for cybernetic organisms, after all.
> The Google Duplex technology is built to sound natural, to make the conversation experience comfortable. It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months.
To me that quote sounds more like a polite way of saying they definitely won't reveal to callers that they are talking with a bot than them taking the concern seriously.
Some of the conversation examples on the blog page where they invent a sort of story for the caller ("I'm calling for a client") would fit that theory.
If they can get a way of saying "I'm a bot" without people hanging up the calls, I'm all for it -- otherwise, "I'm calling for a client" or similar is the best for everyone involved (assuming everything works).
Businesses also need to have a way to report problems to Google, like if they are getting spammed by Duplex or want to opt out.
I'm prejudiced against talking to bots because they're bots. They don't have empathy, whereas from voice interaction I expect a human I can relate to and desire to help and be courteous with. It's a fundamentally different type of interaction and I will be annoyed anytime that one is confused for the other.
Of course, if they screw it up, they'll burn that terminology too.
If I found out that they were a robot (this is probably unpreventable; even if the technology gets amazing, surely there will be edge-case breakdowns/bugs/etc.), my trust is broken. That would have an emotional consequence e.g. frustration.
There will always be amazing technology wielded by awful developers, and in this case the outcome is emotionally hazardous. The impact of that is not easy to quantify e.g. by any economic indicators, but it's there.
Also, it's likely that robots will not be as polite back, so we're degrading society's trust and empathy all around. For example, Google's AI call to a restaurant was rude, and not for reasons it seems to yet understand.
That said, I don't necessarily disagree; there is going to need to be lots of these kind of issues that need sorting out before we reach a Culture-level of AI interaction.
>The end game would be for the business to run something like duplex on the other side, and you’d have duplex talking to duplex.
The end game is clearly to use an api, not this.
Google actually delivers a real world product incorporating the most advanced AI we've had the chance to experience, and half the HackerNews comments are "Wow, this is so dumb, can't wait for it to become technical debt in 10 years".
Wait, it was way less than 10 years.
I'm talking about Google Realtime. Or reader? Or buzz?
No, wait, I'm talking about aggressive Twitter API deprecation/removal.
Wait, nevermind, I'm talking about Facebook.
You get the idea. What's revolutionary today sometimes becomes the substrate for future innovation. Sometimes it gets cast by the wayside, even in the face of significant "user" (developer) popularity.
That's not proof of negativity; just realism. Negativity would be "no new innovation will ever get traction". Optimism would be "all new technologies will change the world" (c.f. https://www.npmjs.com/browse/depended). This is neither.
It's not proof of negativity in the tech community. If anything, it's a proof that tech community often can't pause and look if a particular idea makes engineering sense.
That's how we ended up with Electron.
My understanding is that API integration is what WeChat is in China -- every hair salon and equivalent-of-corner-pizza-shop has some WeChat integration, payment and all.
Voice bots like this will have the advantage of ubiquity. At least a couple years ago, before every restaurant had 5 tablets for all their seamless/grubhub/chowhound/whatever apps, pretty much the only reason the fax machine was still around was restaurant ordering. Although there were clearly better ways of doing it (see how Dominos reinvented itself as a tech company), the sheer ubiquity of fax as the lowest common denominator kept the tech around.
In that light, it's kinda like the cell-phones-leapfrogging-landlines-in-developing-countries argument... part of the wechat story involves a massive population entering the consumer class at a time when everything was digital. Call me out if this is a gross over-generalization, but in a way, the wechat population never had to deal with the backwards-compatibility of people growing up ordering a pizza over the phone.
It'll be interesting to see how the API-centric approach (wechat) plays out versus the lowest-common-denominator ubiquity approach (voicebots). I'd stop short of calling API's the end game though.
This voice based model can be integrated into any existing system. It already has the network effect going for it and it's not tied to the fate of any one company
Meanwhile in NYC, good luck getting the bodega on the corner to even take your debit card.
If they don't accept cards, they almost always have an ATM.
My complaint in NYC is the uptick of "cashless" places, that don't accept legal US tender. I like using cash, I don't want it to go away.
The fax machine is still around now, and heavily used in the medical context: https://www.vox.com/health-care/2017/10/30/16228054/american...
Anyone can have a conversation, but not everyone can author an API.
This system can’t pass the Turing test; it would probably be fooled by a simple question about itself or a subject outside the domain, like the kind of food you like.
You’ve got people in this thread hyperventilating about AI duping your voice and then becoming a doppelgänger, and therefore we need laws immediately to stop this dystopia? Let’s calm your cortisol levels for a second and stop acting like Thanos just got the last gem.
So, you would just have an automated booking system API which is better handled by not placing calls as its form of communication. Right?
This is an API that requires no computer on the user's end and is portable across different implementations from different companies.
It's not ideal. Actual standardized APIs are better. But, uh, have you ever worked with industry standard APIs? I have, and standardized is not how I would describe them.
"Ok Google, can you reschedule my Dr. appointment this Friday for next week? I have a conflict." -> calls the Dr. and reschedules -> adapts the result to a rebooking action with partners (i.e., an API call to your Google calendar) -> applies the action and responds to you.
There is still quite a bit missing from this to be a useful AI product. It's getting really close though. I can't wait until this makes it into Google Assistant and it can call a restaurant to ask about gluten free options while I'm driving.
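A hypothetical end-to-end sketch of that flow (every function and field name below is invented; none of this is a real Assistant or Duplex API): the voice call is just the transport, and the structured calendar write is the real result.

```python
# Hypothetical flow for "reschedule my appointment": a voice leg to the
# business, then a structured API call to the user's calendar.
def place_call(business, ask_for):
    """Stand-in for the Duplex-style voice leg of the flow."""
    return "Tuesday 10:00"  # the slot the callee agreed to

def update_calendar(event_id, new_time):
    """Stand-in for the structured partner API (e.g. a calendar write)."""
    return {"event": event_id, "time": new_time}

def reschedule(request):
    slot = place_call(request["business"], ask_for="next available slot")
    result = update_calendar(request["event_id"], new_time=slot)
    return f"Rescheduled to {result['time']}"

assert reschedule({"business": "Dr. Smith", "event_id": "appt-123"}) \
       == "Rescheduled to Tuesday 10:00"
```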
In practical terms, there may be some minor issues such as incompatible multiple implementations and adoption costs. But that's made much easier to handle by a very small number of expected consumer systems.
As for interactions with end-result partners, well. I've worked with standards designed to represent such highly general cases (xcbl and cxml). They're invariably rife with interoperability problems and other issues arising from overly broad standards. These tend to not get better over time as much as one might hope, as it's not easy to continuously update standards at a reasonable speed across N target types of partners. Keeping up with how usage evolves is never easy.
The best approaches to this that I've seen in use are those that focus on providing a vehicle for arbitrary data for delivery to the app - like HTTP or TCP. Getting more specific is the route to madness. Which, unfortunately, is probably precisely the bit you'd most like standards around.
You're completely right. There's a very real and very important need for standards here. There just might be some issues worth mentioning that might arise from the attempt to create and rely on them.
This is creating a natural-language based booking API that any system or business can tap into
Also, they’ll discontinue it after a year once it gets enough negative press about how it doesn’t work well and loses business for businesses.
That's changing for sure, but the demand for phone based services is still very high.
OpenTable takes a cut, no? That's always going to limit availability.
"I think it's more unfortunate that so many people are just so opposed to looking up directions to wherever they're driving before they get in the car."
"I think it's more unfortunate that so many people are just so opposed to paying their bills every month."
"I think it's more unfortunate that so many people are just so opposed to carrying cash around and counting change."
"I think it's more unfortunate that so many people are just so opposed to coming over and talking in person."
"I think it's more unfortunate that so many people are just so opposed to washing their dishes by hand."
"I think it's more unfortunate that so many people are just so opposed to doing long division."
Worse, they're probably gonna spend 3/4th of the time trying to sell me shit I don't want and make me fight against it.
Online, I can ignore any prompt and just click next next next finish, and the form won't be in a bad mood. I have no interest in talking to an annoyed clerk, and they obviously don't want to talk to me, so we can just avoid each other.
When I was signing up for Internet at my new apartment, there were 3 ways I could do so: by contacting my apartment's official representative, online, and through the regular phone system.
I used all three. First, I contacted the representative, who gave me a price. Then I looked online and found the actual price (considerably lower). When I tried to sign up online, I was told I'd need to provide an extra security deposit because I have my credit reports frozen.
So I called the generic phone system. The agent gave me another price (lower than my official representative, but still higher than the website). I pointed out the website price, and the agent switched me to that price. I asked if I'd need to provide a security deposit and they said no. They finished signing me up, and everything was fine.
The whole process was annoying, I would have loved to have someone else do it for me. This was the perfect time for a phone assistant to step in. But that would have been a really bad idea with Duplex.
The point is - an automated call system probably doesn't protect you from an abusive representative. If I had Google Duplex handle either of my calls, I'd be paying more for my Internet right now, because I guarantee Duplex isn't smart enough to determine if a representative is lying about an advertised price.
95% of the time this probably doesn't matter, because most people I talk to on the phone aren't abusive. But if someone does want to upsell you or bury you in service fees or waste your time, Google Duplex is probably making their job easier, not harder.
What stops businesses from setting up APIs for scheduling services today? It's the lack of a universal API.
I'll repeat for emphasis: this is mainly a problem today because there is no universal API that all services can use.
With Duplex, verbal speech becomes a universal API that every service can parse and use to communicate with each other. Also, discoverability is taken care of by using publicly cataloged phone numbers on services like Google Maps, Yelp, etc.
The problem of a universal API is entirely orthogonal to voice communications. Duplex is not a Turing-complete system; it's just an API behind a voice recognition layer. All the important problems for universal APIs happen after that layer.
Ultimately, what you describe can work perfectly only when everyone is using Duplex, which is equivalent to everyone using Google-defined API. That's not universal, because you have one entity behind it.
The only way this brings us somewhat closer to universal API is that if you expect it to handle humans as well, it introduces some constraints to the space of possible APIs, which could make it easier for everyone to agree on a common format. Constraints of natural language processing without a human-level AI require your API to be very fuzzy and very lenient. There's nothing stopping one from implementing those same constraints over a text or binary protocol. Nothing except no reason for businesses to do it.
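What "very fuzzy and very lenient" could mean over plain text, sketched here with invented patterns: accept many phrasings, normalize to one structured request. The point is that nothing about the leniency requires speech.

```python
import re

# A lenient text "endpoint": many phrasings map to one structured request.
def parse_booking(utterance):
    m = re.search(r"(?:table|booking|reservation).*?for (\d+)", utterance, re.I)
    people = int(m.group(1)) if m else None
    m = re.search(r"at (\d{1,2}(?::\d{2})?\s*(?:am|pm)?)", utterance, re.I)
    time = m.group(1) if m else None
    return {"party_size": people, "time": time}

assert parse_booking("I'd like a table for 4 at 7pm") == {"party_size": 4, "time": "7pm"}
assert parse_booking("Can I get a reservation for 2 at 6:30?") == {"party_size": 2, "time": "6:30"}
```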
This system could be developed by any company with sufficiently advanced ML chops
Is this satire? If this is indeed the future, I wonder if there is an irresistible urge to make systems as inefficient as possible. Kind of "like gases expand to fill the container, applications become as inefficient as the power of the hardware allows".
The Turing test is way older and seems to have been the standard measure since its inception.
Second, in the 70s there was no computer power even for quite clever algorithms (which probably didn't exist yet) to beat top chess players. Chess was seen as a grand goal requiring the utmost intelligence -- while it is obvious in hindsight, at the time the intuition was probably that extremely "intelligent" humans were required to play chess, and in fact the best chess players were among the most "intelligent" people -- it was a clearly and exclusively intellectual task that few people were competent at. So many believed that chess would be one of the greatest challenges to AI (the clarity of the rules added the convenience of easy research and implementation). Things like walking didn't seem intellectually demanding, so the common sense was that they were probably "easy". In fact, today we know that navigating a bipedal robot in a simple environment through visual recognition is vastly more difficult computationally than playing chess well; it is only easier for us because we have highly specialized circuitry in our brains that is well matched to those tasks. Our brain wetware is not very well matched to playing chess.
Also, chatbots have been doing pretty well on Turing's original definition of a Turing test for about a decade now. But now it is being argued that Turing didn't really foresee the "loopholes" they believe the bots are exploiting, and people are coming up with stricter requirements for a Turing test.
That's totally in line with Tao's argument that every time we approach a major AI goal, suddenly it is not AI anymore, because there's nothing magical about it, just boring old technology. And human brains are magical, right?
Until every obscure niche capability of humans has been dominated in every possible way by AIs many won't want to concede that it really is AI. And even when it does become better than us in every possible way, I suspect a few will still find arbitrary reasons why it really isn't AI/AGI, e.g. because it is not organic, because the computer lacks a body, because it lacks a "soul", etc.
In fact I'm quite sure Turing would be quite impressed by good recent chatbots.
Try this one: https://www.pandorabots.com/mitsuku/
From the point of view of the 1940s, this would seem really close to a veritable "Thinking machine"! Although I'm sure he'd recognize a few things are still missing to fully replicating human behavior (or going beyond).
And those will have online booking systems already - I don't see how this technology is still relevant nowadays. Maybe it was back in the 90's when the internet (and online booking) was a new thing, but now? I can't see there's a big market for this application.
I think this perspective is very short-sighted. You will lose customers to automation, but businesses won't turn away customers because of automation.
Customers and prospects don't want to interact with machines, but businesses should be willing to give customers what they want.
The idea that a tool can be rolled out to millions of consumers, even with limited use cases, and be useful without having to get adoption from businesses is IMHO a much bigger opportunity and a much better use case than rolling out a tool to businesses that makes the interaction less personal.
Customers need to trust businesses, business only need to collect money from customers.
I think everyone who focuses on chatbots from the business use case perspective is missing the bigger opportunity.
A technology that can give a consumer access to ALL businesses, not just the ones who adopt a new technology offers much more utility than serving businesses or the shortsighted use cases like saving time and money for the business.
Would you ever voluntarily use an IVR? I wouldn't. If I am going to interact with automation for a business, I want to do it with a different interface than voice... all the hype around NLP and chatbots was uninspired and focused on the wrong side of the interaction...
Building conversational interfaces for consumers to use to interact with businesses is a much better use case.
I'm not afraid of the machines going all singularity or skynet or whatever, becoming sentient and taking over the world as some kind of robo-Hitler. That's moronic. But what does worry me is the normalization of having a machine do everything for you, plan your whole life, access every little detail of every bit of your personal data and lifestyle.
Of course we've already had that for a while with the way phones work. But this is another step towards getting public consensus for using it in new ways. Once people are used to this, we'll have more and more systems with conversational software that manages your life for you. Speaks on your behalf. Interfaces with the world for you because doing it yourself is far too stressful and inconvenient.
And of course it'll be a free, advertising-supported model so all that data will have to be shared with, among many things, shady political organizations to try to gain every little advantage possible to manipulate public opinion and steer themselves into enormous power.
Think of where the cell phone started off: just a phone in your pocket. It's so much more now. Remember that when thinking about these AI assistants and what they will develop into. I'm not afraid of the classical AI apocalypse. I'm afraid that these systems will do exactly what they're designed to do. That people are underestimating just how much power lies in these little inconveniences in life, once they're all added up and analyzed and tallied.
A colleague of mine is just going on about this.
But that in itself is not even true across the industry; some (most?) phone bookings are very complex, otherwise they would just use a web interface.
Citation needed for that "(most)". I work for a company with a call center and a large part of calls are simple ones that could be easily answered by just reading the FAQ page on our website.
> otherwise they would just use a web interface.
I think the problem is more about resources. My local hairdresser uses his phone and a notebook to take bookings. It takes a bit of his time and could easily be replaced by a web interface, but he doesn't have any resources for that (and some people still prefer using their phones).
By "resources", do you mean money? Because if so I can't imagine the purchase and training of Duplex on the business side would come cheap either.