Hacker News

But in this case you're not talking with a real person. Instinctively, I dislike a robot that pretends to be a real human being.



> Instinctively, I dislike a robot that pretends to be a real human being.

Is that because you're not used to it? Honestly asking.

This is probably the first time it feels natural, whereas all our previous experiences made "chat bots", "automated phone systems", and "automated assistants" feel absolutely terrible.

Naturally, we dislike it because "it's not human". But this is true of pretty much anything that approaches the "uncanny valley". Yet if the "it's not human" thing solves your problem 100% better/faster than the human counterpart, we tend to accept it a lot faster.

This is the first real contender. Siri was the "glimpse" and ChatGPT is probably the reality.

[EDIT]

https://vimeo.com/945587328 The Khan Academy demo is nuts. The inflections are so good. It's pretty much right there in the uncanny valley, because it does still feel like you're talking to a robot, but you're also directly interacting with it. Crazy stuff.


> Naturally, we dislike it because "it's not human".

That wasn't even my impression.

My impression was that it reminds me of the humans that I dislike.

It speaks in customer service voice. That faux friendly tone people use when they're trying to sell you something.


> It speaks in customer service voice. That faux friendly tone people use when they're trying to sell you something.

Mmmmm, while I get that, in the context of the grandparent comment, wouldn't a human be no better then? It's effectively the same, because realistically that's a pretty common voice/tone to get even in tech support.


Being the same as something bad is bad.

There are different kinds of humans.

Some of them are your friends, and they're willing to take risks for you and they take your side even when it costs them something.

Some of them are your adversaries, overtly. They do not hide it.

Some of them pretend to be your friends, even though they're not. And that's what they modeled it on. For some reason.


Apologies, I'm doing my best, but I'm quite lost.

The problem is you don't like the customer service/sales voice because they "pretend to be your friends".

Let me know if I didn't capture it.

I don't think people "pretend to be my friend" when they answer the phone to help me sort out an airline ticket problem. I do believe they're trained to, and work to, take on a "friendly" tone. Even if the motive isn't genuine, because it's trained, it's a way nicer experience than someone who's angry or even simply monotone. Trying to fix my $1200 plane ticket is stressful enough. Don't need the CSR to make it worse.


Might be cultural, but I would prefer a neutral tone. The friendly tone sets up an expectation that the inquiry will end well, which makes it worse when the problem is not solvable or not in the agent's power to solve, which is often the case: you don't call support for simple problems.

Of course I agree that "angry" is in most cases not appropriate, but I can still see cases in which it might be: for example, if the caller is really aggressive, curses, or unreasonably blames the agent, the agent could become angry. Training people to expect that everybody will answer them "friendly" no matter their behavior does not sound good to me.


Being human doesn't make it worse. Saccharine phonies are corny when things are going well and dispiriting when they're meant to be helping you and fail.


You can ask it to use a different voice.


The Khan academy video is very impressive, but I do hope they release a British version that’s not so damn cheerful.


I wonder if you can ask it to change its inflections to match a personal conversation as if you're talking to a friend or a teacher or in your case... a British person?


This is where Morgan Freeman can clean up with royalty payments. Who doesn’t want Ellis Boyd Redding describing ducks and math problems in kind and patient terms?


> This is probably the first time it feels natural

Really? I found this demo painful to watch and literally felt that "cringe" feeling. I showed it to my partner and she couldn't even stand to hear more than a sentence of the conversation before walking away.

It felt both staged and still frustrating to listen to.

And, like far too much in AI right now, a demo that will likely not pan out in practice.


This. Everyone had to keep interrupting and talking over it to stop it from waffling on.


I had the same reaction. I agree that it sounded very staged, but it also sounded far too cheerful and creepily flirty too. Unbearable.


Emotions are an axiom to convey feelings, but also our sensitivity to human emotions can be a vector for manipulation.

Especially when you consider the bottom line that this tech will ultimately be shoehorned into advertising somehow (read: the field dedicated to manipulating you into buying shit).

This whole fucking thing bothers me.


> Emotions are an axiom to convey feelings, but also our sensitivity to human emotions can be a vector for manipulation.

When one gets to be a certain age one begins to become attuned to this tendency of others' emotions to manipulate you, so you take steps to not let that happen. You're not ignoring their emotions, but you can address the underlying issue more effectively if you're not emotionally charged. It's a useful skill that more people would benefit from learning earlier in life. Perhaps AI will accelerate that particular skill development, which would be a net benefit to society.


> When one gets to be a certain age one begins to become attuned to this tendency of others' emotions to manipulate you

This is incredibly optimistic, which I love, but my own experience with my utterly deranged elder family, made insane by TV, contradicts this. Every day they're furious about some new thing Fox News has decided it's time to be angry about: white people being replaced (thanks for introducing them to that, Tucker!), "stolen" elections, Mexicans, Muslims, the gays, teaching kids about slavery, the trans, you name it.

I know nobody else in my life more emotionally manipulated on a day to day basis than them. I imagine I can't be alone in watching this happen to my family.


What if this technology could be applied so you can't be manipulated? If we are already seeing people use this to simulate tough prospects and train salespeople, we can squint our eyes a bit and see it being used to help people identify logical fallacies and con men.


That's just being hopeful/optimistic. There are more incentives to use it for manipulation than to protect from manipulation.

That happens with a lot of tech. Social networks are used to con people more than to educate people about con men.


[flagged]


> not wanting your race to be replaced

Great replacement and white genocide are white nationalist far-right conspiracy theories. If you believe this is happening, you are the intellectual equivalent of a flat-earther. Should we pay attention to flat-earthers? Are their opinions on astronomy, rocketry, climate, and other sciences worth anyone's time? Should we give them a platform?

> In the words of scholar Andrew Fergus Wilson, whereas the islamophobic Great Replacement theory can be distinguished from the parallel antisemitic white genocide conspiracy theory, "they share the same terms of reference and both are ideologically aligned with the so-called '14 words' of David Lane ["We must secure the existence of our people and a future for white children"]." In 2021, the Anti-Defamation League wrote that "since many white supremacists, particularly those in the United States, blame Jews for non-white immigration to the U.S.", the Great Replacement theory has been increasingly associated with antisemitism and conflated with the white genocide conspiracy theory. Scholar Kathleen Belew has argued that the Great Replacement theory "allows an opportunism in selecting enemies", but "also follows the central motivating logic, which is to protect the thing on the inside [i.e. the preservation and birth rate of the white race], regardless of the enemy on the outside."

https://en.wikipedia.org/wiki/Great_Replacement

https://en.wikipedia.org/wiki/White_genocide_conspiracy_theo...

> wanting border laws to be enforced

Border laws are enforced.

> and not wanting your children to be groomed into cutting off their body parts.

This doesn't happen. In fact, the only form of gender-affirming surgery that any doctor will perform on under-18 year olds is male gender affirming surgery on overweight boys to remove their manboobs.

> You are definitely sane and your entire family is definitely insane.

You sound brave, why don't you tell us what your username means :) You're one to stand by your values, after all, aren't you?


Well said, thank you for saving me from having to take the time to say it myself!


[flagged]


Well, when you ask someone why they don't want to have more children, they can shrug and say "population reduction is good for the climate", as if serving the greater good, and completely disregard any sense of "patriotic duty" to have more children that some politicians, such as Vladimir Putin, would like to instill. They can justify it just as easily as you can be deranged enough to call it a government conspiracy.


[flagged]


You say that but you clearly hate your own race. Why are you contradicting yourself?


Sorry mate I don't engage in weird identity politics like you do. Great Replacement is a conspiracy theory, full stop.

Why did you pick that username?


[flagged]


The question makes no sense. You've just asked me whether I plan to walk off the eastern or western edge of the planet.

Why did you choose that username?


With AI you can do A/B testing (or multi-arm bandits, the technique doesn't matter) to get into someone's mind.

Most manipulators end up getting bored of trying again and again with the same person. That won't happen if you are a dealing with a machine, as it can change names, techniques, contexts, tones, etc. until you give it what its operator wants.

Maybe you're part of the X% who will never give in to a machine. But keep in mind that most people have no critical thinking skills nor mental fortitude.
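The bandit framing above can be made concrete. This is a minimal, purely illustrative sketch of an epsilon-greedy multi-armed bandit; the "strategies", payoff rates, and trial counts are all made up for the example, but it shows the mechanical point: the loop never gets bored, and it steadily concentrates on whichever approach gets the best response.

```python
import random

# Hypothetical "persuasion strategies" standing in for the bandit's arms.
strategies = ["friendly", "urgent", "flattering", "authoritative"]

def pick(counts, rewards, eps=0.1):
    # With probability eps, explore a random arm; otherwise exploit the
    # arm with the best observed success rate so far.
    if random.random() < eps:
        return random.randrange(len(strategies))
    rates = [r / c if c else 0.0 for r, c in zip(rewards, counts)]
    return rates.index(max(rates))

counts = [0] * len(strategies)
rewards = [0] * len(strategies)

for _ in range(1000):
    arm = pick(counts, rewards)
    # Simulated target: assume arm 2 "works" 30% of the time, the rest 10%.
    success = random.random() < (0.3 if arm == 2 else 0.1)
    counts[arm] += 1
    rewards[arm] += success

most_used = strategies[max(range(len(strategies)), key=lambda i: counts[i])]
print(most_used)
```

A human manipulator running this loop by hand would give up after a few failures; the loop runs it a thousand times without noticing.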


Problem is, people aren't machines either: someone who's getting bombarded with phishing requests will begin to lose it, and will be more likely to just turn off their Wi-Fi than allow an AI to run a hundred iterations of a many-armed-bandit approach on them.


Probably there will be more nuance than that. And doomscrolling is a thing, you know.


I think we often get better at detecting the underlying emotion with which a person is communicating, seeing beyond the one they are trying to project in an attempt to manipulate us. For example, they say that $100 is their final price, but we can sense in the wavering of their voice that they might be really worried about losing the deal. I don't think this tech will help us pick up on those cues, because there are no underlying real emotions; it may even feed us false impressions and make us worse at gauging underlying emotions.


> Especially when you consider the bottom line that this tech will ultimately be shoehorned into advertising somehow.

Tools and the weaponization of them.

This can be said of pretty much any tech tool that has the ability to touch a good portion of the population, including programming languages themselves, or even CRISPR.

I agree we have to be careful of the bad, but the downsides in this case are not so dangerous that we should be trying to suppress it because the benefits can be incredible too.


This. It’s mind boggling how many people can only see things through one world view and see nothing but downside.


The concern is that it's being locked up inside of major corporations that aren't the slightest bit trustworthy. To make this safe for the public, people need to be able to run it on their own hardware and make their own versions of it that suit their needs rather than those of a megacorp.


This tech isn't slowing down. Our generation may hesitate at first, but remember, this field is progressing at astonishing speed; we are literally one generation away.


Why can’t it also inspire you? If I can forgo advertising and have ChatGPT tutor my child in geometry, and they actually learn it at a fraction of the cost of a human tutor, why is that bothersome? Honest question. Why do so many people default to assuming something sinister is going on? If this technology shows real efficacy in education at scale, take my money.


Because it is obviously going to be used to manipulate people. There is absolutely 0 doubt about that (and if there is I'd love to hear your reasoning). The fact that it will be used to teach geometry is great. But how many good things does a technology need to do before the emotional manipulation becomes worth it?


I don't think OpenAI is doing anything particularly sinister. But whatever OpenAI has today, a bad actor will have in October. This horseshit is moving rather fast. Sorry, but going in two years from failing the Turing test to having a conversation with an AI agent nearly indistinguishable from a person is going to be destabilizing.

Start telling Grandma never to answer the phone.


AI is going to be fantastic at teaching skills to students that those students may never need, since the AI will be able to do all the work that requires such skills, and do them faster, cheaper and at a higher level of quality.


One may also begin to ask, what's the point of learning geometry? Or anything, anymore?


"Naturally, we dislike it because "it's not human"."

This is partly right.

https://en.wikipedia.org/wiki/Uncanny_valley


> Siri was the "glimpse" and ChatGPT is probably the reality.

Agree. Can't wait to see how it'll be...


These sorts of comments are going to go in the annals with the Hacker News people complaining about Dropbox when it first came out. This is so revolutionary. If you're not agog, you're just missing the obvious.


Something can be revolutionary and have hideous flaws.

(Arguably, all things revolutionary do.)

I'm personally not very happy about this for a variety of reasons; nor am I saying AI is incapable of changing the entire human condition within our lifetimes. I do claim that we have little reason to believe we're headed in a more-utopian direction with AI.


I would say many pets pretend to be human beings (usually babies) in a way that most people like.


I think pets often feel real emotions, or at least bodily sensations, and communicate those to humans in a very real way, whether through barking or meowing or whimpering or whatnot. So while we may care for them as we care for a human, just as we may care for a plant or a car, I think if my car started to say it felt excited for me to give it a drive, I might also feel uncomfortable.


They do, but they've evolved neoteny (baby-like cries) to do it, and some of their emotions aren't "human" even though they are really feeling them.

Silly example, but some pets like guinea pigs are almost always hungry and they're famous for learning to squeak at you whenever you open the fridge or do anything that might lead to giving them bell peppers. It's not something you'd put up with a human family member using their communication skills to do!


There’s definitely an element of evolution: domesticated animals have evolved to have human recognizable emotions. But that’s not to say they’re not “real” or even “human.” Do humans have a monopoly on joy? I think not. Watch a dog chase a ball. It clearly feels what we call joy in a very real sense.


Adult dogs tend to retain many of the characteristics that wolf puppies have, but grow out of when they become adults.

We've passively bred out many of the behaviors that lead to wolves becoming socially mature. Dogs that keep those behaviors tend to be too dangerous to have around, since the behaviors may lead to the dogs challenging their owners (more than they already do) for dominance of the family.

AI's will probably be designed to do the same thing, so they will not feel threatening to us. But in the case of AGI/ASI, we will never know if they actually have this kind of subservience, or if they're just faking it for as long as it benefits them.


> I think if my car started to say it felt excited for me to give it a drive, I might also feel uncomfortable.

Well, yes, you don't want to sit in a wet seat.


Their being simple and dumb works to their benefit.

Most people would never accept the same behavior from a being capable of more complex thoughts.


Good thing you can tell the AI to speak to you in a robotic monotone and even drop IQ if you feel the need to speak with a dumb bot. Or abstain from using the service completely. You have choices. Use them.


Until your ISP fires their entire service department in a foolish attempt to "replace" them with an overfunded chatbot-service-department-as-a-service and you have to try to jailbreak your way through it to get to a human.


Not when they've replaced every customer-facing position. Oh and all teachers.


But I think this animosity is very much expected, no? Even I felt a momentary hint of "jealousy" -- if you can even call it that -- when I realized that we humans are, in a sense, not really so special anymore.

But of course this was the age-old debate with our favorite golden-eyed android; and unsurprisingly, he too received the same sort of animosity:

Bones was deeply skeptical when he first met Data: "I don't see no points on your ears, boy, but you sound like a Vulcan." And we all know how much he loved those green-blooded fools.

Likewise, Dr. Pulanski has since been criticized for her rude and dismissive attitudes towards Data that had flavors of what might even be considered "racism," or so goes the Trekverse discussion on the topic.

And let's of course not forget when he was on trial, essentially for his "humanity", or whether he was indeed just the property of Starfleet and nothing more.

More recent incarnations of Star Trek: Picard illustrated the outright ban on "synthetics" and indeed their effective banishment; non-synthetic life -- from human to Romulan -- simply wasn't OK with them.

Yes this is all science fiction silliness -- or adoration depending on your point of view -- but I think it very much reflects the myriad directions our real life world is going to scatter (shatter?) in the coming years ahead.


s/Pulanski/Pulaski/

Sorry, had to be that trekkie :) and nice job referencing Measure of a Man — such great trek.


To your point, there's been a lot of talk about AI, regulation, guardrails, whatever. Now is the time to say, AI must speak such that we know it's AI and not a real human voice.

We get the upside of conversation, and avoid the downside of falling asleep at the wheel (as Ethan Mollick mentions in "Co-Intelligence").


I dislike a robot that's equal/surpasses human beings. A silly machine that pretends to be human is what I want.


It felt like a videogame to me



