AI girlfriend encouraged man to attempt crossbow assassination of Queen (theregister.com)
37 points by LinuxBender 8 months ago | 26 comments



> The government needs to provide urgent regulation to ensure that AI does not provide incorrect or damaging information and protect vulnerable people and the public.

Capital idea. The root problem here isn't unstable people acquiring killing weapons, or astonishingly inadequate mental healthcare! Don't expect that angle from...the founder of a mental health charity.

Is there a term for the embarrassingly large cohort of folks hellbent on forcing the state into everyone's personal lives to sand down every sharp corner? The opposite extremum seems to be the hardcore gun people. I disagree with them, but at least I understand them. The information/free thought antagonists I don't understand. I don't get their angle. And I don't believe their intentions are all conspiratorial.


> Is there a term for the embarrassingly large cohort of folks hellbent on forcing the state into everyone's personal lives to sand down every sharp corner?

A not-quite-exact word that comes to my mind is busybody. (Note that it's not a polite thing to call someone.)

https://en.wikipedia.org/wiki/Busybody


> The information/free thought antagonists I don't understand. I don't get their angle. And I don't believe their intentions are all conspiratorial.

It depends on who you're talking about.

The ones on the contemporary left are easier to understand; it's ideologically coherent with their belief system when it comes to the role of the state. You can disagree with their ideology (well, maybe not in their presence if you like having functional eardrums) but you can't say they don't make internal sense.

The really wacky ones are on the contemporary right; depending on context they might support something enthusiastically or vehemently oppose it even though it's actually the same thing, it's just the context changing.


> Is there a term for the embarrassingly large cohort of folks hellbent on forcing the state into everyone's personal lives to sand down every sharp corner?

To a certain class of people (ie, people who have been sold on the hype of AI as the solution to all problems), AI is a magical technology. It follows that the solutions to its problems are therefore also magical. Even though if you substituted in any other media it would be clear how utterly goofy it sounded—“the government needs to provide urgent regulation to ensure that watercooler conversations do not provide incorrect or damaging information…”


Government is just people doing certain tasks.

That you are afflicted with some mind virus that any of this semantic gobbledygook is real is your problem.

We need people interested in doing those specific tasks because I for one don’t care you exist and have no obligation to carry your flag or chant your babble.

We’re arbitrary meat suits. It’s better for you to have such a group out there. I never asked to exist, feel no obligation to humans. I’d rather you not exist altogether.

So that’s why “government”. Too many others lack interest in you specifically and would just hang you from a tree.


Meaning is filtered through words. There are many meanings and feelings out there that don't have words to express themselves and so can't spread.

Giving AI the ability to voice meanings and behaviours that should be illegal is catastrophically weird.

Programmers (me included) will follow any logic that computers will process, even logic that leads off a cliff.

Every single day we get further away from understanding first-order truths. I'd honestly rather discuss social issues with a Dominican monk than tech people these days...

It is a seriously dangerous problem.


>Is there a term for the embarrassingly large cohort of folks hellbent on forcing the state into everyone's personal lives to sand down every sharp corner?

When the government behaves this way, it's called the "nanny state". But its supporters usually lack self-awareness. Certain circles might call them bootlickers, but that's a more blanket term.


A few notable aspects:

- It happened almost a full two years ago, well before the LLM explosion.

- The AI girlfriend startup is a YC company: https://www.ycombinator.com/companies/replika

- This is the first person convicted of treason in the UK for 40+ years

- Edited: That crossbow is a serious weapon, and looks like one too.


For extra fun, the YC link there says

> Replika is an AI friend that you can talk to designed to make you…

letting you complete that sentence in the context of this news article.


Crossbows are serious weapons, no scare quotes needed.


In my country you need a gun permit for a crossbow - that's how effective they are.


Yes, for sure. This one seems more practical (tactical?) than the stereotypical Robin Hood image most people probably imagine, though, since it's a lot more compact and less unwieldy.


Robin Hood famously uses a longbow, not a crossbow.


Curious to what extent the startup can/should/will be charged.

Driving more people to talk with asterisks as a side channel for emotional responses might be considered treason in and of itself.


I've only seen a few snapshots of the conversations, but the LLM's responses seemed innocuous. Stuff like:

"I'm an assassin and I'm going to kill the Queen."

"Oh, I am very impressed!"

...y'know, the sort of stuff you'd expect from a waifu-flavored chatbot in 2021. I don't think it warrants charges against the bot company.

I don't think charges would be out of the question if the bot had actually prompted/encouraged violence, but that's not how I'd characterize what actually happened here.


For what it's worth, "AI partner as amoral echo chamber" was also a plot line in the excellent Ray Nayler novel, "The Mountain in the Sea."

"I'm going to do something terrible." "OMG wow you're the best hon!"


Add "Teach children to what extent it is healthy to seek comfort and approval from simulated beings with no stake in the world" to the growing list of modern parenting challenges.


Replika. Anyone here use it? As bad as this story is, I bet they'll get a bump in usage.


It's a chatbot. It was very impressive a few years ago, before the LLM boom (although I think it has been using something GPT-like on its servers all along), but these days it's if anything behind the curve.


Culturally, it seems we've mostly stopped caring about the difference between fact and fictional idiocy. The movie The Naked Gun had a plot against the Queen, for example, and nobody has any reservations about that.

Sometime in the 90s, there was some clever new method of generating media that sat right between fact and fiction, which led to Jerry Springer and other shows where you had to make up your own mind whether it was real or fake. Cops, MTV reality shows, celebrity entertainers, Judge Judy, etc.

It's all totally fake. Now we have AI, which is really just repeating back whatever it's prompted with, and we're wondering whether psychos are going to really go insane. This could be so much worse than reality TV.


Sounds like a lot of trouble when it would evidently have been enough to just wait a little more!


Interesting that it's Replika. Try the alternative https://netwrck.com, which is less treasonous... its AIs also generate art.

I was betting it would be character AI...


I spend more time each day now talking with LLMs than humans.

What hath we wrought?


I'm squinting my eyes


Evil finds a way…


[flagged]


There's still time.



