It sounds like science fiction but it’s not: AI can destroy your business (theguardian.com)
72 points by zqna on April 9, 2023 | 73 comments


As concerning as this is, I'm wondering if LLMs like ChatGPT can be more powerful fraud-detectors than fraud-makers.

Unlike scam detection and filtering methods based on pattern recognition or reputation, LLMs can analyze the content of a scam rather than its artifacts. By that I mean, you can ask the AI "does this make sense, or is there reason to believe it's a scam? If so, how can we verify that it's not a scam, knowing that deepfakes exist" rather than "does this fit x y z pattern that we have seen before?"

I'm hoping, and I may be wrong, that most scams rely on emotional hooks, unsuspecting victims, or desperate victims. LLMs aren't vulnerable in the same way.

Maybe.


> can be more powerful _-detectors than _-makers

Surely you are familiar with (Generative) Adversarial Networks. This is what is called "an escalation" (attack vs defense, etc.).

More to the substantive proposal in the post: remember that incidents during surgery are dramatically reduced just by using checklists - you can already reduce incidents through deterministic regulation of action. And you do not require "fuzzy" systems like ANNs for that.


I agree, but the question is: can a scam that is essentially "send me your money for nothing", with extra steps, escalate enough to confound the understanding of an advanced LLM?

The scam has to be both human-comprehensible and plausible in order to be delivered to the victim in the first place, which places a limit on its complexity which the adversarial anti-scam LLM isn't bound to.


To train GANs you need access to the internal states of the generator, which you wouldn't have with these LLMs.


You don't need the internal state if you have effectively unlimited API calls querying the generator endpoint. Your loss function can be with respect to the adversarial network only.
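
A minimal sketch of that black-box setup (assuming hypothetical query_generator() and embed() helpers, plus PyTorch; none of these names come from a real product): the generator is reached only through API calls, so no gradients flow through it, and the training loss is taken with respect to the discriminator alone.

    # Sketch: train a discriminator against a black-box generator endpoint.
    import torch
    import torch.nn as nn

    def query_generator(prompt: str) -> str:
        """Hypothetical black-box API call to the remote generator endpoint."""
        raise NotImplementedError

    def embed(text: str) -> torch.Tensor:
        """Hypothetical text -> fixed-size (768-dim) feature vector."""
        raise NotImplementedError

    discriminator = nn.Sequential(nn.Linear(768, 128), nn.ReLU(), nn.Linear(128, 1))
    optimizer = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    def train_step(real_texts, prompts):
        fake_texts = [query_generator(p) for p in prompts]  # API calls only, no internal state
        x = torch.stack([embed(t) for t in real_texts + fake_texts])
        y = torch.cat([torch.ones(len(real_texts)), torch.zeros(len(fake_texts))])
        optimizer.zero_grad()
        loss = loss_fn(discriminator(x).squeeze(-1), y)  # gradient w.r.t. discriminator only
        loss.backward()
        optimizer.step()
        return loss.item()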


Then you need an effectively unlimited number of queries to train your network, while the LLM can be trained on a few samples your network produces to counter it.


Wdym? The LLM doesn't have access to your internal state either. Nobody's at an advantage here in a zero sum configuration.


Just read what I wrote again.


You need to explain your thoughts better.


Weapons race. You deploy a more intelligent scam detector AI, I deploy a more powerful scam generator AI.

The only ones that benefit really are the AI makers.


I guess this is an asymmetrical kind of warfare, not simply two AIs duking it out. It's one AI targeting a human with a proposed transaction that results in an absolute loss for the human, and one AI trying to protect against that loss by verifying the plausibility of the transaction in context.

Most scams are fairly transparent and rely on wetware failures, right? The anti-scam AI doesn't have to overcome the scam AI, it has to compensate for the emotional responses or unpreparedness of the target. I think that's less of an ask than generating bulletproof scams. I could be completely wrong.


What if we make the scam detector AI too good and it starts to point out the hypocrisy inherent to our society?


That's the plan.


Article is about deepfake-based fraud. Good to be reminded of this risk, but the headline is riding the mostly separate wave of chatgpt interest.


There are plenty of large databases of image, video and audio material of private individuals tied to their identity. YouTube, Instagram, Facebook, LinkedIn, etc. all give you access to extremely high quality material as well as a lot of context that you could use to make a deepfake more believable. And that's before we get into non-publicly accessible databases, of which there are plenty.


Yes. Several years ago, when banks started using voice pattern recognition as part of telephone banking authentication, that was a pretty cool advance, but now it's looking redundant.


Many business people fail to plot trends correctly out longer than a few years.

Information follows the same S curve most adoption follows.

Anything you collect is a liability once it's stolen, and eventually it becomes freely available regardless of promises to the contrary.


I'm not saying it was bad tech at the time of introduction, just that it's less effective as a security measure now.


It sounds like you are saying that only ChatGPT can be called AI.


No such implication intended.


Came here to say this. It's clickbait, but a good reminder nonetheless.


I wonder if some of the AI fraud will drive to more in person interactions.

If you meet with somebody in person, you know they are not a bot and you know what they look and sound like directly.


It'd be so cosmically funny to end up having to go do banking in actual banks because Silicon Valley innovated us into a dead end with AI.


History is circular. Another example: we have cars and yet are pushing to move back to not having them.


[flagged]


Ideology is pragmatism, just thinking further ahead.


Pragmatism is experimental though. "Accepting what works." Ideology comes in many different sects, including downright teleological and ideals-based ones. Even if a given instance tries to look ahead and succeeds, I wouldn't call it pragmatic, let alone the whole category.


I hope so.

You already need to meet someone in person to get a true feel for them. There are so many things that don't come through video. Smell, for instance, generally plays an important role in our perception.


Quite sure a lot of people would be happy to smell other people less :)


Some financial institutions already support this concept. Accounts can be flagged to require a live video chat to transfer money or even require being in-person and showing state ID.


Some more descriptive terms might be helpful. AI sounds like something that has agency and makes decisions. This is about ML models that generate speech, which is at a different level.


While I am aware that it is possible to use software to clone someone's voice, I am uncertain if it is currently feasible to utilize the cloned voice in real-time situations.

I am curious if these scammers are pre-recording conversations and playing them back in real-time or if they are actually able to speak using the cloned voice technology.


> feasible to utilize the cloned voice in real-time situations

Yes, it has been so for years. The attack on the Dubai bank in Hong Kong was carried out in January 2020.

https://news.ycombinator.com/item?id=29711876


From the article https://www.unite.ai/deepfaked-voice-enabled-35-million-bank...

"Though users at the Audio Fakes Discord and the DeepFaceLab Discord are intensely interested in combining the two technologies into a single video+voice live deepfake architecture, no such product has publicly emerged as yet."


> no such product has publicly emerged as yet

I am not aware of what was used on that occasion; there is also a good chance that the scammers could have hacked a system internally - the stakes are stellar.

Surely, to convince the other party during a phone call a high degree of capability for interaction is required; from the same article, right above your quote:

> we can reasonably assume that the speaker is using a live, real-time deepfake framework

The quoted article is from late 2021. These days, we are seeing reports according to which

> Voice scams are on the rise in a dramatic way. They were the second most common con trick in America last year, and they’re headed for the top spot this year (from Voice scams: a great reason to be fearful of AI at https://news.ycombinator.com/item?id=35459261 )



The videos shown on YouTube have a number of "plausible" common responses pre-created that can be played back by pushing a button.

In operation in the videos it seems to work pretty well. :(


If it's not now, it will be soon.


Near real time is possible; there are lots of "v-tubers" doing it.


And with a lot of people uploading reviews or DIY videos to YouTube it's going to be pretty easy to harvest enough of a voice print. If you have written comments on forums and other social media, such as right here on HN, they also have your writing style to work with.


I think we're on the verge of few-shot believable voice impersonation. Between that, realtime deepfake videos, and AIs being more than good enough to solve CAPTCHAs, it seems like we're at most a few years from having no means of verifying a human on the other end of any given digital communication unless someone figures out and implements a new solution quickly.


> few-shot believable voice impersonation

We have been there for the past few years. And yes, it has been actually used for scamming.

Submitted only a couple of days ago: Voice scams: a great reason to be fearful of AI - https://news.ycombinator.com/item?id=35459261


There are (mostly) solutions, but a lot of people won't like them. As with things today like notarized signatures or just transacting in person, they basically depend on some sort of in-person attestation by a reliable authority. Of course, that means adding a lot more friction to certain types of transactions.


I can see how that might destroy many business models. But from the top of my head I can't come up with any whose loss would have a dramatic negative effect on my wellbeing. Could someone elaborate why I should be worried?


Why would passwords, personal devices, policed platforms etc. fail as an authentication method between known counterparties? Between unknown counterparties the issue is much bigger than just about being a human or not.


It does make it kind of hard to verify someone's identity.

That said, I think trying to verify someone's identity through online means only became viable a few years ago, when everyone had a somewhat working camera and microphone available, and with any luck the risk of deepfakes will cause an early end to the scourge of people trying to film themselves holding a form of ID.


Online verification of identity might just not suffice for some things or there might be specialized IDs for online purposes etc.


During COVID brokerages did start allowing online transactions for certain things they didn't used to. However, at least my brokerage has reverted to requiring either a direct or indirect in-person presence.


If a common-sense LLM is listening to grandma's calls (privacy alarms going off but hear me out), it can stop her from wiring her life savings to an untrustworthy destination, without having seen that particular scam before.
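
A minimal sketch of what that screening could look like, assuming a hypothetical ask_llm() wrapper around whatever chat-completion API is available (the names are illustrative, not a specific product):

    def ask_llm(prompt: str) -> str:
        """Hypothetical call to an LLM endpoint; returns the model's reply."""
        raise NotImplementedError

    def screen_call(transcript: str) -> str:
        # Ask for a judgement on the content of the call, not a pattern match.
        prompt = (
            "You are screening a phone call on behalf of a vulnerable person.\n"
            "Read the transcript and answer: does the caller ask for money, gift "
            "cards, wire transfers, or personal credentials, or create artificial "
            "urgency? Answer SCAM_LIKELY or SCAM_UNLIKELY, then give one sentence "
            "of advice on how to verify the caller's identity out of band.\n\n"
            f"Transcript:\n{transcript}"
        )
        return ask_llm(prompt)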


Once we can run our own personal, private LLMs it will definitely open up a world of possibilities.

Actually applications like this will probably be implemented on the cloud-based models, since 98% of the public does not care about privacy as much as people on this forum.


It will open up a whole new category of vulnerabilities of the 'how to fool the LLM while convincing granny' type as well.

Then there is the liability question: if the LLM is lured by one of those and granny sends her money to Nigeria or the punks around the corner - take your pick - is the LLM vendor then liable for (part of) the loss?

In this it may start to resemble the similar conundrum in self-driving vehicles, where a nearly-perfect but sometimes easily fooled self-driving system will lull drivers into a false sense of security because the system has never failed - until it did not see that broken car standing in the middle of the road and slammed right into it. When granny comes to rely on the robot voice telling her what is suspect and what is not, she may end up trusting the thing over her own better judgement, just like the driver who dozed off behind the wheel of the crashed self-driving vehicle did.


We will just get ChatGPT 6 to solve it for us. Done.


It is not just the fact that the poster writes «Done.» when, actually and comically, nothing is "done" at all in said proposal,

nor the other possible point that statistical Large Language Models are not problem solvers, being special within Machine Learning in that they optimize for goals orthogonal to actual "solutions" (not to mention that Sam Altman himself, while proud of the results («They are not laughing now, are they»), is the first to jump into alarm when people drool "So it's AGI!?" at him),

but it must be noted that those scams happen because people take their responsibilities lightly (among them, realizing that they are not in the eighteenth century anymore) - and the post has a tint of this dangerous laid-back approach.


I'm sorry, I was being sarcastic, it was a crappy comment and it was unhelpful.


No problem on this side, C.; I just took the occasion of your input to say something substantive.

(I would suggest that you mark sarcasm, e.g. with '/S' or equivalent. Once upon a time we all thought that rhetoric was immediately recognizable; then we met people who would believe ideas beyond the boundary of "anything".)


The election season is going to be out of control.

Fake ads for your opponent in their voice spouting bonkers positions and nonsense. Everywhere. Online, voicemails, far as the eye can see.

It’s gonna be rough.


I had in fact swiftly daydreamt (day-nightmared, more like) the scene of the watch-the-world-burn'ers discussing plans for the prank while on this page.

But the education of the public has already started, with the deepfakes that have circulated in the past few weeks, from odd or dreamed-up depictions of high "authorities" to discussions about the status of documents related to e.g. the protests in France.


This one will be broadly recognisable.

Personally I'm more worried about the next one, where the AI infestation will be far more subtle.


It's going to be taken about as seriously as random photoshopped images were until now.


Yes it can. Especially tabloids like The Guardian. Fairly easy to come up with the sort of stuff they wrote. Including this hyperbole.


> Especially tabloids like

What about The Spectator - https://news.ycombinator.com/item?id=35459261

The very agents that can be scammed into damaging you are not aware of the state of facts. (They may read those "«tabloids»". It's educational.)


It has already got people fired. Copywriters are being replaced by a manager getting an AI to write bland drivel. Also model designers for software (games). It's only going to expand.


I watched a video today on how they made The Mandalorian, and Hollywood is basically moving away from green screens towards gigantic curved LED screens which have scenes created in a 3D game engine. It produces more natural lighting results and is better for actors and cinematographers to work with. It's far cheaper than shooting on location, but I wonder if there's going to be a net loss in jobs when you can just have a couple of prompt engineers and maybe one or two artists to make tweaks to get any shot you want.

Video for anyone interested:

https://youtu.be/Ufp8weYYDE8


First, it is clickbait. Second, if I am being asked to transfer money by, say, my fake relative, I would surely do some verbal grilling first. Are they saying that the listening part of the AI can perfectly decipher all the questions and give proper answers? I doubt it. And even then, I will make the bank transfer to a relative. How would the scammer actually get it?

As for a bank manager transferring a hefty sum without some proper verification - the problem is not the scammers but the manager themselves.


>even then, I will make the bank transfer to a relative

The way the scam is said to work is that someone pretending to be your relative tells you they are in an unusual situation away from home, and it is very urgent, so you must send them money in a different way.

e.g.

"The caller said she was locked up for a misunderstanding between herself and the Mexican state police of Quintana Roo. They needed money for a Mexican lawyer and court costs"

(In that particular incident, apparently they were impersonating someone who had been to Mexico but had - unfortunately for the scammer - returned prior to the call.)

https://consumer.ftc.gov/consumer-alerts/2023/03/scammers-us...


I also certainly don’t know my own account numbers, much less a relative’s. How am I going to know if they’ve given me the correct account details?


Send money by email


Oh so much more secure. Next up, via phone number?


> Second, if I am being asked to transfer money by, say, my fake relative, I would surely do some verbal grilling first.

Sadly, you are only one person on a planet of more than 7 billion.

I worked with someone whose mother had been scammed multiple times by folks claiming to be his (deadbeat) brother. Social engineering works, especially as the attacks get more sophisticated.


I wonder if this will drive adoption of secure voice communication à la Signal or something.


> secure voice communication

How would you implement that? You cannot certify a timbre you can simulate.


You can certify the endpoint hasn't changed its keys, as Signal currently does.
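
A minimal sketch of the key-continuity idea ("trust on first use", the principle behind Signal's safety numbers), with illustrative names and storage only:

    # Remember a fingerprint of each contact's public key; warn loudly if it changes.
    import hashlib
    import json
    from pathlib import Path

    PIN_STORE = Path("known_keys.json")  # illustrative local pin store

    def fingerprint(public_key: bytes) -> str:
        return hashlib.sha256(public_key).hexdigest()

    def verify_contact(contact_id: str, public_key: bytes) -> bool:
        pins = json.loads(PIN_STORE.read_text()) if PIN_STORE.exists() else {}
        fp = fingerprint(public_key)
        if contact_id not in pins:
            pins[contact_id] = fp  # first contact: pin the key
            PIN_STORE.write_text(json.dumps(pins))
            return True
        if pins[contact_id] != fp:
            print(f"WARNING: key for {contact_id} changed; re-verify in person.")
            return False
        return True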


The only bit I could see helping is certificate-based identity verification. It wouldn't help with "Biden admitting he is a space alien" type content verification.


Your business? AI can destroy your whole country. Just wait for the next elections.



