I'm kind of amazed how slow 'AI phishing' has been to roll out.
The technology for customised text based attacks at scale has been available at least since Llama was open sourced. The tech for custom voice and image based attacks is basically there too with whisper / tortoise and stable diffusion - though clearly more expensive to render. I'm honestly not sure why social networks aren't being leveraged more to target and spoof individuals - especially elderly people.
Tailored attacks impersonating text or voice messages from close contacts and family members should be fairly common, and yet they're not. Robo-calls that carry out a two way conversation convincingly impersonating bank or police officials should be everywhere. Yet the only spam-calls I ever receive are from Indian call centres or static messages using decades old synthesised voice tech.
The ROI on scams based in Indian call centers is already huge.
They recently took my mom for $25k for what was a few hours of “work” over the span of two days. When I reviewed their communications and got the full story from my mom they’re in some ways laughably bad and in other ways very cunning.
Turns out it all started with a comically bad initial e-mail, pop-up, and then remote access. Then follow-up calls and text messages (Bitcoin QR code with her bank logo) and multiple people impersonating banks, various government agencies, etc. The end to end pipeline to replicate this via AI would be very complex and difficult (today).
I imagine the increased scale, additional opportunity, and reduced “payroll” that could be realized utilizing AI (given the initial level of development effort) just isn’t there. Yet.
Banks are actually getting much better at ferreting out these scams.
My Mom got one of those calls where the guy said my son was in jail and she needed to pay his cash bail to get him out. My parents never called me or him; they just panicked and went to the bank to get the cash out, and the bank refused to give them the money because they knew it was a scam. The staff started asking my parents questions and pointing out how the scam worked when my parents got agitated that they weren't being given the money.
They called me from the bank and I texted my son and he was completely confused since he was at work and was just fine. I thanked the people at the bank profusely for their intervention.
It was a great learning experience for my parents as well. They are way more leery about strangers calling and have already hung up on several scammers. They also filter out email messages and won't click on any links in emails or pop-ups.
The whole experience really put their radar up for this stuff now.
I've read that even Target does this now, especially when an elderly person comes in trying to buy thousands of dollars worth of gift cards. Unfortunately too late for my grandmother. In her case, she'll know it's a scam and still participate just because she's bored.
Then you get the BTC folks arguing that all this regulation and safeguarding is a plot to prevent you from using your money. And once BTC takes over your mom can finally be her own bank.
So there's regulation, then. Until I read this, I had assumed the bank branch was helpfully breaking the law by refusing to release customer funds. Do you know what law forces (or allows) banks to do this?
> Banks are actually getting much better at ferreting out these scams.
A friend of mine runs a retail bank branch for a major bank. The other staff refer customers to her if they suspect a scam. She sees a few of these each month.
But those are just the people who personally come into a bank branch.
Her bank tried... She lives across the country from me and I flew to see her to help put her life back together. It was very involved: pretty much had to reconstruct her entire digital life. New phone numbers, e-mail addresses, online accounts, passwords, all banking, credit cards, recurring payments, bill pay - EVERYTHING. My concern was now that they had a fish on the hook where the scam worked once, they'd (of course) come back with a new angle.
As part of that we went to the branch where she made these cash withdrawals. I had the opportunity to speak with the teller that was there both days. The teller's story (and I believe her) was that she was extremely suspicious of the overall situation and had an extended conversation with my mom about how unusual this was, often used for scams, etc. However, back to my original point the scammers were well ahead of this...
My mom banks with Wells Fargo. Once the scammers discovered that, they were able to capitalize on the news story from years back about the Wells Fargo fake-accounts fraud scandal. They even sent her links to news stories about it.
They were able to convince my mom that her money wasn't safe at Wells Fargo because the government was investigating another Wells Fargo scam and that all of their employees are in on it. They had another scammer in the org impersonate someone from the Federal Trade Commission (ridiculous) who was supposedly investigating this.
They prepared my mom with a robust script to dance around the teller's questions and skepticism that the cash she was withdrawing was for legitimate purposes. In this case the script was something about my mom doing construction on her home and paying laborers, contractors, etc in cash.
The entirety of the scam is wild. Once they got remote access from the phishing e-mail, they opened up the Windows Command Prompt and pasted in a bunch of echo statements with stuff like "Child pornography detected" and a bunch of other ridiculous stuff. However, let's remember that thanks to Hollywood, any terminal interface looks scary and sophisticated to the general population (every movie ever, "hacking" with CLI gibberish).
They had her convinced she was being watched/followed, investigated for CP, her phones were tapped, etc, etc, etc. She was so freaked out she didn't know if she was going to be killed or go to jail.
My sister happened to try to reach her, and my mom called her back from a friend's phone. My sister somehow stumbled on this and of course knew it was a scam. My mom didn't believe her. Needless to say, I have a lot of credibility in my family on this stuff (given what I do), so thankfully my sister was able to get my mom to call me.
At the beginning of the call I was able to say to my mom "This is a scam. Let me guess: they did X, then did Y, told you Z, something with cryptocurrency, etc, etc". Only when I more-or-less nailed/predicted most of the details before my mom could even tell me her story did she realize that this was, in fact, a scam.
It made me realize something: when we were growing up a saying was "talk to your kids about drugs". There needs to be an equivalent campaign for these scams. Like many here I follow this kind of stuff pretty closely but only thought of it as a curiosity because they're so ridiculous (to me/us). It never occurred to me that I should be regularly updating less sophisticated friends and family members on the scams du jour.
HN Community: Talk to your friends and family about scams.
>> It made me realize something: when we were growing up a saying was "talk to your kids about drugs". There needs to be an equivalent campaign for these scams.
THIS.
I've been saying this for years. I have no idea why, in a world that has been inhospitable to older folks for decades, there are no programs to help elderly and at-risk people keep an eye out for these.
I'm sorry to hear about your mother as well. That's heartbreaking.
There's a third possibility. It's just not turnkey enough yet for criminal enterprises to bother. I'm sure cartels and mafia level gangsters sometimes have great tech people, but the level of op sec and technical knowhow exhibited by most professional criminals seems low.
Forgive my ignorance, but why are we surprised that voice messages aren't being spoofed more often? Doesn't this require a pretty darn decent dataset for training? Unless they've got a ton of videos of themselves shared on a public social media profile, I don't know that this is going to be a thing.
The dataset you need to train in the first place is indeed huge, but I think the idea is that once the model is trained, new "voices" can be acquired with much less data than was required to train it in the first place. Just like you can instruct ChatGPT to talk about topics never heard of on the internet, in a dialect you customize and invent on the spot, and it can comply despite not having consumed an internet's worth of subject matter about it.
Soon the role of the Indian call centers will change from running the scam directly to making spam calls to trusted contacts of the intended mark to collect voice data for TTS model fine tuning.
I'm not a data science / AI person at all, but AFAIK while initial model training is enormously processor intensive, customising a trained model is not. This has been my experience playing with custom-training Stable Diffusion on a very low-memory home GPU. I'd assume it's true for voice generators like Tortoise [1] also. Moreover, while Eleven Labs isn't in my opinion as good, they let you custom-train voices for an incredibly low cost and with a tiny amount of sample data [3].
For perfect audio spoofing, lots of audio would be needed. Bear in mind there are literally millions of podcasts available [2], and billions of youtube videos. Should be trivial to grab biographic data and voice samples from a subset of them.
I've had this nagging worry that my voice will be harvested during a random call from someone acting like a salesperson or surveyor. Besides getting money from family members, I really can't think of a scenario where my voice would allow a scammer to get into one of my accounts, but I'm sure it would make social engineering such an attack that much easier.
Earlier I received a strange call from someone. The voice asked if I could hear him. I said "I can hear you, now who is speaking". Then they hung up. I'm pretty sure they were hoping that I would respond with the word "yes". That is a very dangerous word in the wrong hands!
There was some "technician" calling my wife just today about cleaning the house's vents and she handed me the phone without much context. He claimed to have personally cleaned them for the former owner, and I told him to send me a mail or letter, and gave him my email address.
Afterwards I realized it might have been some sort of scam attempt. He seemed to be in such a rush to get it done tomorrow and book a time now now now. And he sent no mail.
I guess the scam would have been to prepay some sort of fee or the whole cleaning. But I've got this feeling he was fishing for something more. Dunno if they have the know-how to actually use my voice to scam e.g. my mother.
Why bother operating a much more complex LLM stack when you're already raking in cash from confused boomers trying to pay the IRS off with iTunes gift cards? Their system works. They'll take up machine learning powered tools once the old farts all die off/go broke and they need more complicated scams for more technology-savvy victims.
I'm being glib here but also if you're the type of person who gets texts from the IRS from a number you've never seen and take it at face value that you can pay off your overdue tax bill with gift cards... like, you are already the perfect victim for this sort of scam. They don't need to be good, they just need you to self identify and leap right into the trap.
Most people are criminals. Speeding, piracy, dubious porn, and so on. In a wider sense consumption of products or other use of criminally exploited labour.
At the same time, most people are more clever than one tends to expect.
"If the prosecutor is obliged to choose his cases, it follows that he can choose his defendants. Therein is the most dangerous power of the prosecutor: that he will pick people that he thinks he should get, rather than pick cases that need to be prosecuted. With the law books filled with a great assortment of crimes, a prosecutor stands a fair chance of finding at least a technical violation of some act on the part of almost anyone."
When the security team does that, they give up all right to tell people to analyse URLs, and assume all responsibility for anything bad happening from clicking on links, since they're advertising the links as secure.
I was about to complain about this. I take security training telling me to inspect URLs, and then Microsoft Safe Links or whatever it's called gives me a twenty-line-long URL filled with random characters. I have to trust it works, since they took my manual inspection ability away.
I am fairly certain that whole thing is about tracking and not security. If it were about security, they could still include the real URL in the "safeurl" URL. They do not do this, though, because it is not about safety; it is about data.
> Tbh the browser/email client makers are complicit in these phishing attempts for hiding the URLs and the actual email addresses.
It's worse. Research "Scamicry".
Big business now is so fake, such a grift, drenched in PR deception, and lacking integrity and trustworthiness, that there isn't much space left between what is "legitimate" and what is a scam.

If businesses like Google or Facebook hide URLs and email addresses, that's not a casual "mistake". It's because it's to their profitable advantage to do so. And they know it puts you in harm's way. So yes, they're complicit in scams.

To make themselves a little more competitive, businesses are always learning from scammers, while good scammers keenly learn from businesses to look more legit. Some ransomware "services" even have better customer support than billion-dollar companies. And big business is certainly using the same AI tools as cybercriminals.

So the problem isn't how clever and scurrilous scammers have gotten; it's how far legitimate services have fallen, so that ordinary folk struggle to know the difference. How can we trust our own instincts for selecting what is good and wholesome from what is rotten, when there are few moral differences? The only difference resides in a digital identifier.
I beat that drum so long it turned into beating my head against a wall.
The last two companies I worked for insisted that customer account security was the highest priority. But as soon as I said we needed to stop hiding links to our own website behind HubSpot tracking URLs, so that we don't train our customers to click links that look like gobbledygook garbage, the marketing team melted down, and it became clear where user account security actually fell on the priority list.
I don't think it's always malicious, though. I think most people in most companies just don't realize the risk. Like, I had to explain to my doctor's office why I'm never going to "confirm my identity" by rattling off my DOB and address at the beginning of a call when they called me. I even think of those specific data points as public information anyway and I'm not going to participate in that nonsense. It had never occurred to them that this was risky behavior.
It did make me appreciate my parish priest's method, though. Every quarter or so, he reminds people from the pulpit that he will never email them asking for gift cards or anything of the sort. If the parish needs money for something, he promises he'll ask for it right from the pulpit!
People realize the risk, they just think it can't happen to them personally, and/or just don't care because they personally aren't going to bear the risk. People run businesses the way they drive their cars, i.e. selfishly and arrogantly.
Are all people bad drivers? Technology gives diffusion of responsibility, and business gives limited liability. Put them together and you have a magnified sense of agency and invulnerability. But without wheels and a windshield, people couldn't travel at 100 miles an hour. The question is how gracefully the person at the wheel handles that. Are you a gentleman in a Jaguar or a BMW driver? [0]
It's not a question of how gracefully the person behind the wheel handles it. There is a certain moral expectation and minimum standard of handling, and this expectation is more often than not enforced by law and/or threat of civil suit, sometimes with severe penalties for deviating from expectation. That legal framework exists precisely because individuals cannot be trusted to handle it properly.
The natural set of incentives does not work well with human psychology. It does not prevent mishandling of motor vehicles, or at least fails to prevent it in enough cases that additional disincentives are needed, to ensure the safety of the public.
Stated another way: Enough people are bad enough drivers that we need laws and civil liability to create additional incentives against bad driving. The threat of collision, property damage, injury, or death to oneself, passengers, and people outside the vehicle is clearly not sufficient.
Driving is actually a great analogy, but maybe not for the reason you intended. If we relied only on individuals to act responsibly, the roads would be much more dangerous than they currently are.
Not sure we should stretch the car analogy too far, but you're right about rules and regulations. Most people can be relied on to be considerate, but that one percent of assholes ruins it for everyone. From where I'm standing, that one percent is basically American big tech. The problem is, we have traffic lights, stop signs, speed limits, and highway cops patrolling, but the big US corporations just drive like assholes and get away with it anyway. In fact, they're more like terrorists who drive a truck into a crowd of people, while we all stand around helplessly and wail, "what can be done?!"

Now, if this were a proper American tale, there would be a Blues Brothers style 500-car chase, and at the last moment, just as the bigtechmobile is about to jump the unfinished bridge Dukes of Hazzard style, Bruce Willis would swoop down in an Apache attack helicopter and blow them off the map!

Oops, I think I've stretched the car analogy too far.
I like your priest's style. Maybe more cybersecurity and anti-fraud from the pulpit is the way to go, now that the digital realm has failed. I'll have a word with our vicar, see if we can't squeeze a bit of Kevin Mitnick and Julian Assange in between Proverbs and Revelations.
To be fair, he also uses the pulpit for the announcements before the Mass, not just preaching ;)
It really does help especially the elderly parishioners to hear it directly from the mouth of someone they trust, though. And he tells them if they have any concerns, any doubts about a situation they've gotten into, they can always call the parish office and they'll do the best they can to help them with whatever issue they're facing.
Even just training people to always hang up and call someone who definitely would know if the person they think is reaching out to them actually needs help is such a good first step. Even if the scam email "from" the priest says, "I can't talk right now," call the office. If he were in a bind, he wouldn't be the only one who knew about it.
Why don’t US phone carriers give their users the ability to block foreign calls terminating in the U.S., at the telephony signaling layer? In almost no case do I ever want to receive a phone call from a foreign country with a spoofed number. Nor do I think anyone in my family wants to either.
Imagine old people getting phone calls from frantic children. They won't know real from fake. Add tech like this to SIM forgery... and we will devolve from a high-trust society to a no-trust society.
I think everyone with elderly parents already needs to have "the talk" with those parents, to help them to understand and deal with common (and less common) scams that prey on old people, what forms of communication to trust, what capabilities scammers have, and so on. All that is changing is the scammers' capabilities.
Happened to my family. Grandfather got a call from a panicky grandchild that sounded like them. The teller at Western Union is the only reason it was stopped. The scary thing is this happened more than 5 years ago so it’s only getting worse.
I expect that with AI, we'll be less able to rely on the heuristic of bad grammar to easily detect phishing. That one flaw gave the phishers away so often, and made it so obvious ...
I suspect it's a compromise. Entertaining a target with high literacy skills is more work. If you have AI that is excellent at communication, there goes one reason to aim low.
I see this a lot, but it seems hard to verify. Has a scammer ever actually come out and said they deliberately use poor grammar? It would be interesting to compare scam emails/messages from scammers based in the US/UK/Australia with those from India or Nigeria, and see if the pattern holds up for both.
Easy: as you say, all you need to do is compare. I occasionally get a glimpse of spam in non-English languages with non-Latin scripts, and it's the same.
Absolutely. I was talking to a teacher and they said the translations were easy to spot because they were so good, far beyond what the kid was normally doing. I then showed them the same kind of prompt plus something like "write this as an X year old French student with only moderate grasp of ..." and it was far more plausible.
I noticed how well it understands general parlance after it created marketing style copy for me and I told it to sound "less wanky" and it made it much more to the point.
I was called by an Indian scammer like ten years ago, trying to convince me to follow some script obviously made for Windows, when at the time I hadn't had a Windows box for quite some time. A fun moment, in that specific case.

Probably the funniest thing here is that this call reached me despite the fact that I am French, living in France. So I really wonder how they ended up calling me. I mean, what were the chances I would understand an English speaker with an Indian accent (I like how it sounds, but it's definitely an additional difficulty for a non-native)?
I read here and there how extortion of old USA citizens by some organized Indian citizens is really a thing. To my mind the main issue at stake is that we have global-level communication facilities, extremely high wealth disparities at world scale, and no compelling global social endeavor to harmonize human quality of life for everyone. I don't mean the latter is on the official agenda of most countries either, but at global scale it's obviously even worse.
With all that in mind, blaming a whole nation for the illegitimate actions of some minority in the country, all the more when the international geopolitical context itself is all but fair, is probably not going to solve any issue.
I wouldn't be surprised if a lot of the viewbots are using the same pool of IP addresses. Blocking VPNs, VPSes, Tor, and ranges with large amounts of bans would probably help.
On the other hand, twitch keeps firing employees, so they probably just ban the stream account every 20 minutes because they don't have the manpower.
Kids starving in Africa doesn't mean families struggling to put food on the table for their kids in America have an ideal situation. Grim is relative to local conditions, not global ones. Grim is an expression of an environment's impact on the psyche and will to live.
I can hardly dodge the propaganda that makes some names feel like compulsory basic social knowledge, like Mr Musk and the like. It's the very first time I've read about MrBeast, KaiCenat, and IShowSpeed, however.
Note that the admiration/detestation opinion might not be as socially mandatory. But it's probably about as optional as an agnostic position is tenable in a society full of theists looking for heretics to burn on the one hand, and fanatical atheists eager to decapitate any devout on the other.
When I typed my phone number in the box on "Musk" "investment" website my landline rang instantly after entering the last digit. It was definitely an onkeydown event.
A friendly fast talking man in an extremely busy sounding (fake) call center asked me if I was $name_I_put_in_the_form. The voices in the background were people further down the sign up process.
I said yes, then asked how he got my number. He said I just filled out the form on the website. Then the form was replaced by a new page.
They did a good job confusing me, it was very impressive. I don't confuse easily.
Yep, these are super common, especially on days SpaceX is launching Starship. They only live-broadcast it on X, so it allows these scammers to step in and attempt to trick people.
I watched the last Starship launch on a scam YouTube stream. It was 30 minutes delayed, which I thought was because Musk wanted to promote Twitter. They said multiple times that Elon would announce something big after the launch.

Directly after lift-off it cut to Elon giving a speech, and I only noticed this was a scam channel when he talked about the QR code and crypto.
Using AI might be bringing out some low-effort success, but at the end of the day, it is a skill issue on our front.

A common heuristic to look out for is "badly"-written/spoken communication. The "AI vs Actual Indian" comment and Nigerian prince emails stand out for most people, but they still ended up working well enough to become this widespread.

You just need to employ some critical thinking for most external communication now. It is no different from some highly-motivated scammers doing it the old-fashioned way. At the end of the day, we are trying to replicate the success of some native-speaking teens (https://news.ycombinator.com/item?id=32959001).
I've already experienced two AI-powered phishing attempts personally in the last few weeks. One was pretty transparent, but the other almost got me. I expect we'll all see a lot more of these soon.
Does iOS count? In that case people have been compromised without clicking anything; just receiving an invisible text message is enough. Browsers have exploits with sandbox escapes. Any link to a file that is automatically downloaded and opened in an application (an Office doc or PDF, for example) can exploit vulnerabilities in the underlying application and allow for anything, including remote code execution.
Hey, just going to say what I've been telling folks IRL, if you are reading this, and your parents and family members aren't tech savvy, you need to set them up with two factor authentication now.
Because you know how to do that, and it's so much easier than helping them when they get hacked.
Friend receives an email from ISP, asking her to contact them.
She searches, comes across a "customer service number" on a legit looking page, calls them up.
(Whoever she called) plays out a 30 minute charade about how she's been flagged by IRS for illegal activity and is about to have her business accounts frozen, including multiple phone transfers to "another party" (played by different people) to boost authenticity.
And during this whole time, they not once asked her for any "red flag" information (e.g. account #, SSN).
Instead, it seemed to be a shell game of extracting limited information (last 3 of your account #?), then having "unrelated" parties parrot that back as proof of their "working for the government."
I expect it would have eventually escalated into an actionable ask, but they were definitely playing the intermediate-term game.
If not for the utter moral black hole of the endeavor, I'd be kind of impressed.
I shouldn't, but sometimes I play along just to see what the scam looks like.
Last time I did this, it took three days of texting my new friend before it was finally clear that what she really wanted more than anything was to teach me to trade cryptocurrency.
Once, I thought I had her, because she spelled D&D like: D&amp;D, but she played it off real cool and just explained that her English isn't that great so she used translation software.
In retrospect I think that all of her probing questions about my Svirfneblin cleric were because she later intended to call him up and teach him to trade cryptocurrency. I like to think he's in some scammer's database now, causing confusion. He'd like that too.
Once I understood what she was after, I explained that my problem with cryptocurrency was that it resembled money too closely and really what I'd like to do with blockchains is to do away with money in favor of something entirely different.
Her training dataset had not prepared her for this conversation, so it was quite clear when her human handler took over. They were very rude, unlike their AI pet, and tried to bully me into sharing other people's contact info, which is when I lost interest.
I noticed the same pattern. The rude humans afterwards answered with expressions that sounded like translated Chinese (like, I wouldn't have thought to mention the ancestors' graves).
MFA doesn't stop this kind of phishing. If you're tricked into putting in your password, you'll likely put in your 2FA code right after. A YubiKey or device passkey that uses WebAuthn can stop these methods, since the domain seeking authentication is checked and it won't authenticate unless it's the original domain.
Even then, that won't help scams and fraud that just trick you into sending money, or direct you to install malware.
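The origin-binding point about WebAuthn can be illustrated with a toy model. To be clear, this is not real WebAuthn: the domains are hypothetical, the data structures are simplified, and an HMAC over a shared key stands in for the credential's public-key signature. The idea it demonstrates is real, though: the browser, not the attacker, writes the origin it actually visited into the signed clientDataJSON, so a login relayed through a lookalike phishing domain fails verification at the real site.

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"device-private-key"  # stand-in for the credential's private key

def authenticator_sign(origin: str, challenge: str) -> tuple[bytes, bytes]:
    # The browser fills in the origin it is actually connected to;
    # a phishing proxy can relay traffic but cannot forge this field.
    client_data = json.dumps({"origin": origin, "challenge": challenge}).encode()
    sig = hmac.new(DEVICE_KEY, hashlib.sha256(client_data).digest(), hashlib.sha256).digest()
    return client_data, sig

def server_verify(client_data: bytes, sig: bytes, expected_origin: str, challenge: str) -> bool:
    data = json.loads(client_data)
    if data["origin"] != expected_origin or data["challenge"] != challenge:
        return False
    expected = hmac.new(DEVICE_KEY, hashlib.sha256(client_data).digest(), hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

challenge = "abc123"
# Legitimate login: the browser is on the real site, so the origin matches.
cd, sig = authenticator_sign("https://bank.example", challenge)
assert server_verify(cd, sig, "https://bank.example", challenge)
# Through a phishing proxy: the browser was really on the lookalike
# domain, so the signed origin mismatches and the bank rejects it.
cd, sig = authenticator_sign("https://bank-login.example", challenge)
assert not server_verify(cd, sig, "https://bank.example", challenge)
```

This is exactly why the same attack works against TOTP: a 6-digit code carries no information about which domain the victim typed it into.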
Surely it won't hurt. At minimum, it makes the attacker's job much harder: their window to exploit becomes at most 30 seconds, instead of however long you go without changing your password.
Tools like Evilginx proxy the traffic, then grab the auth token / cookie after a successful login. From there you can send the session tokens to something like NecroBrowser to automatically do whatever you want with the account. The whole hack can happen in seconds.
I set up 2FA codes through Google Authenticator with my family and employees. That is to say, I generate a QR code, we all scan it while we are in the room together, and we can use it at any time to check who we are really speaking to. This is in addition to a question/answer pair that we have had with my immediate family for years (duress question, duress answer, standard question, standard answer).
Interesting. So it's a bit like providing a public key: if they need to make sure they are talking with you, they ask you to provide the TOTP code and check that they have the same number on their side?
Yeah, that's right. Me, my 2 kids and my wife all have the same code; I have one with my brother and my dad (my mum is a bit too past it ... ) and one with my employees (I only have 2 ... ). It's like a way to prove you were all in the same room at the same time! I have a little script that produces a QR code; then I delete it and it will never exist again :) EDIT: my youngest daughter in particular really loves it. When I go on a run and get home without my key and knock on the door, she grabs her iPad, opens the door a little crack and says "what's the code?"
If you are your family’s de facto IT support, it is worth considering Seraph Secure, which can detect when someone might be falling prey to an online scam and can notify you (among other things).
Rather than sending an article that they'll ignore, I recommend helping them do it when you visit. Note: you're guarding against phishing and also locking themselves out of their accounts. Both are important.
I bought Mom a Yubikey and helped her set it up on her Google account. She has it on her keychain. She doesn't need to remember how to use it, though, since it's only needed when she buys a new computer.
For good measure, I also helped her print out backup codes (and I know where they are) and I registered my Yubikey, just in case.
Nowadays, an old backup phone might also work, but I think paper backups are better because an old, unused phone might not start.
It’s actually worse than that - AI powered phishing sites will also copy your device profile and mouse, gesture and keyboard signature and use this to get past common anti-fraud techniques like device fingerprinting and behavioural biometrics.
I think this is one of those "the only thing that's worse is everything else" situations. Surely there are solutions, but I doubt there are solutions banks and payment processors would be interested in paying for, and at least the US government isn't particularly interested in compelling banks to do anything expensive.
Yea, my bad I guess. I tend to think people mostly get that biometrics are, well, mostly immutable, and that not being able to switch them up in response to a suspected breach is a huge inherent weakness. So the only defense of them I really get from anyone is that the effort for the user is minimized while the effort for the attacker is still fairly high. The problem with that is why I mention inferrability: the existence of a computer system that can authenticate via a biometric implies the existence of one that can capture and spoof it, and we have no reason to believe this involves, say, more of a cost disparity than cracking a password, let alone anything approaching a strong one-way function. If your face is your key, do you start hiding your face on the street so no one can steal it? Same thing for behavioral biometrics.
When I call my bank they verify with my voice. There is further verification for meaningful actions, but it's still kind of crazy to be using "My Voice is my Password" in this day and age.
Especially since "My Voice is my Passport" was defeated in Sneakers with a tape recorder and the technology of the time. It was never a good idea, and even the movie didn't seem to think so.
Yet my bank just turned this on for me as well in 2024. Now I have to figure out how to disable it...and will it really be disabled?
I mean SSNs are the worst possible authentication mechanism and yet we still have to freak out every time they're leaked. Security practices are so utterly backwards everywhere that it's quite apparent no one powerful is incentivised to care even a little bit
The section “Recognizing AI phishing attempts” is mournfully short, but there’s some companies out there like Jericho Security (https://www.jerichosecurity.com) that are working on countermeasures, at least for enterprises.