>I cued AI Joanna up with several things I knew Chase would ask, then dialed customer service. At the biometric step, when the automated system asked for my name and address, AI Joanna responded. Hearing my bot’s voice, the system recognized it as me and immediately connected to a representative. When our video intern called and did his best Joanna impression, the automated system asked for further verification.
They used an AI to impersonate the author (probably a woman, based on the name and picture), and were surprised it was more successful than a man doing the same impression? That's not really a conclusive experiment. I'd at least try the impersonation with another woman.
Same reason why, at most UK banks, your online credentials (both user and pwd) are case insensitive: it's convenient for the customer. It costs less to "pay" for the reduced security than to absorb the attrition from customer dissatisfaction (which makes them leave), missed transactions, support hotline calls ("I can't log in"), etc.
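For a rough sense of how much security is actually traded away, here's a back-of-the-envelope sketch (Python), assuming for simplicity a purely alphabetic 8-character password:

    # How much does case-insensitivity shrink the search space for a purely
    # alphabetic 8-character password? (Simplified model; real passwords also
    # use digits and symbols, and banks add lockouts and rate limits on top.)
    case_sensitive = 52 ** 8
    case_insensitive = 26 ** 8
    print(case_sensitive // case_insensitive)  # search space is 256x smaller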
"It costs less" ... and the question is who pays that cost. If customers leave it's bad for the bank, if customers get scammed then the customers lose their money "well you should have protected your identity better, there's nothing we can do". So from the bank's perspective, customers getting scammed doesn't cost them anything.
I always found that aspect of Star Trek TNG really weird. They often say their command codes out loud for everyone to hear. In fact, there was an episode where Data was able to take over the ship by imitating Picard's voice.
The main Australian government department which handles things like Medicare, social security ("welfare"), etc has the option to use voice recognition, and they promote it.
E.g., printed flyers selling people on the idea, and so on.
Possibly even worse, the Australian Taxation Office (ATO) - our version of the IRS - has been creating voice prints of all callers for the same purpose.
A broader question would be about biometrics in general. It was not too long ago that these were widely promoted, but now, with advances in AI, they increasingly look like an attack vector.
I agree. I was merely saying that these were (and still are?) widely promoted even though the various problems were (are) well-known (and now intensified by AI).
I wonder if you can opt out of that? You're right that it seems insanely unsafe; my bank has been legally required to use two-factor authentication for more than 10 years.
I'm part of a subreddit right now that seems to be under siege by a large number of bots or bot-assisted humans. All I can tell is that they post every day on the same topic (politics), then cross-post and reply under their own threads with constant incendiary commentary. The big tell is that they post at least 75 comments a day and never appear to sleep. They only post about politics, only in this subreddit, and always with the same bent.
There are a couple of subs, I believe, that try to track bots. I say "try" because they don't exactly do any profound research; most bots tell on themselves. But you get to see what patterns emerge from these bots: usernames are a big tell (a lot of them seem to rely on the same script to generate usernames, probably pulled from the first Reddit account-creator repo they could find), and they repeat themselves a lot even when some nouns are changed (again indicating very primitive scripting). Another really huge tell, which only works if you're actively on Reddit when it happens, is that they reply to your comment instantly[1].
It also seems some are using these bots to change the course of a subreddit [2], upvoting/downvoting en masse (evading Reddit's usual bot traps, which IIRC mostly key off IP?). It's going to be very, very interesting to see how easier access to bots affects a lot of social sites.
There are two funny things about this entire situation: A) by attempting to call out these bots in some subs, you'll be warned/banned for brigading or being uncivil, and B) some subs have tried to be proactive about the situation but wound up being chastised[3] for various reasons, like relying on AI-detection tooling that produces a lot of false positives, along with a general nonchalance as long as the bots "post quality".
So basically, outside of external verification and pattern matching, you can't.
On that last link, I was one of the people who said that using AI detection is a bad idea. Really, those AI detection tools are extremely unreliable, and it's a bad idea to use them for anything important like moderation decisions. I'm not against trying to stop bots or anything, but doing it in a way that will harm real users more than bots is a bad idea.
As in you were in that very thread? Cool coincidence if so!
But yeah, generally I agree about the tooling; it relies on methods that seem even more superficial and random than username pattern matching, and we really should be turning to the likes of Bard and OpenAI to implement solutions that make this easier.
Embeddings are great for this: capture all posts by all users in the sub and embed them, then cluster users by their overall position and the distances between their posts to profile them ideologically and on single issues, and then graph the links between similarly located users. Do they post in a web, or in a tag-team way, that differentiates them from other users?
And then also color these by average response time, etc. But don't rely on those kinds of metrics, because they're the easiest to evade by simply delaying responses based on their length and complexity, plus a random discovery time, plus a time-of-day-dependent offset.
And then, potentially just issue warnings. "You've been detected participating in a brigading fashion, and bot or not, may be banned if this continues" and see what changes.
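A minimal sketch of that embedding-and-clustering idea (Python), assuming sentence-transformers for the embeddings and scikit-learn for the clustering; the model name, thresholds, and sample data are placeholders:

    # Embed every user's comments, average them into a per-user vector, then
    # cluster users to surface suspiciously tight ideological groups.
    import numpy as np
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import DBSCAN

    model = SentenceTransformer("all-MiniLM-L6-v2")

    comments_by_user = {  # hypothetical sample data: {username: [comment, ...]}
        "user_a": ["Vote for X!", "X is the only sane choice."],
        "user_b": ["X is the only sane choice.", "Vote for X!"],
        "user_c": ["Anyone tried the new ramen place downtown?"],
    }

    users = list(comments_by_user)
    vectors = np.array([model.encode(comments_by_user[u]).mean(axis=0) for u in users])

    # Users landing in the same dense cluster post about the same things in the
    # same way; their reply graphs and timing are what you'd inspect next.
    labels = DBSCAN(eps=0.3, min_samples=2, metric="cosine").fit_predict(vectors)
    for user, label in zip(users, labels):
        print(user, "cluster", label)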
This is one of those conversations that, as an AI researcher, I tend not to like. There are two major voices and both are ridiculous. We should neither completely rely on these tools nor throw them out. It is a cat-and-mouse game, and yes, the mouse will always win. We're on Hacker News, and everyone knows the same game exists for hackers, with a similar advantage bias. That doesn't mean we should stop trying to defend against the hackers. But ML requires a lot of nuance. We've already been using automated tools to ban people, and we all know how frustrating that is. There have been several subreddits that would ban anyone who used the term "vaccines" because they wanted to protect their subreddits from anti-vaxxers, but these backfire and also auto-ban people defending vaccines.
The problem isn't using these systems, the problem is that there is no nuance. No way to reach a human who can see the absurdity. But this has been a progression and even bureaucratic institutions do this with real humans. This is a frustration we all experience and we shouldn't make this worse. Use the tools, but always maintain a way to reach a human who can account for system failures.
An important issue with ML system failures is that when they fail, they fail badly. Anyone who has used GPT should be well aware of this and have experienced it (if you haven't, you haven't used it much, are incredibly lucky, or have been duped). It is also the reason AlphaGo lost that one game, and why it played REALLY poorly. These higher-dimensional spaces are rather complex and don't fit your intuitions. You'll have a better idea of how to stroll through flatland than through 10D land, let alone 256x256x3-dimensional (an image) land. The curse of dimensionality means that any intuition you have about geometric meaning goes out the window.
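A quick illustration of that distance-concentration effect (Python; the dimensions and sample size are arbitrary):

    # As dimensionality grows, the nearest and farthest neighbours of a random
    # point end up almost equidistant, which is one face of the curse of
    # dimensionality mentioned above.
    import numpy as np

    rng = np.random.default_rng(0)
    for dim in (2, 10, 1000, 10_000):  # a 256x256x3 image would be ~200k dims
        points = rng.random((500, dim))
        query = rng.random(dim)
        dists = np.linalg.norm(points - query, axis=1)
        print(f"dim={dim:>6}  min/max distance ratio = {dists.min() / dists.max():.3f}")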
> usernames are a big tell, a lot of them seem to be relying on the same script to generate usernames
Regarding this, I also noticed a pattern in usernames once, but realized that Reddit itself generates these usernames for new users, who often just accept it because they don’t care about their nicks. So usernames like Green-Biscuit398 are often real users!
I don’t know if that’s the pattern you’re referring to. But for me it makes it even harder to make any claims about the user from the obvious artificial name.
Yeah Reddit autogenerates names now, somehow they thought signing up for an account was too difficult and wanted to make it easier.
That policy is why a lot of posts on the linked subs try to avoid going purely off the username unless there's another, larger pattern linking them, such as all of them repeating the same comment or sharing the same link across subs in a short time. That's ignoring the scripts that generate names like elizabeth99057524, which exceed Reddit's username pattern (which seems to limit itself to two words, optional symbol dividers, 2-4 digits, and proper capitalization).
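For what it's worth, a rough regex for the autogenerated pattern described above (Python); this is just my reading of "two words, optional dividers, 2-4 digits", not Reddit's actual generator:

    import re

    # Approximation of Reddit's autogenerated usernames: two capitalized words,
    # optional -/_ dividers, and 2-4 trailing digits (e.g. Green-Biscuit398).
    AUTOGEN_NAME = re.compile(r"^[A-Z][a-z]+[-_]?[A-Z][a-z]+[-_]?\d{2,4}$")

    for name in ("Green-Biscuit398", "elizabeth99057524", "throwaway_2009"):
        print(name, bool(AUTOGEN_NAME.match(name)))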
One of the odder patterns is dormant/hacked/purchased accounts. Reddit has a somewhat aggressive commenting policy for new accounts, and the community at large seems to treat them differently, so you'll see in those bot-tracking subs that a lot of the accounts highlighted aren't new but quite old, and were likely purchased or hacked. I suppose someone sees value in doing so, and I guess with enough of them purchased you can plant enough votes to get Reddit to do the rest (i.e. upvote a thread enough to get it to the front page of the sub, but not enough to alert people, letting actual users mindlessly upvote it as well).
I'm a moderator of a fairly large (600K) subreddit, and it's relentless. We have to keep upping the minimum karma requirements to post, but the bots just keep acquiring more and more. It's a never-ending battle.
I'm not sure why it wouldn't, and having dead content enabled, I've seen some weirdly incendiary responses to pretty harmless comments recently. It's weird.
I wouldn't be surprised if the HN team is already doing a lot of work on this. Another factor that may have spared the HN community is that there is lower-hanging fruit to pick first, Reddit being the prime example.
That would be the end of Reddit though - or of any other social network that got this restrictive. Unless governments mandated it (which will probably be the end result of all this - no more anonymity on the Internet).
They don't have to store it; they can delete it after account creation, or just keep a hash if they want to prevent account farming.
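A minimal sketch of that "keep only a hash" idea (Python); the key and document numbers are placeholders, and a real system would need far more care (key management, legal review, etc.):

    import hashlib
    import hmac

    # Store an HMAC of the ID number instead of the ID itself, so repeat
    # sign-ups (account farming) can be detected without keeping raw IDs around.
    SERVER_SECRET = b"replace-with-a-real-secret"  # hypothetical server-side key

    def id_fingerprint(document_number: str) -> str:
        return hmac.new(SERVER_SECRET, document_number.encode(), hashlib.sha256).hexdigest()

    seen = set()
    for doc in ("AB1234567", "AB1234567", "CD7654321"):  # hypothetical IDs
        fp = id_fingerprint(doc)
        print(doc, "duplicate" if fp in seen else "new")
        seen.add(fp)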
YouTube is already collecting IDs in the EU if you want to view age-restricted content.
Some countries also have ID cards that allow you to digitally prove you are 18 without revealing your ID, maybe that could be used for proving you're human as well (don't know if there is a way to prevent farming here).
So the counter-argument is that they only have to know how to securely transfer and delete it, which, in terms of required competency, is very similar to securely transferring and storing it.
And existing identification requirements don't lessen the ask, unless wearing identification visibly at all times becomes the norm; at that point there isn't an ask anymore.
Are we living in bizarro world here? This is not some crazy ask, and it's no different from what already happens billions of times on Reddit with passwords. There are even third-party services that will verify IDs via API so that you never even see the data.
When you look at those "How can I make money? / Generated Name taught me how!" astroturfed YouTube comment clusters, do you see a positive contribution to the community?
Is that what the parent is describing? Even so, the problem with those comments is that they're annoying spam, not that they're annoying spam made by bots.
Humans can post annoying spam too. Look at any social group that attracts pyramid schemers or get rich quick cryptocurrency schemes. Those aren't bots, they're just annoying.
You can go to open-mic nights in a pub if you want, but you don't necessarily appreciate or benefit from being surprised by someone with a megaphone when you went somewhere to hang out with your friends.
Maybe I'm misunderstanding OP. It sounds like there are people on a subreddit talking about politics all the time. That's annoying, but it's also basically the behavior of most people on Reddit, bot or not.
Only it didn't really trick her family, and it only got through the very first level of the bank:
>My sister, whom I call several times a week, said the bot sounded just like me, but noticed the bot didn’t pause to take breaths.
>When I called my dad and asked for his Social Security number, he only knew something was up because it sounded like a recording of me.
>A Chase spokeswoman said the bank uses voice biometrics, along with other tools, to verify callers are who they say they are. She added that the feature is meant for customers to quickly and securely identify themselves, but to complete transactions and other financial requests, customers must provide additional information.
I'll stand by what I wrote, despite agreeing with some of the points raised.
If you do a magic trick, have you done magic? No. You appear to have done x or become y or destroyed z. It's a trick. It isn't real. It's a misdirection. If the person figures out how you did it or that magic doesn't actually exist, it's still a trick. An illusion. Even if it collided with their logic.
The problem of pausing for breath is likely very easily solved with randomization and/or monitoring punctuation and character/word count.
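Something like this toy sketch (Python) would probably get most of the way there; a real pipeline would emit SSML break tags or the TTS engine's equivalent rather than plain-text markers:

    import random
    import re

    # Insert breath pauses based on punctuation and running word count, with a
    # bit of jitter so the rhythm isn't mechanical.
    def add_breaths(text, max_words_per_breath=12):
        out, words = [], 0
        for token in re.findall(r"\S+", text):
            out.append(token)
            words += 1
            if token[-1] in ".,;!?" or words >= max_words_per_breath:
                out.append(f"[pause {random.uniform(0.2, 0.6):.2f}s]")
                words = 0
        return " ".join(out)

    print(add_breaths("Hi, it's me. Could you read me the number on the back of the card?"))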
The father not being convinced is likely a multi-faceted problem. It's weird to be asked for your Social Security number out of the blue, and that gets people on edge anyway. The headline alone is sensationalized; eschewing that piece of information, or that slant, is also peddling a very specific narrative.
Sure, what I meant was that performing a trick and calling it magic is hype; if, additionally, your trick fails and you call it successful magic, it is hype plus false reporting.
While it may well become an issue sooner or later, it seems that right now the "cloning" is not good enough.
OT, but not by much: a kind of fraud that has become rather common lately, at least here in Italy, is someone calling the victim (often someone elderly) and telling them that their son or daughter was involved in an accident or was arrested, and that some money (cash) is needed immediately to resolve the issue. Most of the time an accomplice impersonates the son or daughter and, while crying, confirms the situation to the parent.
Most of the time the victim (put under stress by the news) falls for it.
Surely these generated voices will trick more people in a similar situation, but I don't think we are there yet.
Cloned implies that it is 1:1 identical, but the syncopation and inflection very clearly aren't.
A parrot mimics and doesn't quite sound like the original. E.g., if a parrot mimics me on the phone, someone can probably tell it isn't me even if it sounds similar.
It's fascinating nonetheless. The potential for crime is huge, and people already fall for crappily written emails and broken-English phone calls. If they call enough people, they'll eventually find an appropriate mark.
>Over the past few months, I’ve been testing Synthesia, a tool that creates artificially intelligent avatars from recorded video and audio (aka deepfakes). Type in anything and your video avatar parrots it back.
Synthesia is billed as an AI Video creation platform.
From their ethics page
>We will not offer our software for public use. All content will go through an explicit internal screening process before being released to our trusted clients.
Did they agree to this request from the WSJ for the publicity?