Hacker News
My friends Instagram was hacked and deep-fake videos posted in less than 6 hours
395 points by k4runa 52 days ago | 240 comments
Today has been wild. I am quite shocked at how quickly the whole thing happened and how difficult it is to report the hacked account to Instagram and try to recover it. The docs seem to take you in a loop without ever resolving the problem...

My friend's Instagram account has only ~2,000 followers, so not even a huge amount. Her email and password were reset to a Gmail account at about 6pm, and by midnight the account had already posted deep-faked AI videos of her promoting cryptocurrency scams.

The deepfake videos are very realistic too. If I didn't know her better, or know about the hack, it would be very easy to believe they were real...

It's possible they deep-faked her videos ahead of time, but it seems like something you'd only spend resources on if you knew the attack was successful.

And there doesn't seem to be much news or content online about this happening, so maybe it's very targeted... but for an account with such a small following, it seems like it must be quite a widespread problem.

Has this happened to someone you know personally? What do you think about how prepared we are to deal with scams this sophisticated, or what effect they might have?

Example from a news story. After seeing the video, I must admit I’m not sure if I believe the guy or not, which is scary.

Edit: Better link from deadmutex below - https://www.youtube.com/watch?v=vqr0oER03SE


I work really closely on deep fake tech [1] and I'd say I'm relatively current with the state of the art in the literature. This was not deepfaked. The person recorded it themselves and is lying.

The video quality is too good. The lighting and movements lack mistakes. It can't be First Order Motion Model, Wav2Lip, or any of the relatively new audio-to-video models.

The audio doesn't suffer from spectral noise, and it matches the lip movements close enough to not be TTS. Voice conversion (VC) introduces pitch issues that are readily apparent, and it's incredibly hard to train VC models without a ton of parallel audio data from source and target speakers.

This is absolutely a lie (not a deepfake) and I'd bet money on it.

[1] I created https://fakeyou.com cartoon and celebrity TTS, real time voice to voice mapping for VTubers, and am currently working on ML blendshapes.

Friend was suckered by the exact same scam. Hers was NOT a deepfake, although I assumed it was at first. They cajoled her into making the video. She did so reluctantly, so she seemed a bit "off" in the video, but it was indeed her. She was out $300, was super embarrassed, and then completely tormented by how helpless she was in trying to recover her account.

I agree. I think the most telling sign is the hand gesture for the number 3 when he says "just invested 300 bucks". I don't think deepfake models can understand intent yet.

There are two ways to think of this as a deepfake:

1 - This is an actual video of the guy in his home, but they changed/synthesized the audio and then worked on the lip movements to make it match.

2 - This is a video of an actor impersonating the guy, possibly to the extent of impersonating his voice (although his timbre might make that a little tricky), and then they just deepfake the face on to the actor. An example of this is deeptomcruise on TikTok, something that you should treat yourself to if you haven't seen yet - https://www.tiktok.com/@deeptomcruise (first two today aren't great, here's a good one - https://www.tiktok.com/@deeptomcruise/video/7018171271095553... ? )

I'm not even convinced there's any alteration here, but even if there was both of the above could be possible. Adobe demoed something called VoCo in 2016 that never saw the light of day, not sure if there is something approximating this available today: https://www.youtube.com/watch?v=I3l4XLZ59iw&t=260s

What do you mean? It’s not like a deepfake is a model that produces a new video from scratch in a completely random setting.

It needs a base video to be modified. They need a video of a person talking to do the deepfake.

Exactly, so the point is that any relevant gestures must come from the timing in an existing video. Current deepfake tech can manipulate lips and facial expressions, but it can't have the video lift three fingers at the proper time when "300" is being said.

So this is either an indication of a very elaborate deepfake which managed to surface an amazingly coincidental source video (which it should be possible to find in his archives), or it's not a deepfake but a real recording.

Or: it’s a deepfake where the attacker made a video and attached the victim’s face in post-processing.

Or: the base video is well chosen from among the victim’s, the message is crafted so as not to deviate much from the base, and the deepfake is used to have him say other things.

I think the issue is that there would be insufficient audio sources to generate new spoken language.

That is not an issue. You can generate new spoken language with not much more than a word.

Sincerely: I’d love to hear an example of that.

The reason for my skepticism: the state-of-the-art language pronunciation from the best this planet has to offer still requires a full phoneme library recorded in studio conditions. A voice sample taken from a user’s Instagram page doesn’t seem like the kind of source material that would be useful for making convincing speech.

Wouldn’t it be a real person making the gesture and his face is deepfaked on?

I think you are right. If you find a similar person with a similar voice (even commission them for $5 on Fiverr) and only deep-fake the face in a low-quality video, this is very much achievable with today's state of the art.

Pay attention to the hand. His index finger is turned inwards, that would be a very odd “number 3 gesture”. More likely random hand movement from the video used.

The perfection in the audio is precisely what made me skeptical.

It's exactly the same situation with my friend. “I just invested $300 into Bitcoin and got $10,000 back. Gotta try it,” except the numbers are higher... but she actually says it in the video too.

Telling that a major news outlet can't get anything more than a "we're looking into it" response from IG support.

I mean what if they are looking into it though? Would you prefer a half baked response?

Yeah. It's almost like the post is a scam and we are all being duped into promoting the non-deepfake crypto scam.

Would you be willing to share one of these videos? Our research group is studying the use and abuse of DeepFakes in the wild and we'd love to be able to do some analysis on this incident. Feel free to shoot me an email at brendandg@nyu.edu

It's likely not a deepfake. My friend fell for the exact same scam--they managed to get her to make the video herself.

I will ask for her permission to share it with you.


That's very kind but it seems like consent is a ship that has already sailed in this case.

Why’s that?

Presumably because these are widely disseminated videos posted by criminals.

I was doxxed and harassed for my Insta account. Eventually, one day it was just taken, even though I had 2-factor auth on Insta and email. There is basically no recourse.

Oh, and it was given to one of Mr Beast's (from YouTube) helpers…

Probably inside job at IG tbh. Multiple reports over the years of desirable IG usernames being taken from legit users and handed to the friends of IG employees

I remember Facebook used to let any dev access the whole production DB as an effort to "remove red tape" and allow quick solutions to problems. That lack of red tape resulted in multiple stories of employees using that privilege to stalk people in real life.

This is why most of the FB security infrastructure is actually inward-facing. The actual infra is so complex that it would take an outsider quite a while to figure out how to get the data they might want. For an insider it is much easier to get improper access to data that you want or that someone might pay you for (real name or IP addr of a dissident, who your ex-girlfriend is now sleeping with, advising ad scammers on how to avoid detection, celebrity chats and private pics, etc.). The initial problems back in the day were the employee stalkers, but as the platform became more important the threat model changed to nation states compromising insiders. The red tape is not completely back, but as of five or so years ago there was a lot more monitoring of data access patterns and zero-trust gates on certain bits of data. OTOH, it meant that privacy and actual app security ended up falling into shit (a devsecops model where the head of privacy and security was someone completely unqualified for the role but ready to do whatever Zuck et al asked), but you win some and you lose some...

Pretty disgusting behaviour. Reminds me of this high profile case: https://www.businessinsider.com/andres-iniesta-claims-instag...

How does this mafia operate inside IG? Is there an honour code: “don’t steal an account that a superior has stolen already”?

They probably just don't conflict with each other often.

100% does, especially with common names. They can see in the logs if the current holder was forcefully renamed, and presumably won’t touch those.

Fortunately, that's not something the current company Meta would allow to happen. That sounds like something that only Facebook would let fall through the cracks.

I can't tell if you are joking but I desperately hope you are.

Is it so hard to believe? Sure, all the same people are running the show and there haven't been any major structural changes, and if you looked closely at the day-to-day things they are indistinguishable pre/post Meta, but it's a different name! It may point to all of the same things, but Meta is not Facebook.

Just like when Philip Morris changed its name to Altria-- you could hardly claim Altria had done all of those bad things, it didn't even exist at the time!

(Also: joking)

That's an obvious joke.

A nice illustration of Poe's law :) I'm pretty sure it's a joke :D


Yes, I should probably be more careful with an /s on some things. But that also seems to defeat part of the purpose of parody: automatically signalling that you don't need to think more about what I'm saying because I already told you. Sort of like how explaining a joke makes it not funny (most times), signalling parody or sarcastic intent seems to dull the pointy end of it.

The most extreme will say "yes, this person gets it, they're one of us!" But those a little further from that edge might take a step back and think "wait, is that really what I sound like, really the end conclusion of all this?"

Anyway, I'm probably overthinking it.

I've also been thinking a bit about that :-)

And it's happened in real life that I said something sarcastically, without indicating this, assuming the others would realize -- and instead they thought I was crazy. Maybe my voice sounded too serious.

This account? https://www.instagram.com/chucky/

Are you sure the account and username were given and not just the username?

The account was deleted and the username handed over.

Definitely reeks of an inside job, if not an outright hack of the Instagram service.

Was the account active?

Yes. I was using it when it was taken away. I was literally kicked out while logged in.

Was it SMS based 2fa or did you use some other method?

SMS 2FA to a Google Voice account, itself protected with 2FA.

Still not secure.

Why?

Because you can keep dialing customer service with various stories. “A shark ate my phone.” “It’s my husband’s account. A shark ate him and I want to post a funeral invitation on his page.”

Just keep dialing, and you’ll find a compassionate and helpful person at some point

For that reason I'm very happy I'm with Giffgaff in the UK - they literally don't have any telephone-based support. If you need support, you have to message them... from your account online. Oh, and any request to transfer the number takes at least a few days to go through, and you get a notification that it's happening; same if you are being sent a replacement SIM. I imagine a number-takeover attack would be very difficult because of this.

This assumes you know my SMS 2FA number; it's not that expensive to have a second phone number and a second SIM in a phone.

Just start the phone tour from the Instagram support line. ”My son was abducted by sharks, and I need to dial my wife. Which of her phones did she use for her Instagram?”

Again, the kindness, compassion and flexibility of humans is the weak link in security.

I bet that’s why Google has all but eliminated the support staff!

That is SMS hijacking. That’s why @epitom3 says to use 2FA with an app.

All SMS 2FA is insecure by design / default

That comes with one big assumption: the attacker knows your 2FA SMS number. It is not hard to have a second number.

Always use 12-18 character random passwords and an authenticator app for 2FA.

If you are using a random password generator, don't be doing 12 characters lol. Do 20+.

Yeah, true. I default to 64, unless the service doesn't allow it, in which case I'll do the max allowed characters.
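For reference, here's a minimal sketch of such a generator using Python's standard `secrets` module (the 64-character default and the printable-ASCII alphabet are just this example's choices, not any particular password manager's behavior):

```python
import secrets
import string

def generate_password(length: int = 64) -> str:
    """Generate a cryptographically secure random password.

    With letters, digits and punctuation the alphabet has 94 symbols,
    i.e. about 6.5 bits of entropy per character, so even 20 characters
    already gives roughly 131 bits.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(len(generate_password()))    # 64
print(len(generate_password(20)))  # 20, for services with a length cap
```

`secrets` is preferred over `random` here because it draws from the OS's CSPRNG rather than a seedable Mersenne Twister.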

Be careful with that! I've created accounts in the past with a 40+ random-character password and everything went swimmingly. Until I tried to log in. Bzzzt! Couldn't get in.

Apparently the password had a character limit that wasn't mentioned when signing up and was silently truncated server-side. A bit of investigation showed the <input> had maxlength="20", which is only enforced when typing characters. Filling the form with JavaScript just ignores this attribute. https://codepen.io/jspash/pen/XWerVzY
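To make the failure mode concrete, here's a hypothetical Python sketch of that mismatch: a signup path that silently truncates (standing in for the server behind that maxlength="20" form) while the login path hashes the password as submitted. The function names and the 20-character limit are this example's assumptions, not the actual site's code:

```python
import hashlib

MAXLENGTH = 20  # mirrors the form's maxlength="20"

def signup(password: str) -> str:
    # Hypothetical signup path: silently truncates before hashing.
    return hashlib.sha256(password[:MAXLENGTH].encode()).hexdigest()

def login(stored_hash: str, password: str) -> bool:
    # Hypothetical login path: hashes the full password as submitted.
    return hashlib.sha256(password.encode()).hexdigest() == stored_hash

pw = "x" * 40                  # what the password manager submitted
stored = signup(pw)
print(login(stored, pw))       # False: the full password no longer matches
print(login(stored, pw[:20]))  # True: only the truncated prefix works
```

The only reliable fix is validating length consistently on the server for both paths and rejecting (not truncating) over-long input.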

I've noticed my bank doing shenanigans to prevent password managers from working well. It appears to be JS that uppercases or lowercases the input field after posting but before the browser saves it. So it perpetually looks like I'm updating my password when I'm not; it literally just got populated by the browser.

What is the deal with banks being actively hostile to password managers?

One bank specifically I have to deal with will:

- Not allow you to paste a username/password (ctrl+c/ctrl+v and right click are disabled)

- Lastpass autofill doesn't work

- If the page loses focus, both user/password inputs are cleared, you get to start all over.

There is also a very small subset of special characters that are allowed. If you do not reset your password as often as they'd like, you have to agree to waive any responsibility for any issues with your account before logging in.

SMS 2FA required, there's no other 2FA option.

After entering your 2FA code, the "proceed" and "cancel" buttons are the exact same shape and color, and I've hit the wrong one multiple times, in which case there is also an SMS 2FA cooldown and you have to wait 15 mins to start all over again.

It's absolute insanity, and every time I have to log in it's an adventure.

God damn that was an annoying bug. Server-side validation not matching client-side. Hopefully you managed to log in eventually.

My Insta got hacked via a password that wasn't my original Instagram password, after I had linked my account to Facebook. They were able to log in with my username and password, but I was able to get back in via FB. Instagram then disabled my account because they noticed it doing bot-like things.

Write up the details in a Medium post and repost it here to HN? It's not much, but at least a little extra bad publicity for IG and MrBeast/associates.

What was your account? Can you prove it?


I have old password-reset emails and probably some screenshots somewhere.

Please post this on Medium and submit it to HN. I'd be glad to help this get seen.

You were robbed and deserve the handle back. And potentially compensation if an audit trail reveals an inside job.

I've had my account hacked, too. I used to own: https://www.instagram.com/kaler/

Have owned it since Instagram launched and it was connected to my Facebook account which I've had since 2005 or so.

There is no hope with contacting Instagram/Facebook support.

Seems like it is time for a class action lawsuit

Sue for what?? Accounts are free. Go make another one.

Had the same thing happen to me. Was in the closed beta and now there is no way to get it back.

Link doesn't show anything to me, was it deleted?

If you know someone who works there (Kafkaesque absurdity surrounding social-media operators) you can recover an account asking for help politely.

As far as I know, that's actually how these 'high profile' takeovers work. You know someone at Insta and politely ask/silently pay to get your account "back" or take over an "inactive" account.

Really? Do you have any evidence to support this claim?

I know someone who works in a major social network company. They got an email on their personal gmail, asking to help acquire vanity usernames for a reward.

Well yeah, but is your friend who works at <social media company> going to risk their position & salary there for some sketchy hook up that's less lucrative?

Not everyone who works for a social media company is a six-figure dev. There's a lot of underpaid helpdesk staff and even-more-underpaid "independent" contractors for whom it might be worth the risk, and they're the ones who actually have access to the account recovery tools.

Same here. I had an account that is now selling bras under my name. I tried contacting FB support so many times; nothing ever happened.

>It's possible they deep-faked her videos ahead of time but it seems like something you'd only spend resources on only if you knew the attack was successful.

I wonder if they had a list of account credentials, tested to find ones that worked (without changing anything once they verified they were legit), and then, once they had the content ready, took over the account to ensure the work they had done stayed live for as long as possible.

This is a good tack, though the key is the gmail account. Presumably, the perp has multiple scams based on the content of the account.

Presuming much of the media creation is automated, they could also have run the process once the gmail account was owned.

Testing auth from unexpected locations in advance seems like an easy way to get noticed.

I'm not reading that they hacked/stole a gmail account. My understanding is that by taking over the account, they updated the associated email address used for contact within Insta. It just so happened that the email used by the attacker was a gmail account.

>Testing auth from unexpected locations in advance seems like an easy way to get noticed.

How many times have we received emails from online accounts notifying us about login attempts? They are usually phishing attempts, but it happens enough that most people don't believe the legitimate emails.

A friend of a friend went through this a while ago, and I was called in to help. Insta was totally taken over, so I made her start going through the linked Gmail account, because there should be a message in there from Insta about the detail changes. Nope. You use the same password, don't you? Yup.

So it’s: access Insta; change email, phone, and password. Access Gmail; delete notifications from Instagram; profit.

In her case she had about 60k followers. They sent random emails the moment she tried to do a password reset, so they obviously back the email onto an automated random service.

The fact that Insta even allows changing all of those details in quick succession is one thing. The impossibility of contacting them afterwards is another. It took a few days, but they eventually flipped the account back to her with about as little fanfare as taking it away.

How did you get them to give it back? I can't find any way to contact them.

Instagram is a dumpster fire.

I have a 3 letter instagram name and the amount of spam and attacks I get is insane... I get hundreds of password reset emails from instagram daily and constant DMs and follow requests from scammer and bot accounts.

I've tried contacting instagram about it several times but they never respond. Had to blackhole emails from security@mail.instagram.com to prevent my mail server filling up.

This episode [1] of Darknet Diaries tells the story of someone with an early Twitter/Instagram account, and how it was targeted by scammers. Pretty scary.

[1] https://darknetdiaries.com/episode/97/

I had my own "mobile + twitter hacked" story [0]. Solved it in several months. Yes, months, not days, not hours.

[0]: https://simon.medium.com/mobile-twitter-hacked-please-help-2...

Wow. I have an early Twitter handle and they let me turn off password resets without additional information. (I think you have to enter the email under which I've registered before it will send the request.)

I have a 3 letter twitter username and I definitely get password reset requests but not too often - about 1 or 2 a month

How early? My main is in the first 12,000.

I just checked and this appears to be for accounts of any vintage. If you go here:


Turn on "password reset protect" and it will require "either the phone number or email address associated with your account in order to reset your password". I'm not sure it adds a ton of security, but it definitely cuts down on the password reset emails.

I personally know someone with a two-letter Twitter handle that is a word in the English language. For a while there they were inundated with password-reset attempts, but it took knowing someone 1:1 inside of Twitter security to put additional locks on their account, beyond what any config will show, unfortunately.

I'm glad to know I'm not the only one who gets hit with password reset attempts. It's strange because I don't have a short username; mine is noticeably long and weirdly specific.

I've noticed that the reset attempts seem to come in waves. I haven't charted it, but sometimes I'll get somewhere between 20-30 reset attempts in 24 hours, and at other times, I won't get any reset attempts for a full week or so. The whole thing is very bizarre.

Same. I just ignore them.

I got a 4 letter one I never use and the amount of resets per day is crazy.

I don't share my daily email with websites but for whatever reason I used it with this Instagram account. It's the only spam I get at this point. 20 email resets per day!! It can't be hard to fix that

I had a 3 letter as well. I was getting daily password resets for years and then one day it was stolen despite 2FA and a random 32-char password. No idea how they did it. I wasn't a heavy user and Instagram doesn't seem to offer any support so I just created another account and moved on.

Some sort of social engineering I guess :(

Considering that Instagram doesn't provide any support (that I can find, anyway), social engineering seems unlikely. Unless they found someone who worked at Instagram and somehow socially engineered them to transfer the account?

Same, I also have a three letter nick. Constant reset attempts, spam, fake offers of thousands of USD to transfer it.

Doesn't this apply to the whole internet? Internet connects both good and bad people.

The moment you create something that can be used to upload any type of content, some people will exploit it.

Sort of. Tech companies have an incredible lack of investment in actual customer service, though. With many non-tech services it's easy to get on the line with a human; even if it can take a while, there's a straightforward path.

Tech companies seem to invest far less into customer service, make it impossible to get on the phone with someone, resolve issues in a sane way, etc.

Same. Eventually (after 30-40?) they give you a link to turn off password-reset attempts (or notifications of them?) from devices you haven't logged on from in the past 90 days (I think?).

I just have a junk instagram account to get around the login walls and get flooded with reset requests too. You'd think they'd be able to solve for this.

Yet you still use Facebook's censored spyware, even with this terrible UX?

This account is going to be an heirloom NFT in the metaverse some day :P

This happened to my wife too and her IG profile was used to create a deepfake on onlyfans

Had an account that wasn't hacked, but someone copied its pictures (only the bikini pictures), chose a similar username, and followed all my followers with an invite to a porn site (though I did not follow the link to see, it could not have been more explicit about what was being offered). Instagram "reviewed the report and decided it did not violate their terms of service" and refused to take any further action.

If they’re reposting your own pictures, a DMCA claim would be a good way to solve the issue. It shouldn’t be needed, but Instagram will obey the claim.

This happened to someone I know as well. The only reason they knew about it was some of us saw it and alerted her.

Really messed up stuff.

I haven't even thought about them using the content across platforms now that they have the video samples already... but perhaps the followers are more important. You only really hear about celebrity deep-fakes on the news.

How do these scammers get their hands on the money? Doesn't OnlyFans require photo ID and personal banking details?

You'd be surprised at how many people upload pictures showing their ID to public sites. And even if it's not that, it might be an inside job by a family member, friend, co-worker, etc. who at some point had enough access to the ID to snap a pic of it.

Also, chances are that if you can convincingly create deepfakes in general, you can (deep)fake a picture of an ID to a degree that will be accepted by OnlyFans and other services, especially if the IDs are from places the staff might not be entirely familiar with. Do you know, for example, what a Colombian or Polish or Turkish or Cambodian ID should look like, and what security features that you could see on a mere picture of it should be present, if there even are such features?

I've seen such ID fakes done in practice, though that wasn't related to OnlyFans. That's why, when I was in a position where I sometimes had to verify identities, I would not accept pictures of IDs; I would ask for the "proof-of-life"/"timestamp" style picture you see commonly used on pseudonymous sites like Reddit or 4chan to establish the authenticity of a poster. Those are not impossible to fake, but a lot harder, especially if you limit the time in which the other party can respond.

I don't know if OnlyFans has adopted such a method of verification by now, but I know they used to accept just IDs.

I also online-know a guy who says he used to run an OnlyFans scam where he would seek out underrated accounts, steal their content, and republish it under his own accounts. That obviously required him to create a lot of verified accounts with valid ways to pay out, of course. He never went into details on that. He could be lying about the whole thing, but when it came to other things he claimed over the years, a lot of it was verifiably true, so I don't know.

You can also buy verified OnlyFans accounts on the black market (hacked usually) or compromise accounts yourself. A lot of OnlyFans accounts are completely inactive, abandoned by the original owners, so they will probably not even notice if it gets taken. From there you can replace all the account content as you please, and I believe in the case of OnlyFans even change the user name and probably update the payout method and information as well.

As for banking information... that's harder, but there are probably some ways left. The question is if the OnlyFans account in this case was even made for financial gain, or just to cause humiliation, in which case subscriptions might have been free or the money might have never been collected by whoever created the account.

this sounds horrible. did they eventually take it down?

Yes, but only after she spoke with a journalist from VICE.

Facebook doesn't seem to care about deepfakes on Instagram since they promote "engagement".

I can't say I have too much advice other than to always use strong passwords, don't share passwords across sites, use VPNs on public routers, and stay away from posting videos of yourself on cancerous engagement metric-driven social media.

I have another: stop using those websites completely. Delete your accounts.

What if your target demographic is on those sites, and/or you use Instagram not for fun/entertainment but for business purposes, because in this age >90% of your target audience is there and very few are going to visit your website?

Many people aren't fans of Insta/FB but they need to use it for reaching their audience.

I wonder if this problem can be at least a little alleviated by simply using multiple platforms.

If you use Instagram and then your account gets taken over, it might be convenient to be able to alert at least some of your fans by posting about it on Tiktok.

Better yet, even if few of your fans are on your official site, having one means you're not 100% locked out of your own identity when your Insta is taken over.

Of course maintaining a website is probably not the forte of most Instagram hot-shots, so this is by no means an easy solution.

It's unfortunate that these big social media companies let people become successful without fully understanding what they're up against after reaching a certain threshold.

alright just throw your hands up in the air and sigh then

Consumer boycotts of products have never worked. The only thing they affect is reputation [1], and social media companies clearly don't care very much about theirs as long as they keep making money.

Collective action needs to be legislative or legal in order to actually change things.

[1] https://www.ipr.northwestern.edu/news/2017/king-corporate-bo...

This is the way.

Can we also lump Twitter and TikTok in with it? Is it me, or is there a double standard on HN where FB gets all the heat but others don't?

I brought up Facebook because they own Instagram, but yes this applies to Twitter and TikTok.

Ah, cool. Snapchat too. I think all of these are terrible and bad for society. Toxic and unhealthy for kids and adults alike.

LOL, TikTok is worse. It lets you log in with just a phone number and no password. I can actually log into the TikTok account of the previous owner of my phone number.

>Is it me

I think it's you, because I see criticism levied against TikTok here all the time.

All you need to do is search: https://news.ycombinator.com/item?id=28133017

Lots of criticism in that thread.

Top comment:

> I'm a huge fan of TikTok because after years of content stagnation and dullness, the internet is fun again.

So the "double-standard on HN" is that not literally everybody hates TikTok, and the existence of positive comments counts as TikTok and Twitter not getting criticized, despite there being many comments criticizing them? really?

Just for contrast, this is how the second-highest top-level comment starts:

> I like to think of TikTok as the crack cocaine of addictive social media. I used to think my dopamine receptors were burned out by the constant barrage of memes I received from reddit, Facebook and Twitter but TikTok proves they can refine that product to make it even more potent.

But evil HN is always only criticizing Facebook and letting TikTok get away with everything, sure.

I think that is good advice for someone who is more tech-savvy and aware of the problem, but I'm not sure the rest of the world is as well prepared, or what effect it might have on us, or on FB/IG's reputation and engagement, if it's becoming more automated and widespread.

Dude, when I was 16, 15 years ago, I remember writing a school essay about the risks of social media and saying: "Social media is not the culprit; it's us being so addicted to this new way of manipulating people around us into thinking we're 'social credit'-worthy, uploading everything we can in a never-ending race to the vacuum. We'll end up with what we deserve: we'll have traded our ability not to be lab monkeys clicking on buttons for the pleasure of our observers, in exchange for the vain pleasure of sticking it to our ex/neighbour/psychotic mom/more beautiful colleague/whomever else insecure jealousy pushed us to addiction."

I feel we're just reaping the reward of our vacuity.

this reminds me of a paradox in trying to perform bot detection on social media: the closer bots get to being human, the more you also have to deal with humans who act like bots, posting repetitively about the latest crypto scheme, for example

so TikTok and Instagram and even phones themselves apply skin-smoothing filters by default, and guess what, that makes it harder to distinguish deepfaked videos from just heavily filtered genuine videos

I don't know how to convince people to be less bot-like, but social media is making it easy for these scammers by making beautification filters the norm

reposting this from reddit (tried to edit the profanities) as I'm certain this is what happened here.

Essentially it's social engineering via a face scan app.

You can see an example of a possible scammer here https://www.instagram.com/sashana_walter/

"My friend is into crypto sh$t and he messaged me saying to check out his story (already weird but whatever) and he posted that this lady turned his $500 into $8500. It was a legit video of him speaking, walking down the street. Anyway I messaged the chick and she told me to download this app and I signed up and it made me scan my ID and my face which I thought was wack again but trusted my buddy… So i end up sending this chick $200 in bitcoin like a f%c^in idiot, but then shes trying to say I need to send another $300 to be able to actually ‘process’ and I was like uhhhh nope? Now ‘she’ is trying to get me to do some two factor authentication confirmation code shit to ‘get my payout’. What I think is going on is they got into my friends account and made a deepfake from the face scan and started DMing other people. "

I have two Instagram accounts: one where I post things about me, the other where I post things about this sport I practice. I lost the password for the latter account, and it was a nightmare to recover. The two-factor SMS code would never come. When it did come, the app reported an error. I had to battle the app and the web platform for 2 days in order to get my account back. I cannot imagine the kind of hassle it would be if it were stolen.

I smiled a little bit when you said you struggled for a whole 2 days. In one of those "get off my lawn" moments, I remember having to send in letterhead-printed requests via snail mail to take recourse. At the time, the small biz had never printed anything more formal than an invoice, so there really wasn't a letterhead. Some of the workaround procedures are just farcical when they exist, yet downright infuriating when they don't.

Imagine that, a time where companies had mailing addresses and phone numbers to communicate with real humans.

Are you sure it’s a deepfake? A common version of this scam involves extorting the hacked user to record a video in exchange for getting their account back.

I don't know the details of OP's situation and there seem to be other people popping up in the thread with corroborating claims. But if anyone is interested, here are details on the extortion scam where the victims are made to record the video themselves.


I've been on voice chat with her throughout the day trying to help her fix the problem. So I've seen it happen in real time, from the suspicious login in Nigeria to the deep-fake video that happened later.

It's so very hard to believe that it would be the notoriously unsophisticated Nigerians¹ pulling off high-end deepfakes. These guys weren't smart enough to use a proxy, but were supposedly smart enough to produce a good deepfake.

If this really happened, there must be some easy to use public tool they used. I'd really love to see the video.

¹ There's no way anyone else would log into a stolen account from a Nigerian IP.

I don't think that the Nigerian scammers are necessarily unsophisticated. Their scam emails intentionally contain enough red flags that any reasonably intelligent person will recognize the scam and not reply.

This way, the only replies they get are from people who are extremely gullible and/or desperate. It allows the scammers to avoid wasting time on targets that have a low probability of success.

That said, there are certainly plenty of stories about Nigerian scammers who are themselves not very bright or perceptive (is the 419 Eater website still up and running?).

> I don't think that the Nigerian scammers are necessarily unsophisticated. Their scam emails intentionally contain enough red flags that any reasonably intelligent person will recognize the scam and not reply.

It is actually deliberate. It has been part of scams like that for decades.


It used to be seen in scams sent via the post as well.

>I don't think that the Nigerian scammers are necessarily unsophisticated. Their scam emails intentionally contain enough red flags that any reasonably intelligent person will recognize the scam and not reply.

The "nigerian scams" are extremely unprofitable compared to even mildly sophisticated scams like BEC or basic craigslist scams. I think it's safe to say that they're unsophisticated.

>That said, there are certainly plenty of stories about Nigerian scammers who are themselves not very bright or perceptive (is the 419 Eater website still up and running?).

Yeah, because the actually intelligent ones are generally running very different scams.

Why would Nigerians be unsophisticated? Are you assuming someone from Nigeria wouldn't be smart enough to make deepfakes?

I think you can ask anyone working with threat intel about this. Nigeria has the least sophisticated cybercriminals in the world; they're very prolific, but the tooling they use is hilariously bad.

I'm sure there are some exceptions, but those would probably be smart enough to conceal where they came from.

This isn't really surprising. Obviously Nigerian cybercriminals will be less sophisticated than the Russians for example, just look at their schools!

Right? Their prince was the first email scam of note, and probably one of the longest running ones at that. Also, you know VPN/VoIP numbers exist to help mask the identity/location of scammers. Just in case some of that might have slipped the mind of some peeps.

Nobody would choose to use a Nigerian VPN while logging into stolen accounts. This is really obvious.

> Their prince was the first email scam of note, and probably one of the longest running ones at that.

And all of the Nigerian prince spammers are incredibly unsophisticated. Anyone in the industry can tell you this, you can just look at the manner in which they send their emails.

Nobody you say? Sounds like the perfect cover then. Or, you know, for the lulz.

Almost anything is possible, but this is still by far the least likely explanation.

Why needlessly alert the victim when you could just choose a proxy in their city?

>Sounds like the perfect cover then

How would that even work? It's not like this would provide any kind of an advantage. In what kind of a threat model could this possibly be beneficial?

The phone number linked to the instagram account is also a +234 number now which is Nigerian based.

How do you know it’s a deep fake?

The same thing happened to someone I know, but they were basically forced to post the requested videos as ransom.

It also happened to a skateboarder, and many people claimed deep fake when it wasn’t. [0]

Not saying it’s not, just raising the question.

0: https://www.slapmagazine.com/index.php?topic=119395.0

Thanks, Bitcoin, for creating cash incentive for most of the worst behavior on the internet.

Crypto lockers, pump-and-dump schemes, scams, hacking processing power for mining, burning fuels to power even the most legitimate uses.

Cryptocurrency will go down in history as the most regrettable invention since leaded gas.

That applies to any and all currencies, not specific to bitcoin. Scammers have existed for centuries.

It applies to anything, really. This is one of those "this is why we can't have nice things" type of comments. Someone comes up with something clever, but the scammers figure out how to use it in ways it was never intended for. Now it's a bad thing, even though the thing itself is neither good nor bad.

Many new scam possibilities are specific to cryptocurrencies.

dont blame nails on the hammer

i've been seeing this trend lately. it's not limited to the crypto niche. i've seen other ways they were trying to hijack people's identity to sell whatever thing they're trying to promote

edit: also, there was an instance in the past where my account got disabled and it took me months to get my account reactivated. the first issue is that they have a huge backlog of other people they are assisting, so you need to find a way around that

> the first issue is that they have a huge backlog of other people they are assisting. so you need to find a way around that

If that is true, that is a deep cut against their account security.

Can anyone guess what exactly the hackers want with a particular person's IG? Why don't they just auto/synthetically generate a fake face if they just want to sell or scam something? Is it to exploit the person's friend network?

They want a large number of followers who trust the person.

This news report had a musician guy's IG taken over and deep fake videos of him and his voice telling people to get bitcoins:


not deepfaked but recently some solana sales pitch took over lots of unrelated youtube accounts, it took youtube a good week or so to get them sorted

frankly, i think this is where decentralized networks can help us.

1. networks where global reach to total strangers is not even possible, or at least is not the default. the vast majority of people only really want to reach their “friends of friends” anyway. this eliminates a lot of would-be hackers from even knowing you exist.

2. accounts that are unhackable because users log in with private keys instead of user/pass that sit on a server. of course losing your private key is a problem, but i would suggest a system of social-account recovery anyway where you can regain access by 3 out of 5 friends approving, for example.
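The 3-out-of-5 friend recovery idea in point 2 maps directly onto threshold secret sharing. A minimal sketch of Shamir's scheme in Python, assuming the key is encoded as an integer; the prime, threshold, and share count here are illustrative choices, and a real system would need authenticated channels for distributing shares:

```python
import random

# Field prime; large enough for a 128-bit secret (illustrative choice).
PRIME = 2**127 - 1

def split_secret(secret, threshold=3, shares=5):
    """Split `secret` into `shares` points on a random polynomial of
    degree threshold-1; any `threshold` points recover it."""
    coeffs = [secret % PRIME] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, shares + 1)]

def recover_secret(points):
    """Lagrange interpolation at x=0 over the prime field."""
    secret = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Each friend would hold one share; any 3 of the 5 suffice to reconstruct the key, while 2 or fewer reveal nothing about it.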

I do think a "trusted friends" circle would be a better way to deal with these kinds of problems, because I know my friends would be available to help me and easier to reach than customer support.

> 2. accounts that are unhackable because users log in with private keys instead of user/pass

There's nothing magical about private keys that prevents them from being lost or stolen.

The average person doesn't want to manage private key files. The average person wants to be able to recover their account if they forget or lose their password. Moving to private keys isn't a realistic solution for logins for the average person.

The average person prefers passwords to key files?

The average user isn't even aware of the potential option of the key file.

I think the popularity of password managers shows that users' preferences are the opposite of what you say. The user wants the machine to store the secrets rather than memorize and type them.

Recovery mechanisms are a separate issue from whether you use passwords or key files.

I’m sure the average person didn’t want to use keys to lock their house or car—we adapt.

> The average person wants to be able to recover their account if they forget or lose their password

That’s why i mentioned Social Account Recovery. There are solutions to improve general UX of key management.

>the vast majority of people

i think the concept of followers shows that your premise isn't exactly true, and people with more followers than just family, friends, and friends of friends are another clue.

“friends” in the term “friends of friends” refers to followers

Wouldn't that be worse since private keys are king and there is no way to intervene?

Exactly. If private keys were that much better for the average user, shouldn't some common services be using them by now?

Private keys are used to upload code to GitHub.

It's not about what's "better" but about what users are willing to tolerate. The average user has a very high tolerance for login bullshit, repeated captchas, etc. -- if the site has a monopoly on the social network you need to access.

And yet, even GitHub has passwords. Even GitHub uses passwords as the primary method, along with millions of other sites. This isn't about what a few large companies do. It's about what works for most people.

Private keys, as appealing as they are in theory, don't really work from a user perspective. The closest thing that seems to be popular is 2FA keys like YubiKeys.

A simple social recovery method of 3 contacts approving could lead to mass takeovers if the attacker gets hold of multiple accounts. Often, a group of, say, 3 attacker-controlled accounts doesn't just have one friend in common but multiple. As long as the graph is 3-connected and the attacker starts out with a group of accounts that can take over one more account, the attacker can take over the entire graph.
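This cascade is easy to illustrate. A hypothetical fixed-point simulation in Python, where an account falls once `threshold` of its contacts are attacker-controlled; the graphs and threshold are made up for illustration:

```python
def takeover_spread(graph, seized, threshold=3):
    """Simulate cascading account takeover: an account is compromised
    once `threshold` of its contacts are compromised. Iterates until no
    further account falls (fixed point)."""
    seized = set(seized)
    changed = True
    while changed:
        changed = False
        for node, contacts in graph.items():
            if node not in seized and sum(c in seized for c in contacts) >= threshold:
                seized.add(node)
                changed = True
    return seized

# Densely connected clique of 5 accounts: seizing 3 cascades to everyone.
clique = {n: [m for m in "abcde" if m != n] for n in "abcde"}
print(takeover_spread(clique, {"a", "b", "c"}))  # all five accounts fall
```

On a sparse graph (e.g. a chain), a single seized account gains nothing, which is why the attack depends on how densely connected the friend graph is.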

There are ways around this. Just having each account only able to perform the verification occasionally, and resetting that timer when it's done on your own account, makes this a very slow process that manual intervention should be able to thwart.

Not a good idea. Account recovery should not depend on social relationships, the nature of which could change any time.

The point is that it’s optional and changeable.

domain registration and a blog solve most of this. You have to BYO discoverability though.

that's your big idea? i mean, you can create a social network that does anything, and your big market killer is "safer login experience"? ... like who the fuck cares.

as prompted from the OP, it’s just a few things that a decentralized network can offer around account safety that is better than the current centralized ones today.

what are you looking for from me? i’ll try to perform better for you next time lol.

It doesn't matter at all to reality, so it's probably the next Unicorn in SV.

did you read the original post up top? are you aware that we are specifically talking about account safety and login?

I saw this happen on Facebook as well! Super insane: in this video they took his voice, of which there were many recordings on his Facebook, to create an ad for Facebook. Super scary stuff.

Yeah, the voice was hers but it had a plain background... almost like a white wall which looked different from her other videos, and was about 10s in length.

Rhetorically, who cares about trust anymore?

Slightly more specifically, why should any stranger care if the charismatic "with-it" personality says anything in particular?

It's all BS, stories, that the world is telling to us. We can be just fine if we keep our heads down, and do what's expected of us.

Not Rhetorically:

"Trust" has never been guaranteed. It's partly an emotional experience.

So as technologists, how shall we pool our tools, to Augment Humanity? Why do we have emotion enhancing technology for everything but trust (rhetorically, because we have many)? Why would we Want Trust?

I'd personally love Digitally Signed video, proving mathematically that it derives from a Genuine Work from the talking head in the video. Because I trust myself and need solid facts to continue doing so.

I need more reliable facts.

I made a new instagram account, and it was locked out and blocked within a day and someone stole the name.

Nothing to be done. Just wanted to post nice landscape photos of the place I biked to.

I logged in, it requested my number, and they just coldly said "we'll contact you" and then nothing.

Doesn't Gmail alert you if a different IP is used to log in? Seems the hackers had both her Gmail and Instagram passwords (probably a common password). Sadly nothing much can be done except contacting Instagram.

Sorry if I was unclear: she had a personal email, and after the hackers changed the Instagram login details the account belonged to a random gmail account, and the phone number was changed too.

They deepfaked my wife yesterday. No idea where they got the video from, but it looks 100% like her with a different voice actor. It's so frustrating that they're using her likeness for a scam, and there's no way Instagram can help. They changed our email and there's no way to recover her account. We can't even get IG to delete it.

These sorts of scams also add to the perception (which I hold, too) that "Crypto" is a massive scam.

Instagram supports FIDO2/WebAuthn. If I had an account I wanted to protect on Instagram, I'd be using that (with a backup stored somewhere safe, of course). You can get a $20 YubiKey Security Key for that use case.

Can you contact law enforcement or a lawyer to get the account taken down?

In an ideal situation I'm sure that could work, but I'm not sure law enforcement would have any more luck getting hold of Instagram help & support in this case. I also don't really know much about deep-fakes yet, whether they fall under some kind of copyright law, or how that would work across international borders. I think it's a very new problem that might not have a clean solution yet.

What do you expect a lawyer can do?

Write a letter, get the account taken down. This is how things work.

Now I'm really curious if you could share an example of these deep fakes... with your friend's permission of course. Maybe with a disclaimer: THIS VIDEO IS A DEEPFAKE.

A foaf had this happen, but she was scammed into reading a video about how the scam worked before they took over her account. Not a deepfake in her case.

This was about two weeks ago and we did wonder if it was a deep fake at first but when we reached her she said she just fell for the scam because she saw a convincing video from her friend. She said she felt super weird reading the script before she got the money

GPUs have been weaponized to crack passwords, make deep fakes, and mine crypto (which is used to launder money and buy illegal goods and services).

What do people even use to create deep fakes like this?

They probably accessed the account first, then invested the time in the videos and held off on taking over the account until later.

Just speculation though.

The best thing you could ever possibly do is to block all facebook products in your household. It's really quite easy these days

do these people just guess the passwords? how do instagram, fb, twitter, etc. accounts get hacked?

My wild guess: most people have the same passwords everywhere. You get hacked on service a, you get hacked on b.

Plus the usual phishing.

my instagram got hacked many years ago. i don't think i leaked any of my information. seems like it was somehow instagram's fault. either way i don't care. don't use it

This is AI getting smarter, testing the edges.

No it’s people stealing accounts and running the images through off the shelf deep fake software to pump whatever the cryptocurrency or NFT flavor of the month nonsense is.

This is actually quite good for Bitcoin.

You're surprised it took a while to resolve a problem with big tech? Really?

Delete your social media already

hacker news is not social and not media? :)

Moving the goalposts: it's social in the sense that you're interacting with other people, and it's media, but it's not social media, as the focus is on the content and not on the people who post it.

Where's the media? It's just comments on a link aggregator.

the links are the media, and the comments are also media.

newspapers are media, right?

so are digital newspapers also media?

what about just a collection of headlines?

Yes, you're very clever by being the 900th person to be "witty" like this. You know what they meant.

and what did they mean? delete your facebook and instagram? I guess the focus is on the "content" and not the "people" but I didn't really know they were distinct categories until now

Hacker News doesn't even allow you to delete your account.

or edit/delete comments, ah the permanence of it all is strangely... terrifying?

It does allow account and comment deletion, just not on a self-service basis. You can always ask dang if you need something gone, but HN tries to avoid the comment rot that for example Reddit has (where there are massive threads of [deleted] or "erased by GreaseMonkey", usually wiping out context or, worse, actual solutions).

How is that allowed under GDPR?

Is text media in the sense of multi media?

smoke signals are also media (as in mediums)


> Your friend should take this as a complement to her appearance, she's been deemed attractive and trustworthy-looking, after all.

whew boy, I was not expecting this on this website

Really? Have you not ever seen a comment thread here on a topic involving women?

I don’t know if this is because I don’t recognise it when it happens (bad) or if it usually gets flagged fast enough that I seldom see it (good), but such a blatant attempt to dismiss a problem by pretending it’s a good thing is not my usual experience here.

The comment 37 minutes later is already flagged and dead

> Your friend should take this as a complement to her appearance

Wow, that's a pretty fucked up take.

Thanks. Noted.

How would you take it?
