Ask HN: Should Banks Phish Their Own Customers
21 points by jwally 8 months ago | 63 comments
At my previous job, the bank used GoPhishMe to conduct internal phishing tests on employees. Clicking a dubious link or downloading a shady file led to an informative email about the dangers and tricks used by real cybercriminals.

What are your thoughts on extending this practice to bank customers?




Good practices I'd appreciate from my bank:

* Zero links in legitimate emails. Any email with a link is automatically a phishing attempt.

* Minimal content in emails: no personal details beyond my email address and first name in the notification, with all relevant content kept in the secure messaging section of my customer account.

* Clear categories of emails, and an easy way to unsubscribe from each.

* Direct-to-support-team phone numbers advertised on the website.

* Periodic reminders about good practices (e.g. a legitimate mail will never ask you to follow a link or click on a button; a one-time code can only be entered in the app and never used in any other way or given to anyone through any channel).


I would add

* The ability to confirm an email's validity by logging into my bank and seeing the same email in their communication queue with me.

My bank sends me emails, and I have no way to confirm that the bank really sent them.


In addition to phone numbers, also provide an accurate list of all other domains owned by the bank, all other domains used for legitimate emails, and all social media accounts.


Speaking strictly as a non-technical employee at an org that does this regularly, I view the practice as an anti-pattern: uneducative and denigrating.

Should we place dummy card skimmers on gas pumps to teach people to be cautious of credit card fraud? Would you conduct a similar campaign over the phone or in person?

Don't lie to your dependents. Secure your infra.


I've seen an interesting trend at multiple companies. The security team starts sending phishing emails to the employees, and if you get phished you have to take a training. Simultaneously, HR and executives send company-wide emails from third-party domains because security won't let them set up email tools on the main domain. As a result, employees start reporting HR and executive emails as phishing and lots of corporate communication breaks down.


One way to do this right is to do trainings first, inform employees about future campaigns, and to only ever use reports from phishing campaigns in the aggregate to improve the training.

You will never be able to block 100% of real phishing attempts; employee alertness is another layer of defence.


What infra security prevents your customer from typing their password and TOTP code into someone else's website?


I was referring to an internal org with the ability to employ, e.g. DNS white/blacklisting, URL protection, hardware keys, and not sending official links requesting passwords and TOTP in the first place.

As far as customers go, good luck. I'd drop a business that treated me this way without a second thought. The method only works if administered from a position of authority.


I completely disagree. It's definitely educational for people in demographics who fall for these things, and it's less denigrating than the real thing.

And the dummy skimmers on gas pumps aren't a bad idea; a lot of people have probably still never heard of them. Although maybe videos would reach a wider audience for that one, since doing it in real life could cause actual problems, whereas fake phishing emails to vulnerable customers will force them to wise up.

Even if it's something that shouldn't need to be implemented at your bank because there are better solutions, it can still help them elsewhere in life.


I absolutely support education around digital hygiene and online security, I just see the method discussed as behavioral operant conditioning, not pro-active education.

To take the example to the absurd, should we entrap people to teach them not to commit crimes?

I'm also not arguing that it is ineffective, I just view it as morally ambiguous and sowing distrust.


No and yes. I wouldn't want my bank to do that to me because it's too paternalistic. At the same time, we recently had these at work. I didn't fall for it, but I'm surprised I wasn't immediately suspicious from the sense of urgency it created. Telling people what to look out for is one thing, but unless we train for things, we often don't know what to do in the moment.


Extending managed phishing campaigns to customers sounds like a very good way to erode the trust of those customers in your bank.


This. If my bank contacts me, I want it to be legitimate and honest. If I started getting phishing attempts, I'd start ignoring my bank's contact attempts completely. So when someone skims my card and makes some purchases, and the bank tries to ask me about it, I'll ignore them. Not good.


I remember thinking I received a scam call from my bank over an account, and I very nearly hung up on the lady (normally, I'll say something mean/hilarious, but I thankfully refrained from it this time).

The ironic thing is what made me realize it was legitimate: she was initially asking me about my physical address; I didn't give her the information but asked what she had on file. Two numbers were transposed. I realized it was probably legitimate when she tried very hard to send me a bill for a statement they had mailed to the wrong zip code, insisting that I must have lived in that town at some point.

I told her I wasn't going to pay them a cent for a mistake on their part, and that I needed to talk to my local branch. So I hung up, called them, and found out it was legitimate. One of the employees had transposed two digits on an account I'd set up about a month prior.

But holy crap do you have to be careful about giving any information out. I can't imagine if this had been a phishing attempt from the bank itself. I think I would've dumped them to be sure!


Running managed phishing campaigns against your internal staff erodes trust too. It's a widely implemented practice, but I've never seen evidence that it actually improves security or that the benefits outweigh the negative impacts of trying to trick your own staff. My sense is it's really only useful for measuring how porous your organization is to phishing, to decide how to invest in training and other security efforts.

I suppose with internal users you can theoretically target test-failures for individual training or performance intervention - for customers you can’t do that.


One place I worked would use real, legitimate companies for their phishing attempts.

That annoyed me to no end.

Literally the email domain, address, company, etc would match something in real life (I checked).

Is that phishing or just being a dick?


I received an internal simulated phishing email to my work address, masquerading as a fake Google Play invoice - which happened to show up thirty minutes after I had adjusted payment details on a Google Play subscription under my personal email.

It was obviously fake, but the timing was so suspicious, and it came to the wrong email address - so my first thought was not ‘ah, here’s my Google Play invoice’; nor was it ‘ah, a phishing test, let me report it and feel smug’. It was ‘oh crap, my phone must be compromised’ - if someone knows I just updated a Google Play subscription and they cross-associated it with my work email, the only place those come together is on my phone.

Then when I got confirmation that it was a simulated phishing email, my second thought was ‘wait, did the corporate endpoint security system monitor that I was just on the Google play store and send me a targeted phishing attack?’ - which is a significant hit to the degree of trust I place in my employer.

Turns out no, it really was just a randomly selected phishing template and a wild coincidence. But to me it says it is a very bad idea to send out phishing emails that masquerade as real services your employees might use in their private lives.


That's a really good point, and it's a tough needle to thread.

So my train of thought goes something like: if my customers are going to get hacked, it's better they get hacked by my good guys than by actual criminals. If they're more suspicious about clicking on links from my bank (or links that LOOK like they're from my bank), that isn't necessarily a bad thing.


> it's better they get hacked by my good guys than by actual criminals

Yeah but they are not mutually exclusive.


I would avoid dealing with any company that did this sort of thing, even if they offered me a way to opt out of it. In order to play "we were lying to you for a good reason! isn't this fun!" games with their customers, a company needs to cross the "we lied to you" line, and that is something they should not do with their customers.

(my employer runs these sorts of tests and I'm fine with it, the expectations in that relationship are quite different)


A bank could add security drills into the contract with the customer. It's a kind of training so it could also get some good press.

Add some credits (maybe real money) if a customer correctly reports the phishing to the bank instead of ignoring it or falling for it? The downside could be a lot of pressure on the security-related part of customer service, so they should plan to get only a few reports per day.


Your post is a series of bad ideas but it did remind me of something: every interaction with the customer through customer service costs a bank something like ten or fifteen bucks (I'm sure that figure is out of date), and God forbid if they actually take a staff member's time at a bank branch. For that reason alone pretty much everything we're talking about here is a no go. Every "what the fuck is this shit?" question the bank gets about phishing, or - to the point of your post - every "where's the bonus I was supposed to get for correctly flagging your shit?" question the bank has to respond to, would cost them real money.


The customers who don't need phishing training - the ones who recognize what you are doing - will be understanding at best, and slightly annoyed at worst.

The customers who need phishing training are going to become more confused. Since some of the phishing emails now come from the bank, they are going to have a harder time than ever figuring out which emails are legitimate.

Offer free guides and classes to help your customers learn to remain safe. Do not include phishing tests that erode trust and confuse your most vulnerable customers.


The bank doesn't have to send the simulated phishing emails from their own domain. Actually I don't think they'll want to do it as it would lower the spam reputation of their domains. They should use shady domains like every real phisher.


Can I bill the bank when they waste my time on this activity?

Even if I could, I would still be dubious. It seems that the long-term efficacy of phishing tests is still disputed.


It would be better if banks educated customers on best practices such as "don't trust anything on a different domain" and "only provide PII for verification if you are the one initiating the call". Of course, both of those would require that banks stopped engaging in those two practices which make legitimate interactions indistinguishable from phishing.


> don't trust anything on a different domain

Then you receive an email from your bank, "securebank.com":

from: "securebank-communications.com"

title: "Beware of fraudsters sending emails and text messages"

body: "Tap here to install our latest secure mobile app"


Make it an opt-in service. Offer a quarter point rate bump on customers’ savings accounts, and maintain it only if they pass your phishing attempts.

Good for the company because it increases the savviness of their customers, good for customers because they become more savvy (and make a few extra bucks).


Our company phishes its own employees, through a contract with an external security consultant. Hilariously, those emails include a special header that gives away their source. An engineering co-worker wrote a script that sends them directly to the trash, even though we are supposed to forward them to the consultant to "report the incident". Since there's zero reward or incentive for reporting them, most of the engineers don't bother with them anymore.

For the non-engineers, I imagine it's better for them to click on a test email and learn their lesson rather than clicking on a link from an actual phisher. We've had phishing attacks work in the past, so I suppose it's not a bad practice overall for non-technical employees.
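
A minimal sketch of the kind of script described above, assuming the tell-tale header is called X-PhishTest-Campaign (the real name is vendor-specific) and a standard IMAP mailbox:

```python
# Minimal sketch: sweep simulated-phishing emails, identified by a tell-tale
# header, into the trash over IMAP. "X-PhishTest-Campaign" is a hypothetical
# header name; inspect a real sample to find what your vendor actually sets.
import email
import imaplib

TELLTALE_HEADER = "X-PhishTest-Campaign"  # assumption: vendor-specific name

def sweep(host: str, user: str, password: str) -> None:
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX")
        _, data = imap.search(None, "ALL")
        for num in data[0].split():
            _, parts = imap.fetch(num, "(BODY.PEEK[HEADER])")
            headers = email.message_from_bytes(parts[0][1])
            if headers.get(TELLTALE_HEADER):
                imap.copy(num, "Trash")                 # folder name varies by server
                imap.store(num, "+FLAGS", "\\Deleted")
        imap.expunge()
```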


Phishing campaigns do barely any good even when they are well prepared and accompanied by good communication in a company environment. Employees need to be aware of phishing tests and have to have a way of reaching out to those who conduct them. If you run phishing tests, employees need to be able to verify whether what they have received is part of a simulation or a real phishing mail. At the end of the day, phishing trainings just tell you who is already good at identifying suspicious emails. The training effect is negligible.

Without the support infrastructure of internal company communication, phishing your customers most likely leads to more confusion and open support tickets.


No. Banks should remind customers about the dangers, but I expect my bank not to send me deceptive sht, even if it's supposedly for a good cause.

At work I'm ok with it though. Certain roles move millions in cash regularly, so fire drills are just part of life.


These services attempt to teach users to guess whether email content has malicious intent. But if an email from no-reply@chase.com says someone emptied your savings, you can't just laugh it off.

From a technical perspective, in order to successfully reach customers, these emails have to actually pass DMARC/DKIM/SPF tests and also some spam filters. For a company to do this to its employees, it typically has to ask the email admins to whitelist such a service or even let it use a legitimate company domain. When I got such an email for the first time, I thought our email system was compromised.
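
For reference, those verdicts end up in the Authentication-Results header (RFC 8601) that the receiving server adds. A minimal sketch of reading them out, with deliberately naive parsing; note that a whitelisted phishing-test service would show "pass" here too:

```python
# Minimal sketch: pull the SPF/DKIM/DMARC verdicts out of the
# Authentication-Results header (RFC 8601) added by the receiving server.
# Naive parsing; a whitelisted phishing-test service still shows "pass".
import email
import re

def auth_results(raw_message: bytes) -> dict:
    msg = email.message_from_bytes(raw_message)
    results = {}
    for header in msg.get_all("Authentication-Results", []):
        for mechanism, verdict in re.findall(r"\b(spf|dkim|dmarc)=(\w+)", header):
            results[mechanism] = verdict
    return results

# Example: auth_results(open("suspect.eml", "rb").read())
# might return {"spf": "pass", "dkim": "pass", "dmarc": "pass"}
```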


Something like a quiz similar to [0] with a small reward would be better.

[0] https://phishingquiz.withgoogle.com/


If I received an email with a URL looking like yours, I'd NEVER click it.


The number of "phishing looking" URLs that are completely valid sent from banks (often pretty good, sometimes bad) and especially healthcare in the US (phenomenally horrible, things like securebillpay.net and mydocbill.com and even worse things with hyphens!) is way too damn high.


One could say the author has done a masterful job of making their art self-select its own intended audience :)


If banks spent money on this instead of enabling support for hard-to-phish MFA options like hardware keys (FIDO2), I would change banks.

We have solutions to most phishing attacks, but most people find them hard to use or don't want to use them because they don't see them as important. I've pointed out to several companies that SMS- or TOTP-based MFA is not phish-proof and that they need to implement something stronger, but it is often ignored.


> hardware keys (FIDO2)

And how would those work with smartphones, as banks are increasingly making them first-class banking clients?


If we had a strong professional media, you could possibly pitch it as proactive training for high-risk customers. Unfortunately, the quality of journalism has declined dramatically over the past decade. Now it's a coin toss whether you could even convince a major publication that you weren't in fact phishing your own customers.

A headline like “ABC Bank admits to phishing its customers” would most likely be the end of ABC Bank.


You can do it, but don't tell them you're doing it. Either they don't bite and nothing needs to be done, or you hook them. Don't tell them, but now you know what works. Next, educate them through normal channels, or change something about your communication with the client that makes them recognize the phishing attempt next time. Change something to make the phishing attempt stand out more.


I would happily trim all passwords of these security assholes to 7 characters without any notice and watch them being locked out of the systems. If they are "testing" their customers this way as well, I'd love to avalanche them with dozens of "Are you American, respond in writing within 3 days" FATCA letters.


Most internal phishing tests are not that good. They sound good, but don't change much. Bank communication should just be segregated away from email into other, more appropriate channels. Ultimately, you can't save users from their own weaknesses.


This shouldn't be done directly by the bank but by a third party that is supported by the bank and a bunch of other companies concerned about this.

That way the bank doesn't have to worry about any legal or goodwill issues from doing this.


This is what I had in mind.

Bank.com hires pen-testers to trick people into going to Bank.evil and spilling their ID/password/OTP.


Why would a bank accept responsibility when a customer becomes a victim of fraud? How would the bank discern between customers that are actually victims, and customers using this responsibility to make the bank the fraud victim?


IMO teaching people about passkeys and making the onboarding experience as easy as possible would be many times more effective at actually preventing phishing.


No, you do phishing of your own employees to identify employees who need more training. What would be the outcome of doing it to your customers?


Banks keep all sorts of metrics on customers to approve or deny transactions. If someone types their password into bank .ru/phishy.php and never changes their password afterward, that's useful information. If they then try to buy $10,000 of bitcoin on Coinbase, the bank's algorithm should probably deny it or require more verification steps. I'm not saying this practice is good, but I get it. Also, banks share technology like ChexSystems, which could use A/B testing for research; it's cheaper than focus groups. For example, if A/B tests show that requiring customers to submit their username first, then on the next page saying "Hi Bill! Great day in San Francisco! Enter your password:" causes them to notice the URL bar isn't chase.com, that's good tangible info that will save banks from losing customer money.
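
A minimal sketch of the kind of rule described above, with made-up field names, merchant categories, and thresholds; no real bank's risk engine is this simple:

```python
# Minimal sketch of the rule described above. Field names, categories and the
# dollar threshold are made up for illustration; real risk engines are richer.
from dataclasses import dataclass

HIGH_RISK_AMOUNT = 1_000  # assumption: illustrative threshold

@dataclass
class Account:
    credential_exposure_seen: bool    # e.g. credentials typed into a known phishing page
    password_changed_since: bool

def decide(account: Account, amount: float, merchant_category: str) -> str:
    risky_merchant = merchant_category in {"crypto_exchange", "wire_transfer"}
    exposed = account.credential_exposure_seen and not account.password_changed_since
    if exposed and (risky_merchant or amount >= HIGH_RISK_AMOUNT):
        return "step_up_verification"  # e.g. call-back or in-app confirmation
    return "approve"

# decide(Account(True, False), 10_000, "crypto_exchange") -> "step_up_verification"
# decide(Account(True, True), 10_000, "crypto_exchange")  -> "approve"
```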


The same thing? I'm guessing there are a large number of non-tech employees who are bank customers. Sure, tech jobs might "train" their employees about phishing. I highly doubt employees in construction, automotive, service, retail, or any of the other non-tech industries get any kind of anti-phishing training.

I can sympathize with a bank wanting to at least offer some sort of anti-phishing something to their customers. After all, it is the bank that has to deal with it when things go wrong. They have already started some customer education attempts with "we will never call you" type of things. This could just be the next step in that.


I've experienced first-hand that banks have a sort of "paranoia level" value for each customer account.

I've reported my ATM card phished+stolen before. I took money out from an ATM in a bodega in a bad part of town (where the thief was apparently watching me input my PIN through the window); and then, when I stepped outside, the thief mugged me for the ATM card.

Luckily, they only got ~$1000 from the account (= my daily withdrawal limit at the time); and the bank reimbursed me for this.

Annoyingly, to get the bank to do that, I had to file a police report, and give them the investigation number.

I also (obviously) had to change my PIN.

Ever since then, though, my ATM card will no longer work at bodega ATMs in this bad part of town. (I worked in the area at the time, so this was easy to notice.) It also doesn't work for the PoS systems in the bodegas themselves! However, it still works fine in ATMs at real banks in the bad part of town; and at ATMs / PoS systems in other, less-sketchy parts of town.

My belief is that, once my bank had to pay out a theft-insurance claim for the account, they labelled me in their internal systems as "at high risk of being a victim of future theft" (a.k.a. an "easy mark") — and this then bumped up my account's "paranoia level" to protect against future theft. So now, my bank, when asked to authorize any ATM transaction, will reject the transaction if the business making the request has ever been associated in their systems with fraud/theft claims (even as an intermediary); or if the business has a high rate of refunds; or doesn't have enough history; etc.

In other words, the bank seems to internally assign businesses a kind of "credit score" for authorizing transactions; and a transaction is only authorized if the business's "credit score" exceeds the customer account's "safety threshold." My safety threshold was increased, so low-credit-score businesses can no longer interact with me through the bank.

---

It should be clear what I'm getting at here: failing an internal red-team phishing test should bump up a given customer's accounts' fraud-risk safety threshold.
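
A minimal sketch of the model being inferred here; the scores, the threshold, and the bump applied after a claim (or a failed phishing test) are illustrative assumptions, not anything a bank has published:

```python
# Minimal sketch of the inferred model: each business carries a trust score,
# each account a safety threshold, and a transaction is authorized only when
# score >= threshold. All numbers are illustrative assumptions.
def authorize(merchant_trust_score: float, customer_safety_threshold: float) -> bool:
    return merchant_trust_score >= customer_safety_threshold

def raise_threshold(current: float, bump: float = 0.2) -> float:
    # Applied after a paid-out theft claim -- or, per the suggestion above,
    # after a failed phishing test.
    return min(1.0, current + bump)

threshold = 0.3                          # default account
threshold = raise_threshold(threshold)   # now 0.5 after the claim
print(authorize(0.4, threshold))         # sketchy bodega ATM -> False
print(authorize(0.9, threshold))         # the bank's own ATM  -> True
```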


That's an extremely good point I hadn't thought of.

If a customer fails this, their internal risk score goes up, which should increase the scrutiny/friction of future interactions between them and the bank.


The idea is to preemptively 'harden' your customers against real phishing attacks by exposing them to controlled, simulated ones. By increasing their wariness and their suspicion of fraudulent communications, you're trying to inoculate them against actual threats.


To provide educational materials to your customers, and hopefully drive down fraud-related expenses.


I'm surprised so many are against it. I think it should be mandatory, and include phone calls and letters as well. We can't force everyone to read The Art of Deception, but we can force banks to educate their users.


I'd prefer that banks not trim passwords implicitly and not force clients to use their lousy apps, which work only with the latest mobile OSes and immediately invalidate any previous versions of their lousy mobile apps.


Dark patterns are basically phishing so some of them probably already do it.


No, but basic opsec should become part of home economics in grade school.


Do they still teach home economics in school?


Usually something similar (in scope if not in cooking) but under a different name.

I had a half-semester class that was called "economics" but was basically an old dude ranting about a bunch of this kind of shit. Very informative; not sure it was really from the textbook.


I'm familiar with the concept. I just didn't think it was still available. Sadly, this is probably a class with some of the most pertinent info for just about anybody with regard to "when will I ever use this in life?" Yet it is pretty much considered a GPA-pad type of class.


Let's start with some light Grateful Dead and see how it goes


Should your homeowners insurance red team your house?


Are these campaigns even effective?



