Hacker News
How we secure Monzo's banking platform (monzo.com)
169 points by coffeefuel on April 4, 2022 | 140 comments



Good to see some practices like default deny networking (ingress and egress) and very limited interactive production access being laid out here.

There are a couple of other areas around container breakout risks that aren't mentioned, although perhaps they're still doing them.

There's no mention of what (if any) hardening is being done on the container runtime: restrictive seccomp, AppArmor/SELinux policies, or using something like gVisor/Firecracker. With this year's number of container breakout CVEs, it seems like an important area.
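
For what it's worth, this sort of hardening can be opted into per-pod in Kubernetes. A minimal sketch (assuming Kubernetes 1.19+, where `seccompProfile` lives in `securityContext`; the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault      # opt in; pods run unconfined by default
    runAsNonRoot: true
    runAsUser: 10001
  containers:
  - name: app
    image: example.com/app:latest   # hypothetical image
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
```

None of this stops a kernel exploit, but it removes a lot of the easy attack surface discussed below.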

A related one is whether container aware runtime security is being used to detect where an attacker might have got access to a single container and be trying to breakout to either the underlying platform or to other containers in the cluster.


Exactly. Containers are not secure sandboxes by default, and if one is breached all those K8s networking ACLs are worthless.


> "Exactly. containers are not secure sandboxes by default and if one is breached all those K8s networking ACLs are worthless."

Your suggestion being? Putting a sandbox inside a sandbox? How many layers deep should this be, before being considered "secure"?


Most serious security teams do not consider containers a security boundary. So it’s not a sandbox inside a sandbox, it’s just a sandbox.

gVisor and Firecracker are the most popular sandboxes for containerized workloads.


I think this is outdated. Docker is a security boundary. There is no built-in way to get out of a Docker container just by asking (if you mount the Docker socket into the container it's trivial, but that's not the default).

How good of a boundary it is may be another story. There's some seccomp filters going on and namespacing is pretty sweet too.

But an attacker can escape by exploiting the kernel, which I think most security people would consider to be not particularly high effort.

So, suitable for internal services that you generally trust, not suitable for hostile code or highly exposed services. In an ideal world maybe we'd all use Firecracker but it's not nearly as easy to do that vs just putting something in a container.


The reason that containers are not generally considered a security boundary is that many of the namespace primitives were _not designed_ as a security layer, they aren't designed to actively reduce the privileges from the current user's context. Since most containers are started as the root user, the namespace transition inherits root's permissions even if they're later dropped. Without SELinux or seccomp restrictions, root can still pretty much do anything to the host even inside the containers.

For the most part this is troublesome when parts of the kernel or host userspace code are not fully aware of the different forms of namespacing (there are still portions that just check for an effective UID of 0 without checking whether they're in a namespace, for example). These are the components where a lot of container breakouts happen, and this is largely mitigated by having internal processes in the container not running as root in the namespace. Dropping privileges to a different user still traces its origin back to the root user on the host, so in some cases being partially aware of namespaces in a section of the kernel or host user code actively hurts security by tracing the user back to root and using those privileges again. SELinux really tightens the potential to pull these shenanigans, but most production k8s clusters (at least that I've seen) are built on Ubuntu, where those protections aren't available. In this case the security layer is once again SELinux, not the namespacing.

As long as the container runtime is performing the various namespace isolation primitives starting from the root user, these container bypasses are going to be a risk. There are 'rootless' versions of containers which can only use the privileges available to a lower-privileged (presumably heavily restricted) user, but those aren't widely used. Once again this is relying on the security protections of the host user authorization, not on the namespaces.
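
The "traces back to the host user" point is visible directly in `/proc`: `uid_map` records how UIDs inside a user namespace map to UIDs outside it, which is how a "rootless" container's in-namespace root still resolves to an unprivileged host user. A small Linux-only sketch (not from the thread):

```python
# /proc/<pid>/uid_map has one line per mapping range:
#   <uid inside ns>  <uid outside ns>  <range length>
# In the initial namespace this is the identity map "0 0 4294967295";
# in a rootless container, uid 0 inside typically maps to your own
# unprivileged uid outside.

def read_uid_map(pid="self"):
    """Parse /proc/<pid>/uid_map into (inside, outside, count) tuples."""
    with open(f"/proc/{pid}/uid_map") as f:
        return [tuple(int(x) for x in line.split()) for line in f]

for inside, outside, count in read_uid_map():
    print(f"uid {inside} inside maps to uid {outside} outside ({count} ids)")
```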

The networking analogy is NAT. People treat it like a security layer as it kind-of-sort-of looks like an ingress firewall since you can't directly address devices inside a NAT, but it's not, and can be pierced pretty easily. NAT is not a firewall. Namespaces are not a security layer.


> Without SELinux or seccomp restrictions, root can still pretty much do anything to the host even inside the containers.

That's not true. From user_namespaces(7):

       Having a capability inside a user namespace permits a process to
       perform operations (that require privilege) only on resources
       governed by that namespace.  In other words, having a capability
       in a user namespace permits a process to perform privileged
       operations on resources that are governed by (nonuser) namespaces
       owned by (associated with) the user namespace (see the next
       subsection).

       On the other hand, there are many privileged operations that
       affect resources that are not associated with any namespace type,
       for example, changing the system (i.e., calendar) time (governed
       by CAP_SYS_TIME), loading a kernel module (governed by
       CAP_SYS_MODULE), and creating a device (governed by CAP_MKNOD).
       Only a process with privileges in the initial user namespace can
       perform such operations.
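
A quick Linux-only way to see which of those capabilities a process actually holds is to read the `CapEff` bitmask from `/proc/self/status` (this sketch is mine, not from the thread; bit numbers are from `linux/capability.h`):

```python
def effective_caps(pid="self"):
    """Return the CapEff bitmask from /proc/<pid>/status as an int."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("CapEff:"):
                return int(line.split()[1], 16)
    raise RuntimeError("no CapEff line in /proc status")

CAP_SYS_MODULE = 16  # load kernel modules (not namespaced)
CAP_SYS_TIME = 25    # set the system clock (not namespaced)

caps = effective_caps()
print(f"CapEff = {caps:#x}")
print("may load kernel modules:", bool(caps >> CAP_SYS_MODULE & 1))
print("may set system time:    ", bool(caps >> CAP_SYS_TIME & 1))
```

Inside a user namespace these bits can be set, but per the excerpt above they only grant power over resources owned by that namespace.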

> For the most part this is troublesome when parts of the kernel or host userspace code are not fully aware of the different forms of namespacing (there are still portions that just check for an effective UID of 0, without checking whether they're in a namespace for example).

Yes, like I said:

> But an attacker can escape by exploiting the kernel, which I think most security people would consider to be not particularly high effort.

> Dropping privileges to a different user still trace's it origin back to the root user on the host

It does not. Only if the process creating the container is root, which with unprivileged user namespaces is not (necessarily) the case.

> The NS_GET_OWNER_UID ioctl(2) operation can be used to discover the user ID of the owner of the namespace; see ioctl_ns(2).

"root" isn't the point anyways, it's about checking capabilities. The problem is that the Linux kernel has historically not cared about root -> kernel privesc, and containers expose more attack surface because of that. But an attacker outside of a container can still just enter a namespace (user namespaces are unprivileged) and perform the same exact privesc, so containers aren't making anything worse.

> As long as the container runtime is performing the various namespace isolation primitives starting from the root user these container bypasses are going to be a risk. There are 'rootless' versions of containers which can only use the privileges available to lower (presumably heavily restricted) user but those aren't widely used.

That's not how namespaces work. Even with 'rootless' containers your guest has CAP_SYS_ADMIN. The only difference is that the daemon that starts the container isn't privileged because user namespaces are increasingly becoming unprivileged. Rootless changes nothing, except that attacks against the daemon itself won't be an insta-privesc to root on the host, they'll only be a privesc to the user running the daemon on the host.

Anyway, let's step back.

What is a security boundary? I would say it is a mechanism by which an attacker is restricted, where the attacker must exploit a vulnerability in order to get around that restriction. By that measure, containers are a boundary. Is exploitation difficult? Not necessarily; like I said, the Linux kernel has loads of attack surface. But it meets a reasonable criterion for a boundary.

As an example, chroot on its own is not a boundary because attackers can just call chroot again: this requires no vulnerability, it will never be patched, and you need another layer to prevent it. Containers have nothing like that; there is no "just let me out" syscall, you require another vulnerability.
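
The classic chroot walkout being referred to can be sketched in a few lines (my illustration, Linux-only): chroot into a subdirectory without changing directory, walk `..` past the old root, then chroot to `.`. It needs CAP_SYS_CHROOT (i.e. root inside the jail) but no vulnerability, which is exactly the point.

```python
import os
import tempfile

def classic_chroot_escape(scratch_dir):
    """The well-known double-chroot walkout. Needs CAP_SYS_CHROOT,
    but no bug: chroot alone was never designed to contain root."""
    inner = os.path.join(scratch_dir, "inner")
    os.mkdir(inner)
    os.chroot(inner)        # move the root below us *without* chdir'ing,
                            # leaving the cwd outside the new root
    for _ in range(64):     # walk upward past where the old root was
        os.chdir("..")
    os.chroot(".")          # re-anchor the root at the real /

if __name__ == "__main__":
    try:
        classic_chroot_escape(tempfile.mkdtemp())
        print("escaped: process root is now the real /")
    except PermissionError:
        print("no CAP_SYS_CHROOT; run as root inside a jail to see it work")
```

Run as an unprivileged user it simply fails with PermissionError; run as root inside a chroot it walks straight out.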

You can read more about user namespaces here:

https://www.man7.org/linux/man-pages/man7/user_namespaces.7....


Using the Dirty Pipe Vulnerability to Break Out from Containers

https://www.datadoghq.com/blog/engineering/dirty-pipe-contai...


Yes? There are a million exploits that allow breaking out of a container. I didn't say it was some impenetrable force field, I said it was a security boundary.


I don't think this is the bottomless pit that you think it is. A virtualised instance is a lot more secure than a container, and it's probably fine to stop at virtualised instances.


A lot more secure? In what ways?


Containers are really a kind of process-isolation - you still share a kernel. You can find a lot of people saying that containers aren’t enough for running untrusted user code.

If you run a fully virtualised instance you get your own kernel and aren’t relying on process isolation.

Would you be happy if your cloud provider was running your containers on the same virtual instance as someone else's? Most people wouldn't be.


The only meaningful difference between breaking out of a process-isolated "container" and a full-blown VM is what's waiting for you outside once you've broken out. Whether it's kernel/OS or a bare metal hypervisor isn't really all that meaningful: exploits and vulnerabilities exist for either.

There should be proper hardware-level isolation here, depending on the scenario. Most cloud companies can't afford that though, because they're not rolling out their own hardware.


> Whether it's kernel/OS or a bare metal hypervisor isn't really all that meaningful: exploits and vulnerabilities exist for either.

This is just not true, or at least it's extremely disingenuous.

Container isolation relies on the Linux kernel. Other than seccomp-denied syscalls (which aren't a thing in k8s by default) any program in the container has full access to the kernel. The Linux kernel has massive attack surface, especially to root users.

VM isolation like Firecracker is much safer. The attack surface is considerably lower. For one thing, you can isolate the process in the guest just as well as you could outside, further limiting attack surface. But more importantly, an attacker either has to attack:

1. Firecracker

2. KVM

Both are very small codebases.

Firecracker is:

1. Written in Rust.

2. Sandboxed aggressively.

KVM has basically never had a public guest-to-host breakout. You can read about one here: https://googleprojectzero.blogspot.com/2021/06/an-epyc-escap...

So, to recap, we have "security boundary relies on a fully exposed Linux kernel" and "security boundary relies on hardened, tiny, security-driven programs".

It is not even close.

> There should be proper hardware-level isolation here, depending on the scenario. Most cloud companies can't afford that though, because they're not rolling out their own hardware.

Hence hardware vendors building hypervisor support in.


Genuinely, would you be happy with just container isolation between you and other customers of your cloud provider?

Most people absolutely would not.


> "Genuinely, would you be happy with just container isolation between you and other customers of your cloud provider? Most people absolutely would not."

But that's exactly how VPS hosting works today - you don't get your own private blade unless you're ready to pay premium prices and have the competence needed to run them yourself. The technicalities of how private resources in a VPS are isolated from each other will differ, but the concept remains the same nonetheless.

People bite the bullet, only to be subject to things like rowhammer [1], or other container escape scenarios [2].

The top comment in this thread reflects the proper way of dealing with this: containers and sandboxes may not be treated as a secure boundary.

[1] https://www.usenix.org/conference/usenixsecurity16/technical...

[2] https://www.intezer.com/blog/research/how-we-escaped-docker-...


No, VPS hosting is not usually container-based today once you leave the utter bargain-bin offers. The difference between VM isolation and container isolation is quite significant.


> But that's exactly how VPS hosting works today

No, VPS is isolation by virtualisation, not containerisation.

The clue is in the V in the name.


So I'll start by saying that security is always relative and what's ok for one environment won't be for another :)

The challenge with Linux containers as used by Docker/Containerd/CRI-O et al. is that containers run against a shared Linux kernel. The Linux kernel has a very large attack surface, so it's easier for attackers to find some way to bypass the restrictions it tries to enforce. Looking at this year alone, there have been several local privilege escalation issues in the Linux kernel, some of which have allowed for container breakout.

If you compare this to a hardened hypervisor (e.g. Firecracker) there is a much smaller attack surface visible from inside the container. It obviously could still have a breakout vuln, but there is a lower chance of that occurring.


Developers working with docker are almost always in the 'docker' group on their local machine, which is functionally equivalent to running everything as root.


This doesn't matter if the attacker is in the container. It just means that if the attacker is outside of the container they have a trivial privesc to root on the host.


Opposite - don't mess with sandboxing. Use PaaS services like it's > 2008, and let AWS / Google security teams harden their platform.


> With this year's number of container breakout CVEs, seems like an important area.

Worth noting that even basic hardening in Docker will prevent a lot of them. I say "in Docker" because K8s runs pods with seccomp unconfined by default, which matters a lot since `unshare` is denied by Docker's default seccomp profile and is very useful for attackers in a container. If you use Docker, the main thing to do is just not run as root.

If you do that much, and it's not hard at all, you are in a much better place than a default k8s pod.
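
The "just don't run as root" advice amounts to a couple of Dockerfile lines. A sketch (the base image, UID/GID and binary path are all illustrative):

```dockerfile
FROM debian:bookworm-slim

# Create an unprivileged user so the process doesn't start as UID 0
# in the container's namespace.
RUN groupadd --gid 10001 app && \
    useradd --uid 10001 --gid app --no-create-home app

# Hypothetical application binary.
COPY ./server /usr/local/bin/server

# Everything from here on runs as the unprivileged user.
USER app
ENTRYPOINT ["/usr/local/bin/server"]
```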


Disgruntled former Monzo customer here. Do they still have a haywire fraud detection system that randomly freezes innocent people's accounts? It's happened to countless users and the customer experience when they do it ("we refuse to tell you why" and in some cases holding onto their money for months) is a kafkaesque nightmare.

https://www.vice.com/en/article/bvg7n3/monzo-freezing-closin...

https://www.reddit.com/r/UKPersonalFinance/comments/db9j5z/m...

https://www.theguardian.com/money/2020/jun/06/monzo-customer...

I'd be far more interested in a blog post explaining that than how great their infrastructure is.


No UK bank will ever tell you why your account is frozen, and this is why: https://www.cps.gov.uk/legal-guidance/money-laundering-offen....

Source: used to work on Monzo's financial crime team.


Others replying here are jumping to conclusions.

A frozen account could be due to any number of things. The customer service person may or may not have access to the reasons. In any case, it might be money laundering (from the bank’s perspective), therefore they can’t tell you anything.

Of course this is ridiculous. Due process should exist, even when a private organization “accuses” someone of illegal activity.


The law is really strict. If you look on reddit.com/r/ukpersonalfinance there's loads of people complaining that Monzo have frozen accounts and won't tell you why. In pretty much every circumstance, the person involved has been moving large amounts of money around through crypto, so it's fairly obvious why they might flag that up as potential fraud/money laundering and cause an investigation.


I'm in the process of trying to open a UK business bank account for myself and two other directors. The hoops we have to jump through are insane.


It’s definitely their anti money laundering (AML) detection system kicking in. However it’s clearly getting false positives and their internal AML team is getting hit with a high amount of them thus causing the delays in resolving them.

This is why you don’t build your own solutions to well solved problems.


Monzo’s system is a little unique in that it runs fraud and AML checks in real-time while the transaction is being processed, and is thus able to block a high-risk transaction before it completes. As a result Monzo is frequently able to return stolen funds to victims.

Other banks run their AML systems in batch at the end of the day, and will only freeze an account after the money has left.

So yeah, you’ll see a lot more people complaining about Monzo, because their money will be frozen while they’re trying to exfiltrate it. That doesn’t mean Monzo has an unusually high false positive rate; it just means that people are used to being able to get away with fraudulent behaviour and only having their accounts frozen once they're empty, rather than being stopped mid-act and having their accounts frozen while they still contain illegitimate funds.


I wonder if their system, by virtue of being modern, is too good and catches a lot of fraud that would otherwise remain undetected by legacy banks' systems? Not to mention, opening a Monzo account is much easier and can be done remotely, thus it could be attracting a lot of malicious activity that just wouldn't even reach the legacy banks because the laborious account opening process would filter those out?


I work for a different fintech and both of your points are spot on. Much higher baseline fraud attempts due to online/remote accounts, but much more sophisticated real-time ML/detection systems.


> getting false positives <snip> delays in resolving them.

There should be a penalty for this. When "innocent" users are denied access to their money, those actions are much more felt by the user than the bank. That user's money is a rounding error to a bank, but to the user it is everything.

An incorrectly frozen account should come with some sort of "oops we're sorry" type of something that a bank can understand: monetary reward for the user.

I would rather criminals get away with a laundering transaction than me not being able to buy/pay for something in my day to day because of some dumbass AI algo.


> There should be a penalty for this.

There are: the FCA and Financial Ombudsman hold banks to an extremely high standard and hand out fines and compensation on a regular basis. Denying access to funds is a serious issue which banks and regulators take very seriously. However there’s an acknowledgment that AML systems have false positives.

> I would rather criminals get away with a laundering transaction

Easy to say when it’s not your money that’s been stolen. Most money being laundered through retail banks like Monzo isn’t drug money, or high stakes bank heists. It’s the millions stolen every year from normal people subjected to high pressure, complex, coercive and extremely effective social engineering scams.

We’re talking about people's house deposits (because their solicitors' emails got hacked), their life savings, their retirement funds.

Don’t assume you’re immune to these scams, or that you’re paying a cost for it. The scams trick even the most savvy individuals. Costs are borne by everyone with a bank account because victims are reimbursed, and I can assure you, no bank is sacrificing their profit margin to do that.


Generally, when your account is locked, any complaint about it doesn't even touch customer service; it goes straight to the compliance department.


The concerning thing is that the other commenter says it took him a year and a complaint to the financial ombudsman to recover his money.

False positives are one thing, but it shouldn't take a year to resolve?


I suspect this is more to do with the financial regulation & the bodies that investigate these things, rather than Monzo.


So if we pick any UK bank we will see the exact same issues?



I'd love to read a thread where you and OP try to see maybe what happened, theoretically, in Minecraft of course.


I used to work there :)


This happens in the US too. This kind of thing is why I'm using crypto for as much as I can, I always feel like my bank account might disappear one night and all the money in it will be tied up for a month or so. It's already happened to me once.


So 100% of account locks are due to money laundering, then? Not card theft, fraud, error or anything else?


The link doesn't really clarify much for me, can you elaborate? Are you saying the GP laundered money?


I'm saying that the bank gets royally fucked if they ever tell anyone who is even suspected of money laundering anything, so as a result they simply don't tell anyone anything. Whether the GP actually did it or not isn't really relevant because banks would rather lose a customer than incur the regulator's ire.


I’m sure you know this, but it’s not just regulator’s ire. It is a criminal act in and of itself to disclose that someone is a target of an investigation.

And it’s not just a criminal act for the organisation, but the individual customer service agent can be held personally criminally liable for disclosing, even accidentally.

So yeah, they have a really strong incentive not to tell you anything.


Ah I see, thank you.


> "we refuse to tell you why" and in some cases holding onto their money for months

I don't know anything about Monzo specifically, but perhaps anti-money-laundering laws might be the reason. Monzo is presumably obligated to take steps against money laundering under UK law, where it's essentially an offence to say "our system has flagged you as a potential money-launderer". [0] Freezing your account without comment may be their safest course of action.

Edit: I see this explanation was mentioned in the reddit thread.

[0] https://www.lawsociety.org.uk/en/topics/anti-money-launderin...


Happy Monzo customer here with it as my primary account for the last 2 years. Haven’t had any issues, and neither have any of my friends. It’s the best possible banking experience imo. I’m happy they are proactive about suspicious activity.


Same here. It's funny that you get accused of being a paid shill, yet the people who so loudly complain get taken seriously. I don't know anyone who's had an issue, and I would say 90% of my friends have Monzo.


Same here. I've been using them as my main account for years, no problems since I'm not trading crypto. People act as if no other bank has issues with fraud detection


Sounds like a paid “amazon” review. lul


Ah yes, a long-term commenter with 10 years of posts, thousands of karma and hundreds of unrelated comments was just playing the long game so they could shill Monzo. And it was a person who just joined the site that found them out! /s

I've also had great experiences with Monzo, so will also back them up.


Haha, I just wanted to provide a counter example to the complaint. Yeah, it did sound a bit like a shill. Just a happy customer though.


Probably works at Monzo



My money is on it being crypto related, but that’s because the only people I’ve seen complaining about this publicly were doing some crypto stuff (tx both in and out) and it looked very laundry-esque out of context when you get into the actual details

And yeah, as others have said, the not telling you isn't a Monzo thing; no UK bank would tell you why they think you're laundering money.


Not in my case - no crypto, no recent large transfers. I'm a boring middle-aged IT consultant, not a terrorist, drug dealer or crypto trader. There was absolutely nothing I'd done with my account that should have caused this to happen.

That's what's so kafkaesque (and frankly rather violating) about the experience: you literally haven't the faintest idea what you've done wrong and the bank refuses to tell you.


Yes the law compels them to be like this. It’s actually criminal to tip someone off they’re being investigated.

Perhaps the law is too strict due to the impact it can have on individuals.


Use First Direct. They're designed for boring middle-aged IT consultants. :)


Not disagreeing (First Direct has a far better reputation) but it's really a brand of HSBC. The irony of having to move to a bank suspected of funding [1] actual drug dealing/laundering/terrorism so my accounts won't get frozen over imaginary drugs/laundering/terrorism is somewhat amusing.

[1] https://www.thebureauinvestigates.com/stories/2021-07-28/mon... / https://www.forbes.com/sites/afontevecchia/2012/07/16/hsbc-h...


They're not suspected, they admitted it, and paid their speeding fine.


I had a similar experience, not crypto-related.

My employer paid me a couple of grand in expenses, which I quickly moved to another account. That triggered an account freeze.

I asked the customer service rep for "why" - they couldn't tell me. They then asked me for payslips to prove my story. I told them to feck off because they could see the monthly entries marked "Salary" from the same source.

I escalated the matter and it took several hours before my account was unfrozen.

Lesson: only keep a minimum amount of money in UK bank accounts.



Sure, but that doesn't explain the sheer scale on which it happens (more than other UK banks and with seemingly worse consequences) or how the system actually works. It's also notable that this post was written a year before some of the news articles, indicating that Monzo really hadn't done anything to improve it.

One gets the feeling they're hiding behind the "we can't tell you why" excuse to escape accountability for a broken system that's literally ruining lives.


> Sure, but that doesn't explain the sheer scale on which it happens

How do you know the scale and how are you not sure that it's a vocal minority? After all, Monzo does target more tech-savvy users that might be more likely to voice their frustration online than with other high-street banks.

> One gets the feeling they're hiding behind the "we can't tell you why" excuse to escape accountability for a broken system that's literally ruining lives.

Because maybe they are being forced to hide behind that? UK banking regulations are notoriously strict when it comes to what banks are allowed to share with their customers.


> How do you know the scale and how are you not sure that it's a vocal minority?

Read the articles and look for similar experiences for other banks. There's definitely a lot more noise about Monzo doing it. Maybe (as another comment suggested) there's a selection bias at work but I assure you it's a real thing, and appallingly handled by Monzo when it happens.

> Because maybe they are being forced to hide behind that?

That misinterprets what I said. If there are rules that say they don't have to be accountable for a misfiring fraud prevention system that hurts real people, where is the incentive to fix it?


What's the average age of a Barclays or TSB user vs a Monzo user? According to this, 72% of Monzo customers are 18-35: https://www.businessofapps.com/data/monzo-statistics/

If that's the case, it's fairly obvious why there'd be a lot more noise - a lot larger percentage of people in that age group doing crypto, and a larger percentage are "very online" and likely to complain about it there.


I've switched to Starling. I still use Coinbase yet Starling have never frozen my account or held on to my money for a year.


You know that it does not happen to 99.9% of their customers either? For some reason everyone here thinks they are a snowflake.


You're rather contradicting yourself there. If it's not happening to 99.9% of their customers then we obviously are special snowflakes. Maybe you could think through your attempts at insults a bit better.


It's you who claimed that nothing happens with your other accounts, as if that were "proof" of anything.


No, I didn't. Try reading it again.


They are 100% not hiding behind the "we can't tell you why". It only takes the smallest amount of reading to figure that out. They have to report it to the FCA, and you can read the reports from the FCA that suggest it's not actually any worse than other banks. It's just more talked about. (No, I am not going to dig them out for you.)


Searched and I can't find any. I did, however, find a Which? article that says "Resolver found almost three-quarters of complaints about frozen bank accounts mentioned ‘digital’ banks".

Statistically it doesn't seem likely that Monzo and its ilk would be so overrepresented merely because their customers are more fazed than others about having their money taken away. Still, if the FCA have published meaningful statistics I could be proved wrong.

https://www.which.co.uk/news/2021/09/why-banks-are-freezing-...


I can't tell which way the arrow of causality points, but all the digital banks in the UK are in various stages of still growing to reach the scale where they can become sustainably profitable.

This in turn means that they will have their risk assessments inverted from the usual high-street banks: optimise signup/account creation flow, and deal with AML requirements in a slightly delayed fashion. Making it really easy and smooth to open a current account brings in a surprising fraction of the crowd who would be rejected or otherwise earmarked by high-street banks.

Being digital upstarts, these modern banks also don't have the fraud and risk departments their established competition has. In order to not get hammered by the FCA, they almost certainly veer on the blunt instrument side when dealing with suspicious activity. And law of large numbers guarantees that there will be a significant number of false positives.


As a former Barclays customer I can confirm this. Last year I changed from self-employed to limited company and changing my business account at Barclays proved to be impossible, despite always previously being in credit with no issues at all. I was able to open a Monzo business account in less than 20 minutes.

This, I should add, at a time when there was a high incidence of fraud with people trying to open business accounts for dormant and non-existent companies simply so they could claim Govt backed loans where it was clear that very few checks were actually being carried out.


Most likely some automated AML check...

You are simply a false positive of this ineffective policy, which has a success rate of 0.2% according to a recent United Nations study: https://www.effectiveaml.org/un-slams-aml-success-rate/

Globally, it has almost zero impact on crime for an infinite cost.


Yeah, this ended up on Watchdog on UK TV which was entirely unfair. No UK bank can tell you why your account was frozen, it's illegal to do so.

(I don't work for Monzo, but I've been a happy customer since day 1)


Disgruntled former Monzo customer here too. Froze my account after I cashed out some bitcoin on Coinbase, held on to all my money for over a year and refused to answer support requests. Only got my money back off them when I brought the financial ombudsman into it.

Don't touch them with a ten foot pole.


They explicitly tell you that crypto is banned, tbf.


Wait huh? Is this a UK law? Or they don't support transfers from Coinbase for some other reason?


> Do they still have a haywire fraud detection system that randomly freezes innocent people's accounts?

As others have pointed out already, AML/KYC laws are strict. They are strict in general for financial services, but for banks, because of their privileged position in the financial system, it's even stricter.

But there is a second aspect which is that challenger banks such as Monzo take an even more cookie-cutter approach. If you don't fit their definition of what a "client" is then you will be in for a hard time. Normal banks do this too (to a degree) but challenger banks are much more hard-core about it because if you fall outside the cookie-cutter then you mess up their fragile business model.

Case in point, I know of a well-known, well-established, UK VoIP operator. They moved their business over to one of these challenger banks (might have been Monzo !) because the challenger bank provided APIs to enable integration to their internal systems, which is something that the old-school high-street bank did not offer - and the banking fees were lower too, always a bonus !

TL;DR: $challenger_bank had a definition of a client that did not include provision of VoIP services. So after about a year as a client, said VoIP provider found their account frozen (in this instance they were explicitly told, it wasn't a silent freeze). VoIP provider attempted to constructively engage with $challenger_bank but it was like talking to a brick wall "computer says no".

(N.B. I have oversimplified the story a bit, so please don't nitpick !)


Is it Andrews & Arnold? I remember their director blogging about moving the business to Monzo so they can get real-time webhooks for incoming payments. I've just checked and they still appear to be using Monzo as per their "bank details" page.


> Is it Andrews & Arnold?

Nope, not them. I'm surprised they haven't been frozen given Monzo's stated policy[1]:

"Certain industries have higher risks, where we need to put extra checks and controls in place. We’re currently focusing on industries that don’t need these. In future, we might offer accounts to some of these industries. But we appreciate this is disappointing for some businesses at the moment."

Followed by a long list that includes "technology equipment, like lasers or telecommunications"

(For those wondering, telecoms wasn't there at the time the other guys had an account, and they were not selling equipment anyway)

[1] https://monzo.com/i/business/eligibility/


Didn't ever have that trouble, but I have been repeatedly turned down for a joint account, even though my partner and I both have good credit records, high earnings, and no worrying financial history.

Switched to Starling instead and created one instantly.


The fear of that happening to me was the reason I terminated my account too. Reading about people having their main account frozen out of nowhere for months was very concerning.


Should banking really be on a cloud platform?

I do believe AWS is likely far more secure than any DIY computing environment but even so, should banking be on cloud infrastructure? I'm not saying I think this is a bad idea but it came to mind when I read this.

Also, is it really a good idea for a bank to be talking openly about its security strategy? Isn't an important part of security not to let on anything that might be used against you? For example if determined hackers know your systems then they can keep an eye out long term for vulnerabilities in those technologies and be ready to strike. Does this sort of thing matter or not?


On the first point, I don't see any particular reason why banking shouldn't go with cloud. Obviously banking has regulatory hurdles and things like availability are important so it'll require a specific architecture to help achieve that, but in general shouldn't be a problem.

On the second point, I'd say it depends on the level of granularity and detail. Here they're describing general mechanisms and they're not saying that this is all they do, so I think it's a good thing.

In general relying on obscurity for your security is a bad idea, as attackers will often find a way to get that information. That said I wouldn't give attackers a complete schematic of my env. and every protection, no sense in making things easy for them :)


All you need is a single disgruntled ex-employee to be bribed by hackers to reveal your complete security design.


Social engineering is likely the easiest way to get through security systems.


I don’t see why banking shouldn’t be on a cloud platform, you’re not really giving any reason why we should question it either.

As to your second point, security through obscurity is generally believed to not be worthwhile.


> As to your second point, security through obscurity is generally believed to not be worthwhile.

Security only through obscurity - sure.

But obscurity as an additional layer, as part of a defence in depth strategy, still has some value.

It’s rare for any large org to publicly discuss any details of its security design, let alone a bank. Monzo must be supremely confident in their system to go public with this information, or judge that the marketing/recruitment benefit outweighs any potential risk.


What about the additional risks you're now exposed to, e.g. a malicious employee at the cloud provider, or jurisdictional risk from FISA requests to the CP?


Why is the risk of a malicious employee at a cloud provider somehow different from the risk of a malicious employee at a colocation facility, or even a private data center?


Because now you have risk from malicious employees at two organisations, your own AND the cloud provider, instead of just one. Furthermore, you have very little visibility into the cloud provider's security practices. And for anyone saying that cloud providers are inevitably more secure than your own organisation, have a look at the Azurescape vulnerability.


You can, and indeed must, mitigate risks from employees. This is part of the regulations around financial services, which start with PCI-DSS for payments and become more encompassing as you move up the service ladder. The types of cloud providers who can tick those regulatory boxes for you naturally want to pass those costs on to someone.


>> I don’t see why banking shouldn’t be on a cloud platform

What if AWS gets cracked/hacked/compromised?

I know it's not happened yet, but it's not impossible.


I guess there are two important questions:

* For individual banks and their customers, is it more likely that an AWS-wide exploit will compromise an AWS-hosted bank, or is it more likely that a self-hosting-specific exploit will compromise a self-hosted bank?

* For society, is it better that security efforts are concentrated in on centralised providers like AWS, or is it better that security efforts are distributed, on individual hosting entities?


That's more or less the same question as "what if the data center/servers operated by the bank gets compromised".

In reality it's always about tradeoffs: who to delegate to and who to trust.


>That's more or less the same question as "what if the data center/servers operated by the bank gets compromised".

The difference is that cloud relies on public services, which once compromised (e.g. via social engineering) allow for lateral attacks with much bigger impact (e.g. Lapsus$) across the complete customer base. This makes social engineering much more attractive in cost vs impact. The resulting monoculture, not only in the software but in the infrastructure and configuration, also increases the impact of technical attacks on specific exploits.


> The difference is that cloud relies on public services

What are the public services that AWS relies on, and how are they different from a bank's server farm, or a bank renting out space in a datacenter?

The same, really, applies to all other concerns.


Route 53, CloudFront, AWS Console, AWS IAM, etc.

All of these services are hosted by AWS in a multi-tenant fashion, sharing not only the code, but infrastructure and configuration patterns.


>> security through obscurity is generally believed to not be worthwhile

Is not advertising your security architecture "security through obscurity"?


How else would you scale to meet peak demand without being wasteful?

Banking has a fairly predictable usage pattern, but there will be black swan financial events that cause 100-1000x load. On top of that, how else could you serve customers around the globe with reasonable latencies?

These are genuine questions. I’ll admit I’m an engineer whose entire career has been during the cloud era. I don’t see how cloud’s advantages of scaling and worldwide “edge” locations can be replicated by the average bank’s tech team.


> How else would you scale to meet peak demand without being wasteful?

I can only speak to the banks I've had the opportunity to work with, who stayed on prem or built their own internal cloud infra: what you call waste, they consider a premium/cost of doing business for security and resilience. I'm sure a lot of folks would've said the same thing about JIT supply chains (squeezed to be as efficient as possible) until they unraveled.

> I don’t see how cloud’s advantages of scaling and worldwide “edge” locations can be replicated by the average bank’s tech team.

As always, "what are your requirements and what are you optimizing for?" Most folks don't need web scale nor edge locations [1] [2], they'll get by just fine with a CDN and some API endpoints [3].

[1] https://news.ycombinator.com/item?id=19576092

[2] https://blog.bradfieldcs.com/you-are-not-google-84912cf44afb

[3] http://mcfunley.com/choose-boring-technology


Are there many examples of companies (today) building 1000x the infrastructure they normally need? I can see how it could be necessary for some companies.

> Most folks don't need web scale nor edge locations [1] [2], they'll get by just fine with a CDN and some API endpoints [3]

Isn’t using a CDN using cloud?


When cloud computing started, our banking clients said ‘never’ to the cloud. Now it is fairly normal depending on the region. In some countries in Asia and the Middle East, it is not allowed yet.


I think a lot of newer banks will be "cloud native" and there are companies such as Thought Machine that are developing cloud-native core banking systems (https://thoughtmachine.net/vault)

For legacy banks, it will be much harder to move a mainframe based core banking system to the cloud, or migrate from a mainframe system to a new cloud system.


To quote some guys in management on why they picked Alicloud: it was cheap :)


I'm still confused about the potential GDPR issues of using an American company's cloud service, and the possibility that American law requires the cloud provider to grant access to data. Wasn't this an issue in the EU with Microsoft?


The biggest security hole for every organisation is its remote, work-from-home workers.

I'd be interested to hear how this Monzo bank addresses the problem of someone walking in the home of one of their programmers and lifting access keys to AWS whilst that person is at the supermarket, and leaving with no-one the wiser.

Or installing a keylogger USB device onto their keyboard cable.


Not Monzo but I can tell you SOME ways we deal with this kind of risk.

* Work laptops are all using trusted computing, tamper detection and remote attestation which, while imperfect, does provide some verification that the hardware isn't being tampered with.

* Additionally it means if you try to access any service from a non-work laptop (or a work laptop failing remote attestation), it doesn't let you in. Even if you have all the credentials.

* Passwordless authentication means capturing PIN codes with a keylogger is of very limited value unless you also steal the laptop. Even then, an additional factor is required such as mobile push or biometric.

* No developer should have access to any AWS keys that would grant access to production data. But in any case, we use AWS SSO which only returns temporary AWS keys.

* There are lots of systems that monitor for anomalous activity. For example if a user account suddenly starts hitting lots of access denied errors or accessing things they don't normally access, that's a hint they've been compromised.
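That last point can be sketched with a toy example (this is not Monzo's or anyone's actual implementation, and the window size and threshold are made up): flag a principal whose recent rate of access-denied errors spikes over a sliding window of API events.

```python
from collections import deque

def anomaly_detector(window: int = 50, threshold: float = 0.3):
    """Flag a principal whose recent rate of access-denied errors
    rises above `threshold` over a sliding window of API events."""
    events = deque(maxlen=window)

    def observe(denied: bool) -> bool:
        events.append(denied)
        denied_rate = sum(events) / len(events)
        # Only alert once there's enough history to be meaningful.
        return len(events) >= 20 and denied_rate > threshold

    return observe

detect = anomaly_detector()
# Normal traffic: successful requests don't trip the alarm.
alerts = [detect(False) for _ in range(40)]
# A sudden burst of denials, e.g. a stolen credential probing services.
alerts += [detect(True) for _ in range(20)]
```

Real systems compare against a per-user baseline rather than a fixed threshold, but the shape is the same: cheap per-event state, alert on deviation.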


It's not like office networks are a security paradise either


Not Monzo, but I have a friend who works at a large global bank HQ in a mid-senior position.

They do remote desktop connection into their PC at the office. The work laptops given to them can't connect to anything but the PC in the office(so, it's merely a thin client). A fingerprint reader device connected through the USB port and a physical device for generating single use codes are used in addition to username/password.

At the start of the pandemic they used their own laptops to VPN into the work network and then connect to the remote desktop but not too much later they switched to complete solution.

I know this because I helped with troubleshooting connection issues. It does look quite secure to me; the only difference from an office environment's security seems to be the possibility of an intruder making an employee do something at gunpoint.


I did some work for a Dutch bank a while back, they had a similar setup.

The lag from remote access (Citrix) drove me nuts!


I'd say the keylogger can be an issue if they're able to be alone with the computer for a while. I'm not sure that all laptops can detect that they've been opened (my HP EliteBook and previous ProBooks don't), but I'd assume it's unlikely that the attacker would leave no other traces in the house.

But other than that, enforcing session auto-locking should work fairly well. Of course, if the locking relies on some kind of idle-detection agent that the employee has defeated with a mouse jiggler, all bets are off...

They can also enforce using MFA for AWS (and probably for GCP and Azure, too, but I don't use those) and not use plain access keys.
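For what it's worth, the virtual MFA devices those clouds support are plain TOTP (RFC 6238). A minimal, illustrative sketch of how the six-digit codes are derived (not production code; real deployments use a vetted library):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, digits: int = 6, step: int = 30) -> str:
    """Minimal RFC 6238 TOTP: the second factor that a keylogger
    capturing your password still can't reproduce later."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector (SHA-1 secret, T=59s, 8 digits).
print(totp(b"12345678901234567890", at=59, digits=8))  # -> 94287082
```

The code is worthless without the shared secret, which lives on the phone or hardware token, not on the keyboard cable.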


You wouldn't run or develop code locally. AWS keys would be secrets managed by Vault or something.

If you have AWS keys on staff laptops at home, you've already failed.

We don't allow any code at all on local machines.
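A toy illustration of the dynamic-secrets pattern being described (this is not Vault's actual API): credentials are minted on demand with short TTLs, so nothing long-lived ever sits on a laptop for an intruder to lift.

```python
import secrets
import time

class CredentialBroker:
    """Toy model of dynamic secrets: clients get short-lived leases
    instead of long-lived keys, so a stolen credential expires fast."""

    def __init__(self, ttl_seconds: int = 900):
        self.ttl = ttl_seconds
        self.leases = {}  # lease_id -> expiry timestamp

    def issue(self) -> str:
        lease_id = secrets.token_hex(8)
        self.leases[lease_id] = time.time() + self.ttl
        return lease_id

    def is_valid(self, lease_id: str) -> bool:
        expiry = self.leases.get(lease_id)
        return expiry is not None and time.time() < expiry

broker = CredentialBroker(ttl_seconds=900)
lease = broker.issue()  # handed to a CI job or developer session
```

The real thing also revokes leases centrally and audits every issuance, which is the part that matters when a laptop goes missing.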


Where is the codebase kept then? Do you have to remote desktop in to your development environment?


Remote desktops are generally the access point for anything sensitive like source code or data.


> more than 20,000 containerised workloads across more than 2000 microservices to date.

This is insane. What am I missing here that an organization is bragging about having 2000 moving parts?


They seem to be operating very well with the 2000 services (the number doesn't seem to have changed much in the last few years which is interesting, despite them adding new features, etc).

The domain is obviously very complex, as it's not just a ledger but risk, loans, overdrafts, savings, etc., as well as premium offerings and integrations with different payment networks like Faster Payments and Mastercard. Then there are the budgeting features, payments between friends, the support system (which is probably quite complex in itself), card issuing, internal dashboards and metric services, etc., etc. I can easily see how this gets to 2000 services, and I didn't even begin to think about the business accounts, US accounts, etc.


How is it insane? Do you think factories don't have 2000 total pieces of equipment that go into manufacturing widgets?

Quantity of moving parts is a nearly worthless metric on its own without describing the scale and complexity of those moving parts.


Those moving parts are defined by the complexity of the business. Banking software with 20,000 classes deployed in a J2EE application server on a mainframe would not be much different.


That's not really true, since microservices involve what boils down to RPC over a network. There are many more failure modes when you have 2000 asynchronous processes talking to one another.


That is true, but it also allows for much more orderly start-up and shut-down as well as automatic recovery. A service is a pretty well defined entity that can be exhaustively tested far easier than the corresponding monolith with 2000 classes and tons of non-local effects. To use processes for that purpose has definitive advantages. See "Erlang/OTP" for an example of how this can give you incredibly solid distributed architectures.


They are the actual inspiration for my blog post against microservices 2 years ago (https://blog.matthieud.me/2019/microservices-considered-harm...).

At least, they've only added 500 microservices in 2 years (not even 1 per day, how sad!)


A friend of mine working for a rival neobank was telling me about a tiny piece of the whole thing that used 5 microservices to achieve it. This was a security related piece, and when I pressed him why something as simple as what he was describing needed 5 services, he went somewhat into detail, and it sort of made sense.

I can imagine 2000 microservices being rather low for a bank.


Not sure they're bragging about it - they're just stating it as context for their blog post.

And is 2000 moving parts too many? How many moving parts do you need to run a bank? I can imagine they're having to comply with ~2000 legislation clauses, for example. Isn't that just the complexity of their domain?


From my understanding the approach to microservices was more or less a day 1 thing due to some of the early engineering hires being very experienced with them; so the number of them was comparatively high pretty early on.


What’s insane about it? Tooling provides what’s needed to build, test, scan, deploy, manage and monitor.


I think it's great that more companies are open about what they're doing for security. It makes them sound confident in their abilities, unlike those who are nervous to mention things like "We use Octopus" or "We use AKS" because they're less confident that the information isn't an invitation to a hacker!

Now all we need is to somehow capture some of this "best practice" and make it normal practice, enabled by default and documented well so that organisations don't set stuff up and then disable all the controls because it is too hard to understand.


One reason companies don't do this is to retain a little bit of protection against zero-days. When a zero-day is released, all providers notice a huge wave of scanning for the vulnerability. Scanning huge blocks of the internet takes time, but if a hacker already has a list of which companies use which tools, and where, the search can be narrowed down a lot.

AWS/Azure/GCP etc., for example, publish IP ranges for their services. If a zero-day for any of those services is released, a hacker can immediately narrow down the attack range and save a lot of time.


That seems like a bad reason. With a good enough connection and `masscan` you can "scan the whole internet" (on a single port) in 5 minutes. Security through obscurity on IPv4 makes no sense.


One day I would love to see what having 2,000 microservices entails: which features each service covers. I can't think of 2,000 microservices an online bank would have.


Maybe they mean instances? That doesn't seem too crazy.


It means services (I worked there). It works fine, all the services are very similar and tight in scope


I doubt it. Micro-service is a pretty specific term.


well, DeFi is secured by default.


The "S" in DeFi stands for Security.



