
We do business in the US financial sector, and the sentiment we are getting with regard to cloud vs on-prem seems to be growing into a bimodal distribution.

I would say it's nearly a 50/50 split until we have conversations about how our product actually works and the incredibly sticky problem that is PII...

Once the risks are reviewed in open and honest ways, we find that virtually all of our clients would prefer to keep our solution on-prem. Only those who have already made the full leap into cloud compute continue to take exception, for obvious practical reasons.




How is the problem of PII better solved on premises?


Think about it more abstractly from the perspective of trust and # of actors involved.

If you run 100% of your IT workload on-prem, the ability to control the flow of data can be boiled down into a physical exercise of following fiber channel cables in your own datacenter. Having a unified set of firewall rules that define your entire public interface also helps a lot.
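To make the "unified set of firewall rules" concrete, here is a minimal sketch of what a single default-deny edge ruleset can look like, in nftables syntax (the table layout and the single exposed port are hypothetical, just to show the shape):

```
# /etc/nftables.conf -- the whole public interface, defined in one place
table inet edge {
    chain inbound {
        type filter hook input priority 0; policy drop;  # default deny
        ct state established,related accept              # allow return traffic
        iifname "lo" accept                              # loopback
        tcp dport { 443 } accept                         # the only public service
    }
}
```

The point is less the specific rules than that there is exactly one place to audit: everything not explicitly accepted is dropped.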

You can actually make deterministic guarantees to your customers that not only your own systems are secure, but also that the systems of your vendors and other 3rd parties are as well. The moment you start configuring site-to-site VPNs with 3rd parties across which you intend to transact sensitive business knowledge, you are surrendering an entire mountain of security constraints.

If we are being honest with ourselves, a lot of shops that are 100% on-prem probably have worse security practices than AWS et al. Perhaps the biggest hazard is really the hybrid model. If some fintech went 100% into the cloud without even an HSM on-prem to worry about, then you could probably make a solid argument on the other side of the spectrum. Also, remember that multi-cloud might seem like a resiliency measure, but it also adds another target to your back.

The middle ground is where all the pain seems to be. Hybrid cloud usually means more required trust than most organizations ever wanted to enter into. I frequently find myself as the harbinger of bad news when I get into deep-dive technical calls with some of our customers. Turns out a lot of the other vendors we work with like to bend the truth in order to make a quick buck. Many perverse incentives are pulling these massive organizations into hilariously-contorted IT stances, and some of us are starting to see a consulting opportunity.


> You can actually make deterministic guarantees to your customers that not only your own systems are secure, but also that the systems of your vendors and other 3rd parties are as well.

You can make a "deterministic" guarantee, whatever that is, that your systems are secure? That seems pretty bold and probably dangerous, no?


> That seems pretty bold and probably dangerous, no?

It's not dangerous in my experience. The more dangerous angle for me is this belief that it is impossible (or hopelessly difficult) to build a secure system.

The reality is that it is only possible if you are willing to take total ownership of the entire vertical. If you control every single byte that enters and exits your enterprise, you can prove that things are secure. Is it practical to do this in all cases? No. Is it feasible in theory and in certain cases? Absolutely.

If you buy into the 3rd party hosting game, you instantly lose control over the critical variables you would need in order to create the opportunity for these sorts of guarantees to exist in the first place. You (and your customers) will be stuck wondering about side channel damage and human factors that you have no direct control over. When you own the hardware and the real estate it is parked on top of, you can start to reel these things back in really quickly with powerful policy frameworks (2-person rules for critical changes, mandatory checklists, etc). These sorts of policies seem to work really well for very tricky areas like keeping our nuclear weapons from doing inappropriate things.
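The 2-person rule is simple enough to sketch in a few lines. This is a hypothetical illustration (the change names and approval store are made up), showing the one invariant that matters: no single actor can push a critical change alone.

```python
# Hypothetical sketch of a 2-person rule for critical changes:
# a change may only be applied once two *distinct* people have approved it.

def can_apply(change_id, approvals):
    """approvals: mapping of change_id -> set of approver usernames."""
    approvers = approvals.get(change_id, set())
    return len(approvers) >= 2  # sets deduplicate, so one person approving twice is still one

approvals = {}
approvals.setdefault("rotate-hsm-keys", set()).add("alice")
assert not can_apply("rotate-hsm-keys", approvals)  # one approver is not enough

approvals["rotate-hsm-keys"].add("alice")           # duplicate approval changes nothing
assert not can_apply("rotate-hsm-keys", approvals)

approvals["rotate-hsm-keys"].add("bob")             # a second, distinct person
assert can_apply("rotate-hsm-keys", approvals)
```

Using a set of usernames (rather than a counter) is the whole trick: it makes the "two distinct people" property structural instead of procedural.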


How do you know your own programmers aren't introducing security bugs? Or are even acting against your interests intentionally? It happens to every other software developer, why not you?

Do you build all your own hardware from raw materials? How do you know everyone in the supply chains is perfectly secure?

Attacks have succeeded against the CIA, against RSA, Google, and many others. Nuclear weapons plans have been stolen. I would not trust a vendor who claimed they could guarantee security.


> If we are being honest with ourselves, a lot of shops that are 100% on-prem probably have worse security practices than AWS, et. al.

Does it matter? You have the same freedom to fuck security up setting up your AWS infrastructure as you have setting up your on-prem infrastructure. All the very competent AWS staff can do is add less risk of their own; they can't save you from yourself.


Good ol' IAM: basically an incoherent JSON programming language documented in gibberish.
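For anyone who has been spared it, a small policy fragment gives the flavor (the bucket name and org ID here are hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::example-bucket/*",
    "Condition": {
      "StringEquals": { "aws:PrincipalOrgID": "o-example" }
    }
  }]
}
```

Note the trailing `/*` on the Resource: without it, the ARN matches the bucket itself rather than the objects inside it, so the policy silently grants nothing useful. Exactly the sort of detail that gets buried.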


>the ability to control the flow of data can be boiled down into a physical exercise of following fiber channel cables in your own datacenter

I suppose the infamous Equifax breach was due to a secret fiber optic cable running out of their datacenter?


No it was due to the officially-endorsed fiber optic cables sitting in plain sight and the fact that they do business with so many other parties.

I work with some intermediate vendors in this space (they have direct access to the credit bureau data), and their security mechanisms are of concern. I am under some very strict NDA constraints, but I can say that there are serious problems and I am not surprised that breaches occur with regular frequency.

You can barely trust your own in-house developers to get these things right. How can you possibly hope to trust many other additional parties to get it right simultaneously as well?


> I suppose the infamous Equifax breach was due to a secret fiber optic cable running out of their datacenter?

No, of course not. But when you're dealing with physical infrastructure you can actually touch, it's much clearer and more certain what you're dealing with.


I don't think that is as true as you make it sound. I have managed both on-prem and cloud infrastructure, and I am much more confident that my cloud servers are secure, because a whole lot of the work is done by a provider whose staff, between them, know a lot more than I do.

Even in a really simple on-prem scenario, you have switches to configure, VLANs to set up, hardware drivers, a gazillion updates to make all the time, and a tonne of employees making it all very difficult. The fact that I can see it physically doesn't really help me that much.


For one, at a certain scale you shouldn't be running an in-house DC solo; you can get away with that more in the cloud, but at a certain scale, again, you want more manpower for review and auditing. Most of what you described aren't actually the principal attack vectors either. The primary vectors are the same between on-prem and cloud: misconfigured VPNs, stolen VPN credentials, poor network segmentation (the cloud absolutely does not fix this for you; you still need brain power and auditing to find accidental misconfigs).
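The "auditing to find accidental misconfigs" part is the same job in both worlds. As a hedged sketch, here is the kind of check you end up writing either way; the rule format is hypothetical, as if parsed from a firewall or security-group export:

```python
# Flag ingress rules open to the whole internet on sensitive ports.
# Works identically whether the rules came from an on-prem firewall
# dump or a cloud security-group API: the misconfig looks the same.
SENSITIVE_PORTS = {22, 3389, 1433, 5432}  # ssh, rdp, mssql, postgres

def find_misconfigs(rules):
    return [
        r for r in rules
        if r["source"] == "0.0.0.0/0" and r["port"] in SENSITIVE_PORTS
    ]

rules = [
    {"name": "web",      "source": "0.0.0.0/0",  "port": 443},   # fine: public web
    {"name": "db-oops",  "source": "0.0.0.0/0",  "port": 5432},  # database open to the world
    {"name": "ssh-mgmt", "source": "10.0.0.0/8", "port": 22},    # fine: internal only
]
assert [r["name"] for r in find_misconfigs(rules)] == ["db-oops"]
```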


Just my observations, but some business entities run into legal challenges that vary greatly by industry. B2B customers have a contractual relationship with you, not with your hosting provider. If your hosting provider has an "oops, we leaked your data" incident, they have only breached their contract with you, their customer, not with your customers themselves. The contract can specify how data is managed.

Adding to this, some companies'/banks' legal controls are compatible with having a 3rd-party data processor audited under SOC 2 controls, and some are not. Some companies can cite the 3rd party's audited certifications, and some cannot, based on preexisting contractual agreements with their own customers. I am not a lawyer, but I had to sit in many meetings with lawyers and business people, and this is a real issue they have to address. I have also worked for a large bank. Rules, regulations, and contracts around 3rd-party data processors and financial institutions can get very complicated.

There are a myriad of additional complicating variables that go beyond regulations. Legal obligations also vary by relationship. If a bank has preexisting relationships with other banks, it may be obligated to get approval from those banks before changing how their data is managed. Amending contracts is non-trivial. This rabbit hole can get very deep.


I'm a banking attorney and I think you've hit the nail on the head. Vendor management is popular with regulators, particularly with respect to info sec and data privacy, and navigating the contractual responsibilities can quickly become burdensome.

Getting the lawyers on both sides to agree to language for the ongoing certification(s) that will also satisfy the internal auditors, the outside auditors, the regulators... suddenly something that sounded simple when the contract was signed takes several meetings and a lot of back and forth.


What's the worst that can happen in an on-premises attack, vs. the worst that could happen if AWS were hacked?

The amount of financial data that could be exploited at once is orders of magnitude larger in a popular cloud. I don't think it's strange that a regulator might look at that failure point with some trepidation.


On the other hand, at least AWS, Azure etc. all have a vested interest in doing things well and securely. At a bank, most employees know nothing and care nothing about the HW and SW systems, they just use and abuse them.


Why do you think banks don't have vested interest?

And I don't expect my cleaner to service my car. Banks have DBAs, network and physical security experts on tap (just like cloud providers do).


Well, they do have a vested interest, but they are predictably irrational about another one: trying to make more money by cutting costs from "non-core businesses". It isn't rational, but certain sorts of predictable stupidity are more common in some contexts than others.


Smaller surface area to guard for one.


The surface area of vulnerabilities like Spectre and Meltdown is much lower.


because you can control physical access to the hardware


> Once the risks are reviewed in open and honest ways, we find that virtually all of our clients would prefer to keep our solution on-prem.

I hope this kind of risk assessment becomes more common. I'm used to people not caring until things blow up in their faces.


Well, I guess you are speaking about companies who don't know what they are doing. But this article is about banks, which have hundreds of servers, the ability to recover anything, full-time ops teams, and monitoring and automation that has been in place for 20 years already. Moving to the cloud provides very little advantage (definitely not financially) to such companies. They are not a SaaS that might need to double its infrastructure overnight. Many of the advantages provided by the cloud were available years before clouds, thanks to VMware.

Another thing is that existing bank apps are often monoliths (when you are lucky, in Java with Spring or C# .NET; when not, in Visual Basic, COBOL, PowerBuilder, or SAP) or huge interconnected services, with huge databases holding thousands of stored procedures and SOAP APIs already in place. This is not a place where you would leverage Elastic Beanstalk or AWS Lambda.


> Moving to cloud provides very little advantage (definitely not financially) to such companies.

As someone who has worked as a software developer for big NY banks for the past 25 years, that's simply not true.

The answer is: it's complicated. JPMorgan Chase, for instance, has a $12 billion annual IT spend. They do a LOT of different things. Admittedly, certain things are best left on-prem for regulatory audit points (more with respect to resilience/business continuity than security). But a substantial portion of it could be moved to the cloud at some cost savings.

Additionally, the brittleness of the service and database infrastructure is a pathology of the on-prem environment rather than an argument for it. Cohorts of SAs and DBAs are wasting their time doing work which, in a modern environment, would be scripted and more flexible.


It's not as simple as that. Just because it's a case of "we have lots of old stuff" doesn't mean you need to ignore the new stuff. Building solutions with traditional data centers and staff, even if you have a lot of both, is often a lot slower. You can spin up entire fleets of servers (or even use services such as AWS Lambda, API Gateway, and DynamoDB, so you never touch servers) and get a solution out in much less time. It's not just about responding to load, but also about being fast enough to get new stuff out there. Traditional financial services organisations are notoriously slow to adapt to changes in the market. Using cloud resources alongside the legacy infrastructure is one way to try to remain competitive.


> Traditional financial services organisations are notoriously slow to adapt to changes in the market. Using cloud resources alongside the legacy infrastructure is one way to try and remain competitive.

Or, a single executive action could revamp IT resource acquisition policies. Simply mandate that internal self-service resource provisioning capabilities be developed.

It is not rocket science to put a web dashboard around VMware or some other virtualization solution. Most of them already sell something like this as part of their feature set.
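To underline the "not rocket science" point, the self-service surface itself is tiny. A sketch using only the Python standard library, where `provision_vm` stands in for whatever the real virtualization backend's API call would be (that stub, the `/vms` path, and the request fields are all hypothetical):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def provision_vm(spec):
    # Stub for the real backend call (vCenter API, etc.); here it just
    # acknowledges the request so the end-to-end shape is visible.
    return {"name": spec["name"], "status": "provisioning"}

class ProvisionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        out = json.dumps(provision_vm(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(out)))
        self.end_headers()
        self.wfile.write(out)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), ProvisionHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A "user" requesting a server through the dashboard API.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/vms",
    data=json.dumps({"name": "sql-01", "cpus": 4}).encode(),
    headers={"Content-Type": "application/json"},
)
resp = json.loads(urllib.request.urlopen(req).read())
server.shutdown()
```

The hard parts in practice are the backend integration and the approval policy around it, not the dashboard; but the plumbing really is a week's work with buy-in.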

You could set something like this up in a week if you had enough buy-in. There are no excuses if you want to win at this kind of game. The bad guys are way more patient in aggregate.

A second perspective: one of our customers has a "traditional" process for setting up new IT workloads, and we were still able to get 3 servers provisioned within 8 hours, along with 3 new publicly-routable IPv4 addresses and matching DNS and TLS certs. Anything slower than this in 2021, for any organization, is indicative of sheer incompetence IMO. We do have some customers that are really slow, but they are also really small. I don't think any F500 is taking weeks to provision SQL Server anymore.


Not to mention that the same infrastructure you say is in abundance is usually already pretty heavily utilised, so there isn't just spare capacity lying around. Procuring new hardware can take months, and if there is no rack space you are looking at more months to get it deployed.

The people who need to plan, coordinate, and install all of this are also pretty heavily overworked at the moment because, believe it or not, there is a very large shortage of skilled sysadmins in the world.

Using the cloud doesn't solve those problems, but it does help reduce their impact.


Does "on prem" ever include (a) on your own hardware, but in a third-party colo and (b) on rented hardware, where only you have root? Where do those sit between full cloud and traditional servers-in-the-basement?

To look at it another way, how much of the risk is about the physical location of machines, and how much is about who operates them?



