I was there when C1 negotiated that deal with Amazon, and they swore it couldn't happen. But of course, we all know that's false.



Miss the LevelMoney folks...

Yeah AWS can’t protect you against a misconfigured environment


The problem with AWS (and other cloud providers) is that it's nearly impossible to properly configure an environment because of how many different methods there are to gain access to resources.

Capital One has been all in on AWS and has dedicated an immense amount of time and money to developing systems for managing their AWS resources (Cloud Custodian for instance) and yet they still couldn't protect their data. What chance is there that anyone else could?


The whole point of moving to a cloud provider is to allow quick setup and deployment of new projects/products, as well as to limit your costs. With that sort of open-ended system, unless everyone is always thinking security first and is okay with the inevitable slowdowns associated with a highly locked-down system, you will more than likely always run the risk of this sort of situation.


Having everything locked down by default on AWS/Azure/GCP would go a long way to improving the security of the internet. Centralisation isn't healthy, but at least these companies could make a credible impact on data security by pushing the mentality.


All AWS APIs are deny-by-default. Only if a pertinent policy (IAM or resource policy) grants access is it allowed.

IME, the usual mistake many implementors make is that they inadvertently grant too many privileges and often to the wrong audience.
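
To make that concrete, here's a rough boto3 sketch of the difference (the policy and bucket names are made up, so treat it as an illustration rather than a recommendation). The first document is the kind of wildcard grant that "works" on day one and never gets revisited; the second is scoped to one bucket and two read actions.

    import json
    import boto3  # AWS SDK for Python

    # Over-broad grant: every S3 action on every bucket in the account.
    too_broad = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
    }

    # Scoped grant: read-only access to a single (hypothetical) bucket.
    scoped = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-bucket",      # hypothetical bucket
                "arn:aws:s3:::example-app-bucket/*",
            ],
        }],
    }

    iam = boto3.client("iam")
    iam.create_policy(PolicyName="example-app-s3-read",
                      PolicyDocument=json.dumps(scoped))

Either one gets attached with one more API call, which is exactly why the broad one tends to win under deadline pressure.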


> The whole point of moving to a cloud provider is to allow quick setup and deployment of new projects/products

There is nothing approaching quick setup and deployment at large banks.

Not Citibank, but I previously worked for a financial firm that sold a copy of its back office fund administration stack. Large, on-site deployment. It would take a month or two to make a simple DNS change so they could locate the services running on their internal network. The client was a US depository trust with trillions on deposit. No, I won't name any names. But getting our software installed and deployed was as much fun as extracting a tooth with a dull wood chisel and a mallet.

This is my experience with one very large bank, but from speaking with others that have worked for/with other large banks, their experience has largely echoed mine. They tend to be very risk averse with external IT products, such as deferring critical security updates because they can't be sure what they could break, and they likely don't have end-to-end tests for critical systems that could cost a lot of money if the upgrade fails.

I know this first hand, because you don't always know or understand what's going on in 3rd party systems. I once screwed up a 3rd party system hosted on site. I was testing an upgrade on a dev server. Part of it involved schema changes, and I had dbo rights on both production and development servers. The hidden part that I didn't realize is that the 3rd party tool stored DB settings in your Windows roaming profile. So, because we only had 1 Windows AD domain and no other network separation, even though I was on a dev box, I was talking to the prod DB. Didn't even realize it (it wasn't directly evident unless you dug deep into settings) until I started getting calls from my users, complaining of errors. This was on the 3rd of July in the US. By the time I figured out the issue, it was about 3-4am on the 4th of July.

Had to make the call of rolling forward or back. But the supplied installer was missing some packages, so I couldn't complete the install. If we rolled back, an entire day's worth of tedious work by a 10 person team would have been lost. Worse yet, the tool was used by traders in Europe who were about to start their day. Being early in the morning on a US holiday, I couldn't reach the vendor's support. Couldn't even get hold of their EU support. I was on the phone with my boss, his boss and the head of back office in the wee hours of the morning on a holiday.

The decision was made to hold off on doing anything until we could talk to the vendor on the 5th. We ended up rolling forward and completing the install, but I was nearly shitting myself. For several days we were handling somewhere around 25B USD notional in bank debt that we could take no action on (which caused huge issues in PNL - profit and loss - reporting for several business days).

Thought for sure I was going to be fired. But in the post mortem I explained everything, and it was agreed that while I shared some blame, the totality of it wasn't my fault, and because I had diagnosed and fixed it as quickly as I could, I was OK. IIRC, the only real remediation we took to prevent a similar mishap was to disable roaming profiles on the dev servers and delete all existing profiles on them...


Yep, sounds like a bank to me. I worked at one of the big 4 for 6 years (way too long, I know) and the experience was horrible. It once took us a full year (no exaggeration) to get a single server allocated...and my group was actually one of the well-funded teams.


Funding wasn't a problem for the client in my story. They were happy to spend money. I think the initial contract was for X million USD, which would have covered something like 5000 support hours on our end (billing was based on time spent, not per incident), and after that it was something like 300 USD per hour.

Separate project: I know I was billed out at 500 USD per hour 10 years ago. That was working with an exchange. It was initially a joint venture, but my company decided to divest itself. We sold the exchange all of the source for the system that we had developed and that they'd be running. We clearly documented our "build" process and requirements. The core part of the system (and as far as I know the only part that ever went live) was a Python app that used very specific modules, but we also had some patches that were submitted upstream but not yet in public distributions. So we were very explicit that you need exactly these versions of Python, these exact versions of the libs, and you need to apply our patches to the libs. We had also only developed and tested on a specific version of Linux, and made it clear they should use the same, or we couldn't guarantee the software.

Well, we handed all of the source and documentation to the exchange. They, in turn, hired an outside consulting group. For the life of them, they could not get it to work. First question asked was: did you follow the instructions? Response was "of course, do you think we're idiots?"

The assertion that they had followed the instructions exactly sent me into a roughly 3 week debugging session, attempting to reproduce the issues they were having in our office. Starting from scratch with the exact instructions I had written up for them (I was the only author of the Python app that was failing), I could not reproduce the issue.

After 3 weeks of back and forth, escalations on all sides and some thinly veiled accusations of sabotage, I went on site, sat down with the consultant, told him to start from scratch and show me what he'd been doing.

First thing I notice is that he installs the latest version of Python, and latest version of all the extra libs we needed. He'd completely ignored all of our instructions despite telling us the exact opposite!

It took all of 15 minutes to identify and correct the issue. We ended up billing close to 40K USD in support because the contractor didn't follow instructions and, well, lied (intentionally or not) about having done so. Never heard a peep from management about the hours or any questioning of the resolution, and as far as I know the exchange paid the bill without question, even in the height of the aftermath of the 2008 crash.


I think AWS's use of automated reasoning in this space is groundbreaking and shows the way forward for complex systems in the future.

See also: https://aws.amazon.com/blogs/security/protect-sensitive-data...


Are there AWS experts who can do some sort of quick audit or "sanity check" of an environment's configurations? AWS almost makes it too easy for someone who only sort of knows what they're doing (like me) to get things up and running.


There are many different automated systems for checking for misconfigurations in your AWS organization. Capital One even developed a very popular one (Cloud Custodian). Like most automated configuration checkers or monitoring systems, they rely on being configured by experts, because at their default settings they are mainly a source of annoying alerts that end up auto-filed to email folders you never look in, because this is agile and we can rationalize the alert rules in the next iteration (we won't). They can also auto-apply actions. Have fun debugging your CloudFormation stack that failed because the automated checker terminated the instance without notifying anyone, because it was missing a required tag.
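
To give a feel for what those rules boil down to, here's a rough boto3 sketch of the "required tag" check (not Cloud Custodian's actual code, and the tag name is made up). The contentious part is whether a hit triggers a report or a terminate:

    import boto3  # assumes credentials with read-only EC2 access

    REQUIRED_TAG = "owner"  # hypothetical tag your org mandates

    ec2 = boto3.client("ec2")
    paginator = ec2.get_paginator("describe_instances")

    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    # A real tool might terminate here; reporting is the safer default.
                    print(f"{instance['InstanceId']} is missing the '{REQUIRED_TAG}' tag")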

As useless as these checkers are, the main problem is that there are so many different ways to gain access to resources that it's almost impossible to have a system that's useful to the business while also provably secure either manually or automatically.

Don't forget even AWS themselves created a "managed" policy for some minor service which accidentally gave users root access in the account: https://medium.com/ymedialabs-innovation/an-aws-managed-poli...


Ironically Capital One built Cloud Custodian, which does just this. But as you can see by the number of pull requests, it is an immense problem space: https://github.com/cloud-custodian/cloud-custodian/pulls?q=i...


AWS locks everything down by default. As far as I know, there is no direct way through the GUI to make a bucket public; you have to know how to add the JSON policy, and even then you get a very noticeable warning.
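
For reference, this is roughly what you have to go out of your way to write (a sketch only; the bucket name is made up, and accounts with Block Public Access enabled should reject the call outright):

    import json
    import boto3

    # World-readable bucket policy: every object is readable by anyone.
    public_read = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-public-bucket/*",  # hypothetical bucket
        }],
    }

    s3 = boto3.client("s3")
    s3.put_bucket_policy(Bucket="example-public-bucket",
                         Policy=json.dumps(public_read))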


Check out the Trusted Advisor dashboard. GuardDuty is also a good thing to have running in your account.
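
For anyone wondering what that involves: turning GuardDuty on and pulling its findings is only a few API calls (a minimal sketch; GuardDuty is per-region, so real setups enable it in every region, ideally org-wide):

    import boto3

    guardduty = boto3.client("guardduty")  # acts only on the configured region

    detector_ids = guardduty.list_detectors()["DetectorIds"]
    if not detector_ids:
        # Nobody has enabled GuardDuty here yet; turn it on.
        detector_ids = [guardduty.create_detector(Enable=True)["DetectorId"]]

    for detector_id in detector_ids:
        finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
        print(f"Detector {detector_id}: {len(finding_ids)} finding(s)")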


We run this tool every now and again, and it has helped me in the past. Not sure how it compares to Cloud Custodian though.

https://github.com/toniblyx/prowler


Basically, no. AWS is flexible enough to let you set it up in any complicated way you want, meaning it gives you plenty of rope to hang yourself with. It's arguably much easier to audit a random Linux box for security than an AWS account.


I don't know that there's a "quick audit"; there are too many vectors for any single professional to check. You'd be best served by using an auditing or monitoring solution. Even then, you're really just auditing _known_ vectors, as it's likely impossible to cover all possible ones.

I used to work on an auditing and monitoring platform; there really are too many vectors.


AWS roles and access are incredibly complex to configure and audit though. Needlessly so.



