
Cloud Leak: How A Verizon Partner Exposed Millions of Customer Accounts - tbourne
https://www.upguard.com/breaches/verizon-cloud-leak
======
LinuxBender
15+ years ago, there were similar issues with people leaving directory
indexing enabled with Apache. At least in those cases, people knew they were
putting their data on a web server. And more often than not, what was
downloaded was at least logged by default.

Now people are moving their infrastructure to the public clouds. With the
flick of a single API call, you can expose the most sensitive data over HTTPS
with no authentication and no logging. That seems to me worse than the
problems of 15+ years ago. If I suggest ways to fix this, I will get responses
citing "friction", and reducing friction was usually the reason people moved
out of their datacenters to begin with.
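
To make it concrete, here's roughly what that single call looks like with
boto3 (bucket and key names made up):

    import boto3  # assumes AWS credentials are already configured

    s3 = boto3.client("s3")

    # One call and the bucket's listing is readable by anyone on the
    # internet, anonymously, over HTTPS:
    s3.put_bucket_acl(Bucket="customers-bucket", ACL="public-read")

    # One more flag at upload time and the object contents are too:
    s3.put_object(
        Bucket="customers-bucket",
        Key="call-logs/2017-06-01.log",
        Body=b"name, phone, account PIN, ...",
        ACL="public-read",
    )

    # Unless S3 access logging or CloudTrail data events were separately
    # enabled (neither is on by default), nothing records who downloads it.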

Is there a practical solution to this? Could AWS do something better?

~~~
btgeekboy
If you know you have sensitive data in a certain location (e.g. an S3 bucket,
an RDS database, an application on EC2), it would be pretty straightforward to
get a VM somewhere outside your "network" that periodically scans those known
endpoints and alerts when something is amiss.
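
Something like this, roughly (endpoint list and alert hook are placeholders;
it leans on the fact that a private S3 bucket answers anonymous requests
with a 403):

    import requests  # pip install requests

    # Hypothetical list of endpoints that should NEVER answer an
    # unauthenticated request. Replace with your own buckets/hosts.
    SENSITIVE_ENDPOINTS = [
        "https://customers-bucket.s3.amazonaws.com/",
        "https://billing-exports.s3.amazonaws.com/",
    ]

    def alert(message):
        # Placeholder: wire this to email/PagerDuty/Slack as appropriate.
        print("ALERT:", message)

    def scan():
        for url in SENSITIVE_ENDPOINTS:
            try:
                # Anonymous GET from outside the network, no credentials.
                resp = requests.get(url, timeout=10)
            except requests.RequestException:
                continue  # unreachable is the desired state
            # Anything below 400 means the endpoint served data without auth.
            if resp.status_code < 400:
                alert(f"{url} answered {resp.status_code} without auth")

    if __name__ == "__main__":
        scan()

Run it from cron on a box that holds no AWS credentials, so you're seeing
exactly what a stranger on the internet would see.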

~~~
tbourne
Scanning is a good idea, but unfortunately most inexpensive scans only do
vulnerability scanning, which wouldn't catch this particular problem. A
potential solution is to have a second set of eyes on things. Take some
insight from the pair programming model and have two sysadmins look over a
system design before it goes into production. I once heard someone say,
"Experience isn't worth what it costs, but you just can't get it any other
way." It's really unfortunate that 14 million of us Verizon customers had to
pay the price for that sysadmin to learn the lesson.

But back to the original article: this whole thing was broken on many levels.
Whoever wrote the logging function shouldn't have been logging sensitive data
like that in the first place. The logs should have been consumed by a logging
platform directly instead of being dumped as flat files into an S3 bucket.
The S3 bucket shouldn't have been publicly accessible. Someone should have
reviewed this system. It's also quite possible that someone did review it,
raised red flags, and was squashed by management because they had to hit a
ship date. And I'm sure there are many other reasons.
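
Taking just the first failure: scrubbing sensitive fields before anything
hits disk is cheap. Rough sketch with Python's stdlib logging (the patterns
and field names are made up):

    import logging
    import re

    # Hypothetical patterns; tailor to whatever your records actually hold.
    REDACTIONS = [
        (re.compile(r"\bpin=\d+", re.IGNORECASE), "pin=[REDACTED]"),
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    ]

    class RedactingFilter(logging.Filter):
        """Scrub sensitive values out of every record before it's emitted."""
        def filter(self, record):
            msg = record.getMessage()
            for pattern, replacement in REDACTIONS:
                msg = pattern.sub(replacement, msg)
            record.msg, record.args = msg, None
            return True

    logger = logging.getLogger("calls")
    handler = logging.StreamHandler()  # in prod: a log platform, not flat files
    handler.addFilter(RedactingFilter())
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    logger.info("call completed for account 1234, pin=5678")
    # -> call completed for account 1234, pin=[REDACTED]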

~~~
true_tuna
There are monitoring tools to alert on this situation and many like it. I use
Threat Stack, but there are several others. I think CloudWatch can even do it.

If you want to go stupid simple, you can use the utility mon: have it curl a
test file from the public URL and alert if you ever get it. Also, in what
universe does it take more than five minutes to fix this? "Dude, you're
leaking customer info from an S3 bucket." "Shit! Which one?"
"customers-bucket." "OK, fixed. Postmortem scheduled."

