> No bank account numbers or Social Security numbers were compromised, other than:
Then below that, in non-bold, it basically says "oh, except for these 140,000 social security numbers and 80,000 bank account numbers" - which is the primary reason folks are worried about this!
To me, the first thing you're going to see is "No bank account numbers or Social Security numbers were compromised" in bold letters, which is misleading at best. Technically they're telling the truth, but the way they've presented it is clearly meant to mislead.
On top of that, I'm a Capital One customer myself, and I can't figure out how to find out if I was affected at all!
But further, why even word it that way? It was clearly done intentionally. There's no need to present it this way other than to intentionally mislead. Why not just say, in bold letters, "140,000 Social Security numbers and 80,000 bank account numbers were compromised"? Or simply "The following were compromised:".
> We will notify affected individuals through a variety of channels.
Gives me some confidence the very small subset of individuals who should be worried about those much more exploitable leaks will be informed and offered assistance.
But as another Capital One customer I'm quite irked I can't just query a simple page to find out what data of mine was leaked, if any.
> the cloud-computing company, on whose servers Capital One rented space, wasn’t identified in court papers.
Does this feel like it was just an S3 bucket with permissions set incorrectly? I've come across sensitive documents in S3 buckets with a well crafted google search.
Correction: according to the complaint, the defendant is alleged to have assumed an IAM role in the context of Capital One's account whose policy provided access to the S3 bucket in question. So it wasn't that the S3 bucket was public, but rather, that there was some vulnerability she took advantage of by which she obtained indirect credentials to it.
(Complaint, page 6, lines 14-27.)
> About 140,000 Social Security numbers were accessed, as well as 80,000 bank account numbers from credit-card customers, the bank said.
I haven't yet read (all of) the complaint but I presume it goes into even more detail than the article did.
I won't link it here, but here's a screenshot of a snippet: https://i.imgur.com/NezWVKw.png
"Thompson was previously an Amazon Web Services employee. She last worked at Amazon in 2016, spokesman Grant Milne said. The breach described by Capital One didn’t require insider knowledge, he said."
(If I could query all AWS permissions for publicly exploitable permissions, that would comply, for example.)
Point stands; they’re being very careful to say that there aren’t any CVEs, but they are also very carefully not saying whether she abused the privileges of her role to identify misconfigurations more rapidly than she could have otherwise.
Special access can make the difference between "locating X% of misconfigured users in a single admin panel query" and "locating X% of misconfigured users by scanning every S3 bucket in existence without being caught".
Or to draw a weak analogy, knowing that a closed-source PRNG algorithm is defective does not necessarily help locate all keys generated by it, but having access to force it to generate numbers for you (or to study its source code) absolutely does help.
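To make that analogy concrete, here is a toy Python sketch (not any real key-generation scheme): a deterministic PRNG means that anyone who learns the seed, or can drive the generator themselves, reproduces every "key" it ever emitted.

```python
import random

def weak_keygen(seed, n):
    """Toy key generator built on a seeded (deterministic) PRNG."""
    rng = random.Random(seed)  # Mersenne Twister; fully determined by seed
    return [rng.getrandbits(128) for _ in range(n)]

# Defender generates keys with a seed the attacker later learns.
victim_keys = weak_keygen(1234, 3)

# Attacker replays the generator and recovers the exact same keys.
attacker_keys = weak_keygen(1234, 3)
assert victim_keys == attacker_keys
```

The point mirrors the comment above: merely knowing the algorithm is weak is less useful than being able to run or inspect it yourself.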
assume it is no longer private.
Not much really there to learn
Violation of copyright would appear to be a significantly worse offense according to present US law.
In point of fact, the prosecutor on Swartz's case (Stephen Heymann) had previously authored an article describing how the Internet age allowed crime to scale, enabling hackers to commit thousands of criminal acts per second. It's my personal belief that Heymann wanted to use Swartz's case as a validation of this belief.
(Source: The Idealist: Aaron Swartz and the Rise of Free Culture on the Internet, ISBN 978-1476767727)
I think at minimum, to be considered an IDE, you need to be able to edit, possibly compile depending on the language, and run/debug from within the same tool. By that loose definition, I've joked my most-used "IDE" would be bash. I can edit with vim, compile/link with make/gcc/ld, and debug using gdb or run my bins directly.
I mean, it's an integrated development environment in that I can access all of my tools from one centralized location, the bash shell. But it's certainly not integrated in the sense of a GUI that hides the nuances of the various tools' commands behind menus and friendlier non-command-line names, making the half dozen or so tools appear to be a single entity.
I also use Visual Studio for Windows development and I've been switching between VS Code and PyCharm for Python development.
But are git and svn an IDE? No. They are both merely source control management systems.
But then I read your comment and realised in *nix the program is actually called "git". So I concede :-)
The last commit in the Git repository where her resume is located shows this:
(HEAD -> master, origin/master, origin/HEAD)
Author: Paige Thompson <email@example.com>
Date: Thu Jan 10 14:38:02 2019 -0800
update linkedin address
diff --git a/cv.pdf b/cv.pdf
index bf26140..add1ea9 100644
Binary files a/cv.pdf and b/cv.pdf differ
While attempting to recover, the s3 team discovered and/or decided the nameserver needed a full restart. That's when they discovered the info in the nameserver had grown so large since the last full restart years previous that it took far longer than expected to restart the nameserver. Right around that point in time my guess is they realized just how shit their morning was going to be. And their afternoon.
Somewhere in there, they realized that their health dashboard depended on s3 working.
Though to be fair, as an aws customer, we -- along with the rest of internet -- were well aware that stuff was badly broken.
I feel terribly for whoever did this, because IIRC, he or she just fat-fingered part of a command in a standard playbook, and the config script had no safeguards. I personally took down a company you've heard of in the exact same way; I knocked all PoPs off the internet because the config script had a hard requirement around certain values that was neither communicated to me nor checked. And I was trying to figure out wtf I did to a system I was not particularly familiar with while receiving forwarded texts from the CEO about cascading datacenter-down alerts.
> Capital One determined that the first command, when executed, obtained security credentials for an account named XXXX-WAF-Role, that in turn, enabled access to certain of Capital One's folders at the Cloud Computing Company.
Unsure how one would obtain credentials for an IAM role, but the above is verbatim from the complaint.
* edited to reflect this is lifted from the complaint, not indictment.
You use your own credentials and issue an API call to do it. If you're using the AWS CLI, it's "aws sts assume-role".
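For illustration, the CLI call mentioned above looks like this (the role ARN and session name here are placeholders, not the real Capital One role):

```shell
# Exchange your current credentials for temporary credentials
# belonging to the target role. The account ID and role name below
# are hypothetical examples.
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/example-WAF-Role \
  --role-session-name example-session
# On success, STS returns JSON containing a temporary AccessKeyId,
# SecretAccessKey, and SessionToken for the role.
```

The call only succeeds if the role's trust policy allows your identity to assume it, which is what the sibling comments discuss.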
We do something similar with our accounts. You can place a restriction on the role that an MFA token must be used while assuming the role, so this allows you to give out longer-term credentials to your devs/admins that can then be used (with an MFA token) to assume a more privileged role.
The role itself needs to be configured with a trust relationship that allows for this, and many roles are restricted to AWS services (i.e. you are authorizing an AWS service to assume the role--not a specific user). I've never used WAF before though, so I'm not sure if it's typical for the WAF role to have that trust relationship or not.
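A trust policy of the kind described above might look roughly like this (the account ID and user name are placeholders; the condition is the MFA restriction mentioned in the parent comment):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/example-dev" },
      "Action": "sts:AssumeRole",
      "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } }
    }
  ]
}
```

With a `Principal` of an AWS service (e.g. `ec2.amazonaws.com`) instead of a user, you'd get the service-only trust relationship described above.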
They used those creds to launch something like 1700 GPU machines across the globe for a bitcoin mining network...
The culprit was from Germany...
We got it cleared and AWS forgave all the charges.
Every SDK that I have used lets you use a constructor without a parameter and will get your credentials from the config file/role.
(If anyone needs me, I’m busy feeling old after remembering having this conversation with a new PHP developer in 1998)
He was a remote worker and apparently had a poor handler on our end. I was head of ops, not in dev, so I just had to deal with the fallout.
From there any IAM role access the underlying server had, you would now have as well. And that would work with any sort of access (don't need root, etc.)
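A sketch of why that works: an attacker who can make the server issue requests on their behalf (e.g. via SSRF) can hit the EC2 instance-metadata service, which hands out the temporary credentials for any role attached to the instance. The endpoint path below is the real IMDSv1 path; the role name is a placeholder. (AWS later added IMDSv2 session tokens partly to make this attack harder.)

```python
# The link-local instance-metadata endpoint, reachable only from
# inside the instance itself -- which is exactly why SSRF matters.
METADATA_BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def role_listing_url():
    # A GET here returns the name of the IAM role attached to the instance.
    return METADATA_BASE

def role_credentials_url(role_name):
    # A GET here returns JSON with temporary AccessKeyId,
    # SecretAccessKey, and Token for that role.
    return METADATA_BASE + role_name

# Hypothetical role name for illustration:
assert role_credentials_url("example-WAF-Role").endswith("/example-WAF-Role")
```

Two proxied GET requests are enough: one to learn the role name, one to fetch usable credentials for it, with no need for root on the box.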
They were still successfully prosecuted though. And AT&T received no punishment.
When a company says jump the USG asks how high.
But it sounds like she's an engineer who used to work at AWS, specifically on S3. If true, it seems likely she would have insider knowledge of existing attack vectors and possibly vulns. Maybe even using something we discovered while on the job.
-Paige left code used in the "attack" on her GitHub.
-Paige left text files with unencrypted data there, too.
-Paige openly posted about it in an open (!!!) Slack channel and publicly named her VPN service of choice, which of course, matched access logs AND GitHub server logs. (Also tor, which the FBI agent was able to confirm and add yet another data point)
-Paige said "I have a leak proof IPredator router setup." nice.
Nice opsec there. Sheesh.
EDIT: Thanks for the PACER share, by the way.
Nothing about Opsec here. She basically asked them to arrest her. Probably had some of the usual motivations: "look at me I'm clever", "look at this stupid big company with bad security", or maybe used the opportunity for some political thing with banks. Not the sophisticated hacker type. But who knows.
Edit: originally I asked about her Github profile listed in the complaint as paigea(5x * characters)thompson but was iffy on whether that was okay on HN.
Not much is left.
She did indeed work at Amazon on AWS in 2016, specifically on S3:
That PDF also contains her home address and other personal information so I'd rather not link directly.
Interesting that she comments they skipped 3 for the fan values. Seemingly an oversight of the fact that fan values of 1, 2, 4 indicate it is probably a bitfield, with each bit indicating a fan speed.
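If those fan values really are bit flags, a quick check shows why 3 would look "skipped": it's simply the combination of the two low bits, not a distinct value. A toy sketch (the flag names are made up):

```python
# Hypothetical fan flags matching the observed values 1, 2, 4.
FAN_A = 0b001  # value 1
FAN_B = 0b010  # value 2
FAN_C = 0b100  # value 4

# The "missing" value 3 is just FAN_A and FAN_B set together.
assert FAN_A | FAN_B == 3
# And 7 would mean all three fans at once.
assert FAN_A | FAN_B | FAN_C == 7
```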
(this person's original comment was asking for her github profile)
There is such a lack of talent out there right now in the cybersecurity industry that it’s very easy for criminals to slip around undetected. You’d have to be a total idiot to get caught, or catch the attention of someone really motivated to catch someone.
No one calls Snowden just Edward, this comes across as a form of degrading women to girls.
Theo de Raadt is often just called "Theo" here for similar reasons. Rarely if ever have I seen him called "de Raadt" on this forum.
Your comment is unfortunately typical of drive-by Internet outrage these days.
I'm curious about what the first command could have been
Also this all unfurled after a report to their security line from someone monitoring gists - that public feed as well as text dump sites have always been a good source of new vulnerabilities
While it might not be okay to instigate such breaches, we might also consider these the actions of a whistleblower, especially given the unusual way she went about disclosing things.
Sure, perhaps there is a little bit of hey look at me about it, but at the bottom of the trough it is actually the corporation that has ultimate responsibility.
I look forward to a statement from Capital One of regret that they allowed the breach to happen and will strive for better standards of security.
And that is actually a message for the entire industry.
It occurs to me now that if I did that it would likely be a crime because of the harm to the company. The irony.
Plus, what are you going to do with credit card applications anyway? Sell them to a marketing company with some phony story? Or the 'sell them on the darknet to fraudsters in Russia' angle? Unless you're already involved in some dirty business, this isn't very valuable.
Reading about the mistakes made in the hack itself makes me wonder whether black markets and money laundering are skills they possess.
If you were lazy you could just hit up an existing vendor and ask them to sell your data in batches.
I’m not saying this would be a good idea, but it certainly wouldn’t be very difficult.
This will just intensely increase the scrutiny of where the data came from and they'd likely be caught anyway, unless they did a very clean job security-wise. Which very few people seem to be able to do when the feds really want you.
Moving to Russia or another country without extradition treaties would probably be a good first step of that plan.
But now its more like "Don't worry The 5Eyes have made back for everyone"
In the UK this would definitely open you up to the Computer Misuse Act, and I imagine the police would have something to say to you about evidence tampering too.
But if this were my credit card company, I would be pretty irked to be finding out about it weeks after the company knew, from the news.
Either way, not good.
Yeah AWS can’t protect you against a misconfigured environment
Capital One has been all in on AWS and has dedicated an immense amount of time and money to developing systems for managing their AWS resources (Cloud Custodian for instance) and yet they still couldn't protect their data. What chance is there that anyone else could?
IME, the usual mistake many implementors make is that they inadvertently grant too many privileges and often to the wrong audience.
There is nothing approaching quick setup and deployment at large banks.
Not Citibank, but I previously worked for a financial firm that sold a copy of its back-office fund administration stack. Large, on-site deployment. It would take a month or two to make a simple DNS change so they could locate the services running on their internal network. The client was a US depository trust with trillions on deposit. No, I won't name any names. But getting our software installed and deployed was as much fun as extracting a tooth with a dull wood chisel and a mallet.
This is my experience with one very large bank, but from speaking with others that have worked for/with other large banks, their experience has largely echoed mine. They tend to be very risk averse with external IT products, such as deferring critical security updates because they can't be sure what it could break and also likely don't have end to end tests for critical systems that could cost a lot of money if the upgrade fails.
I know this first hand, because you don't always know or understand what's going on in 3rd-party systems. I once screwed up a 3rd-party system hosted on site. I was testing an upgrade on a dev server. Part of it involved schema changes, and I had dbo rights on both production and development servers. The hidden part I didn't realize is that the 3rd-party tool stored DB settings in your Windows roaming profile. So, because we only had 1 Windows AD domain and no other network separation, even though I was on a dev box, I was talking to the prod DB. Didn't even realize it (it wasn't directly evident unless you dug deep into settings) until I started getting calls from my users, complaining of errors. This was on the 3rd of July in the US. By the time I figured out the issue, it was about 3-4am on the 4th of July.
Had to make the call of rolling forward or back. But the supplied installer was missing some packages, so I couldn't complete the install. If we rolled back, an entire day's worth of tedious work by a 10-person team would have been lost. Worse yet, the tool was used by traders in Europe who were about to start their day. Being early in the morning on a US holiday, I couldn't reach their support. Couldn't even get ahold of their EU support. I was on the phone with my boss, his boss, and the head of back office at the wee hours of the morning on a holiday.
Decision was made to hold off on doing anything until we could talk to the vendor on the 5th. Ended up rolling forward and completing the install, but I was nearly shitting myself. We were handling somewhere around 25B USD notional in bank debt that we could take no action on for several days (which caused huge issues in PNL - profit and loss - reporting for several business days).
Thought for sure I was going to be fired. But in the post mortem I explained everything, and it was agreed that while I shared some blame, the totality of it wasn't my fault, and that because I had diagnosed and fixed it in the most timely manner I could, I was ok. IIRC, the only real remediation we took to prevent a similar mishap was to disable roaming profiles on the dev servers and delete all existing profiles there...
Separate project: I know I was billed out at 500 USD per hour 10 years ago. That was working with an exchange. Initially a joint venture, my company decided to divest itself. We sold all the source for the system that we developed, and that they'd be running, to the exchange. We clearly documented our "build" process and requirements. The core part of the system (and as far as I know the only part that ever went live) was a Python app that used very specific modules, but we also had some patches that had been submitted upstream but were not yet in public distributions. So we were very explicit that you need exactly these versions of Python, these exact versions of the libs, and you need to apply our patches to the libs. We had also only developed and tested on a specific version of Linux, and indicated they should use the same, or we couldn't guarantee the software.
Well, we handed all of the source and documentation to the exchange. They, in turn, hired an outside consulting group. For the life of them, they could not get it to work. First question asked was: did you follow the instructions? Response was "of course, do you think we're idiots?"
The assertion that they followed the instructions exactly sent me down around a 3 week debugging session, attempting to reproduce the issues they were having in our office. Starting from scratch and the exact instructions I had written up for them (I was the only author of the Python app that was failing), I could not reproduce the issue.
After 3 weeks of back and forth, escalations on all sides and some thinly veiled accusations of sabotage, I went on site, sat down with the consultant, told him to start from scratch and show me what he'd been doing.
First thing I notice is that he installs the latest version of Python, and latest version of all the extra libs we needed. He'd completely ignored all of our instructions despite telling us the exact opposite!
It took all of 15 minutes to identify and correct the issue. Ended up billing close to 40K USD in support because the contractor didn't follow instructions and, well, lied (intentionally or not) about having done so. Never heard a peep from management about the hours or questioning the resolution, and as far as I know the exchange paid the bill without question, even in the height of the aftermath of the 2008 crash.
See also: https://aws.amazon.com/blogs/security/protect-sensitive-data...
As useless as these checkers are, the main problem is that there are so many different ways to gain access to resources that it's almost impossible to have a system that's useful to the business while also provably secure either manually or automatically.
Don't forget even AWS themselves created a "managed" policy for some minor service which accidentally gave users root access in the account: https://medium.com/ymedialabs-innovation/an-aws-managed-poli...
I used to work on an auditing and monitoring platform, there really are too many vectors.
At a minimum, AWS Support has near complete read access to AWS accounts in connection with support cases.
It would be interesting to hear from an AWS employee how access to customer information is controlled.
But they can't see what you have on the volumes attached to those instances, what processes are running, etc.
Internally we also talk to AWS support. They absolutely don't have much visibility into our accounts at all - much to my frustrations. They only see metadata - even for internal accounts.
The only teams that have some access to such information is security team, or when you Grant access explicitly to the other person via standard AWS auth mechanism (IAM)
I'm at AWS and we have basically zero insight into these things.
So my recommendation is only use AWS services that have been included in compliance certifications that are important to you: https://aws.amazon.com/compliance/services-in-scope/
That of course doesn't mean you won't get hacked, but there's at least some evidence that the service is operated in accordance within AWS control standards, which are generally quite good and should minimize your exposure to rogue admins run amok.
The other was someone I followed on Tumblr. I was shocked about him being arrested. He was pretty popular on Tumblr and me and him would chat on TinyChat from time to time.
The affidavit is a good read. Linked elsewhere in this thread.
(Edited: complaint, not indictment.)
A former Seattle technology company software engineer was arrested today on a criminal complaint charging computer fraud and abuse for an intrusion on the stored data of Capital One Financial Corporation, announced U.S. Attorney Brian T. Moran. PAIGE A. THOMPSON a/k/a erratic, 33, made her initial appearance in U.S. District Court in Seattle today and was ordered detained pending a hearing on August 1, 2019.
According to the criminal complaint, THOMPSON posted on the information sharing site GitHub about her theft of information from the servers storing Capital One data. The intrusion occurred through a misconfigured web application firewall that enabled access to the data. On July 17, 2019, a GitHub user who saw the post alerted Capital One to the possibility it had suffered a data theft. After determining on July 19, 2019, that there had been an intrusion into its data, Capital One contacted the FBI. Cyber investigators were able to identify THOMPSON as the person who was posting about the data theft. This morning agents executed a search warrant at THOMPSON’s residence and seized electronic storage devices containing a copy of the data.
The affidavit states “exfiltrating and stealing information, including credit card applications and other documents”.
She used a particular role to exfil from an S3 bucket. Not sure how she got the creds for the role she used to execute ListBuckets, etc...
Affidavit shows the accused was an employee at the unnamed cloud vendor (clearly AWS at this point) from 2015 to 2016.
That seems... unwise. Anyone have a pointer to the github post? Would be interesting to see if it was a "Haha! Look what I did!" kind of thing, or a "Crap, CapOne has an open S3 bucket" kind of post.
I mean if that's what some legal news site calls it… :P
I was ready to think this person was being set up by someone who didn't like her, given how exposed she was to being identified, but the Twitter and FB posts strongly suggest a vulnerable person making poor decisions instead.
>Jesus christ, how many times did she come back into Discord rooms she was banned from under new names, just to brag about how she "snuck in," like within two weeks, and of course getting banned again. Being a desperate attention whore is bad opsec.
>I guess she's finally getting all that attention she's been begging for.
>She pulled the same shit with our tiny IRC network nobody on earth could possibly give a shit about. I don't know how a person can be this insane. Relentless stalking of individual users, histrionic rants, literally attempting to dox randos and flooding the server with spambots, you fucking name it.
Sounds like personality disorder.
The idea that it's on the same level as ambien is absurd.
It really seems like a lot of cases of gender dysphoria are more society-driven. Younger trans or non-binary kids I know seem to be quite a bit happier than trans folks I know in their 30's. Gender is not inherently tied to sex, and variation in gender expression is normal and not unhealthy at all (societies all around the world recognize it). I think improving attitudes might really be having an effect of reducing the amount of gender dysphoria.
But yes, those tweets do show that this person is clearly experiencing a mental health crisis.
I wonder if we should create a new BSI (Broken System Interconnection) model
1 - Customer
2 - Former Employee
3 - Current Employee
4 - Bitcoin Miners
5 - Unknown Hackers
6 - Own Government
7 - Foreign Government
8 - Hardware Vulnerability
They appear to have turned over historical images and chat logs, not just for the person indicted, but even others in the same channel.
Did the FBI ask nicely or was there actually some formal process?
The entire server chat log is a few Google searches away.
Now there's a fail.
The ingress was okay, but the egress flow was very very bad!
The operator’s approach to bad actors was to respond as slowly as possible instead of quickly rejecting.
You can also add IP address restrictions to a bucket access policy; this was obviously not done here because once she had the credentials, it didn't matter where she was accessing from.
Tor node IPs are published, so you can just block that list.
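A sketch of the kind of bucket policy being described (the bucket name and CIDR range are placeholders; a real deployment would need care, since a blanket `Deny` like this can also lock out legitimate AWS-side access):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessOutsideTrustedRange",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "NotIpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
```

As the parent notes, though, a source-IP condition only helps if the attacker has to come from outside the allowed range; valid stolen credentials used from an allowed network sail right through.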
I get that much of this info is already public, but this feels like borderline doxxing.
That is some old school naming.
After reading through some of the complaint, it seems quite fitting.
But that can get you in trouble when you're playing with fire.
I asked if my personal data was stored in files downloaded to the laptop, they said yes.
When I asked why my data needed to be downloaded to the laptop and not limited to just online access they stopped responding.
This of course was the same company who mailed me my co-workers salary in spreadsheet form, twice because my name was similar to another manager.
Why that was necessary was beyond me too.
Even then, it’s unlikely that a security person would recommend compartmentalizing this particular data set. Any application that needs access to some of it probably needs access to all of it, and it makes little difference if you compromise a server and get one key or if you get 30 keys. The trust boundaries haven’t moved, so it would increase cost without really mitigating any threats.
Doesn’t excuse what happened, obviously
> The cloud-computing company, on whose servers Capital One rented space, wasn’t identified in court papers
I can’t tell whether the company virtual server got hacked or whether the cloud provider was who got breached. Hopefully just the vm
If you think about the attack vectors here, it was most definitely the virtual server that got attacked. If it was the cloud provider (Amazon), there are a lot of safeguards that these banks use to make sure that any data that touches the shared server persistent storage is encrypted. And when I say safeguards, I mean automation to make sure that this sort of scenario shouldn't ever happen.
This is a huge blow for the public cloud and financial services companies, unfortunately.
Edit: Seemingly a WAF misconfiguration issue. I wonder what happened. These rules should be applied automatically for Capital One using Cloud Custodian, so a config issue definitely occurred somewhere.
Final edit: A leaked account with access to IAM permissions. Good lord, Occam's razor was correct here.
The best link for understanding what happened is actually the court filing, not the media reports.
So this isn't a case of an S3 bucket being public/wide open; it's a case of WAF IAM permissions being overly broad, if I'm parsing the filing correctly. It's unclear how the WAF product was hacked/bypassed and its credentials obtained.
WRT Custodian in this equation, it's not really related afaics. Custodian has lots of filters to help determine things like whether my EC2 instance, or anything with an IAM role (Lambda, etc.), is overly permissive (the check-permissions filter). It also has the ability to filter individual statements and access on any resource with an embedded IAM policy (S3, Lambda, etc., there are many) on a fine-grained basis (allow y accounts but not x accounts) to protect against account-level access (the cross-account filter). And on EC2, via GuardDuty alerts, it can auto-remediate (suspend, memory snapshot, yank role, volume snapshot). It's used by lots of users/enterprises across the governance, security, and cost-optimization domains because it's flexible and supports many clouds.
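For concreteness, a minimal Cloud Custodian policy using the cross-account filter mentioned above might look like this (the policy name and whitelisted account ID are placeholders):

```yaml
policies:
  - name: s3-flag-cross-account-access
    resource: s3
    filters:
      # Flag buckets whose policies grant access to accounts
      # outside the whitelist.
      - type: cross-account
        whitelist:
          - "123456789012"
```

Policies like this flag overly broad bucket grants, but as the parent comment says, they wouldn't have caught an over-privileged WAF role being abused with valid credentials.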
If you've somehow left access to a bucket open the odds are that you also have it configured to let anyone with access to the bucket decrypt the files. AWS calls this server side encryption, where S3 automatically encrypts and decrypts files for you. You can also do client side encryption, of course, but it's much more difficult to manage because you have to deal with keys in your application.
Well, SSE-KMS is not difficult to manage if you have sensitive customer data like Capital One does. I use it all the time. You can pretty much audit the buckets and see what is going on.
And if Capital One had used SSE-KMS on the buckets, we might not be talking about this data breach today. Incompetence? Complacency?
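To show what SSE-KMS looks like in practice, here is a hedged sketch of building a boto3 `put_object` call that requests it (the bucket, key, and KMS alias are placeholders, not anything from the breach; `ServerSideEncryption` and `SSEKMSKeyId` are the real S3 parameter names):

```python
def sse_kms_put_args(bucket, key, body, kms_key_id):
    """Build put_object kwargs that request SSE-KMS encryption at rest."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",  # ask S3 to encrypt with KMS
        "SSEKMSKeyId": kms_key_id,          # which customer-managed key
    }

args = sse_kms_put_args(
    "example-bucket", "apps/record.csv", b"...", "alias/example-data-key"
)
assert args["ServerSideEncryption"] == "aws:kms"
# In practice: boto3.client("s3").put_object(**args)
```

The relevant property here is that reading an SSE-KMS object requires `kms:Decrypt` on the key in addition to `s3:GetObject` on the bucket, so a stolen role with only S3 permissions gets ciphertext.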
There, I gave you more than 10 seconds. Try keeping up.
He worded it carefully. He's not apologizing for the actual and potential harm of the breach so as to not take responsibility for it. Not a real, sincere, apology, but just a legally defensive move.
"The first command, when executed, obtained security credentials for a role known as *-WAF-Role" says the affidavit.
1) check if they were affected by this breach and
2) what customers who are affected should do??
great way to start the day...
Not just for this breach but for all the past and current ones we don’t know about and future ones that will happen.
The real problem is there is zero security/identity management in our financial systems which is beyond nuts in this day and age.