CIA hacking unit failed to protect its systems, allowing Vault 7 disclosure (washingtonpost.com)
215 points by sunils34 on June 16, 2020 | 98 comments



This happens in many corporations as well. It's fun and exciting to be on the red team (doing penetration testing, writing exploits, etc.), but the blue team (infrastructure and developer teams hardening things) is not only boring to most, it's also the team that gets the most grief from developers for introducing friction. If your company has a red team, ask how big the blue team is and whether it has the same freedom to develop and implement mitigating controls as the red team has to exploit things.

Hacker competitions mirror this. Red teams are allowed to bring in any exploits and do just about anything (as criminals would be expected to do), while the blue team is stifled by bureaucracy and not allowed to bring in anything.


Another, related paradox is that in corporate org structures, the CIO is responsible for making sure the company's systems are available and working correctly, while the CISO is responsible for securing them. The CIO's department is frequently seen as a profit center that unlocks potential for the company, while the CISO's is almost always seen as a cost center that (ostensibly) slows it.

This also contributes to perverse incentives (like the red/blue team imbalance): the CIO frequently gets their way and is more likely to get budget, while the CISO takes all the blame when budget-increase requests get declined and IT is tasked with keeping unpatched systems up and stable rather than patching them quickly. Obviously, the best orgs find a way to get both done, but resources are always scarce for the rest of us.


I left a high-paying infosec position at a large insurance corporation for this very reason. The CIO trumped the (fractional) CISO on literally every security issue that was surfaced - and worse yet, the CIO and CEO refused to acknowledge the risk being onboarded/ignored. The irony of insurance execs refusing to acknowledge information security risk was just too much.


I've been in infosec since the '90s. A lot of times I think this is on us. As much as I respect the technical acumen and creativity of my colleagues in the industry, I don't think we broadly understand risk that well, and as a consequence we do a pretty bad job of communicating it. We tend to peg the panic meter with multiplied likelihoods and catastrophized impacts of possible scenarios, while directly causing revenue losses by adding sometimes insane amounts of friction to the product delivery process.

That's not to say there aren't cowboy CxOs recklessly ignoring reality, but accepting risks is part of the job. The real answer generally lies somewhere in the middle of the two extremes.


> As much as I respect the technical acumen and creativity of my colleagues in the industry...we do a pretty bad job of communicating it.

This is the root of so many problems for technical teams in ostensibly non-technical businesses. More developers and engineers need to embrace the reality that their work doesn't always speak for itself - sometimes you have to speak convincingly on its behalf.


Or you wait until it explodes and then get the money either way. Plus, you don't have to bother with people who do not want to understand, which is the second problem commonly faced by technical teams. I've seen more than enough technical people doing everything they could to make people understand, but at the end of the day Sinclair's adage holds true: it is difficult to get a man to understand something when his salary depends upon his not understanding it.


It's not always about understanding, sometimes it's just about making them believe you. The relationship between business and tech doesn't have to be adversarial - learning how to get yourself a seat at the table and what to say once you get there can be a quality of life improvement across the board.


Agreed. It doesn’t seem appropriate for infosec people to be making decisions about which risks to mitigate, ignore, etc. They should provide input into that process, though. We struggled to even get the CIO and CEO to acknowledge and discuss infosec risk and make decisions about what to do with it.


Oh yeah, if they aren’t going to even show up to the conversation then it’s time to yank the ripcord.


By "yank the ripcord" do you mean leave the organization? I see this type of behavior at just about every company I have worked at. There is no real priority to fix security holes even when they are discovered.


Depends on the circumstance and what your career goals are. If you want to develop your leadership skills, stay put and try to drive change. If you're developing your IR/SOC/threat hunting skills, maybe stay put b/c you're likely to be needed (assuming the org is a large enough target to get interesting attention). If you're doing assessment/red team/pen testing, I'd stay a short while then move on b/c your reports are going to start to be recyclable. If you're doing security architecture/engineering/etc., you're going to be resource starved, so maybe move on.

Moral of the story: determine how it impacts your career goals and choose.


Strange.

I can imagine the average corp board member underestimating the risk accumulated by consistently ignoring CISO requests for more cybersecurity investment, but the insurance industry is used to dealing with low-frequency, high-impact payouts.

Do you think it was mis-communication, ignorance, greed, hubris, or something else?


All of the above.


I have hoped that "cyber insurance" might be able to price these risks, and also price information assurance best practice into premiums. [1] Do you think this has worked, or could work?

If an insurance company is unable to price its own internal IA risks either at all, or at a non-zero value, I'm discouraged from hoping for a market solution to the problem that, as the truism states, "offense is easy, defense is impossible." I think the intelligence services and LE have also done a bad job, as evidenced by the hoarding, instead of reporting or fixing, of vulnerabilities.

Schneier has lately argued that regulation is necessary. The idea of GDPR for infosec is unappetizing, but I have trouble thinking of any other solution that hasn't already failed.

[1] https://en.wikipedia.org/wiki/Cyber_insurance
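
For what it's worth, a minimal sketch of the expected-loss framing an underwriter would start from - ALE (annualized loss expectancy) = ARO (annualized rate of occurrence) x SLE (single loss expectancy), plus a loading factor. All numbers here are made up for illustration:

    # toy premium calculation using the standard ALE = ARO x SLE framing;
    # the likelihood, loss figure, and loading factor are illustrative only
    def premium(aro, sle, loading=0.35):
        ale = aro * sle               # expected annual loss
        return ale * (1 + loading)    # plus expense/profit/uncertainty load

    # e.g. a 5% annual breach likelihood with a $2M expected single loss:
    print(f"${premium(0.05, 2_000_000):,.0f}/yr")  # -> $135,000/yr

The hard part, of course, is estimating the ARO and SLE for a given insured - which is exactly the pricing problem in question.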


There are currently (or were recently) two large lawsuits regarding cyber insurance claims working their way through litigation. If they both go in a certain direction, the concept of cyber insurance may become much less appealing (far fewer claims could be paid out, making it relatively expensive for less benefit than many companies anticipated).

Basically, insurance only works when the insured has faith that the insurer will pay, and when both parties understand the boundaries of the contract. One of the lawsuits involves the effects of WannaCry, which the insurer claims was a state-sponsored attack. "Acts of war" is one of those common exclusions to insurance policies, so if the insurer wins that case, insurers will have an incentive to always claim cyber attacks are nation-state sponsored.

The other case I think is about the difference between a general corporate insurance policy which has some coverage related to fraud and the insurer who claims the insured should have purchased a standalone cyber insurance policy. I think that case partially revolves around "when fraud happens on a computer network, is that a 'hack' or is it traditional fraud?"


I'd actually expect this to be the opposite. Insurance is heavily risk-analysis based. It sounds like they were choosing to take the risks because either you didn't present them properly, or you don't realize how cheap the actuarial cost of non-compliance is.


I follow your reasoning... but no, that wasn’t the case here. A number of board members of this org fought for and succeeded in getting increased investment in a true infosec program, due to years of very lax security culture and a series of internal audits elaborating the risk to the org. The CEO and CIO were constantly grossly over budget on pet software dev initiatives, which the board was becoming increasingly concerned with - then here come the infosec folks with a laundry list of gaping security holes in said over-budget software projects. The CEO and CIO proceeded to dodge meetings, ignore risk assessment communications, direct their underlings to exclude and shut out the sec team, and keep the board in the dark. It was a toxic culture; glad I left when I did.


Hacker competitions often seem very contrived to me. I suspect that in order for the red team to make any progress you have to tie the blue team's hands behind their back. Most of what I see from the penetration testing community is pretty gimmicky and situational, and often doesn't take into account the attacker's risk/reward ratio.


I disagree completely. Red team tools and techniques are different and gimmicky for a reason: their goal is to demonstrate the effectiveness (or absence) of security controls and processes, while the bad guys have more time and a more precise target. For example, 0-days and disruptive actions are mostly prohibited for red teamers.


I agree completely. I see it as entertainment and a way to recruit people out of college.


What would be a less gimmicky setup?


Allowing the blue team to fight back, maybe? Or to actively track the red team - active defense instead of only passive defense?

Moreover, the outcomes are different for both teams:

- RedTeam succeeds => they are seen as "real" hackers/heroes and the BlueTeam are the poor incompetents

- RedTeam fails => the BlueTeam did "only" its job, the investments in cybersec paid off... so the cybersec budget can be reduced.

So, for RedTeam, it's either a win or a tie. And for BlueTeam it's either a tie or a loss...

If the BlueTeam could fight back, maybe this could change...


That's true, but only because it mimics real life. The defenders are always at a disadvantage here: they have the boring job, and one where a single mistake is one too many. And they have to achieve that perfect score while operating within the rules.

On the other side the attackers have the more exciting job and only need one success which they can achieve by using whatever means they see fit.

You'll see this outside of IT just as well, like in sports. Goalkeepers (defenders) vs. strikers come to mind, but at least there they all operate within the same set of rules.


I kind of like the dual approach. The first team to get into the box has to try to hold onto it while still maintaining the specified services it's supposed to be providing in the simulation. The winner is whoever holds it the longest.


It's inherent to the field. A successful blue team is a distributed win - every line of code did what it was supposed to do. A successful red team is a concentrated win, for the people who found the few lines of code that did something else. The job of a red team is to make things interesting. The job of a blue team is to keep things boring.


That's good. Perhaps something like: if they can attribute the attack to a particular machine, the red team gets "arrested".


Easier said than done; the red team can’t break real laws (routing through compromised hosts) whereas real hackers will.


Do the feds still attend DC? >:}


No and they don’t come because hackers asked them not to. >:/


Found the Fed.


Let the non-red teams use pre-existing scripts, code, etc., to harden things. This of course would level the playing field and make the competition much less fun for the red team. Attendance would drop off quickly and companies would no longer sponsor these events, as the primary purpose is to recruit people out of college.


Actually, this could be made like a CS:GO competition:

- RT is the terrorists (T)

- BT is the anti-terrorists (CT)

The RT has to "plant" an exploit. The BT can either block/track the RT or "defuse" (find/disable) the exploit.

The "maps" would be the kind of system:

- an AD behind a firewall

- a web server with data to extract from a backend DB

- and so on...

The sponsors could sell either the skills of their pen-testers for hire, or their solution for securing a system, so it might be a good marketing campaign for the winner...


I can't tell if you're being facetious, but you just invented 'capture the flag' competitions.


That's why "purple team" is the way (not sarcasm, for people not aware of purple team methodology)


Words can't describe how normal that is. Exploit tools require local systems to be super open in order to be frictionless.

Even in the consumer industry: anyone remember all those very silly people who installed BackTrack 2 (precursor to Kali; based on Slackware, not Debian) to their main drive and then went to DEF CON and got rekt because their OS was insecure (and couldn't be updated!)?

Exploit development is a glass cannon: remove all friction to modify the system and craft packets, invoke monitoring modes for hardware and frictionless tracing... that's going to have a security cost.

This echoes a wider issue in the industry, "development" vs. "sysadmin" mindsets, where sysadmins are stifling and developers are all about removing barriers to progress faster and iterate more.


What's the story re: backtrack2, for the uninformed?


I'm trying to find a citation here, but it's difficult because "Backtrack 2 ssh exploit defcon" is going to produce a lot of unrelated content.

Anyway, I can give you the skinny on the situation:

1) Backtrack 2 did not have an installer; it was a live CD. But that doesn't stop you from installing it by just copying the live environment to a disk (with some mount-binding and a grub install, you're all good!). There were guides for doing this, although they all had large warnings and the Backtrack maintainers cautioned heavily against it.

2) Because it was a live CD there was no package update mechanism. It was not based on Debian at the time, so there was no apt or anything similar - and even if there had been, there were no repositories. Backtrack was a "tool", not really a distro.

3) sshd was one of the services started on system boot for Backtrack 2.

4) Someone at DEF CON unveiled an sshd exploit, a pretty nasty one. They had disclosed responsibly and everyone had been patched for at least 6 months - except the people who went against recommendations and installed Backtrack 2. They all got rooted.

Bonus: everyone who ran Backtrack 2, without exception, ran it as the root user, since that was the default, and the software that normally complains about such things had been patched not to complain. xD


I don't remember that one, but it's similar to the WiFi Pineapple vulnerability that was being exploited a few years ago.

https://www.csoonline.com/article/2462478/hacker-hunts-and-p...


>4) Someone at DEF CON unveiled an sshd exploit, a pretty nasty one. They had disclosed responsibly and everyone had been patched for at least 6 months - except the people who went against recommendations and installed Backtrack 2. They all got rooted.

Yeah, I don't think this happened. Nobody has publicly exploited an OpenSSH RCE in ages.


It may have been the kernel; frankly, I'm fuzzy on the details - I just remember the staunch warnings and feeling vindicated.

This was like 2007-8.


> Exploit tools require local systems to be super open in order to be frictionless.

Yes, but your "local system" that receives traffic or whatever doesn't need to be the one having access to all your data…


That means that your software can never actually be deployed anywhere.

Once deployed, your self-produced tools, which have very little security protection themselves, can be pilfered. Bonus points for tapping into the software deployment platform and downloading everything.


The article tries to make it sound like the failure was a lack of prioritization, and that if they had just focused correctly the problem could have been avoided - but I do not see why anybody would assume they could protect these systems even if they tried.

How well protected do you think cyber-weapons designed to surveil countries, disable infrastructure, and destabilize governments should be? How capable and well-funded should the attacker need to be before gaining access to cyber-weapons designed to kill economies and people? $1B, $10B? A team of 1,000, 10,000?

Does anyone know of any system or organization in existence that would even be willing to claim they can stop a team of 1000 dedicated hackers working full-time for 10 years funded with $1B let alone put it in writing? What is the highest you have heard? Is it even in the general ballpark?

It is absurd to assume that the failure to solve the problem is just a lack of prioritization when no one even claims to be able to solve it, and it is meaningless to propose that they adopt policies that do not even claim to protect against the actual threat model, let alone have evidence of such protection. They either need to find someone who will make the extraordinary claim that they can provide an actual defense, and who has the extraordinary evidence to back up that extraordinary claim, or they MUST NOT deploy such systems, since they cannot be protected.


Yeah, I guess some people really misunderstand how hard making a secure system is. Of course you can't kill an economy or very many people with that kind of budget, but really, you don't even need that kind of funding to break into most networks.

I guess it's safe to say that with even $1M of funding and a small team of dedicated security researchers, coupled with the right people for social engineering, you can break into any network. Everyone can be fooled, and humans are always the weakest spot - especially now, when information about everyone is publicly available on social networks, so you can gather all the information you need remotely.

And when it comes to hacking into the network of a company with no dedicated budget for cybersecurity, the cost of an attack would be one or two orders of magnitude lower. Some self-organized groups of hobbyists prove you can even do it with no funding at all.


How does somebody exfiltrate 34 TERABYTES from a secure facility without getting noticed?

To misquote Dr. Strangelove, "ze whole point of ze secret hack is lost if you don't keep it a secret." https://youtu.be/2yfXgu37iyI?t=205

Oh, maybe they have a firewall built on a RaspberryPi somebody ordered online.

Seriously, WTF? This is as insecure as having contract sysadmins with root privilege spread all over the globe.

And when will these state actors with unlimited funding figure out that NOBODY can keep secrets forever, not even them?
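
To put the 34 TB in perspective, here's a back-of-envelope sketch of the transfer time alone, assuming sustained throughput and ignoring protocol overhead (and, of course, a physical copy would bypass the network entirely):

    # how long does 34 TB take to move at a given sustained link speed?
    TB = 10**12  # bytes

    def transfer_days(terabytes, mbps):
        seconds = terabytes * TB * 8 / (mbps * 10**6)
        return seconds / 86400

    for mbps in (10, 100, 1000):
        print(f"{mbps:>5} Mbps: {transfer_days(34, mbps):7.1f} days")
    # ->    10 Mbps:   314.8 days
    # ->   100 Mbps:    31.5 days
    # ->  1000 Mbps:     3.1 days

A flow that large, sustained for that long, should light up any serious egress monitoring.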


Man, I've got to tell you, there are low standards almost everywhere. I've pulled off multiple (legal) gigs where you'd think "surely X has done Y to stop obvious negative conclusion Z" and no, they did not do Y. They did some dumb B or C, and it was trivial to detect and get around; at best, it took them a month to notice what you did, and their new countermeasures weren't up to the challenge either.

This is why I've been so concerned about cybersecurity and cyberwarfare. I do not see gross competence here, and most of the people I respect who write about this type of thing are sounding the alarm - see Click Here to Kill Everybody, or Matt Tait (@pwnallthethings on Twitter) ending an Infiltrate conference talk with a nuclear bomb as the final image.


Absolutely. So now let's consider the source, the role that three-letter acronym fulfills, and the strategies and tactics it's known to use.

Put another way: perhaps it's not an accident? And perhaps some of what was leaked was a decoy?

Yes, keeping secrets is difficult. All the more reason to take advantage of that.


>So now let's consider the source, the role that three-letter acronym fulfills, and the strategies and tactics it's known to use.

Like leaving data on their secret assets available via Google searches, leading to hundreds of deaths? And firing the employee who warned them of the problem seven years before it was exploited?


I would suggest you research a bit how intelligence and counter-intelligence actually work; not the Hollywood version.


I have; I was describing the CIA's recent history. Thinking CIA incompetence is some classic subterfuge is more of a Hollywood plot.

https://finance.yahoo.com/news/cias-communications-suffered-...


You'd think at least some of these inept cyberspooks would have read Neal Stephenson's Cryptonomicon. Or Brian Krebs. Or Bruce Schneier.

Or even the news story of how their old boss(!) John Brennan had his AOL(!) email account(!) cracked(!) by a teenager(!) guessing his password(!). The teenager exfiltrated something sensitive, a job application I believe, and was prosecuted for it. Meanwhile, the former Director of Central Intelligence gets to keep his reputation.


He did not keep his reputation, at least not among the people who care about that sort of thing.

Source: lived around DC when it happened, had contractor friends complaining out loud about it


What are the tools to help orgs notice exfiltration?


Glossing over 10 years of tens of thousands of people's work: things like Titan Rain (1, 2) led to a lot of thinking about monitoring your production environment, with things like the Istio sidecar system.

(1) https://en.wikipedia.org/wiki/Netwitness

(2) https://en.wikipedia.org/wiki/Shawn_Carpenter


Preventing unauthorized USB devices or SD cards is a basic one. Many defense contractors have USB disabled and/or the ports filled with glue.
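
On the software side of that control, a hypothetical audit sketch (it assumes a standard Linux modprobe.d layout, where a "blacklist usb-storage" or "install usb-storage /bin/false" line keeps the kernel from loading the mass-storage driver):

    # check whether the usb-storage kernel module is disabled on this host
    import glob
    import re

    PATTERN = re.compile(r"^\s*(blacklist|install)\s+usb[-_]storage\b")

    def usb_storage_disabled():
        for path in glob.glob("/etc/modprobe.d/*.conf"):
            with open(path) as f:
                if any(PATTERN.match(line) for line in f):
                    return True
        return False

    print("usb-storage disabled:", usb_storage_disabled())

Glue is harder to audit remotely, but also harder to bypass.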


Firewall alerts about large outbound data flows.
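
As a minimal sketch of that idea - assuming you already export per-connection flow records somewhere; the CSV columns, filename, and threshold here are made up for illustration:

    # toy egress monitor: sum outbound bytes per internal host from flow
    # records and flag anything over a per-day threshold
    import csv
    from collections import defaultdict

    THRESHOLD = 50 * 10**9  # 50 GB/day per host; tune to your baseline

    def flag_exfil(flow_csv):
        # assumed columns: src_ip, dst_ip, direction, bytes
        totals = defaultdict(int)
        with open(flow_csv) as f:
            for row in csv.DictReader(f):
                if row["direction"] == "outbound":
                    totals[row["src_ip"]] += int(row["bytes"])
        return {ip: b for ip, b in totals.items() if b > THRESHOLD}

    for ip, sent in flag_exfil("flows.csv").items():
        print(f"ALERT: {ip} sent {sent / 10**9:.1f} GB outbound today")

A real deployment would baseline per host and alert on deviation rather than use a fixed threshold, but the principle is the same.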


I saw a screenshot of a CNN article which said that the CIA frequently used tactics to make hacks appear as though they were from Russia. Which is something I always suspected was relatively easy to do... change some logs, some timestamps, use some existing code... I'm not a hacker per se, but most of us here write code and deal with these kinds of things...
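
The intuition is right for the simplest artifacts. File timestamps, one of the classic forensic breadcrumbs, can be rewritten in a couple of lines - a toy illustration only (the filename is hypothetical, and both real tradecraft and real attribution go far deeper than this):

    # toy "timestomping": backdate a file's access/modification times so a
    # forensic timeline suggests activity during, say, Moscow office hours
    import os
    from datetime import datetime, timedelta, timezone

    msk = timezone(timedelta(hours=3))  # UTC+3
    fake = datetime(2016, 6, 14, 10, 30, tzinfo=msk).timestamp()
    os.utime("dropped_tool.bin", (fake, fake))  # set (atime, mtime)

Which is why serious attribution leans on infrastructure, behavior, and human intelligence rather than artifacts like these alone.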

So does anything in this vault possibly call certain recent allegations of Russian interference into question?


The intelligence community's opinion that the DNC hack was done by Russia was based upon a single source: a private organization, CrowdStrike. But given that all the heavy-hitting nation states regularly frame others, "Russia's fingerprints" can mean either they did it or they didn't, so it's functionally worthless.


That's completely untrue.


Shawn Henry said, "We said that we had a high degree of confidence it was the Russian Government."

Sorry, but "high degree of confidence" is not proof, especially not from the organization that told us Iraq had WMDs with high degrees of confidence.

Additionally, at no point in time did they have access to the hardware.

Are you forgetting that this is the same collection of people responsible for being unable to secure their own hacking tools?


Skepticism of the claims of law enforcement and the intelligence community is good, for a multitude of reasons, but the case here is a lot stronger than you're suggesting and is substantiated by much more than mere finger-pointing by the US government or other governments.

It's unfortunate that the political climate in the US is on such a knife's edge right now that basically no one trusts anyone and everyone is running with their own databases of the facts of the world.

I understand the US government is itself very largely to blame for this deep distrust, but posts like yours make me worried for the next few decades. This isn't a criticism of you at all, but just general concern that things are kind of coming apart at the seams societally. I really hope the "two movies on one screen" phenomenon doesn't escalate to the point that the screen shatters into a billion pieces.


You're either misleading or ill-informed. Since 2016 it has been well documented that Russia intervened through hacking and disinfo operations.


It's my understanding that nothing truly concrete has been shown to the public?


There has been direct testimony from intelligence officials and thousands of pages of reports, including very technical details. Do you want server logs, intercepts, confessions? All of these provide nothing of value to the general public.

When intelligence agencies share clear evidence that a dictator gassed his own civilian population, no one cares, or trolls ask for more evidence.


>When intelligence agencies share clear evidence that a dictator gassed his own civilian population

Funnily enough, there's no clear evidence of this. According to leaked OPCW documents, there's a higher probability the gas was manually placed at the site. [1] Which, of course, calls into question the Syrian government's involvement, especially given earlier intelligence showing ISIS had possession of such chemical weapons.

[1] https://www.independent.co.uk/voices/douma-syria-opcw-chemic...


You're asking for clear evidence, but then using an op-ed from a journalist known to be controversial on Syria, sharing a WikiLeaks leak, after the GRU was caught hacking the OPCW?

Clear evidence you can't fake: a rush of hundreds of people (including children) to the different hospitals near the Khan Sheikhoun site, all showing the same respiratory and neurological symptoms. How can one fool so many doctors?

Here's a breakdown of the exact, single email/document used to "discredit" all chemical attacks perpetrated by al-Assad on his population: https://www.bellingcat.com/news/2019/11/25/emails-and-readin...


This seems to be some form of strawman, given that I never even implied there was no attack - merely that it was misattributed, according to leaked documents written by chemical experts.

Also, Assad was by all accounts winning the war and pushing back on all fronts at the time. Do you think he's such a lunatic and so strategically bankrupt that he'd launch a chemical attack on his own people while he's winning? Or is it more likely that ISIS launched a false flag attack using chemical weapons that we know they have in order to get the West to do their bidding against Assad?

The Syrian war is a mess, and there are no good guys. The US-backed rebels commit war crimes and behead children, for example.

The source of leaked documents really doesn't concern me as long as they are authentic. For argument's sake, if Snowden were a Kremlin double agent I wouldn't care, because he revealed genuine government wrongdoing.

Attacking the source generally isn't a valid argument, especially given the authenticity of the information.


All of that was based on the opinion of a private organization. No intelligence official ever had possession of the server or was involved at any time.


Russia did not limit its election interference to hacking one single server. This is actually very straightforward.

Here are more details and evidence if you are sincere and want to dig deeper: https://www.intelligence.senate.gov/sites/default/files/docu...


Do you think it's prudent for the intelligence community to allow private organizations to attribute nation state attacks on their behalf without inspecting the evidence?

It's a pretty simple question, and that's what it boils down to.


An account from 3 days ago alleges that the CIA is faking Russian hacking info.

Remember folks: there are disinformation campaigns on HN too.

Maybe they're right, but it's a little suspicious, no?


No, the Russian interference allegations were confirmed through other means, mainly human intelligence and other types of intercepts. The Dutch even filmed the meddling operations through a hacked security camera at the GRU's offices.


I don't see how the Dutch story is relevant, if it's the one I looked up, so it sounds like there is at best circumstantial evidence. Even motive isn't very reliable, because all kinds of people are out to do things like influence elections.


It is intelligence, not "at best circumstantial evidence". And no, "all kinds of people" did not have the explicit motives highlighted by the Mueller Report, the Senate Committee, and 18 US intelligence agencies. I guess spitballing theories on HN is always more accurate than thousands of analysts across Western countries sharing this analysis.

Just one source: https://www.intelligence.senate.gov/sites/default/files/docu...




Reminds me of any “security” product. Next time you get the chance, I suggest you tear into any industry standard security tool and you’ll be surprised at what you find.


I find it ironic that the CIA didn't bother to have its systems secured/verified by the NSA. I'm sure the CIA thought they were good enough; coming from an organization that has been infiltrated since its inception, the hubris isn't surprising.


My limited understanding is that these orgs compete with each other for budget allocation and would never allow access into each others systems, but I could be wrong.


It's less about budget and more about "we're not the DoD and can do whatever we please; stay the hell off our lawn."

Even if it were a "hey, could you look at this and tell us what you think" with no obligation to address issues, it is undesirable to establish a precedent.

They do use standards and recommendations from NSA/OMB for enterprise systems. Even the US Courts went that route, just with a lot of renaming of things so it can't be seen as being subservient to the Executive branch. There are some good frameworks and standards that you shouldn't waste time re-implementing.


Plus there is a reason you secure and compartmentalize information. The NSA may be compromised in some way, and giving them access means they could deliberately or accidentally leak something vital.

Same idea in reverse with the CIA -- maybe someone in the CIA is a bad actor and now knows the secret 0-days the NSA is using (because the NSA is busy locking their systems down), and those get leaked.


All the more reason to criticize both of them.

Half of the NSA's mission is to build/design secure communication systems for the US government and military.


This is true to an extent. The other half of the equation is just a cultural thing with the CIA. There are a lot of intelligence groups in the US, but the CIA considers itself the top tier. They’ve been around the longest, and even other agencies recognize them as kind of the eldest when it comes to intel.

The NSA does some seriously insane stuff, but I don’t think even they take themselves as seriously as the CIA does.


Maybe they saw a benefit in having no logs?

No logs, no congressional investigation.

These are smart well-resourced people. They don't do things like this for no reason.


Guarding information and guarding physical assets have one thing in common: each is largely a passive exercise in waiting for something to happen. For this reason it is very boring and unreliable. The only way to improve the situation is to have active and random drills in which someone attempts to steal the assets. This would make the work of the blue team a lot more rewarding, rather than being relegated to mindlessly blocking access to anything and everything.


I mean, you have more or less described a modern cybersecurity red team.


>34 terabytes of information, or about 2.2 billion pages.

It's insane that they left so much data available to be stolen.


Most of it is likely useless junk, or thousands of pages of logs, I'm guessing. No doubt there is some juicy stuff in there though.


Guess it's good to know that even big gov orgs are dysfunctional


all big orgs are dysfunctional. successful big orgs manage to work around it to a greater or lesser degree.


Unless you make engineers and entire companies focus on security through proper designs and standards, nothing will be secure. Most software is insecure because, geopolitically, the countries that make software are also the ones able to penetrate those systems better than the rest of the world.

No government will push to improve door locks unless that government isn't the most capable of defeating those locks. It's a cost/benefit function.

Right now, improving software security is a net loss for the US. So it won't happen while the US controls the computer and software industry.

So I'm not surprised to see even the best experts being beaten so easily.


A hacking unit is offensive. It's like saying, "America's elite nuclear force failed to stop an ICBM." Blowing things up (attack) is a different ballgame than defending things. Think of it this way: if you are a hacker devoting 40 hours a week to carefully studying and planning to infiltrate a network, you will succeed. APT actors have entire groups of teams dedicated to infiltrating one target at a time. Getting in is feasible; persisting, lateral movement, and exfiltration without getting caught are very difficult, but even commercial tools like Cobalt Strike are built to allow different teams to focus on different stages of a hack.


It's more analogous to saying "the defense contractors for a new stealth plane failed to protect the designs and prototypes, so the enemy now has all of the detailed info they need to build countermeasures against this stealth technology." Securing the plans is a key requirement for the stealth to continue working.

Also, I'm sure those members of "the hacking team" weren't allowed to discuss their work with their family/friends, so it's not terribly unrealistic to expect them to use even just basic security hygiene (e.g. don't share admin passwords).


No, that's not the analogy at hand. The designers of a stealth plane are just that. The right analogy would be if the Navy SEALs designed a secret weapon and someone infiltrated their ranks and exfiltrated the weapon's plans. Navy SEALs are not immune to moles. No org is.

Your implication that this was due to a lack of proper security hygiene is unfounded. Security hygiene reduces risk; it does not eliminate it. Risk is proportional to threat and attack surface, and an org like the CIA has a not-so-small attack surface and the whole world as its threat, so reducing risk by means of common security controls and hygiene will not reduce risk from the most persistent and resourceful attackers. An analogy to your reasoning would be "Google has an army of devs and security pros, so Chrome should never have a remote code execution vuln" - no, as much as they may have money and talent, modern software is too complex for those resources to eliminate all bugs. Perspective is important.


I agree that your analogy works better.

> Your implication that this was due to a lack of proper security hygiene is unfounded. Security hygiene reduces risk; it does not eliminate it.

Nope. No security professional will admit that anything ever eliminates risk, so that's a strawman fallacy.

The point is that sharing admin passwords is a blatant violation of cybersecurity hygiene which every employee of the CIA is capable of understanding and avoiding. If the org can't enforce even just the basic stuff, there's not much hope of raising standards above that.

> from the most persistent and resourceful attackers.

Here's a secret that everyone already knows: the most persistent and resourceful attackers will always get in given enough time.


I agree on both of your last two points. Not sure where we disagree then.


You screw up at offense if your weapons are destroyed or disabled. In the case of exploits, that is exactly what happens when they leak out. Your ability to attack, in this case, depends on keeping your arms useful.


This isn't what happened; their weapons were exposed and adversaries now know about them, but their effectiveness is still greater than zero. Digital weapons are copied, not stolen. This is the equivalent of the Russians sending spies to the US to steal nuke secrets, then developing their own nuke. The fact that the US has nukes has nothing to do with its ability to keep secrets and keep out spies. Furthermore, the Russians having nukes did not make American nukes ineffective; they simply lost an advantage, and to be frank, it was only a matter of time. Just like with the CIA hack. And it will happen again!



