DOD Just Beginning to Grapple with Scale of Weapon Systems Vulnerabilities (gao.gov)
414 points by molecule on Oct 9, 2018 | 215 comments



The good stuff is in the PDF:

https://www.gao.gov/assets/700/694913.pdf

- Running a port scan caused the weapons system to fail (a scan as routine as the one sketched just after this list)

- One admin password for a system was guessed in nine seconds

- "Nearly all major acquisition programs that were operationally tested between 2012 and 2017 had mission-critical cyber vulnerabilities that adversaries could compromise."

- Taking over systems was pretty much playing on easy mode: "In one case, it took a two-person test team just one hour to gain initial access to a weapon system and one day to gain full control of the system they were testing."
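
For a sense of how low that bar is, the "port scan" in the first bullet is the most routine network reconnaissance there is. A minimal sketch of such a scan (the target address is a placeholder, and nmap's -sS is a standard TCP SYN scan):

    # SYN-scan every TCP port on the target
    # (192.0.2.10 is a placeholder address)
    $ nmap -sS -p- 192.0.2.10

Equipment that falls over under something this routine would fail on first contact with any hostile network.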


My thoughts on this are always related to "skin in the game": does it matter personally to the people making and procuring the systems, especially at senior management level, whether it actually works?

Back in WW2 it definitely did, especially in the UK where bombing had no respect for the class system. Winning or losing the war would make a personal difference.

But since then? All the wars have been overseas, with no real threat to the mainland US; there was a real technological race against the Soviet Union, but that ended in the 1990s. The post-9/11 wars were more of an excuse to settle scores and play the Great Game than a real effort against terrorism (no pursuit of the Saudis, for example).

The consequence is that the main thing that matters is selling the technology to the Pentagon, or promoting a career inside it. Nobody really believes that if they procure a crappy IT system the enemy is going to fly a 747 into their office. Someone might get killed, but nobody they know or who matters, and it's never going to come back to the project manager or procurement person who made terrible, expensive, uninformed choices about technology.


This is a very good question I've been pondering for years, and I've generally come to the same conclusion with regard to the military-industrial complex in general, not just software. It seems to me that no one expects any war that would hurt the US any time soon, so it's open season for fleecing the military budget for all it's worth.

I also wonder sometimes if a similar thing isn't happening in enterprise software - that is, the actual software doesn't have to work; it only has to serve as an object of trade between companies, and all the problems will disappear into general organizational noise and inertia.


Some years ago I worked for a DoD contractor that builds systems like this, and honestly I think people do care, but they are wildly, woefully ignorant about the risks. They truly do not understand the vulnerabilities. I'm not making excuses for them (especially given the pushback I received when I started calling out the more egregious things), but I think it does help to understand the problem better.


What are some examples of pushback you received? If it's sensitive, I'd enjoy hearing a made-up scenario along the same lines, with a problem pointed out and the deflecting response that was given.


I could write a long paper on this, and I would have if I thought it would've made a difference... But some highlights:

- Stovepiped organisations: stay in your own lane. But security is cross-cutting.

- Security orgs want to stick to what they know about, not what the threat scope is.

- Security unwilling to own risk, fall back on ass-covering checklists and mandatory processes. This leads to them being an obstacle, a cost rather than a benefit.

- True lack of expertise at stakeholder level. Particularly in the US, the experts are contracted, and never speak out of turn.

- Staying quiet. Americans are extremely conscious of organisational (rather than technical) status and embarrassment, and it isn't career-enhancing to identify naked emperors.

- Good security costs money upfront, and pays back over time. Bad security is free at the beginning, and costs massive amounts to fix, but: (a) fixing is someone else's problem, (b) fixing is new contracts and more work, (c) systems might not be noticeably hacked.

Large companies (e.g. Lockheed Martin) are often very adversarial, and deny fault with lawyers. I've often wondered if places like Japan, with more cooperative cultures, can address this differently.


> Security unwilling to own risk, fall back on ass-covering checklists and mandatory processes. This leads to them being an obstacle, a cost rather than a benefit.

How well this statement retains its correctness across time and space. Pretty much my experience in every company beyond a certain size.


I mostly agree with the points you're making, but...

> Security unwilling to own risk

Security cannot fundamentally own risks created by other parts of the organisation. I'd actually put it the other way around - the organisation is often unwilling to own risks identified by security.


Security needs to accept that it is their job to secure the operations of the org, not to prevent the org from doing things that don't fit into easy use cases.

For example, the chem eng group own the risk of the chemistry being wrong and the plant blowing up. They don't get to say 'let's outsource production to ChemCorp'. Likewise, security needs to secure the ICS, not just ignore it and say 'the SCADA guys do that, it needs SMB1' or when the risks are pointed out say 'you must now change the passwords every 30 days'.

Business units burying their head in the sand? Well, that can happen too. Pen-tests are great for demonstrating problems, but how many security orgs have the ovaries to do them and force realisation? Did security work with the business unit to mitigate risk, or just want to shut it down?

I'd love to get specific, but my point is that there is a lack of holistic vision across the enterprise; we need to incentivise cooperation between stovepipes and be willing to take risk -- not throwing away the rule book, but writing a new chapter on how to apply it in context.


Japan doesn't appear to be doing much better.

https://securityaffairs.co/wordpress/53856/cyber-warfare-2/d...


There isn't really sufficient information to judge. Persistent attackers would almost certainly achieve eventual compromise on the type of network described. It's really a question of whether, when they find a problem (crashed system, non-compliant senior staff, buggy security protocols, etc.), they report it, and then whether it gets acted on.

Japan invented some of the best aspects of safety culture, like being process-driven, checklists, and point-and-call: https://www.atlasobscura.com/articles/pointing-and-calling-j...

It has been remarked before that if security was treated the same as safety critical systems (like aviation operations, and increasingly, hospitals) then we would have much better security. Tbh, I'm not sure, because of the adversarial nature of attack and defence, but it would be interesting to test.


I hesitate to say, because there's a possibility that even years later many of the vulnerabilities are still there.

However, one that I know eventually got fixed I'll talk about (it makes a great example anyway). When port scanning one of our pieces of equipment, I noticed a strange port number that was accepting packets. I started sending random packets to it and for the most part it ignored them, but occasionally I could get the system to crash and restart.

Turns out, the server had a debug port enabled and active, even in the production build. This allowed you to essentially invoke any C function you wanted, remotely, if you knew the format (which was published in the OS manual)! Very, very bad.

When I reported it, I got a lot of responses like, "Well, this will be on a closed net anyway." and "If they get on this network, there's much bigger problems." Both statements were true for the most part, but still a very dangerous attitude to have. Just because a network compromise would be bad, doesn't mean you should make it worse by neglecting defense-in-depth. And never assume that somebody isn't going to plug an ethernet cable into your equipment that shouldn't be there (this happens all the time).
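
For the curious, the kind of blind probing described above needs nothing more than a shell. A minimal sketch, assuming the mystery service spoke UDP; the address, port, and packet size are all made up for illustration:

    # throw small bursts of random bytes at the suspicious port and
    # watch whether the target crashes and restarts
    # (192.0.2.10 and 12345 are placeholders)
    while true; do
        head -c 64 /dev/urandom > /dev/udp/192.0.2.10/12345
        sleep 0.1
    done

Bash's /dev/udp redirection does all the work; a service that occasionally dies under this will certainly die under a determined attacker.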


It is absolutely happening in enterprise software. I have met sales guys and startup advisors who prided themselves on being able to play that game well. In a certain segment of this industry, you get laughed at if you try to actually come up with a solution.


That’s a sad aspect of humanity :(


>It seems to me that no one expects any war that would hurt the US any time soon, so it's an open season for fleecing the military budget for all it's worth.

If that were the whole story, the military budget would be plummeting as our representatives realized that they could also, and far more legitimately, take money away from the military to put into their pet projects.


Military is the pet project. It's the only form of public spending with broad support, even from anti-federal-government people.


> Military is the pet project. It's the only form of public spending with broad support, even from anti-federal-government people.

Law enforcement has about equally broad support, including from anti-federal-government groups (though not always the same ones that back the military, as there are pro-law-and-order anti-interventionist groups that aren't keen on military spending, and pro-military groups that see federal law enforcement as jackbooted authoritarian thugs.)


Unfortunately federal law enforcement has lost some of its support among the law and order crowd because of the perception that they are in bed with the political opposition.


For political reasons it can be easier to justify defense spending. Then you just make sure that it's _your_ pet project that gets the spending.


False: they can't cut it back, for economic reasons. The economy is based on it.


Bullshit Jobs!


"Show me the incentive, I'll show you the outcome"


This is more like: "Show me the outcome, I'll guess the incentive"


That's fair, parent is post-hoc theorizing.

I have to hope that the people in charge of these things -do- care, and that it's simply difficult to get this right. However, given some of the things in the PDF, one has to wonder...


I would suggest that the problem is too much wriggle room / dissonance during the design process (in nice safe meeting rooms, admittedly). We can all persuade ourselves that once all the items are ticked, the job is done.

But testing gives the lie to all this. The Patriot system was battle-tested in the 1990s and its deficiencies became apparent - and lessons seem to have been learnt.

So perhaps more adversarial testing is the right approach - set the marines to take out the air force's weapons, the navy to destroy the army's.

If people know their beloved weapons systems are going to get roughed up, then the tick box stops being the determinant of achievement - it becomes "what would one of those navy/marine/air force/army bar stewards do?" That's a much higher bar.

tl;dr blow shit up and see if it still works


I always assumed the US military does such tests routinely. Don't they?


The fact that this one test is news suggests not.

I seem to remember a story about "how good is the SAS?": they were asked, after years of development, to try to destroy an armoured train that carried nuclear material from power plant to disposal (it ran through populated areas, so it was proof against head-on collisions at a gazillion miles per hour, and so on).

The guy took a Calor gas canister, filled the train half full of gas, and lit it. The armoured million-dollar train of course ruptured from the internal explosion.

Kudos to the well-trained SAS demolitions expert, but I suspect a lot of the army could have done that, at various earlier stages.

Not sure where that is going, except: yeah, blow things up, early on.


Now -that- sounds like a fun job to have, one that comes with lots of dinner stories to tell. Course I guess they'd assassinate the guy if he was telling these stories willy-nilly (to protect national secrecy).


Is that reasonable to you?


Ok, maybe assassinate is a bit of hyperbole for the USA, but I wouldn't be surprised if Russia did that.


It's not that hyperbolic at all, actually.

But I was asking a question of ethics, not fact.


No, it's not ethical, but in general war isn't ethical. It is a reality, though, until all of humanity can come together, slowly, over the next few centuries.


Yes, but they don't always want to hear a negative result, e.g. in Iraq wargaming: https://www.theguardian.com/world/2002/aug/21/usa.julianborg...


Honestly I'm really offended by this comment. To suggest that coders writing weapons systems have little skin in the game is condescending and shows how ignorant of the environment you are. Low effort comment. Every industry is for the most part disturbingly bad at security in general.

Maybe write some weapons systems or work with people that do and you would have a different perspective.


I agree that the comment came across as from someone without skin in the game themselves. But I also believe the current procurement process is broken, and after spending time using these systems I don't hold the people building them responsible, but rather the Admirals, Generals, Executives, and Politicians who schmooze at places like Tailhook and shoot down opposition to the status quo. The parent may be right that we won't course-correct until a catastrophe happens. All industries have issues, but the military isn't an industry and deserves better for $1.6 trillion. This report is terrifying and exemplifies the sad state of the military's conventional weapons systems. But I agree that most of those in defense are trying their best to do good.


>Maybe write some weapons systems or work with people that do and you would have a different perspective.

Given that that's not really reasonable, maybe you want to give us some perspective? You can't go around accusing others of low effort comments and then not provide any insight yourself.


It's possible for every coder to be committed and the system as a whole to be a disaster due to poor integration or even decisions at the contract or legislative level.


> Test reports we reviewed make it clear that simply having cybersecurity controls does not mean a system is secure. How the controls are implemented can significantly affect cybersecurity. For example, one test report we reviewed indicated that the system had implemented role-based access control, but internal system communications were unencrypted. Because the system’s internal communications were unencrypted, a regular user could read an administrator’s username and password and use those credentials to gain greater access to the system and the ability to affect the confidentiality, integrity, or availability of the system.

"Do you want to play a game?"

This is some scary-bad, WarGames-like security, password-'joshua' level.
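
For anyone who hasn't seen it done: reading credentials off an unencrypted internal network takes nothing more exotic than a packet capture. A minimal sketch (the interface name is an assumption, and port 23 is telnet, a classic plaintext protocol):

    # print packet payloads as ASCII; on a plaintext protocol,
    # usernames and passwords scroll by in the clear
    $ tcpdump -i eth0 -A 'tcp port 23'

That is essentially the whole attack the report describes, minus the weapon system.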

> Program offices were aware of some of the weapon system vulnerabilities that test teams exploited because they had been identified in previous cybersecurity assessments. For example, one test report indicated that only 1 of 20 cyber vulnerabilities identified in a previous assessment had been corrected. The test team exploited the same vulnerabilities to gain control of the system. When asked why vulnerabilities had not been addressed, program officials said they had identified a solution, but for some reason it had not been implemented. They attributed it to contractor error.

Ah, the old 'contractor error' excuse and 'not invented here' syndrome. It looks like engineers aren't high enough in the power structure to change these things; scary if the military is driven like an MBA-only-led business with no influence from engineering/security.


> scary if the military is driven like an MBA-only-led business with no influence from engineering/security

Having known many people that worked in/around the military and defense industry, this seems like our reality.


The modern DoD is based around the Asst Sec Defs and business processes put in place by Robert McNamara, who came from Ford. It's all stats and business. Engineers and scientists are generally considered a sideshow, a workforce to quantitate.


This is, unfortunately, all too true.

> “An interesting question is, ‘Where did the name, dynamic programming, come from?’ The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word, research. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term, research, in his presence. You can imagine how he felt, then, about the term, mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word, ‘programming.’ I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying—I thought, let’s kill two birds with one stone. Let’s take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it’s impossible to use the word, dynamic, in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It’s impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities"

--Richard Bellman on the naming of dynamic programming [1]

[1]: http://smo.sogang.ac.kr/doc/dy_birth.pdf


A technology/security/defense organization like the DoD should be product-focused, rather than just metrics-driven; metrics are important, but product, usability, and security are more important. The wrong metrics can lead you astray [1].

Given the amount we pay for security, some of the data in the OP's report is appalling: hitting some arbitrary number won out over the security and understanding of a defense product.

> The modern DoD is based around the Asst Sec Defs and business processes put in place by Robert McNamara

Robert McNamara, a Harvard Business School MBA, 'epitomizes the hyper-rational executive led astray by numbers' [1]

[1] https://www.technologyreview.com/s/514591/the-dictatorship-o...


While we're on the topic of McNamara, I'd like to plug the documentary The Fog of War: Eleven Lessons from the Life of Robert S. McNamara [1]. It's great to watch, and you can see how McNamara's perspective has changed. I would recommend it to everyone, especially anyone who cares about defense/foreign policy.

[1] https://www.imdb.com/title/tt0317910/


I recommend Ken Burns' Vietnam series to put McNamara's statements in Fog of War in context.


This discussion doesn't make much sense. Economics is the science of making organisations work. It doesn't come with rigid objectives; those are inputs to the process. Now, I'd be glad if the DOD accidentally let a pack of MBAs with default settings do their thing, because they'd probably create a world-wide cartel within the first year, and reduce all the world's standing armies to just themselves in a very fancy conference room within four.


I think the problem with releasing MBA types on an organization is that they're specialists in business in general, not in whatever a particular organization wants to do. The actual goal is just a parameter - an input to the process. And that input can be changed, or abstracted away, and as a result you get a typical soulless corporation - an organization that has lost its soul, its actual object-level goal, and remains a mindless automaton optimizing profits.


They're like those fungi that infect snails and cause them to become zombies, crawling to the tops of blades of grass, where the birds eat them; they infect the birds, the birds die, and then the next generation of snails comes along and eats the dead bird, re-infecting themselves.

MBAs are a zombie fungus disease upon companies


What nation's military isn't driven by an MBA culture, though? How are Russia's and China's in comparison?


The problem isn't just MBAs. It's that scientific management has become central to the DoD's mode of functioning. Which is remarkably sad. The DoD in the WWII and post-WWII era actually developed a lot of good systems engineering practices and was largely successful on some massive projects.

Since then, however, there has been the constant desire to deskill workers. That is, they want explicit operating/work instructions that mean even a trained monkey could do the job. This is useful for some things (got a broken down Jeep? Pull out the manual and even your Lt can fix it!), but for development, acquisitions, and sustainment this is actually a horrible idea.

The infection of deskilling spread to the office. They've removed the office managers (among others) and left that work (critical, but time-consuming and secondary to the mission) with the highly educated and trained staff. This diminishes their ability to focus on real work (see [0] from yesterday). From there, they continued to try to document precisely how engineering work is done, believing that the process of the work is the same as the work. So if only my engineers knew how to properly fill out SF-1234, it wouldn't matter how educated they are; magically the work would be done.

Of course, we all know this is bullshit. Knowledge work is called that for a reason. The capacity for work is based on the knowledge of the workers. You cannot deskill computer science or aerospace engineering. You can deskill aspects of it, or really the workflow, but not the science and engineering work itself. I can eliminate or largely reduce the need for a classical configuration manager by establishing a (automated) peer review and version control workflow. But the creative act producing content that enters the workflow will always require skilled, competent, knowledgeable engineers and scientists.

[0] https://news.ycombinator.com/item?id=18157885

EDIT: I will not change the above, but I will make a note. After re-reading the Wikipedia page on Scientific Management [1] I see that some people consider Lean and others to also be extensions of it. So read the above as describing Fordism and Taylorism, not scientific management in general. Any management largely based on, or influenced by, statistics, models, and experimentation could be argued to be "scientific management"; that doesn't mean it's bad. It's when it's taken to an extreme. Like, in Taylor's case, where workers are treated with contempt. The purpose of the management being to make them fully expendable. They had no unique knowledge or skills that could really contribute to the business beyond their physical presence and ability to follow directions. His goal was to disempower workers, whereas other examples (Lean particularly) work to empower workers.

[1] https://en.wikipedia.org/wiki/Scientific_management


Specialization, metrics that don't capture true growth/success, and too-large teams also end up with skilled workers who care less, because they have less power and less responsibility/ownership.

Small teams with skilled people - the startup model - work best even in large companies. Being closer to the customer and more cohesive as a unit makes for a better project; that is why innovation happens at smaller companies, or in small research groups inside companies and universities.

n(n-1)/2 is the formula for the number of communication channels on a project, where n is the number of team members/stakeholders; the quick calculation after these examples shows how fast it grows.

On a team of 1, it is all you, you make sure it works.

On a team of 2-3, it is just that few, you make sure you and others are doing their part.

On a team of 10, with 45 communication channels, you might not have the same level of care, because "that isn't my role" or "someone else has got it." Beyond 7-8 team members, communication becomes humanly impossible.

On a team of 100, you have so little power/responsibility that you feel out of place calling attention to issues; you don't even know everyone on the team.
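
A quick check of the formula, in plain shell arithmetic:

    $ for n in 1 3 10 100; do echo "$n -> $(( n * (n - 1) / 2 ))"; done
    1 -> 0
    3 -> 3
    10 -> 45
    100 -> 4950

The channel count grows quadratically, which is why care and ownership dilute so quickly as teams grow.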

Metrics are a good thing to use as an input; so are customer feedback, product goals/focus, usability, and employee satisfaction, along with many things that aren't always measured, like technical debt, product quality, security, simplicity of processes/production, and research and development.

If you measure the wrong things, the metrics can give you the idea that you are moving in the right direction while you are overlooking aspects not captured in them.

If you only looked at revenue and employee head count, for instance, growth might be deceiving. Profit and revenue per employee are better, but employee satisfaction, customer satisfaction, the market, product quality, timing, security, long-term direction, etc. still get overlooked in the metrics department, mainly because domain knowledge has less power than oversight in most mature organizations.


> On a team of 100, you have so little power/responsibility that you feel out of place calling attention to issues

This reminds me of the bystander effect [0]. The demarcation of responsibility in large enough teams can introduce delays just in finding who is responsible for what.

[0] https://en.wikipedia.org/wiki/Bystander_effect


Contractors and procurement also have to follow trade agreements and quotas - probably way higher in priority than some engineer's memo with indecipherable tech jargon about 'security holes'.


> an MBA only led business with no influence from engineering/security

As an MBA holder and avid HN user, I take issue with that statement...


You should note specific issues, rather than a general complaint.


I’ll bite. There was no indication in the article of anyone with an MBA in particular being responsible for these issues. The reasoning reads as: “lots of stuff is going wrong” ==> “must be because they have management with MBAs”

Why try to make this link if it isn’t there? What if I replace MBAs with ‘foreigners’, ‘women’, ‘people who read HN’, etc?

Why do we need this? Does blaming non-technical people for the failures of management make us feel better about ourselves?


There is no reason to believe that foreigners or women contribute negatively.

The thesis that parties with negligible expertise in a given area, when given charge of people with substantial expertise in that area, contribute minimal or negative value seems at least arguable.

Maybe we as a society value the idea of teaching people to lead people, irrespective of what those people actually do, instead of elevating skilled individuals in the area, because it allows us to reward all the rich people's kids who aren't smart enough to do the work themselves.

This needn't mean you automatically are unintelligent or bad at what you do. You could personally be awesome and contribute greatly to the efforts you work with/lead.


When people blame things on MBAs here, they tend to not elaborate with specifics either.

MBAs are used as straw man punching bags on HN. Anything that goes wrong with a company where there’s the perception that the “obvious technical solution” was ignored, is blamed on this nebulous cabal of MBAs, who are apparently hired in droves just to sabotage their employer. For some reason it’s totally ok to vaguely blame the business folks.


> MBAs are used as straw man punching bags on HN. Anything that goes wrong with a company where there’s the perception that the “obvious technical solution” was ignored, is blamed on this nebulous cabal of MBAs, who are apparently hired in droves just to sabotage their employer. For some reason it’s totally ok to vaguely blame the business folks.

I think you're building a bit of a straw man. I think the criticism of MBAs rests on criticism of the idea that good decisions can be made by people that chiefly have "management skill" but lack "domain skill." A lot of people believe domain skill is extremely important, and if you stuff an organization full of empowered people who only have management skill, you'll get more bad decisions. You'll also get a lot of dysfunction as they spend too much time pursuing the "management ideas" they're more comfortable with, and neglect "domain ideas."

This comment actually touches on some of these issues in detail: https://news.ycombinator.com/item?id=18179247


The assumption that MBAs do not have domain experience is often incorrect. Stereotyping is supposedly frowned on in the comments here.


> The assumption that MBAs do not have domain experience is often incorrect. Stereotyping is supposedly frowned on in the comments here.

Obviously not. I think the critique is about MBAs who don't have domain experience and don't think they need it, because they have a generically-applicable "management skill." That ponderous specification is typically shortened to "MBAs" for brevity, since it apparently has some root in ideology that's common in business schools.


No modern business school actually teaches such an ideology. Have you ever attended one?


I think 'MBA' as used here implicitly means 'with no domain experience', because otherwise the domain experience would be referenced as their most relevant skillset in the field. If someone has domain experience, they are a 'senior engineer', with the MBA only coming up when that aspect is relevant. If someone doesn't have it and still makes it into management, they are just 'an MBA'.


OK, I'll bite.

Here are a few concepts that came out of B-Schools/MBA land, at a grossly brief, and thus over-simplified, level:

* "Shareholder value" as the prime goal of management, justifying the exclusion of all other stakeholders (employees, customers, community supporting the business, country supporting the business, etc.). Few concepts have done more to damage this country and capitalism itself (note the main problem of shrinking middle class able to purchase the goods produced).

* "Manufacturing is fungible & expendable, just outsource it to the lowest-cost country". Massive damage to developed nations' economies, and national security, all to goose short-term profits. Actively destroys massive corporate value of know-how, simultaneously providing both the existing know-how and ability to generate new know-how to new competitors.

While all the MBAs thought we were exploiting the cheap Chinese labor, the Chinese were looking long term and exploiting our myopic need for short-term profits to gain massive economic & military advantage (note the recent controversies about planted chips, among many other similar security nightmares). This will go down in history as one of the all-time great geostrategic blunders, brought to us by MBAs.

* "Invest in the stars, milk the cash cows, dump the dogs" -- breaking down your business units in this way, guarantees that the business will become a shell of its former self -- the dogs will be sold off with no regard for their potential if invested in (they may be developing key breakthrus, but not yet showing good financials), the cash cows will be starved of all potential and soon become dogs to be sold off, and maybe the stars work out or maybe they don't. Great way to kill a business. GE was financially managed -- where is it now?

* The general concept, which I've seen multiple times, of managing to the financials instead of managing to CREATE great value. The financials will ALWAYS look better when investment in building true value and a competitive moat is passed over in favor of cutting costs and goosing current sales. These numbers will look good and increase for a while, possibly years. Until they don't. At that point, the business has no hope, and is a walking zombie, unrecoverable.

So, yes if MBAs are kept in their suitable container of 1) installing good accounting practices for tracking and security of financial assets, 2) managing the profits that are made to ensure good returns, 3) optimizing operations & taxes (lease vs. buy capital assets?, etc.), then things will work out great.

But putting them in charge without also having serious domain knowledge, both on the customer/market side and on the engineering/manufacturing side -- very bad idea.


> Operators reported that they did not suspect a cyber attack because unexplained crashes were normal for the system.


The massive weight of the American military is going to be a wonderful addition to its enemies when they take it all over using "admin:admin".


They will be in for a surprise: using those massive buggy systems is not one bit easier for the hackers than for the actual users. Maybe the many bugs in those huge systems will turn out to be the best protection against enemy takeover... not actually too crazy an idea when I think of biology and the mess that biological systems are, where even errors are vital to the functioning of the whole (e.g. accidentally making a protein from a gene that is turned off - sometimes it turns out to be useful to have it around when the environment changes, though an error-free, efficient system would never have made it).


Actually yes. It takes a horde of military personnel to operate the hodgepodge of modern military information systems and technologies. There's nothing easy about it. All sorts of incompatible and buggy tools. Our enemies would have a hard time putting to work the military command & control apparatus -- I mean, we already have trouble enough as it is.

But that doesn't mean the enemy can't learn information and be able to predict our next moves in a conflict. Like what we did with the Enigma in WWII, we are subject to the same type of listening, where someone knows our next move before we make it.


All they have to be able to do is make a piece or two break or become unresponsive at the correct time.


A company I worked for had a CEO who was rather paranoid about the Chinese stealing the software for our innovative product, and would often rant about it at our all-hands meetings. To which I could only think "Well, if they can make sense of it, good luck to them." I really think it would have been easier for them to re-do the implementation from scratch!


It's not about taking over. Disabling them is sufficient.


Precisely.

> expose, alter, disable, destroy, steal or gain unauthorized access to or make unauthorized use of an asset

This is the objective of the adversary in a conflict with respect to information and systems security. It doesn't matter if they can't control a system; if they can make it less reliable, it's still a win (just not as good a one). If they can get it to feed false or misleading information to their opponent, it's a win (hacked a radar? can you show an extra blip on the screen, or cause blips to sometimes shift position, and reduce the operator's confidence?).


I have been thinking the recent Navy navigation-related crashes are related to enemies tampering with systems. They are testing live how weak a Windows-based fighting ship is.

https://www.wired.com/1998/07/sunk-by-windows-nt/


From what I understood, the recent Navy collisions were caused by under-staffing => sailors having to work too-long days => sailors literally falling asleep at their posts / seeing things that aren't there / not seeing things that are there.


They seem to work twelve on, twelve off, with the requirement that some of their personal time be used to maintain physical fitness, so it does seem like a disaster waiting to happen.


There is zero evidence of enemy tampering in recent Navy ship collisions / allisions. It was simple incompetence and bad luck. There's really no way to tamper with shipboard surface radars, binoculars, horns, and VHF radios. As long as that equipment is working correctly all collisions can be avoided.


I did not specify an exact scenario because that was not the point I was trying to make (it was actually meant mostly as a joke, no?). If they already have to fight with so many bugs, disabling it will be just as hard, on the one hand; and on the other, the operators already have to live with disabling issues in the non-invaded system - as one comment posted here said, they didn't even notice the invasion because it didn't feel different from normal operation :-)


Or just being a passive viewer of what the system is managing


That isn’t remotely how it works.


What exactly do you mean? The specific scenario I described in very, very broad terms was from a lecture by Eric S. Lander about a specific bacterial cell. If you have an issue with the general description I don't understand it, since you don't say anything at all apart from some snide comment that doesn't even make sense to me. At the very least I would expect someone who bothers to reply because they disagree to say what exactly it is they disagree with, and also be specific about it. While I'm a CS person I have a broad background in life sciences too.


If the bugs were annoying enough that they'd prevent proper functioning of the system, they'd be fixed. If the current users are able to get some utility out of the system with all the bugs, then you can rest assured so will the hackers.


Itsme, I believe the disagreement is not with your statement about biology, but with its comparison with software systems.


The joke was not about software specifically, but about the whole system, everything. But even software feels like evolutionary forces are at work - when you work on huge systems developed and added to over years, sometimes decades, often by new people (lots of churn, contractors), the "design" is less and less visible and it becomes a mess. The role of deciding whether a new "gene" (feature/bug fix) works is taken over by the (also messy) huge test suite, and since nobody understands the whole system any more, people code to pass those tests. I made the comparison looking back at when I was once part of those adding to a huge existing piece of software without understanding it (that had become impossible; the only objective was to have the tests pass, or even just most of them).

When I later also took biology classes (and org. chem., biochem., physiology etc etc) I could not help but see parallels - I say parallels, not transferring the models over! - to how we develop huge systems, be it software, hardware or the combination of both. No single human understands even a significant part of them any more. "Deliberate design" is not the sole force working on those systems any more.


I'd add that most software systems compete on the market, which is as close to direct evolutionary process as you can get in modern environment. And that process has a fitness function that's quite misaligned with what a designer wants at any step of the process.


SHALL WE PLAY A GAME?


It's like the worst possible scenario. Could it be this will be a wake-up call to the people that work on these systems? I doubt it, based on government procurement strategies like we saw with the Obamacare website.


I had the opportunity to tour the USS Bonhomme Richard, as well as talk to visiting sailors and marines, this weekend during SF Fleet Week.

My takeaway impressions (other than that god damn do these people drink and holy shit are they young), especially after talking to the mechanics and network IT folks, are that a ton of their systems are old, that manpower turnover is 1-2 years as people get cycled between boats (4 max, as most of these kids are just putting in their 4), and that training is extremely specialized. Most parts of the systems (this especially from the mechanics) usually last to about 10% of the lifespan pitched by whoever made them before failing, repeatedly.

The windshield wipers on all Ospreys (those dank helicopter/plane things, think Ghost in the Shell) have been disabled/removed because their motors would catch fire in inaccessible places near the pilot's feet.

The only thing preventing access to a boat's network is standing orders and the threat of punishment. You can just plug right in.

Every system runs on the same network. This includes radar, weapons systems, anti-air, emergency comms, in-ship cameras...

This is on top of the fact that half the people I talked to, the ones actually running these systems, are overworked 19-year-olds with circles under their eyes. The only people over 25 seemed to be officers and pilots; maybe those guys know about the systems and can offer expertise? I'm not sure, I didn't get to talk to any of them.

I'm hoping my perception of the military is completely wrong, which is entirely possible because I didn't get to talk to that many people, maybe like 4 or 5 mechanics, a couple marines, and a couple of the network IT people, all relatively low rank. But, as of right now, I have absolutely no confidence in the military to withstand a full on cyberattack from a similarly provisioned military.


I too toured the boat.

> The windshield wipers on all Ospreys (those dank helicopter/plane things, think Ghost in the Shell) have been disabled/removed because their motors would catch fire in inaccessible places near the pilot's feet.

Well, this is not related to the main point about cyber security. If true, it's just a piece of equipment that was found to be flawed. It is a non-essential system that was made INOP.

The Osprey is a marvelous piece of engineering that is difficult to replicate by other nations.

> Every system runs on the same network. This includes radar, weapons systems, anti-air, emergency comms, in-ship cameras...

Physical network or logical network?

> The only thing preventing access to a boat's network is standing orders and the threat of punishment. You can just plug right in.

And then do what once you are in? Unless we are assuming there is zero security, this doesn't mean much. Besides, you have to be on the ship already.

I'm not saying that the systems are adequately protected - they may not be (as the article states) - but there's too much information missing for us to play the role of security auditors.


>Well, this is not related to the main point about cyber security.

It's a testament to the attitude and quality control practices of the suppliers who also supply the networked equipment.


"The Osprey is a marvelous piece of engineering that is difficult to replicate by other nations."

From whatever I have heard about the Osprey, nobody should want to replicate it. Too expensive, too complex.


>Besides, you have to be on the ship already.

That's a problem if you just encounter the ship on the open sea, but if you anticipate a conflict it shouldn't be hard to turn one of the literal thousand people crewing the ship. Just find one person you can force/incentivize to plug in an LTE-enabled network device, and start hacking from a safe distance (bring your own LTE base station for hacking on the open sea).


The Osprey is shit engineering. It can't even fly in a dust cloud, which makes it essentially useless for one of its main intended missions.

https://medium.com/war-is-boring/the-v-22-can-t-spend-even-o...


>And then do what once you are in?

"Running a port scan caused the weapons system to fail"


Good points, but physical access is a fundamental of security, and for good reason.


Many of your technical details are badly incorrect. Not sure who you talked to, but they're not well informed.


Based on what? He's quoting people he talked to and reports what he's seen. Whereas you just naysay.


Based on my first-hand experience as a soldier in the US Army, talking to 4-5 low-ranked sailors is unlikely to give a meaningful picture of the whole system. I don't have specific experience with Navy systems to judge the technical details of komali2's post, but I would caution against taking a summary of second-hand accounts from operators as fact.


I would take his recollection with a grain of salt, but what they told him was most likely more true than false.

So that leaves a number of specific statements which you could each refute, in part or in their entirety. Judging from the title of this article and a number of other anecdotes in this thread (some by other people that served) it seems his anecdote is entirely believable.

That you can't extrapolate to all of the army would be a given.


>>I don't have specific experience with Navy systems to judge the technical details of komali2's post

Well, there you go then. Thanks for being honest at least.


I have multiple decades of experience with Navy and Army systems and personnel, and he's correct.


Can you expand? Why are the IT people I talked to, working on these systems, so poorly informed about how they work?


First, most IT personnel on ships (especially one as ancient as the Bonhomme Richard) do not work on weapon systems. Most of them would not even be able to discuss where on the ship they are intelligently, let alone what they connect to. The people you talked to simply aren't informed. You even note that you were talking to 19 year old kids, and they're not generally the ones who know what's going on.


Well that's a question I have as well - who actually knows what's going on there?


People like me.


People named jki275 that post one-sentence replies on Hackernews? ;)

What kind of work do you do? You're in the military? What's your rank / job description? That's the kind of information I'm curious about. If the answer is "I can't tell you because it'll expose personal information," well, I'm not the one that outed you lol.


I gave very specific comments above, and explained that I have many years of experience with these systems. No, I'm not going to give you details other than that I'm a very senior person in the field. And I'm not 19 years old...


I don't know that it's just the procurement strategies to blame. Many years ago, I was asked to bring a command and control system into (Orange Book) C2 compliance. Among the things I introduced were personal user accounts with some restrictions around allowable passwords. The users of the system (most of whom were "former" fighter pilots) were furious with the restrictions, which they viewed as getting in the way of their jobs. They invariably created a shared login with the simplest password they could come up with that would meet the requirements (e.g., Abc123 or some such).

Security can't be imposed by the system on its users. They have to cooperate.


A decent fraction of the population views password restrictions as a challenge to come up with the shittiest, least secure password that they possibly can while still meeting all the restrictions. You can blame users for that with some justice, but as a system designer, it's still your responsibility to provide security despite shitty but reasonably likely human behavior.

With modern crypto there are very few systems where it's appropriate to have a user-selected <=12 character password for primary auth, yet unfortunately that continues to be widespread for banks, ecommerce, and (probably) some military systems. High-end security people seem to almost universally hate short user-selected passwords (except when they have to break them..) but old practices die hard and old systems take a long time to be replaced.
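
To put rough numbers on that: assume an offline attack at a billion guesses per second (a made-up but unremarkable rate for modern cracking hardware) against an 8-character password drawn from the 62 alphanumerics:

    $ echo $(( 62 ** 8 ))           # size of the search space
    218340105584896
    $ echo $(( 62 ** 8 / 10 ** 9 )) # seconds to exhaust it at 10^9/sec
    218340

That is about two and a half days to try every possibility, and dictionary attacks on human-chosen passwords finish far sooner.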


>Multiple weapon systems used commercial or open source software, but did not change the default password when the software was installed, which allowed test teams to look up the password on the Internet and gain administrator privileges for that software.


Even worse is that institutional problem where you have people constantly cycling on and off of this hardware that was never designed for a multi-user environment, so default passwords are the order of the day. At best they changed the password and then put it on a sticky note attached to the monitor.

The last thing you want is someone forgetting their password to their tactical system while out at sea and having to sail back into port to get the vendor to reset it for you.

And really, the first case is no worse than the old days with manual controls that just anybody could walk up and fiddle with.


> And really, the first case is no worse than the old days with manual controls that just anybody could walk up and fiddle with.

In that case, you could trust physical security to some extent; someone really not intended to be in contact with the device could be prevented from doing so by some dude with a big gun. Now, those devices are networked, so someone could figure out an access method and use it on all the devices in the field at will.


While not trying to reason from fictional evidence, this reminds me of Battlestar Galactica. In the series, they made a point of Galactica running on non-networked computers (unlike the rest of the fleet), as networking them together made it easy for the adversary to pwn the whole ship.


They'd probably helicopter the IT guy in to reset it, but I'm going to posit that they have a procedure for this that may not require physical access, perhaps with remote access via a secure line.


The day they stop calling it "cyber" is the day we can rest easy, knowing that people who know what they're doing have been put in charge.

https://xkcd.com/1573/


"In operational testing, DOD routinely found mission-critical cyber vulnerabilities in systems that were under development, yet program officials GAO met with believed their systems were secure and discounted some test results as unrealistic. "


Aren’t there reams of security standards and thousands of man-years of security compliance bureaucracy for even the most basic DOD IT projects? And they still have trivial vulnerabilities like this? Is the process really that useless?


Bureaucracy not only fails to discourage vulnerabilities unless they're on a very short list, it actively encourages them by driving away the kind of imaginative thinking you need to think of them.


I think the difference is between "DoD IT projects" and DoD projects that have networked computer systems. My hunch is that most of these vulnerabilities are in systems that are not labeled as "IT projects".


Maybe? I somewhat doubt it, since I work for a company that has a couple of DOD IT products in fairly widespread use, and I don't know that we have done any security compliance work to speak of over the past seven or eight years. In that time period we haven't done a ton of work, but we have had to make some changes to move from a really ancient JRE version to a slightly less ancient one.


A side note: the picture in the first few pages of the PDF looks like the original author's intent, i.e., not pointing to a particular part of the fake plane for each subsystem. The picture on the web was "upgraded" editorially to point to specific parts for... marketing reasons? Not sure, but it's hilarious, because the logistics system of the web version of the fake plane is in a missile.


> the logistics system of the web version of the fake plane is in a missile.

To be fair, it could also be a death ray laser. The whole thing looks a lot more like Star Wars than a real plane. It has asymmetrical wings.


I was a dev contractor for the US Army for a few years. None of this surprises me.

They had some goofball policies that made it seem like vulnerabilities were the goal. I could bitch at length. TSA-style security theater practices were the order of the day. The IA training was an embarrassing joke, and they made you do it often enough to make you a little crazy.

I just checked the certificate of networthiness page and they don't have a valid SSL certificate. I recall that being the case years ago too. I wonder if it's been that way for the last 7 years? That's a cute little terrarium of the whole biome I remember.

Off topic a bit, but that all aside... I am more proud of the work I did there than at any other place in my career. I got a lot of excitement and engaged feedback about the interactive learning materials I created.

I'll never know if it made any difference, but the mere fact that someone's son or daughter COULD have noticed an IED threat they wouldn't have otherwise because of my work gives me all sorts of proud fuzzies.

That work had way more meaning than all the other CRUD/ML/Advertainment schlock I'll get to do for the rest of my life :)


> I just checked the certificate of networthiness page and they don't have a valid SSL certificate. I recall that being the case years ago too. I wonder if it's been that way for the last 7 years? That's a cute little terrarium of the whole biome I remember.

That's not quite true. Internal-use sites don't have a valid cert issued by a "default" external vendor.

Public sites use existing CAs that are in use by the public. E.g., the Marines' public-facing site [0] is signed by DigiCert. If you go to a site that's public-facing but for internal use, like MoL [1], you'll see that the cert is issued by an internal DoD CA. This is intentional.

The DoD has an internal CA already set up. These internal-use sites are a gateway to sensitive information, so the DoD doesn't want to rely on an external CA for HTTPS. What I never understood was why these internal CAs weren't marked as trusted on the internal machines. That would avoid the browser warnings when accessing one of these sites from DoD hardware, and it would (in theory) force the user to double-check when accessing the site from an external device.

[0]: https://www.marines.com/ [1]: https://mol.tfs.usmc.mil/mol


They are trusted by internal machines -- since a lot of internal authentication relies on these certificates. The DOD long ago moved away from password-based authentication mechanisms to certificate-based authentication (GSC-IS initially (CAC), now NIST SP 800-73 (PIV; CAC II)), so the system will have the correct certificates or the user generally won't be able to log in.

What I find to be the most common error is that users set up an alternate browser (such as Firefox) that does not use the system certificate store, and it then lacks the system's local certificate authorities.

Additionally, DOD PKI is now cross-signed with Federal PKI (FPKI), so it's larger than the DOD now and other agencies also use the same smartcards (PIV).
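
For reference, the usual fix for that Firefox case is importing the internal root into the profile's own NSS store with NSS's certutil tool. A rough sketch - the profile path, certificate filename, and nickname below are placeholders, not official values:

    # add the internal root CA to Firefox's NSS DB, trusted for TLS
    $ certutil -A -d sql:$HOME/.mozilla/firefox/<profile> \
        -n "DoD Root CA" -t "CT,C,C" -i dod-root.pem

After that, sites chaining to that root stop triggering browser warnings.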


This bit is curious. I was issued a CAC while I was in, and as you said, it eliminated the need for passwords. But the internal sites (no matter whether it was a laptop from the comm section, a hardwired desktop in a unit's building, or a desktop in a base facility) always failed the certificate check. I always got the security warning (or insecure message) regardless of browser.


Slightly off-topic, but it's semi-relevant here, as this conversation involves the requirement of knowing (1) the state of the system security store, (2) the state of an application's security store, and maybe in some cases (3) how an application modifies any trusted stores.

It seems we end up with a lot of possibilities for the states of these stores to diverge from our expectations... I've been wondering how to verify a sane state for all these stores, even for a use as simple as my own personally owned/controlled notebook.

I'd really like a way to audit the system trust store in macOS and enforce that it is in alignment with whatever the current 'blessed by Apple' certificate trust relationships are, and that any trust relationships I ever manually added by mistake/debugging have been removed...

I asked a question about this on Stack Overflow, but no one has responded...

https://stackoverflow.com/questions/52527886/revert-all-cert...


I don't know the answer to your StackOverflow question as I don't use macOS/Mac OS X.

What I ended up doing to help this process along is including the relevant certificates inside my DOD Smartcard PKCS#11 module as certificate objects (with, of course, no corresponding private key objects).

For applications that use PKCS#11 (such as Firefox, via NSS), this means that when the module is loaded the appropriate certificates are also made available automatically. This was also (I believe) supported by the "TokenD" driver used to support macOS/Mac OS X so that enabling this driver made those certificates available and provided by the token, so no modifications to the local macOS system trust store were needed.


What happens in some cases is that new intermediate CAs are introduced and older clients' certificate stores are not updated. In this case it is a TLS server error, as the TLS server should be sending you all the intermediate certs to chain you to "DoD Root CA 2" (the root of DOD PKI -- which is also where smartcards are issued from, via different intermediate certificate authorities, prior to FPKI).


That's my recollection as well.


Ah thanks for educating me. My island at TRADOC didn't have anyone who knew those details. :P


Thank you for your work and for this comment. Regarding the last line: if you can work in the US and are not hamstrung by personal circumstances, there is no way, given the skills you imply you have, that you can't find meaningful work: health care, education, and energy all have dozens of good companies straining to find additional competent technical staff.


Thank you for the kindness. I'm in a bit of a slump right now so my cynicism is leaking.

Job offers are trivial to get. Meaning, though (a proper autonomy/feedback balance, real impact), is harder to come by. Life must be too easy for me to be such a snob. Neural fatigue is real.


I'm not sure what specific training you're talking about regarding DIACAP (which it would likely have been when you were working there; now replaced with RMF), but overall the goal of certification and accreditation is assuming risk, and the DAA (Designated Approving Authority) assumes the risk, so they need to be informed about it. More information can be found in DoD Directive 8500 (DoD Instruction 8500.02 specifically).

As far as the SSL certificate, I assume you mean: https://www.atsc.army.mil/ ? That site seems like it has a valid certificate, if you validate against the DOD PKI (now cross-signed with FPKI) root CAs:

    $ openssl s_client -CApath ./dod -connect www.atsc.army.mil:443 -servername www.atsc.army.mil
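    # (./dod here is a directory of DOD root/intermediate certs, hashed with c_rehash so -CApath can find them)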
    ...
    SSL-Session:
        Protocol  : TLSv1.2
        Cipher    : ECDHE-RSA-AES256-SHA
        Start Time: 1539109915
        Verify return code: 0 (ok)


Looks like I was remembering the wrong acronyms. It was information assurance training. We had to do it every 6 months, and like twice in a month when Snowden did his thing.

My first year there it was a goofy flash game with uncanny valley cartoon characters awkwardly telling you not to share secrets at the bar to get laid. Every year I stayed it seemed to get longer and more awkward. At some point they added a boxing minigame that didn't have any training value. Nothing was optional.

It became a goal of mine that they'd let me remake it in a way that was... not patronizing... I never found anyone who knew who to talk to to get me the project though. :(


The increased training around the time of the Snowden-based leaks was largely focused on informing people who hold a national security clearance that information remains classified even if it has been published on the Internet. There was concern that clearance holders might use their unclassified information systems to process this (still classified) material, causing those machines to need to be treated at the classification level of the highest information they process (as they are not "periods processing" machines). Additionally, the "need to know" principles still apply, and looking at classified information without satisfying that criterion could cause revocation of sponsorship for holding a national security clearance.

The training likely stressed that just because the source of the classified information was somewhere "outside" DoD, that did not change the classification. This is because changing the classification requires a Declassification Authority to act on it, which is generally the Original Classification Authority -- of which there are very few.


If you are interested in helping the US Government fix this particular trashfire, consider joining the Defense Digital Service. We work on a variety of DoD projects as part of the US Digital Service "tech peace corps". https://www.dds.mil/

If you're not ready for that level of commitment (though it's amazing work), and you're interested in being involved as a security researcher, reach out to me and we can talk about joining our bug bounty program.


If this intrigued anyone else, just a quick summary: 3-6 week interview process, no relocation assistance, no bonuses, no equity, citizenship requirement, oh and the kicker: drug testing.


Yup! We’re all employees of the federal government, so we have to meet the requirements of all Federal positions.

Honestly, you don’t do this job for the money. I took a pay cut when I joined, on top of losing bonuses and equity. You join because you want to make a real difference in people’s lives, in a visceral, real way.

I can say without exaggeration that there are people who would have died except for the work that our team had done. Even when the stakes aren’t life or death, the impact you can have working for USDS is massive compared to anywhere else. You can personally change the lives of hundreds of thousands or millions of people. That’s the kind of hook that beats equity for me any day.


> You join because you want to make a real difference in people’s lives, in a visceral, real way.

What that difference may entail varies greatly though. For one person, it might be not being blown up by that IED. For another, it might be being bombed to bits at your cousin's wedding, along with the other 40 members of your family, by a drone operator in Nevada. Very visceral indeed.

If you think that working for the military is "doing good" and the US is oh so innocent I suggest you watch the excellent documentary The Untold History of the United States by Oliver Stone [0].

[0] https://en.wikipedia.org/wiki/The_Untold_History_of_the_Unit...


I don't understand why the benefits have to be so terrible when we spend $600 billion a year on the military.


Keep up the good work. If you see Matt Cutts, say hi for me.


It's almost as if they are trying to limit their candidate pool to the smallest possible set of potential employees.


Thanks for chiming in. Out of curiosity, I had a few questions:

1.) Does DDS really pen test developmental/operational weapon systems? I'm talking about custom flavors of standalone PIT systems at the lowest embedded level, not just public-facing unclassified commodity IT systems. Maybe I'm missing something, but the projects highlighted on DDS's website suggest otherwise.

2.) How are your Blue team ops? The current RMF meta in the field strikes me as an all-Red-team party, while the Blue side of the business is pretty much always MIA. I suspect it's partly because pen testing is fashionable these days, successful outcomes can be dramatic and readily understood by stakeholders, and defensive posturing carries inherent liability without a visible impact on performance/capability when a complex system's requirements are not well understood (a compounded issue not exclusive to weapon systems), to name a few reasons.


How can one best reach you?


You can reach me at harlan@dds.mil!


I was an operator on a weapon system within the last decade that did not use encryption. I was horrified, naturally, but the explanations were:

1. Well, this is rapid deployment, we can't have everything.

2. The enemy here is fairly low-tech. Shouldn't be a problem.

Needless to say, I'm not surprised by this report.


> The enemy here is fairly low-tech. Shouldn't be a problem.

That would be perfectly acceptable if your hardware were only used for 2-3 years, against only low-tech enemies who don't have access to electricity during that whole time.


I think this could be the downfall of the US military if they ever get into a conflict with a capable enemy. They are so used to using super-complex and expensive weapons against enemies who can't really put up resistance. I wonder what would happen to the B-2 bomber or aircraft carriers if they had to fight China. My guess is these weapons would be eliminated very quickly.


> They are so used to use super complex and expensive weapons against enemies who can't really put up a resistance.

Tell that to Vietnam and Afghanistan. Historically the US does well against standing armies (Iraq for example), but absolutely terribly against low-tech enemies who don't engage in a way that allows these super high tech weapons to be used effectively.

Reminds me of this: http://www.kiplingsociety.co.uk/poems_arith.htm

  A scrimmage in a Border Station-
  A canter down some dark defile
  Two thousand pounds of education
  Drops to a ten-rupee jezail[1].
  The Crammer's boast, the Squadron's pride,
  Shot like a rabbit in a ride!
1. https://en.wikipedia.org/wiki/Jezail


I meant it in the sense of an enemy that can take on the high-tech weapons. Since the Korean War nobody has challenged the high-tech equipment in a meaningful way.


I have to quibble with that a bit. The US regularly overflew the USSR and China through at least the mid-70s, meaning our best aircraft were, in a very real sense, fighting their best air defense systems 20+ years after the Korean War ended.

There have almost certainly been satellite, submarine and other engagements too, they just aren't generally publicized by either side until 30-40+ years later.


True. However, I think in a real shooting war those aircraft could be attacked by a huge number of low-tech weapons and get overwhelmed. From what I know about warfare, large numbers will often eventually overwhelm every kind of defense. For example, could an aircraft carrier handle 10,000 incoming drones? I hope we'll never find out...


10,000 drones? How big a drone are we talking? They would have to be big enough to carry a weapon big enough to penetrate at least 1/2" steel (at the thinnest, only accessible from the side). If out to sea, a small EMP could drop them all.

Battles won by numerical superiority are usually won by defenders. If it's an invader, it's almost certainly early in the game. Even at the end of WW2, Germany wasn't invaded so much as it lost in France and Russia. The Allied rush to Berlin was an early aftermath. By the time supply chains necessary to conduct a protracted war have been committed, the true cost starts making invaders progressively less interested.

A more interesting concern is the major powers using proxies to demonstrate their new tech. If Russia sold Syria 10,000 drones, that might get interesting.


> Even at the end of WW2, Germany wasn't invaded so much as it lost in France and Russia

Sorry, no. Germany was very quickly overrun in 1945.

https://commons.wikimedia.org/wiki/File:1945-05-01GerWW2Batt... https://commons.wikimedia.org/wiki/File:1945-05-15GerWW2Batt...


What they possibly meant was that the war was already lost before Germany was invaded at all.


That certainly was true. The war was already lost while they were still deep inside Russia; the last two years of WW2 were just an attempt to stave off the inevitable.


> Since the Korea war nobody challenged the high tech equipment in meaningful way

Le Duan tried to in Vietnam, with the Easter Offensive. Despite fighting to a strategic draw, he underestimated the effectiveness of US airpower and lost 100,000 men in the field.

https://en.wikipedia.org/wiki/Easter_Offensive#Aftermath


Thankfully, the answer is: if we are fighting another nuclear power such that they are trying to shoot down a bomber that didn't invade their airspace, or sink an aircraft carrier, something has already gone horribly wrong. Pax Atomica is in effect, and there is a very good reason why all of the wars were proxy wars. Everybody knows it can only end in everyone losing.


Let's hope it stays that way but I am not too optimistic.


I think partisans are the only ones who would dare, and the only way that would be remotely deniable for intelligence agencies is if they don't have major unexplained resources, including training. Which I suppose is where cyber attacks could be useful, in the sense of having a remote chance of working without being utterly suicidal in atomic terms: if sensors go down long enough for low-budget explosive attacks, or a ship's own weapons decide they must sink their own ship. The latter really shouldn't happen if people are doing their jobs; the sheer number of munitions-handling and design sins required to make it possible would make juggling loaded guns look like the peak of caution.


Sounds like classic underestimation of your opposition.


Yup. The enemy may be the poorest of the poor, but in this day and age their entire population probably has smartphones (or at least dumbphones), and there are plenty of smart people with nothing better to do than play with computers.

There aren't many low-tech places left on this planet when it comes to computing.


The catch is that on DOD systems, encryption is very difficult to add -- that is, to get certified by the NSA and made compatible with the military key infrastructure. So it's better to avoid mentioning it unless it's forced on you. "Better" is a relative term here: I mean in terms of cost and effort to add, not security.


So since it's hard to get the rubber stamp, you just don't include encryption? That seems worse.


You're waving encrypted channels around as if they were de facto mandatory. Without knowing the ConOps of the system, how could you possibly conclude that confidentiality was an imperative? Effective acquisition of weapon systems is about balancing budget, schedule, performance, and risk -- a lot easier said than done.


> Nearly all major acquisition programs that were operationally tested between 2012 and 2017 had mission-critical cyber vulnerabilities that adversaries could compromise.

It's not too surprising, and a little reminiscent of the security nightmare that is IoT devices.

All those weapon systems come out of hardware/engineering companies with little background in software engineering and the accompanying security best practices.


Most hardware engineering companies have no idea about software. To them, software is just another line item on the BOM, like a bolt or a piece of sheet metal. Something that you need to source as cheaply as possible and stick into the package somewhere on the assembly line. Nobody cares what it does or how buggy it is as long as it meets the checklist of requirements written into the contract with the supplier.

Look at things like cable set top boxes, and automotive entertainment systems. It's like they don't care what the software is as long as some bits that some supplier sent them are flashed onto the device.


They don't know how to hire a security advisor or external team?

What I'd be most concerned about is that the procurement process is favouring companies who clearly aren't up to designing in rudimentary security, in weapons systems, ... smh.

That seems like getting clothing made and not having anyone flag that it was glued together with PVA instead of being sewn; and the company you hired not having anyone who realises that's a fundamental problem.


Meanwhile, the software companies capable of fixing these issues face internal revolt at the idea of defense contracts. Apparently inaccurate targeting systems and vulnerable firmware in equipment that is going to be deployed (regardless of protest) are better for pacifism?


There have been companies around willing to do the work for a small premium for a long time. There's also software designed for security, or for making it easier. Here are a few semi-random examples:

http://www.sis.pitt.edu/jjoshi/Devsec/CorrectnessByConstruct...

https://www.ghs.com/products/safety_critical/integrity-do-17...

https://runtimeverification.com/match/

https://galois.com/blog/

http://sel4.systems/

https://muen.codelabs.ch/

The defense buyers just don't use such companies or products in most cases. They know they don't have to, mostly due to corruption. The money they save might even get someone a bonus for achieving some metric like keeping costs down. The mass market is similar: they don't buy the stuff either. So the supply of high-security systems is extremely low, usually with a high per-unit cost as a result, and not prevalent.

Sad but true...


It's a conundrum. Do you not work on it, and have innocent people accidentally killed? Or do you work on it, and have innocent people purposefully killed?


I think it is reasonable to assume that the number of innocent people that the military is intentionally trying to kill is less than the number of innocent people accidentally killed by imprecise munitions, miscommunication, lack of verification/impulsivity, and bad sensors/intel (areas where technology helps).

For perspective, not too long ago during WW2 and following wars the best measures we had were napalm firebombs and binoculars. Civilian deaths were much higher, and friendly fire incidents were commonplace. Technology, despite dystopic appearances, has helped reduce the brutality of war.


When I was at Lockheed - we were building the RFID tracking systems they used to track various everythings all over - and they were trying to make it a part of the Port Security for every port... and even had Tom Ridge join the board...

well, I recall asking about the security of the systems (I was the IT lead and was to help design the global port tracking system with which they hoped to track all shipping containers) -- there was no encryption/authentication on any of the tags.

If you had a reader, you could read/write the tags.

They had not even thought about securing these systems -- and they were trying to tout them as a security system for weapons shipments. They even had tags with g-sensors that could tell you if a munition was dropped, or if it had armed (some weapons will only arm themselves once a certain g-force is reached, which indicates to the weapon that it has been launched).


The graphic on page 26 of the report is kind of cute: https://i.imgur.com/MWrM2i8.png

The inclusion of this graphic makes me realize the report is not intended to explain the situation to engineers. It's to explain the problem to well-decorated higher-ups who probably don't understand modern technology all that well, yet are calling all the budget shots.


The US is going to lose a war this way.


Is there any reason to believe the state of Russian/Chinese/etc. security is any better in this regard?


Russia's aging military hardware is an asset in this case, as it's not as vulnerable to electronic intrusion, having little to intrude upon.


Then there's a good argument the billions the DOD spends on its "modernization efforts" should be spent elsewhere.


US military strategy and tactics are much more reliant on high-tech advantages than other countries though. If everyone’s tech all goes down, we’re going to be hit a lot harder.


No, but all that does is ensure we all lose collectively.


That is true of any war that the US has even a remote possibility of losing.


That is true of any war that the US has even a remote possibility of entering.


Now they can queue up some multi-billion dollar contracts to fix it. I'm in the wrong business.


Lol let's do a startup


Telnet: the backbone of our Defense Industry


"Another test team reported that they caused a pop-up message to appear on users’ terminals instructing them to insert two quarters to continue operating."


A ton of commercial systems have similar vulnerabilities. Teslas have gotten hacked remotely a multitude of times over several years. People who attack/hack systems are specialized in ways that those engineers that build systems are not. None of this should be all that surprising. New recommendations on proper system design should mean future programs should have budgets to hire people to mitigate these problems. However, it should always be assumed there are vulnerabilities that can be exploited by others; any claims to the contrary should be met with extreme skepticism.


It's like, on a civilizational level, we're just begging for a scenario where we accidentally destroy ourselves.


Most of the comments outline how awful and dire the situation is (or probably is).

I'm less interested in this than I am in what we could do to fix it. Is it just more money to hire competent security engineers? Is it a more responsive talent acquisitions process that gets the right people in at the right time?


There is no motivation on the defense contractor side to do anything more than satisfy the requirements of the contract. And any R&D spent should result in an interesting demonstration that brings in more business.

Standard operating procedure would need to change so the government entity has security as a requirement, details on how the requirement can be satisfied, and a bunch of money to pay for it.

So tack on $X million for each contract to have a 3rd party audit the code, documentation, and hardware for security vulnerabilities. And an added maintenance contract to fix any future vulnerabilities for the lifetime of the program (20+ years most likely).

From the higher-ups' side, what do you get for all that money spent? No new functionality, no fancy demos. It's going to be hard to convince them security is important when they can fund something they view as more critical or more interesting.

EDIT: To answer the question of what can be done, I think it'd require a culture change on the contracting side. The engineering side of the house is mandated to only do work that relates directly to the contract. The hours bid will likely be for the minimum necessary to satisfy those requirements. You can create a new interface, but you won't have the time to do any fuzz testing for example.


I guess my question then is why have a computer attached to these systems in the first place, or if you must, why not make it as dumb as possible? Why include more points of failure?

Also, I couldn't help it: the DOD plans to spend $1.66 trillion on these systems! Perhaps instead of making newfangled, more complicated devices that will have tenfold more vulnerabilities to catch, we should stick with the machines we have and harden them. I imagine it would save us loads if we just did that.



Good luck closing the barn door after the horse has bolted: https://www.wired.com/2011/11/counterfeit-missile-defense/

I am no military expert, but it seriously looks like China has us in a stranglehold.


$1.7T is a lot of money just to protect your major investments in killing people efficiently. Modern society I guess.


Are these remotely activated systems that are at risk (like drones)? If not, why is any weapon system that doesn't need remote activation actually plugged into a public network?


If I were the Russians, Chinese, or North Koreans, I would heavily invest in offensive hacking capability. Oh wait, they're already doing that.


Silver lining: when the DOD find good ways to harden their systems, we can all copy them.

Cloud: it's probably unplug the aerial / network cable


I'm afraid they need to catch up with the rest of the world before advancing the state of the art.

The best case scenario is a quick cultural shift, with awareness of computer security threats overflowing from the military to laws and society in general.


You'll see things here that look odd, even antiquated to modern eyes. Phones with cords, awkward manual valves, computers that barely deserve the name. But all of it is intentional. It's all designed to operate in combat against an enemy who could infiltrate and disrupt all but the most basic computer systems.

Of course, those attitudes have changed through the years and Galactica is something of a relic. A reminder of a time when we were so frightened by the capabilities of our enemies that we literally looked backward for protection. Modern battlestars resemble Galactica only in the most superficial ways...


No networked computers on my ship.


No networked computers? I do not believe you. Even my little soft-skinned two-person wheeled command vehicle has a network of about twenty discrete computer systems, such as multiple radios, GPS, displays, input terminals.


Your car probably has a dozen networked computers, unless it's a really old one.


Reminds me of Battlestar Galactica, where all the ships in the fleet get hacked by Cylons, have their shields taken down, and are promptly destroyed, but Galactica survives because its computers aren't networked.


There's a reason it reminds you of it...


I know the quote is from the series.


That's even more confusing! :)


No shields, and this is a direct quote... from a character... who's a Cylon!


*If by some chance you haven't seen the BG reboot by now, this is a bit of a spoiler. :)


Surprise... only one new 'modern battlestar' survived... because it was offline.


.. and now I need to re-watch the series again. Or at least the opener.


GDC4S (now General Dynamics Mission Systems) and NICTA have been working on seL4, and it at least seems that USDOD has something to build on, if they want to start providing assurances of some form on weapons systems.

They'll really have to set the passwords properly though.


What's eyebrow-raising is that it's been used as a para/virtualization platform for Linux. (Ordinarily, SELinux MLS/MCS is pretty good though.)

If something like Minix 3 "NetBSD" in Rust ran on seL4, that would inspire more confidence.


> If something like Minix 3 "NetBSD" in Rust ran on seL4, that would inspire more confidence.

Yeah, I've been thinking about that for a while. There is Genode/seL4, but it's hard to say if it makes as much sense.


Not to play the whataboutism card (proceeds to play whataboutism card), but has anybody pen tested the Soviets' or the Chinese systems?

Just thinking this isn't a U.S. only problem.


"The Soviet Union... I thought you guys broke up?"

https://youtu.be/yFNRlvEh7ok


They are surely much worse, but the US has more to lose.



