I worked with Nic on and off for almost his entire tenure while I was CTO of Kessel Run, and I can state with full confidence that this is, at best, him misrepresenting his importance and the problems with DoD IT; and at worst, it's his attempt to spin his being fired (or asked to resign, à la Nixon) by the incoming Secretary (the timing here is not just a coincidence).
A couple of core points that are important to keep in mind, none of which have anything to do with Nic's character, integrity, communication style, or technical capabilities (a separate and important topic, but not suitable for this public forum IMO):
- The CSO position was made up by him; it's not related to any GSA Schedule, and it had about the kind of charter you would expect for such a position: namely, ill-defined and loosely empowered.
- There was no Office of the CSO in the sense that it was not congressionally funded and had no budget, no personnel, and no real authority for writing or implementing policy, or for actually doing engineering.
- Nic never held a clearance, and as a result was never actually involved in, or even aware of, most of the programs he intended to impact.
- His primary mission seemed to be to push any organization that was developing software for the USAF to immediately adopt microservices architectures, containers/kubernetes and a couple of very specific "DevSecOps" practices - and specifically to the specifications that he approved/suggested. Make of that what you will.
That said, a lot of what he says is true: IT/network infrastructure, development, test, etc. in the DoD are far from modern and in some places completely broken. In other places, where it matters a lot, it's like nothing you've ever seen, or will likely see, in the commercial sector for decades.
Bottom line, I suggest taking this tirade with an EXTREME amount of salt.
I'm sitting here right now on a Friday afternoon, while the Air Force is on a four-day weekend ahead of Labor Day, trying to deploy to a broken-ass application that his DevSecOps reference architecture forces everyone to use. It doesn't work because it relies on a checksum algorithm disabled by the FIPS-compliance hardening in this environment, over which we have absolutely no control. The biggest impediment to even getting this far was another vendor enterprise service he forced us to use, which was broken until July; we were stuck just waiting on a bug fix. And, of course, we have to use Iron Bank container images for everything, but Iron Bank container images are themselves perpetually broken. They do security hardening but no functionality testing, and their practice of pushing breaking changes to the same tags can break you in production unexpectedly. And you can't pin to the actual SHA digest, because Harbor only holds onto five orphaned digests at a time that don't correspond to a tag.
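For what it's worth, the checksum failure above is a classic FIPS-hardening trap: FIPS-mode OpenSSL refuses non-approved digests like MD5, so any code path that uses MD5 even for harmless fingerprinting dies at runtime. Here's a minimal sketch of the usual mitigation; the `usedforsecurity` flag is a real Python 3.9+ feature, but the SHA-256 fallback is my assumed fix, not what the broken application actually does:

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    """Hash bytes for a non-security fingerprint, surviving FIPS mode."""
    try:
        # Python 3.9+ lets callers declare a non-security use of MD5,
        # which some FIPS-mode providers will then permit.
        h = hashlib.md5(data, usedforsecurity=False)
    except (TypeError, ValueError):
        # Older Python, or a provider that still refuses MD5 outright:
        # fall back to SHA-256, which is always FIPS-approved.
        h = hashlib.sha256(data)
    return h.hexdigest()

print(file_fingerprint(b"abc"))
```

On a non-FIPS box this prints the familiar MD5 hex digest; on a hardened one you get either the permitted MD5 or the SHA-256 fallback, instead of a crash.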
He's touting a lot of accomplishments here that are only accomplishments because pushing broken functionality and calling it done is a very easy way to say you delivered faster than a normal DoD program that actually has to prove it does what it says it does.
One reason might be that they are receiving some kind of compensation from that vendor. A more charitable explanation would be that they want to cover up a big mistake by loudly affirming that it was not a mistake, and you are in the way of that. Either way, the fact that they don't really want to know whether it was a mistake, and if they know, don't want to fix it, tells you all you need to know about them.
To me this sounds like the fundamental difference in approach between "agile" and "waterfall" (and underscores his accusation of the DoD using broken "water-agile-fall"; which is more commonly called "water-scrum-fall", but I digress). You can't use an agile process for a project whose contract demands requirements (and all that implies) up front. It would require a complete revamp of the entire DoD contracting process. Which may or may not be something that can or should be done, but with the understanding that it IS the way it is, for a very good reason: to prevent fraud. Which was a painful learning experience going back to the very beginning of the Federal practice of providing national defense. The current system is NOT perfect, but you don't throw the whole system away in pursuit of "cost savings" or you'll attract every grifter and con artist out there into this business looking to scam every penny they can from taxpayers.
>He's touting a lot of accomplishments here that are only accomplishments because pushing broken functionality and calling it done is a very easy way to say you delivered faster than a normal DoD program that actually has to prove it does what it says it does.
100%. But dang, wouldn't it be nice if we really COULD do all this? And given my limited exposure to P1 and the traditional delivery systems, it seems to me there's a very worthwhile middle ground to pursue. He didn't seem interested in pursuing an incremental approach; it seemed very much like an all-or-nothing mission to him.
The thing is, at least for software dev, DoD projects don't know their software requirements any more than private sector ones do.
Plus, the current acquisition system is by no means more fair or less riddled with fraud. The WWII model of much easier acquisition regulations run by bureaus or offices with deep technical expertise and independent auditing did quite well.
Just because we intend to reduce fraud with a given set of policies doesn't mean we should assume that the policy actually works, or that the other problems such policies introduce are to be ignored as a result.
Who can solve this? There is no common authority. Theoretically, that is what the DNI was supposed to be for, but they don't have any kind of IT expertise.
The issue with Iron Bank itself is a lot more structural. We report it every time a specific container breaks, and it eventually gets fixed on a report-by-report basis, but the base issue is they're doing two things completely wrong:
1) The process requires disconnected builds in the container build stage to avoid pulling in dependencies from the Internet. Combined with a lack of software-building expertise on the container hardening teams, this results in them usually pulling down the upstream official container and naively copying the desired executable from it into a UBI base image. That happens to work for dynamically linked executables when upstream and UBI have the same glibc version, usually right after a UBI release. Then it later breaks spectacularly, and when the issue is explained to Iron Bank engineers, they don't understand it and simply call it fixed when UBI releases again and glibc happens to align with upstream.
2) They push functional changes to tags. You can pull some <image>:8.4 one day, and the next day the exact same tag will have different environment variables set, different paths, and executables removed (notably, the jq image used to include aws-cli for some reason; it shouldn't have, but once it did, you can't just remove it). Normally, the fix for this is to pin to a SHA digest instead of the tag, but Harbor doesn't hold onto the digests once you have republished to the same tag five times, and Iron Bank is continuously rebuilding those tags. We've reported it, and Iron Bank technical leadership is at least aware this is a problem, but figuring out how to fix it has never been a priority for them.
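To illustrate why digest pinning is the right fix here (and why Harbor garbage-collecting old digests defeats it): OCI image references are content-addressed, so a digest is just the SHA-256 of the manifest bytes. Any republished content necessarily gets a new digest, while the tag silently moves. A toy sketch of that property; the image name and manifest bytes are made up, and this is just the hashing idea, not a registry API:

```python
import hashlib

def manifest_digest(manifest_bytes: bytes) -> str:
    # OCI/Docker registries address an image by the SHA-256 of its
    # manifest, written as "sha256:<hex>".
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

# A mutable tag is just a pointer; republishing moves it silently.
tag_to_digest = {}
tag_to_digest["jq:8.4"] = manifest_digest(b"manifest-with-aws-cli")
pinned = tag_to_digest["jq:8.4"]  # what you'd record in your deploy spec

# Iron Bank rebuilds and republishes to the same tag...
tag_to_digest["jq:8.4"] = manifest_digest(b"manifest-without-aws-cli")

# ...so pulling by tag now yields different content. Pulling by the
# pinned digest either returns the original bytes or fails fast when
# the registry has garbage-collected them (the Harbor problem above).
assert tag_to_digest["jq:8.4"] != pinned
```

Failing fast on a missing digest is still strictly better than silently running different software in production.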
Is your experience that normal DoD software programs have to actually prove they do what they say they do? Because that has not been my experience at all in the Navy.
We work on something for years until political and leadership support for working without concrete progress can finally no longer be maintained, throw whatever we have at that point at the users, and then force people to use it because "we've already invested $XXM into it and worked on it for years and all the rest of our plans assume that this system is there."
When you see people above asking "what are the examples where we're a decade or more ahead of the commercial world," satellite imagery is one of the places where we're a decade or more ahead of the commercial world. I don't know if you recall Trump accidentally declassifying a collection on Twitter a few years back and imagery experts skeptical that it actually could have been taken from space given the resolution. That is not even remotely the tip of the iceberg of what we can do. I would not have even believed what we can do until I saw it.
So I actually am used to being in an environment run by technical experts in physics and image science who know exactly what they want, write down clear, specific, and exacting requirements, and actually meeting those requirements.
But you know what? We had the infrastructure to do it. A complete clone of the operational system with a cloned production data flow, putting the next release through exactly the same load, so we could detect the differences and any bugs immediately. Actual continuous integration because there was somewhere we could integrate to. Then I come to Platform One and it's here's an AWS account and some buzzword tech products that say they enable GitOps and CI. Figure out how to use them and stand up your own servers from scratch.
And you know what? I'm a confident person. I perfectly believe I can do that. But not very quickly. And if my five person team is supposed to do design, development, test, operations, and maintenance all by ourselves, it is never going to happen quickly. Process can't save you from underprovisioning of resources. But apparently the government is just not willing to spend money any more. It's hard to see how. It's not like the federal budget is getting any smaller. But I have no idea where that money is going. At least some of that used to be going toward extremely superior radical technology that the public is skeptical of because it's all classified and they don't hear about it, but trust me, it's there.
Even with P1, I would point to their rapid provisioning of MM chat as an example of where they were able to move quickly... but that was the P1 team fielding it, not P1 fielding a platform that downstream teams can jump onto quickly.
Even now there's been little point to trying to adopt P1 directly, as I understand it you're supposed to onboard through a separate software factory instead. And that's hard to do in the Navy lol (we're trying with things like Black Pearl to build on top of what P1 has done but that launched a year ago and there's still little ability to onboard new efforts).
> But apparently the government is just not willing to spend money any more. It's hard to see how. It's not like the federal budget is getting any smaller. But I have no idea where that money is going.
The federal government spends so much on IT that it's well into "waste, fraud, abuse" territory. I don't know where it goes either, but I'd guess it's 20% labor for staff who are no-value-add middlemen, 30% cybersecurity paperwork documenting the lack of security, 30% PowerPoints documenting the lack of progress, and maybe 10-15% actual technology/cloud/development.
I'm heartened to see that someone is able to do better (even if they need the 'classified' shield to keep the technicals focused on technology) but I really wish we could hurry up on replicating the IC / Kessel Run model to a wider part of the DoD.
That's something I'd really like to see. How does that kind of difference come about? My guess is that it requires a certain degree of funding and commitment that may be impossible in wallstreet companies. But what else does it take for an organization to get there?
Would love to hear more about this, as it sounds like you are implying that their practices and technology are significantly ahead of the commercial sector.
I honestly don't know how he'd ever expect to succeed in a job like this without a clearance. Without it, he'd have little practical knowledge of how the actual air gapped networks (SIPR, JWICS, etc.) are designed and implemented.
Even if he didn't know the classified technical details, there are a lot of differences between deploying software in these environments vs. the wide-open public internet.
From experience, the type of exploits that would be devastating to a big company just aren't a big risk for most apps on these networks. OTOH, classified apps have all sorts of bureaucratic rules that the unclassified world doesn't have to consider.
For example, if this guy is coming from Silicon Valley, he might spend 80% of his time worrying about exploits that cost corporate America billions a year: exploits of unpatched systems that become targets for script kiddies pounding every IP on the internet, again and again, looking to take advantage of dozens of known vulnerabilities.
On the classified networks, these types of exploits just aren't as likely. There aren't fleets of bots in China and Russia trying known root passwords on every machine with port 22 open, or trying to access open Mongo or Elasticsearch instances across whole IP ranges.
On the other hand, a private sector SecOps guy isn't even considering that you absolutely cannot store multiple big programs' databases in the same data center, staffed by the same contracting company. There's a huge risk that some rogue IT contractor with general root access in some small signals intelligence "datacenter" in Hawaii or Germany will end up with access to way too much sensitive data shared by multiple intel agencies and military branches, and it'd be disastrous if he decided to start taking raw disks home with him each night, à la Snowden. The Air Force and other DoD agencies certainly must ensure that a single contractor doesn't have access to too many data sources.
And with everyone moving as fast as they can to AWS, Azure, etc., I'm guessing these types of procedures are far more important to the Air Force than worrying about some development team deploying an app on an older Docker image based on an old version of Java or Python with a few known exploits, something that could cost a private sector company a lot of pain and money the minute it was deployed on the open internet.
Especially in the military, it seems entirely plausible that bad people will gain access to your equipment.
Can you refer us to your opinions on the matters expressed in the article, in a public-friendly fashion?
So, indeed: given the size of the USAF, AND all the contractor developers they depend on, and the command structure of this combined "organization", I wouldn't want to contemplate the level of "sufficiently empowered" that would be required to change that in 6 months. You'd have to summarily fire a LOT of people, magically find skilled replacements, and then magically overcome the knowledge gap, all while maintaining operational readiness for warfighters.
That's not true for many other specialties in the military, where the officers will have extensive experience in the area they are leading.
Doesn't solve anything by itself, but it would help.
Fred Brooks (a name some of you might recognize) recommended, in a 1987 report for the Defense Science Board on problems with DoD software, that the DoD needed to improve its uniformed and government software expertise. Even there he said such personnel would likely have to be heavily supplemented by contractor help -- but at least they'd know how to do the technical management.
Fast forward 30+ years and there are pockets of talent but if anything the problems are larger as digital products have become more important to business operations around the world.
Recent example from Reddit on DoD IT: https://i.redd.it/dyp2nyaca5l71.jpg
Do you have any other articles/materials that we can reference for additional information related to this topic?
It’s weird how the federal govt is like this across the board. Most things are “fine” being held together with bubblegum and duct tape. Some things matter a lot though, and when they do you get to see really smart people apply themselves in ways that are cooler than the movies.
> One of Chaillan’s main concerns is incorporating security into software development, a practice known among IT professionals as DevSecOps. With a lack of basic IT infrastructure, implementing DevSecOps has proven difficult, he said. What’s more, there has been some resistance among those used to the more traditional approach of considering security after development and operations.
We're standing up basically everything ourselves from scratch. The mandate was basically "we have a critical need for a new capability. Here is an AWS account and five developers, so make it happen." That's it. So everything from standing up CI/CD pipelines, to building out a cluster, to configuring storage and networking, to writing and testing the application code, to maintaining environments and deployments, is falling on us, with no support.
I'm not going to say what the product is for reasons of OPSEC, but it is inherently a product that has extremely high security needs. Yet in the rush to be able to tell some high-ranking people we have put an "MVP" in production, we've skimped in every which way it is possible to skimp. I am aware of so many holes in the system, but Air Force pen testers didn't find them, so our product manager is being pushed to go forward and we'll worry about security later.
To my mind, this is absolutely unacceptable for a critical defense system, but nobody is asking my opinion. Supposedly, we keep being told we'll lose funding and get the plug pulled if we don't hit some important milestone at some exact date. By being "agile," we can deliver a broken, insecure "MVP" and "iterate" on it until we have a real product that actually meets its requirements.
You can't do this crap with defense systems. This isn't Etsy. Deploying broken shit has far different implications than when all the exemplars from the DevOps Handbook do it in order to find all their bugs in prod and turn their users into beta testers.
And on the legacy, non-cloud side of things… it's a horror show. No CI/CD. No testing (a lot of my job was bolting awkward test harnesses onto existing legacy software to compensate). Inconsistent and ever-changing project management systems (they switched from TFS to Jama to Azure DevOps to Jama again, and when I left they were talking about moving to JIRA). Our co-contractors were insanely unqualified; they were really proud of how cutting-edge they were for adopting git for VCS. In 2019. It's crazy how bad all of this software is, but at least it wasn't on some internet-connected server before.
Yes, even weapons consoles! I served on a submarine whose fire control system would literally crash and reboot if we encountered a situation that was especially frequent for the class of boat I was on. We worked around it, like the users always do, but please do not confuse 'DoD unique requirements' with 'Etsy can't do this'.
> You can't do the same thing with a weapons platform control module. You need to test the hell out of it and know for certain it works, in every possible edge case. Releases to production are heavily gated for a very good reason.
Do you really think Etsy doesn't run tests?
Let me tell you as someone who has run reactor restart procedures a hundred times on a real live submarine, really under the water when the reactor was scrammed: the best way to get good at something is to do it a lot. Whether that's starting up a reactor, or deploying to production.
Remember that the term SNAFU came from the military; watch some WWII through Vietnam depictions of it: Before the modern era of its glorification, the US military was synonymous with absurd, screwed-up systems and policies that the soldiers overcame with chewing gum, duct tape, initiative and a sense of humor. (Some say the reputational change is due to the shift from the draft, which caused a wide segment of the population to be familiar with the military, to volunteer professional personnel, which results in most people having no clue about it.)
I'm not saying it's a good thing or that it shouldn't be improved, but the military (and every large institution) has always had a lot of that crap. I remember a Marine officer telling me never to fly in one of their tilt-rotor aircraft unless I saw a lot of hydraulic fluid on the ground - because if I didn't, then it was out of hydraulic fluid. As he explained, they go to war with - their lives depend on - tools made by the lowest bidder.
It sounds like you need to have a very serious talk with your management.
I really hope that leads to a resolution of the problem.
If not, lives and nations sometimes depend on these systems working, so you might be obligated to ensure that the right people understand the problem.
He means the initiative to provide in-air updating of the surveillance payload in response to tasking. Probably ELINT-related.
Edit: He's probably talking about this https://mobile.twitter.com/WILLROP3R/status/1318161379304591... and this https://www.thedrive.com/the-war-zone/38162/u-2-spy-plane-ta...
1) If the military gets it right with anything, it's encryption. This isn't connecting to the aircraft over the Internet using Verisign PKI. You're not gonna man-in-the-middle inject your own code into the update. The only attack vector is the software supply chain itself, but that is already an attack vector regardless of how the software gets loaded.
2) Part of the purpose of being able to do something like this is to push new software capabilities to platforms that can't be brought back to manually do it at all, like satellites in orbit. A software update that doesn't require you to launch a new rocket into space can save billions.
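As a general illustration of point 1 (the actual military key material and algorithms are classified, so this is emphatically not how it's implemented): an update channel authenticated with a key the attacker doesn't hold means a man-in-the-middle can't forge or tamper with the payload, because any modification invalidates the authentication tag. A minimal sketch with Python's standard library, using a symmetric HMAC where a real system would use asymmetric signatures so the verifier holds no signing secret:

```python
import hashlib
import hmac
import secrets

signing_key = secrets.token_bytes(32)  # held only by the release authority

def sign_update(firmware: bytes) -> bytes:
    """Produce an authentication tag over the update payload."""
    return hmac.new(signing_key, firmware, hashlib.sha256).digest()

def verify_update(firmware: bytes, tag: bytes) -> bool:
    """Check the tag in constant time so the comparison itself leaks nothing."""
    expected = hmac.new(signing_key, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

update = b"new surveillance payload build"  # stand-in payload
tag = sign_update(update)

assert verify_update(update, tag)
# An attacker who tampers with the update in transit can't produce a valid tag:
assert not verify_update(update + b"malicious patch", tag)
```

The remaining attack surface is exactly what the comment says: the supply chain that produces the payload before it's signed.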
What gives you this confidence?
Beyond your valuable input, I will make the general observation - I have no personal knowledge of military IT security - that keys are necessary but not nearly sufficient for effective security. Also, other high-priority assets have been compromised, such as a key nuclear warhead design.
Basically: use a sufficiently secure source of randomness to produce a couple of copies of an enormous (terabytes, probably) one-time pad. When the sub comes to port, deliver the drive using armed guards. When sending or receiving a message, use a section of the random data to XOR your message. If each section is used only once, none of the data can be decoded by anyone without the one-time pad.
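That scheme really is information-theoretically unbreakable, provided the pad is truly random, kept secret, and never reused. A minimal sketch of the XOR step (the pad size and message here are obviously toy values, not operational parameters):

```python
import secrets

def make_pad(length: int) -> bytes:
    # In practice this would be terabytes from a hardware RNG,
    # couriered under guard; a few bytes suffice to show the idea.
    return secrets.token_bytes(length)

def xor(data: bytes, pad: bytes) -> bytes:
    # The same pad section must never be used for two messages:
    # reuse lets an eavesdropper XOR two ciphertexts together and
    # cancel the pad out entirely.
    assert len(pad) >= len(data), "pad section too short"
    return bytes(d ^ p for d, p in zip(data, pad))

pad = make_pad(64)
message = b"SURFACE AT 0400Z"

ciphertext = xor(message, pad)   # sender XORs with an unused pad section
recovered = xor(ciphertext, pad)  # receiver XORs with the same section

assert recovered == message
```

The catch, as the comment notes, is entirely logistical: generating, distributing, and tracking consumption of the pad.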
I also imagine the NSA is willing to give an honest helping hand to the US military. Whereas any other entity has to hope any help from the NSA is honest, and likely doesn't include the best known bits.
Developers who know how to do this are relatively scarce. The military almost certainly does not have enough of them.
> Developers who know how to do this are relatively scarce. The military almost certainly does not have enough of them.
FYI: the NSA is part of the DoD. They most certainly have plenty of people who know how to do encryption properly, and securing military communications is also part of their job.
> The National Security Agency (NSA) is a national-level intelligence agency of the United States Department of Defense... The NSA is also tasked with the protection of U.S. communications networks and information systems.
I'll be the first to concede that the military is significantly behind on modern digital product management, cloud architecture, and software development.
But that doesn't extend to key management and distribution or communications security in general, which is something we're pretty damn good at.
I don't think it's highly trained at all!
What kind of training do major tech companies do? I've never done any in my career, outside my degrees, and not everyone does that even! Is that unusual?
Contrast that with the military, which is obsessive about training and invests a huge amount of time and effort into it throughout your entire career.
So who are we taking lessons from here?
I wouldn't say it's unusual for most tech employees to fail to avail themselves of the training opportunities in their company. But that's not because of a lack of opportunities, I would say.
Many licensed professions have mandatory continuing education requirements -- you're required to take a dozen-ish hours of classes in your field each year -- so there's a pretty big cottage industry of providing and discovering those opportunities. Since CS isn't licensed, there are fewer of those around -- although there are some, and if you work at a large company, you might even get periodic mailings about them.
Instead, a lot of training in the CS fields tends to be a mixture of hands-on training, learning about stuff outside the company (and potentially conveying it to other people internally -- that's how we learned about containers), and going to seminars or conferences. It's worth noting that if you work at a large company, there's a decent chance the company has already budgeted a certain amount per person for training, so if you find a conference that's not totally irrelevant to your interests, you can probably get manager approval to go, fully paid by your company.
In some services, even the uniformed staff with IT training will have last had that training 10-15 years prior, with all their skills having atrophied in the meantime.
The underlying military organizational management assumption is that operational experience translates to general management effectiveness (e.g. "I commanded a ship of 300 Sailors doing a wide array of jobs, so I can lead an agency of 500 civilians doing a wide array of jobs"). With the right support this can work in a heavily digital office until the leader can pick it up with on-the-job training, but it's not the way you'd plan to do it.
This is why the nuclear Navy requires even their senior supervisors to be technically competent to the task of leading nuclear organizations. Navy offices dealing with aviation are led by naval aviators. Shipbuilding offices are led by officers with a career operating surface ships and submarines, etc. etc.
But the bench of uniformed talent who have been groomed through a career of working on software teams to lead future software factories is very small. That will change over time as the services work to define specialized communities and agencies to build that bench.
But yeah, as an entering assumption you'd like your digital offices and agencies to be led by people with current digital expertise, ideally through a career actually shipping code, and at least part of that time within the DoD ecosystem.
Mine provides open access to a ton of online resources as well as maintaining a regular budget for developer-initiated things like going to conferences/seminars or buying books. It's actually rare that the training budget gets fully spent, but on the other hand I've never had a request turned down.
For a while, my employer was even footing my college bill when I decided to back for my MS. That one came with a contract to stay on longer to "pay" them back, but that was fine because I had no plans to leave.
If you work in a risk-averse industry like banking or aviation, you're not likely to experiment or learn anything new on the job.
This linkedin post seems way more... balanced... than TheRegister.com implied.
> While we wasted time in bureaucracy, our adversaries moved further ahead.
Zoinks! This matches my experience working in defense and is one of my biggest fears.
> I am becoming “technology stale”.
> The DoD is still using outdated water-agile-fall acquisition principles to procure services and talent
So glad that I left the industry. It's infuriating too because it's not a matter of if, but when. When the US faces a determined and modern adversary, the ones paying the price will be the men and women who serve in the military. It won't be the Pentagon brass or defense CEOs paying. This shit keeps me up at night. Worst of all the government has known it's a problem for decades if you read the Defense Innovation Board reports.
> Nothing is changing: most of this has been said before and the 1987 DSB report on military software pretty much says it all.

What is it going to take to actually do something?
One of the most efficient ways to balance the scales is by taking away the smartest and hardest-working top of the population through immigration.
From his LinkedIn post... this really is the crux of the matter... they want to whitewash security, not actually implement it.
Which is to say, the upstream and downstream didn't change how they do things at all, and somehow developers acting differently is supposed to convert everything to agile.
Or to put it another way, this is what you get when you tell everyone they need to "do agile" without actually retraining people on what that means and update processes to enable it.
Source: experience with healthcare "agile" and "sprints"
Sounds like my corner of the private sector.
Hire a scrum master! That will fix it.
I can't speak for Chaillan, but as a military member who led an agile software development team similar to his during the same timeframe, I think he's referring to DoD's fondness for buzzwords.
Because "agile" is the new hotness, every DoD office and vendor tries to slap the language of agile onto a waterfall model. See this wonderful report from the Defense Innovation Board on "Detecting Agile BS": https://media.defense.gov/2018/Oct/09/2002049591/-1/-1/0/DIB...
> The DoD is still using outdated water-agile-fall acquisition principles to procure services and talent instead of leveraging “Capacity of work” agile contracts to staff teams. Improving acquisition ensures teams have the ability to groom their backlog and move at the pace of relevance. Only Platform One, and teams like Kessel Run, are truly end-to-end agile, from what I have seen to-date.
I don't know what "water-agile-fall" is exactly, but he's probably referring to how things are done in the Air Force. Maybe he means that the waterfall model still exists, a bunch of people are trying (unsuccessfully) to convert to agile, and he's only seen agile properly happen in a minority of projects.
Typically the way the government does this is by not contracting for staff at all, but instead contracting for a company to develop a software product that meets a long laundry list of requirements. Since 'agile' is in the DoD zeitgeist, these contracts often throw in some agile buzzwords like "story points" instead of "work breakdown structure", or split the product into some kind of phased delivery scheme. Sometimes they'll throw in a requirement to do user-centered design.
But invariably these contracts lay out the same thing: develop a product that meets X requirements. This means you have to know the requirements. You can't instead hire smart people to help explore a problem space, do interviews and experiments to determine whether you've achieved product-market fit (or what we'd call mission-market fit), and only then to start fielding a digital product in increments.
This is considered a 'services contract' and these are frowned upon because they seem ripe for fraud ("you paid BigCo $40 million a year to do 'market research and product discovery'????? What did they discover? Why couldn't it have been $20M instead??").
TL;DR: The type of services you'd contract for are different for a modern digital product compared to a waterfall-style project. Even though DoD says we're doing agile, the reality is that the contracting system still pushes you hard to waterfall because no one seems to have the expertise to do 'agile acquisition'.
Isn't this (general pattern) what led to the creation of the USAF as a separate military branch from the Army?
Perhaps we need a new military branch - The U.S. Software Force!
It's sort of like people who are both awesome software developers and good managers. Those qualities often do not overlap, but they do sometimes. If you can afford to be selective enough (which is rare), you can check both boxes for everyone you hire.
(I've been out of this area for a few years, so my perspective might be a little dated, but I doubt it has changed that much)
Pilots aren't simply "selected". You have to get through multiple gates to become a pilot in the USAF. Most of those gates involve demonstrating some degree of devotion and/or skill at flying (for example, having a private pilot's license before competing for a pilot slot is a really good idea).
Having said this, pilots for the most part either end up in combat roles (e.g., fighters, etc.) or in leadership roles (as in, you have a whole crew for which you are responsible). Furthermore, pilots are officers, and all officers are expected to be effective leaders. So sure, leadership qualities are one of the things you look for - because you look for them in all your officer candidates. Now, you may not agree with the personality traits identified as leadership traits. In general, it is true that the military tends to favor personality traits over management skills (the argument being that management skills can be learned, but some innate personality traits cannot). They judge that things like "likeability" and "ability to get others to trust and follow you" matter.
And here comes the backwards part. General officers are selected for their perceived ability to understand the mission of the USAF and move it forwards. This requires leadership skills and so is biased towards those with those skills. But there is also a general belief that the people who have most directly been involved in executing that mission are the people who are best positioned to lead that mission. In this case, being a "rated" officer (this used to be pilot/navigator/missile launch officer, but now seems to include a couple of other designations) actually dramatically improves your chances to make O6+ (Colonel -> 4-star General). So it isn't that you are selected to be a pilot because they think you'd be a good General - they think you'd be a good General because you've been a pilot.
A final note - while all officer candidates are selected based on leadership skills, there are other factors that are also considered. For example, if you are competing for a technical slot, having a STEM degree is generally a requirement. But traditionally, the rated slots didn't have any particular educational requirements (other than a 4-year university degree). As a result, pilot candidates generally just have two things in common:
* Those personality traits
* A demonstrated commitment to become a pilot