Source: I've seen more than one of these external “transformation” efforts.
“Through an eight-week Red Hat Open Innovation Labs residency, Lockheed Martin Aeronautics replaced the waterfall development process it used for F-22 Raptor upgrades with an agile methodology and DevSecOps practices that are more adaptive to the needs of the U.S. Air Force.”
I used to work for Lockheed Martin. I believe it.
I found it so hilarious I was relieved when I finally got caught up in the layoffs.
A huge understatement. It's the only true 5th-gen fighter that's tailored for performance rather than cost savings (à la the F-35). The others are completely unproven (Chinese) or both unproven and in extremely limited quantities while not providing true stealth (PAK FA, though if anything, the Su-35 family is the closer analogue).
> Service officials had originally planned to buy a total of 750 ATFs. In 2009, the program was cut to 187 operational production aircraft...
It may have looked more expensive on paper, with ideal assumptions, to produce lots of expensive F-22s instead of a few F-22s and lots of cheaper JSFs. But in practice I bet the costs would have equalized, due to the increased volume of F-22s and the development of extensive institutional knowledge of that airframe - from manufacturing to maintenance to piloting/operating.
In addition, the threats anticipated that led to its development never materialized.
That's why it was cancelled.
What computing technology can't be put into the F-22? IIRC it has a much bigger radar housing, and EO DAS (a fancy term for 360° IR / situational awareness) can be added to it.
To answer the parent's question - the F-22 was used for air-to-ground in the Middle East, so it can definitely fill that role. There was also an attack plane/bomber variant proposed.
With that said, I don't think "it's old" holds.
Retrofitting new computing technology into an aircraft is a huge deal. Just getting a data link into it so it can talk to the rest of the DOD has taken over a decade and I’m not sure it’s even complete yet.
Yes, it’s old. That doesn’t mean it’s not useful - the F-15 is far older, still in use, and will be for decades - but it was more cost effective to put the money into the new platform instead.
I completely understand that it interfaces with the helmet. I also know that it took ages for the F-22 to get a simple FLIR built in, but if even a fraction of the F-35 resources were directed at the F-22, this would all be more than doable quickly.
As far as I am concerned, the selling pitch of the F-35 is the STOVL and EODAS. At that point the F-22 could be fitted for carrier operations, because no one uses VTOL (or plans to) on the F-35 anyway, except for moving it around parking lots with no ordnance.
Again, this is ignoring all the political info.
Also, to get more specific, I don't know of any "sensors built into its skin" - EODAS is just a bunch of little pods with thermal cameras and fancy computing. Obviously don't tell me if it's something that's not public knowledge.
The main function of the JSF is situational awareness. The F22 doesn't even have a functional data link yet. JSF has all of it built in from the beginning. Everything you add to an airframe costs millions of dollars, and there's no point in doing it.
STOVL is an important part of the JSF program. The F22 isn't capable of doing STOVL or carrier operations, you don't refit a non-CVN capable aircraft for carrier operations.
F-35's MADL wasn't added to F-22 because Air Force deemed it "not ready to use" and cited maturity problems with the whole stack.
A lot of JSF sensors are there to patch over its horrible pilot situational awareness, mostly a legacy of the STOVL variant (which is responsible for most issues) and which exists only because the USMC needs it to fight the Imperial Japanese Army at Guadalcanal. A few other purchases of the F-35B happened because building a proper carrier would result in a political shitstorm (Japan), or because the F-35B was supposed to make carriers cheaper to build, and when its issues became known it was too late to refit the carrier with a catapult (UK).
As for refitting a non-carrier aircraft to be a carrier aircraft - the F-18 is the best-known case, and its competitor was a navalized F-16. As far as I know, there was a carrier variant of the F-22 in the works as well.
The JSF was designed around sensor fusion to give the pilot situational awareness like no other aircraft.
STOVL has been used by the USMC in every conflict since they got the Harrier. They flew off of highways in OIF.
Japan hasn’t purchased the B model yet, but they likely will. And the B model has doubled our carrier force by giving the ARG strike capability.
You do not “refit” a non-naval aircraft for carrier duty. The F-18 was designed as a carrier aircraft from the ground up, and there was never a carrier variant of an F-22 except in some people’s fantasies.
You don't usually take already-produced units and "navalize" them, but making a derivative is the norm, and that was the case for both.
F-22N was proposed but never went far.
As for JSF sensor fusion - the helmet itself comes from the pretty bad visibility from the cockpit. Incorporating modern passive sensors was an obvious choice, though.
(I'm still waiting on reports of "sensor fusion finally works", given our local idiots in charge decided to jump on the Lockheed Welfare project)
And outside of F-35, everything talks Link-16 with possible tunnelling/subnetting, and MADL was considered "too immature" to start fitting on F-22 despite Congress "ordering" it.
The JSF's sensor fusion is not a result of "bad visibility". It's how the aircraft was designed.
F22 also does not talk Link 16, for obvious reasons.
Guide the pilot into a suitable position above the landing area (with visual indicators, much like those for a bombing run). Once the plane is in the right spot, a computer finishes the job with full automation: the aircraft is stalled, the pods fire rockets, and the aircraft is guided down and brought to a stop.
This retrofit would be particularly sensible for the F-15, which normally carries conformal fuel tanks. That would be a fine place to install the retrofit.
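The proposed sequence above is essentially a small state machine. Purely as a toy illustration (all names invented, nothing here reflects any real flight-control system), it could be sketched like this:

```python
# Toy state machine for the landing sequence described above:
# guidance -> stall -> rocket braking -> touchdown.
from enum import Enum, auto

class Phase(Enum):
    GUIDANCE = auto()      # pilot follows visual indicators to the spot
    STALL = auto()         # computer takes over; aircraft is stalled
    ROCKET_BRAKE = auto()  # pods fire rockets to arrest the descent
    TOUCHDOWN = auto()     # aircraft is guided down and brought to a stop

def next_phase(phase: Phase, in_position: bool) -> Phase:
    """Advance one step; guidance only hands off once the plane is in position."""
    if phase is Phase.GUIDANCE:
        return Phase.STALL if in_position else Phase.GUIDANCE
    if phase is Phase.STALL:
        return Phase.ROCKET_BRAKE
    if phase is Phase.ROCKET_BRAKE:
        return Phase.TOUCHDOWN
    return Phase.TOUCHDOWN
```

The key property the sketch captures is that the pilot is only responsible for the first transition; everything after the handoff is automated.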
Sounds like a great thing for a sci-fi novel, but it's not useful to do something like that in real life.
The idea is not feasible.
Also, your list of "carrier aircraft requirements" is incorrect: the JSF is a single-engine aircraft, and landing gear is a major requirement of a carrier aircraft. The F22 is not a naval aircraft, was not designed to be, and never will be.
As produced, the F-22 is obviously not a naval aircraft. The tail hook is single-use, the landing gear is not reinforced, and the more modern stuff for carrier approach is probably not installed. All of that would be easy to change, and in fact a carrier version was proposed.
There is no such thing as "easy to change"; you would have to redesign the aircraft from the ground up. You don't "navalize" an air force asset; you have the air force use naval aircraft if you want dual use. The F22 is not and never will be a carrier aircraft.
Doing that for fighter jets is actually much easier. The speeds are much slower and the distances are much shorter.
It's absurd, as anyone who spent 9+ years on CVNs would know...
Obviously there is no reason to bother with vertical landings on a fully functional full-sized US aircraft carrier. This would be for other ships, clearings in jungles, and cleared-out parking lots.
You'd descend toward an area that has been cleared of debris and personnel. It's no more absurd than flying toward a ship at 135 kts and expecting to grab a cable without crashing into things and people on deck. Compared to what we do on a CATOBAR ship, a rocket-enabled descent is really tame and safe.
Vertical landing with the F-35 isn't exactly safe. This is the standard for comparison. Rockets can respond faster. This allows better stability and faster shut-down.
Vertical landing with the F-35 is exactly safe. It's the definition of safe; it's been done thousands of times with zero incidents. It was done hundreds of thousands of times in the Harrier before, and they applied the safety lessons learned from the Harrier to the JSF. How many rockets has SpaceX lost already?
rockets are for a completely different use case.
This is a ridiculous discussion. It's absurd.
I note that the F-22 was primarily produced in Georgia. That usually isn't a swing state and it doesn't have a lot of representatives in the US House of Representatives.
The F-35 is produced in nearly every state, certainly including all the swing states. This is terribly inefficient, but probably kept the F-35 from being cancelled.
There was even a notion that you could use the same plane across all branches of the military, so the same supply chain could serve all three and you could build them in higher quantities to spread the development costs over more aircraft. But then, of course, the aircraft got saddled with requirements from three different branches at once, which made it extremely difficult to design and build, and thus very, very expensive.
Sounds rather like the Space Transportation System. During design, it went from a compact, inexpensive passenger shuttle with modest payload capability to a complete pig of a ship. And all because the Air Force contributed cash on the condition that it be capable of classified high-payload, single-pass missions to polar orbits.
It is nothing short of a tragedy that fully-reusable compact shuttles with flyback boosters (like the Rockwell P333) lost out to the disposable-booster design that was eventually built.
Of course this is a problem when you have Congress breathing down your neck and looking for any excuse to cut your program. One big advantage of skunk works projects is that they keep you firewalled off from idea men.
A. The "light" F/A part of the heavy/light fighter model.
B. Meant to save costs by having one aircraft with shared parts (across the three models) fill virtually all roles.
It's not totally crazy on the face of it - the expensive but undefeated F-15 and the relatively cheaper F-16 successfully pulled it off in the 20th century.
I imagine a couple of couches and a book shelf in a corner.
One thing I've observed working on many different types of software projects is that there exists a continuum between waterfall and agile/scrum, and one way of thinking of it is what the length of the sprint is. Waterfall is a single sprint the length of the product, spiral development is several sprints over the length of the product, and agile/scrum is sprints of a couple of weeks to a month.
The length of the sprint is simply the length of the planning cycle. What I've observed is that systems with certain characteristics, such as safety critical systems or systems that involve hardware that doesn't yet exist or is expensive or time consuming to test typically don't work with an agile/scrum planning cycle. If you want to deliver them in a reasonable time, complex and parallel requirements mean that you must plan things far in advance so that everyone is ready at the same time.
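The continuum above can be made concrete with a trivial calculation: treat the sprint length as the planning-cycle length and count the planning points over the life of the project. (A rough sketch; the function name and numbers are invented for illustration.)

```python
# Waterfall, spiral, and scrum differ mainly in how many planning
# cycles fit into the life of the project.

def planning_cycles(project_weeks: int, sprint_weeks: int) -> int:
    """Number of planning/replanning points over the project's life."""
    return -(-project_weeks // sprint_weeks)  # ceiling division

project = 104  # a hypothetical two-year effort

waterfall = planning_cycles(project, sprint_weeks=project)  # one big up-front plan
spiral = planning_cycles(project, sprint_weeks=26)          # a few long iterations
scrum = planning_cycles(project, sprint_weeks=2)            # many short sprints

print(waterfall, spiral, scrum)
```

On these numbers, waterfall plans once, spiral replans a handful of times, and scrum replans dozens of times - which is exactly why domains with long hardware lead times struggle with the short end of the continuum.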
For such environments you definitely want to have no deadlines at all, to have a special focus on well-defined requirements, and on software quality.
That investment should be marginal compared to hardware costs.
And in fact, by going more slowly, you end up delivering faster after a couple years, when you enjoy zero tech debt and an excellent foundation.
This implies that you plan exactly when and where tech debt is accumulating or found. The scrum argument is that by iterating and releasing, you find out where unexpected tech debt is more rapidly; and you go back and fix it.
One way that #AgileIsDead happens is when products disregard the tech debt for features even as it becomes apparent.
(I'm open to be convinced)
Generally I have witnessed tech debt as something pretty blatant and obvious - generally it's detected in the code review process. Sometimes it's big faults being introduced, more often it's small faults leading to 'death by a thousand cuts'.
My current policy is to allow zero tech debt, following the "no broken windows" principle. https://pragprog.com/the-pragmatic-programmer/extracts/softw... That involves a greater investment in code reviews.
I was in an office that had such a policy - it didn't go well. Perhaps it's a matter of definition, because:
> I hadn't heard of tech debt being something so subjective/invisible that it just creeps in and you have to discover it.
Tech debt is _inevitable_. Any decision will eventually be debt - rot begins before the code is even finalized. Focusing on avoiding short term gains at long term cost is totally fine, and I think that's the point of your "zero tech debt" strategy, but that's not zero tech debt. And even when maximizing for the long-term, you're picking between options, which means you're picking the FORM of your tech debt, not the existence of it.
To return back to my (admittedly anecdotal) experience with "zero tech debt": What happened was a language shift. People would avoid the phrase "tech debt", but still dealt with the reality. The decisions were being made, uncovered debt (such as a paradigm shift in the industry, or a change in dependent technology, or just an anticipated business flow that turned out to play out differently) was hidden. Basically, it seemed fine for a while, but management was getting out of the loop because management had decided an unachievable purity was more important than frank and open communication. The result was what happens any time management pushes themselves out of the loop - at some point, the compound interest on the debt came due. By then, multiple devs had left because in an industry with so much choice, why choose to work somewhere that refuses to face reality?
Not selling the long term for the short term is a great approach, but if you're trying to enforce "zero tech debt", I suggest you talk with your devs to see what they think that means and make sure that you're all on the same page, because tech debt is maintenance costs, and those will never be zero. If you're being told they are zero, then there may well be a disconnect.
I think you are describing tech debt of the "unavoidable" kind. My efforts tend to be focused on the avoidable kind.
Definitionally, no methodology makes the unavoidable avoidable?
Now, let's say you get to that dreaded point - requirements have changed, dependencies have changed, the industry landscape has changed. Which codebase will be easier to adapt - the debty one or the zero-avoidable-debt one?
I feel this is a bit of a strawman, though that may not be your intent.
As I mentioned in the above comment, wanting to avoid hurting your long term for the short term is fine and admirable. However, I think the distinction between "unavoidable" and "avoidable" debt is somewhat facile and unrealistic. It's a continuum, with points scattered throughout the center. Is this decision the best one? That is debatable, and should be debated, but the result won't be "yes debt" or "no debt".
I also said that if someone is shooting for "Zero debt" they should talk with their devs and make sure everyone is using words to mean the same thing. I continue to stand by that, because however clean and rational your personal definition may be, it's worthless if that's not the meaning anyone else is using.
At the same time, I'd note that anything can be debated ad nauseam, so nearly everything can be considered subjective.
So indeed I can't aim to 0.00000 tech debt, nor identify "avoidable" tech debt with 100% accuracy.
But I _can_ follow certain practices and have a given team follow them. Those practices are quite objective, beneficial, and superior to the status quo of the industry.
Can you expand on what you mean by this?
"zero tech debt", when understanding that tech debt is maintenance efforts at a minimum, is unavoidable. If a workplace has a "zero tech debt" rule, that workplace (the management, assuming they're the ones putting the rules in place) is placing that unachievable purity above the frank and open communication about their very real tech debt.
Where I was at, when "zero tech debt" was the policy, we stopped talking about potential tech debt to management. Decisions were made to avoid short-term debt (good), but no discussion was had about any longer term issues (bad), because we couldn't HAVE those decisions when every choice involves some tech debt.
When tech debt issues arose, we didn't want to surface them, because we'd spend more time trying to justify why this wasn't our fault, wasn't predictable, or taking lumps when it was our decision instead of actually addressing the issue.
Management had placed the ideal above the reality, so they saw the ideal. That didn't prevent the reality, and it left the reality unmanaged. We did what we could - we all took pride in our work and tried to make the best decisions - but the company limited what could be done.
If you are in a team that discourages upgrading build systems, code reviews, code quality, testing, etc., there isn't really anyone who can point to instances where tech debt was accrued or how much there is.
It is, however, true with assets. The housing crisis provides a great example: housing is an asset when it is always appreciating, but if it starts depreciating (as in 2008), it quickly becomes a liability.
This translates nicely to product. Each feature is an asset for selling - until it is found to have liabilities (PII data storage, inability to scale), and then the feature either needs work again (requiring reinvestment) or must be removed (sold).
Fast iteration means you can kill ugly design. Slow iteration means you get stuck with a design choice because you discovered its flaws too late.
Trying to determine what people do is likely a futile discussion.
Finally, by writing decoupled, modular software, you are always free to reassemble it later (discarding/rewriting undesired parts), no matter when it was originally assembled.
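The "reassemble it later" point comes down to callers depending on a narrow interface rather than a concrete module. A minimal sketch (all names invented for illustration):

```python
# Decoupled modules: application code depends only on the Storage
# interface, so any concrete backend can be discarded and rewritten
# later without touching the callers.
from typing import Protocol

class Storage(Protocol):
    def save(self, key: str, value: str) -> None: ...
    def load(self, key: str) -> str: ...

class InMemoryStorage:
    """One interchangeable implementation; a file- or SQL-backed
    module could replace it without changing record_flight_hours."""
    def __init__(self) -> None:
        self._data: dict[str, str] = {}
    def save(self, key: str, value: str) -> None:
        self._data[key] = value
    def load(self, key: str) -> str:
        return self._data[key]

def record_flight_hours(store: Storage, tail: str, hours: str) -> None:
    # Application logic never names a concrete backend, only the interface.
    store.save(tail, hours)

store = InMemoryStorage()
record_flight_hours(store, "4001", "312")
print(store.load("4001"))
```

The undesired part (here, the storage backend) can be rewritten at any time, no matter when the system was originally assembled, because nothing else reaches past the interface.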
Especially considering that many codebases are expected to have a shelf life of 5-10 years.
IME working deadline-lessly doesn't mean you're suddenly knee-deep in a metaprogramming rabbithole. You just do the same stuff, stress-free.
Nothing to see here folks, just the mold poking through the cracks of our public sector.
"Good news, General. We've found a way to lower maintenance costs in the cockpit systems by 10% if we removed this thing here on the blueprints, so we've already gone ahead with that change."
"But that's the emergency ejection system!"
"Yes, and our data indicates that it's almost never used. It doesn't really make sense to devote manpower to keeping a system running that few people ever need. If we get rid of it, we can free up developer-hours to make the canopy opening/closing action a little smoother. Everyone uses the canopy."
That said, how can you not wince reading, "To do this, Lockheed wanted to adopt principles and frameworks common in software lexicon like agile, scrum, minimum viable product (MVP) and DevSecOps."
Instead of having separate computers, you now tend to have fewer "servers" which run multiple modules in isolated partitions, communicating over the common network using unidirectional messages; whether using SAE AS5643 - aka IEEE-1394 - like in F-35 and X-47, or over mutated Ethernet known as AFDX (A380, A350, B787).
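The pattern described above - isolated partitions exchanging only one-way messages, never shared state - can be sketched in plain Python (invented names; real integrated modular avionics buses like AS5643 or AFDX are far more involved):

```python
# One-way channel: the producer partition holds only a send handle,
# the consumer partition only a receive handle, so data can flow in
# exactly one direction across the shared bus.
import queue

class Channel:
    def __init__(self) -> None:
        self._q: "queue.Queue[dict]" = queue.Queue()

    def sender(self):
        """Handle given to the producing partition."""
        return self._q.put

    def receiver(self):
        """Handle given to the consuming partition."""
        return self._q.get

bus = Channel()
send = bus.sender()    # held only by the sensor partition
recv = bus.receiver()  # held only by the display partition

send({"source": "radar", "track_id": 7, "range_nm": 42.0})
msg = recv()
print(msg["track_id"])
```

Handing each partition only one end of the channel is what enforces the unidirectional flow: a module that can publish cannot also read back, and vice versa.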
However, a big portion of the JSF issues is also in ground software - there's a dedicated set of platforms required to operate the F-35, which is infamous for depending on connectivity to Lockheed servers; without it, the airplane stops working.
The system involved, ALIS, also infamously took more time to deploy during a test squadron redeployment than the whole redeployment - which I guess they might be trying to speed up using Kubernetes.
From what I heard, ALIS still appears to be done in the same style as certain other logistics software from Lockheed back in 2010, and that doesn't say anything good.
They later talk about agile vs. waterfall as well as devops. It doesn't read like this is going on a plane; it's probably just dev and test environments.
Could someone here with experience in these things enlighten me? The F-22 Raptor is still an embedded system; I cannot believe that it runs Linux with containers. What am I missing in order to comprehend this article?
I've been doing network security for twenty years, so I know what that is, this just sounds like some marketing and sales people started mashing buzzwords together.
Yes it's a re-wording of an existing concept. No, it is not a new idea. Yes, it is important enough to call it out because so many companies don't let security/operations/development work together.
You're not the target audience, your CISO or CTO is.
I know Google has security teams, but what about the smallest company that has one?
To answer the question more directly: I've worked with 10,000+ employee companies with no CISO and I've worked with <1,000 employee companies with separate CTO, CIO, and CISO roles. At the executive level, job titles are more of suggestions than strictly defined silos. It all depends on how the company is organized and what their strategic priorities are.
I would be entirely unsurprised to find north korean telecoms/state government agencies using centos or debian. In fact if you google "north korea linux" you'll find that they already created their own weird custom GUI desktop distribution.
In the bigger picture, far more people use your code, whatever it is, to do useful and good things in random places in the world than people using it for purposes you find objectionable.
Isn’t one allowed to go further than source access and what’s written on a piece of paper, and care about ethics?
Do OSS contributors really need to be so alienated?