I don't disagree with your point re: Boeing v SpaceX, but precision scientific instrumentation is a very, very different domain where "move fast and break things" cannot be naively applied. The sensitivity to failure is so ridiculously high that you need 6 to 7 sigmas of reliability for each of ten thousand critical components just to have a 90% chance of mission success. Something as simple as an instrument being slightly out of tolerance, faulty rad-hardening on a CPU, or a minor tear in the sun shield is enough to jeopardize the entire mission. As a fully integrated system, JWST is at least an order of magnitude more complex than Starship, whose main design goals (launch, orbit, land, don't explode) tend to favor a more iterative approach.
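To put rough numbers on that (my own back-of-the-envelope sketch, not any official failure budget): if a mission depends on 10,000 independent components, any one of which can end the mission, and you want a 90% chance of overall success, each component has to be extraordinarily reliable.

```python
# Back-of-the-envelope: how reliable must each component be for a 90%
# chance of mission success, assuming 10,000 independent components,
# any one of which can end the mission? (Illustrative numbers only.)
n_components = 10_000
target_mission_success = 0.90

# Per-component reliability r must satisfy r ** n_components >= target
r = target_mission_success ** (1 / n_components)
per_component_failure = 1 - r

print(f"required per-component reliability : {r:.7f}")
print(f"allowed per-component failure rate : {per_component_failure:.2e}")
# -> reliability ~0.9999895, failure rate ~1.1e-5 (about 10 per million)
```

An allowed failure rate of roughly 1 in 100,000 per component sits in the five-to-seven-sigma ballpark depending on which sigma convention you use, and this toy calculation generously assumes every failure mode is independent.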
Pretty sure this argument could have been leveled against rocketry too. For scientific telescopes, a frame shift would most likely be needed to break out of this mindset, one analogous to but distinct from the shift applied to rocketry.
There are two key differences between rockets and space telescopes (any space-faring probe, really) which prohibit "iterate to failure" as a development technique:
1. Forensics difficulty. You need extensive data to debug issues in complex systems. You can collect so much more data from terrestrial testing, which allows you to do the forensic analysis needed to achieve the required component reliability. Once you put a telescope in space, you can't inspect it anymore. Many of the sensitive components which might fail on JWST have to be inspected to microscopic precision in order to perform adequate failure analysis.
2. Design requirements are far, far more precise. Many failures in deep space are effectively impossible to correct in later iterations because of point 1. Telemetry and sensor data are enough to debug rockets, but for JWST you would need to ship so much extra sensor and data-collection infrastructure alongside the telescope that the whole project becomes recursively intractable.
The system complexity is so high that you absolutely must have the ability to make arbitrary system corrections, because the chance of "building everything correctly the first time" is effectively zero, even with perfect hindsight! Essentially, any time you build the thing from scratch, you will always find mission-ending statistical deviations. The objective of on-ground testing is to identify and correct those specific deviations until the whole system is within design tolerances. If you were to totally rebuild it, the next iteration would have a totally different set of statistical deviations which would need to be corrected. This process of development involves "hardening" the entire system through extensive testing, because it is impossible to build a fully-hardened system to start with, even after "learning" from previous attempts.
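A back-of-the-envelope illustration of why "build it right the first time" has effectively zero probability (the per-component deviation rate here is a made-up number for illustration, not real JWST data):

```python
# Illustrative only: assume each of 10,000 components independently comes
# out of fabrication slightly out of tolerance with probability 0.1%.
n_components = 10_000
p_deviation = 0.001

p_clean_build = (1 - p_deviation) ** n_components
expected_deviations = n_components * p_deviation

print(f"expected out-of-tolerance components per fresh build: {expected_deviations:.0f}")
print(f"probability a fresh build has zero deviations       : {p_clean_build:.1e}")
# -> ~10 deviations expected per build; P(zero deviations) ~ 4.5e-5
```

Rebuilding from scratch just redraws those handful of deviations at random, so the test-and-fix campaign starts over; fixing the specific unit you have is the only process that converges.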
I'm not arguing that the frame shift is identical to the one for rockets, just that breaking assumptions is probably a way to avoid risk-laden events like this launch. For example: finding a way to create a flywheel that drives costs down, or a way to take smaller steps, or a way to incentivize more disposable missions that build on each other but can tolerate some failures. I'm not an expert, but the reason rocketry is moving forward again isn't "move fast and break things" per se; it's a rethinking of foundational assumptions about how rockets "must" be developed, which is what led to that methodology being recognized as a useful one.
I do agree with this. Most systems have a significant basis in unnecessary or outdated methodologies. I think with the JWST project (with partial hindsight), we probably could have benefited from a "stepping stone" optical telescope after Hubble which we could have used to prove out some of the hard parts of JWST. We learn quite a lot just from running missions beginning-to-end, and extremely long cycle times sacrifice this learning opportunity. It also means that we focus less on developing extensible "platforms" in favor of one-of-a-kind systems with somewhat less carryover knowledge for the next project. Shorter mission timelines also mean that you can better leverage state-of-the-art technology, rather than being locked into a design that constrains you to decade-old technology.
No, it's actually more expensive because you don't get anything done. Precision equipment either works or it doesn't.
Commercial software development practices might be relevant where you can fix things overnight, but this is hard-science instrumentation made of atoms: you ship things that must work as expected, because the whole point of all this is making better measurements as the field evolves, and there's a scientific case for doing it. It is not about selling production versions of the experiment to potential customers.
Then we need to design precision equipment differently. More redundant parts. More wiggle room. More fuzz testing. Find out where precision actually matters, and not just overbuild everything. And iterate more.
Sure, "move fast and break things" can't be naively applied but that doesn't mean it can't be applied.
Instead of sending up one super-precise, super-accurate, super-reliable telescope, let's send up 1,000 mostly-POS telescopes and use AI to combine the results from the 900 which actually make it to orbit and return useful images.
I'm hardly saying that's a 100% working plan, but I think that's the kind of paradigm shift that is required.
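To gesture at why combining many cheap, noisy telescopes could work at all (a toy model of mine, not a real pipeline): averaging N independent frames of the same scene improves signal-to-noise roughly as sqrt(N). This says nothing about angular resolution, pointing, or station-keeping, which are the genuinely hard parts.

```python
import numpy as np

# Toy model: many cheap, noisy telescopes observing the same faint source.
# Averaging N independent frames improves signal-to-noise roughly as sqrt(N).
rng = np.random.default_rng(0)

true_signal = 1.0      # brightness of a faint source (arbitrary units)
noise_sigma = 5.0      # per-telescope noise, much larger than the signal
n_telescopes = 900     # the ones that survived launch and deployment

frames = true_signal + rng.normal(0.0, noise_sigma, size=n_telescopes)

stacked_estimate = frames.mean()
stacked_noise = noise_sigma / np.sqrt(n_telescopes)

print(f"single-frame SNR : {true_signal / noise_sigma:.2f}")    # ~0.20
print(f"stacked SNR      : {true_signal / stacked_noise:.2f}")  # ~6.00
print(f"stacked estimate : {stacked_estimate:.2f} +/- {stacked_noise:.2f}")
```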
Sounds impossible to park 900 satellites on the same L2 Lagrange point. Hope you realize this telescope is to be positioned on an infinitesimally small and unstable point in empty space, on the "far side of Earth" from the Sun. Because that's the spot with the best view, apparently.