Webb placed on top of Ariane 5 (esa.int)
377 points by _Microft on Dec 14, 2021 | 212 comments



I've noticed a weird internal psychology I have with any news relating to Webb. I'm just as excited and hopeful as everyone else, but somehow the whole story underscores the perils of singular hope. It also seems to highlight the inherent issues with massive, carefully planned, "vertically" scaled science projects a la 20th-century NASA. I've carved out a little nook in my psyche to cope with the possibility that something goes wrong during Webb's incredibly complicated launch process and all those years of effort come to nothing.

It's also made me wonder if there's a way for humanity to pivot to more "horizontally" scaled science. I'm imagining a kind of science conducted by aggregating the data collected by many cheap, easily produced instruments instead of a few incredibly fragile, extremely difficult to build contraptions. I have an intuition that we'll increasingly run up against practical limitations in undertaking ever more complex projects of this kind in the coming years.


The analogy - I'd rather have 1 sharp knife than 100 dull ones - seems apt.

I'm not smart enough to know if something like Webb can be horizontally scaled into 100 or 1000x small sats to produce the same result.

My guess is not.

I think we need a super sharp knife and it's worth the investment.


I don't see this as a valid analogy. You can't aggregate the dull knives in order to almost replicate or even improve the cutting performance of a sharp one.

But you can use many separate mirrors with artifacts to replicate a perfect mirror, as long as you are able to do some post-processing on the array of mirrors with artifacts. And with proper post-processing you may even be capable of extracting additional information which a single perfect mirror would not be able to deliver.


You're right, the analogy has its limits.

> But you can use many separate mirrors with artifacts to replicate a perfect mirror

It is theoretically possible, but is it practically so? My argument, and my use of the analogy, is that it isn't today. That is, as the analogy says, if we put up 100 small mirrors we just get 100 dull knives and blurry science[0].

[0] https://images.ctfassets.net/cnu0m8re1exe/3grbH0eWtw0AxG3MUT...


It’s been done many times.

Some examples: https://www.dragonflytelescope.org/gallery.html is an array of telescopes (long, off-the-shelf photographic lenses) built to achieve exactly that. Photographic lenses tend to give higher image quality, but don't gather enough light. Solution? Stack many of them and point them in the same direction. It's not just a flight of fancy; they've made some breakthrough science with this.

A somewhat less relevant case is ALMA - millimetre wave array in the Atacama. Millimetre waves are a bit longer than IR, somewhat between IR and microwave, and telescopes look like radio dishes, not optical devices. Anyway, the array stacks many tens of such receivers to produce one antenna with a massive effective diameter. In this case, the key advantage is in fact resolution: by spreading the dishes around, you get much higher overall resolution than one massive dish, something we probably couldn’t achieve with IR (too computationally expensive).

Whether launching 10 satellites with 1/3 mirror diameter is cheaper is a whole different question.
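
To put a rough number on the stacking idea: here's a minimal simulation sketch, with made-up numbers, of how co-adding frames from many cheap lenses buys you depth. (This only covers sensitivity, the Dragonfly case; ALMA-style resolution gains come from interferometry, which is a much harder alignment problem.)

    # Back-of-the-envelope sketch (made-up numbers): with N exposures of the same
    # field, signal adds linearly while uncorrelated noise adds in quadrature,
    # so the signal-to-noise ratio grows roughly as sqrt(N).
    import numpy as np

    rng = np.random.default_rng(0)
    true_signal = 1.0
    noise_sigma = 5.0          # a single small lens is badly noise-limited
    n_scopes = 100

    frames = true_signal + rng.normal(0.0, noise_sigma, size=(n_scopes, 10_000))
    single = frames[0]
    stacked = frames.mean(axis=0)

    print("single-frame SNR:", true_signal / single.std())    # ~0.2
    print("stacked SNR:     ", true_signal / stacked.std())   # ~2.0, i.e. ~sqrt(100) better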


Not sure why you're bringing land based solutions to the conversation with "it's been done many times".

If that's the case Webb has been done many times and is super easy.


The physics of stacking detectors doesn't depend on space vs. land. 100 "blunt" telescopes do give you a "sharp" one, on Earth and in orbit.


It does when the position of each sensor relative to the position of the others matters. On earth that's trivial, but if you're 1.5 million kms out in space you could imagine this being significantly more difficult.

Not to mention, each of these 100 units would require the same shielding from radiation as the giant one does, for the same reasons. We chose L2 for a very good reason; where are these 100 going to go? What are the odds that each one will end up where it needs to be, across 100 attempts?


I agree with that, but still it's sometimes worthwhile to have a few dull knives around as well as backup. I mean, Hubble isn't huge; it's a 2.4 meter aperture. Building and launching something equivalent shouldn't be that big a deal these days, especially if you can launch a couple different versions with slightly different capabilities rather than trying to build one space telescope that tries to do everything well.


The argument today is that additional Hubbles won't get "more science" per dollar put in; i.e., it's better to put those dollars elsewhere.

We've basically maxed out what Hubble can do. While a sharper visible light telescope would be nice, we need longer wavelengths to see "further back" in history. i.e. Webb

Another thing Hubble did was kick off the study of dark energy, but then we needed different instruments to learn more about the things Hubble started. The Gaia and Planck satellites are doing nutso things in that space.

All to say, Hubble is f'ing great, let's bring it home and put it in a museum to help inspire the next generation.


> We've basically maxed out what Hubble can do.

My understanding is that all major observatories, and certainly Hubble, are overbooked by a factor of 3 or more for months, and sometimes years, in advance.

Which implies that having 3 Hubbles would allow us to do more research than we can do today. It's not ready for the museum while astronomers still battle over who gets to use it.

An instrument doesn't have to be cutting edge in order to be useful and produce valuable results. Even today, amateur astronomers are still making discoveries (asteroids, sometimes comets) with telescopes that major observatories would have laughed at 100 years ago already.

"The telescope that ate astronomy" (Webb) will undoubtedly make important discoveries. Whether it will be worth its cost considering what we could have built for the same price (such as 10 ELTs!) remains to be seen.


This makes a lot of sense. Imagine being able to afford, on a reasonable research budget, to schedule on a few days' notice the use of a visible-light space telescope equivalent to Hubble, but much cheaper due to optics advancements and miniaturization.

I do have to believe that any satellite designed specifically for one type of measurement would be more robust and cost less than something that did more than one, but I suppose that's not really true. Sometimes multiple measurements simply complement each other too much not to use.


We don't need to replicate Hubble exactly; I'm just suggesting we should put up some cheaper space telescopes with smaller mirrors than James Webb, if such a thing makes sense. Maybe give them a variety of different focal ratios too.

Usually with telescopes you need a huge aperture to see dim things, unless you're willing to wait a long time for an image to resolve. Not everything is super dim, though, so maybe having telescopes with an aperture much smaller than 6.5 meters would make sense for those tasks.

I don't know if there's some other reason James Webb needs a huge aperture than being able to resolve and image quickly. (Maybe the longer wavelengths don't behave on small mirrors?) If that's it, though, it seems like having a backup would be good.


The point isn’t dimness. It’s resolution (well angular, the number of pixels also doesn’t matter that much). If you want to resolve things that are close together you physically can’t separate the signals. And what is too close depends inversely on the aperature, so yeah you really need the size, for some observations.

Fun fact for dimness: we are pretty much at "we can count every photon that arrives", so if you want to pick up far-away things, increasing the aperture is also one of the few things you can do. But at that point you'd probably prefer multiple telescopes, if you can align them properly.


Is it just that infrared light doesn't focus as easily as visible light unless you scale up the size of the telescope accordingly? (For instance, at some point light stops behaving like light and starts behaving like radio waves.)


The wavelength of the photons also goes into it, yes. I'd have to look up the precise formula. But it's basically: larger aperture -> smaller angle of separation; longer wavelengths -> larger angle of separation.

From a physics point of view there really isn't much difference between light and radio waves. Both are just EM radiation. But the absorption by matter (well, really gas, there is nothing else) changes with wavelength too, and you'd optimize the CCDs for whatever you want to look at. So that would be the primary consideration, and keep in mind these things tend to also be built as tech demos (which also means you typically don't get to realize all design capabilities in action). So "because it's the most we could get away with" is a reasonable reason.
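
For reference, the formula being alluded to is the Rayleigh criterion, theta ≈ 1.22 * lambda / D (in radians). A quick sketch, with mirror sizes and wavelengths picked purely for illustration:

    import math

    def rayleigh_arcsec(wavelength_m: float, aperture_m: float) -> float:
        """Diffraction-limited angular resolution (Rayleigh criterion), in arcseconds."""
        return math.degrees(1.22 * wavelength_m / aperture_m) * 3600

    print(rayleigh_arcsec(500e-9, 2.4))   # Hubble-sized mirror, visible light: ~0.05"
    print(rayleigh_arcsec(2e-6, 6.5))     # Webb-sized mirror at 2 micron IR:   ~0.08"
    print(rayleigh_arcsec(2e-6, 2.4))     # small mirror at IR wavelengths:     ~0.21"

So at infrared wavelengths a 2.4 m mirror is several times blurrier than it is in visible light, which is roughly why the aperture has to grow along with the wavelength.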


We do, actually. One area in which we've made enormous progress is simulation. The entire Perseverance trip was almost perfectly simulated in a simulator; its predecessor only partly. The JWST launch and deployment steps have also been simulated in a very detailed way, and many different (fault) scenarios are tested. I have good faith everything will go as planned.


That sounds really interesting! Any resources for the layman that would provide more detail?


I can't even follow the news because I know how disappointed I'll be if the launch crashes. Wake me up when it's over.


Couldn’t agree more. I’m a huge space nerd and this is the LEAST I’ve followed a project because I’m so worried it’ll fail. I keep up with the headlines but I’ve found it very difficult to get excited over it as I’m pretty much expecting the worst and anything more than that will be a bonus to enjoy later.


Ditto! As we approach the launch date, the feeling in my stomach when I read about Webb is getting worse.

JWST is going to revolutionize our understanding of the cosmos, but only because its target spectrum requires operating temperatures around 45 K (-228°C!) to keep the telescope's own blackbody radiation from drowning out the signal. Which means those tender layers of sunshield must... OH MY, panic attack ...


I’m less nervous about a launch crash than about the hundreds of moving parts that have to perfectly unfurl the thing between the launch and L2


> I have an intuition that we'll increasingly run up against practical limitations in undertaking ever more complex projects of this kind in the coming years.

This is already happening - Webb took 7 years longer than expected and cost $10b instead of the projected $5b.

Specifically for space telescopes, I am optimistic for the future because launching mass into orbit is getting cheaper and cheaper. One main reason Webb has to be extremely expensive is that it also has to be extremely lightweight. If it becomes 10x cheaper to stick things in orbit then we can save money on construction too because we won't have to optimize weight as much.

Of course, once space telescopes become cheap and normal, it'll be time to think about constructing a radio telescope array on the far side of the moon.... :D


The cost of Ariane 5 rocket is below $200m, which is a rather small fraction of the overall budget of $5-10b. If the main reason for the huge cost is to make it so lightweight, why didn't they go with more capable heavy-lift rockets to cut some R&D costs already? At least, Delta IV Heavy seems to have a significantly larger max. payload [1]. The JWST was still in re-planning phase as of 2005 and Delta IV Heavy was already operational at that point.

[1] https://en.wikipedia.org/wiki/Heavy-lift_launch_vehicle


> It's also made me wonder if there's a way for humanity to pivot to more "horizontally" scaled science.

Today, with F9 (and Starship not too far off), yes. Launch costs are a fraction of what they were just a few years ago.

When Webb was planned? No way. Keep in mind that it's the same agency that gave us the SLS.


> It's also made me wonder if there's a way for humanity to pivot to more "horizontally" scaled science.

We do a lot of that already.

The purpose of flagship observatories is to do what can't be done with many cheap, easily produced instruments.


SpaceX is showing the way.

Not being afraid of failure allows you to simplify, accelerate the process, and reduce the cost by an order of magnitude.


I dislike this comparison because we want many of SpaceX's rockets, but we don't really want many Webbs. The rapid iteration argument isn't amazing.

I mean it'd be awesome to have many, but not realistic in any real sense.

But I 100% agree w/ the cost overruns being a massive problem.


It might be an apt comparison though.

I'd much rather have reusable rockets than the ISS.

I'd much rather have in-orbit construction infrastructure than either Hubble or JWST.

I'd much rather have remote repair and maintenance satellites/craft than a robot rover on Mars.

If you keep doing what you barely can do, you'll keep doing it for the highest price possible. Instead you can invest in the infrastructure, and do things as they get cheaper with your current infrastructure.


Send the components up over several launches and build it directly in space?


If you were able to launch and rendezvous in orbit, I wonder how much cheaper something like the JWST gets if it could be built up piecemeal? The unfurling mechanism itself exists because of payload width limits. How much cost is bound up in "we can't just keep launching rockets because rockets are expensive" and how much is bound up in "precision technology for precision observations"?


Exactly. It is so sad to see many great engineers working on decade-long projects with 300-something single failure points.

That time and money should instead be spent on reducing the cost per kg to orbit, as well as on in-orbit manufacturing. Then engineers would spend the bulk of their time on actual science goals, not on tangents like fragile deployment sequences or shaving a few grams here and there.


Reducing cost per kilogram sounds like a good idea at first - but then, that means people will become less careful about what they send up, eventually leading to space debris issues. The tragedy of the commons is a thing, even in space. I guess there should be an equilibrium price on kilogram-to-orbit.


Yeah, but that is somewhat like saying that as your revenue scales, so do your expenses. And indeed they do. But it is a net win at the end of the day, for 10% of a billion is much better than 10% of a million.

With space debris, come debris cleaning bots, with those bots come more space debris, and next generation of cleaning bots ad infinitum. The net profit/benefit will scale in the meantime, which is what really matters.


This is how the Soviets beat us to orbit. They were less afraid to blow up a few rockets.


And dogs.


Ah yes killing dogs. Much less ethical than the animals that NASA used right?


NASA did animal tests too.


I think these are large vertical projects because of the nature of the problem. I.e., the data we're after is inherently obfuscated by our earthly dwelling so we need to get our instruments a long ways "out-there". And to do so incurs a lot of risk and, generally speaking, the only institutions large enough to incur a risk that large without much profit upside are governments.

Now maybe we'll find clever ways to get different data cheaply that meets similar ends, but that seems to be hinged on hope at the moment.


That's part of the problem, another part is lack of scale in the space launch market, which results in relatively small and expensive launchers. Lots of complexity and thus cost in JWST could've been avoided if it could be heavier and larger.


How would you propose to fix that problem? I suppose we could get some asteroid barons that bring in a profit motive but fundamental science rarely has a profit since, by definition, it hasn’t found an application yet


Two tools:

One, commoditize your complements. The complement of satellites is rockets, make rockets cheaper and you simplify the process of getting to space.

Second, build towards economies of scale. A one-off is going to be ludicrously expensive, even the Apollo program saved money by reusing technology from military missiles. SpaceX has built more than a hundred rockets and launched more than a thousand satellites. Asteroid barons launching megatons to orbit would certainly make launching a 6.5 ton payload a non-issue. The complexity in the smartphone in my pocket boggles the imagination, I bought it for $200 but if it was a one off you'd have to be Bruce Wayne to afford it.


SpaceX dropping launch costs with Falcon 9 already did that. The IXPE mission launched last week went up in a gigantic, barely filled fairing, because SpaceX bid below what the previous "cheap" launch was going to be.


Sometimes there just is no substitute.


Talking about Ariane 5, it's been interesting reading parts of its history. Particularly:

"Ariane 5's first test flight (Ariane 5 Flight 501) on 4 June 1996 failed, with the rocket self-destructing 37 seconds after launch because of a malfunction in the control software. A data conversion from 64-bit floating point value to 16-bit signed integer value to be stored in a variable representing horizontal bias caused a processor trap (operand error)because the floating point value was too large to be represented by a 16-bit signed integer. The software was originally written for the Ariane 4 where efficiency considerations (the computer running the software had an 80% maximum workload requirement) led to four variables being protected with a handler while three others, including the horizontal bias variable, were left unprotected because it was thought that they were "physically limited or that there was a large margin of safety". The software, written in Ada, was included in the Ariane 5 through the reuse of an entire Ariane 4 subsystem despite the fact that the particular software containing the bug, which was just a part of the subsystem, was not required by the Ariane 5 because it has a different preparation sequence than the Ariane 4"

I wouldn't want any part in something as mission-critical as getting a rocket to deliver something into orbit.
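
For anyone curious what that class of bug looks like, here's a rough Python analogue (the flight software was Ada, and on Flight 501 the unchecked conversion trapped and took down the inertial reference system rather than silently wrapping; this sketch just contrasts a protected conversion with an unchecked one):

    INT16_MIN, INT16_MAX = -(2 ** 15), 2 ** 15 - 1

    def checked_to_int16(horizontal_bias: float) -> int:
        # The "protected" path: detect the out-of-range value and let a handler decide.
        if not INT16_MIN <= horizontal_bias <= INT16_MAX:
            raise OverflowError(f"horizontal bias {horizontal_bias} does not fit in 16 bits")
        return int(horizontal_bias)

    def unchecked_to_int16(horizontal_bias: float) -> int:
        # The unprotected path: keep the low 16 bits and reinterpret as signed,
        # i.e. the value is quietly mangled instead of being caught.
        v = int(horizontal_bias) & 0xFFFF
        return v - 0x10000 if v >= 0x8000 else v

    # A horizontal-bias value that was physically impossible on Ariane 4 but not on Ariane 5:
    print(unchecked_to_int16(70_000.0))      # 4464 -- silently wrong
    try:
        print(checked_to_int16(70_000.0))
    except OverflowError as e:
        print("caught:", e)                  # a handler gets the chance to react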


Eh, aerospace involves a lot more testing and specification and generally being careful in engineering practices and still failure for new products is expected.

You launch your first few rockets expecting them to explode because even with all of the care, it's just not reasonable to expect that you got everything right in simulation, test, and design.

This is why the public response to "failures" from SpaceX, NASA, etc. is so frustrating: people don't understand the nature of product testing. You don't get to see a washing machine or microchip or car or whatever else do its equivalent of exploding on the pad, but they all do, and much worse.

The first flight of a rocket not exploding is like spending days writing code, compiling it once without errors and running it in production with no issues. It's basically unimaginable and when it does happen you sit and wonder what's actually going wrong that you don't know about yet.

Once you do get a working configuration, you don't change it or make only very safe changes. You rely on what's worked before because things are actually exactly the same, not constantly changing like so much software today. This leads to a lot of conservatism that people don't understand (why is military hardware using X, it's so outdated!)


I think this is a fair take when you're talking about failures of nascent designs because of unknown failure modes. But it's frustrating when there are failures from known failure modes that just had process escapes. A couple examples:

- CST-100 was sent into orbit with software mapped to valves incorrectly. There is no credible reason this shouldn't have been tested thoroughly on the ground first.

- Falcon 9 had a failure due to a tank support strut that had strength characteristics a fraction of its design spec. Good supplier quality control should have vetted the material prior to its use. (SpaceX has since added material tests).

Finding unknown causes of failures helps us learn more about the science of the industry. But not catching known failure modes due to lack of process control is a different animal.


> But not catching known failure modes due to lack of process control is a different animal.

I agree, but for any complicated product like a rocket launch there are probably many many tens of thousands/hundreds of thousands/millions of things like this where one small uncaught error can cause mission failure. At some point it's just statistics, you'd expect to see some catastrophic failures across some percentage of products early in their lifecycle, regardless of how good your process control is.


Yes, there is reliability analysis involved which invokes probability. However, that's not the case with the items I mentioned above.

It would be one thing if SpaceX tested the material coupon and found it within specs, but by sheer probability, the strut had a portion of its material structure that was anomalous. That's what you're talking about. What I'm talking about is there are specific process steps that would have caught those mistakes above, and those processes were just not performed. Neither of the cases mentioned above are the result of "We did everything right, but the probability gods were just not on our side."


> there are specific process steps that would have caught those mistakes above

Overestimating the risks can bury a project just as quickly as underestimating them. If you test for every possible failure mode you may never get off the ground. There's always a test or check that can catch something before the risk is realized. Every nut and bolt can be X-rayed, every line of code and possible combination can be tested, it's just a balancing act. Sometimes the risk doesn't justify all the tests, sometimes a test is accidentally omitted, or poorly defined because someone made a mistake or misunderstood the explanations behind it, or someone misread the result, or the risk was considered avoided because the code was tried and tested tech on Ariane 4.

Rockets are so complex that there are millions of situations where a mistake can be made and later amplified.


I agree that there needs to be a balance between testing and cost. On one side, QA managers can become chicken littles who define the only acceptable level of risk as zero. On the other are project managers who let cognitive biases help them rationalize the answer they want in order to avoid cost overruns.

But the issues above aren't that. I can think of very little reason that valve command mappings on a human-rated rocket were not adequately tested. The larger point is there needs to be good testing on credible risks, not that risk needs to be driven to zero.

I would say someone who thinks "the risk was considered avoided because the code was tried and tested tech on Ariane 4." doesn't actually know the nature of the risk in order to determine if it's credible. The risk was not mitigated because changing configuration added an untested interaction risk.


> I can think of very little reason that valve command mappings on a human-rated rocket were not adequately tested

The procedure and checks were probably repeated multiple times during earlier preparations with no issues. God only knows how many potentially critical problems exist in every system on a code path never taken, or with a component that's never used. It's one thing to test everything for the design of the system and call it complete, and another to test everything for operations. Human error caused an operational misconfiguration on a system that was otherwise validated.

Think of aviation, where the entire plane design and the technology used are fully tested and known to meet all the requirements. But every time you operate that plane you have a shorter pre-flight check to confirm operational worthiness. Some things that can cause a disaster won't be covered, and some items on the checklist will be mistakenly marked as OK, which may or may not cause a problem.

I'm not saying it's normal, just that statistically speaking, at some point you'll miss something for one reason or another, and eventually the proper conditions will be met for a disastrous outcome. Might as well be a valve control check.


I understand your point and agree with you philosophically. But I disagree fundamentally that it's always just a matter of fact that test coverage can never be 100%. There's more to it in many cases. And human factors in design is a huge focus in aerospace (I wouldn't be surprised if the formal discipline originated there). Aerospace is highly process-driven to avoid these issues.

Those issues were caught by luck, due to the way follow on testing was conducted after the timer failure. The result indicated they were not adequately tested previously.

I get that not everything can be tested. But those unknown errors that go untested should be relegated to those low probability events. Neither of the examples I gave fall in that category. Hell, NASA even requires testing for random bit flips due to radiation, there’s no excuse to let a valve command mapping error slip through the cracks.


You have the benefit of hindsight. If it is so easy to know ex ante, what other currently untested parts of rocket development do you suggest SpaceX spend more time testing today?


Your claim is reasonable for the extremely low probability events, but the fact that you brought it up here makes me think you're missing the larger point.

The last few levels have been dealing with CST-100, not SpaceX so I'll try to continue that thread to be more coherent and then touch on the SpaceX example.

Here's all that's really needed to capture the CST-100 issue:

1) Valve A is listed as a "must" requirement. E.g., "Valve A must work when commanded".

2) As a "must" requirement, there is a test plan that covers that operation (again, another requirement).

Safety-critical development is driven by formal requirements more than many other types of software. Requirement #2 can happen any number of ways. If they decide to meet that requirement via simulator, they need to ensure the simulator is of sufficient fidelity to ensure #1 is met. This fidelity would mean that the simulator valves are mapped to the correct simulator commands. To answer your question directly, I would expect every "must" and "must not" requirement to be tested; any that aren't would be suspect for potential failure. If you can get me a list of requirements (ideally, with a failure modes effects analysis or hazard analysis) and associated test plans, then I can give you a list of potential failures.

So the question becomes: would you be okay with a test plan that doesn't check that a software command is received to meet a simple but critical high level requirement? Do you think a test plan that misses such a simple test is within an acceptable risk envelope? This isn't some low-probability event, but a test error that can be traced to a very simple requirement that wasn't tested.
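
To make that concrete, the kind of check I mean is not exotic. A toy sketch (the valve names, command IDs, and simulator here are all invented for illustration; this is obviously not the actual CST-100 test stack):

    # Hypothetical "command-to-valve mapping" test against a ground simulator.
    VALVE_COMMAND_MAP = {"OX_ISOLATION": 0x01, "FUEL_ISOLATION": 0x02, "PURGE": 0x03}

    class SimulatedPropulsion:
        """Stand-in simulator that records which physical valve a command actuates."""
        WIRING = {0x01: "OX_ISOLATION", 0x02: "FUEL_ISOLATION", 0x03: "PURGE"}

        def __init__(self):
            self.actuated = set()

        def send(self, command_id: int):
            self.actuated.add(self.WIRING[command_id])

    def test_every_commanded_valve_is_the_one_that_moves():
        for valve_name, command_id in VALVE_COMMAND_MAP.items():
            sim = SimulatedPropulsion()
            sim.send(command_id)
            # Requirement: commanding valve A actuates valve A -- and only valve A.
            assert sim.actuated == {valve_name}, f"{valve_name} mis-mapped"

    test_every_commanded_valve_is_the_one_that_moves()
    print("valve command mapping verified")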

With SpaceX, supplier quality has been a major point in aerospace for decades. I've worked in organizations that have rigorous procurement processes that check for such things, including audits of manufacturing facilities for critical items. Could the supplier forge the certifications and the system fail because of it? Sure, but that's not the case here. The checks just didn't happen. Materials checks are extremely common, even on small items like bolts let alone large structural components. Likewise, I would expect any non-checked critical procurement to have a higher level of risk in producing a failure.

This isn't about predicting some random, unheard of failure or ex ante stock prediction. It's about due-diligence on well-understood risks.


> It would be one thing if SpaceX tested the material coupon and found it within specs, but by sheer probability, the strut had a portion of its material structure that was anomalous. That's what you're talking about.

Actually, that's not what I'm talking about. What I'm talking about is lessons from QA processes and the chance that any particular bug escapes from one QA process to the next.

The idea being, suppose you have:

* one million different "components" in a system

* each of those components have, say, 10 possible failure modes

* There are 100 of these new complex products a year.

Obviously the numbers above are all made up, but the idea would be that in that example you'd have a billion (1,000,000 x 10 x 100) possible failure modes. Any QA filter might be expected to catch, say, 95% of remaining bugs.

So you run as many filters as you can, but at some point you're still left with a small probability of new bugs, and the only way to remediate those bugs is to figure out what went wrong after the fact.
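
Plugging in the made-up numbers above (plus an equally made-up per-mode probability and catch rate), just to show the shape of the argument:

    # Toy model: a billion candidate failure modes, a small chance each is a real
    # bug, and successive QA filters that each catch ~95% of whatever remains.
    candidate_modes = 1_000_000 * 10 * 100     # 1e9, using the numbers from the list above
    p_real = 1e-7                              # assumed chance a given mode is an actual live bug
    catch_rate = 0.95
    n_filters = 3                              # e.g. design review, unit test, integrated test

    expected_bugs = candidate_modes * p_real                      # ~100 real bugs to start
    escaping = expected_bugs * (1 - catch_rate) ** n_filters      # ~0.0125 expected escapes
    print(f"expected bugs escaping to flight: {escaping:.4f}")

Small, but across enough new products per year it is never zero, which is the point.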


Not every bug creates a failure, so there's not necessarily a need to be completely bug-free. The point is to focus efforts on the bugs that can cause failure and design mitigations for them. If your system has a billion potential software failure modes, my hope is that a project manager will say the software is too complex and needs to be redesigned. I think this is where some in the software crowd get it wrong in their understanding of how software reliability in safety-critical applications is performed. There are a lot of project managers who deliberately avoid using software as a mitigation because of the complexity issue (see the 737Max use of software as a mitigation for a good example of where this practice can go wrong).

So take the software example from above on CST-100. The valve mapping error should have been a known failure mode ("We command valve A and it doesn't respond." or "We command valve X and valve A responds"). Those failures are very straightforward to test for with simulations on the ground, yet somehow those errors made it into a flight configuration. Arguably, the software timer issue on that flight falls into a similar category, but the mitigations there are a bit more complex.


> It's basically unimaginable and when it does happen you sit and wonder what's actually going wrong that you don't know about yet.

Every time that happens I'm terrified and overcome with a quiet, pervasive sense of existential dread.


> were left unprotected because it was thought that they were "physically limited or that there was a large margin of safety"

You didn't mention it, but IIRC what made the overflow possible was the increased performance of Ariane 5. Such values couldn't physically be reached with Ariane 4.

Now, the real irony is that indeed that computation and whole subsystem wasn't needed. It tells you something about removing unneeded parts. On the other hand, the aerospace sector is traditionally very conservative and reluctant to change things. Have a look at the superstition surrounding launch processes: http://stevenjohnfuchs.com/soyuzblue/yes-they-pee-on-the-tir...

I can picture code being included whole for fear of breaking something by mistake.


I still remember that one; I was at school 80 km away when it blew up. Quite the boom given the distance, but nobody was scared, we just heard a large bang.

Most launches were at night, and we could watch the trail of flames pass by but 501 was launched during the day. Impressive to watch how wrong it went, how fast. There's even still the dude saying "all parameters and trajectory normal" literally as the thing explodes.

Good times and nice memories.


Looks like 95.5% success rate. I don't know if there is anything better with that much payload capacity and as long a track record.

https://en.wikipedia.org/wiki/Ariane_5


Success rate isn't very interesting. If a random 1 in 20 rockets fails, then that's very bad. If your first 10 rockets failed, and then your next 190 succeeded, that's likely to be a much better rocket.
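
A toy illustration of why the ordering matters (numbers invented): two launch histories with the same lifetime success rate but very different recent reliability:

    random_failures = [i % 20 != 0 for i in range(200)]   # ~1 failure per 20, spread throughout
    early_failures = [False] * 10 + [True] * 190          # all 10 failures at the very start

    for name, history in [("random 1-in-20", random_failures), ("early failures only", early_failures)]:
        lifetime = sum(history) / len(history)
        recent = sum(history[-50:]) / 50                  # success rate over the last 50 flights
        print(f"{name}: lifetime {lifetime:.0%}, last 50 flights {recent:.0%}")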


SpaceX F9 has 106 successful launches after their last failure.

Ariane 5 has 106 launches since their last full failure. (1 partial failure in there.)

Of course, F9 is a little smaller, everything other than Delta IV Heavy is a little smaller, so your statement including "with that much payload capacity" is true by construction.


A few small expansions, not to take away from your overall point:

>SpaceX F9 has 106 successful launches after their last failure. Ariane 5 has 106 launches since their last full failure.

Worth noting given these rockets' history (and also your mention of cargo), both have had a number of major variants (as well as minor evolutions). The only operational F9 is the current Block 5 (first launched in 2018) and for the Ariane 5 the ECA (first flown in 2005). F9 Block 5 has flown 75 times and all have been successful. Ariane 5 ECA has flown 77 times with 1 failure. They're reliable rockets, though the F9 has obviously had a higher cadence as well as the distinction of human flight.

>Of course, F9 is a little smaller, everything other than Delta IV Heavy is a little smaller

It's a little more complicated in terms of cargo. F9 expended (which I think is fair given the Ariane 5 is expended) actually does slightly more mass to LEO (22.8t vs 22t). But the A5 is more optimized given its staging around mass to GTO, where it beats out F9 (8.3t vs 10.9t). F9 of course loses quite a bit when reused (LEO: 15.6t, GTO: 5.5t), but at massively less cost so not quite the same thing.

Also when you say "everything other" you've completely forgotten about Falcon Heavy :). That does 63.8t to LEO and 26.7t to GTO, and of course vastly out-lifts either the F9 or A5.


So you're saying that I completely forgot about Falcon Heavy, which I have personally seen launched twice? OK then. It has 3 successful launches, which is fewer than 106.


What? You said: "F9 is a little smaller, everything other than Delta IV Heavy is a little smaller". How is everything other than Delta IV Heavy "a little smaller" than Ariane 5 when F9H is very much not smaller than Ariane 5?


OK, let me expand it out: F9 is a little smaller than Ariane 5 and FH, everything other than Delta IV Heavy is a little smaller than Ariane 5 and FH.

You probably know all of this already. Please, let's stop deconstructing my grammar, and instead start communicating.

BTW the two Falcon Heavy launches I witnessed were extremely cool.


Wow, that's a much better continuous success rate than I would've guessed on either account.


Yes, it's amazing how the numbers pile up over time -- Ariane 5 was a great market fit for a long time until recently, and SpaceX has a crazy high F9 launch cadence.


https://en.wikipedia.org/wiki/Delta_IV and https://en.wikipedia.org/wiki/Atlas_V have essentially perfect records. The IV had a lower-than-expected orbit on a non-payload test flight, and the V had a lower-than-expected orbit that the payload was able to fix.


You have to take statistics like this with a grain of salt. Sometimes less than perfect reliability is by design, sometimes development details (like how different the "new" rocket that earned that name is from previous designs) hide failures in earlier rockets with different names.


Sure, but if anything, Ariane 5 is a good example of that; it blew up because of reused software from the Ariane 4.


NROL-30 on Atlas 5 apparently had a shorter lifetime after the "fix", and that is normally considered a "partial failure".



> I wouldn't want any part in something as mission-critical as getting a rocket to deliver something into orbit.

I hear there's a need for 3rd party libraries to log data.


I find it interesting that that bug is so normal; it's exactly the type of bug we encounter all the time in terrestrial software. Yet the stakes are so much higher.


There's that saying: "well, it's not rocket science."

In this case, it is rocket science.


Amazing. I wonder if, given the large advances in computing power, we might develop something like Ada+Rust for such systems, where we can eliminate all sorts of error classes.


I worked on radars. We used Ada. The knock on it is that it's slow and didn't have a huge ecosystem, but 20 years later, at least, slow is not really much of a problem. You kind of got used to the fact that if the software compiled it would probably run. It had some neat features, like constraining values (e.g., this variable will be between 0 and 10; of course you'd have to deal with things when you ended up outside that range). Really quite reliable.

https://learn.adacore.com/courses/intro-to-ada/chapters/stro...

They tested a lot (some of the tests were even delivered with the software, to run anytime...).
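
For those who haven't used Ada, the feature being described is constrained subtypes, e.g. `subtype Level is Integer range 0 .. 10;`, where the bounds are enforced for you. A very rough Python approximation of the idea (in Ada this lives in the type system; the helper below is just to show the shape):

    class Ranged:
        """Value that rejects assignments outside a fixed range, Ada-subtype style."""
        def __init__(self, low: int, high: int, value: int):
            self.low, self.high = low, high
            self.value = value                 # routed through the checked setter below

        @property
        def value(self) -> int:
            return self._value

        @value.setter
        def value(self, v: int):
            if not self.low <= v <= self.high:
                raise ValueError(f"{v} is outside the allowed range {self.low}..{self.high}")
            self._value = v

    level = Ranged(0, 10, 7)       # fine
    try:
        level.value = 11           # out of range, roughly like Ada's Constraint_Error
    except ValueError as e:
        print("rejected:", e)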


I interned in IT on Space Systems and looking back I'm basically amazed we ever put 'software' in space.

There were some organizational attempts to control for such issues, but there were no real solutions other than a bunch of guidelines.

Some of the code written for the space station was done in long, sleepless 'power-coding' binges. I thought that was 'cool'; now I think it's 'insane'.

I do remember learning a bit about Ada, but it's been so long, and it's probably out of the range for most things today.

Even Rust is a bridge too far for most projects. I really wish there were ways to 'back off' from Rust's ultra-hard principled approach, because I think what most projects need is '65% Rust', not '100%', which is the only current option.


Rust is very low level. When I looked at Go (I haven't really learned it besides some simple things), it seemed the most Ada-like of the languages I've looked at. I'm not sure how safe it is, but considering some of these government projects were moving to C++ from Ada, it might be reliable enough.

> the space station was done in long, sleepless 'power-coding' binges

Yikes.

I grew to appreciate Ada. We would disable the interrupts on some CPUs, and the software had to be reliable: if the code crashed or got stuck, you were rebooting the machine.


I would be very, very surprised to see Go in this space and not at all surprised to see Rust.

Go has a garbage collector, which fundamentally introduces nondeterminism. Due to implicit interfaces, it can't enforce exhaustive typeswitching. Every pointer type is nullable, which means entire classes of runtime bugs which can't be caught at compile time. And the type system requires far too many escape hatches (resorting to `interface{}` and typeswitching) to have the kinds of type safety guarantees necessary. And lack of sum types means functions return errors and values instead of errors or values which—in my experience—has been an unending source of foot-guns in production code.

OTOH, Rust has none of these drawbacks and many, many other strengths that enforce static guarantees about safety at compile time. It's not perfect, but it's an extremely worthy successor to Ada, and just about the only feature from Ada I wish it had is the ability to specify tighter domains on types (for instance, numeric ranges on integer types).


Yes, there are no 'garbage collectors' in space.

In fact, in my experience, there's not even any dynamic memory allocation!

You allocate your memory on boot and work from it.

Rust has some nice pointer advantages, but it's not specifically suited to those kinds of systems, and it would have some big cultural baggage to overcome.

Rust has performance and perfect non-null safety in mind, not necessarily perfect logical safety.

Space systems would be 'totally ok' with extra overhead that checked parameters, pointers, memory etc. before certain operations.

That said, I can see Rust happening in space some day.


I've never been so nervous for a launch in my life.


I'm more nervous about the deployment. There are so many things that can go wrong after the launch but before it is fully operational.


I never thought the Mars rover sky crane would work, but it did!


I think most of the engineers on the project didn't either. What did Adam Steltzner say? Something like "It was the least worst option out of a lot of bad options."

I'm sure there's a commentary there on modern life, but sod that, this is far cooler than any adage :)


I saw an HN comment once, purporting to come from a NASA engineer that quit over disillusionment from being certain that the sky crane maneuver would never work.


Although I guess once it's in orbit, major problems could be sorted out over the following years with manned repair missions, like they did with Hubble.


Not really, unfortunately. Webb will be much further from Earth than Hubble is, so as far as I'm aware a human mission to go repair it isn't really considered an option.

https://www.nasa.gov/topics/universe/features/webb-l2.html


If Starship delivers on its deltaV and economic promises, it's at least feasible, although that's a big if. Besides, if you can boost something the mass of the telescope with twice the volume and much less money, why would you bother fixing the one you have?


They've worked on the design for over 20 years AFAICT. Even tho Starship will enable bigger and better telescopes, if there's a repairable problem and it is within reach, would be odd to just bail on JWST when the whole design/build process of the next gen will likely be a very long endeavor.

https://en.wikipedia.org/wiki/James_Webb_Space_Telescope#His...


I think it's fair to say that with lower stakes you can definitely take less time on each telescope. The Webb is a miracle of engineering, but a lot of that miracle goes towards squeezing it into a pretty narrow fairing.


Sure, I'd also expect (as an outsider) that new iterations will be quicker given this new knowledge base and a 9m x 18m rocket payload. I'm just saying that if there is a way to fix a broken JWST, I see no reason not to at least try rather than waiting for the next gen. They are not mutually exclusive.


You're better off with a plan along the lines of "launch a less-good-but-still-perfectly-fine new space telescope every couple of years" so that you eventually get good at it, and can afford to make mistakes.

A few hundred million for each launch; heck, the long-term ground support is probably more expensive than the flights and the hardware after a while.


About 10-15 years ago the European Space Agency was advertising that philosophy. IIRC they contrasted perfectionism (or zero risk) and slow learning against smaller experiments and faster learning. There was also some third term contrasted, perhaps that being that all-or-nothing huge projects demand a shift toward zero-risk and slow learning.


> major problems could be sorted out over the following years with manned repair missions, like they did with hubble

Not anytime soon. Hubble is in low Earth orbit at an altitude of 540 km, whereas JWST is at the Earth-Sun L2 point, at a distance range of 370 Mm to 1.5 Gm; further than the Moon. No spacecraft capable of carrying humans beyond LEO has been built since the Apollo missions.

Starship and SLS will change the game, but both are still years away from manned flights.


Even then, it's quite possible a human mission beyond LEO to repair the JWST would end up costing more than just building a replacement telescope.


178 things!


Ohh, you optimist :-)

>"There are 344 single-point-of-failure items on average," Menzel said about the Webb mission, adding that "approximately 80% of those are associated with the deployment

https://www.space.com/james-webb-space-telescope-deployment-...


Well, thanks for keeping everyone calm. 178 was the number of individual actions in the deployment sequence, but sure, that number of single points of failure makes it just fine.


step 138: npm run


Now it becomes clear why it takes 6 months to get to first light.


I was more nervous for Demo-2, the first crewed SpaceX flight. If this doesn't work we've got SLS and Starship coming down the pipe that should make more telescopes much cheaper and easier to build. If Demo-2 had blown up it would have set spaceflight back much worse.


Understandable that crewed missions are more risky, but by that point I think they'd had enough cargo loads and F9 flights on reused boosters that it seemed BAU.

I don't think SLS will be in a position to be able to do this any time soon. Starship, well, let's hope so!


Super exciting.

- JWST costs ~$8.8B USD, about the same as an aircraft carrier or the Large Hadron Collider, and it goes into a rocket!

- JWST has 344 single points of failure, 80% of them related to deployment. Mars landers have significantly fewer, if I remember correctly.

If each single-point failure has 0.0001 failure probability, the mission fails with 3.4% probability.

If each single-point failure has 0.0002 failure probability, the mission fails with 6.6% probability.
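
For what it's worth, that's just 1 - (1 - p)^344, under the (big) simplifying assumption that the single points of failure are independent and equally likely:

    n = 344
    for p in (0.0001, 0.0002):
        mission_failure = 1 - (1 - p) ** n
        print(f"per-item p = {p}: mission fails with probability {mission_failure:.1%}")
    # per-item p = 0.0001 -> ~3.4%, per-item p = 0.0002 -> ~6.6%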



I can't find an answer to this anywhere: is there a backup JWST (or parts thereof) if this one fails to launch?


To me, this is a major dysfunction of the NASA model.

Why weren't 5 Hubble telescopes launched? The marginal cost for the other 4 must have been vastly smaller than for the first. Observation time on the original Hubble is still, after 30+ years, a scarce resource. With 5 of them, we could have gotten vastly more science done at smaller cost.

I don't have a fully fleshed out theory for why, but part of it is that NASA as a government organization is ultimately a political endeavour, and it will have to optimize for what makes politicians look good in the press, rather than what gives the most science bang per buck.


> Why wasn't 5 Hubble telescopes launched? The marginal cost for the other 4 must have been vastly smaller than the first.

Economies of scale only apply at, well, scale. Five Hubbles would cost 5x one Hubble to construct. There's very little in the way of savings because most parts are custom fabricated. The parts need to be tested, integrated, then tested after integration. The testing and verification of each subsystem takes the same number of person-hours. So testing 5x the systems requires 5x the time.

You can't just accept a high failure rate of something the size of Hubble just because you've built five of them. If you launched a Hubble and it failed in some Loss of Vehicle fashion after inserted into orbit you've got a big uncontrollable mass of telescope flying around the Earth.

Even if we still had the Space Shuttle (or a fantasy version of the SpaceX Starship) flying an uncontrollable Hubble couldn't necessarily be retrieved. If it wasn't accepting commands and was spinning on some axis it would be too dangerous to approach with another vehicle. Even an unmanned vehicle couldn't necessarily approach it without a collision that causes even more problems. See the recent idiotic Russian ASAT test for an object lesson in what could happen.

This is all to say you don't want some large satellite in a long lived orbit to fail. So there's no savings on testing and verification. Even though at launch Hubble's mirror had problems the satellite itself was fully under control and operable.

Something like Starlink is very different. All the satellites in a given block are functionally identical and fungible. If one fails it's easily replaced by another. They also deploy to a low, unstable orbit. If a Starlink satellite fails and is uncontrollable it will deorbit by itself quickly. They must be functional to enable the onboard engines that get them to a stable intended orbit. A Starlink satellite then needs far less testing and verification than a Hubble or JWST; not that they can be poor quality, but they don't need to function Or Else Bad Things.


Is that really true? I'm guessing a big part of the cost is figuring out how to actually invent stuff needed for the project. That knowledge is now known. That cost is now 0.


>There's very little in the way of savings because most parts are custom fabricated. The parts need to be tested, integrated, then tested after integration.

To piggyback on the GP's point, the fabrication research is only one small cost center. Quality control costs much more in aerospace than many people realize. The reason why a bolt may cost $150 is because it has to be tracked, material coupons kept/tracked/tested, held in bonded storage etc., and not because we had to figure out how to make a bolt. Those costs don't come down as much with scale as, say, raw material.

If it were all about knowledge costs, many designs today would be dramatically cheaper because many use technology (and even refurbished rockets) from 50 years ago.


> I'm guessing a big part of the cost is figuring out how to actually invent stuff needed for the project.

We're talking about marginal costs, not development costs. So the cost to invent a component is already out of the equation. The high production cost of something like Hubble comes from testing and validation more than materials or fabrication.

The failure modes for something big in space can be extremely dangerous. Even a tiny "cheap" cubesat requires a lot of relatively expensive testing so there's a reasonable assurance it won't fail in some catastrophic fashion and destroy the entire rocket.

Testing space hardware is additionally difficult because you can only just approximate the hardware's operating environment on the ground. Space hardware also can't be easily repaired or repaired at all once launched. So you've got to test actual flight hardware and if it fails possibly rebuild it from scratch testing and validating the entire way.

Developing space hardware and high precision science instruments is expensive and difficult. But it's just a small portion of the overall mission cost for something like Hubble or JWST.


I defer to your knowledge, I'm not trying to be a jerk. But Hubble took like 30 years or so to develop and build? How long would a second Hubble take? I'd have to imagine it's less than 5 years, no? That has to be a huge cost savings. The testing process would have to be more streamlined or efficient, I would think. You have a much better understanding of failure modes.


The Hubble took a little over ten years to build and develop. Its launch was postponed to 1990 because of the Challenger disaster.

> The testing process would have to be more streamlined or efficient, i would think.

Why do you automatically assume testing for a complicated and delicate machine is inefficient or not streamlined? That's a pretty bold assumption with zero evidence.

Understanding failure modes doesn't help test and validate something any faster. There's a finite number of hours in a day and components need some prescribed amount of testing.


Hubble took ~12 years from initial funding to launch, with the Challenger disaster accounting for 4 of those years. Additionally, if the decision had been made to use traditional mirror techniques, Hubble was predicted to launch in 5 years; instead they ran into difficulty with the alternative method, but they didn't know that at the outset.


You will encounter different biases the second time around, although maybe this kind of thing is less common in the space industry. People could want to correct the mistakes of the first time, spending more time optimising things. And people could take learned knowledge for granted and assume success was based on it.


Just because the knowledge exists in the world doesn’t mean that you know it. Consolidating knowledge is a non-trivial exercise.


The US absolutely could've built 5 Hubbles - because it did, they just don't look at Space - they look at Earth. The Hubble is basically a KH-11 spy sat chassis. NASA was given a gentle suggestion that if the optics package was a particular size, there was definitely a company in the US which could polish lens and mirror packages of exactly that size.

[1] https://en.wikipedia.org/wiki/KH-11_KENNEN


If they had built five Hubbles that each had defective mirrors and all of them needed a shuttle visit to correct the issues and make them usable...


There was. Sort of.

https://en.wikipedia.org/wiki/KH-11_KENNEN

In particular, some of the manufacturing tooling for the mirrors was shared, leading to the HST mirror being downsized to 2.4 meters.


It's surprising how many people don't know this all these years later. Hubble is a repurposed spy satellite. The main cost of the vehicle isn't in the spacecraft itself, though; it's the optics, which are different from those used on KH-11. What is good for looking at Russian military bases doesn't directly translate to looking at galaxies.


In fact, the NRO donated two Hubble-like KH-11s in 2012, on the condition that they not be used to observe Earth.

https://en.wikipedia.org/wiki/2012_National_Reconnaissance_O...

One of them forms the base of https://en.wikipedia.org/wiki/Nancy_Grace_Roman_Space_Telesc....


The person or persons, at the NRO or whatever other clandestine agency, who championed this donation, over what must have been very considerable hand-wringing about the consequences of giving this technology to civilians, are very fine people.


The story from the astronomy side was that the 2.4 meter mirror was mysteriously inexpensive compared to anything else in the R^2 mirror cost curve.

It's a little odd how people re-tell that story, but ok.


The techniques and tooling can be fit for both purposes but the telescope you build to look at something 250km away at high resolution and illuminated by the sun and the one you build to look at galaxies thousands of light years away that are fainter than can be seen with the naked eye are very very different.

Hubble's optics also weren't easy to produce and were notoriously faulty, which had to be corrected later in orbit with a Shuttle mission. The bid being lower than expected could just as easily be explained by the contractor massively underbidding to win the contract. I've never heard anyone claim that Perkin-Elmer also made the optics for the KH series before now. It's not outside the realm of possibility, but unless you have some compelling proof I have no reason to believe it. I think the fact they screwed up indicates they probably weren't, because if they had done the optics for the KH-11 series they would have been well practiced at making mirrors that size by the time Hubble was built.


NASA was trying to get some form of series production for a base system bus for long-distance probes in the late 1980s.

What happened was that pretty soon the budget got slashed, and out of the two prototype units they barely scrambled to have one of them flown, with a very complex process to balance the budget in the face of constant attempts by Congress to cut it.

Maybe if NASA's budget was allocated in different process, things would be different. As it is now, the people controlling the purse strings care about their reelection so they will at best jerk projects around to send money to their states, or to place their own spin on things, etc.


I'd expect the largest part of the "why" is that the marginal costs of launches to space unfortunately don't go down. Each launch is nearly as expensive as the first. The marginal cost on the ground is one thing, and it sounds like NASA often builds things in triplicate or more on the ground, but then uses the parts in training exercises or for debugging purposes. NASA probably made three or four equivalents to Hubble, and probably plenty more than that to do all the underwater training on the incredible repair missions for Hubble that extended its lifetime from a projected 5 years, then a sad "failed mission", then finally the 30+ years we should be grateful to have gotten at all.

They just have never had the budget in launch costs to ever launch more than one.


Even in the Hubble days, launch costs were a relatively small piece of the pie compared to the costs of actually building the thing. I think for Hubble it was less than 10% (depending on how you count the servicing missions). For Webb it's significantly lower.


> the marginal costs of launches to space unfortunately don't go down

Until recently, neither launch nor satellites had economies of scale. With reusability, launch does. And with common platforms and constellations, satellites are beginning to.


>marginal costs of launches to space unfortunately don't go down.

Some aspects certainly do, like ground-support-equipment and basic launch infrastructure.


Even in high school robotics we built a main chassis and a 1:1 spare for testing and hot swapping parts.


That would require staffing up for the production run and then firing them when the contract was finished. There is a limited volume of sporadic work available for contractors to bid on. Increasing production capacity doesn't increase demand. At the end of the day public investment into science is always a jobs program first.


Relatedly, I'd be curious to know what the split has been of the $10B "program cost" in terms of development vs manufacturing.

I assume that like with most things, the vast majority of the cost has been in the design and documentation over its 30 years of development. At the same time, there's obviously a ton of extremely high precision bespoke components in there, so "building another one" would definitely not be trivial.


If it's anything like the hardware engineering programs I've been involved in, at qty=1 the cost is almost all R&D. That includes development of processes to build something like 1.3m wide ultraprecise beryllium mirror segments...


Aerospace isn't necessarily like that, for a number of reasons. A large driver of cost is quality control. For example, when a custom part is manufactured it needs to go through a number of tests, some of the raw material needs to be kept and tracked for future testing, etc. When parts get shipped they have to be tracked and handled differently; sometimes accelerometers are placed on the cargo to measure any shocks. They need to be stored in a bonded warehouse, often climate controlled throughout their lives. All of that control, testing, and documentation adds to the cost even after R&D is a sunk cost.


Oh absolutely, but I still wouldn’t be surprised if the cost for unit 2 was still another half a billion dollars. It would be fascinating to know.


The answer is no for mainly two reasons:

1. Prohibitive costs. It's extremely expensive (in both money and time) to manufacture many of the telescope's parts, such as the reflectors. It's even more expensive to do all the testing and certification of the manufactured parts.

2. It's all outdated anyway. The JWST project has been ongoing for literally decades (starting in '96, if I remember correctly). If we were to start from scratch today, we would make a lot of different design choices based on more recent technological developments.


I would love to see a video where one of the engineers on the project talked about what would be different if Webb was started today. Materials, software, processes, everything!


A huge design issue that might - or might not - be revisited would be the power system.

Solar panels require that JWST stay in sunlight while conducting observations, while at the same time the observing equipment critically depends on being kept cool - and you can't just keep it inside a bigger structure like with Hubble.

This means there's a huge, problematic sunshade structure that accounts for a considerable share of the single points of failure (SPOFs) still endangering JWST even if it is inserted into the correct orbit.


It's worth considering that nowadays we don't really do backup copies of satellites/probes because launch is pretty reliable by now, especially on rockets that have been flying for a while (which is pretty much a requirement to be chosen to launch a flagship project like JWST).

Ariane 5 has a long flight record with very few failures and the design has been further carefully reviewed specifically for JWST, with the previous two launches having served as verification for any fixes that needed to be made prior to flying this mission.

Basically, the chance of JWST being lost due to a rocket issue is much lower compared to the chance of it being lost due to design issues with the telescope itself.


This is our only shot. They didn't replicate the build.

https://www.quora.com/What-would-happen-if-the-James-Webb-Sp...


Imagine having the replacement gold reflector dish in storage somewhere.


Nope. Wouldn't be a good use of money anyways. If it fails, it might be because of a design or manufacturing flaw. Better to make the new one after we have some idea what went wrong.


no


I'm wondering why that is: how much of the effort was dedicated to research, and how much time and money would it cost to build a new JWST?


The reasoning that I heard (and I hope I'm relaying this right) was that if it doesn't work then it will most likely be because of something that needs to be redesigned, so there's no point in producing an exact replica.


I think OP means "what if the rocket fails".

The answer is that building another copy of the telescope would be quite expensive, even without any changes to the design. The supply chains aren't set up for volume, and integration and testing is very involved. I can't put a number on "the second telescope would be x% cheaper", but it would be very disappointing.


Also, Ariane 5 is one of the most reliable rockets capable of launching JWST into the correct orbit.


It's also going out pretty far (the L2 Lagrange point on the other side of the moon), so they also won't be able to perform any manned post-launch repair missions like they did with Hubble.


> so they also won't be able to perform any manned post-launch repair missions like they did with Hubble.

In terms of hardware that's readily or soon to be available, it is technically possible. Starship, if it gets off the ground, is the obvious choice. Orion could do it, but it would mean dumping another two billion into JWST.

If it really came down to it, Falcon Heavy launched with a Crew Dragon sitting on top of it could probably reach it, although it might need an additional kick stage (I haven't done the maths), and it would need to be crew rated first (and SpaceX don't currently have an interest in doing that).


Having a ship and a launcher capable of reaching the telescope is a bit different from being able to service it. I understand that just planning and preparing a mission on this scale, with current technology and processes, would take more time than the service life of the telescope itself.

Also, AFAIK, Crew Dragon does not have an airlock or other provisions for EVAs. Not sure if the capsule can be depressurized for that instead. But I would like more information on this issue ... could not find much from reliable sources so far.


> Having a ship and a launcher capable of reaching the telescope is a bit different of being able to service it.

True. But generally when people bring up the statement that we can't service it, they usually back that up by citing the distance. There would still be a lot of hurdles to overcome, but none are insurmountable with relatively small investment.

All of that said, if Starship works, then the economics change drastically anyway. You could probably launch an entire fleet of less reliable, but much cheaper telescopes, for less than the cost of servicing.


If you consider the human to be part of that hardware, it's over 4x the distance that a human has ever travelled. There would still be a lot of technical problems to be figured out.


Webb is going to the Sun-Earth L2 (about 1.5 million km from Earth), not the Earth-Moon L2 (which is about 61,000 km from the Moon and 445,000 km from Earth).

https://webb.nasa.gov/content/about/orbit.html


Actually I have a question about this. What happens if it fails at L2? Is that position like occupied or more difficult to deal with from then on?


Solar wind will push it out of L2 and into an orbit around the sun. This is already going to happen once JWST runs out of fuel to maintain its position.


The L2 point is unstable, so energy needs to be expended to maintain position there.
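
To get a very rough sense of the propellant that implies (all the numbers below are assumptions for illustration - the spacecraft mass, station-keeping delta-v, mission length, and thruster Isp are not official JWST figures), the Tsiolkovsky rocket equation gives something like:

  # Rough, illustrative sketch of L2 station-keeping propellant needs.
  # Every number here is an assumption, not an official JWST figure.
  import math

  dry_mass_kg = 6000.0     # assumed spacecraft mass
  delta_v_per_year = 2.5   # assumed station-keeping delta-v, m/s per year
  years = 10.0             # assumed mission duration
  isp_s = 220.0            # assumed Isp of a small monopropellant thruster
  g0 = 9.80665             # standard gravity, m/s^2

  total_delta_v = delta_v_per_year * years
  # Tsiolkovsky rocket equation, solved for propellant mass:
  propellant_kg = dry_mass_kg * (math.exp(total_delta_v / (isp_s * g0)) - 1)
  print(round(propellant_kg, 1))  # roughly 70 kg under these assumptions

Under assumptions like these, station-keeping is fairly cheap per year; the mission length ends up limited by however much propellant is carried.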


I think this assumes a successful launch and some kind of issue with the deployment.

The other concern is, of course, what happens if it blows up on the launchpad.


Presumably they've actually quantified these probabilities and think the chance of a launch failure is very low compared to the chance of something going wrong in the design. These things are not cheap, so it makes sense not to have made three of them when the likelier failure mode is the launch succeeding but the design failing.


Oh yes, I totally agree— this is a very proven rocket, and the deployment sequence is very, very much not proven. So it absolutely makes sense to have gone this way.


That's assuming that anything that goes wrong in the deployment sequence necessitates a clean-sheet redesign of the whole thing. No reason to think that.


But with so many things that could go wrong in such a tightly integrated system, it's impossible to predict which components or assemblies may be affected by a partial redesign.


If you say so.


Ariane 5 appears to have 96% mean reliability. Source: 2021 Space Launch Report – Launch Vehicle by Success Rate (spacelaunchreport.com) - https://www.spacelaunchreport.com/log2021.html#rate

https://news.ycombinator.com/item?id=29554274


The footnote about Lewis Point Estimates is super interesting.

"Lewis Point Estimate Determined as Follows.

  Maximum Likelihood Estimate (MLE)= x/n 
      where x=success, n=tries
  For MLE<=0.5, use Wilson Method = (x+2)/(n+4)
  For MLE Between 0.5 and 0.9, use MLE = x/n
  For MLE>=0.9, use Laplace Method = (x+1)/(n+2)
 
  Lewis, J. & Lauro, J., "Improving the Accuracy of Small-Sample 
   Estimates of Completion Rates", Journal of Usability Studies, 
   Issue 3, Vol. 1, May 2006, pp. 136-150."


Concur. You have to see this thread on it in the NASA Spaceflight forum - https://forum.nasaspaceflight.com/index.php?topic=39928.0


I'd imagine they just cannot fabricate as many mirrors as they want. The mirrors, arguably the most important parts of any telescope or optical instrument, require delicate manufacturing because of the precision involved. At the end of the day, this is not the automobile industry.


Makes me wonder if it would have been smarter to build it in a modular way with different pieces being sent up one at a time, assembled on the ISS, then launched in the direction it needed to go.


Then you have multiplied your launch risk by the number of launches it would take to get it up there, plus there is the difficulty of assembling it in orbit. The only other project built like that was the ISS, and it took decades and a launch vehicle that no longer flies.

The biggest problem would be that it would be in the ISS orbit, which would require a lot of delta-V to get it to its final orbit.


An advantage of assembling it in ISS orbit is that you have DEXTRE and a team of spacewalking astronauts available to do the unpacking and assembly!


Or an altogether different design: thousands of cheap, totally fungible sensors in an array at the L2 point.

Not only would they be replaceable, but we could increase capacity in the future.


Except that interferometry at IR is still extremely hard.


Humanity has a lot of experience building reliable one-offs; actually a lot more, and a lot better, experience than with modularity and bits and pieces. I'm a software engineer by trade, but every time I look into other industries (shipbuilding, skyscrapers, etc.) I realize that the aphorism about software engineers, woodpeckers, and civilization was absolutely right.


"If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization." ~ Gerald Weinberg.

For those who didn't know the quote: I found it here[1].

[1]: http://www.ganssle.com/articles/programmingquotations.htm


Can you explain the colloquialism a bit? Is it a knock on modular design?

Having worked both in construction and software, modular design seems fairly common in both.


No, and I think they should have made two of them.



I could be wrong, but I feel SpaceX has mastered how to add pizazz to such videos :-)


I think the problem with such projects is not only that they're incredibly complex, but also that they are NOT modular and not organized to bootstrap themselves. If any part of the launch or deployment process goes wrong, the whole endeavor could ultimately fail. On the other hand, if we could somehow make the telescope from many identical parts, transport them into orbit, and assemble and test it part by part, organized in stages so that with each stage the telescope becomes more powerful, then I believe the chance of success would be much higher. Of course, with our current technology and industrial base, assembly in orbit might cost significantly more. But in the long run, I don't think we'll have any better alternative. The most important point, however, is that if due to some error a transport ship exploded or a robotic arm jammed, the entire project would not be in jeopardy.


I would actually like to see something like that attempted on Earth, even with remote control: bring multiple pieces by truck to a single location, combine them without hands-on human involvement, and have the whole thing work as expected, within spec.


Assembly in orbit would probably be impossible. The telescope has tolerances measured in microns. You just can't do that with a modular system.


I keep reading about how there are ~350 single points of failure once Webb is successfully launched. Can someone with expertise chime in about why the telescope wasn't designed with more built-in deployment redundancies? It would seem to be good engineering practice to ensure that there are back-up systems, etc.


Redundancy is only one way to increase reliability. For example, having redundant elements can allow you to use cheaper components and meet the same level of reliability as a single, more expensive component but at the cost of mass (or vice versa). Most often, there are reliability requirements that are defined at a very high level, and a reliability engineer helps determine the right tradeoffs to meet those requirements. In addition, some of the points of failure don't lend themselves well to redundancy, like release mechanisms.

For certain risks, like human-rated spaceflight, they may just have requirements like "no single-point failures", which means redundancy is a must.
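
As a toy illustration of that tradeoff (the reliability figures below are made up, not drawn from any real component spec), two cheaper parts in parallel can match a single expensive one, at the cost of extra mass:

  # Toy numbers only: one high-grade component vs. two cheaper ones in
  # parallel, where the system works if at least one copy works.
  def parallel_reliability(r: float, copies: int) -> float:
      return 1 - (1 - r) ** copies

  single_expensive = 0.999                            # one high-grade part
  redundant_cheap = parallel_reliability(0.97, 2)     # two cheaper parts
  print(single_expensive, round(redundant_cheap, 4))  # 0.999 vs 0.9991

The reliability engineer's job is deciding, failure mode by failure mode, whether that extra mass (and complexity) is the cheapest way to hit the requirement.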


Redundancy costs mass, and there are a lot of deployment steps that simply cannot be made redundant. The trick is folding everything to fit within the constraints of the payload fairing. Good engineering is designing within those limits, but if you make it too safe you end up with something that is not worth launching.


Plus you don't get to send a Space Shuttle to L2 in case the mirrors are malfunctioning.


On the ESA web page… 4 likes. Come on, world! We can do better than that.


The launch is super scary, but the deployment seems even worse. When will we know whether or not all is working?

I don't know if I could handle that much pressure to launch that thing.


I am there with you. I've been watching this unfold for half of my life, and it seems like EVERYTHING must go right for it to happen. The unfolding is scarily complex. I really, really, really, really want it to go right, and I worry.

On the other hand, NASA has done the amazing thing of landing rovers on Mars flawlessly, and those were incredibly complex processes with sophisticated pieces of engineering.

biting my nails here


How about Cassini? It dropped that little sensor disk (the Huygens probe) onto Saturn's moon Titan - what a cool project.


From this article[0] which was on HN a few days ago:

>By June 2022, if all goes well, Webb will finally be ready for science.

[0]https://www.nature.com/articles/d41586-021-03620-1


All the scary stuff that can go catastrophically wrong is within the next 40 days or so. However, it'll take months to go through the calibration process, tweak the mirror positions, check out the optics, etc.


Yeah, I'm having trouble finding a detailed schedule, but it looks like the main mechanical components are all deployed in the first 15 days.

https://upload.wikimedia.org/wikipedia/commons/6/6a/JWSTDepl...

https://webbtelescope.org/contents/media/images/4180-Image

After that they still need to do a ton of checks, wait for outgassing, etc. The first "science-quality images" are expected by the end of the third month.


When's the launch date? I couldn't find a definitive answer, as both December 18 and December 22 are listed.


December 22, delays are always possible but it's not going to launch earlier than that ("no earlier than" is often abbreviated "NET" in that context).

https://jwst.nasa.gov/content/webbLaunch/countdown.html



As an extra question: when do we start getting data? Specifically, how long until we can start finding out whether planets around other stars have oxygen, or even industrial pollutants like CFCs? (My understanding is that if we see CFCs, it's almost certainly due to complex life and couldn't be a natural phenomenon. Oxygen is unfortunately more ambiguous, apparently.)

Ever since I heard about JWST (I feel like it was 8 years ago now), I've been looking forward to an answer to this question. All the delays have been so painful...


First light (or do IR 'scopes do "first heat"?) in June, 2022, if all goes well:

https://www.nature.com/articles/d41586-021-03620-1

30 days to achieve orbit.

Six months to settle, synchronise, align, and calibrate.


December 22nd; they were aiming for the 18th but there was a small issue that required a bit of additional testing so they pushed it back to the 22nd.


Wasn't the most recent 18 -> 22 delay because they dropped it?


They didn't drop it, but a clamp released unexpectedly during an operation and caused vibrations in the telescope. At that time they didn't have sensors attached that could confirm whether the vibration was within specified tolerances, so manual inspection of the sensitive components had to be done again.

Glad to hear it went well and we are back on track!


Yep.




> caused a vibration throughout the observatory.

I hope there won't be vibrations during the launch of the rocket.


The launch vibrations and their resonance frequencies are accounted for in the design.

The vibrations from this mishap were not.

I understand they ran some tests and calculations after the accident and believe that the instruments are good to go.


One means you felt something else have a bad day; the other means you had a bad day.


After checking, it was fine.


Dec 22: several minutes of terror, followed by ~33 days of nail-biting until the majority of the deployment is complete.


Unfortunately the launch is now no earlier than 24 December


This fills me with anxiety. It's like "PLEASE DO NOT BLOW UP!!"


Are there any onboard cams to watch the deployment after it reaches L2?



