The $11B Webb telescope aims to probe the early universe (nature.com)
353 points by infodocket 41 days ago | 280 comments

The deployment sequence, which takes roughly 30 days, is terrifying, but if it succeeds it will also be one of the most complex things we have ever achieved.

Have been waiting for this since I was a teenager. Can't believe we are almost there (launch on Dec 22).

Here's a short 2 min video of that deployment sequence if anyone wants to be fascinated: https://www.youtube.com/watch?v=RzGLKQ7_KZQ

Also a short interview with Dr. John Mather (could listen to him all day) if anyone wants to know how the telescope works: https://www.youtube.com/watch?v=4P8fKd0IVOs

How does one even begin to engineer tests for such a sequence of events? Yeah you can test each step individually, but then you have all the integration effects. How do you know you have enough coverage?

$10B buys a lot of QA and I'm sure they try to engineer everything with the right margins, but it's still an unfathomable amount of state space.

Are there techniques to stay sane and manage risk without just throwing money at it? I feel like that kind of knowledge could be useful for software test development.

There's no real silver bullet other than applying the systems engineering process diligently. You start by writing down your user requirements (what the system needs to deliver), and you follow the thread of figuring out that "to do X, this subsystem has to provide conditions A, B, C..." recursively, in a breadth-first search. The level of detail codified in these functional, performance and interface requirements depends on the level of assurance you need.

Then, you need to validate that each requirement is met by your system. This can be done by test, analysis (mathematically proving some property), review of design, or inspection. It's true that you can't fully validate most space systems on Earth, because we can't simulate all environmental conditions simultaneously. That's why, ideally, you want each requirement to be validated by two methods.

When you find anomalies due to integration effects, it's usually because your interface requirements are not specified well enough ;)
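The recursive flow-down and two-method coverage rule described above can be sketched in a few lines. This is a toy model; the requirement names, tree, and verification sets below are invented for illustration, not from any real JWST document:

```python
from collections import deque

# Invented requirements flow-down: each requirement lists its children.
requirements = {
    "SYS-1 deliver IR images": ["OTE-1 mirror figure", "ISIM-1 detector noise"],
    "OTE-1 mirror figure": ["ACT-1 actuator resolution"],
    "ISIM-1 detector noise": [],
    "ACT-1 actuator resolution": [],
}

# Verification methods claimed for each requirement
# (test, analysis, review of design, inspection).
verification = {
    "SYS-1 deliver IR images": {"test", "analysis"},
    "OTE-1 mirror figure": {"test"},            # under-covered on purpose
    "ISIM-1 detector noise": {"test", "analysis"},
    "ACT-1 actuator resolution": {"inspection", "analysis"},
}

def coverage_gaps(root):
    """Breadth-first walk of the flow-down, flagging reqs with < 2 methods."""
    gaps, queue = [], deque([root])
    while queue:
        req = queue.popleft()
        if len(verification.get(req, set())) < 2:
            gaps.append(req)
        queue.extend(requirements.get(req, []))
    return gaps

print(coverage_gaps("SYS-1 deliver IR images"))  # ['OTE-1 mirror figure']
```

The breadth-first order mirrors the "to do X, this subsystem has to provide A, B, C..." decomposition: you finish each level of assurance before descending.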

this level of rigor always makes me snicker at the engineering in "software engineering"

Good god, please stop this misguided snarkiness. Comparing software engineering (or 99.99% of EE, ME, CE, etc.) to the Webb telescope, then haughtily using that comparison to dismiss the entire industry as not really being engineering at all, is just ridiculous. I have spent almost 10 years as an EE and another 12 as a software engineer, and the idea that the level of rigor in EE or ME is any higher than on high-caliber software teams is just laughable. CI? Detailed process and best practices for review, readability, reusability, testability, security... I won't even go on. Most EE teams I've encountered are laughable on all those fronts, all practices that are very common in software engineering. If you think otherwise, I highly doubt you have real-world exposure to the processes generally followed by those other disciplines. I do. FYI, they are just regular jackasses like all of us :)

Also, guess what? The Webb telescope has a ton of software on it. I suppose that is just "software engineering"? (in quotes, har har)

Look, there are terrible teams doing terrible or very simplistic software work. The same is true in all engineering disciplines and it is of course true that in all engineering disciplines (including software) there are also teams operating with a high degree of passion and rigor in creating very complex things. So yeah, please stop with these silly and really pretty insulting comments.

Also, consider that a halfway complex piece of software (from our outside POV) is enormously more complex than your average IC, vehicle (if you exclude the mountain of software controlling it), or general "engineering" project in other disciplines. Obviously that isn't always true (CPUs and the James Webb telescope, for example, are also enormously complex), but most seemingly simple things you use every day (random example: Grubhub) are massively complicated to build, evolve, and keep running each day. Dismissing them as not real engineering is really ignorant and rude.

The ME and EE associated with creating a new product like the Nest Thermostat (for example) are, by comparison with something like Grubhub, of trivial engineering complexity (although still very respectable engineering efforts).

I’ve been writing software professionally since the 90s. Tons of software is pointless crap to make jobs.

I’m gonna go with Carmack on this one; you think you have a better understanding of it than him? It’s just ifs and for loops.

Sure, some contexts require real nuance. But just because Grubhub IS massive doesn't mean it needs to be engineered that way; that's a byproduct of socio-political forces (money).

There’s nothing in any science or engineering book I’ve read that says “all software must be engineered as a large distributed application”. Again, a byproduct of contemporary business goals.

My generative art object app is not sending people to the moon and merely relied on me wrapping some well understood math in machine language syntax; it’s librarian work.

Your preference for adverbs and emphasis where you see fit to place it does not move me.

Accomplish something net new for humans, 'cause I've been sending datagrams over the internet since the 80s. Grubhub isn't all that interesting.

What percentage of software projects are complete disasters? In spite of being staffed almost entirely by people with CS degrees? 50%? 60%? I guess that is because it is so easy? Whether from a societal POV they need to be that complicated, the answer is of course "no", it is all about business goals and is mostly crap we don't need. But what has that got to do with the price of tea in China? The discussion is about whether software is "real" engineering, not if engineers are engaged in meaningful work.

On that front, the idea that software systems with 100s of features, millions of daily users, extreme uptime requirements, hackers all over the world trying to break in daily, and billions of $$ on the line are just "ifs and for loops" or "librarian work" is... oh come on. Never mind, this conversation is silly.

You say you are an experienced developer, but honestly it sounds like you've never built anything except generative art object apps.

> seemingly simple things you use every day (random example: Grubhub) are massively complicated to build, evolve, and keep running each day

What exactly is massively complicated about a predatory aggregator site compared to a vehicle?

What does their predatory nature have to do with complexity?

Remove predatory, keep the question.

Look past trees, see forest. Let go of your desire to shit on developers and the cheap satisfaction you get from doing so.

> Look past trees, see forest.

I have.

> Let go of your desire to shit on developers

So you're a psychic now, and you read into people's desires by looking at HN posts?

The question remains. But it's clear you can't answer it without ad hominem.

You are asking a silly question. Jeeze, maybe it isn't the best comparison, maybe I should have been more specific. The point you are asking about in no way affects my larger point so who cares. You harping on it is just weird.

> You are asking a silly question.

It's a perfectly valid question: what makes an aggregator site like GrubHub a more complex engineering problem than a vehicle?

> maybe it isn't the best comparison, maybe I should have been more specific.

Maybe it is, maybe you should have been, but you weren't.

> The point you are asking about in no way affects my larger point so who cares

What is your larger point exactly? That "having a CI" somehow makes you an engineer because your software is somehow "more complex than a vehicle"?

> You harping on it is just weird.

The fact that you answered literally nothing and keep sticking to ad hominem attacks shows that you have no substance to what you're saying.

Likewise... I was pretty impressed with myself in the 90s as a young guy passing the Microsoft Certified Systems Engineer exam.

"According to Encyclopedia Britannica, the first recorded 'engineer' was Imhotep. He happened to be the builder of the Step Pyramid at Ṣaqqārah, Egypt." [link]

I look forward to drinking ale with Imhotep (and NASA JW Space Telescope engineers) in the great heavenly hall of engineers.


One of the best talks I've heard recently was by an early-20s engineer talking about safety-critical software for trains.

Yes, it's very different from most software engineering. No need to snicker, just do the appropriate thing for your situation.

Can you link the talk or post its title?

Sorry, it was at an off-the-record conference. As a result it was a very candid talk!

Fair point, but the software that eventually ends up in flying space hardware has usually been put to a similar test.

This is how you are supposed to build software too.

Who supposed that?

Broken software can be fixed cheaply after the fact. Yeah, it’s cheaper if you find the bugs earlier but it shouldn’t come as any surprise that pre-validation is more extensive in systems that are expensive to change.

There are lots of different types of software.

There are phone note apps and control systems for jets and artificial hearts

All that falls within the same paradigm; it's just that now your user requirements have a little flexibility around fault tolerance.

You still have SLAs to manage and meet.

Whether you are “supposed” to do that or not really depends on what kind of software you are building.

Precisely. Besides aerospace there aren't many industries that know how to do software well.

I see it as an Agile vs Waterfall thing.

Waterfall works great if, and ONLY if, you know 100% for sure exactly what your requirements are at the beginning of the project and they never change significantly. This seems to be mostly true-ish for aerospace. Nobody's going to pivot the Falcon 9 into being a washing machine next week.

It seems to me that the point of Agile is that Waterfall fails hard if you have no idea what you want your business software to actually do. Agile is an attempt to build a reasonable software development process around that reality.

So when you get down to it, the lack of rigor isn't so much in software itself as in business. If you don't have any idea what your business is going to actually do, no software or engineering process can fix that.

> Nobody's going to pivot the Falcon 9 into being a washing machine next week.

I may have to borrow that.

An alternative explanation: we're so in a hurry that we don't want to take time out to decide what it is that we are going to build, and so we embrace the fact that we don't know what we are doing and put a sexy label on it, because hey, who doesn't want to be Agile? It sounds so much better than 'clueless'.

Most of the time, we don't care that much about what we're going to build. The actual goal is "build whatever makes money". In FOSS circles without money it's often "whatever gets users" instead.

If I'm doing a custom project that does some sort of warehouse management for a client, I'm not really deeply invested in my particular vision about how a warehouse should be managed. I'm deeply invested in making my client happy.

In terms of software development and testing practices, what the military/aerospace/medical industries do would never scale to any other industry. If the law were to mandate those practices across the board, that might sound great, but it won't matter because you won't be able to afford the resulting products.

It would scale just fine. Software would cost a bit more, profits would go down a bit, quality would shoot up and support costs would drop. I wouldn't be surprised if it was a net positive in the longer term.

But in a world dominated by quarterly earnings this won't fly.

Not necessarily advocating for it, but if that would mean less bloatware, de facto spyware, and lousy, pointless apps, or god forbid even fewer social media apps, I would call that a net positive.

It would catapult the industry back to the 1990s, at best.

How "at best" is interpreted is obviously up to the individual, of course.

Given Boeing’s recent history, I’m not sure aerospace knows how to do software well either.

That's a regulatory problem, not a software development problem. Simply put: that plane should never have been certified under the existing type certificate.

True. I would even argue that the software and hardware Boeing developed did exactly what it was supposed to do.

You can probably snicker at most engineering in "X engineering".

Is any of the systems documentation for any NASA project publicly available? As someone who spends a considerable amount of time sifting through systems documentation where the process hasn't been applied so diligently, I've always wanted to read a NASA SSDD or similar.

Your description is a good one.

My experience in industry is that we validate requirements when we confirm that they are necessary to achieving a particular mission.

We then verify that the system under construction does in fact satisfy the set of validated requirements.

True, verification is the correct term in this case :)

I agree, it's a recursive search! Translated into software testing:

"level of detail codified in functional, performance and interface requirements"

functions, usage frequency, APIs.

"usually because your interface requirements are not specified well enough"

It's probably a bug in the API.


"There are only two hard things in Computer Science: cache invalidation, naming things, and off-by-one errors"


Suggestion: use ctags to list all functions and variable names in your code. Look for ambiguity (e.g. a variable named "i"). Look at neighbouring code. Zoom in and out. A small bug in the most-used code is actually more serious than a big bug in code that rarely gets used.

"How long can you work on making a routine task before you're spending more time than you save?"
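The ctags idea above can be approximated in pure Python with the standard-library `ast` module: walk a module, collect function and variable names, and flag one-letter identifiers as review candidates. The sample source below is invented for illustration:

```python
import ast

# Toy module to scan; in practice you would read a real .py file.
source = """
def f(i, total):
    x = i + total
    return x
"""

tree = ast.parse(source)
names = set()
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        names.add(node.name)
        names.update(a.arg for a in node.args.args)
    elif isinstance(node, ast.Name):
        names.add(node.id)

# One-letter identifiers are the ambiguity candidates mentioned above.
ambiguous = sorted(n for n in names if len(n) == 1)
print(ambiguous)  # ['f', 'i', 'x']
```

Real ctags also covers classes, constants, and other languages, but the review workflow (dump symbols, scan for vagueness) is the same.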



NASA Systems Engineering Handbook https://ntrs.nasa.gov/citations/20170001761

That's the 2017 version; maybe there's a later one. IIRC, it's abridged from the NASA Expanded Guidance for SE, but my link to that is broken.

Wait no Agile ?

Did you read it all? ;)

I would start with https://en.m.wikipedia.org/wiki/V-Model. System designs of everything in automotive, aerospace, etc. are based on a V-model.

As with all good systematic approaches (Lean and Six Sigma fall in the same category), the V-model is great. It is also, basically, just codified common sense, and that codification is key and incredibly important. And as with Lean and Six Sigma, people can apply the substance of it without rigidly following the form (read: checking boxes in a process description) and be fine. Or they can follow the form without the substance, be fine on paper, and still produce crap systems engineering. Kind of like the old saying: inspection-ready troops don't pass combat, combat-ready troops don't pass inspection.

Systems engineering is the discipline that oversees this. They define what tests will be required to validate that the thing will do what it is supposed to, before any hardware is built. I don't think there is a good analogy to typical software QA, which is usually a "make sure it doesn't break anything that already works" type of discipline.

That kind of exhaustive integrated/system-level testing is precisely what we're trying to enable at Auxon, FWIW. A really rough sketch of the approach, for brevity: crib from property-based testing, mutation testing, and model checking, and apply them to system and software interactions instead of programs, functions, and source code. Our users are often Systems Engineers by title, or more often by circumstance.
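In miniature, the property-based idea looks like this: generate many random inputs and assert an invariant for all of them, rather than hand-picking cases. This is a toy pure-Python sketch (not Auxon's product, and hand-rolled instead of using a framework like Hypothesis); the roundtrip property is just an example:

```python
import json
import random
import string

def random_message(rng):
    """Generate a small random 'message' (invented structure for the demo)."""
    key = "".join(rng.choices(string.ascii_lowercase, k=5))
    return {key: rng.randint(-1000, 1000)}

def roundtrips(msg):
    """Property under test: encoding then decoding must be lossless."""
    return json.loads(json.dumps(msg)) == msg

rng = random.Random(42)  # seeded so the run is reproducible
assert all(roundtrips(random_message(rng)) for _ in range(1000))
print("1000 generated cases passed")
```

The interesting part of real tools is shrinking failing inputs to minimal counterexamples and stating properties over whole system interactions, not single functions.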

You gave the answer! Integration tests. And they work recursively, like a Kalman filter approximating state even in noisy conditions.

"USL was inspired by Hamilton's recognition of patterns or categories of errors occurring during Apollo software development. Errors at the interfaces between subsystem boundaries accounted for the majority of errors and were often the most subtle and most difficult to find. Each interface error was placed into a category identifying the means to prevent it by way of system definition. This process led to a set of six axioms, forming the basis for a mathematical constructive logical theory of control for designing systems that would eliminate entire classes of errors just by the way a system is defined.[3][4]"


There's a diagram of rules on the USL Wikipedia page. The rules show triangle feedback loops with a parent, left, right child. Those are like generations of a Sierpiński triangle. Every part is trying to serve the Good Cause that it's working for, and love its neighbour.


"Any state-space model that is both controllable and observable and has the same input-output behaviour as the transfer function is said to be a minimal realization of the transfer function. The realization is called 'minimal' because it describes the system with the minimum number of states."


"the problem of driving the output to a desired nonzero level can be solved after the zero output one is."

An electronic analogy: find GND, then solve for 1.

A common solution strategy in many optimal control problems is to solve for the costate (sometimes called the shadow price). A shadow price is a monetary value assigned to currently unknowable or difficult-to-calculate costs in the absence of correct market prices. It is based on the willingness-to-pay principle: the most accurate measure of the value of a good or service is what people are willing to give up in order to get it. The costate summarizes in one number the marginal value of expanding or contracting the state variable next turn.
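In standard optimal-control notation (this is general Pontryagin machinery, not anything specific to the comment above), the costate λ is exactly that marginal value:

```latex
% Hamiltonian: running cost L plus costate-weighted dynamics f
H(x, u, \lambda, t) = L(x, u, t) + \lambda^{\top} f(x, u, t)

% Costate dynamics (the "shadow price" of the state) and the optimal control
\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
u^{*} = \arg\min_{u}\, H(x, u, \lambda, t)
```

Solving the costate equation backward from the terminal condition is what makes the "look ahead, then price the present" strategy work.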

Each part looks ahead 1 generation, chooses left or right.


"Convert a representation of any linear time-invariant (LTI) control system to a form in which the system can be decomposed into a standard form which makes clear the observable and controllable components of the system."

Take a big problem, break it down, look for I/O ports. Or in software test development: layers of abstraction. A suggestion: only add a layer of abstraction when the code is too big to fit on the screen at once. Use tools like code folding and tree views.
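The Kalman-decomposition idea quoted above can be checked numerically: an LTI system (A, B, C) is controllable/observable iff the corresponding block matrices have full rank. A numpy sketch, with matrices made up purely for illustration:

```python
import numpy as np

# Example 2-state LTI system (a damped oscillator-like A; invented values).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
# Controllability matrix [B, AB, ..., A^(n-1)B]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
# Observability matrix [C; CA; ...; CA^(n-1)]
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

controllable = np.linalg.matrix_rank(ctrb) == n
observable = np.linalg.matrix_rank(obsv) == n
print(controllable, observable)  # True True
```

States outside the controllable/observable subspaces are exactly what a minimal realization discards.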


Optimise for time? When we're in a hurry, we break things. Another suggestion: aim to minimise entropy and maximise connectedness.

Thank you for asking a good question, and thank you for reading! Let's go and tidy up this world together, in software and hardware.

Thank you for the deep response!

That is kind of what I figured, basically it always boils down to divide and conquer.

It just feels like software is simultaneously easier than mech eng in so many ways (everything is in silico, near-infinite abstraction power, the ability to automate so many parts), yet we are constantly struggling with complexity. It starts simple but then quickly becomes a ball of yarn.

Maybe that perception is wrong, but if there is some truth to that intuition, I think it is due in part to this: software is all degrees of freedom and very few invariants to start, but then the system space quickly fills with rigid rules that can intersect in unpredictable ways. In physical engineering, you start with lots of invariants (the laws of physics, materials, chemistry, electronics, and geometry), but they are all thoroughly documented at this point. Things intersect, systems interact, but it feels much more bounded. You're only ever gonna have 3 dimensions, 6 simple machines, 92-ish usable elements, the standard model, and universal constants.

This is why I value Rich Hickey's "Simple Made Easy" as the exemplar of software philosophy. It's easy to say "oh yeah, modular code good," but it's another thing to actually write code that is naturally decoupled.

As an aside, both disciplines tend to benefit from "throw more hardware at it!"

Thank you so much for being a helpful messenger!

> divide and conquer

Yes, problems are finite, so we can Binary Search them to a single answer.
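In debugging terms, that binary search is what `git bisect` automates: find the first bad item under a monotone "is it broken yet?" predicate. A toy sketch, with the commit list and breakage point invented for the demo:

```python
# Toy git-bisect: commits at index >= 6 are "bad".
# Binary search is only valid because the predicate is monotone.
commits = list(range(10))
FIRST_BAD = 6

def is_bad(commit):
    return commit >= FIRST_BAD  # stand-in for "check out and run the failing test"

def first_bad(commits):
    lo, hi = 0, len(commits)
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid      # first bad commit is at mid or earlier
        else:
            lo = mid + 1  # first bad commit is after mid
    return commits[lo]

print(first_bad(commits))  # 6
```

Ten commits take four test runs instead of ten, and the gap widens logarithmically.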

> quickly becomes a ball of yarn

Looks like a chaotic system, but actually chaos is just a fractal at a dimension we don't understand. Every XML tag is a dimension {for loop, if statement, indenting, git blame with names should be git thank}. Chaos is just a Wolfram rule that's not the recursive self-sacrifice of Rule 90's Sierpinski Triangle. Let's simplify our software design to make it follow a fractal pattern, and balance the forces.

> Physical engineering, you start with lots of invariants

Sounds like Haskell or functional programming. Personally I like to add the extra tags, but as code comments that describe the meaning and translate to humans what the code does. I prefer to do so inline (cache) and at the top of the function/file (RAM), not in separate documentation (disk).

> 6 simple machines

Thank you so much, you good messenger! Through your message, God just taught mechanical engineering paradigms to help with making more analogies. Pendulum is clock signal. While in the shower, there was a spider. It tries, falls, tries again with slightly different parameters. It outsources processing to the Web (there was another Hacker News article about spiders and webs and brains).

Everything can be modelled as a mass-spring-damper system, so we can just translate the transfer functions. We can optimise for Optimal Control in the limited 3D we understand, then project out again to higher-dimensions. Spiders make many 2D webs that intersect.

inclined plane // gravity (?) // not only gravity // an inclined plane is like a wide lever

lever // straight line

wedge // fulcrum, a fixed point on the ground. GND 0. Triangle. Sierpinski Triangle.

wheel // continuous motion. A round circle (?) or a spiral? Do circles exist? Yes, though ours are imperfect. A colleague said it's the bearings that break in appliances.

axle // turn, change the world // balance electron spin // balance like yin-yang, cosmic microwave background, 2 axles

pulley // rotation equivalent to a lever on a wedge // a bicycle is 2x pulleys with gears (cone), a digital-analogue converter moving forward

screw // reproduce, throw more hardware at it! // factorially // make a spiral cone from a round circle and a straight line
Sorry that the thoughts are not translated into full sentences yet; please email and we can chat more! There are many more ideas where these came from, not for my personal glory, but for the greater Good. Ideas of spiderweb network topology, a Facebook/Meta engineering director of social network bringing peace in diversity using shared {music, memes} taste, Uncle Bob https://www.youtube.com/watch?v=BSaAMQVq01E&t=2021s "love your neighbour the code -> clean as you go".

Let's pray for the JWST telescope. Young men will see visions, and old men will dream dreams. It's our dream that the humans can see the tree better: not just galaxies (branches) but stars (leaves) and planets (pollen). There is a root system underground also, which mirrors the tree we can see. Humans can also use other senses {eyes, ears, nose, mouth, hands, feet} but the JWST is the best eyes that humans can make with technology today. It's not perfect, but it's the best that humans can do. And it will work well enough to be useful, not just for James Webb's sake, but for the Greater Good.

The really scary thing is that because it's in L2 orbit (past the moon) it's not designed/intended to ever be serviced. So they can't go up and unfuck it if it's fucked.

At least not until ~2023 or so, when Starship is ready.

You are -severely- underestimating how big space is.

> How long will it take Webb to get to L2? It will take roughly 30 days for Webb to reach the start of its orbit at L2, but it will take only 3 days to get as far away as the Moon's orbit, which is about a quarter of the way there.

Or 2028, Musk has never set a deadline he hasn’t blown straight past.

That is definitely true.

On the other hand, 2028 is better than the promises coming from the other option, SLS, which I believe will be ready never.

That's the difference between engineering and 'move fast and break stuff'.

Let's hope it all works out, if not, some expensive lessons are about to be learned.

it's wild to me, given all the delays and complexity and risk, that the mission length is only 5-10 years max. but even if it blows up on the launchpad we've learned a ton, if only about the difficulty of manufacturing such devices in the 21st century. i am praying it does work, though, and that we get 10 years of amazing data from it before eagerly deploying a replacement.

Is 10 years a hard max (like does it crash into the moon or something?) or is it just a projected max timeframe?

I wonder that mostly because we've managed to use a lot of our other space equipment well past their mission lengths. I'd be interested if JWST is possibly the same.

Unlike Hubble, JWST needs to keep itself in a stable orbit around L2, which is cited as the reason for it being a finite mission.

Edit after someone corrected me.

Please refer to this comment: https://news.ycombinator.com/item?id=29490291

The article you linked says absolutely nothing about the helium cooling medium.

Three of the four imagers on the telescope are passively cooled and will work as long as they don't succumb to radiation, diffusion, etc. The fourth (MIRI) has a cryocooler that uses liquid helium, but it will leak out very slowly, and mechanical wear and electronics lifespan are expected to be the limiting factors there. [0, 1]

As stated in other comments, the primary driver of lifespan is a combination of how stable the telescope orbit is, and the resulting amount of fuel needed to keep the telescope in a stable orbit. Depending on how things go it has enough fuel for somewhere between 5.5 and 40 years of operation. Assuming nothing else goes wrong. :)

"Webb is designed to have a mission lifetime of not less than 5-1/2 years after launch, with the goal of having a lifetime greater than 10 years." [2]

0: https://jwst.nasa.gov/content/about/innovations/cryocooler.h... 1: https://www.nasa.gov/feature/jpl/how-cold-can-you-go-cooler-... 2: https://jwst.nasa.gov/content/about/faqs/faq.html

You are right. The source for my statement above is this link: https://www.americanscientist.org/article/jwsts-limiting-fac...

At the end of the link is the clarification:

Drs. Heng and Winn respond:

As pointed out to us by Drs. Jason Kalirai and Jason Tumlinson at the Space Telescope Science Institute (STScI), as well as Mr. Sykes, our article misstated the reason for the finite lifetime of the upcoming James Webb Space Telescope. The mission duration of 5.5 to 10 years is not limited by the supply of liquid helium, as we stated. Rather, it is limited by the supply of hydrazine fuel needed to maintain the spacecraft’s orbit.

Thanks for the correction, will edit my parent reply.

Does this mean an ion thruster or solar sail could have significantly increased the service life? Or would something else give out shortly after the fuel runs out?

It is due to orbit at L2, in the eternal shade of Earth. So no solar power.

No, that's not true. It will be orbiting around L2 and not stay at L2, so it will have access to sunlight, which powers the solar array that faces the sun. The actual observatory and the mirror are shielded by the sunshield.

Here is Dr. John Mather explaining it: https://youtu.be/4P8fKd0IVOs?t=1321

I’m sure one of our manned moon missions can swing by and top her off



The limit is propellant in the tank, which needs to be used for station-keeping.

5.5 years is the minimum, 10 sounds probable (the stated goal), while 20-40 years is the best guess with expected fuel usage.


Can't they design the tank system modularly, to be replaceable? Like having a rocket carry a payload that swaps the tank cartridge for a new one (especially as SpaceX etc. are making space cargo much cheaper), giving, say, another 5 years' worth of propellant.

I'm pretty much 100% sure NASA knows this better than me, of course, but I'd love to see the reasoning behind planning to retire such an expensive project after a (relatively) short ~10 years.

1) It's more complex to design a refuelable fuel system.

2) No vehicle exists/existed at design time that could support a refueling system.

It's a hard limit due to needing fuel to maintain its orbit. It's at a Lagrange point (https://en.wikipedia.org/wiki/Lagrange_point), which requires occasional orbital corrections.

Yes and no. Fuel is the limiting factor, but it could go beyond a decade. See here: https://space.stackexchange.com/questions/55309/james-webb-t...

IIRC, it was also not designed to be serviceable.

-ish. They have no firm plans for servicing it, but it does have a docking adapter and the fuel/coolant connections are designed to be usable in space.

Basically, because there is no reasonable way to service something at L2, they can't really plan for it. But it's expensive enough that they made sure the capability is there, in case someone in the future were to, say, build a spaceship that is orbitally refuelable and designed to take crew that far out.

It has a docking ring for potential service mission.

Well it's not strictly a hard limit but it's currently planned to be a hard limit. If SpaceX can pull off even a fraction of what they claim with Starship, it's not unrealistic to think that it'd be financially viable to attempt a refuelling of the JWST.

That's an event I'd like to see!

A good video about Lagrange points: https://www.youtube.com/watch?v=Gu4vA2ztgGM

The Opportunity rover had a planned mission duration of ~93 Earth days. It went on to serve for ~5,500.

I wonder how credulous I've been about those estimates. Underpromise, overdeliver is an old tool for managing expectations. I wonder what NASA really expects for these projects.

(The projects are still amazing; I'm not complaining about the engineering or performance!)

We really need Starship in operation. It should be able to carry much larger objects into space - no need for complicated folding and unfolding mechanisms.

Starship could possibly take normal sized heavy equipment to other planets, such as heavy earth movers. (Not those with a combustion engine, but still useful.)

The one thing that I think Starship has proven is that for any major mission that stretches our current launch capabilities, it may be worth investigating developing a new launch vehicle instead of accommodating the ones we have.

Just think of all the engineering and risk that’s going into a process that will be used once.

I thought about this a lot while I was designing my own amateur liquid rocket. You would almost certainly save a lot of money across all launch programs on the payload side, just by spending the effort upfront to qualify a new heavy-lift system like a Saturn-VI or Starship.

I think it's just such a difficult thing to justify huge upfront costs to Congress, especially if you see how long it has taken SLS to get anywhere. IMO, ULA's, Boeing's (solo), and Northrop's cultures couldn't possibly cut it for developing another amazing vehicle at any appreciable speed without the euphoria and meaning that the Space Race provided.

Good points. Sometimes projects need constraints to get moving.

Not even that: seeing how cheap Starship launches would be per kg of payload (I've seen a figure of $10), we could just as well build a huge orbital station and manufacture and/or assemble arbitrarily sized stuff there, or even in open space. No atmosphere (and especially no pesky oxygen) to deal with, no contaminants to keep out, no gravity to fight against. I'd imagine that any scientific and fabrication processes that need a deep vacuum would also greatly benefit from being done on a space station.

Zero gravity engineering is going to be an interesting challenge, and a source of many funny videos. (Possibly some less than funny, too.)

I would definitely love to see something like the Space Station V from Kubrick's 2001 - A Space Odyssey IRL. AFAIK it was almost a quarter of a mile in diameter. This seems to be suited for in-orbit fabrication and assembly.

I think if they had starship they'd have just built it foldable anyway but with a bigger mirror

From that video it appears the unfolding sequence is set to occur prior to the insertion burn into L2.

If the JWST will in fact spend almost a month in earth's orbit, does someone have an educated estimate on the magnitude of risk posed by space junk to nominal deployment?

Looking at those solar shields I imagine that they could be destroyed entirely by even the smallest of debris fragments. Same with the mirrors.

Edit: I'm wrong here (thanks @thethirdone). The burn set to occur after deployment is the L2 insertion burn and not the transfer insertion burn. Most of deployment will occur in the transfer orbit en route to L2, far away from earth-orbiting debris.

I interpreted "orbital insertion burn" to mean the stabilization into L2 burn. With that interpretation the unfolding occurs during its travel to L2 where there is little space debris.

Basically none cause it's outside the typical orbit

It will orbit and deploy around L2, the point where the Earth's and sun's combined gravity lets it orbit in lockstep with the Earth, which is far beyond most anything else, especially space junk.

Suddenly extremely worried about this:

"James Webb Telescope will run a proprietary JS interpreter by a bankrupt company "



"The James Webb Space Telescope (JWST) will use an event-driven system architecture to provide efficient and flexible operations as initiated by a simplified, high-level ground command interface. Event-driven operations is provided through the use of an on-board COTS JavaScript engine hosted within the payload flight software..."

Edit: Found something ....Is it too late to postpone the launch?


"...The JWST science operations will be driven by ASCII (instead of binary command blocks) on-board scripts, written in a customized version of JavaScript. The script interpreter is run by the flight software, which is written in C++. The flight software operates the spacecraft and the science instruments.

The on-board scripts will autonomously construct and issue commands, as well as telemetry requests, in real-time to the flight software, to direct the Observatory Subsystems (e.g., Science Instruments, Attitude Control, etc.)...

The flight software will execute the command sent by the calling on-board script and return telemetry, which will be evaluated in real-time by that on-board script. The calling script will then send status information to a higher-level on-board script, which contains the logic to skip forward in the observing plan in response to certain events (see Section 4.1)... "
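The command/telemetry loop described in that excerpt can be sketched roughly like this. This is purely illustrative: ScriptEase and the JWST flight-software interface aren't public, so every API name here (sendCommand, getTelemetry, the event codes) is invented for the sake of showing the pattern, not the real system.

```javascript
// Illustrative sketch of the event-driven pattern described above.
// All names are invented; the real ScriptEase/flight-software API is not public.

function runObservationStep(step, flightSoftware) {
  // The on-board script constructs and issues a command in real time...
  flightSoftware.sendCommand(step.command);
  // ...then requests telemetry and evaluates it immediately.
  const telemetry = flightSoftware.getTelemetry(step.telemetryId);
  // Status flows up to a higher-level script, which contains the logic
  // to skip forward in the observing plan in response to certain events.
  return telemetry.status === "OK"
    ? { event: "STEP_COMPLETE", step: step.name }
    : { event: "STEP_FAILED", step: step.name };
}

// Minimal mock of the C++ flight software, just to exercise the pattern:
const mockFlightSoftware = {
  sent: [],
  sendCommand(cmd) { this.sent.push(cmd); },
  getTelemetry(_id) { return { status: "OK" }; },
};

const result = runObservationStep(
  { name: "acquire-target", command: "POINT", telemetryId: 42 },
  mockFlightSoftware
);
console.log(result.event); // "STEP_COMPLETE"
```

The interesting design point is that the script evaluates the telemetry itself rather than waiting on ground commands, which is what makes the "skip forward in the observing plan" autonomy possible.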

Found it...

"JWST uses an extended version of JavaScript, which was developed as a COTS product called Nombas ScriptEase 5.00e. ScriptEase provides functionality common to many modern software languages and follows the ECMAScript standard."



Latest errata from 2004, moving from worried to full panic mode...


It is common in such commercial agreements to provide source code, or to escrow source code with a third-party service with conditions that trigger release of the sources (e.g., bankruptcy, or sale to another company that discontinues the product). So it is possible they have the full source.

It is also worth considering that the JS engine likely hasn't changed much (if at all) in the past 15 years. Its bugs and limits are well-known at this point.

It is also an interpreter, which makes it slower* but less subject to vulnerabilities that impact the host. Honestly that's probably the correct choice for a spacecraft where reliability and safety are more important than performance.

Don't get me wrong: JavaScript is a big ball of WAT and nonsense we've spent way too much effort improving but I don't blame them for making the choice so long ago and sticking with a known quantity rather than risk introducing new problems by changing things.

* I once worked on a project that used IronJS to run JS in the .Net runtime. It took advantage of the runtime's JIT but was a lot of not terribly optimized F# code. I built a V8 bridge and was very excited for the increase in perf... but it got slower. It turned out most customer-written JS code spent most of its time using the API which was backed by C# code and that meant lots of bridging. At the time I left they were still using IronJS because it was faster for their workloads. It taught me the importance of testing your actual workload and taking a whole-system approach to perf.

I'd like to understand how such a pinnacle of human design and engineering came to depend on a technology that is, putting it politely, certainly not.

Software assurance within NASA is often a low-priority if not just an afterthought. Many project/program managers are from the hardware side (e.g., mechanical, electrical, or industrial engineers) and don't always give the appropriate gravitas to software in terms of its ability to contribute to failures.

What is that based on? And has NASA had many software failures? Their missions seem incredibly reliable, especially considering how far beyond the bleeding edge they operate.

>What is that based on?

Personal experience as a civil servant with NASA. Often, quality aspects take a back seat when schedule pressure is high. It's the whole reason their current safety and mission assurance org became a separate entity after Columbia.

>And has NASA had many software failures?

There have been quite a few high-profile ones, like Mars Climate Orbiter and more recently CST-100. In the case of the latter, there were clear process gaps; the issues should have been caught if the software assurance procedures had actually been followed. Note that the last one was for a non-crewed test of a vehicle meant to take people into orbit; presumably that demands the highest threshold of quality procedures. There are many, many more lower-profile ones that don't get talked about, even within the agency, dating back to Gemini.

>especially considering how far beyond the bleeding edge they operate.

I know that's the public perception. Many of the missions are bleeding edge (because there's very little incentive for anyone except the government to do them), but you might be surprised at how they don't always use state-of-the-art tech. Now some of that is by prudent choice, because you'll often prefer tried-and-true over bleeding-edge, but some of it is just complacency.

Thanks for sharing your perspective. I still don't grasp what you saw there:

> There's been quite a few high profile ones like Mars Climate Orbiter and more recently with CST-100.

Mars Climate Orbiter failed in the 1990s. Isn't CST-100 (Boeing Starliner) still in development? (I don't remember the latest.)

I don't doubt you have something in mind; I'd be interested in knowing what it is. Is it the idea of seeing sausage being made - it's messy and doesn't fit the public image? That I would completely expect. IMHO, that's true of every organization; failure is succumbing, success is delivering regardless.

> Much of the missions are bleeding edge ... but you might be surprised about how they don't always use state-of-the-art tech.

It's not about state-of-the-art tech, but addressing novel engineering problems far beyond where there is mature, developed knowledge. Helicopters on Mars, interstellar probes - it's incredible to me that these things reliably succeed. Will Europa Clipper not reach its destination? Is anyone even worried? They succeed, it seems to me, at a much higher rate than run-of-the-mill corporate software projects.

A few things:

1) I don't think 'run-of-the-mill corporate software projects' makes a good comparison. For one, the NASA projects referenced don't come about very often, so there's a relatively small sample size. Secondly, they are a completely different risk profile and naturally have different quality expectations. NASA does quite a lot of home-grown CRUD apps, but nobody really hears about them because they just aren't that interesting. A fair number of them are really, really bad. Like, no real configuration management or change control, no test plans or reports, nil unit testing, changes made on the fly to production systems, extremely antiquated development platforms, etc. Part of the reason is that software assurance resources are limited, so NASA naturally focuses them on high-risk/high-profile projects, meaning it's easier for business software to fall through the cracks. Another reason is that NASA work is predominantly contractor supported, meaning much of the work is done by the lowest bidder. It's much easier to be the lowest bidder if you keep your costs low; sometimes this results in lower-quality developers. Why pay a high developer salary when I can grab someone who wrote VBA 25 years ago and just give them the title of 'Lead Developer'? With lack of oversight and downward pressure on costs, this is more common than anyone would hope. My hunch is that if you did an apples-to-apples comparison of 'run-of-the-mill corporate software projects' with similar NASA business applications, you might be surprised at which is better.

2) I know the Mars Climate Orbiter is an old example but I referenced it because it's the kind of glitch that people immediately understand without any background knowledge. One group was writing software in metric engineering units and another group used Imperial engineering units, obviously causing a hand-off/interaction error.
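The metric/Imperial hand-off failure is the textbook argument for carrying units alongside values instead of passing bare numbers between teams. A hypothetical guard (this is not how the actual MCO interfaces were specified, just an illustration of the principle):

```javascript
// Hypothetical unit-tagged values: checking units at every hand-off
// turns a silent trajectory error into a loud, immediate failure.
function quantity(value, unit) {
  return { value, unit };
}

function addImpulse(a, b) {
  if (a.unit !== b.unit) {
    throw new Error(`Unit mismatch: ${a.unit} vs ${b.unit}`);
  }
  return quantity(a.value + b.value, a.unit);
}

const metric = quantity(100, "N*s");       // one team's output (newton-seconds)
const imperial = quantity(22.5, "lbf*s");  // the other team's (pound-force seconds)
// addImpulse(metric, imperial) throws instead of quietly corrupting
// the trajectory computation.
```

Statically typed languages can push this check to compile time, but even a runtime tag like this would have made the mismatch impossible to miss.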

So let's dive a bit more into CST-100 since that's a newer project. I'll try to be careful to only talk about stuff that's publicly available. Yes, CST-100 is still in development. But the demo flight which caused concerns about software quality was meant to be essentially an end-to-end check that the system was ready for use; it was meant to be one of the last checkboxes, meaning there is really no reason for glaring errors. In that demo, it couldn't make it to orbit because it burned too much fuel. It burned too much fuel because it incorrectly synced its mission timer with the launch rocket, and the spacecraft was confused about where it was in the mission duration. Later, when troubleshooting on the ground, they found additional software errors where propellant valves were incorrectly mapped within the software (meaning when they tried to command thruster A, they would inadvertently fire thruster B). This latter issue could potentially have been catastrophic by causing CST-100 to crash into the space station when docking [1]. To a certain extent, they were lucky the first software error prevented a docking scenario. Troubleshooting all of this is a big reason why the system is still "in development" despite the first demo mission being nearly two years ago. Pay attention to wording in these types of press releases; a lot of times you won't find the word 'error' or 'failure'. They'll instead put some PR spin on it and call it an 'anomaly' or 'unexpected test result' when in reality it's a red flag for lack of quality. If you hear those terms, there's a good chance there was some procedural check that should have been conducted but wasn't. In the case of that demo, ground simulations on a high-fidelity system could have caught these issues before the mission. There are requirements already on the books for this [2].

It's not just about peeling back the curtain and seeing how the sausage is made, it's more about an organization having high-minded goals where they have requirements to a certain standard of work, but in practice they often turn a blind eye to those standards. It's akin to if someone who worked for Google in the "don't be evil" days and felt like they weren't living up to that mantra.

3) A small nuance. Many of the robotic, non-human rated missions that get in the news are Jet Propulsion Laboratory projects. JPL does fantastic work, but they are quasi-NASA and are actually generally managed by Cal-Tech. As such, they follow different rules than NASA and there are actually only a handful of true civil servant NASA employees at JPL. NASA of course supports those missions, but they are a bit of a different animal. I believe Europa Clipper falls into that category.

[1] https://www.space.com/boeing-starliner-2nd-software-glitch-p...

[2] https://swehb.nasa.gov/display/SWEHBVC/SWE-073+-+Platform+or...

I have no doubt that NASA's 'business' software is like everyone else's. It would be a waste to invest in high assurance, high talent development for the HR and bug tracking systems.

> It's not just about peeling back the curtain and seeing how the sausage is made, it's more about an organization having high-minded goals where they have requirements to a certain standard of work, but in practice they often turn a blind eye to those standards.

My perspective: Few people live up to the high-minded goals; we're human. We achieve great things when, after experiencing humanity, we don't despair but keep our faith and enthusiasm for those goals. When the founders of the US wrote the Declaration of Independence, they were not naive - they had lots of experience of humanity (including their own), much worse than what we know. Yet they believed in something higher, beyond themselves. If they didn't, if NASA didn't, we'd still be living in an early modern monarchy and not flying to Jupiter.

Thanks for sharing yours! TIL a few things.

Perhaps I’m jaded, but I think there’s a difference between aiming for a high goal but missing because we’re human vs. deliberately aiming short because it’s easier. I saw a lot of the latter, like refusing to learn how to tune PID parameters to manage system dynamics or saying they don’t want to run certain tests because they would be expected to fix any problems they uncover(!).

The lowering of standards is particularly troublesome when higher standards are contractually obligated. There’s a sad phrase that I had heard at high levels called the “NASA salute” which is basically shrugging one’s shoulders to say “yeah, I know I’m supposed to do that, but I also know I won’t be held accountable if I don’t”

The fact that a project as profoundly important as the James Webb telescope only has an $11 billion budget is staggering to me.

It had a $1.5 billion budget, with $9.5 billion in overruns.

Well, it started off with a $0.5 BB budget, and was supposed to be launched about 14 years ago...

it’s not running Node 0.10.0 for god’s sake; it’s an interpreter to write jobs that scientists can use for their studies - the flight-critical stuff is a different stack

You sure about that? From the linked paper (unfortunately behind all kinds of paywalls...):

"The major characteristics of our process are

- 1) coordinated development of the operational scripts and the flight software,

- 2) an incremental buildup of the operational requirements,

- 3) recurring integrated testing. Our iterative script implementation process addresses how to gather requirements from a geographically dispersed team, and then how to design, build, and test the script software to accommodate the changes that are inevitable as flight hardware is built and tested.

The concurrent development of the operational scripts and the flight software enables early and frequent "test-as-you-will-fly" verification, thus reducing the risk of on-orbit software problems...."

“ 3.1. Event-driven Operations

The JWST science operations will be driven by ASCII (instead of binary command blocks) on-board scripts, written in a customized version of JavaScript. The script interpreter is run by the flight software, which is written in C++. The flight software operates the spacecraft and the science instruments.”

and in section 3.5, sounds like javascript just has an API to lower level system functions:

“ ScriptEase JavaScript allows for a modular design flow, where on-board scripts call lower-level scripts that are defined as functions.”

[0] https://www.stsci.edu/~idash/pub/dashevsky0607rcsgso.pdf

I am mostly curious as to how they came to JS as the embedded scripting language of choice, as opposed to Lua or Scheme or anything else.

It would have had to be an interpreter that was available off the shelf when this was being designed (so late 90s or early 2000s?) that ran on vxWorks on an old PowerPC processor. That could have limited the available choices.

"They" is likely the contractor. It may be simply a choice that allowed them to be lowest bidder

Sounds like somebody at NASA should contact Brent Noorda.

"Nombas,Un-Incorporated" http://brent-noorda.com/nombas/us/index.htm

He is in the critical path...

They really ought to look into Ada.

Cool videos thanks. Do you have any handy links to why it takes 30 days to unfold everything? I assume there are good reasons, but I just can't imagine what they are.

Here's another video expanding a bit more on that deployment sequence: https://www.youtube.com/watch?v=WY9KckPI68Y

The observatory has around 7000 moving parts with complex structures for the primary and secondary mirrors and more importantly, the sunshield that would be used to keep the observatory instruments at a specific low temperature. It will take roughly 30 days for Webb to reach the start of its orbit at L2.

At the end of 30 days, the telescope should have stabilized itself in an orbit around L2. But I would assume it takes that many days for deployment and unfolding everything because of the sheer number of parts and motions involved coupled with things like getting to L2, stabilizing orbit, temperature stability and all the checks for the instruments on board along with the mirror deployment (since it's not one big sheet of mirror).

Here's a link which gives an idea about the logistics involved (along with a cool video series of the journey embedded): https://hackaday.com/2021/11/02/30-days-of-terror-the-logist...

To fathom how complex the sunshield deployment is (and that's just a part of the whole sequence), from the link above:

"Full deployment of the sunshield is without a doubt the sketchiest part of the whole process. The sunshield consists of five separate metalized Kapton sheets, each the size of three tennis courts. Each one must be unrolled, extended to its full size, tightened, and spaced out vertically for the sunshield to do its job. This takes the coordinated action of 140 release mechanisms, 70 hinges, eight deployment motors, about 400 pullies, and nearly 400 meters of cable to accomplish, not to mention the sensors, wiring harnesses, and computers to control everything. It’ll take the better part of two days to complete the sunshield deployment."

The whole thing is just insane.

From this talk by Dr. John Mather: https://www.youtube.com/watch?v=2RLGx_wgyAw

Around 1:47 you can see the number of people involved. 3 space agencies (ESA, NASA, CSA), over 3000 engineers and technicians and 100 scientists worldwide.

Woah - this thing is really freaking cool. Thanks for all this info - I feel equipped to go on a long "nerd out" after work today.

I think most critical phases you’d want to happen when the satellite is in direct contact with the ground stations (they probably make extensive use of relay satellites to maximize windows of telemetry/payload data transmission, but here we are talking about issuing critical command sequences). Those windows are not 8 hours long. Further, it apparently takes almost 30 days to travel to the L2 Lagrange point and not all systems deploy until then.

Edit: nope, I was wrong, it’s going to deploy a whole range of systems while on the way to the L2. https://youtu.be/RzGLKQ7_KZQ

I think there is a lot of testing done after each step. It also may have to do with the cooling.

AFAIK, the equipment is super sensitive as well. They likely don't want to proceed to the next stage until they are absolutely sure the previous stage happened successfully, otherwise they'll risk damaging things which will hose the whole mission.

I guess every time after you unfold a thing, you want to check it behaves as expected and keeps behaving as expected before you unfold the next thing.
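That "unfold, verify, only then proceed" logic is essentially a gated sequence. A toy sketch of the idea (all stage names and checks here are invented, not actual JWST deployment steps):

```javascript
// Toy gated deployment: each stage runs only after the previous stage's
// telemetry check passes; a failure halts the sequence instead of
// risking damage to downstream hardware. Everything here is invented.
function runDeployment(stages) {
  const completed = [];
  for (const stage of stages) {
    stage.deploy();
    if (!stage.verify()) {
      // Stop and report where things went wrong; ground teams decide what's next.
      return { ok: false, completed, failedAt: stage.name };
    }
    completed.push(stage.name);
  }
  return { ok: true, completed };
}

const sequence = [
  { name: "solar-array",      deploy() {}, verify: () => true },
  { name: "sunshield",        deploy() {}, verify: () => true },
  { name: "secondary-mirror", deploy() {}, verify: () => true },
];
console.log(runDeployment(sequence).ok); // true
```

With hundreds of single points of failure, the value of the gate is that a partial deployment halts in a known state rather than cascading.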

the "fabric" tensioning looks really sketchy to me. i can barely get a fitted sheet tensioned correctly on my bed..

JWST has something like 300 SPOFs in the deployment phase. Then there are operation issues; the MIRI cryocooler has had a long, troubled development history. Three weeks ago NASA put out a Hollywood production quality disclaimer video[1].

[1] https://www.youtube.com/watch?v=uUAvXYW5bmI

They should deploy in Earth orbit near the ISS, then move to L2. Would give option for repair.

The shuttle was decommissioned after the primary Webb design was done. Who would have known the launch options at the time it was ready?

You can't just put things in space wherever you want them and then move them whenever you want to or to wherever you want to.

It’s probably too risky given all the debris in orbit.

Do you have any idea how little "debris" there actually is?

Do you have any idea how little risk "too risky" actually is?

Thank you so much for both of those links. Absolutely fascinating - especially the Smarter Every Day channel which I wasn’t aware of. Excited to watch more of it.

For those complaining about the spending: some countries spent TRILLIONS on war and nation building. It's developments like these that we should be focusing our energy and intellect on.

$11 billion for a 10-year mission is peanuts in modern government operating expenses. And it's not even $11 billion all at once; the money was spent over something like 15 years.

So, roughly $11 billion over 25 years. Something that many nations could afford.

For perspective, the United States was spending $11 billion dollars every 90 days fighting the Afghan war from ~2001-2020.

Well not 25 years of use though

While true, it's also a huge problem when contractors promise something for $2B and then it costs $10-15B without the contractors suffering any consequences. What stops them from doing that for every single contract?

There is a reason recently NASA has started to focus on Fixed Price contracts.

We need a shift to more missions: building these things more often and more on budget. Putting an absurd amount of money into one mission, compared to 20 missions at $500M each, likely doesn't make sense.

The Webb telescope has been so long in development that lots of subsystems could have evolved considerably since then.

> We need a shift to more missions, building these things more often and more on price

This only very recently became financially reasonable with SpaceX. It would have been impossible for NASA to know at that time that private industry would manage to lower the cost of launch by orders of magnitude.

Cost didn't matter back then because you only got one shot to launch anyways. If the rocket blew up and took the satellite with it you're never getting approval to launch again. It would be better to let the project overrun than to worry about financials and contractual obligations when the risk of cancellation or failure is already so high. But now, with launches being relatively cheap, it is actually possible to envision a backup plan where a second satellite goes up with a second rocket. Again, this only very recently (maybe ~5 years ago?) became possible.

I agree with you now though - NASA ought to be reconsidering overhauling its procurement process now that Falcon 9 exists.

These things are a consequence of unaccountable, uncompetitive contractors, not just a cause. We could have had routine volume spaceflight many decades ago if the money we spent had gone to competitive free-market purchases. Even the Space Shuttle was originally meant to have something like weekly launches, and then it didn't, mostly for bad political reasons.

Disagree. Even with Atlas 5 this was not totally unreasonable.

> If the rocket blew up and took the satellite with it you're never getting approval to launch again.

That's not accurate.

This is all hypotheticals of course, but is it really likely that if a major satellite was lost NASA would be willing to bankroll the construction of a second identical one? I think the whole project would be scrapped and something else down the line would be approved to replace it. It certainly wouldn't be a quick turnaround time.

There is considerable effort on this direction already. A lot of the bus (on board computers, power, etc.) is standardized for a lot of missions.

But there's always uncertainty in the payloads. First, because there are lots of contractors involved. That's somewhat fixable.

Second, and most important, is that many of these payloads are cutting-edge. They've never been built before and some of them push physical and engineering limits. I've worked on missions where the only delay was the payload for many years, and they were simpler than JWST.

It's a problem of not knowing what we don't know. When working on those kinds of systems, the estimates lose a lot of meaning.

Agreed: there must be consequences. And this human management problem is not specific to science or cosmology. The total dollars sunk into research are small compared to what's spent elsewhere. Heck, the US government still isn't using its checkbook leverage to reduce drug prices.

Just for reference, the total U.S. Department of Defense budget for 2021 was $705 billion.


But I'm sure this project will have many technological spinoffs that could, with only a little additional funding, be used either to kill a lot of people or to generate personal wealth for at least a few select individuals (and those are not mutually exclusive). It's a bargain at twice the price.

This is a poor argument and poor reasoning. This way, instead of improving, we continue to regress (the "Look there! They're doing it too!" argument). We should be halving expenses on all fronts while demanding the same output, whether it's military or space spending. Look at ISRO's budget: high efficiency is key.

Just because DoD budget is $750b, doesn’t mean that we should have a free pass to waste money. I’d like to see DoD spending cut in half while holding vendors accountable. Same with space industry.

Another way to think about this if it helps is for $11b, we should have gotten more done. Imagine James Webb Telescope + 5 more projects for the same $11b. Wouldn’t that be awesome?

I think GP's parent is a fair argument against criticisms that start and end at "$11B is a lot of money!". It is valid to point out that $11B pales in comparison to the US's defense budget as a means of providing context for how big these really big numbers are.

If someone were to point to specific ways in which the project wasted money, that'd be different. But I haven't seen such detailed criticisms.

Nobody is arguing that wasting money is a good thing.

An ISRO comparison seems a bit absurd given the hugely different scales of science and technology development being done. Much of the technology used in the JWST was practically invented for it.

Not to imply that ISRO isn't doing important work, but simply looking to cut costs for the sake of cutting costs is just as bad as the waste you're complaining about. It'd be like comparing whatever it cost TSMC to switch to 3nm to what it cost to setup 20nm fabs.

What we should be pushing for is more accurate cost estimates. It isn't a problem that the latest and greatest in space telescopes cost $11B, technological progress is often expensive, it's a problem that initially we were told it would cost $2B. If we have a better cost estimate from the start, we can better control our expectations.

You're being uncharitable. It was just to make a point about efficiency, not to compare technological aspects of it.

I could have used paper-clip manufacturer instead of ISRO for that matter.

Totally agree. Since when would it kill us to manage money better regardless of the node on the organizational chart to which it belongs? Never.

That's barely enough for a negative revenue electric vehicle startup these days

or you can let a country have multiple billionaires so that they can fund these projects with their own money and compete within themselves, take more risks and compress the timelines of frontier-conquering and innovation.

But, we want a large bureaucratic organization (by design), extremely risk-averse(by design), extremely slow(by design), having only one shots (by design) to do this for us

Here’s hoping that “incident” a couple weeks ago will be the only one and everything will work out just fine.

This launch and perspective for science has me anxious and excited since its inception - and it’s been a while.

I will open a bottle of champagne when the first data is sent from L2 with something along the "fully operational" lines.


I have an acquaintance that's been working on the team for this telescope for as long as I've known him ~10 years. He's had so many disappointments with the continued delays and issues. I hope for his sanity and his research this launch goes flawlessly.

$11B is actually not very much for what this is. Good deal!

The problem is not the price as such, but contractors promising something for $2.5B and then taking 4x the time and 4x the cost without suffering any negative consequences. For Northrop this has been nothing but nice, basically financing its space division for decades.

We need a space industry that delivers on target, and we should reward the contractors that do.

The growth in cost of Webb directly detracts from other missions. It also has to be compared to building four actual $2.5B telescopes. There is lots of evidence that faster iteration and more missions give more overall science.

Continuously launching and continuously improving would be a far better plan. Webb has lots of subsystems that were done based on 10-20 year old engineering. But because of this approach, one iteration cycle for some of these technologies takes 30 years.

I think the JWST is managed as part of NASA's Space Science Directorate. That directorate gets a little less than $8B of the agency's roughly $23B budget. You'd have to look at the breakdown by year, but 10-15% of the annual directorate budget is substantial but not absurd given the project.

Why is that? If the pricetag was, for example $25 BN, would you say that would have been too expensive? Where do you draw the line? Or no price is too high for this telescope?

When US gov expenditures are like $4 trillion yearly, $11 billion for an era-defining, cutting-edge space telescope built over 10-15 years does not seem much at all.

My question still stands: would $25 BN be too much? $100 BN? Why is $11 BN a good deal? Would any price be a good deal for an "era-defining" project?

In the '90s there was this huge Manhattan-like project called "The Human Genome Project" [1]. The pricetag was about $3 BN. It took more than a decade. Then out of nowhere a startup appeared and sequenced the human genome ten times faster and ten times cheaper (and fully with private funds) [2]. Nowadays, of course, we can sequence someone's genome for literally cents.

The JWST project started before SpaceX was a thing. Right now it looks quite likely that in less than one year we'll have a launch vehicle able to put 100 tons in orbit in one shot, and for cheap. All the complexity of the folding involved with JWST would become unnecessary with Starship. If someone were to start a JWST project right now, there's a realistic chance they'd finish it in a tenth of the time and at a tenth of the cost, just like Celera did. We would get the same scientific results, but maybe one or two years later.

So, now, am I allowed to ask again: why exactly was the $11 BN a good deal?

[1] https://en.wikipedia.org/wiki/Human_Genome_Project

[2] https://en.wikipedia.org/wiki/Celera_Corporation

From link 2: "However, a significant portion of the human genome had already been sequenced when Celera entered the field, and thus Celera did not incur any costs with obtaining the existing data, which was freely available to the public from GenBank". The reason Celera was able to finish the project cheap was because public funding had already done the first 90%.

These days it costs <$1000 to sequence a de novo genome (not cents). For human genomes it's cheaper of course, but that's because we have a reference sequence to compare to. The Human Genome Project (and other efforts) had to build that reference and annotate it. So not quite an apples-to-apples comparison to compare sequencing of a modern-day patient sample to building a reference genome from scratch.

> These days it costs <$1000 to sequence a de novo genome (not cents).

Ok, that's still 6 orders of magnitude cheaper, isn't it? You could argue that $1000 now would not be possible without the initial investment of billions, but we'll never know. What's undeniable is that independent of the advances in biotech, the advances in computing between, say, 1990 and 2010 were astounding. If the Human Genome Project had been postponed by 20 years, it could have easily been done for 1% of the price tag, without any other breakthrough (by the way, Celera did what they did because of some really cool algorithmic breakthroughs). And it's not like in 1990 people didn't know about Moore's law, and couldn't project where the computational power would be in 20 years.

The same question stands now: if you want to do Project X, and you consider the choice to do it now or do it in 20 years, is it likely that you could do it for much cheaper in 20 years, and can you afford to wait 20 years? If we are talking about CO2 scrubbing from the atmosphere, then maybe we can't really wait, but if we just want to better understand the Big Bang, or the muon g-2, then maybe we can.
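For what it's worth, a back-of-the-envelope sketch of the Moore's law argument above (the doubling period is my assumption, roughly the classic figure; none of these numbers come from the HGP itself):

```python
# If compute cost halves every ~2 years (a rough Moore's-law figure),
# waiting 20 years buys a 2**(20/2) = 1024x reduction in compute cost --
# consistent with the claim that a compute-bound project postponed by
# 20 years could be done for ~1% of the original price tag.
years_waited = 20
doubling_period = 2  # years per cost halving (assumed)
cost_factor = 2 ** (years_waited / doubling_period)
print(f"Compute-cost reduction after {years_waited} years: ~{cost_factor:.0f}x")
```

Of course this only covers the compute-bound portion of a project's cost, which is why it's a sketch and not a budget.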

That's a fair question. I've never heard it argued that the Human Genome Project was anything other than a huge success, but I'm curious to look into it more. That being said, it probably doesn't apply here. Considering we were sending people to the moon 50 years ago, I don't feel space travel has scaled similarly. In contrast to your human genome example, think how far the scientific community was set back by canceling the Superconducting Super Collider. There's just no other way to get this information from Earth. You could argue that the knowledge could wait 100 years, but you could say that about a lot of things.

20-30B would have been about what I had estimated given the complexity involved for this project. I think 11B spread over 2 decades and some change was a pretty good deal comparatively.

Now I think that hindsight being what it is, if we had known that Starship was in the pipeline and this would be launching right when Starship is getting into production (considering JWST started development in the mid 90s), I would have said that we should be designing a cheaper/simpler telescope that uses this larger launch package.

But that's all hindsight. For what the JWST actually accomplishes, it's an engineering marvel and given what we knew when it was being designed, I think NASA and the associated committees did an excellent job making it as cheap and large as it is.

Now if we were to take what was learned from the JWST (a lot of innovative work on beryllium mirror design and segmented telescope design was done on this project) and were to design a new telescope today using modern technology, modern materials knowledge, and a launch vehicle like the Starship, I'd suppose we could make an equivalent telescope for 25% or less of the JWST's cost. Unfortunately however by the time this would be feasible, the majority of that money had already long since been spent using existing technology and techniques. This hypothetical cheaper telescope would also likely not be ready for launch if started in say 2015 until 2025 or so when the Starship would be considered safe enough for such a high value mission.

TLDR: It was a good value for the era in which it was designed and built. It is limited by what NASA knew when they designed it. If it were to be built today, it wouldn't launch for at least a decade after the design started, and you'd undoubtedly be able to make a similar "now-era" vs. "future-era" value comparison about that design too. Knowing what we knew at the time, it was worth it; waiting indefinitely for the optimal time to start a design is just letting perfect be the enemy of better.

> For what the JWST actually accomplishes, it's an engineering marvel

Sure, but so are the Event Horizon Telescope (which cost less than $100 MM) and LIGO (which cost about $1 BN). And those were truly revolutionary, and they hold a lot of promise for more scientific results down the road. At any given moment the scientific world has lots of ideas; some are truly ingenious, and some are just bigger-is-better iterations of older ideas. The really ingenious ones tend to be cheaper, if for no other reason than that they can't get huge amounts of funding before they're proven. The bigger-is-better ideas get eye-popping dollars, and the public opinion is always positive. Just like it happens with Hollywood sequels.

> If someone were to start right now a JWT project, there's a realistic chance they'll finish it in a tenth of the time and a tenth of the cost

What is your basis for saying that 90% of the cost and time was due to folding mirrors? That sounds like the easy part - it's mechanical, and satellites have been unfolding in orbit for a long time.

With unfolding mirrors, the hard part isn't the unfolding, it's unfolding with micrometer accuracy since you care about the optical properties of the unfolded mirrors.

1996 cutting edge

Is there something better in production going into space right now? No, because this is literally the cutting edge right now.

But then people complain about ITER costing $25 billion, even though its potential impact on the world is much bigger.

Never mind government expenditures as a whole; $11 billion is less than a single aircraft carrier.

Yet we can't house the homeless.

It’s astounding that in a $5 trillion neighborhood (the Bay Area), we are incapable of having first-world infrastructure or of solving homelessness.

The will of the people is weak.

Truth! And look at all the techno-sociopaths downvoting me! That is why the Bay Area cannot solve the homeless problem!

Money is a completely abstract thing and at this point says nothing about the material economics. It’s used to manage agency.

Essentially we allowed people $11 billion in human agency to occur for scientific reasons.

Sorry we didn’t put more of it into cars and video games, but your economy surely benefited from people doing the real economic exchange this required.

Personally I’d love to put it into designer drugs we can use to let me hallucinate a reality where miserly bean countering control freaks don’t exist, since we’re all going to die anyway and entropy will erode the universe.

Excepting rules against violence and careless end of the species, why all the rules?

> Sorry we didn’t put more of it into cars and video games

But that's not the alternative here. Cars and video games are created via private investments, by for-profit corporations (and sometimes by volunteering developers). JWST was funded by NASA with public funding. The alternative to JWST was not cars, but rather the other projects that NASA could have funded, but didn't. When NASA made the initial decision to build the JWST, it excluded other projects on the premise that the JWST would cost about $0.5 BN. When NASA later had to revise the cost upward, it had to either forgo other projects or ask Congress for additional funding. Well, Congress did not provide additional funding [1]. Those with an internal view of NASA know what other projects were dismissed or canceled because of the perpetual JWST cost revisions. We (the outsiders) will not be privy to these projects, but they certainly existed.

Make no mistake, I consider JWST to be a phenomenal scientific instrument. But when people applaud the launch of JWST, they don't see the non-launch of the multitude of projects that had to be canceled because of the JWST cost overruns.

[1] https://en.wikipedia.org/wiki/Budget_of_NASA

I wonder how much of the vehicle's final cost is directly attributable to the complexity of deployment, i.e., how much easier this would be if we had a launch vehicle with a fairing capable of fitting the fully deployed configuration.

You would probably make up for the cost savings by building something strong enough to withstand the forces of achieving Earth escape velocity in a fully deployed configuration, not to mention all the increased mass that would be required.

Since it's impossible to do maintenance on this observatory while it's in solar orbit, and since launches involve strong vibrations and forces, it's important that the delicate and sensitive equipment be stowed in a way that minimizes the effects of launch forces and minimizes the need for after-launch maintenance.

This is exactly the promise of SpaceX Starship.

The Starship is way too small to carry the fully deployed James Webb telescope. The sunshield is roughly 20 m x 14 m, while the diameter of the Starship is only 9 m.

Folding in half is still better than crazy origami robot yoga (that's my new band name). Also I could readily envision a supersize fairing built on the Starship/heavy booster platform.

More lift capacity also means less weight optimization and more emphasis on robustness, reliability, redundancy, and power.

Still, the segments could be bigger and/or heavier, possibly reducing complexity.

Phil Mosby, the guy who did the Webb-inspired piece that NASA bought and hung in their library, is from Tahoe and good friends with my brother. We have one of his pieces hanging in our living room, but what's REALLY cool is his astro-calendar (a calendar with a whole bunch of space facts and beautiful pics). Highly recommend...



It is destined for a point in space 1.5 million kilometres from Earth — too far away for astronauts to visit and fix the telescope if something goes wrong. Hubble required an after-launch repair in 1993, when astronauts used the space shuttle to get to the Earth-orbiting observatory and install corrective optics for its primary mirror, which had been improperly ground.

The whole Hubble mirror fiasco was fascinating. The before and after images of the galaxy M100 in the following link outline the extent of the error:


One might assume the error was simply large and that's why the aberration occurred. Here's the root cause: the magnitude of the error is one most people on this planet would dismiss as nothing, but it ultimately turned out to be huge!

"Ultimately the problem was traced to miscalibrated equipment during the mirror's manufacture. The result was a mirror with an aberration one-50th the thickness of a human hair, in the grinding of the mirror."

That's huge. Everyday profile accuracy can be spoken of in "quarter wavelength"-like terms.
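To put rough numbers on "one-50th of a human hair" vs. a quarter-wavelength spec (hair thickness and wavelength here are my assumed ballpark figures, not from the article):

```python
# Scale check: how big is Hubble's figure error relative to the
# quarter-wavelength accuracy conventional for optical surfaces?
hair_um = 100.0                   # assumed human hair thickness, ~100 microns
error_um = hair_um / 50           # "one-50th the thickness of a human hair"
wavelength_um = 0.55              # visible light, ~550 nm (assumed)
quarter_wave_um = wavelength_um / 4

print(f"Mirror error: ~{error_um} um")
print(f"Quarter-wave tolerance: ~{quarter_wave_um:.3f} um")
print(f"Error is ~{error_um / quarter_wave_um:.0f}x the quarter-wave tolerance")
```

So a defect that sounds microscopically small in everyday terms is over an order of magnitude outside normal optical tolerance, which is why the images were so badly blurred.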

Was that true in 1979?

Accuracy less than a wavelength has been possible for a long time:


Huh wow, that's brilliant! I was expecting something super complicated with extremely high tech sensors.

Spherical-against-spherical and flat-against-flat grinding can be self-truing, so in effect these centuries-old clever results just required clever testing to verify.

High tech in those days? The rumor I heard in the mid-1980s was that LLNL had [vertical?] lathes which could mechanically cut mirrors to optical-profile accuracy. Supposedly 100 gpm liquid flows were required to keep the part temperatures uniform.

The problem is that this actually makes no sense. For the cost of the Shuttle servicing missions, a new, even better Hubble could have been built.

Probably would have been bad optics to let the first one become space junk.

It was working fine, and the original could have been de-orbited.

I have started to wonder: will it ever be possible to 'see' the Big Bang? How close can we get to measuring that far back? From what I've seen, JWST will be able to peer back to just a few hundred million years after the Big Bang. What are the limits to seeing even further back? Is it a matter of telescope size? Will an even larger telescope by definition be able to see even further back? What is the limit?

Unfortunately, we cannot, or will not be able to see the Big Bang. The simple reason is, it's just beyond our reach.

For the first few hundred thousand years, the universe was opaque.

This link goes into a good amount of detail about the first light in the universe:


We might be able to see a bit closer to the events after the Big Bang with a more powerful telescope in the future, but I don't think we can ever be able to actually "see" the Big Bang.

Fantastic link, thanks for that. Got me even more excited for JWST!

The CMB shows the universe at the point where its density became low enough that photons could move around freely. That's the furthest point that we can "see" if you required "seeing" to have to do with electromagnetic radiation.

But there are other types of radiation that penetrate dense matter better: neutrinos and gravitational waves. Right now it's "holy shit, I saw one" for both kinds, so we're a long way off from doing any kind of imaging in those media. But if we ever manage detectors large enough and sensitive enough, we should be able to take "pictures" of the universe when it was even younger than when the CMB was released.

Dense enough matter will stop neutrinos, so that signal will be further back, but not the bang itself. So far as we know, nothing stops gravity--so that signal ought to be... interesting.

(Or at least, that's what Lee Smolin says in his book: Time Reborn)

Shout out to the good folks of Delta, Utah, some of whom work in the Beryllium mines nearby. My car broke down there once and I learned that Beryllium is a vital component in the JWST because it doesn't expand or contract as the sun warms it.

Digging Beryllium for James Webb


I also learned that I actually broke down inside another telescope:

The Telescope Array project is a collaboration between universities and institutions in the United States, Japan, Korea, Russia, and Belgium. The experiment is designed to observe air showers induced by cosmic rays with extremely high energy. It does this using a combination of ground array and air-fluorescence techniques... The Telescope Array observes cosmic rays with energies greater than 10^18 eV. The surface array samples events over 300 square miles of desert.


I'm a big fan of prediction markets, where people wager their own money on clear yes/no propositions, and you get a payoff if you buy shares on the winning side. It's a great way to tap the "wisdom of the crowd," even if you don't put up your own money.

Which brings me to the JWST. I'd love to know how likely it is that this amazing (and amazingly complex) tool actually succeeds in its goals. There's no way I could figure it out myself; I'd have to take someone else's word for it. Unless... there is a prediction market somewhere betting on whether the JWST will succeed, so I can piggyback on others' research and self-interest. I haven't been able to find one though. (Perhaps people think betting "against" success is too macabre.)

Anyone want to throw out a likelihood of success? (My WAG: 70%.)
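For anyone unfamiliar with how these markets encode probability: in a binary market, a "yes" share pays out $1 if the event happens, so the trading price is the crowd's implied probability. A minimal sketch (the 70-cent price is just my WAG from above, not a real market quote):

```python
# Binary prediction market: a "yes" share pays $1.00 if the event occurs.
# The share price is therefore the market's implied probability.
price = 0.70                 # hypothetical price for "JWST succeeds"
payout_if_yes = 1.00

implied_probability = price
profit_per_share = payout_if_yes - price   # gain if "yes" resolves true

print(f"Implied P(success): {implied_probability:.0%}")
print(f"Profit per $0.70 share if JWST succeeds: ${profit_per_share:.2f}")
```

If you think the true probability is higher than the price, you buy "yes"; if lower, you buy "no" -- which is exactly the self-interest mechanism that makes the aggregate price informative.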

I don’t find prediction markets too convincing, especially after my experience betting on the US election last year. One month after the election there were still markets with greater than 10% odds on Trump winning states that had already certified Biden.

It was great for me, I made some good money, but it definitely downgraded prediction markets in my mind from “uncannily accurate” to “good but not perfect”. They’re not going to predict Black Swans or other odd things in a way you can rely on.

I hope that if Webb operates successfully, we'll get significant progress on the question of the origin of life. Abiogenesis is predetermined and common.

I expect we'll see more mature galaxies in the distant universe.

This is a glaring problem for the standard model (big bang LCDM) right now.

XMM-2599, SPT0418-47, MRG-M2129, all mature galaxies, far away

That seems like an interesting test for the big circus model. If the JWST largely sees galaxies in early formation at those distances, would you consider the big circus model refuted?

something tells me he wouldn't
