First images from James Webb telescope exceed expectations (cosmosmagazine.com)
840 points by mooreds on March 18, 2022 | 265 comments



I really appreciate how NASA is handling the James Webb. Instead of waiting for everything to be _done_ done, they’re bringing us all along for the setup, giving us “alpha” and “beta” images (if you will), and in so doing keeping up interest in the telescope. I know it’ll continue, and I’m all hyped about it.


I really and truly hope for longevity.

"The Webb telescope will use 132 small motors (called actuators) to position and occasionally adjust the optics as there are few environmental disturbances of a telescope in space. Each of the 18 primary mirror segments is controlled by 6 positional actuators with a further ROC (radius of curvature) actuator at the center to adjust curvature (7 actuators per segment), for a total of 126 primary mirror actuators, and another 6 actuators for the secondary mirror, giving a total of 132. The actuators can position the mirror with 10 nanometer (10 millionths of a millimeter) accuracy.

"The actuators are critical in maintaining the alignment of the telescope's mirrors, and are designed and manufactured by Ball Aerospace & Technologies. Each of the 132 actuators are driven by a single stepper motor, providing both fine and coarse adjustments. The actuators provide a coarse step size of 58 nanometers for larger adjustments, and a fine adjustment step size of 7 nanometers."

https://en.m.wikipedia.org/wiki/James_Webb_Space_Telescope


Here's a simple video of the Lego prototype of the actuator that the inventor, Robert Warden, had on his desk:

https://youtu.be/3WBrqUa_1yk

The full paper describing it is an excellent read:

https://www.esmats.eu/amspapers/pastpapers/pdfs/2006/warden....

I think the latter was posted to HN a few months ago.


Can someone explain why alignment can't be done in software? I.e. collect data from each mirror separately, then stitch it together in software?


Because sensors in the IR and visible range of the EM spectrum can only capture magnitude, not phase information. At radio frequencies you could do this (in fact it's pretty common).

Without phase information, you can combine different captures to improve SNR, but it won't improve the resolution. To improve resolution you need light interference, which requires phase information to be preserved.
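
A toy numerical sketch of that point (my own illustration, nothing Webb-specific; the Gaussian PSF and noise level are arbitrary): averaging many intensity-only frames beats down the noise by roughly the square root of the number of frames, but every frame carries the same optical blur, so the average is exactly as blurry as a single exposure.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    true_image = np.zeros((65, 65))
    true_image[32, 32] = 1.0                          # a single point source
    blurred = gaussian_filter(true_image, sigma=3.0)  # blur from the optics (the PSF)

    # 100 exposures of the same scene with independent sensor noise
    frames = [blurred + rng.normal(0.0, 0.01, blurred.shape) for _ in range(100)]
    stacked = np.mean(frames, axis=0)

    print(np.std(frames[0] - blurred))  # per-frame noise level, ~0.01
    print(np.std(stacked - blurred))    # ~0.001, i.e. ~10x better SNR
    # 'stacked' is still the same ~3-pixel-sigma blob: the resolution is unchanged.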


I'm not certain, but I believe that to get high resolution you need a large aperture, i.e. photons must physically interact across locations that are far apart from each other.

Doing each mirror separately would observe the photons before their wave functions are combined and so it would be the same as many small low resolution cameras instead of one big high resolution one. It defeats the purpose of a large mirror.


Doesn’t that imply that the incoming photons are spread out over the entire mirror, several meters?


Yeah I was also surprised to find that photons can be really big: https://youtu.be/SDtAh9IwG-I

Before I saw the experiments in that video, I assumed photons were about as wide as their wavelength.


That guy is hilarious.[1][2][3] I'd like to know what would happen if the split beam was sent through fibre optics to Pluto and back, and whether the interference pattern would still appear instantly.

[1] "It's a golden oldie..."

[2] "... just to give the setup a nice high tech look and feel."

[3] "The physics behind this is pretty hefty, and not, like, youtube video material."


From what little I know about quantum mechanics, the statement about changing the light source and expecting to not see an interference pattern is very surprising. I’d expect to see one.


Different light sources emit with different coherence lengths. A continuous-wave laser has a long coherence length, while a light source that isn't a laser at all has a short coherence length even if it's monochromatic, for example a sodium lamp.

The laser used by the presenter has a coherence length longer than (or in the same ballpark as) the difference in optical paths in the experiment, so they get a clear interference pattern.

The Wikipedia article may explain. https://en.wikipedia.org/wiki/Coherence_length

Since you can measure coherence length (and higher-order temporal and spatial coherence statistics), it is part of the information carried by light from a luminous object that is available for imaging by a suitably designed camera.
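
For a rough sense of scale, a common rule of thumb is that the coherence length is about λ²/Δλ, where Δλ is the spectral linewidth. The numbers below are my own ballpark values for illustration, not figures from the video:

    # Coherence length ~ wavelength^2 / linewidth (rule-of-thumb estimate).
    def coherence_length(wavelength_m, linewidth_m):
        return wavelength_m**2 / linewidth_m

    # Typical HeNe laser: 633 nm with a ~0.002 nm linewidth -> tens of centimetres
    print(coherence_length(633e-9, 0.002e-9))   # ~0.2 m

    # Sodium lamp: 589 nm but a much broader effective linewidth -> well under 1 mm
    print(coherence_length(589e-9, 0.6e-9))     # ~0.0006 m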


Yes, pity that is not in the video.


Woah. Thank you. This is nuts.


amazing video. thanks.


Hm? Yes, that’s how telescopes (and camera lenses) work!


Information theory. You can’t fix information you don’t have.

Think of the CSI ‘enhance’ meme and why that is physically impossible without introducing potentially fake information.


As user 4ad explains, one of the reasons is that phase information isn't collected by the sensors, and is needed to reconstruct all the image detail.

In radio astronomy the phase information is actually collected and sometimes recorded, which is why you can have arrays of radio telescopes far apart that combine. The most extreme version of this is Very Long Baseline Interferometry (https://en.wikipedia.org/wiki/Very-long-baseline_interferome...) which was used to image the supermassive black hole in the galaxy M87 (https://en.wikipedia.org/wiki/Event_Horizon_Telescope).

At infra-red and visible wavelengths, it is technically possible to collect some phase information, subject to noise though. So in principle it is possible to collect images including some phase at each mirror location instead of using a mirror, and then stitch them in a similar way to how it's done with radio. However, collecting the phase would be difficult and complex, especially with current technology, and likely to degrade the image so much that it's not worth doing anyway. Using mirrors is better.

In future, it is plausible that this will be done to combine images from optical telescopes far apart in space, for a very wide aperture. But it seems just as likely that they will use mirrors far apart in space, directing the incoming light to a small number of focal locations to combine the light in the optical domain first, before converting it to image data.




Other great answers, but if those things weren’t an issue, I think you’d also need one sensor for each mirror, and the sensor is bigger and heavier and more complex than a single mirror. Or you could just watch the same mirror for X times longer, where X is the number of mirrors, but then you’d get 1/X as many observations.


Yes. It's the same reason you can't take 10 out of focus pictures of yourself and get one in focus picture. You need actual good data to combine.


You can, with enough processing (and knowledge of how out of focus it is). It worked for Hubble. But you get more good pictures if you get it in focus in the first place.


Yup

An engineering friend of mine was working on hardware related to military satellite imagery, and was sent to a course that covered all the post-processing techniques they had to improve resolution and what could help those techniques upstream. He said that at the end of the course, the instructor said the bottom line was that while they could do all kinds of 'magic' to improve and enhance the photos, the best input to all their techniques, the one that would yield the best end result, was to take a better photo in the first place.

So, yes, there really is no substitute for a good original image.


Deconvolution. I haven't followed that in a while, but recovering the original function was next to impossible. Recording a calibration image and/or recent AI developments might have turned that into something awesome. I'll go and catch up on the material!


We really need to watch it with trusting AI.

While my example here isn't photo recognition, the same principle applies. I recently sat for a deposition where the stenographer used an "AI" transcription system. The result was literally pages of errata (vs the standard errata sheet that has space for about a dozen lines).

The consistent error I noticed was that the erroneous words were (probably) the word most expected in that position, and NOT the word that I said.

So, at a glance, it seemed like a really good transcription. In fact, many errors were barely noticeable to me and I had to go back to the audio recording to confirm. And these were errors that substantially changed the meaning, or even inverted it.

This is not merely information loss — the least surprising/lowest information item was inserted instead of the real item — this is actual information CORRUPTION.

I'd fully expect parallel phenomena from image "AI": filling in the item most expected from the training set, and actively corrupting the data by stripping out the highest-value info bits and replacing them with the most expected ones.

Beware


Yeah, it's called signal reconstruction for a reason. There are classes of it where it's verifiable and decent enough however.


Yes, I'm sure that with properly constrained and well-tested data spaces, it could produce outstanding and very helpful results.

But accurate reconstruction in the wild is just sooo far away. And for good reason - it would need to have insane amounts of experience and exposure to every bit of unusual data that existed in the world to get it right...


As long as you know the point spread function, you can focus out-of-focus images through deconvolution. https://imagej.net/imaging/deconvolution Basically, take a picture of a well-known object that should look like a single point, measure how spread out it is, and then use that to deconvolve other images taken with the same camera.
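
For anyone curious what that looks like in practice, here's a minimal Richardson-Lucy deconvolution sketch in Python. It's an illustration of the general technique, not any telescope's actual pipeline; it assumes float images and a known PSF:

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, iterations=30):
        """Iteratively estimate the sharp image given the blurred image and its PSF."""
        psf = psf / psf.sum()                 # the PSF must sum to 1
        psf_mirror = psf[::-1, ::-1]          # flipped PSF for the adjoint step
        estimate = np.full_like(blurred, blurred.mean())
        for _ in range(iterations):
            reblurred = fftconvolve(estimate, psf, mode="same")
            ratio = blurred / np.maximum(reblurred, 1e-12)  # guard against divide-by-zero
            estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
        return estimate

(scikit-image ships a ready-made version of this in skimage.restoration, and in practice you'd measure the PSF from a star or bead image, as described above.)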


Would you prefer this approach over actually having the target in focus?


It depends. Personally, I'd apply the technique even to focused images, after using a set of fluorescently labelled beads of known diameter to calibrate the PSF.


Any indication how many of those actuators they can lose and still maintain functionality (i.e., are any of those "spares")?


The actuator is a genius flexure design that is both mechanically simple and accurate. Breaking Taps 3d-prints a working replica and describes the design. I highly recommend: https://youtu.be/5MxH1sfJLBQ

The prospect of an actuator breaking wasn't as concerning after seeing the design.


Thank you for the link, that was incredibly interesting. For other people following the link: don't miss out on going into the comment section and finding the original designer of the mechanism complimenting the video and then sharing his own original model - in Legos!


Slightly exaggerated paraphrase: "You YouTube kids with your fancy 3D printers, here at Ball Aerospace & Technology we didn't need all that, we had Legos"


Thanks for the video, flexure-based designs can be so beautiful.

It reminded me of a collaboration I had with a small Swiss company that did wonders with electro-discharge machining such as this flexure-based mechanism machined from a single block of aluminium: https://i.imgur.com/PDAVDmJ.jpg


That is gorgeous. Well done.


Thanks for that video and channel. I see what you mean about simple and accurate, and strongly agree with the presenter calling it genius. Can't help but feel there is a strong lesson in there for software people.


> Can't help but feel there is a strong lesson in there for software people.

Hmm, can you elaborate on that? What is the strong lesson for software people specifically?

Don’t get me wrong, it is a beautiful design, and I’m a big fan of flexure designs in general. There is this amazing open source project which uses similar flexure mechanisms for very accurate positioning of microscope samples: https://openflexure.org/

But I fail to see any obvious takeaways which would generalise to software development. Other than perhaps “Think and work on the same problem for a decade and more and you might find a compact and elegant solution.” Which is nice, when one has that luxury.


We tend to make complex solutions to simple problems.

The thing about any problem is that "the devil is in the details." It may seem simple from a high level, but once we start to "drill down" into the issue, the "rough edges" appear.

At that point, we start to break out the baling wire and bubblegum, to kludge our original "graceful" design to meet the facts on the ground.

It doesn't just happen for software. Hardware suffers from the same issue, but software makes it easy to start coding before modeling the requirements and context completely.

I actually leverage this, in my own work. I call it "Evolutionary Design"[0]. It's not for the faint of heart, because a big part of it is recognizing when I'm rabbitholing, and tossing out what may be weeks of code, wholesale. I'm actually going through that process right now, with the app I'm developing. I'm working on the final feature set.

[0] https://littlegreenviper.com/miscellany/evolutionary-design-...

"There's always an easy solution to every human problem; Neat, plausible and wrong."

"The fact that I have no remedy for all the sorrows of the world is no reason for my accepting yours. It simply supports the strong probability that yours is a fake."

-- H. L. Mencken

“When the map and the terrain disagree; believe the terrain.”

-- Swiss Army Maxim


The takeaways are limited, IMHO. Yes, in software engineering ingenious, simple, sometimes a bit crude solutions exist and can solve otherwise complex problems. More often than not, overly complicated, endless stacks of abstractions are developed to solve relatively trivial problems. But software engineering is usually about maintaining something as part of a larger, dynamic, distributed system (comprised of humans and other tech), and reliance on brilliance is a limiting factor, since brilliant engineers rarely want to support their creations for years. With age, I am growing more and more tolerant of bloated, verbose, boring code that can be passed from one individual to another without too much trouble.

I think the actuator is an equivalent of a very clever Perl one liner.


Very cool. I know nothing about these types of actuators but you're right that they seem pretty robust. When I heard actuators I was thinking of the common types that definitely don't last forever :)

Unrelated: reaction wheel assemblies (used for attitude control) typically would have one extra wheel as a "spare." Redundancy is important enough on spacecraft you expect it wherever it is practicable. I used to work in aerospace - spent enough time coding spacecraft simulation tools that I had to develop at least a working familiarity with how some of the common satellite bus systems are supposed to work :)


None of them are spares per se; if any of them fail, they will have to use the remaining actuators on that segment to point it away from the sensor and disable that segment.


My guess: any single one of the actuators could fail, and the other actuators could compensate by moving the plane of the mirror(s). The exception would be the curvature actuators. (IANAL, or the appropriate equivalent...)


3 points in a plane, so maybe half?


No, they require 7 axes of movement. See the video that somebody else linked to.


The actuators control the XYZ position and the pitch, yaw, and roll, plus curvature. There is no redundancy.


Good question. No idea.


They purposely designed some of the first experiments it will perform to produce aesthetically pleasing images, so they are definitely considering the PR angle. You should expect to see lots of pretty pictures from the JWST over the next year.


I wonder who they've been learning from https://www.youtube.com/watch?v=bvim4rsNHkQ


NASA's PR team has done an amazing job over the last 5 or 6 years. Something changed where they figured out how to engage with people better.


I love watching the ISS feed they stream on the app sometimes. It’s fun to make the room all dark and have it up on the TV, it’s super tranquil.


NASA TV has been around since the 80s. I know they tried to engage the public with many Mars lander/rover missions some time back.


They have a great YouTube channel


NASA has become really good at self promotion and exciting people. The JWST launch and deployment is being covered really well. I also thought it was great that they had cameras on the latest Mars Rover that showed videos of the landing sequence from several angles. ESA should learn from NASA.


It makes me more confident that the right people are responsible for this project. The results for such a critical instrument are very important to the success of future research. To see them consistently exceeding mission objectives...how high can the Webb go?


It's nice to not see NASA dragged through the mud for a change because "SpaceX is cheaper."


NASA's unscrewed exploration has long been top-notch. (Although, for the cost, arguably a series of initially less sophisticated 'scopes could have been lofted, leading to maybe a better one by now.)

It is their crewed program that is completely screwed by politics.


*uncrewed


... maybe their crewed program is screwed?


SpaceX is cheaper for rockets, but they also aren't in the business of making space telescopes, so that's a complete strawman.


Easy there Elon.


I was wondering why the star has 6 crisp points and found an explanation here (a 3 minute video):

https://www.youtube.com/watch?v=VVAKFJ8VVp4

The most interesting part of the video explains why even your naked eye viewing the sky at night will cause this effect -- it's due to imperfections in the lens of your eye.


I think the wikipedia explanation is much better.

https://en.wikipedia.org/wiki/Diffraction_spike

One deficiency is that it doesn't quite explain why apertures (effectively, the support structure in a telescope is part of the aperture) show this behavior.

> No matter how fine these support rods are they diffract the incoming light from a subject star and this appears as diffraction spikes which are the Fourier transform of the support struts.

Like why is that the case?

https://en.wikipedia.org/wiki/Airy_disk

Read this if you don't immediately understand why. This is the shape of the image you see when the aperture is circular. This is the Fourier transform of a circular aperture, which, in 1D, is a sinc function.

I can't give a straightforward answer to elucidate further, but if you've done signal processing before, you can probably hand-wave an explanation: if the Fourier-transform relationship holds for one aperture function, then by linearity and spatial invariance it holds for all of them.
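
If you want to convince yourself numerically: the far-field diffraction pattern of an aperture is (up to scaling) the squared magnitude of the aperture's 2D Fourier transform, so a few lines of numpy reproduce the spikes. This is my own toy illustration with an arbitrary circular pupil plus two struts, not JWST's actual geometry:

    import numpy as np

    n = 1024
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

    aperture = (x**2 + y**2 < 200**2).astype(float)  # circular pupil
    aperture[np.abs(x) < 2] = 0.0                    # thin vertical support strut
    aperture[np.abs(y) < 2] = 0.0                    # thin horizontal support strut

    # Intensity point spread function ~ |FFT(pupil)|^2
    psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture)))**2
    psf /= psf.max()

    # Log-stretch it to see the faint spikes, e.g.:
    # import matplotlib.pyplot as plt
    # plt.imshow(np.log10(psf + 1e-9)); plt.show()

Each straight strut or edge produces a spike perpendicular to it, which is where the six-fold pattern from the hexagonal segments comes from.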


So we're not actually seeing the image of a star but only its Fourier transform?

I presume that only happens if the star is far enough away, as there are plenty of images of our Sun (which is much closer) that don't look like this.


For larger objects with a larger subtended angle, we are probably seeing the union of the Fourier transforms of each “pixel” (I’m not sure what the smallest element would be, in this case).


Good summary but I found it a bit weird how only 2 seconds were spent on the 'camera aperture' explanation - as AFAIK that's the main cause for photos of stars and lights to have 'points' - it depends on how many 'leaves' (? term) your camera/lens aperture has -> the more leaves the "more-sided" a shape the aperture makes (hexagon, octagon...) thus the more points you get when photographing lights.


The number of rays is determined by the number of aperture blades, as well as whether there's an even or odd number of them.

https://phillipreeve.net/blog/best-lenses-for-sunstars/#The_...

Interestingly, some modern photography lenses have achieved aperture mechanisms with much rounder geometry, sometimes with near perfect circles at multiple apertures. This can result in a more desirable bokeh, at the cost of well defined sun stars.


The article fails to explain that, but a round aperture is equivalent to infinitely many straight blades (and thus rays), so there is a light halo instead of distinct rays.


Which is essentially also why you get more diffraction limited as you stop down the aperture -- a larger and larger fraction of the light that gets through passes near the edges (and gets diffracted) rather than the central area.


Yup, but when you stop down the aperture a lot (which is when the sun-star effect becomes more pronounced), the aperture transitions from more circular to more polygonal. So with a lot of lenses, the behavior is that when your aperture is a few stops from fully open it is basically circular, but when you stop down enough to want sun stars, the effect is definitely there.


What do stars look like in animal eyes?



I wonder what they would look like if stars were pointed objects.


The same. Because any shape to a star is too small to be detectable with these kinds of resolution.


I see 8, with the smaller horizontal ones likely coming from one of the arms of the secondary mirror (check the “selfie” image). The linked video is nice, by the way.


From another article I read, it appears that most of the pattern is due to diffraction from the edges of the hexagonal mirrors.


Awesome video!

Made me laugh and learn at the same time :-)


The NASA press release, which includes the same photo but at a decent resolution: https://www.nasa.gov/press-release/nasa-s-webb-reaches-align...


Think this is the classical image of that star, taken in visible light:

http://simbad.u-strasbg.fr/simbad/sim-id?Ident=2MASS+J175540...

Click on fullscreen icon on right-hand side, then zoom out until FoV on bottom left says about 10.63' to 12.23'.

Best I understood, that should be about the same field as the Webb image, based on the instrument definitions I read about. And I wasn't able to line up any dots visually - the new Webb image was taken with a red filter, so I thought it's likely showing photons no one's seen before.

Paging any astros for corrections.


Here’s a nice comparison baggy_trough posted below:

https://twitter.com/gbrammer/status/1504369779540480002?s=21


Wow, that's almost exactly like seeing the world when I got glasses for the first time!


Oh god that resolution.


Looks like it's around 2.5' and the Webb image is rotated about 15 degrees clockwise from this one. The four blobs to the left form a quadrilateral that seems to match the four bright galaxies in the Webb image. It seems like the star has moved maybe 10" downward between the two, and used to be just below that round galaxy to the left of the top spike.


Any photon you see nobody has seen before.


Photographs are funny like that. We can replicate once-unique information.


Or just link to the massive PNG here. Beware before clicking if your mobile plan has a bandwidth cap:

https://www.nasa.gov/sites/default/files/thumbnails/image/te...


Only about 5 MB compressed. Interestingly, even though the extension and MIME type claim it's a PNG, Firefox says it's actually a JPEG. Throwing it at a couple different pieces of software, they all open it with either file extension (so presumably they're going off the file header rather than the actual extension).


File starts with 0xffd8ff, so it's a jpeg. Guess someone uploaded the wrong file.


Considering these 'deep space' images that capture this myriad of little galaxies so far away... the number of galaxies that must be out there is inconceivable to me.

Spitballing.

Might there be some space/time mechanism at play whereby we're actually seeing the same handful or so of galaxies? Like maybe some lensing thing.

Or weirder, we're actually seeing right around the universe itself — as though seeing the back of your head in a mirror if you look far enough. Not a topologist, but seems a toroidal universe would have a property like this: look far enough and you see the back of your head. So perhaps the same galaxies seen from multiple angles at the same time appear to be a greater number of galaxies than there actually are.


As well as what ISL said, if the universe is toroidal it seems like it's larger than our visible horizon. While it might be hard to tell whether images of an individual galaxy are repeated, we can see enough of the structure of the universe that if our visible horizon were much larger than the real size of the universe and there was wrapping going on, we'd see it. This isn't the exact sort of video I'm looking for, but it's close enough to what I'm talking about: https://www.youtube.com/watch?v=rENyyRwxpHo Everywhere we look we see distinct structures.

It is not a bad thought, though. It may well do some sort of wrapping around, just at a larger scale than we can see. It's an open problem.


It's been over a decade since my cosmology class, so forgive any errors, but isn't there a constant that describes the curvature of the universe [1], which so far has been calculated as "flat"? That's not to say we'll ever truly know, since you need infinite precision to rule out hyperbolic or spherical geometry, but in a flat universe, how can you have toroidal geometry?

Per [2]:

> The actual value for the critical density is measured as ρ_critical = 9.47×10^−27 kg m^−3. From these values, within experimental error, the universe seems to be flat.

[1] - https://en.m.wikipedia.org/wiki/Friedmann_equations#Density_...

[2] - https://en.m.wikipedia.org/wiki/Shape_of_the_universe
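
That quoted number is easy to sanity-check from the Friedmann relation ρ_c = 3H₀²/(8πG); the H₀ value below is my own assumption of roughly 70 km/s/Mpc, not something from the links above:

    import math

    G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
    Mpc = 3.086e22          # metres per megaparsec
    H0 = 70e3 / Mpc         # ~70 km/s/Mpc expressed in 1/s

    rho_critical = 3 * H0**2 / (8 * math.pi * G)
    print(rho_critical)     # ~9.2e-27 kg/m^3, close to the quoted 9.47e-27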


It’s actually no problem mathematically to have toroidal topology and flat geometry. (The classic Asteroids video game, where your ship can go off the top of the screen and come in at the bottom, or similarly with the left and right edges, is an example of a flat 2D space with the topology of a torus.)


> Everywhere we look we see distinct structures.

This might be because we're in an isotropic bubble, where whatever is outside of the bubble is forever beyond our view due to the expansion of the universe... Not sure how the CMBR fits into that, however.


Hence "it seems like it's larger than our visible horizon", "at a larger scale than we can see", and "It's an open problem".


We're talking about being toroidal or curved in a 4th spatial dimension, I assume?


Watching that video will make anyone humble.


Your comment reminded me of this story where a space program director decided, on a hunch or whim, to point Hubble at a seemingly empty space for 100 hours.

The rest, as they say, is history. Galaxies went from being rare things to abundant ones. Turns out the universe is teeming with them.

https://www.nationalgeographic.com/science/article/when-hubb...


It wasn't quite that dramatic a whim or lucky hunch as it's made out to be.

There was input from galaxy researchers on such an idea, and I think it was viewed as a bit of a calculated risk compared to other normal proposals to spend the telescope time on well-known targets visible with a few hours of observing. Especially when proposals outnumber available time by a factor of approx. 7, people will question why you should gamble on something unproven. Hence "director's discretionary time".

But of course it was a smart move in setting up the field for "what's the next big thing", because if you were to find something interesting, it obviously drove the call for larger and larger telescopes to study farther and farther things. Just studying the crap out of brighter nearby things would have been relatively predictable/boring by comparison. (I exaggerate a little)


There are as many galaxies in the observable universe as there are stars in our galaxy or… as there are bytes in 100 gigabytes. Within a factor of five or so.

The universe is flat as far as our best measurements go, if it is curved, it is with a much larger radius than the ~100 billion light year width of the observable universe.

It is indeed difficult to conceive but it is how it is, the universe is very big and has a whole lot of unique stuff in it.

Occasionally there is a lensing thing where we see the same galaxy twice or so, but not in any way to diminish the 10^11 other galaxies out there.


What's interesting to me is, relative to human scales, cosmic space is so much more ridiculously vast than cosmic time.

Our size as a percentage of the size of the universe is tiny, and our age as a percentage of the universe's age is still tiny. But the second number is gargantuan in comparison to the first.


It always seemed odd to me that we're so close to the apparent start of the universe. 13.8 billion years isn't that long ago, given how big the future time horizon is.


We are close to the beginning of the universe, but we are about halfway through the lifetime of our sun, which formed right around the time when 50% of all sunlike stars that will ever form had been formed. So we're actually around average in many ways.


Oh, that's interesting! That does make more sense. Most of the sunlike stars that will ever exist formed in the early universe, so it makes more sense that we exist when we do.


Does that mean that roughly around the time our sun dies that many more stars will also be dying ?

And then will there simply be way less stars in the universe than there are today?


Not “way” less, because most stars are lower mass than the Sun and thus have longer lifetimes (~ trillions of years in the case of red dwarfs).


Do you have a sense of how many stars are "sunlike" compared to how many stars there are in general?

This is interesting to me because there are a lot of seemingly arbitrary ways you can try to make the size and age of the universe comprehensible, and relative to a human scale is arbitrary but at least kinda means something.

And the commenter above measured our point on a cosmic time scale by looking at where on that timeline our sun formed relative to other sunlike stars. By that metric we are kind of "in the middle." But for me the natural next question is how that compares to stars in general, as "sunlike stars" is maybe a bit of a Texas-sharpshooter target.


AFAIK the sun is brighter than about 90% of stars in the Milky Way. We're on the larger and faster burning side, as stars go. So there will be many stars left after the sun dies. Whether you consider this to mean that saying we're about halfway through the lifetime of sunlike stars sidesteps the question depends on whether you believe that complex, energy-intensive, intelligent life is equally viable on planets orbiting red dwarfs.


One person's idea of a "side step" is another person's "refraining from making assumptions" but point noted.

For me, in my position of refraining from associating life with particular types of stars, I at least feel a bit more reassurance in thinking of us as in the company of stars writ large, by which measure we're not so close to the end of the cosmic span of time.

That said, I think sun-like stars is a perfectly legitimate measure to put alongside the others when casting about for intuitive ways of assessing where we stand on a cosmic timeline.


>> The number of galaxies that must be out there is inconceivable to me

Don't worry, there are many times more galaxies in the observable universe than neurons in the human brain. Nobody can conceive of it.


I don't need to be able to conceive of every individual grain to be able to make inferences about sand.


There's really only like 2 trillion galaxies in the observable universe which isn't that much more than the number of neurons.


TIL there are about as many galaxies in the observable universe as trees on the planet.

https://www.google.com/search?q=how+many+trees+are+on+earth


Yes but that’s just the observable universe. My understanding is that the observable universe is only a fraction of the total universe. However that was something I found hard to verify so I could be wrong.


We don't have an upper bound on the size of the universe. It's difficult to construct falsifiable models about things that can't be directly observed, but it may well be infinite.


That number nobody can conceive of... add one to it


People have looked for hints of this sort of thing without success so far.

https://en.m.wikipedia.org/wiki/Shape_of_the_universe#:~:tex....


Perhaps it’s all procedurally generated, and every time we build a new telescope there’s a supernatural sysadmin out there going “Oh, ffs! - We need more new hardware!”.


Surely if we do live in a simulation then the engineers just need to simulate our minds. It's easy to fake what we see then as an optimisation instead of generating a whole universe.


That assumes we're the object of the simulation rather than a byproduct. Maybe what they want to simulate is a universe and we've just happened to pop up in a small part of the simulation.


On the scale of the universe you aren’t just seeing far away objects but seeing the past. For illustrative purposes the edge of the Universe is the edge of time, where time has any meaning to us, and beyond that is inflation where light didn’t travel freely.

If somehow the Universe were as you propose and wrapping around itself so we were able to stare at the back of our heads, we would only see objects from the past to a point, and then the objects would begin to appear closer in time again until the furthest point where we would once again be looking at the present.


The universe was opaque up to about 380,000 years after the Big Bang. If the universe loops in on itself and is large enough then wouldn't we be blocked by that? And the further you go back from there the higher the density would be.


Meaning we couldn’t look straight out and see the back of our own heads. Eventually we would run into a point of the universe before time had any meaning (the opaque plasma) and there would be no looking past that back into the present.


> Might there be some space/time mechanism at play whereby we're actually seeing the same handful or so of galaxies?

Nope. Current evidence is that space is infinite. Even if that were all wrong, it's still incredibly large due to inflation, and our past light cone is one tiny bit of it. As time goes on we're seeing more and more of it, and parts that used to be in contact before inflation are only just coming back into contact again (their past light cones overlapping). Those are all unique galaxies. An awful lot of physics would have to be wrong for that not to be true.

"Space, is big. Really big. You just won't believe how vastly hugely mindboggingly big it is. I mean you may think it's a long way down the road to the chemist, but that's just peanuts to space."


Yes, I wonder about this also. Wouldn't need to be toroidal - could just be spherical, but would need to wrap in a 4th spatial dimension. It would explain a lot of things, like the expansion not having a clear centre, the requirement for dark matter (expansion would happen in an orthogonal dimension), background radiation (which I never quite understood) and the size of the universe.

But that was a high-school idea, and I'm sure some clever physicists and astronomers will tell me why this isn't the case.


> Might there be some space/time mechanism at play whereby we're actually seeing the same handful or so of galaxies? Like maybe some lensing thing.

Good question. People have been wondering this and have studied whether it is possible a la periodic boundary conditions for example.

https://aapt.scitation.org/doi/abs/10.1119/1.13499?journalCo...


Back when they were launching JWST, I kept thinking that there's a non-zero probability of it proving wrong some of our essential assumptions about the universe. Like, you know, they keep saying "galaxies that formed just X years after the big bang", but what if it turns out there was no big bang? What if our estimations of the age and/or size of the universe turn out to be wrong?


If there’s no Big Bang then the origin of the cosmic microwave background needs a new explanation: why does it exist, and why isn’t it more red or blue shifted.


The question I've always had is - what if the entire universe is not only unthinkably larger than the observable universe but also non-homogenous? In other words, what if the part of the universe that lies in our past light cone is nothing at all like the rest of the universe? The CMB and other remnants of the Big Bang could simply be remnants of something that happened in a large enough swath of the universe to cover our observable universe but still only be a tiny part of the entire universe and not represent the actual beginning. The parable of the Blind Men and the Elephant on a cosmological scale, so to speak.


Even if there wasn’t a Big Bang, let’s say the universe was infinite, we would still have the CMB from the furthest-reaching photons from the edges of the visible universe that have been stretched to microwaves, right?

In other words eventually the nearest galaxies will one day become the CMB as the universe continues to expand until they are on the edge, until even they fall outside the visible universe and no photons/light outside our own galaxy will be capable of reaching us and the CMB disappears.


That's not what the CMB is. The CMB is an incredibly uniform pattern which appears to have been formed by a very hot plasma at some point in the past. It's difficult to imagine an explanation other than the Big Bang (or some kind of expansion of a very hot plasma) because the uniformity means it was almost certainly concentrated in space.

But yes, in the distant future, it will not be possible to detect the CMB anymore (and it's entirely possible that alien civilisations that evolve at that time would probably never have a model of the Big Bang because there would be no remaining evidence of it).


> That's not what the CMB is. The CMB is an incredibly uniform pattern which appears to have been formed by a very hot plasma at some point in the past.

I understood that the CMB is not photons from plasma but the first photons following recombination (when the universe cooled enough that electrons and protons formed first hydrogen atoms). In other words the universe was plasma just before the CMB.


Yes, you're quite right -- sorry for not being precise enough. However my point was that the fact that we cannot see beyond the CMB and the uniformity of the pattern both point towards a Big Bang-esque explanation.


...and the galaxies will have drifted so far apart, that nothing will appear to exist beyond their own galaxy, except dust.


The problem in an infinite universe is every possible location in the sky eventually terminates on a stellar surface - a star. So the night sky, if the universe is infinite, wouldn't be dark - it would be as bright as the sun - in fact the universe would be full of omnidirectional radiation coming from all directions at all times.

I suppose though, that if space is still expanding but the universe is infinite, this might temper it out but it doesn't seem like enough - it would have to be an expansion precisely tuned to on average send radiation to 4 kelvin so we only see a cosmic microwave background, and don't wind up being bathed in an infinite amount of whatever frequency of radiation.

Infinity is funny like that.


That’s Olbers’ paradox, of course, and it relies on three assumptions: 1. The universe is infinitely large and 2. The universe is filled with stars and 3. The universe has been this way forever.

It turns out #3 is wrong, so it’s still possible we live in an infinite universe.

(Imagine an infinite, unchanging universe filled with stars that magically popped into existence a billion years ago. You wouldn’t have a uniformly bright sky because even though every line of sight will theoretically terminate on a stellar surface, in most cases there hasn’t been enough time for light traveling along that line of sight — e.g. from a star 10 billion light years away — to reach your eyes.)


> So the night sky, if the universe is infinite, wouldn't be dark - it would be as bright as the sun

You are inferring infiniteness also translates somehow to cosmic density and luminosity. By definition of an infinite universe the vast majority will necessarily fall outside the observable universe and never be visible to us.

As to luminosity, even now when we look up and see dark patches in the sky, they are in fact full of stars and galaxies. The most famous picture ever taken by Hubble, the “Hubble Deep Field”, was taken by pointing towards a dark patch, revealing tens of thousands of galaxies. These tens of thousands of stars and galaxies still appear as dark patches in our skies because they are not sufficiently luminous to appear in our sky as light without the aid of telescopes to collect their dim light.


At this point that's a quibble that loops back around to Big Bang theory - the OP was postulating we somehow discover that that theory is not correct, and my point is that if it's not - if the universe isn't bubbles of matter evaporating out of each other's spheres of influence - then you run into serious problems with why the night sky looks the way it does.

You've just looped it back around: expanding space gives finite, shrinking, observable universes. Which is just exactly the Big Bang theory.

EDIT: Dark patches in the sky though don't work with infinite light sources - at the very least you have to explain why far away ones apparently switched on in a finite time in the past. But that's the issue too - we're deliberately ignoring evidence in this discussion which points away from that - i.e. the red shift which shows us galaxies are rushing away from us.


> You've just looped it back around: expanding space gives finite, shrinking, observable universes. Which is just exactly the Big Bang theory.

There is nothing about Big Bang theory that explains the current expansion of our Universe (i.e. dark energy). In fact the Big Bang theory predicted our Universe should have stopped expanding and began collapsing on itself due to gravity (i.e. the Big Crunch), and that’s exactly why whoever can explain why the Universe is continuing to expand has a Nobel Prize waiting.


> The problem in an infinite universe is every possible location in the sky eventually terminates on a stellar surface - a star.

Isn't that only true if the universe is infinitely old as well as infinitely large?


Sure, but postulating an infinitely sized universe (full of an infinite amount of mass) does ask the question as to why the universe would not be infinitely old?

Particularly when you get into issues like entropy, which is the only real determinant of time even existing. A finite universe has a natural direction of entropy - whereas an infinite one does not, since there's an infinite amount of mass and energy and as such no possible lowest possible entropy state.

It's actually worse than that though: an infinite massed universe by definition would contain every possible configuration of that mass somewhere within it. So you and I talking right now like this, our past and future conversations would also all be somewhere else in the infinite universe happening simultaneously.

A universe with an infinite amount of mass and energy in infinite space doesn't really have any sensible notion of past, present or future - because all possible pasts, presents and future, exist at all times somewhere within it.


> A universe with an infinite amount of mass and energy in infinite space doesn't really have any sensible notion of past, present or future - because all possible pasts, presents and future, exist at all times somewhere within it.

This is actually untrue for many reasons, but one is that there are different kinds of infinities. For instance, there is an infinite number of real numbers between 0.0 and 1.0, and none of them are pi. Just because something is infinite doesn't mean everything is possible within that infinity. Again, there are an infinite number of integers, but none of them are pi or 0.1 or 37.5.


All possible configurations was the term used. We aren't living impossible configurations of matter in our day to day reality.


> an infinite massed universe by definition would contain every possible configuration of that mass somewhere within it.

That was the statement, a universe with a 1kg blob of matter every billion light years would be infinite massed, but would not have all possible configurations.


Inside the framework of inflation this doesn't follow. You can have an infinitely large universe while still expanding and cooling over time.


If it proves there was no big bang then that's still an exceptional job well done and a huge accomplishment for science. Sure there will be more mysteries to solve, but it's not like we lose anything by proving something incorrect. It will be exciting no matter what they find!


I predict astronomers will observe and catalog all trillion observable galaxies within a century. It was probably inconceivable a century ago that we'd have observed and cataloged two billion stars in the Milky Way by now. But with improving technology we have.


You can reject some notions of repeating patterns / loops in space because it would violate Lorentz covariance. You could fire a laser in one direction and the opposite, then you know your absolute position in space.


> Might there be some space/time mechanism at play whereby we're actually seeing the same handful or so of galaxies?

That strikes you as the sort of thing nobody would have noticed throughout centuries of study?


Maybe it's a quantum observer effect and the galaxies only exist because there's an observer. Dark matter/energy is need to make things balance only because we haven't been looking in all the right places.

The universe would be an odd old place if that was true.


Comparison with the previous generation infrared telescope, showing the massive increase in resolution.

https://twitter.com/gbrammer/status/1504369779540480002


I am so relieved that it made it without hitting some debris, and that the hydraulics didn't fail for the unfurling. The unfurling was one insane piece of orchestration, which just means many more opportunities to see a piece of space dust or something messing it all up.

This image is fantastic, even though we can easily see that more work needs to be done with the alignment. It hopefully proves that alignment is all that's left.


I believe that they have achieved diffraction-limited alignment. What about the image suggests that there is more work to do?

The starburst pattern is part of the intrinsic PSF, not a signal of alignment error.


> I believe that they have achieved diffraction-limited alignment. What about the image suggests that there is more work to do?

Some of the remaining misalignment is rather obvious in the higher resolution image. For example, look left of the main star and notice an "echo" of the main diffraction pattern that is not centered on a star.

Also, if you follow that pattern's downward arm to where it meets a diagonal arm of the main pattern you see a rectangular bar of increased brightness. Perhaps it indicates the need for calibrating that part of the sensor.

https://www.nasa.gov/sites/default/files/thumbnails/image/te...


There are five echoes, actually (look north, northwest, west, southwest and south). Internal reflections, probably.

HD 84406 is fairly bright, magnitude 6, eye-visible. In one of the blog posts they say it's too bright to be imaged in normal operations: https://blogs.nasa.gov/webb/2022/01/27/the-webb-team-looks-b... (You don't spend tens of billions to put a telescope into orbit just to look at HD catalog stars)

All the other sensor cruft would go away with dark-frame subtraction: https://en.wikipedia.org/wiki/Dark-frame_subtraction They're apparently not bothering with that for mirror alignment photos. Another safe bet is that a bunch of the sensor noise is because all the instruments are still warm. NIRCAM is still at 42K, and MIRI is hotter. https://www.jwst.nasa.gov/content/webbLaunch/whereIsWebb.htm... It doesn't look like they've turned on the instrument coolers yet.
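
For reference, basic dark-frame subtraction is conceptually just an image difference; this generic numpy sketch is only an illustration of the idea, not Webb's actual calibration pipeline:

    import numpy as np

    def dark_subtract(light_frame, dark_frames):
        """Remove fixed-pattern sensor signal (hot pixels, bias, thermal glow)
        by subtracting the average of several dark exposures taken with the
        same exposure time and detector temperature."""
        master_dark = np.mean(np.stack(dark_frames), axis=0)
        return light_frame - master_dark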


If one mirror's actuators fail, it's over.

I hope they have long life.

"The Webb telescope will use 132 small motors (called actuators) to position and occasionally adjust the optics as there are few environmental disturbances of a telescope in space."

https://en.m.wikipedia.org/wiki/James_Webb_Space_Telescope


AIUI if any one mirror fails then they can continue on with degraded performance, it wouldn't mean the immediate end of the telescope's usefulness.


If only one fails, couldn't they do two exposures with all the functioning ones making the smallest movement they can? Then the failed one's contribution can be decorrelated between the two exposures and removed, since it stays the same throughout.



A universe of kudos to NASA, Goddard, and the hundreds, maybe thousands, of scientists, engineers, and technicians who designed, built, tested, launched, and now, run, this jewel. This is just the beginning.


Here's a nice video that explains the mechanism that aligns each mirror:

https://youtu.be/5MxH1sfJLBQ

It is amazing that this is done with only a single motor.

It works something like this: one direction of the motor sets which axis to align, the other direction sets the alignment of the chosen axis.


> It is amazing that this is done with only a single motor.

No. You misunderstood the video. There are 6 motors per mirror segment. Listen to the video you linked at 9:11. It says:

“There are 6 actuators per mirror segment, and they are arranged in a hexapod or Stewart-platform configuration.”

What you are confused about is that there is only a single motor per actuator. One could naively think “oh we need a rough adjustment, and a fine adjustment so we will need two motors for each of those”. But they managed to make it more clever, and only use one motor for both the rough adjustment and the fine one. When they run the motor in one direction it adjusts the distance of the actuator roughly (minimum step size 0.058 micron) and when they run it in the other direction it adjusts finely (minimum step size is 7.7 nanometer).


Pedantry: when the motor starts moving, it's a fine correction, but after one rotation of one of the gears it hits a stop and changes to a coarse correction. It then continues to do coarse correction as long as you rotate in that one direction.

If you reverse the motor, you get fine correction again until that gear turns one rotation and hits the other side of the stop, reverting to coarse correction.

Once you have things coarsely aligned, you back the motor a bit and then operate within the single rotation of that gear, staying with fine correction.


Ah yes, you are right. I was mixing up another video with this one.


I wonder if this thing will see something which changes the course of human civilisation...


I'm hoping to see "contrails" ie trails created by vehicles moving through interstellar space at large percentages of the speed of light, as it interacts/hits and heats interstellar gases.


Which objects are you most hoping Webb will photograph? Here is my list:

1- M60 black hole

2- Proxima Centauri b

3- Tabby's Star

4- some apparently empty space

5- Mars surface

6- Sagittarius A*

7- Mercury's craters with water


We can get some pretty nice photos of Mars's surface from the rovers and drone we have on the surface right now. :) There are almost daily updates here: https://twitter.com/NASAPersevere


Sure, but it would be cool to compare the resolution.


Some of the priorities are earth like planets that we are already aware of and early galaxies close to the big bang time. The light from back then would have shifted to the infrared spectrum due to red shift.


It can’t image Mercury because it has to stay facing away from the sun. Probes have probably taken better pictures anyway.


TON 618


Trappist 1


Have you seen any articles on what was learned in the design and manufacturing and if NASA believes that similar cost and time overruns can be avoided on follow ups?

Really curious if $10 Billion is just what it actually costs to build this incredible machine, or if the pioneering work will mean that we can do a better job making things like it now.


It really is worth pointing out, again, that it’s $10 billion…for the entire estimated lifetime cost of the project. Amortized out over 20 years? I’m not saying it’s nothing…but given the US federal budget, it’s kind of nothing.

Compared to, oh, let’s say, $1.6 trillion…

https://www.nbcnews.com/think/opinion/air-force-admits-f-35-...

But that’s just petty to point that out.

I too would like to see a debrief on “what went wrong”…if there is anything that really went wrong. I mean, there isn’t exactly an off-the-shelf solution for an infrared space telescope deployed to a phenomenally distant orbit. One might reasonably expect a few cost overruns when you’re making mirrors that have no real precedent anywhere in human history.


Yes it's about $1.66 per US citizen per year during the 20-year development.

I've already personally received way more value than that just from the "entertainment" of following along with the construction and launch. I would gladly pay that much again for a repeat endeavor.

Now that it looks like JWST will be able to perform actual science, I think we'll all get a lot more than $1.66/year of value out of it.
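
The arithmetic checks out if you assume a US population of roughly 300 million averaged over the development period (my assumption, not a figure from the thread):

    total_cost = 10e9        # dollars, estimated lifetime cost
    years = 20               # rough development + operations span
    population = 300e6       # assumed average US population
    print(total_cost / (years * population))   # ~1.67 dollars per person per year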


It would be amazing to get a list every year of how your personal taxes were spread out like this.


Even better if you could pick where 2% or so of your taxes go to


$30 per person basically. Jeeze I would straight up donate to a NASA gofundme that asked for that to do it again. Even had it straight up failed, $30 per person seems like a reasonable stake.


Well most of the cost is in building and launching the thing...


The baseline take on what happened with Webb is that too many low-readiness technologies were included when the mission was approved and went into “Phase A” which means that development starts.

The entire astrophysics community suffered as a result of the ensuing suckout of resources. Lessons have been learned because a lot of careers were impacted.

There are several successors to Webb on the horizon, and current thinking is to mature the technologies needed before such a mission enters Phase A and is in effect committed to.

For a concrete reference on this, see the (very large) National Academies survey, which charts the course for NASA astrophysics over the next 10 years:

https://www.nationalacademies.org/our-work/decadal-survey-on...

A decent gloss on the above report is:

https://www.aip.org/fyi/2021/astro2020-decadal-survey-arrive...

which lays out the tech maturation plan under the “Flagship mission maturation program” heading.


Yeah, it would make more sense to send several telescopes, each a bit better than the one before, but I guess it was easier to get funding for "this revolutionary telescope" than for "this would be like Hubble but a bit better."


We spent nearly 30 years planning, designing and deploying this. I don't believe there's an agency anywhere that can effectively cost out a project of that scope given that entire new industries can rise and old ones fall in that time.

For perspective, $10 billion is like 1% of a single year budget in the US and I believe the estimate includes the entire lifetime operating cost of the instrument.

If it did in fact just "cost that much" I would probably "not be that bothered."


If they could get a bigger cargo bay on a rocket, a lot of the risky folding mechanisms could have been removed. They could have used a design closer to the Hubble, Spitzer or Neowise.

The next telescope being planned out is LUVOIR, which will be even bigger, so they will still need to continue with folding mechanisms given the bigger size. They may eventually need to think of modular telescopes that are assembled in space.

https://en.wikipedia.org/wiki/Large_Ultraviolet_Optical_Infr...


As far as I know there currently exists no bigger rocket cargo bay than that of Ariane V, which also had to be modified for JWST.


SpaceX's Starship is 9m in diameter, but it's not quite ready for prime time.


It did cost $10 billion, and the cost overruns were exaggerated a bit.

One example is comparing the final cost to the cost of the design phase. The starting point should be after the final design had been approved.

Additionally, costs need to be inflation adjusted.

I commented on this last year.

https://news.ycombinator.com/item?id=27764547


Is technology like the Webb telescope design, or the Curiosity design, open and public? I mean e.g. CAD, components used, other details?


In general, no. Individual elements of the design might be explained in some articles, and there are some overview articles like e.g. https://link.springer.com/article/10.1007/s11214-012-9892-2 but the actual engineering resources are usually not available to the public.


I think they keep these from the public because lots of defense contractors are involved in this project, too, and it's basically impossible to ask them to release those resources.


In the Copenhagen interpretation of QM did James Webb collapse these distant galaxies because they were finally measured? :)


The light from them was still hitting the Earth constantly and would entangle quickly with everything on Earth.


A bit poetic, but suddenly the universe feels a bit more approachable, more within reach, a bit more sane. As if, through JWST, you can extend your hand and touch and feel and socialize with neighboring galaxies. Truly a window to our galactic neighbors.


This is the first time we’ve imaged these galaxies. Did we know they were there?


Someone else linked a gif comparing against our previous best photo of that star: https://twitter.com/gbrammer/status/1504369779540480002 . Presumably astronomers knew those blobs were galaxies, but no details could be resolved.


The Spitzer image is 3.6 µm IR; I think the Webb alignment images are at 2 µm. That's a pretty big difference in resolution from the wavelength alone, before even accounting for Webb's much larger mirror.
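
For a rough sense of scale, the Rayleigh criterion gives the diffraction limit as theta ≈ 1.22 λ/D, so both the wavelength and the mirror diameter matter. A back-of-the-envelope sketch (the apertures below are the commonly quoted ~0.85 m for Spitzer and ~6.5 m for Webb, not figures from this thread):

    import math

    def rayleigh_arcsec(wavelength_m, aperture_m):
        # Diffraction-limited resolution of a circular aperture, in arcseconds.
        theta_rad = 1.22 * wavelength_m / aperture_m
        return math.degrees(theta_rad) * 3600

    print(rayleigh_arcsec(3.6e-6, 0.85))  # Spitzer at 3.6 um: ~1.06 arcsec
    print(rayleigh_arcsec(2.0e-6, 6.5))   # Webb at 2 um:      ~0.08 arcsec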


Between James Webb and Ingenuity, major kudos to things that actually work. Can’t take any of these successes for granted, even if we spend $100 billion (!) on just one of them. Hope Webb is safe in its Lagrange point for a long life. Glad we actually had built the 100B of wealth to deploy.

Now if we can only get those $800MM littoral combat ships to work, we’re building like 53 of them and they don’t launch up, they launch down a few feet. Not rocket science…


> Hope Webb is safe in its Lagrange point for a long life

JWST has a 5-year science mission requirement, with a 10-year propellant life. Unfortunately it has a relatively short upper bound on its lifetime compared to, say, Hubble. Though apparently there's ~20 years of propellant onboard thanks to a precise launch, that's still a far cry from Hubble's 31 years and counting.

Should we consider 20-years a long life for such an expensive instrument?


One should not value the time of operation as much as the total value of science returned. If JWST can engage in science operations for even a year, it will transform our precision understanding of the universe.

This is like switching on the first electron microscope when everyone has only ever had optical microscopes. We will see new and surprising things with unprecedented fidelity. The big question has always been, "will JWST's engineering actually work?" So far, it looks like it is working very well.


Mission Extension Vehicles (MEVs) are demonstrated technology[0] for satellites in geostationary orbit. I've heard some talk in the press releases that, now that they know they've got as much time as they do, and that it got past its 300+ single points of failure, they're discussing what kind of maintenance missions they could carry out to extend JWST's life span.

[0]https://www.northropgrumman.com/space/space-logistics-servic...


> extend JWST's life span

I recall a thread here when it was launched where I suggested building a twin simultaneously, and that the increment in cost was likely to be 10% of the cost of one.

One of the critics of this idea said there was no need for another, as there was only so much the JWST could discover. But it's hardly been turned on before people are trying to figure out how to make it last longer. Sigh.

P.S. I found out later that in the past NASA would build probes in pairs, and made extra parts in case one was damaged or didn't work. So it really couldn't be that expensive to have built a twin.


You have indeed raised this idea before. The replies in other threads, among people familiar with this design space, will be the same here, as they were before (https://news.ycombinator.com/item?id=29855830).

Namely:

These devices are one of a kind items, fabricated, integrated, and tested manually, not on some kind of assembly line. There isn’t an economy of scale.

Your conjecture is just not correct. It’s remarkably hubristic to think the people who design these space telescopes have continued to do it wrong because making duplicates has not occurred to them.


Even if building a 2nd one cost the same as building the first, how much of that $10 billion is design and development cost? $0 of that will factor in to the cost of the second.

I've fabricated many things with my hands and machine tools. The second one takes dramatically less time, in every case. Even the 2nd set of materials cost less. For example, I ordered a needle bearing the other day for $7, but with shipping the total came out to $20. If I ordered two bearings, the total cost would have been $27, not $40.

It took me 20 minutes or so to install it. If I installed it a second time, I could have done it in 5 minutes.

The reason is simple. I had the right tools laid out, and I knew exactly what to do the second time.

So, yeah, I was quite unconvinced in the last thread.

P.S. I did not say they were doing it wrong.


I agree entirely. It's not like JWST components were built from diamonds or something where the material costs dominated the budget.

It wouldn't surprise me if there were already more than one made for many of the bespoke components used in the telescope. Nobody makes a one-off component without some iteration and covering of their own ass in case something goes awry in shipping or assembly.


There is no reason a priori to believe that integration and test of one of a kind space cryocoolers scales the same way postage does, or batching out parts at a drill press.

I’d suggest that it’s on you to demonstrate why these analogies should hold.


> integration and test of one of a kind space cryocoolers

The point is they wouldn't be one of a kind if #2 was built. Planning, designing, iterative prototypes, designing tests, designing test equipment, building test equipment, devising test plans, writing the enormous amount of software require for all of that, for the ground stations, etc., all add nothing to the cost of building #2.

Normally, people take the cost of a program and divide it by the number of units built, and call that the per-unit cost. That's an accounting fiction. The first one costs the bulk, the rest cost far less per unit.
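
A toy illustration, with entirely invented numbers, of why that average misleads:

    nre = 9_000_000_000          # one-time: design, software, test rigs, ground segment
    unit_cost = 1_000_000_000    # fabricate + integrate + test one flight article
    for n in (1, 2, 3):
        total = nre + n * unit_cost
        print(n, f"${total / n / 1e9:.1f}B average per unit")
    # The average falls with n, but each additional unit only ever costs unit_cost.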

> no reason

The reason is I can't think of any endeavor where the incremental cost of #2 doesn't drop dramatically.


When is the second copy of anything ever the same price as the first? That thread talks about the second being maybe 30% of the cost of the first. Is that not “economies of scale”? And I think that seems like a great deal, given how much universe there is to look at, and the potential for the first to not survive until the end of its expected lifetime. Hell, why not put 10 of them up there, and iterate on the design?


Perseverance was originally a twin of Curiosity, but it cost more than Curiosity. There's no way that a twin of JWST would cost only 10%.


> Perseverance was originally a twin of Curiosity, but it cost more than Curiosity.

I can't imagine how that could come about.

> There's no way that a twin of JWST would cost only 10%.

It's kind of the way building things works. The cost of the prototype is enormous compared to the next one. For one thing, the additional R+D cost is $0. The additional cost of the software (and I bet the custom software is a big chunk) is $0. The additional cost of committee meetings to discuss competing alternatives is $0. And on and on.

When cutting parts, the cost isn't in cutting the parts. The cost is setting up the machine to cut the parts. The cutting cost is trivial.


I still can't figure out why this "economies of scale" misconception is so popular on HN with respect to JWST. The major costs of building JWST are in the testing + validation + refinement phases, which must be done meticulously for every unit that is built. A JWST "out of the box" is guaranteed to fail: literally, you could launch a million "unrefined" JWSTs and every single one of them will experience a critical mission failure.

Most components of the JWST are not within spec as they leave the factory floor; for many components, the precision required cannot be achieved with machining metrology alone. Remember that system error compounds with every new component that is incorporated. Components have to be constructed, integrated, and then measured/validated with sophisticated metrology equipment after full assembly. If you're lucky, you can modify the components you have to achieve the desired overall tolerances. But a lot of the time, you have to bin the same component a dozen times until you get a batch which happens to be correct (much like in microchip manufacturing).
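
A toy illustration of the compounding (numbers invented, nothing to do with JWST's actual error budget): even when every stage in a chain is individually in spec, the total stack-up grows with the number of stages.

    import math

    def stack_error(n_stages, sigma_per_stage_nm):
        # Worst case: every error adds in the same direction.
        worst_case = n_stages * sigma_per_stage_nm
        # Statistical stack-up (root-sum-square), assuming independent errors.
        rss = math.sqrt(n_stages) * sigma_per_stage_nm
        return worst_case, rss

    print(stack_error(18, 25))  # 18 stages at 25 nm each -> (450, ~106) nm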

And this is just for physical manufacturing -- there are multiple other dimensions which are impossible to get right the first time, requiring multiple iterations until your integration tests pass. Many of these test scenarios are extremely expensive to simulate (e.g. full-size vacuum chambers, launch and zero-g simulators), and must be done to validate every single phase of a 5-year mission to an extremely high chance of success (from transport to launch site -> launch -> full deployment -> science operations). Something as simple as a wrongly-tensioned cable is enough to scrap an entire mission -- the validation is absolutely essential to ensure that anything from a manufacturing defect to a simple human error doesn't make it through to launch.

Even in spite of all the lessons learned from JWST 1, I would be surprised if JWST 2's cost was less than 50% of JWST 1 (realistically, I'd peg it at ~80%). The testing costs are a very high fixed cost that must be paid for every unit you make. There's no other way around it.


I remember seeing a documentary on designing the parachute for one of the Mars landers. JPL built this huge building solely for the purpose of testing the parachute designs. Design after design after design failed, and the engineers were worried that they'd never figure out how to make a working Mars parachute.

But they did come up with a design that worked. Phew!

The cost of that special building, the building's design, the special machinery that filled it, and all those months of testing various parachute designs must have been enormous, and all count toward the cost of parachute #1. The construction of parachute #2, after all that, was likely insignificant in comparison.


Again, you'd have $0 in research and development and software and test rig costs of #2.

> The testing costs are a very high fixed cost

I'm sure they are. But you won't have to design the tests and build the test rigs and validate the test procedures a second time. Second, you'll inevitably learn from the first test runs and need less iteration.

For example, the full-size vacuum chamber. You would already have it on hand, and not need to build another one. Having already just run #1 through it, you'd know just what to do to get #2 through.

For example, the first time I took the heads off my Mustang it took 4 hours. The second time 2 hours. The third time 20 minutes. The procedure was already all laid out for me in the shop manual. But knowing just what to do cut the time enormously.


> When cutting parts, the cost isn't in cutting the parts. The cost is setting up the machine to cut the parts. The cutting cost is trivial.

When you're building a one-off, you aren't setting up a machine to cut the parts. You're just cutting the parts more or less by hand. It'd be too expensive to set up the machine, and calibrate, and run all of the prototypes to make sure it works, if all you need is a couple of pieces out of it.


I've made parts on milling machines and lathes. The time spent is on the setup, not the cut.


There are indeed some reasons why it would be nice to have two JWSTs. Certainly, at least some costs would have been lower for building a second one, though all the labor would have been duplicated, and it seems a lot of labor went into just building JWST to spec.

But what would the money spent on a second telescope buy us? One often-named reason is protection against failure. That is not as straightforward as it sounds. If there is a random chance of failure, then a second telescope lowers the risk accordingly. However, if there is a systematic problem with the design, you would have two defective telescopes. That means you would have wasted even more money.
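
Toy numbers to make the distinction concrete (the 10% is invented, not an actual failure estimate):

    p = 0.10                 # assumed per-telescope failure probability
    print(1 - p ** 2)        # independent failures: P(at least one works) = 0.99
    print(1 - p)             # shared design flaw: the copy doesn't help, still 0.90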

Then, if both succeed, you would have increased the "bandwidth", as they could be operated in parallel. But you wouldn't have added the capability to do things differently. Voyager 1 and 2, and Spirit and Opportunity, were at least sent on different mission profiles and thus justified the expense.

The thing is, $10 billion is a huge amount of money. If another JWST had cost, say, $5 billion, that's a lot of scientific projects not done because of building a second space telescope. I would rather see the money spent on different capabilities. Hubble, for example, is failing; we should have another telescope in the visual range ASAP. As soon as Starship reaches orbit, plans should immediately start to convert one Starship into a humongous Hubble successor.

An instrument like the Thirty Meter Telescope costs just $1 billion. There is so much other science those $5 billion would finance, even if you look around only in the fields of astronomy and cosmology.

I really like what they did with Curiosity/Perseverance. They used a proven platform for a second mission with updated sensor and mission profiles. So in my eyes, it would be good to invest the money not spent on a second JWST to begin construction of a true successor, which should be operational before the end of the life time of JWST. With upgraded sensors and based on anything we learn in the first years JWST is used.


The glass half full haruspicy is that it might mean we'll be forced to start thinking about designing an even better one 5-10 years from now :)


I heard one option for addressing climate change is a huge sun shade between us and the sun. I figure that sounds like something that could be turned into a radio telescope array or something. Maybe with the Earth-facing side being the side used for equipment? Idk.


A radio telescope would perform best if it was shielded by the moon.


From memory, that sunshade needs to be many orders of magnitude bigger than any telescope.

Which probably means it would have to be a "cloud" of millions of smaller shades.


If enough of a star's light can be blocked by polarized filters, data transmission can occur when they are switched on and off in a pattern. Maybe with the JWST we'll be able to download many different civilizations' Wikipedias if they're blinking their suns at us.


Maybe a shorter life is a feature if it adds capability. JWST was built to further investigate discoveries made with Hubble. Presumably whatever comes after JWST will similarly drill into discoveries we are about to make.


Let’s get $5B/year out of it. Your $1K smartphone is expensive if it only lasts 1yr for fun and not so much if it lasts 5 years and you get business income via it.


20 years of expected propellant, but other critical parts that could fail before that time.


Your $100B figure should be $10B for Webb. Ingenuity was an ~$80M part of Perseverance, which was $2.4B in total.


Off-by-10 error, shame face here. Can’t edit now that replies have rolled in.

W.r.t the comparison to other science and engineering efficacy, defense spending is for total shit.


Military spending isn’t like buying groceries. The US Military is a jobs program. Even a failed program fulfills this purpose. No amount of spending is “wasted”.


This is the Broken Window Fallacy (https://en.m.wikipedia.org/wiki/Parable_of_the_broken_window) -- without the military spending, those same jobs could have been created by the government to do something useful, like fixing our crumbling infrastructure.


The studies giving our infrastructure bad grades are mostly coming from the contractors we'd be hiring to fix them.


I drive on our roads. I drink our water. As anyone who does these things can tell you: our infrastructure is a god damn trainwreck.


A bridge just collapsed in Pennsylvania. In Seattle we have a major chunk of the city cut off because a bridge cracked. Every city’s transit system is aging or needs expansion badly.


Wasn’t the bridge undermined by a gas leak? Not much you can do about that unless you want a spare bridge somewhere else.


War is unfortunately real, if you read the papers. Our spending on stuff that isn’t people is massively kickbacks and back-scratching.

Look at wasted energy inputs and unrecyclable materials as the true wastes, and the offshored/oligarch-concentrated money as a time bomb for when it gets actually deployed from its base in Virgin Islands.


Shouldn't that be 10, not 100?


Yes.


What were the expectations and what are they measuring that exceeds that? I haven’t seen any comparison of the metrics. From my memory, the JWST was roughly a 10x upgrade relative to Hubble in each of spectral resolution, angular resolution, and platform stability. How are those numbers looking now?


The endless dots as entire galaxies in that photo is just beyond mind-blowing.

I wonder if the public can really grasp it, not just grains of sands on a beach but each one a beach with 100-MILLION grains of sand.

If we can't have FTL travel in my lifetime sure would be nice if they figure out FTL communication.


I'd really love to read a book about the design, engineering and project management of James Webb.


It's important that Webb got this right, because proposed 2030s successors to the visible-light Hubble or infrared Webb are considering as many as a hundred one-meter mirrors due to the limited size of launch vehicles.


Since the James Webb telescope primarily operates in the infrared, does that mean we can't expect magnificently colored images from it of the kind we got from the Hubble telescope?


The "magnificently colored images" are false colored unfortunately - the colors are derived by assigning colors to wavelengths which are faint or invisible to the human eye: purple to ultraviolet, reds to infrared, etc. I learned this in a presentation by a Hubble scientist at an astronomical imaging conference. While their work was brillant, there was a tiny part of me that was scandalized by the whole thing.

So, for the James Webb, I'm sure they will be able to do the same thing to help improve and bring out details in its images - it's just that the scale will be shifted to the red a bit.
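
The usual recipe looks roughly like this - a minimal sketch, with an invented percentile stretch, not Hubble's or Webb's actual processing:

    import numpy as np

    def false_color(long_band, mid_band, short_band):
        # Map three single-filter exposures to R, G, B by wavelength order.
        def stretch(img):
            img = img.astype(float)
            lo, hi = np.percentile(img, (1, 99))   # clip outliers, normalize to 0..1
            return np.clip((img - lo) / (hi - lo), 0, 1)
        # Longest wavelength -> red, shortest -> blue, preserving relative order.
        return np.dstack([stretch(long_band), stretch(mid_band), stretch(short_band)])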

Article: https://asd.gsfc.nasa.gov/blueshift/index.php/2016/09/13/hub...


Maybe this will start the transition from terrestrial telescopes to ones based in space instead? This should be good news for SpaceX and Starlink.


Kudos to everyone involved.

I didn't realise there was a planet-spotting remit to JWST though. Looking forward to the results of that.


Not just spotting planets, it's possible they can roughly measure the atmospheric composition of them as well.


How fast is the JWST data transmission in Mbit/s?

And if it's fast, how is that even possible if it's so far away?


Honest question: Is the lens flare at the center to be expected or more corrections are underway?


It’s an inevitable result of diffraction caused by the three arms holding the secondary mirror.


I believe most of the diffraction is due to the edges of the hexagonal mirrors, not the spider holding the mirror. The lens flare though I think is due to the spider.


Can this not be mathed out if we know the distance of the target? I know it wouldn't generate the missing information in the occluded regions, but it might look neater.


Distance to the target doesn’t matter because to an astronomical telescope every object of interest is at optical infinity, but yes, if you know (an approximation) of the optical system’s point spread function, you can compensate for the effects of aberrations including diffraction.

The JWST team knows the PSF from having designed, simulated, and now actually tested the telescope, and it will likely come in useful in getting the last bits of scientifically valuable information out of the data, but in normal use those diffraction spikes in particular are unlikely to be a problem. They basically only show up because the star in the test image is highly overexposed.
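
For the curious, "compensating using the PSF" in practice often means deconvolution. A minimal sketch of the textbook Richardson-Lucy iteration, assuming you already have a PSF estimate (this is not the actual JWST pipeline):

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, n_iter=30):
        # Iteratively re-estimate the unblurred image given the observed
        # image and an estimate of the point spread function.
        psf = psf / psf.sum()
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full_like(observed, observed.mean(), dtype=float)
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = observed / np.maximum(blurred, 1e-12)  # guard against divide-by-zero
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate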


Hey, good for them! Congrats to all involved. One more joyful project in the world of technology.


The best news in a long time.


Absolutely incredible. I can't wait to see what this thing shows us.


I'm relieved we don't have a new Hubble crisis.


Like Hubble, are images taken with JWST public domain?


That’s awesome.

Now let’s mass-produce 50 of them and put them all over any Lagrange point there is, and on the dark side of the moon.


There’s no dark side of the moon, unless you count some of the craters near the poles whose floors are in permanent shadow.


The dark side of the moon refers to the hemisphere facing away from Earth.


But it's actually NOT dark, it's fully illuminated, just facing away from us. While normally this would be a nitpick (it's far side, not dark side!), it actually matters a lot for an infrared telescope.


I'm letting you know what they meant. It's a common expression.

https://en.wikipedia.org/wiki/Dark_Side_of_the_Moon_(disambi...


No matter what they meant, the point is that the idea of putting an infrared telescope anywhere that's not permanently dark and cold is a nonstarter. So it would only make sense to build one on the far side of the moon if it were actually dark.


JWST is never in the shade of Earth or moon.


Yes, but it has its own sunshade, with all the complexity and SPOFs it entails. The point is that at L2 it only needs a relatively small sunshield to block the three way too bright things in the sky, without having to move a lot to keep them shielded. On the lunar far side, it would have to block one whole hemisphere of the sky (ie. the lunar surface) PLUS the sun, which moves in the sky, PLUS insulate the structure from conducting heat from the surface. Oh, and radio doesn't work too well through 3000 km of rock so you'd need a relay satellite system just to communicate with Earth. And you'd need a non-solar source of power for the 14-day lunar night.

But building an infrared telescope in one of the permanently shaded polar craters? That just might make sense at some point, but likely not until we have robust crewed infrastructure in place. The polar areas are very attractive from a crewed-mission perspective as well, because we now know there are sizeable amounts of water ice there, and at the same time, on the crater rims you can get continuous sunlight for solar panels.


Because that was the important part of my post?


Yes, in this case it was, because there’s no point in putting an infrared telescope somewhere that’s actually not in permanent shade and extreme cold!


You seriously did not understand that the post was about building a lot more of the same instrument. That it was not about where to put them. You really didn’t understand that?


What??? Pink Floyd lied to us???



