Compare Webb's Images to Hubble (johnedchristensen.github.io)
1636 points by hexomancer on July 12, 2022 | 244 comments



For the professional (or former professional) astronomers among us, I will make my somewhat amused observation that what people are paying most attention to are not really the distinguishing features of JWST.

People seem most impressed by the apparent increase in resolution of the images, which is, from a certain point of view, not the hardest thing to do. HST might have done that if its instruments had had a different pixel size or imaging-array size / focal length. OK, the much larger mirror is an achievement. But the resolution of the images is often not the limiting factor for photometric observations anyway. Yes, it is sharper / higher resolution, but that wasn't the key selling point.

The new thing is observation in the IR, which is something of a technical footnote in many gushing announcements of these images (and in some discussion here too). The general public knows little about that detail's importance, especially since the images are stylized / colored anyway to look just like the RGB images we are so familiar with. But everyone can easily appreciate a sharper image.

Anyway, still a momentous achievement. And thank god we have a scientific field where stunning images were enough to get the public to support a $10B project.

**

Edit to add: I did not mean to detract from or diminish anyone's appreciation of the images and accomplishment at whatever level they are enjoyed. And of course many here are technically knowledgeable about the IR aspect. I just write to point out that for the most headline-grabbing images and newspaper writers, the sharpness of the images over the actual IR frontier is what grabs the attention.


Agreed! In the SMACS 0723 image, there is a red spiral galaxy near the top right which is effectively not present in the HST image because it was redshifted out of HST's range. This implies it's one of the galaxies receding fastest from us in the image, right? And therefore also among the oldest and farthest away?

https://blog.wolfd.me/hubble-jwst/


Yes, well 2 possible effects:

1) as you said, its flux is predominantly in the IR

2) it could have been fainter than the sensitivity of the HST instruments but now seen because of the sensitivity of JWST

But given that it appears so bright in the JWST image similar to other nearby galaxies that do appear in the HST image, your bet on #1 seems reasonable.

Also, there is another point: rather than a highly redshifted galaxy, it could be a very dusty nearby galaxy (also appearing very red), but if I remember right, that would have a slightly different signature. Dusty galaxies often aren't entirely dusty; they have "lanes/channels/streaks" of dust interspersed among normal stellar regions, so if it were that, you would see some bright spots outside the infrared. This one, though, has the shape of a normal galaxy yet is red all over, suggesting something affects the whole galaxy -- i.e. redshift.


Isn't one way to detect redshift to check if there's absorption of certain frequencies due to the light passing through matter on the way here? The absorption will be redshifted as well?


Isn't there a 3rd possibility that the galaxy is quenched leaving only redder stars?


Related question: to confirm, some of the additional detail we’re seeing in the JWST images is in fact IR that has been “hue clamped” into the visible spectrum?


Astrophysicist YouTuber Dr Becky covered this last month: https://www.youtube.com/watch?v=sNJR3lenz1I


It's all IR. It's color-mapped over RGB, but the sensors are IR.


Didn't the observed light shift into IR wavelengths due to the expansion of space over the last billions of years while it traveled to us?

Do the recolored images have any relation to what the original view would have looked like, or is it just arbitrary "artistic license"?


Hey, that's an interesting and uncommon question that I had not seen elsewhere --

They have not released or talked in much technical detail about how the images were assembled, which I'm sure will be done at some point.

I do not think the colors do correspond (at least not deliberately), for 2 reasons:

First is that the image of the Deep Field ("SMACS") contains galaxies at a range of redshifts. For example, there may be galaxies quite near us (redshift z = 0 or close to 0) while others are more distant (the arced galaxies being lensed, which this image is famous for, are at redshift z = 0.39), where (1 + z) measures the factor by which the light's wavelength has been stretched.

So regardless of what color mapping you choose, it would not be a perfect fit for all objects in the field of view. For the galaxies near us in the image (z = 0), the wavelengths being converted to RGB don't correspond to what we would see in the optical.

Secondly, if it were remapped especially for the galaxy cluster of interest in this image, I don't think the colors are specifically tuned for that either.

In more detail:

Consider the optical color spectrum we see, ROYGBIV, or let me reverse it in order of increasing wavelength, VIBGYOR -- and take the "RGB" 3 colors that might make up an image, or BGR to use that ordering -- this spans a wavelength range of roughly 450 nm, 550 nm, 650 nm.

The imaging filters available on the NIRCam span 900nm to 4400nm (4.4 micron) and there are 29 of them [0]. Researchers choose which filters to use based on what they wish to study. And recall that the imaging sensor actually outputs grayscale only, it is the filters that give it a color view and individual images in each filter are assembled to create a color composite.

According to an example science program designed to take such images[1], the filters selected to be imaged might be 900nm, 1150nm, 1500nm.

If you applied the redshift of the galaxies (divide by 1.39 from the above info about the cluster of galaxies), the above sampled wavelengths in the image would still correspond to redder parts of the spectrum compared to what is visible if we were seeing the galaxies now: 647, 827, 1079nm.

So, no I don't think the color mapping was chosen to be accurate in a scientific sense of seeing what you would see if the galaxies were brought to the "original" view.

[0] https://jwst-docs.stsci.edu/jwst-near-infrared-camera/nircam...

[1] https://jwst-docs.stsci.edu/jwst-near-infrared-camera/nircam... ("select filters")
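A quick back-of-the-envelope sketch of that de-redshift arithmetic (the filter wavelengths are just the example values discussed above, not an actual observing program):

```python
# De-redshift the example NIRCam filter wavelengths to the cluster's
# rest frame. Values are illustrative, taken from the discussion above.
z = 0.39                          # redshift of the lensing cluster
observed_nm = [900, 1150, 1500]   # example filter wavelengths imaged

rest_frame_nm = [w / (1 + z) for w in observed_nm]
print([round(w, 1) for w in rest_frame_nm])  # [647.5, 827.3, 1079.1]
```

Even "de-redshifted", these wavelengths sit at or beyond the red edge of the roughly 400-700 nm band the eye sees, so no RGB mapping of them can be literally true-color.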


A tangential question. If these galaxies are so much redshifted, then they are probably very, very far. Like more than 1 billion ly far. And if that is so, they should look exactly the same today, tomorrow, in one year, or in a hundred years. If for some reason we wanted these images in much higher resolution, could we just point the camera to the same spot and take millions of shots and then apply a super-resolution algorithm?


I believe that in the image assembly pipelines for processing these astronomical images, they already do take into account / use the "dithering" patterns that you're hinting at. (Often the telescope will be pointed in a pattern with sub-pixel offsets over multiple exposures to do exactly this).

However, 2 factors:

1) I believe there is an intrinsic limit to how much more resolution you can recover (maybe a factor of approx. 2x?) for a lot more exposure time. Also, at these faint levels of brightness you're competing against intrinsic photon and sensor noise.

2) practically, given the value of the telescope's time and not much more to be gained (science-wise) from achieving this next order of spatial resolution, they want to spend the time on other new targets instead of sitting on the same patch for much more time.

(you can even try this at home: https://petapixel.com/2015/02/21/a-practical-guide-to-creati... )
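For a feel of how sub-pixel dithering buys resolution, here is a toy shift-and-add ("drizzle"-flavored) sketch. The `shift_and_add` helper and its deposit-onto-a-finer-grid scheme are my own illustration, not the actual STScI pipeline:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Deposit each low-res pixel onto a grid `factor`x finer at its known
    sub-pixel offset, then average wherever samples overlap."""
    h, w = frames[0].shape
    accum = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(accum)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        yy = np.clip(np.round((ys + dy) * factor).astype(int), 0, h * factor - 1)
        xx = np.clip(np.round((xs + dx) * factor).astype(int), 0, w * factor - 1)
        np.add.at(accum, (yy, xx), frame)
        np.add.at(weight, (yy, xx), 1.0)
    return accum / np.maximum(weight, 1e-12)

# Four exposures dithered by half a pixel tile a grid twice as fine:
frame = np.arange(16, dtype=float).reshape(4, 4)
out = shift_and_add([frame] * 4,
                    [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5)], factor=2)
print(out.shape)  # (8, 8)
```

In practice the recoverable gain is limited by the PSF and by noise, as noted above; this only shows the geometric idea.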


Thanks for the write-up! It's definitely important to mention the single-channel aspect of the sensors because I guess many people probably don't understand that.


The typical digital camera of today works the same way, just with 3 filters (R, G, B) kind of like a CRT pixel array in reverse[0].

This kind of mimics the human eye, which is sensitive to those three approximate color bands, but it's interesting to note that other species besides humans (and, apparently, even some humans)[1] have vision that works with more "filters", or on different spectra (such as honeybees).

I've always found it kind of amazing that so many satellite imaging devices work on far greater spectral ranges with far more color filters, being able to discern far more information than we could with the naked eye (but in essentially the same way).

[0]https://en.wikipedia.org/wiki/Bayer_filter

[1]https://en.wikipedia.org/wiki/Tetrachromacy
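As a toy illustration of that single-channel point: a color composite is just grayscale frames, one per filter, assigned to channels in wavelength order (shortest mapped to blue). This `compose_rgb` helper is hypothetical, not any real pipeline's API:

```python
import numpy as np

def compose_rgb(frames_by_wavelength):
    """Map three single-filter grayscale frames to RGB by chromatic
    ordering: longest wavelength -> red, shortest -> blue."""
    ordered = sorted(frames_by_wavelength, key=lambda t: t[0])
    b, g, r = (frame for _, frame in ordered)
    rgb = np.stack([r, g, b], axis=-1).astype(float)
    return rgb / max(rgb.max(), 1e-12)   # normalize to [0, 1]

# Synthetic flat frames "taken" through three filters (wavelengths in nm):
frames = [(900, np.full((2, 2), 1.0)),
          (1500, np.full((2, 2), 3.0)),
          (1150, np.full((2, 2), 2.0))]
rgb = compose_rgb(frames)
print(rgb.shape)  # (2, 2, 3)
```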


Looked like from what perspective? A human eye on Earth?


OP clearly means from an observer moving at (approximately) the same velocity as the object emitting the light.


The objects aren’t really moving in that sense. The expansion of space over time is what stretches the light.


Without a fixed point of reference, what is the difference?


The difference is that the expansion of space is accelerating over time. And certain wavelengths of light will be blocked by gas in the intervening space, and which light is blocked changes over time based on how red-shifted it is. Special relativity isn't enough to explain this.


NIRCam’s range starts at 0.6 microns (600 nm) so picks up a little bit of the red end of our visible range. But it definitely can’t differentiate a range of human visible colors.


Yes, exactly right. (AFAIK, IANAA)


Also, when comparing the deep-field images, the difference in exposure time between the HST and JWST images can't be stressed enough (Hubble had a 10x longer exposure time). Many, many more distant galaxies would become visible in the JWST image with a much longer exposure time. I look forward to seeing some long-exposure deep-field images from JWST!


This is underappreciated. Much lamentation has been made of the fact that JWST's current mission length is only ten years (maybe twenty or thirty at best, but hard-limited by on-board coolant), but with the speed of its observations that ten years will be as productive as a century of Hubble time.


It's difficult to imagine how much more detailed the JWST images could be if they used the same observation time as the Hubble images.

Will there be thousands of additional extremely red shifted galaxies?


With a 10x observation time, you would probably be approaching a uniform cosmic background of multitudes of galaxies.

There would be negligible space without a light emitter.

> Will there be thousands of additional extremely red shifted galaxies?

Yes, and also those that were too faint, but not necessarily extremely red-shifted.


The increased resolution is extremely important given that the diffraction threshold is a function of the wavelength and the mirror diameter. And you can clearly see that in the MIRI images, at longer wavelengths, which have a noticeably lower resolution compared to NIRCam. If Webb's mirror were only as big as Hubble's, the resolution would have been bad at the long wavelengths. And Hubble couldn't have had better resolution with better instruments; it was already limited by its aberration problem, and the new instruments were designed to mitigate that problem.


Those are good points -- my original comment was a little simplistic.

I don't have the info at hand -- do you know the resolution of HST's optical/NIR imagers versus JWST's new imagers?


Right. After the initial "wow factor" has settled down, what's been most striking to me is the level of detail that's no longer obscured by gas and dust in these nebulae due to MIRI. I know very little about the study of stellar nurseries or planetary nebulae but I've seen enough pre-JWST images of them to know that astrophysicists just got a whole lot to sink their teeth into and I look forward to seeing further developments as more data is collected and existing data is studied.


I'm impressed by the speed of the imaging. Apparently the Hubble Deep Field took over 10 days of telescope time while JWST did it in 12.5 hours.


>The new thing is observations in the IR,

Not a new thing.

The Herschel Space Observatory operated in the same location (the L2 point) as JWST, and it was an IR telescope. https://www.google.com/search?q=Herschel+space+observatory+i...

Mirror sizes:

   Hubble    4.0 m2 (43 sq ft)
   Herschel  9.6 m2 (103 sq ft)
   JWST     25.4 m2 (273 sq ft)
Hubble had ~40% of Herschel's collecting area and Herschel had ~40% of JWST's collecting area.


That is a good point -- my simplification of the advance provided by JWST is aimed at saying, JWST is kind of the first major telescope with such depth and resolution to complement the deep fields of HST that everyone is so familiar with.

Herschel, while impressive also, was far-IR (if I recall) and much lower resolution, which was good for certain research areas, but less complementary to the HST deep fields for the, well, currently fashionable, recent study of galaxies at high redshift.


Well, also, one thing to note is that it took Webb only hours to capture these images, while Hubble had to be aimed for days and months to capture comparable ones. It will be interesting to see the results when they have more time to capture longer sequences of light.


> HST might have done that if its instruments had been of different pixel size or imaging array size / focal length

HST is already imaging at diffraction-limited resolution (with proper post-processing). It would need a bigger mirror to do better.


If I may get political here (nothing personal / no direct reply to the parent comment):

> stunning images was enough to get the public to support a $10B project.

I don't believe public support is relevant; is there public support for the >$700B a year spent on the military?


Yeah, more or less. People gripe about it occasionally, especially on the left, but then some shit will go down like Ukraine and suddenly nobody wants to look like an idiot (which will absolutely happen in an American context if you suggest cutting military spending right after Russia invades someone).


It’s not so much public support as the lack of public opposition.

Everyone in “big science” remembers the cautionary tale of the Superconducting Super Collider, which was cancelled mid-project when it became politically viable to oppose it as a waste of money. The circular tunnel is still sitting dusty and abandoned down in Texas while CERN runs another round on the LHC.

Big results that gather public praise go a long way toward making sure the next big science project will at least be seriously considered.


>$700B

Global force projection, to the benefit of all Western economies, is very expensive. This money is required to even allow the form of economy "the West" is running.

How else were you going to keep up the Pax Americana that enables globalization by making significant global trade networks possible in the first place? Who's gonna insure your freighter if international waters aren't protected by Western navies? Pirates, rogue states closing important channels, at-will seizures for no reason... the list is long.

As the Pax Americana will likely soon fade through growing influence of the BRICS nations and "America First"-style ideologies, the 700B will probably wither away quite fast in the next decades - along with all the benefits we enjoyed since WW2.

Just one illustration: how many South American or African nations support the sanctions against Russia? How many Asian nations that are not Japan?


Public support for NASA drives the members of Congress to allocate funds for its work. So, NASA spends a whole lot of effort showing the public how cool its work is. That, and it divides its facilities and subcontractors across the various states to ensure that every member of congress has constituents who benefit from its payroll.


> is there public support for the >$700B a year spent on the military?

Of course. It's tempting to think we're in this new lovey-dovey age of an improved/superior humanity, but the reality is man's baser instincts are kept in check by BFGs and MAD.


In the second image it is clear that more of the nebula is visible, isn't this because more wavelengths of light are being detected? In this case an amateur absolutely can appreciate the technical improvements, the IR is mapped to a visible RGB spectrum...


Different wavelengths. The JWST also cannot see the shorter wavelengths that Hubble excels at.


... and why is infrared more useful? Can we see through dust or something?


Seeing through dust is part of it. Another part is redshift: because of relativity, things moving away from us appear redder (longer wavelength) the faster they’re moving away. That’s the same principle (Doppler effect) as the lower-pitched siren sound as the ambulance drives away. Because the universe itself is expanding, the farther something is from us, the faster it’s moving away relative to us, and the more redshifted it appears to us. And again because of relativity, the oldest objects we can see are the ones that appear farthest away (i.e. their light is just now getting here, after 13 billion years, from 13 billion light years away). Thus, if we want to study the earliest times of the universe, we must study the most redshifted objects—which have shifted all the way out of the visible spectrum and into the infrared. Hence, Webb is an infrared telescope.


Your statements about redshift and distance are correct, but I do not see where relativity comes into it. Even in Newtonian physics redshift occurs. If you could elaborate I would love to learn. Thank you!


Newtonian physics does have the Doppler effect which can cause red shift, but it doesn't accurately explain the red shift that occurs when observing distant galaxies. That's because this red shift occurs due to three factors:

1) The galaxy is moving away from us. This is most like the classic Doppler effect, but because of the high relative velocities involved, time dilation needs to be taken into account to model the red shift accurately, thus at least special relativity should be used.

2) The light travels through space with different curvature. For example, light originating near a very massive star will red shift when moving away from that star because it moves into less curved space. General relativity is needed to explain this effect.

3) The light travels through expanding space. For very distant galaxies this becomes the dominating factor of red shift, as we see an amount of red shift directly corresponding to their distance from earth. General relativity also explains this effect.
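To see where special relativity starts to matter in factor 1, compare the classical and relativistic Doppler redshifts (simple textbook formulas for pure recession; the speeds are illustrative):

```python
import math

def z_classical(beta):
    """Classical Doppler: 1 + z = 1 + v/c."""
    return beta

def z_relativistic(beta):
    """Relativistic Doppler for pure recession: 1 + z = sqrt((1+b)/(1-b))."""
    return math.sqrt((1 + beta) / (1 - beta)) - 1

for beta in (0.01, 0.5, 0.9):
    print(beta, z_classical(beta), round(z_relativistic(beta), 4))
# At beta = 0.01 the two agree to ~0.5%; at beta = 0.9 the relativistic
# redshift (~3.36) is nearly four times the classical estimate (0.9).
```

Neither formula covers factors 2 and 3 (gravitational and cosmological redshift), which need general relativity.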


I see, thank you.


Yeah, you’re correct, you don’t need to invoke relativity to explain the Doppler effect. Sibling comment did a good job explaining the ways relativity does impact redshift, but my initial statement (“because of relativity…”) was not correct.


Infrared is the only way to see things that are cold (as in not a star) or extremely far away (due to redshift).


What about reflected light, like most things we see in the solar system?


Good point, I shouldn't have said "only." I have heard of reflected light being used for exoplanet observations, although I think it's quite faint. I guess some nebulae are illuminated by stars. I don't know if the reflected light is longer wavelength than the incident light, but I suspect so. BTW I'm not an astronomer!


Look how much more transparent the dust is in the carina image at the bottom:

https://johnedchristensen.github.io/WebbCompare/


[flagged]


> This is so dismissive and insulting. ... I'm sure you were well intentioned, but this comment read all kinds of rude and negative.

Your comment is way, way over the top. Their observation is entirely correct, maybe not in your circles, but certainly it's what I'm also seeing on Facebook and twitter.

Hell, NASA posted "The razor-sharp resolution of the @NASAWebb imagery was enough to bring astrophysicist Jane Rigby to tears."


Apologies, I did not mean for it to come off that way. I edited it to not make such broad brush statements.

Certainly people here, discussing it among those who get to watch these announcements during their day, are a more knowledgeable group, and more appreciative of the details. They have the time and info to know the "new" aspect.

I was just making the point that in the most headline-grabbing, CNN/newspaper science-writer genre, the writers and their audience probably just see the image sharpness as they flip through the news.


I think this is a little harsh, but I actually generally agree. I'm a total layman here and my PRIMARY TAKEAWAY has been the IR component and how much more, and further away, you can see because of it.


I get what you are trying to say here, but someone posting on HN about "the IR component" using the nom de guerre "Enginerrd" might not be "a total layman" :-)


Ok, yeah, in retrospect that's probably fair.


> I'm sure you were well intentioned, but this comment read all kinds of rude and negative.

Pot, meet kettle. Except you are flat out an arse while GP was mostly innocent in their action I feel.


> especially since the images are stylized / colored anyway

The thing that I don't like about the new images is the abuse of star flare effect. The colors are okay, but the flares... that's simply too much.


It's not an effect. It's a diffraction artifact that's a consequence of the telescope construction. Explained here for example: https://bigthink.com/starts-with-a-bang/james-webb-spikes/


Although this reference nicely explains the origin of the diffraction pattern in the JWST images, it does not explain why the spikes seem to extend much farther than in the Hubble images. My hunch is that JWST succeeds much better at gathering all the light into a true point, which makes the primary diffraction pattern stronger too.

While this is very important for scientific work (easier to see planets!), it is less appealing to the eye. Also note that some JWST images have faint blue streaks which are in effect diffraction spikes from bright stars outside the field of view.


I don't think that's on purpose


Here's a backyard telescope versus Webb: https://twitter.com/AJamesMcCarthy/status/154694183270093209...

More comparisons on Twitter, some zoomed in:

- https://twitter.com/Batsuto_/status/1546899241880240128

- https://twitter.com/Batsuto_/status/1546900387931766784

- https://twitter.com/JBWillcox/status/1546881033597075457

- https://twitter.com/jason4short/status/1546626672488632321

I'm not a physicist, so I've only recently learned about redshift. Hubble's deep field images were very dark red/orange because further objects appear redder (into infrared) before they disappear to the observer. Webb's sensors are more red/infrared-sensitive than Hubble's, so along with extremely fine, super-cooled optics using exotic materials to align and capture every single photon, its red sensitivity allows Webb to peer deeper, further, and dimmer than we've ever been able to before.

And I've read that the "spikes" coming off the brighter stars are generally from stars in our own galaxy, and they're not lens flares. They're caused by the edges of the telescope's structures. Hubble's stars would have 4 spikes in a cross; Webb's have 6 in a snowflake, because of Webb's mirror segments having 6 sides. Or something like that.


The spikes are caused by diffraction of light around the struts supporting the secondary mirror. Hubble has 4 supports for the secondary mirror. JWST has 3 supports for the secondary mirror, which because...physics (I don't know, I'm not an optics guy)...manifests as 6 diffraction lines.

https://en.wikipedia.org/wiki/Diffraction_spike

EDIT: It may be caused by both the diffraction spikes from the support struts and the shape of the mirror and aperture. I'm not really sure. The JWST images also seem to have two additional small spikes that look more like the diffraction pattern from a single strut, which could also be a support strut for a stop further down the optics chain.


Apparently the Diffraction spikes come from both the primary mirror shapes and the struts holding the secondary mirror. The primary mirrors of the JWST are hexagonal which would explain hexagonal looking effects. The three struts are apparently designed so that two of the struts match the hexagonal mirror angles at all times and are "hidden" inside. (The third strut apparently sometimes causes two much smaller "horizon" spikes for very, very bright objects.)

Hank Green on TikTok did a neat, quick demonstration in video form.


Link to that Hank Green video on YouTube: https://m.youtube.com/watch?v=Y7ieVkK-Cz0


The hero we need.

Thanks!



ah an authoritative source. if only we funded scientific exploration more heavily with public funds..


Oh, thank you that makes sense. And due to the folding of the mirror and launch envelope constraints they can't equally space the three struts such that ALL are inside the diffraction of the hexagonal mirror.



They're not lens flares, but they're still an artifact of the system, which is probably what most people mean by "lens flare" anyway, due to lack of a better common term.


According to the Wikipedia definition, I'd say that Webb's artifacts could be classified as "lens flares".

>This happens through light scattered by the imaging mechanism itself, for example through internal reflection and forward scatter from material imperfections in the lens. Lenses with large numbers of elements such as zooms tend to have more lens flare, as they contain a relatively large number of interfaces at which internal scattering may occur.

https://en.wikipedia.org/wiki/Lens_flare


Diffraction spikes are caused before the lens gets involved; improving the lenses cannot remove the spikes (though there may be structures causing diffraction spikes in-between lenses).

The spikes from JWST are primarily caused by the edges of the mirrors and the three support struts.

There likely is some amount of lens flare (though I don't know if it is significant, the optics are Very Good), but the dominant artefact is the diffraction spikes.


>The spikes from JWST are primarily caused by the edges of the mirrors and the three support struts.

That sounds to me like "light scattered by the imaging mechanism".


Actually I would say the effect mimics a "cross filter" / "star filter" special effect.

https://en.wikipedia.org/wiki/Photographic_filter#Cross_scre...


Which begs the question: is there a computational way to "collapse" the spikes in post-process? They are beautiful but also sort of distracting when you are trying to take in the enormous mass of stars in these photos.


Deconvolution, but it's more art than science because the problem is ill-conditioned and blows up without regularization, particularly if any part of the spike is overexposed (which it usually is, as you only notice the spikes on extremely bright stars).
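For the curious, here is a minimal numpy-only Richardson-Lucy sketch (a classic deconvolution scheme; this toy version assumes a known PSF and noiseless data, and real pipelines add regularization precisely because of the blow-up mentioned above):

```python
import numpy as np

def conv2_same(img, kernel):
    """'Same'-size 2-D convolution via zero padding (odd-sized kernels)."""
    k = kernel[::-1, ::-1]                  # flip -> true convolution
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * padded[i:i + h, j:j + w]
    return out

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """Iteratively re-concentrate flux: estimate *= corr(obs / blur, psf)."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]
    estimate = np.full(observed.shape, observed.mean())
    for _ in range(iterations):
        blurred = conv2_same(estimate, psf)
        ratio = observed / np.maximum(blurred, eps)
        estimate = estimate * conv2_same(ratio, psf_flip)
    return estimate

# Toy demo: blur a point source with a Gaussian PSF, then sharpen it back.
img = np.zeros((21, 21)); img[10, 10] = 1.0
y, x = np.mgrid[-2:3, -2:3]
psf = np.exp(-(x**2 + y**2) / 2.0); psf /= psf.sum()
blurred = conv2_same(img, psf)
sharp = richardson_lucy(blurred, psf)
print(sharp.max() > blurred.max())  # True: flux re-concentrates at the peak
```

With real, noisy, saturated data the same iteration amplifies noise instead of detail, which is the "more art than science" part.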


Hmm - maybe a use case for (trained?) machine learning? I don't have much idea about ML so not sure if it makes any sense.


There is a machine-learning tool to remove stars called StarNet, though last I checked it doesn't do a great job with diffraction that severe. StarNet is used by astrophotographers to enhance the contrast of nebulae without getting artifacts from stars, or to make completely starless nebula images.

If someone got StarNet to remove the stars in the JWST images, it wouldn't then be hard to overlay the stars back in without the diffraction.


thank you very informative.


Question for anyone who happens to be an expert: Is there any way to quantify how much better Webb is independently of the amount of time used to take the exposures? Like, could Hubble achieve the same quality of images as Webb if it was given 100x (or whatever) more time exposure?

I'm trying to understand how much the improvement is "speed of convergence" vs. "quality of asymptotic result". (Though... is that even a valid way of trying to understand things?)


IANA{astrophysicist, space engineer} but I do follow this closely and have what I call a working armchair understanding of this stuff. Anyone from relevant fields is welcome to gently correct any imprecisions. I always want to learn more and will thank you for it

>could Hubble achieve the same quality of images as Webb if it was given 100x (or whatever) more time exposure?

No, for a different and simpler reason: Hubble isn't as sensitive in the infrared as Webb. A lot of the stars and structure Webb has revealed, in the two nebulae especially, is due to it picking up much more of the infrared light to which the gas and dust of the nebulae are essentially transparent. In other words, the data is qualitatively different, in addition to the increased resolution. It also sees much older light, which is redshifted (the longer the travel, the greater the shift) out of Hubble's range of sensitivity.

As for the quantitative part, I guess mirror size is what you'd want to look at? Hubble has a single circular primary mirror with a diameter of 2.4 metres.[0]

Webb has 18 hexagonal mirror segments that are combined into the equivalent of a circular mirror with diameter 6.5m. That is ~6.25 times the light collection area of Hubble(25.4m² vs 4m²)[1]

0: https://en.wikipedia.org/wiki/Hubble_Space_Telescope

1: https://en.wikipedia.org/wiki/James_Webb_Space_Telescope#Fea...


> That is ~6.25 times the light collection area of Hubble(25.4m² vs 4m²)

This would have to be scaled by the wavelength being observed for a resolution comparison. Hubble actually has better absolute resolution when viewing the shorter wavelengths that JWST can't sense (0.05 arcseconds vs JWST's 0.1 arcseconds).


Right, that didn't occur to me at first, but is just obviously true when you point it out, thanks. Though I didn't know that hubble is actually higher resolution in that comparison.

Then, in some sense, the first part of my explanation is most of the story in the case of comparing MIRI (the mid-infrared instrument) to Hubble in the near-infrared.

But in comparing NIRCam to Hubble in the near-infrared, JWST would in fact have greater resolution, no?


The resolution is limited by the diameter, not the area (although they're usually closely linked).
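The standard yardstick here is the Rayleigh criterion, theta ~ 1.22 * lambda / D. Plugging in rough numbers (my own choices of wavelength, not official figures) reproduces the comparison above:

```python
import math

def rayleigh_arcsec(wavelength_m, diameter_m):
    """Diffraction-limited angular resolution, theta ~ 1.22 * lambda / D."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return math.degrees(theta_rad) * 3600.0

hst = rayleigh_arcsec(500e-9, 2.4)    # visible light, 2.4 m mirror
jwst = rayleigh_arcsec(2e-6, 6.5)     # near-IR, 6.5 m mirror
print(round(hst, 3), round(jwst, 3))  # ~0.052 vs ~0.077 arcsec
```

So despite the much larger mirror, JWST's longer observing wavelengths leave it roughly at parity with Hubble's visible-light resolution.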


Plus, upgrading Hubble wouldn't get us close either. JWST is specifically designed to shield the sensors from IR/heat, and it's 1 million miles from Earth for a similar reason.


> This also will see much older light which is redshifted(the longer the travel, the greater the shift) out of Hubble's range of sensitivity.

It's a matter of speed not distance, isn't it?


Redshift is indeed a matter of speed. But due to the expansion of the universe, relative speed and distance are directly related (Hubble's law).

So farther away means faster relative speed and thus more redshift (Doppler effect). Farther away also means older light (due to the finite speed of light).

Putting that all together means that to observe old light from the start of the universe we have to look in the IR spectrum.
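In rough numbers (this is the low-redshift approximation only, with the commonly quoted H0 of about 70 km/s/Mpc):

```python
H0 = 70.0          # Hubble constant, km/s per megaparsec (approximate)
C = 299_792.458    # speed of light, km/s

def recession_speed(distance_mpc):
    """Hubble's law: v = H0 * d."""
    return H0 * distance_mpc

def approx_z(distance_mpc):
    """For small z, z ~ v/c; large z needs a full cosmological model."""
    return recession_speed(distance_mpc) / C

print(round(approx_z(430), 3))  # a galaxy ~430 Mpc away: z ~ 0.1
```

For the truly distant galaxies JWST targets, z is of order 10 and this linear relation breaks down, but the qualitative point (farther means redder) stands.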


As I understand it redshifting is due to the doppler effect, whose formula only depends on the relative speed between the observer and the sender.

However, it also seems like due to galaxies further away having a larger expansion speed, typically they are more redshifted.


It’s both, but the contribution from distance will be much greater than from the relative speed for an old, distant galaxy.


For the wavelengths the telescopes are designed to observe (primarily ultraviolet and visible for Hubble, though it can do a little bit of infrared, while JWST looks at near-infrared and mid-infrared), resolution is fairly comparable, though JWST has a much wider field of view and doesn't have to sit idle when it orbits the sun side of the Earth like Hubble does.

A major issue with Hubble & JWST comparisons is just that they're designed to look at different wavelengths of light. A lot of what JWST will see is completely invisible to Hubble, and no amount of observing time can compensate for that.


A crude analogy is like this: Two cameras are pointed towards a wall. Camera #1 is good, but it is blocked by the wall. Camera #2 has a special trick, it does some magic that can look behind the wall.

Now both have resolutions and stuff. But no matter how big the resolution or how long it stares, cam1 is fundamentally blocked by the wall. It can take extremely high res photos of things inside the wall, but it can never see anything behind the wall.

Cam2 could have infinitely higher quality than cam1 — because who knows, there can be 100, 1000, million or a never ending world of things behind that wall that can never be seen or captured by cam1.

Cam1 is Hubble, cam2 is JWST, and the wall is the dust and redshift that hide things from visible light. JWST can peer deeper into the _same area_ of space, and see things behind that wall which Hubble can never see.


Not an expert, but one metric to demonstrate Webb’s capability is that Hubble’s deep field exposure took 10 days, and Webb did it in 12.5 hours.


This is true, but he was asking specifically about a metric that's independent of the exposure time.


No, exposure time is not enough. Resolution is a function of the size of the primary mirror; exposure time just allows capture of photons at that resolution. With the JWST primary mirror dwarfing Hubble's, it will always have better imagery.


Wavelength is also a factor. Huygens Optics has a great video on this. tl;dr: angular resolution is about the same as Hubble's.

https://youtu.be/gOpbXBppUEU


Regardless of exposure, you have to consider wavelength. There are some things JWST can see that are completely invisible to Hubble, or, similarly, there are objects that are opaque to Hubble that JWST can see right through. Just look at all the extra stars that appear in the image of the Carina Nebula for an example of that.


Webb's physical dimensions are larger than Hubble's. The "collecting area" of Webb is 273 sq ft to Hubble's 46, per Wikipedia. The two telescopes are sensitive to different (but somewhat overlapping) bands of light. Hubble works across the visible spectrum while JWST is almost exclusively infrared.

To the "can Hubble do anything Webb can do but with more time", the answer is no, due to the lack of mid-infrared sensitivity, among other things.


I worked in astronomy software for a few years for a different telescope, the LSST. I am not an expert, but I was in this world enough to answer.

The short version - it converges faster (probably like 5-10x faster), but also (as everyone else said) works in different wavelengths.

You can think of a telescope as a "photon bucket." The number of photons it collects is proportional to the area of the aperture. Webb's aperture area is 25.4 square meters, while Hubble's is 4 square meters, so roughly speaking JWST will get photons about 6 times quicker than Hubble.
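Using the parent comment's own aperture numbers, and ignoring everything downstream of the bucket for the moment:

```python
webb_area_m2 = 25.4    # aperture areas from the comment above
hubble_area_m2 = 4.0

rate_ratio = webb_area_m2 / hubble_area_m2
print(f"Webb gathers photons ~{rate_ratio:.1f}x faster")  # ~6.4x

# So an exposure that takes Hubble 10 days needs, to first order, only:
hours = 10 * 24 / rate_ratio
print(f"~{hours:.0f} hours on Webb for the same photon count")  # ~38 hours
```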

But that's only the roughest measure. Once you've got the photons, what do you do with them? You send them to a detector. There's loss in this process - you bounce off of mirrors, with some small loss. You pass through band filters to isolate particular colors, which have more loss. The detector itself has an efficiency; in CCD cameras people speak of "quantum efficiency" - the probability that a photon induces a charge that can be counted when you read out the chip. That quantum efficiency depends on the photon's wavelength.

Furthermore - the longer your exposure, the more cosmic rays you get which corrupt pixels. You can flush the CCD more often and detect the cosmic rays and eliminate them, but you'll eventually brush against the CCD's read-out noise, which is a "tax" of noise you get every time you read out data.

So this all gets complicated! People spend many years characterizing the detection capabilities of these instruments, and write many pages on them.

JWST's capabilities are described here: https://jwst-docs.stsci.edu/jwst-near-infrared-camera/nircam...

HST's camera is more complicated to characterize, partly because it's older. Radiation has damaged and degraded many of the components so they have a lot of noise. The details of how this works are at the edge of human knowledge, so we don't have a great model for them. From the STIS handbook:

    Radiation damage at the altitude of the HST orbit causes the charge transfer efficiency (CTE) of the STIS CCD to degrade with time. The effect of imperfect CTE is the loss of signal when charge is transferred through the CCD chip during the readout process. As the nominal read-out amplifier (Amp D) is situated at the top right corner of the STIS CCD, the CTE problem has two possible observational consequences: (1) making objects at lower row numbers (more pixel-to-pixel charge transfers) appear fainter than they would if they were at high row numbers (since this loss is suffered along the parallel clocking direction, it is referred to as parallel CTE loss); and (2) making objects on the left side of the chip appear fainter than on the right side (referred to as serial CTE loss). In the case of the STIS CCD, the serial CTE loss has been found to be negligible for practical purposes. Hence we will only address parallel CTE loss for the STIS CCD in this Handbook.

    The current lack of a comprehensive theoretical understanding of CTE effects introduces an uncertainty for STIS photometry.
Now - this was all about how many photons you collect. When humans look at an image, they also care a lot about how fine the details are on it. This has to do with the resolution of the telescope's imaging systems. Resolution is limited by the number of pixels on the detector, and (to a much lesser extent) by the optical train of the telescope - the aberrations and distortions introduced by mirrors that focus light onto the detector's pixels.

Hubble has a high-res camera, and a separate wide-angle camera. Hubble's high-res camera actually outperforms JWST - it can resolve down to 0.04 arcsec, while JWST's can go to around 0.1 arcsec. But JWST's camera has a much wider field of view.


The Hubble image of SMACS was a 10 day exposure, the Webb image was a 24hr exposure and is far superior.


I'm no expert either, but I imagine that high exposure times come with more motion blur. So just cranking up exposure time does not necessarily result in better pictures.


Here's how exposure time works for JWST: https://jwst-docs.stsci.edu/understanding-exposure-times


There's effectively no motion blur visible.

The most pronounced effect might be parallax of nearby stars, out to thousands of light-years at the outside. That would be observable in images taken at opposite sides of Earth's orbit around the Sun, a baseline of about 300 million km (186 million miles). Even that will be phenomenally small, too small to be observable for most objects within our own galaxy (the Milky Way), let alone the distant objects JWST is most concerned with.

From Wikipedia:

In 1989 the satellite Hipparcos was launched primarily for obtaining parallaxes and proper motions of nearby stars, increasing the number of stellar parallaxes measured to milliarcsecond accuracy a thousandfold. Even so, Hipparcos is only able to measure parallax angles for stars up to about 1,600 light-years away, a little more than one percent of the diameter of the Milky Way Galaxy.

The Hubble telescope WFC3 now has a precision of 20 to 40 microarcseconds, enabling reliable distance measurements up to 3,066 parsecs (10,000 ly) for a small number of stars.[10] This gives more accuracy to the cosmic distance ladder and improves the knowledge of distances in the Universe, based on the dimensions of the Earth's orbit.

https://en.wikipedia.org/wiki/Stellar_parallax
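A quick sanity check of the quoted WFC3 figure, using the textbook relation (parallax in arcseconds = 1 / distance in parsecs):

```python
def parallax_microarcsec(distance_parsec):
    """Annual parallax angle: p[arcsec] = 1 / d[parsec], converted to microarcsec."""
    return 1.0 / distance_parsec * 1e6

# 10,000 light-years is about 3,066 parsecs (the distance quoted above):
p = parallax_microarcsec(3066)
print(f"{p:.0f} microarcsec")  # ~326 -- roughly 10x WFC3's 20-40 uas precision
```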

JWST's optical acuity is roughly similar to Hubble --- despite the larger mirror surface, it's using longer wavelengths of electromagnetic radiation, with lower resolving power.

Movement of the JWST itself is kept to an absolute minimum for obvious reasons. It would simply be unusable as a telescope if this weren't the case.

Absolute motion of objects being imaged ... also isn't a factor, as even at JWST's maximum resolution the smallest pixels on an image still span tremendous distances. It's possible that a nearby (neighbouring-galaxy) nova event might generate observable motion over days or weeks, but even that is unlikely. The interesting stuff in that event is actually the changes in brightness and evolution of light emissions, for the most part.

In the case of the Carina Nebula image, 8,500 light years distant (that is, astronomically near), the individual dust segments are light years in length. The distance from the Earth to the Sun is roughly 1/64,000th that distance --- too small to visualise in those images. The individual stars shown are not dots or disks, but points, whose apparent size is a matter of diffraction and saturation effects on the JWST itself.

Even where there might be any movement, individual images are composed of multiple exposures and "stacked" to take median observed signal strengths. This is, in a way, to eliminate motion effects, but the moving entities are cosmic rays which create random signatures on the sensors of JWST, and not movement of the telescope or its targets themselves.


Motion of what?


Minor changes to the satellite position? Vibration from some hardware? I dunno but the parent asks a question I ask too.

When you do long exposure, any kind of movement, even very small, can degrade your image.

How JWST handles movement during long exposure is a good question. Same with hubble.


Long-exposure images aren't actually one continuous exposure. You take a ton of individual images and composite them using known reference points.


Well I mean the JWST is in orbit both around the L2 point and the sun. Its sensitive equipment must also be facing away from the sun. So there's a lot of movement going on out there.


everything everywhere all at once


I'm sorry but no no no. These telescopes are tracking the objects they are imaging specifically to avoid imaging issues from motion. This isn't some dude in the backyard with an alt-az scope bought from a Sears catalog.

I really hope you were trolling with this response


Downvote me to hell, but as a person who has zero understanding of what differentiates Hubble from Webb, the pictures alone just aren't doing it for me. I was excited to see something completely new given 30 years and 10 billion dollars and instead I feel like I'm seeing what looks like an enterprise upgrade and feel slightly disappointed.

What am I missing?


TBH, I don't think the photos were ever going to be that much more groundbreaking to your average person, given the fact that the angular resolution of the telescope is roughly equivalent to the Hubble.

It's got a much bigger mirror, so why is this the case? Well, it's because the Webb works primarily in the longer, infrared part of the spectrum, not the visible. Resolution is related to the diameter of the mirror and the wavelength being studied - the longer the wavelength, the bigger the mirror you need to achieve the same sharpness.

But working in the infrared part of the spectrum means that Webb can look further back into the past, because the oldest light created by the earliest galaxies has redshifted out of the visible spectrum because of the expansion of the universe. So we'll be able to collect and see much older light from much younger galaxies with Webb that Hubble literally cannot detect because of this redshift.

On top of this, infrared, being a longer wavelength than visible light, allows us to see through dust clouds more readily. Notice how many more stars are visible in the Carina Nebula comparison.

Lastly, the Webb has other instruments such as a spectrograph that allows us to determine the chemical composition of distant objects (such as planets). We can point it at an exoplanet and determine, say, if it has water in its atmosphere.

In total, it means marginally better photos for the general public, but a great deal of new data for scientists that should greatly further our understanding of the universe.


Webb isn’t optimized to produce maximally pretty pictures, because most of Webb’s groundbreaking science is not necessarily going to involve pretty pictures. For example, Webb will tell us otherwise-unknowable things about the atmospheric composition of exoplanets—but it will not produce stunning photos of them (too small and far away to look like more than a pinpoint). Webb will tell us discipline-defining things about the cosmological conditions in the very earliest years of the universe—but that doesn’t mean it can show us pictures of the earliest galaxies in the same close-up detail as it could of a nearby galaxy. The latter image, while more breathtaking for us laypeople, would not tell us much of anything new.

With that said, NASA is not unaware of the PR value of pretty pictures (they weren’t the point of Hubble either!) and I have no doubt that we will be getting plenty of them.


It took a while even once Hubble was fixed for people to figure out the processing to extract the prettiest pictures from it.

To explain that concretely: Hubble was launched in 1990 and was fully functional once it got its eyeglasses in 1993. But it wasn't until April 1995 that Jeff Hester was studying photo evaporation in the Eagle Nebula and, motivated by studying the concentration of different molecular gases, created a color image by mapping the narrowband SII, Halpha, and OIII line filters to RGB (a false color image, called SHO or the 'Hubble palette' by astrophotographers) -- creating the iconic "Pillars of Creation" image https://en.wikipedia.org/wiki/Pillars_of_Creation#/media/Fil... .

Hubble's large aperture and freedom from atmospheric distortion and light pollution obviously contribute greatly to the image -- but much of the purely aesthetic beauty of the image, beyond the target, comes from the process and processing choices, as illustrated by the many lovely images of the same object created by amateur astronomers whose processing follows in Dr. Hester's footsteps. E.g. https://www.astrobin.com/lglsd8/ https://www.astrobin.com/i1wffo Today, SHO images of many targets are produced by advanced amateurs with relatively inexpensive equipment, resulting in many breathtaking images of a sort that never existed before these techniques were popularized by the Hubble telescope. (random example: https://www.astrobin.com/fzp6u2 )

By the same token the JWST likely has locked inside it a tremendous potential for images which are both intellectually and aesthetically pleasing, waiting to be unlocked through the skill and practice of people working with the data and their discovery of targets best matched to the instrument and those processing techniques.

Targets which are likely to be particularly aesthetically stimulating (as opposed to only intellectually stimulating) are also only a portion of what gets studied. A differential spectral measurement showing an oxygen atmosphere won't be much to look at-- but it will have a tremendous intellectual beauty.

Maybe in the future we'll see one of the billionaire spacefarers partner up with some amateur astrophotographers to launch some modest equipment optimized for making aesthetic images (e.g. using optical designs that are free of diffraction spikes, like refractors or SCT reflectors). Who knows -- they might also make some interesting scientific discoveries, because it's hard to study the aesthetic beauty of the universe without finding intellectual beauty or vice versa.

It might also be that processing techniques from JWST NIRCam images help terrestrial astrophotographers make better images. There are some reasonably large windows of NIR spectrum that we can image from earth -- e.g. J-band from 1170nm to 1330nm has good atmospheric transmission. And there is a lot in favor of terrestrial imaging in J-band: light pollution is much less there, wavefront distortion from seeing is reduced, and scattering (which follows the inverse 4th power of wavelength) is vastly lower. As a result you can even image stars in the daytime in J-band. The big barrier is sensors, because silicon sensors are blind past about 1100nm. The sensors used by JWST's NIRCam cost about $350k each and have to run at cryogenic temperatures. But sensor technology is improving (e.g. https://www.qhyccd.com/qhy990_qhy991/ QHY990 is more like $24k), and JWST might help drive along development by finding targets and processing techniques that could also be applied on earth, just as happened with Hubble SHO.
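To quantify the scattering point (550 nm and 1250 nm are just representative visible and J-band wavelengths):

```python
def scattering_ratio(short_nm, long_nm):
    """Rayleigh scattering intensity goes as wavelength^-4, so the longer
    wavelength scatters (long/short)^4 times less."""
    return (long_nm / short_nm) ** 4

print(f"~{scattering_ratio(550, 1250):.0f}x less scattering in J-band")  # ~27x
```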


With respect, the goal of a major scientific project is not to impress the layperson with pretty pictures.


Though of course the pretty pictures really help with increasing the appetite from the public for spending on projects like this.. and capturing people's imaginations is a necessary part of the long game.


I can understand this sentiment, but I'm excited by the results we won't see: chemical spectra.

Getting readings from the atmospheres of exoplanets will give us an idea of what population and percentages could harbor life. We may even get a whiff of some tell tale signs of industry, and that would change life on earth forever.


Not an expert. But we are inside a galaxy ourselves, and it's not easy to leave this place. The scenery around us gets old quickly once we've seen it for the first time, and it will probably stay that way until we visit another part of the universe to change the view.

Based on my understanding of astronomy, the real research starts when scientists zoom way in, thus the "enterprise upgrade" (increased resolution, I assume?) is exactly what they're looking for.

Those published pictures are probably just for show (/to prove that the taxes you paid have been used on a real project).


What you're missing is the science and all the new data we'll find with an infrared telescope, which will come down the road. Yes it takes cool pictures, but that's just a bonus. JWST's real function is to facilitate hard science.


The JWST is only now entering science operations. What you see is pretty pictures taken as part of the commissioning and PR efforts of NASA.

The actual science is yet to come, but will likely not look as spectacular to the layperson.

Edit: Here is an overview of the science that JWST is going to be doing in Cycle 1: https://www.stsci.edu/jwst/science-execution/approved-progra....


Detecting alien life may now become reality: JWST can analyze the composition of exoplanet atmospheres. If we can detect any technosignatures, then we've made the discovery of the millennium.


I thought this comparison was pretty impressive:

https://twitter.com/Batsuto_/status/1546900387931766784


> What am I missing?

Several PhDs in astrophysics.

They didn't spend 30 years and $10B for a big JPEG, don't worry.

These pics are gimmicks sent to the public as a "see what we did". It's like if Armstrong personally brought you back a moon rock, you'd be like "yeah cool that's a rock" because you don't have the instruments to analyse it nor the knowledge to know what to even look for.


The TLDR is that Webb can see much further across the universe (and thus much further back in time) and at higher resolution than Hubble, and Webb's primarily infrared sensors can see through nebulae, space dust and other obstructions better. There are numerous videos on the differences, but here are some I found pretty good:

1. Scientific American comparing how far back in time Webb can see vs Hubble: https://youtu.be/nBDHqquK_8k&t=2m8s

2. NASA scientist reviewing Webb’s Carina Nebula image, explaining what Webb is unveiling for the first time, and comparing to the same image from Hubble: https://youtu.be/3y6iWi95ypc&t=2m17s

3. Good overview of Webb and its differences from Hubble: https://youtu.be/JzDWpvtDJ9g


These are just the first few images. Give it time.


That's somewhat fair, but Hubble and JWST have fundamentally pretty similar resolutions so JWST will never get hugely better.

What JWST can do is show new things that have never been seen before, but obviously it's a bit hard to schedule that sort of photos.


This thing can see through dust. That's pretty cool, don't you think?


Now build a zoomable full-res version. Because I am spoiled and want the internet to do things for me.



Well, I found my next set of ultrawide wallpapers for my 49" monitor :) Thanks!


I made a full-res version for the Carina Nebula. Not sure how to make it zoomable on desktop (it's zoomable on mobile).

https://hubblevwebb.tiiny.site/


I made this site and I released an update to allow zoom! Let me know what you think!


Not a UI or web guy, but scaling the slider with zoom might do the trick?


Why is there so much more lens flare on Webb vs Hubble?

It seems to degrade the photos taken much more than Hubble's.



This is interesting, thank you - going to pass this along to friends who were curious and got less informative answers :)


The segmented design of the mirror creates diffraction effects vs. Hubble's single mirror.


I'm actually most looking forward to seeing a picture of our planets. I wonder what kind of resolution we'll get of Jupiter and Mars in particular.

Also curious about what the closest stars to our solar system would look like. Of course it also makes me wonder what would we be able to see given a 100x increase in aperture. Like for example if we could send up something extremely large on Starship. Would we be able to image planets in our local group? Exciting!


We've got very good pictures of our own planets thanks to probes that did flybys. Maybe I'm way off but I'm pretty sure JWST won't be able to beat those. I also wonder if it can even handle that much light, I know it can't look at the moon or it will burn out.


Oh for sure it won't. I'm more interested in that as a tech demo.


Imaging Phattie/IX would also be pretty cool. It's been a while since we've discovered a real big-boy planet.

https://en.wikipedia.org/wiki/Planet_Nine


I read at some point that a piece of dust hit the telescope and potentially damaged it. I'm guessing that it turned out not to be a big deal or maybe they were able to work around it? Does anyone have any insight? The fact that they're releasing good pictures and not mentioning it seems like a good sign to me.

Searching for this stuff is kind of hard (information overload), so I'm wondering if anyone here has more up to date info.


https://blogs.nasa.gov/webb/2022/06/08/webb-engineered-to-en...

Summary: they can see it in the data, but events such as this were expected and part of the lifetime of the telescope.


This is perfect! Without this context it's hard to appreciate how much better Webb is!


Just needs to include exposure time differences and this is perfect. Glad to witness the power of this fully armed and operational battle station.


It's my understanding that Webb used much shorter exposure times than Hubble, correct?


Yes, hours vs days, which makes them a lot more impressive if you know a bit about photography.

I wanna see what this thing is able to do with a 10 days exposure. Let it loose.


>I wanna see what this thing is able to do with a 10 days exposure.

Ever seen a solid white square? =)

As with all things, it totally depends on what's being imaged. Exposing the Orion Nebula for 10 days would result in a totally over exposed image looking like a solid white square.


I trust astronomers to know how to set the exposure.

I want the Webb focusing 10 days on a patch that looks pitch black on Hubble’s 10 days deep field


Exposure time on Webb is complicated. The sensors are different, the sampling techniques are different, and you can get the on board systems to do integration within sets of exposures for you.

https://jwst-docs.stsci.edu/understanding-exposure-times


To touch the stars. To reach a galaxy. To dream of afar. And in the deepest space, see our ancestry.


It's hard to look at the Webb images without thinking that there must be life and technology out somewhere else in the stars.


Reading the selfish gene is making me think the same thing.


Why does that only come from Webb images? You weren't getting that same sensation looking at the Hubble images before? If that's true, then welcome to the club! It took you a bit longer, but we're happy you're here now. ;P


> Why does that only come from Webb images?

Re-read comment.


"It's hard to look at the Webb images without thinking..."

What am I missing?


A charitable interpretation.

You're making the assumption the poster meant they didn't have this feeling before. They might just be repeating that, once again, it's difficult to do this with Webb. Hubble is irrelevant. Anything else is irrelevant. They're talking about the pictures here in front of us.


cheap shot?

I was absolutely agreeing with the sentiment. I clearly asked if the feeling was there before as well or if it was just from the new images.

Thanks for your charity though


A question to the experts here: What will be the most exciting things to be explored within the next months? What insight could come out of it, which open questions could be answered? Hints to life on other planets by observing specific spectrums of specific ones? Could certain open questions about the early universe be decided? Or something else?


What you find most exciting is a subjective thing. I don't have a clear answer, but you can browse through the list of accepted proposals in cycle 1 and decide for yourself: https://www.stsci.edu/jwst/science-execution/approved-progra....


Aesthetically, I like the Hubble images better, they are more painterly and colorful. However there is no doubt that the JWST contains way more information and is exponentially more valuable.


Ummm, pretty much all of those images are artificially colored so your comment basically just judges the person postprocessing the images and applying colors for detected wavelengths.


I'm glad someone here said this about them. I grew up seeing Hubble style images (non-IR) so I get the warm fuzziness they have. For me the one thing that gets me are the big shiny stars that seem to have big streaks going everywhere. They feel like lens flare in the photos.


It's kinda like listening to music on vinyl vs. digital.

High quality digital is clearly superior sonically, but vinyl has a dullness around the edges that is so aesthetically pleasing.

I agree that the Hubble images are just pleasant to look at, warm and wonderful.


Will Webb ever be used to image our own planets?

What happens when it's pointed at Mars?

Ah found answer here

https://space.stackexchange.com/questions/57492/can-james-we...


Bug report: On Firefox the difference wiper thing doesn't appear for the last image (Carina Nebula), it only shows the full Webb image.


Working on latest FF on Mac. The Hubble image is just smaller.


Funny enough, that one didn't work on Chrome but worked in Firefox Nightly, albeit as a distorted version that only showed the top left corner of it... I'm not sure Hubble shot the full version?

https://i.imgur.com/oBK3sWE.png


I'm seeing the same and I assumed that Hubble hadn't shot the full version.


on Brave, instead it's the polar opposite, only the Carina Nebula works

Edit it could be because of my default zoom level https://news.ycombinator.com/item?id=32076048


Strange, when I compare the two images of Stephan's Quintet [1], it appears much more "progressed" in the new WEBB image [2]. But that should be impossible.

[1]: https://en.wikipedia.org/wiki/File:Stephan%27s_Quintet_Hubbl... [2]: https://johnedchristensen.github.io/WebbCompare/img/Stephans...


Overlaying them, it seems the JWST images have brighter galaxy edges and gas. This makes the galaxies look bigger, which in turn could make it appear as if the galaxies are closer together. Is that what you mean with 'progressed'?


Yes, I think you're right - also, the linked HUBBLE image from wikipedia is rotated a bit, which increases the effect that two of the 5 galaxies appear to be closer together in the WEBB image.


Hmm, for some reason I prefer Hubble's image of Stephan's Quintet over JWST's. Though, that is purely from an aesthetic perspective. I am sure JWST's is much more impressive from a scientific standpoint.


Does someone have any intuitive explanation of why Hubble images of bright stars seem to have a cross-shaped lens-flare effect, while JWST's have six spikes?

It might be because of a different post-processing algorithm, or some phi-related magic, just curious a bit.


I believe the six spikes from JWST are because of its hexagonal mirror segments.


This was my guess as well, but if it's correct, I don't understand why the Hubble images show 4 spikes. I thought Hubble's main mirror was circular, not a square.


As usual, wiki has the answer:

https://en.wikipedia.org/wiki/Diffraction_spike

>"...four spider vanes supporting the secondary mirror. These cause the four spike diffraction pattern commonly seen in astronomical images."


> This makes the Hubble telescope even more impressive in my eyes. Built 50 years ago with presumably 60 year old tech.

> > Hubble telescope was funded and built in the 1970s by the United States space agency NASA with contributions from the European Space Agency. Its intended launch was 1983, but the project was beset by technical delays, budget problems, and the 1986 Challenger disaster. Hubble was finally launched in 1990.

I commented on this other thread: https://news.ycombinator.com/item?id=32074242


The sliders break when page zoom is anything other than 100% :/


I’m curious what’s going on in the upper left area of the Carina Nebula image. The dust can’t have actually cleared out that much since the Hubble shot was taken, could it?


I'm not an expert, but from what I came across online earlier, the Hubble telescope sees more of the visible/UV spectrum than the Webb telescope. So it may just be a difference in what's captured.

This site has a diagram of the spectrum that shows which portions are covered by each telescope, as well as some video clips comparing photos captured by Hubble and Webb. The first video of the Lagoon Nebula (M8) demonstrates what I'm saying pretty well.

https://webbtelescope.org/webb-science/the-observatory/infra...


Lots of visible light is reflected in the Hubble image from the dust causing it to look like it does. As others have said, the JWST does not suffer from that as it "sees" past those frequencies revealing new details instead.


I believe the Hubble shot is more in the visible spectrum whereas the JWST images are in infrared so there are structures in each shot that don't necessarily show up in the analogous image.


With infrared you can see through the dust.


Indeed - that's one of the primary reasons to build the JWST: dust and gas block our view in various cases, so we want to take images in wavelengths that are more transparent to that debris.


Considering these are just the initial "test" images there is going to be some amazing stuff to come over the years. Can't wait.


Advantages of IR incredibly apparent in the last pic.

Very nicely done!


I do like that this is generating interest and optimism for science again, but I have yet to hear a good answer on what new insight we can expect from the lifetime of the telescope. Does someone have a good list of questions we're hoping to get better answers for?

At this point I like it even for just brightening the news cycle anyway.


It's a bit ironic that to me, the layman, Hubble's space images have been so ubiquitous that Webb looks kind of... posterized. For example, in that last comparison of the Carina Nebula, Hubble has that ethereal quality that so many space pictures do, whereas Webb's crispness reminds me almost of a drawing.


So, I have a few questions.

1. Can the telescope be pointed in any direction? (Of course, orthogonal to the sun's rays; I understand the need to cool it down.)

2. If it can be pointed, I am assuming some boosters would be used to pivot it. How long do these last?

3. Is there any info on the orbit? Can the orbit degrade?

4. All the fluid / gas required to correct / point, can it be refilled?


1. It can be pointed in any direction, just not at any time. It actually can't look directly away from the sun either, just to the sides. See https://jwst-docs.stsci.edu/jwst-observatory-characteristics...

2. Basically any satellite in use today uses spinning masses that it can speed up or slow down to change where it's pointing. Angular momentum is conserved, but the pointing direction isn't fixed. Eventually, due to uneven forces (e.g., from the sun), these reaction wheels become "saturated" and some propellant is used to slow the wheels down. Current estimates put the fuel running out in more than a decade.

3. It's in an orbit around L2, on the far side of the Earth as seen from the Sun. This orbit is unstable, and some of the fuel will also be used to maintain it.

4. There is no refilling mechanism built in. It's likely that once the propellant is out, the mission will be done. There is the possibility of another spacecraft grabbing onto James Webb and pushing it for station keeping, but it wouldn't be able to refill it. One major innovation with James Webb, though, is that its refrigerator is closed loop, as otherwise running out of coolant would be the mission-limiting factor.
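To make the saturation point concrete, here's a toy back-of-the-envelope simulation of the process described above. All numbers (disturbance torque, wheel capacity) are made up for illustration, not JWST's actual specs:

```python
# Toy model of reaction-wheel momentum buildup and thruster desaturation.
# A steady disturbance torque (e.g. solar radiation pressure) gets absorbed
# into the wheels; when they near saturation, thrusters unload them.

SATURATION_LIMIT = 75.0   # N*m*s a wheel can store before unloading (made up)
SOLAR_TORQUE = 1e-4       # N*m of steady disturbance torque (made up)
SECONDS_PER_DAY = 86_400

wheel_momentum = 0.0
fuel_burns = 0

for day in range(365):
    # Disturbance torque integrates into stored wheel momentum.
    wheel_momentum += SOLAR_TORQUE * SECONDS_PER_DAY
    if wheel_momentum >= SATURATION_LIMIT:
        # Fire thrusters to counter the torque while the wheels spin down.
        wheel_momentum = 0.0
        fuel_burns += 1

print(f"Desaturation burns in one year: {fuel_burns}")
```

The point is that every burn spends propellant, so the total fuel budget bounds the mission lifetime even though the wheels themselves never wear out.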


For 2, JWST uses reaction wheels to manage its orientation. These are just heavy wheels it spins faster or slower to change its own angular momentum.

https://jwst-docs.stsci.edu/jwst-observatory-hardware/jwst-m...


Very cool to see several galaxies that were entirely invisible to Hubble due to high redshift show up brightly to JWST.


The new scope appears to be capable of wonderful images, and no doubt many new discoveries.

Too bad, then, about the crappy colorizing/outlining for the 'so pretty' crowd. I await the site that simply shows (frequency-shifted) images. Any colorizing should have a 'legend' describing its purpose.


The Carina Nebula is the most amazing photo. The level of detail now shown by JWST is breathtaking.


I think so too. I even made another comparison website for it because OP's website doesn't use the full-resolution images. Each image is about 50 MB, so it will take a while to load.

https://hubblevwebb.tiiny.site


My God, it's full of stars!


Interesting that while they're certainly more detailed they also look "flatter" than the Hubble images. Is that due to differences in hardware or different choices in post-processing?


How much and what type of post-processing is applied to these kinds of images?


Most importantly, the observed light frequencies have been mapped to the visible spectrum.


Old and new images alike already looked highly processed, that Photoshop look, which I think some of us got a little jaded on.

What's more interesting to me is what we can learn about exoplanets from this mission:

https://www.nasa.gov/image-feature/goddard/2022/nasa-s-webb-...


Isn't that look mostly a mapping of the observed wavelengths into the visible spectrum? I think at least for some of those images the different colors correspond to different chemical elements, which makes for pretty images _and_ some extra data that's interesting. For purposes of comparison with Hubble images it would also make sense to process them in the same way.

But apart from that, there's the raw data, which is surely somewhere in the public domain, but that's way less useful in communicating those achievements to the public.
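For anyone curious what "mapping observed wavelengths into the visible spectrum" looks like in practice, here's a minimal sketch of the usual chromatic-ordering convention: the shortest IR band goes to the blue channel and the longest to red. The filter names and data are hypothetical stand-ins, not a real pipeline:

```python
import numpy as np

# Hypothetical monochrome exposures through three IR filters, ordered from
# shortest to longest wavelength. Real pipelines start from calibrated FITS
# frames; these are random stand-ins.
rng = np.random.default_rng(0)
f090w, f200w, f444w = (rng.random((4, 4)) for _ in range(3))

def false_color(short, mid, long_):
    """Chromatic ordering: shortest IR band -> blue, longest -> red."""
    rgb = np.stack([long_, mid, short], axis=-1)
    return rgb / rgb.max()  # normalize into [0, 1] for display

img = false_color(f090w, f200w, f444w)
print(img.shape)  # (4, 4, 3)
```

Actual releases also involve stretching, white balancing, and compositing narrowband filters that trace specific elements, but the red-means-longer-wavelength convention is the core of it.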


Are the four vs six point starbursts an artifact of the different lens / mirror designs? The Webb telescope has hexagonal mirrors, but not sure about Hubble.


Did anybody make a comparison for rough estimation on the number of galaxies in the observable universe based on hubble vs webb?


Thank you so much!!! I really wanted to see the difference side by side and this was an even better presentation. Really cool!


Just to be clear I did not create this website, I just saw it on reddit. All credits to the original author: https://www.reddit.com/r/woahdude/comments/vxeeqo/i_made_a_t...


These need to be adjusted for brightness... The Hubble ones are a lot darker, which makes things harder to spot.


I just filed an issue on the project. (Gotta look that gift horse in the mouth...)


Is it absurd to think that maybe the Carina Nebula's shape has visibly shifted a bit since the last photo?


I love this. But the Carina Nebula doesn't work on Firefox for Android. It just displays the JWST image.


Changing the orientation of my phone from portrait to landscape while on the site somewhat solved this.


For me it shows the top half of the Hubble version, but not the bottom.



That makes sense then. I have two suggestions if the person who made this is reading: maybe fill out the Hubble image with black so it lines up with the JWST image, which might make the interface less janky? Or if someone knows of a larger image from another telescope (Spitzer?), that could be a more interesting comparison for that particular observation.


Amazing! Would be interesting to also compare with earth-bound telescopes, to really appreciate the progress.


I'm slightly surprised they haven't gotten rid of the stellated hexagon artifacts with software.


And replace them with what?


Nothing, that's kinda the point.

I'm not talking about just deleting the pixels. If you know your measurement instrument introduces artifacts, you can move it around or use some ground-truth image until you've mapped the artifacts, then remove them by deconvolution.
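As a toy illustration of the deconvolution idea (this is generic Richardson-Lucy iteration, not JWST's actual pipeline, and the cross-shaped PSF below is invented for the example):

```python
import numpy as np
from scipy.signal import convolve2d

def richardson_lucy(observed, psf, iterations=30):
    """Iteratively deconvolve an image given a known point-spread function."""
    estimate = np.full(observed.shape, 0.5)
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = convolve2d(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)  # avoid division by zero
        estimate *= convolve2d(ratio, psf_flipped, mode="same")
    return estimate

# Toy example: a single bright "star" blurred by a small cross-shaped PSF.
truth = np.zeros((15, 15))
truth[7, 7] = 1.0
psf = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], dtype=float)
psf /= psf.sum()

observed = convolve2d(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
# The restored image concentrates flux back near the true point source.
```

The catch for the diffraction spikes is that deconvolution only reshuffles the recorded light; pixels saturated by a bright star, or detail below the diffraction limit, can't be recovered this way.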


It's striking how much more flamelike the structures appear to be, with the added resolution.


This is a wonderful way to visualize side by side images like this.

Great work, it feels smooth and intuitive.


The post-processing in the webb images is hilarious. Marketing at its best.


It’s as if they remastered an old video game. So much more detail!


Notice how the Southern Ring Nebula has expanded


Just WOW! I always feel so tiny when considering the absurd dimensions of space, made shockingly vivid by these images!


Hubble = iPhone 4S; Webb = Galaxy S22 Ultra


Damn near unusable on mobile. Cool.


what’s the practical application ?


Of astronomy?


of the James Webb Telescope


I mean, it's pretty much asking what the practical applications of astronomy are. It's the marquee instrument now.


feel free to share your thoughts


seems pretty cool, lots of tech n stuff.

How about you?


pretty illustrations, not much more


yeah, like who cares about heliophysics


born too early...


For the machines that will replace us.


Pretty expensive Insta filter.


Not trying to undercut this incredible achievement, but I'm curious whether we could use AI techniques to upscale the Hubble images to achieve results similar to the Webb telescope's. Has this been tried before?


AI upscaling works if you want a prettier picture, but not if you want to actually know more. AI can't magically conjure information that isn't there, so if you upscale it has to invent details to fill in. Which is fine for some use cases, but not for science or truth-finding.


It’s a valid thought, but it would really be like trying to take pictures of the sky from underwater and using AI to make it look like it was taken from out of water.

This means: the AI has to predict what it is supposed to look like and for that we would need out of water pictures as reference in the first place which we didn’t so far!

And then: even if we have these new out of water pictures as reference, the AI generated ones would still not show what is real, but instead a fiction. The fiction can look believable but it cannot be studied to derive facts from it. It’s like trying to study an AI generated language.

This sounds like my friend who literally believes that buses will go extinct within 3-5 years as every vehicle will self drive. It’s not thought all the way through.


Like, I guess you could run images and tensors through a neural net and see what the weights look like. That might tell you something that the endless pool of astro-grad students missed. Like, maaaaaaaybe you might have backed out dark matter from some strangely weighted neuron, or there might be something lurking in the noise that was missed. But, I really really doubt it.


For what purpose? If you wanted glorious and infinitely zoomable imagery without much concern for accuracy, couldn't you just design that? Marvel movies do that now. We already have artists' impressions in space articles and documentaries.


Sure you could try, but without getting real higher fidelity photos you’d never know how realistic the synthetic images are.


No



