I’m getting the impression that some people don’t know what they’re looking at. It says “Background: Two Micron All Sky Survey” which you can read about here [1]. This is not a live image. It’s just showing you where it’s pointing, which isn’t so meaningful for most of us. (It’s random-looking stars.)
You can read interesting details about the current observation at the top, though. Currently:
* A census of high-redshift kpc-scale dual quasars
* A 49 minute, 55 second observation.
There’s a link to the research proposal [2]
Apparently it’s a six-month survey of dual (possibly lensed) quasars. Gravitational lensing can cause magnification, distortion, and duplication of the image of whatever is behind it, so this is a way to learn more about very distant (early) quasars.
A quasar is a supermassive black hole at the center of a galaxy, so this seems like a way to take lensed pictures of a lot of early galaxies?
Webb won't ever provide a recent view since the images belong to the research group and are only made public when the results are released.
Also, Webb can't provide a video feed since it is taking long exposures, and it probably only sends results back to Earth once an image is finished. The images are taken in the infrared and only look good after processing.
To expand on this, Webb uses the Deep Space Network (DSN) to communicate with us. It can’t stream data back 24/7. There are generally three contacts per day, each lasting a few hours, but I believe this depends on the scheduling of contacts with other missions that also use the DSN.
Also, the science data that is sent back is a stream of packets covering everything taken since the last contact. The packets are arranged for efficient transmission. One of the first steps of science data processing is to sort the packets into exposures. Often the packets for an exposure are split among multiple SSR (solid-state recorder) files, and sometimes there are duplicate packets between SSR files (data sent at the end of one contact is repeated at the beginning of the next). Only when the processing code determines that all expected packets are present, using clues from other subsystems, can the next step (creating the uncalibrated FITS) begin.
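Very roughly, that reassembly step might look like the sketch below. The packet layout and file contents here are made up purely for illustration (nothing in this thread describes the real formats); it just shows the idea of merging SSR files, dropping repeated packets, grouping by exposure, and only passing along exposures that look complete.

```python
from collections import defaultdict

# Hypothetical packet: (exposure_id, sequence_number, payload).
# Real JWST packets have their own headers; this only illustrates the idea.
def assemble_exposures(ssr_files, expected_counts):
    """Merge packets from several SSR dumps, drop duplicates,
    and return only the exposures that are complete."""
    by_exposure = defaultdict(dict)              # exposure_id -> {seq: payload}
    for packets in ssr_files:                    # one list of packets per SSR file
        for exp_id, seq, payload in packets:
            by_exposure[exp_id][seq] = payload   # duplicate seqs overwrite harmlessly

    complete = {}
    for exp_id, parts in by_exposure.items():
        expected = expected_counts.get(exp_id)   # the "clues from other subsystems"
        if expected is not None and len(parts) == expected:
            # Concatenate payloads in sequence order -> ready for the uncalibrated FITS step.
            complete[exp_id] = b"".join(parts[i] for i in sorted(parts))
    return complete

# Example: packets for exposure "A" split across two contacts, with one repeat.
ssr1 = [("A", 0, b"\x01"), ("A", 1, b"\x02")]
ssr2 = [("A", 1, b"\x02"), ("A", 2, b"\x03")]    # seq 1 resent at start of next contact
print(assemble_exposures([ssr1, ssr2], {"A": 3}))
```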
Ehh. I personally can’t stand all the post-processing folks do to make their results look “magazine ready”. I think the most minimal transformation possible to map the data into 0xRRGGBB would look best, ideally with a simple standardized algorithm that doesn’t allow for any “artistic license”.
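For what it’s worth, the “most minimal transformation” could be as simple as a straight min-max rescale of the raw counts into 8-bit grey packed as 0xRRGGBB. A toy sketch (the input array is made up, and a real pipeline would at least have to deal with saturation, cosmic-ray hits, etc.):

```python
import numpy as np

def minimal_map(counts):
    """Linearly rescale raw sensor counts to 0xRRGGBB greyscale.
    No curves, no per-channel tweaking: just min-max normalisation."""
    counts = counts.astype(np.float64)
    lo, hi = counts.min(), counts.max()
    scaled = np.zeros_like(counts) if hi == lo else (counts - lo) / (hi - lo)
    grey = (scaled * 255).astype(np.uint8)
    # Pack the same value into R, G and B -> 0xRRGGBB per pixel.
    return (grey.astype(np.uint32) << 16) | (grey.astype(np.uint32) << 8) | grey

# Made-up 2x2 frame of raw counts.
print([hex(v) for v in minimal_map(np.array([[10, 20], [30, 40]])).ravel()])
```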
Yes, that is what a mapping does: convert values from one domain to another. The point is to make it as simple and consistent as possible.
Are they not doing that? What is the origin of the idea that they sit on the images too long choosing a custom wavelength conversion formula? Is it just that their images look good?
I encourage you to try some raw file photography and processing. A bitstream from a sensor is not an image and there is no “correct” or “accurate” image from a captured signal.
Think of it like using a linear or log scale for a chart axis: neither is “more correct”, neither is taking “artistic license”.
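To make the chart analogy concrete: the same made-up counts stretched linearly and logarithmically give two different but equally “honest” sets of grey values, just like the same series plotted on linear and log axes gives two different but equally honest charts.

```python
import numpy as np

counts = np.array([1.0, 10.0, 100.0, 1000.0])   # made-up raw counts

# Linear stretch: preserves differences, crushes faint detail.
linear = (counts - counts.min()) / (counts.max() - counts.min())

# Log stretch: preserves ratios, lifts faint detail.
logged = np.log10(counts)
log_stretched = (logged - logged.min()) / (logged.max() - logged.min())

print(np.round(linear, 3))         # [0.    0.009 0.099 1.   ]
print(np.round(log_stretched, 3))  # [0.    0.333 0.667 1.   ]
```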
Poor example, given that many photographers shoot raw precisely because it gives them more room for artistic decisions in post. Obviously the standardized algorithm should incorporate basic factors like gamma and general phase shifting, but the idea of being able to adjust the map’s delta between arbitrary adjacent inputs is of questionable benefit to the community. It’s akin to adjusting levels via curves with many points, and it’d be hard to say folks aren’t taking artistic liberties when they do that.
IR pictures wouldn’t be RGB, just black and white (or whatever palette you like). But yes, it would be possible. For example, with FLIR thermal cameras you can also output the image as a spreadsheet. You can even choose the values as temperature (which is calculated) or just as energy (what the sensor receives).
Light isn’t RGB. We just have receptors that react to certain wavelengths. I suspect the sensors on the telescope have a range of wavelengths they are sensitive to. It would be a straightforward translation to map that range onto the visible spectrum without varying the relative intensities to accentuate certain aspects of the image.
The IR may be in a very narrow band. Visible light has different colors because there is a range of wavelengths that stimulate the different cones in our eyes, which roughly act as red, green, and blue sensors. If you shift the IR frequency up into the visible range, you would just get a luminance image (like grayscale) centered on one visible wavelength, like red.
False color imaging sometimes applies colors to different luminance levels, and sometimes it takes multiple images at different wavelengths and assigns an RGB channel to each of those wavelengths. The results are informative, but producing the best images requires some editorial/aesthetic decisions.
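The second kind (multiple narrowband images assigned to RGB channels) is basically channel assignment plus a per-channel stretch, and those two choices are exactly where the editorial decisions come in. A minimal sketch, assuming three already-registered single-band arrays (the band-to-channel assignment and the simple min-max stretch are arbitrary choices for illustration):

```python
import numpy as np

def false_color(long_wav, mid_wav, short_wav):
    """Assign three narrowband exposures to R, G, B.
    Convention here: longest wavelength -> red, shortest -> blue."""
    def stretch(band):
        band = band.astype(np.float64)
        lo, hi = band.min(), band.max()
        return np.zeros_like(band) if hi == lo else (band - lo) / (hi - lo)
    # The editorial part: which band goes to which channel, and how each is stretched.
    return np.dstack([stretch(long_wav), stretch(mid_wav), stretch(short_wav)])

# Made-up 2x2 exposures standing in for three infrared filters.
r_band = np.array([[5, 10], [15, 20]])
g_band = np.array([[1, 2], [3, 4]])
b_band = np.array([[0, 50], [100, 150]])
print(false_color(r_band, g_band, b_band).shape)   # (2, 2, 3) RGB image
```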
That's not how vision works. You see a heavily post-processed image that's very far removed from the original light that hit your retinas. There's nothing at all privileged about shifting something directly into the visible spectrum and looking at the junk that comes out. You're just making an image that your visual system isn't good at understanding. It's not pure, it's garbage. You would hallucinate things that aren't there, miss obvious things that are there, etc. For you to really comprehend something, the transformation needs to be designed.
Besides the wavelengths being outside human perception, an astronaut wouldn't see anything anyway because of the low photon flux. These pictures have very long exposure times.
[1] https://en.m.wikipedia.org/wiki/2MASS
[2] https://www.stsci.edu/jwst/science-execution/program-informa...