You are not wholly wrong! There is both a supporting structure for the mirror, AND a glass lens in front of the sensor to further flatten the incoming light.
The interesting thing about the spikes in our images is that they stay fixed in image plane coordinates, not sky coordinates. So as the night sky moves (earth rotates) the spikes rotate relative to the sky leading to a star burst pattern over multiple exposures.
Image creator here. We do dark frame subtraction, as well as many other instrumental calibrations. What you are seeing is the fundamental photon noise. Because it is statistical in nature, you can never completely eliminate it. We could have chosen to set the black point in the image at a much higher flux level, but if you raise it to a high enough signal-to-noise level that you see no grain anywhere, you miss out on so many interesting things that are still quite obvious to make out but are only 2-3 sigma above the noise.
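A toy sketch of why shot noise never calibrates out (illustrative numbers only, nothing from the real pipeline): even after you subtract the known mean background perfectly, a Poisson process leaves residual scatter of sqrt(N) counts.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical faint patch of sky: 9 photons/pixel on average.
mean_flux = 9.0
pixels = rng.poisson(mean_flux, size=100_000).astype(float)

# Perfect calibration: subtract the exact mean background.
residual = pixels - mean_flux

# The statistical grain remains, with standard deviation sqrt(mean_flux).
print(np.std(residual))    # close to sqrt(9) = 3
print(np.sqrt(mean_flux))  # 3.0
```

A feature "3 sigma above the noise" is sitting about 3*sqrt(N) counts over the background mean: clearly detectable, but still grainy if you display it.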
Image creator here. Now imagine: when the survey is done, we will be able to see even fainter objects and image an area of the sky 1000x this size.
Image creator here. This is such a massive dataset that most of the image processing had to be custom-written software pipelines. It's not really practical for every pixel to be hand inspected, so a few defects (and bright asteroids) made it through. It's really hard to decide what is a genuinely weird thing in the universe and what is some sort of instrumental effect. We try not to pre-decide what we think we should be seeing and filter for it with, say, classifiers. That leaves us with heuristics based on temporal information, size (is it smaller than a point spread function?), and other related things. Across large numbers of objects and pixels, 1-in-a-thousand or 1-in-a-million outliers are bound to occur.
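To give a flavor of the kind of heuristics I mean, here is a toy sketch. The field names, thresholds, and categories are made up for illustration; this is not our actual pipeline schema.

```python
# Hypothetical detection records (illustrative values, not real data).
detections = [
    {"id": 1, "fwhm_arcsec": 0.9, "seen_in_n_epochs": 5},  # repeats: likely real
    {"id": 2, "fwhm_arcsec": 0.3, "seen_in_n_epochs": 1},  # sharper than the PSF
    {"id": 3, "fwhm_arcsec": 1.1, "seen_in_n_epochs": 1},  # one epoch only
]

PSF_FWHM = 0.7  # arcsec; assumed seeing for this sketch


def flag(det):
    # Nothing on the sky can appear sharper than the point spread
    # function, so a "detection" smaller than the PSF is almost
    # certainly instrumental (cosmic ray hit, hot pixel, etc.).
    if det["fwhm_arcsec"] < PSF_FWHM:
        return "artifact"
    # Temporal heuristic: a real static source repeats across epochs;
    # a single-epoch detection could be an asteroid or a transient.
    if det["seen_in_n_epochs"] == 1:
        return "transient-or-mover"
    return "keep"


for det in detections:
    print(det["id"], flag(det))
```

Real classification uses many more signals than this, but the shape of the logic is the same: cuts that encode what the instrument can and cannot do, rather than cuts on what we expect the universe to contain.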
I'm glad you responded (i'm assuming you knew i wasn't criticizing the effort, but just in case -- I wasn't). I was assuming asteroid trail, but I've read that green stars can't exist and _could_ be a technosignature of "little green men". :) Your work on this is lovely. The combined effort of so many smart people over decades of work is truly heartening. Thank you.
I agree, their results are also great! We do go a bit deeper, but the big difference is the speed at which we are able to build these images. We are able to image a larger area of the sky in each exposure, and are able to collect more light. This lets us build images like this one in a few hours of observation, and build up an equivalent image of the entire southern hemisphere.
I'm the Rubin team member responsible for mapping the data into RGB images. I have been a long time reader of hacker news, but finally made an account to comment on this. I wanted to thank everyone here for their interest and taking their time to check out these images. Seeing everyone interested and engaged makes all the long hours worth it.
What range of wavelengths are in the original images? Do you produce multiple RGB images for looking at different things? c'mon, what does that entail? ;-)
The filters used for this range from near-infrared to near-UV. We used 4 different filters in all for this image (the telescope has more). In general, yes: to fully appreciate all the color information as a human, we need to generate different color combos so our eyes can pick up different contrasts.
However, what we strive for is being accurate to "if your eyes COULD see like this, it would look like this" -- to the best of our ability, of course. We did a lot of research into human perception to create this, and tried to map the color and intensity information in a similar way to how your brain constructs that information into an image.
Let me tell you, I did not appreciate how deep a topic this was before starting, and how limited our file formats and electronic reproduction capabilities are for this. The data has such a range of information (in color and intensity) that it is hard to encode into existing formats most people are able to display. I really want to spend some time doing this in modern HDR (true HDR, not tone-mapping), where the brightness can actually be encoded separately from just RGB values. The documentation on these (several competing) formats is a bit all over the place, though.
Edit:
I wanted to edit to add: if anyone reading this is an expert in HDR formats and/or processing, I'd love to pick your brain a bit!
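For the curious, the core intensity-mapping problem can be sketched with the asinh stretch of Lupton et al. (2004), a standard approach in the astronomical imaging literature. I'm not claiming these exact parameters are what we shipped; `stretch` and `Q` here are illustrative.

```python
import numpy as np


def asinh_rgb(r, g, b, stretch=5.0, Q=8.0):
    """Map three calibrated band images to displayable RGB using a
    shared asinh intensity stretch (after Lupton et al. 2004).

    One scale factor per pixel is applied to all three bands, so the
    color ratios are preserved while the huge dynamic range in
    brightness is compressed.
    """
    i = (r + g + b) / 3.0
    # asinh is linear for faint flux but logarithmic for bright cores.
    scaled = np.arcsinh(Q * i / stretch) / Q
    with np.errstate(divide="ignore", invalid="ignore"):
        f = np.where(i > 0, scaled / i, 0.0)
    rgb = np.stack([r * f, g * f, b * f], axis=-1)
    return np.clip(rgb, 0.0, 1.0)


# Tiny synthetic example: one faint pixel and one bright pixel per band.
r = np.array([0.01, 50.0])
g = np.array([0.02, 40.0])
b = np.array([0.03, 30.0])
out = asinh_rgb(r, g, b)
print(out)
```

Note what standard 8-bit sRGB formats force on you after this step: the per-pixel luminance gets baked into the RGB triplet, which is exactly the coupling that a true HDR format with a separate brightness encoding would avoid.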
I'm impressed so much thought went into how to colorize the image! Sometimes it seems like space photos are just colorized thoughtlessly, or to increase the "wow" factor, so it's great to hear how careful and thoughtful you guys were in mapping this data to color-space.