Those are diffraction spikes, caused by how the light interacts with the support structure holding the secondary mirror. Each telescope has a different pattern: Hubble, JWST, etc. I think they only show up for stars, not galaxies (an easy way to tell which is which), but I might be wrong on that (faint stars may not show them, IIRC).
This one's extra-special! The pattern is multiple + shapes, rotated and superimposed on top of each other. And they're different colors! That's this telescope's signature scanning strategy: I don't know the details, but it evidently takes multiple exposures, in different color filters, with the camera rotated to a different angle relative to the sky in each exposure. I assume there's some kind of signal-processing rationale behind that choice.
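Just to illustrate (toy angles, colors, and falloff here, not Rubin's actual cadence), superimposing a "+"-shaped spike pattern at a few rotation angles, one per filter color, reproduces the multi-colored pinwheel look:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import rotate

N = 256
# A toy "+"-shaped spike pattern: two thin perpendicular arms
# whose brightness falls off away from the centre.
y, x = np.indices((N, N)) - N // 2
r = np.hypot(x, y) + 1.0
spike = ((np.abs(x) < 1) | (np.abs(y) < 1)) / r

# Hypothetical exposure sequence: (rotation angle, RGB filter colour).
exposures = [(0,  (1.0, 0.3, 0.3)),   # "red" filter
             (15, (0.3, 1.0, 0.3)),   # "green" filter
             (30, (0.3, 0.3, 1.0))]   # "blue" filter

rgb = np.zeros((N, N, 3))
for angle, colour in exposures:
    rotated = rotate(spike, angle, reshape=False, order=1)
    rgb += rotated[..., None] * colour

plt.imshow(rgb / rgb.max())
plt.title("Superimposed rotated spike patterns")
plt.show()
```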
edit: Here's one of the bright stars, I think it's HD 107428:
This one has asteroid streaks surrounding it (there's a toggle in one of the hidden menus), which gives a strong clue about the timing of the multiple exposures. Each asteroid moves in a straight line at a constant speed, so the spacing and colors of the dots show what the exposure sequence was.
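With made-up numbers (not measurements from the actual image), the reconstruction is simple: constant speed means position along the streak is linear in time, so the spacing ratios between dots are the ratios of the time gaps between exposures:

```python
import numpy as np

# Hypothetical dot positions (pixels) along one asteroid streak, in
# exposure order. Constant speed => position is linear in time, so
# dot spacing is proportional to the time gap between exposures.
positions = np.array([0.0, 12.1, 24.3, 60.8, 72.9])

gaps = np.diff(positions)
print(gaps / gaps[0])  # relative exposure gaps; a ~3x jump would
                       # suggest a longer pause (filter change?)
```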
I think this quote explains the reason they want to rotate the camera:
> "The ranking criteria also ensure that the visits to each field are widely distributed in position angle on the sky and rotation angle of the camera in order to minimize systematic effects in galaxy shape determination."
https://arxiv.org/abs/0805.2366 ("LSST [Vera Rubin]: from Science Drivers to Reference Design and Anticipated Data Products")
No, they happen for absolutely every externally generated pixel of light (that is, not for shot noise, or fireflies that happen to fly between the mirrors). Where an object subtends more than one pixel, each pixel generates its own diffraction pattern, and the superposition of all of them is present in the final image. Of course, each diffraction pattern is offset from the next, so they mostly just broaden (smear out) rather than intensify.
However, the brightness of the diffraction effects is much lower than that of the focused image itself. Where the image is itself dim, the diffraction effects may not add up to anything noticeable. Where the image saturates the detector (as can happen with a 1-pixel-wide star), that "much lower" fraction of its intensity can still be annoyingly visible.
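One way to picture this: the recorded image is, to a decent approximation, the true scene convolved with the telescope's point spread function, spikes included, so every pixel contributes its own copy of the pattern. A toy sketch (made-up PSF and brightness numbers, just to show the superposition and dynamic-range argument):

```python
import numpy as np
from scipy.signal import fftconvolve

N = 129
# Toy PSF: a bright core plus faint "+"-shaped spikes (not a real
# Rubin PSF, just something with the right qualitative shape).
y, x = np.indices((N, N)) - N // 2
r = np.hypot(x, y) + 1.0
psf = np.exp(-0.5 * r**2) + 1e-3 * ((np.abs(x) < 1) | (np.abs(y) < 1)) / r
psf /= psf.sum()

scene = np.zeros((256, 256))
scene[64, 64] = 1e6             # bright star: spikes rise above empty sky
scene[180:200, 180:200] = 10.0  # faint extended "galaxy": spikes smear out

image = fftconvolve(scene, psf, mode="same")
# The star's spike arms stand out clearly against the dark background,
# while the galaxy's smeared spike light is tiny compared with the
# galaxy's own surface brightness.
print(image[64, 90], image[190, 220])
```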
It depends on the science you're doing, as even these small effects add up. There's a project within the LSST science team (which a colleague is working on) to reduce this scattered light (search for "low surface brightness"), and there's a whole lot of work around modelling and understanding what effect the telescope system has on the idealised single point that is a star.
There are projects (Dragonfly and Huntsman are the ones I know of) which avoid using mirrors and instead use lenses (which have their own issues) to reduce this scattered light.
Diffraction spikes [1] are a natural result of the wave-like nature of light, so they occur for all objects viewed through a telescope, and the exact pattern depends on the number and thickness of the vanes.
My favourite fact about these in relation to astronomy is that you can actually get rid of the diffraction spikes if your support vanes are curved, which smears the diffraction pattern out over a larger area [2]. However, this is often not what you want in professional astronomy, because the smeared light can obscure faint objects you might want to see, like moons orbiting planets, planets orbiting stars, or lensed objects behind galaxies in deep space. So you often want sharp, crisp diffraction spikes so you can resolve these faint objects next to or behind the bright object in front.
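You can see this with a few lines of numpy, since in the far-field (Fraunhofer) limit the PSF is just the squared magnitude of the Fourier transform of the aperture. A toy sketch (made-up vane widths and curvature, not any real telescope's spider):

```python
import numpy as np

N = 512
y, x = np.indices((N, N)) - N // 2
r = np.hypot(x, y)
theta = np.arctan2(y, x)

pupil = (r < N // 4).astype(float)    # circular aperture
straight = pupil * (np.abs(y) >= 2)   # one straight vane across the pupil

# Curved vane: a band whose orientation rotates with radius, so its
# diffraction is spread over a range of angles instead of one direction.
curved = pupil * (((theta - r / 60.0) % np.pi) >= 0.08)

def psf(aperture):
    # Fraunhofer diffraction: the far-field PSF is |FFT of the aperture|^2
    return np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2

# psf(straight) puts the diffracted light into two sharp spikes;
# psf(curved) smears the same energy into a fainter, broader halo.
spikes, halo = psf(straight), psf(curved)
```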
The same effect is used in Bahtinov focusing masks. From what I know, all light bends around the structures, but stars are bright and point-like enough for the spikes to be visible; in theory galaxies produce them too.