
Supersharp Images from New VLT Adaptive Optics - sohkamyung
https://www.eso.org/public/news/eso1824/
======
arriu
This is very exciting. I'd love to see some of this make its way into amateur
equipment.

Technology has taken us well past what would have been thought possible with
similar optical equipment 50 years ago. For the most part, optical mirrors
and lenses are the same, but what we can now do with them has changed quite a
bit.

For example, here is a video of Mars through a small telescope:
[http://i.imgur.com/8juHPdn.gifv](http://i.imgur.com/8juHPdn.gifv)

If we take the best parts of each frame in that video and combine them in a
smart way, a process called lucky imaging, we can reduce the impact of the
atmosphere:
[http://i.imgur.com/CzLZTlv.png](http://i.imgur.com/CzLZTlv.png)
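
In sketch form, the selection step amounts to scoring every frame for
sharpness and averaging only the best few. A toy illustration (the
Laplacian-variance score is one common sharpness proxy; this is not any
particular package's implementation):

```python
import numpy as np

def sharpness(frame):
    """Variance of a discrete Laplacian: higher means sharper."""
    lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
    return lap.var()

def lucky_stack(frames, keep_fraction=0.1):
    """Average only the sharpest fraction of the frames."""
    scores = [sharpness(f) for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]
    return np.mean([frames[i] for i in best], axis=0)
```

Real pipelines also align the kept frames and often select the best regions
within each frame rather than whole frames.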

~~~
namibj
Uh, there is much more than that: if you feel fancy, you can reverse the
distortion in each frame and "fix" the atmospheric issues, given sufficient
SNR. Unfortunately this is not enough for faint stars with reasonably sized
optics...
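
A minimal illustration of the idea: undo each frame's global translation (the
tip/tilt part of the distortion) before stacking. This is only the crudest
piece of what such methods do, and every name here is illustrative:

```python
import numpy as np

def estimate_shift(ref, frame):
    """Integer-pixel shift of `frame` relative to `ref` via phase correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices to signed shifts.
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

def register_and_add(frames):
    """Undo each frame's translation and average (shift-and-add)."""
    ref = frames[0]
    out = np.zeros_like(ref, dtype=float)
    for f in frames:
        dy, dx = estimate_shift(ref, f)
        out += np.roll(f, (dy, dx), axis=(0, 1))
    return out / len(frames)
```

The full problem replaces the single global shift with a spatially varying
distortion field, which is what needs the high SNR.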

See the work here and the comparison to lucky imaging using, IIRC, AviStack:
[https://publikationen.uni-tuebingen.de/xmlui/handle/10900/49...](https://publikationen.uni-tuebingen.de/xmlui/handle/10900/49685)

~~~
arriu
Thank you for the link; that is one hell of a reference. Please share more if
you have any others handy.

I wonder how much of that work would be generalizable through specialized
neural nets:
[https://arxiv.org/abs/1702.00403](https://arxiv.org/abs/1702.00403)

~~~
namibj
Err, this is far from neural networks. I once tried to combine it with
something that can do the optimization faster [0], so that I could make it a
tree structure (to prevent feedback from amplifying iteratively, kind of like
how you need to be careful that your GAN doesn't start producing dog pictures
instead of cat pictures): run the images that comprise a subtree against the
best guess of its sibling subtree, and vice versa, then use these new
estimates of the distortion to get a new best guess from all images contained
by the parent of the two siblings. Repeat until you reach the top. You can use
more than one sibling to handle non-power-of-two frame counts. There is
partial software on my GitHub, and I can be reached in case someone is
interested; while working for free isn't really in my interest (other things
are more fun / seem more promising), I'd be happy to start working on it again
if there were a reason.
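
In sketch form, the tree traversal could look like the toy skeleton below
(integer-pixel phase-correlation alignment stands in for the actual distortion
estimate, and none of this is the software mentioned above):

```python
import numpy as np

def align(ref, frame):
    """Register `frame` onto `ref` by integer-pixel phase correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy -= ref.shape[0] * (dy > ref.shape[0] // 2)
    dx -= ref.shape[1] * (dx > ref.shape[1] // 2)
    return np.roll(frame, (dy, dx), axis=(0, 1))

def tree_stack(frames):
    """Combine frames pairwise up a binary tree: each node is aligned
    against its sibling's current best guess before averaging, so no
    frame's own error is fed back onto itself iteratively."""
    level = list(frames)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            a, b = level[i], level[i + 1]
            nxt.append(0.5 * (a + align(a, b)))
        if len(level) % 2:  # odd count: carry the leftover up a level
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

The leftover-frame handling is the simplest way to cope with
non-power-of-two frame counts; wider sibling groups work too.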

Regarding your paper: I should point out that Michael got nice results from
upsampling the images before running his software. I actually planned on using
the texture units for this, to save on bandwidth and address-calculation
overhead in the pending partial rewrite of my software. The GAN there also
uses just a single frame, whereas this approach uses the statistical
properties of the distortions, seen in the frequency domain, to figure out the
most likely distortions, and then combines the SNR from the many frames into a
single image. There is research using a method very similar to Michael's on a
GPU (a GTX 580 or so, IIRC) that does >15 fps at 720p in real time, with less
than 2 frames of latency, and no more than 1 frame of unavoidable latency if
you run the GPU work queues rather empty (risking underutilisation if you
don't get CPU time again fast enough). Combine that with, e.g., a nice Volta
DGX and something like a 400 mm Schmidt camera including a field-flattening
lens and a CMOSIS CMV12000 (take the sensor out of an AXIOM Beta camera,
shrink the board around it as far as you can, and stick it with a lens on top
facing a 20 cm spherical mirror, with a corrector plate ~80 cm from the
mirror). That is roughly $1000 of optics and $2500 of imaging hardware
(including what is necessary to get the full stream at >100 fps into the DGX),
plus whatever rent you pay for the DGX. Distortion-free 10x slow motion with a
pixel footprint of 14 mm at 1 km distance.
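
The quoted footprint follows from simple geometry, assuming the CMV12000's
5.5 µm pixel pitch (datasheet value) and the 400 mm focal length implied by a
corrector plate at the mirror's centre of curvature:

```python
# Geometry check for the quoted setup (assumptions: 5.5 um pixel pitch;
# 20 cm spherical mirror with corrector at its centre of curvature,
# R ~ 80 cm, so focal length f = R/2 = 400 mm).
pixel_pitch_m = 5.5e-6    # sensor pixel pitch
focal_length_m = 0.4      # f = R/2
distance_m = 1000.0       # range to the subject

# Each pixel subtends pitch/f radians; multiply by range for the footprint.
footprint_m = pixel_pitch_m / focal_length_m * distance_m
print(round(footprint_m * 1000, 2), "mm per pixel at 1 km")
# prints: 13.75 mm per pixel at 1 km
```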

If you'd want to sell such a thing to non-military...

[0]: Kim, Dongmin; Sra, Suvrit; Dhillon, Inderjit S. "A non-monotonic method
for large-scale non-negative least squares." Optimization Methods and Software
28.5 (2013): 1012-1039.
[https://pdfs.semanticscholar.org/622c/84cfba9781ad846105f28d...](https://pdfs.semanticscholar.org/622c/84cfba9781ad846105f28d7bf69c5405a481.pdf)

------
eisstrom
I'm a PhD student and have been working with globular-cluster data from this
instrument for quite some time now. I'll be happy to answer your questions!

~~~
Jaruzel
Are the images natural colour, or have they been 'enhanced' in any way? i.e.
is Neptune really that blue?

~~~
Osmium
I see this question a lot. I used to have an obsession with 'true color';
images felt fake otherwise. Artificial.

I'm a working scientist now, and my view has changed. I realize how limited
our senses are, and how much of the world--of the universe--I'd miss by
restricting it to just what my eyes can see natively. Even among colors I can
see, the signal is often too faint. I'm a lot more tolerant of color-mapped
images now. I don't see them as artificial anymore, but as beautiful and
transcendental: a window into a hyper-spectral world normally invisible to me.
It's really something special. I wish I could share this perspective with more
people.

~~~
goldenkey
Alex Grey studied cadavers at Harvard for years. His art tries to show the
true medium, not one limited by visible light. Sort of like what Superman
might see. Our bodies are emanating light in a spectrum of frequencies
(Planck's law). All this light is leaving our bodies at c, while all the light
from the universe is coming at us: our "light cone." We see the surface of
bodies... but the actual substance of reality has interfering, rippling waves
being emitted and absorbed... not unlike a pool. So the next time someone
tells you someone is ugly, remember that the visible-light surface is just the
beginning...

[https://m.alexgrey.com/art/paintings/soul/alex_grey_humming_...](https://m.alexgrey.com/art/paintings/soul/alex_grey_humming_bird/)
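
For anyone who wants the number behind that: Wien's displacement law, a
corollary of Planck's law, gives the peak wavelength of a body's thermal glow
(a quick illustrative calculation):

```python
# Wien's displacement law: lambda_peak = b / T.
WIEN_B = 2.898e-3  # Wien displacement constant, metre-kelvin

def peak_wavelength_um(temp_k):
    """Peak black-body emission wavelength in micrometres."""
    return WIEN_B / temp_k * 1e6

# A ~310 K human body peaks around 9.3 um, deep in the thermal infrared;
# visible light spans only ~0.4-0.7 um.
print(round(peak_wavelength_um(310), 1))
# prints: 9.3
```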

~~~
mchahn
The artist who made drawings for Scientific American for many years made his
drawings super-real by emphasizing components of interest. And those were
black-and-white.

------
raverbashing
This one is impressive
[https://www.eso.org/public/images/eso1824c/](https://www.eso.org/public/images/eso1824c/)

~~~
dghughes
I think this one is even more impressive
[https://www.eso.org/public/images/eso1824d/](https://www.eso.org/public/images/eso1824d/)

If I understand correctly, the middle image and the image on the right are the
same thing, both taken by the VLT array, but the right one uses the new MUSE
technique.

------
hguhghuff
Kinda hard to find the pics, so here are the top 100:
[https://www.eso.org/public/images/archive/top100/](https://www.eso.org/public/images/archive/top100/)

~~~
vanderZwan
Great images. However, the text below the first image says:

> _The Very Large Telescope snaps a stellar nursery and celebrates fifteen
> years of operations_

... so presumably most of those were not taken with this new technology.

Thanks for linking anyway, though!

------
yread
Really cool profile page of the telescope [http://www.eso.org/public/teles-
instr/paranal-observatory/vl...](http://www.eso.org/public/teles-
instr/paranal-observatory/vlt/)

------
StavrosK
Jesus, the description of the technology sounds amazing. I am blown away by
how much ingenuity was required to make this.

------
Tepix
The neptune image is breathtaking. It instantly summons a yearning to explore
this unknown icy neighbour.

------
bawana
So the light we see from the lasers is emitted by atoms in space? I thought
space was a vacuum? Or does the emitted light only come from atoms in the
atmosphere, so that the further up we go, the dimmer the emitted light becomes?

Also, is there a way to determine how much blurring is happening from
gravitational waves? In other words, if a ripple in spacetime washes across
the space between us and a far away star, will the star become fuzzy like the
schlieren distortion on a hot day here on earth? (faraway objects become
ghostlike as heat from the hot ground alters the air density between observer
and target)

------
mchahn
> The correction algorithm is then optimized ... to reach an image quality
> almost as good as with a natural guide star.

Then why not use natural guide stars?

~~~
welterde
Because they have to be very bright and very close to the target. This limits
the observable targets quite a lot (for most projects we don't want to observe
just anything, but specific targets).

------
Annatar
Lots of bla bla bla and only one image! Now I feel cheated, that was
clickbait!

