
Universal algorithm set to boost microscopes - rbanffy
https://actu.epfl.ch/news/universal-algorithm-set-to-boost-microscopes/
======
jerf
I've been looking for an algorithm for a while that examines photos to get
their "real" resolution, both to compress my personal collection better and to
provide a relatively rigorous demonstration of when a "12 megapixel photo" is
"really" a 2MP photo because the camera as a whole can't actually support that
resolution. I wonder if this could be the basis for what I'm looking for.

~~~
mehrdadn
Be glad you're not taking RAW photos; otherwise this isn't even an option. :(

Anyone have any suggestions for doing the same thing for video?

~~~
ChrisLomont
RAW usually compresses, just not as well as lossy. For example, the lossless
FLIF image format usually compresses images quite well (and it's lossless -
you can binary compare the before and roundtrip images to check).
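
For the round-trip check, a few lines suffice. Here's a sketch using PNG via
Pillow as the lossless codec (FLIF has little Python tooling, so PNG stands in;
the idea is identical for any lossless format, and the file name is a
placeholder):

    # Verify a lossless round trip: re-encode and binary-compare the pixels.
    import io

    import numpy as np
    from PIL import Image

    def roundtrip_is_lossless(path):
        original = np.asarray(Image.open(path))            # decode the source image
        buf = io.BytesIO()
        Image.fromarray(original).save(buf, format="PNG")  # lossless re-encode
        buf.seek(0)
        decoded = np.asarray(Image.open(buf))              # decode the round trip
        return np.array_equal(original, decoded)           # pixels binary-equal?

    print(roundtrip_is_lossless("photo.tif"))  # placeholder file name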

For video, I suspect there are formats like FLIF but with interframe
knowledge, allowing even more compression.

None of these should beat good lossy codecs, but they vastly outperform
completely uncompressed data for the stuff people normally capture.

If you somehow capture true random noise, then these won't work. But most
recordings of interest are much more structured, hence compressible.

[https://flif.info/](https://flif.info/)

~~~
mehrdadn
Not really. Some RAWs from some cameras do compress, but not others. (Not sure
why; maybe it's already pre-compressed.) As an example, I tried just
compressing a random 30MB RAW from my camera (just a photo of jeans, nothing
particularly unusual) with 7z on Ultra compression, and it saved—guess what—a
whopping 2.2% of its size.
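
Anyone who wants to reproduce this kind of check can do it with Python's
built-in LZMA (the same codec family 7z uses by default); the file name here
is hypothetical:

    # Measure how compressible a RAW file is with LZMA at the maximum preset.
    import lzma

    path = "IMG_1234.CR2"  # stand-in for a ~30MB RAW from the camera
    raw = open(path, "rb").read()
    packed = lzma.compress(raw, preset=9)  # preset 9 is roughly 7z's "Ultra"
    saved = 1 - len(packed) / len(raw)
    print(f"{len(raw)} -> {len(packed)} bytes ({saved:.1%} saved)")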

~~~
remcob
7z is a generic lossless compressor. These algorithms tend to specialize in
exact repeating patterns in sequences. That's good for English prose, but
images tend to have non-exact patterns in two dimensions. A dedicated
algorithm like FLIF will be much better at picking those up. I would expect an
order of magnitude difference in performance between 7z and FLIF.

For the curious: the way these non-exact pattern compressors work is by
subtracting out the predicted pattern and then storing the remaining signal,
which consists of smaller numbers and therefore takes fewer bits to store.
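
A minimal illustration of the idea, with the crudest possible predictor (each
pixel predicted from its left neighbor); FLIF's real predictors and entropy
coder are far more sophisticated, but the principle is the same:

    # Predict each pixel from its left neighbor and keep only the residuals.
    import numpy as np

    def residuals(row):
        pred = np.roll(row, 1).astype(np.int16)  # left-neighbor prediction
        pred[0] = 0                              # the first pixel has no neighbor
        return row.astype(np.int16) - pred       # what remains to be stored

    row = np.array([100, 102, 101, 103, 104, 104], dtype=np.uint8)
    print(residuals(row))  # [100 2 -1 2 1 0]: small values, cheap to entropy-code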

------
godojo
Here's the paper (source code is linked in the article):
[https://www.researchgate.net/publication/335405938_Parameter...](https://www.researchgate.net/publication/335405938_Parameter-free_image_resolution_estimation_based_on_decorrelation_analysis)

------
contingencies
What about programmatically determining the resolving power or clarity of an
optical component such as a lens or filter while in-place on a camera
assembly? This often varies across different parts of the component due to
design, dirt, damage, manufacturing issues, etc.

------
paulkrush
Wow, I have casually looked for a tool to get the real resolution of a
captured image for years. The best I could do was comparing the max distance
away from a target (checkerboard or AprilTag).

~~~
ipunchghosts
This isn't for photographs of natural scenes but for microscopic imagery --
not the same.

------
Fordec
Could this theoretically be used/adjusted for telescopes too?

~~~
raymondh
It would be fun to try it with full-res TIFF Hubble images:
[http://hubblesource.stsci.edu/sources/illustrations/](http://hubblesource.stsci.edu/sources/illustrations/)

The open source plugin for the image analysis algorithm is here:
[https://github.com/Ades91/ImDecorr](https://github.com/Ades91/ImDecorr)

------
doubleunplussed
If I were trying to solve this problem, I would Fourier transform the image
and look for the spatial frequency at which the signal stops being larger than
photon shot noise. That seems like the obvious first attempt at an approach.

But that's not what they're doing - they're doing something about "image
partial phase autocorrelation", which are all words I know but not together. I
wonder why the naive method isn't adequate.
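
For concreteness, the naive version I have in mind looks something like this
(the noise-floor estimate and the 5x threshold are arbitrary choices of mine,
not anything from the paper):

    # Radially average the power spectrum and find the last frequency bin
    # that sits clearly above the flat shot-noise floor.
    import numpy as np

    def cutoff_radius(img):
        power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        cy, cx = np.array(power.shape) // 2
        yy, xx = np.indices(power.shape)
        r = np.hypot(yy - cy, xx - cx).astype(int)            # radius of each bin
        radial = np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())
        noise_floor = np.median(radial[-len(radial) // 10:])  # outermost radii assumed pure noise
        above = np.nonzero(radial > 5 * noise_floor)[0]       # arbitrary 5x threshold
        return int(above.max())                               # last radius carrying signal

    # Band-limited test image: a 10-cycle sinusoid buried in Poisson (shot) noise.
    yy, xx = np.indices((256, 256))
    img = np.random.poisson(100 + 50 * np.sin(2 * np.pi * 10 * xx / 256)).astype(float)
    print(cutoff_radius(img))  # should land near 10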

~~~
jacquesm
The usual approach is to assume that those who have dedicated a good chunk of
their life to a problem have tried the obvious. Suggesting it anyway is known
as the 'why don't you' fallacy.

It's a good approach when entering a new field, and sometimes (rarely) it
actually gets results. But the normal situation is that those in the know have
thought of, tried, and probably rejected all the approaches that a newcomer
would come up with in the first 10 minutes.

~~~
doubleunplussed
I'm not implying my obvious approach is better and that anyone is stupid for
not doing it instead. People downvoting me for implying that are not being
charitable.

On the contrary, the fact that they did not use it is good evidence that it is
inadequate, given that they spent a long time on the problem and I'm sure
simpler solutions occurred to them. So I'm agreeing with you here. I'm just
expressing interest in knowing _why_ it isn't adequate.

Though for what it's worth I do work a lot with image processing (of trapped
atoms though, not biological samples), so my first thoughts are also coming
from someone who's spent a decent chunk of time considering these things.

------
yummybear
I love it when algorithms have such a real-world, measurable effect.

