Universal algorithm set to boost microscopes (epfl.ch)
83 points by rbanffy 52 days ago | 24 comments

I've been looking for an algorithm for a while to examine photos to get their "real" resolution, both to compress my personal collection better, and to provide a relatively rigorous demonstration of when a "12 megapixel photo" is "really" a 2MP photo due to the camera as a whole not being able to support it. I wonder if this could be the basis for what I'm looking for.

For compressing your personal collection, you are probably best off doing a parameter search with a desired max/average error. There are probably too many variables in image quality to do anything based on the signal chain alone.
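A parameter search like that can be sketched with a stand-in codec. Here `quantize` is a hypothetical placeholder for a real lossy encoder (coarser step = smaller file, more error), and the loop just picks the most aggressive setting that stays within the error budget:

```python
import numpy as np

def quantize(img, step):
    """Stand-in lossy codec: coarser quantization step = smaller output, more error."""
    return np.round(img / step) * step

def max_error(a, b):
    """Worst-case per-pixel error between original and reconstruction."""
    return np.max(np.abs(a - b))

def best_step(img, err_budget, steps=(1, 2, 4, 8, 16, 32)):
    """Pick the coarsest quantization step whose max error stays within budget."""
    best = steps[0]
    for s in steps:  # steps are tried coarsest-last, so 'best' ends up maximal
        if max_error(img, quantize(img, s)) <= err_budget:
            best = s
    return best
```

With a real encoder you would swap `quantize` for an encode/decode round trip and search over its quality parameter instead.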

As far as a rigorous demonstration goes, that's what resolution targets are for :)

Naive idea: do a downscale-upscale roundtrip using bilinear interpolation, then compare the result with the original. Any difference should be the added value over the lower resolution. That diff image would contain both noise and signal, though; the final score should increase with the signal-to-noise ratio.
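A minimal numpy sketch of that roundtrip idea, assuming a 2D grayscale array and using block averaging plus nearest-neighbour repeat as stand-ins for true bilinear resampling:

```python
import numpy as np

def roundtrip_score(img, factor=2):
    """Downscale by block-averaging, upscale by pixel repetition, then
    measure the RMS difference against the original. A high score means
    the image carries real detail beyond the lower resolution; near zero
    means it was effectively already at the lower resolution."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor  # crop to a multiple of factor
    img = img[:h2, :w2].astype(float)
    # downscale: average each factor x factor block
    small = img.reshape(h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))
    # upscale: repeat each pixel back onto the original grid
    up = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    # RMS of the diff image = detail (signal + noise) lost in the roundtrip
    return np.sqrt(np.mean((img - up) ** 2))
```

As the comment notes, this score mixes noise with signal, so on its own it overestimates the "real" resolution of noisy images.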

The getnative tool is intended to find the native resolution of upscaled anime, but maybe it, or its Fourier approach, can help point to the original resolution of a blown-up photograph.

I doubt it. The method they are using makes a bunch of implicit assumptions about the content of the image. Microscope images generally are very sparse and have lots of black areas which makes this method work very well on them. At the very least, you may have to adapt some of the parameters (like dt in max finder) to work for photographs.

Be glad if you're not taking RAW photos; otherwise this isn't even an option. :(

Anyone have any suggestions for doing the same thing for video?

RAW usually compresses, just not as well as lossy. For example, the lossless FLIF image format usually compresses images quite well (and since it's lossless, you can binary-compare the original and round-tripped images to check).

For video, I suspect there are things like FLIF but with interframe knowledge, allowing even more compression.

None of these should beat good lossy codecs, but they vastly outperform completely uncompressed data for the stuff people normally capture.

If you somehow capture true random noise, then these won't work. But most recordings of interest are much more structured, hence compressible.


Not really. Some RAWs from some cameras do compress, but not others. (Not sure why; maybe it's already pre-compressed.) As an example, I tried just compressing a random 30MB RAW from my camera (just a photo of jeans, nothing particularly unusual) with 7z on Ultra compression, and it saved—guess what—a whopping 2.2% of its size.

7z is a generic lossless compressor. These algorithms tend to specialize in exact repeating patterns in sequences. That's good for English prose, but images tend to have non-exact patterns in two dimensions. A dedicated algorithm like FLIF will be much better at picking those up; I would expect an order of magnitude difference in performance between 7z and FLIF.

For the curious: the way these non-exact-pattern compressors work is by subtracting out the predicted pattern and then storing the remaining signal, which consists of smaller numbers and therefore takes fewer bits to store.
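That residual trick is easiest to see in one dimension. This is the generic predictive-coding idea with the simplest possible predictor (each sample predicted as the previous one), not FLIF's actual predictor:

```python
import numpy as np

def delta_encode(samples):
    """Predict each sample as the previous one; store only the residuals.
    Smooth data gives small residuals, which entropy-code into fewer bits."""
    samples = np.asarray(samples, dtype=np.int64)
    return np.diff(samples, prepend=0)  # first residual is the first sample itself

def delta_decode(residuals):
    """Invert the prediction by running-summing the residuals."""
    return np.cumsum(residuals)
```

On a smooth ramp like [10, 12, 15, 15, 14] the residuals are [10, 2, 3, 0, -1]: mostly small values, exactly the property an entropy coder exploits.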

What is your camera? Multiply your sensor's pixel count by the bit depth, and I suspect you'll have more than 30 MB.

Also image compressors like FLIF vastly outperform general compressors on images.

Some RAW formats are pre-compressed, usually with very simple (hence fast) algorithms, and some aren't even lossless (Sony ARW). It's often possible to further compress them with specialized tools.

Disclosure: I'm the author of PackRAW, a tool that does exactly that (https://encode.su/threads/2762-PackRAW)

The RAW format is probably using e.g. zlib for compression internally. You can check by calculating how many bytes a truly raw image would be in total: 12-16 bits (1.5-2 bytes) per pixel.
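That sanity check is simple arithmetic. The sensor dimensions below are hypothetical, just to show the calculation:

```python
def uncompressed_raw_bytes(width_px, height_px, bits_per_pixel):
    """Bytes a truly raw sensor dump would occupy, before any compression."""
    return width_px * height_px * bits_per_pixel // 8

# A hypothetical 24 MP sensor (6000 x 4000) at 14 bits per pixel:
# 6000 * 4000 * 14 / 8 = 42,000,000 bytes, i.e. 42 MB.
# A RAW file much smaller than this must be compressed internally.
```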

Many lossy video formats (e.g. h.264, h.265, av1) have a special lossless profile. But there are also dedicated lossless codecs such as ffv1.

Here's the paper (source code is linked in the article): https://www.researchgate.net/publication/335405938_Parameter...

What about programmatically determining the resolving power or clarity of an optical component such as a lens or filter while in-place on a camera assembly? This often varies across different parts of the component due to design, dirt, damage, manufacturing issues, etc.

Wow, I have casually looked for a tool to get the real resolution of a captured image for years. The best I could do was comparing the max distance away from a target (checkerboard or AprilTag).

This isn't for photographs of natural scenes but for microscopic imagery -- not the same.

Could this theoretically be used/adjusted for telescopes too?

It would be fun to try it with full-res TIFF Hubble images: http://hubblesource.stsci.edu/sources/illustrations/

The open-source plugin implementing the image analysis algorithm is here: https://github.com/Ades91/ImDecorr

If I were trying to solve this problem, I would Fourier transform and look to see at what spatial frequency the signal stops being larger than photon shot noise. Seems like the obvious first attempt at an approach.

But that's not what they're doing - they're doing something about "image partial phase autocorrelation", which are all words I know but not together. I wonder why the naive method isn't adequate.
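For concreteness, the naive spectral approach from the previous paragraph might look something like this. The radial binning and the fixed `noise_floor` are simplifying assumptions; a real implementation would estimate the shot-noise level from the data rather than take it as a parameter:

```python
import numpy as np

def spectral_cutoff(img, noise_floor):
    """Radially average the 2D power spectrum and return the highest
    spatial-frequency bin whose mean power still exceeds noise_floor."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))  # DC moved to the center
    power = np.abs(spectrum) ** 2
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    yy, xx = np.indices(img.shape)
    radius = np.hypot(yy - cy, xx - cx).astype(int)  # integer frequency bins
    counts = np.bincount(radius.ravel())
    radial_power = (np.bincount(radius.ravel(), weights=power.ravel())
                    / np.maximum(counts, 1))  # mean power per radial bin
    above = np.nonzero(radial_power > noise_floor)[0]
    return int(above.max()) if above.size else 0
```

On a pure 4-cycle sinusoid this reports bin 4; on a flat image only the DC bin survives, so it reports 0.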

The usual approach is to assume that those who have dedicated a good chunk of their life to a problem have tried the obvious. Also known as the "why don't you" fallacy.

It's a good approach for entering a new field, and sometimes (rarely) it actually gets results. But the normal situation is that those in the know have thought of, tried, and probably rejected all the approaches that a newcomer would come up with in the first 10 minutes.

I'm not implying my obvious approach is better and that anyone is stupid for not doing it instead. People downvoting me for implying that are not being charitable.

On the contrary, the fact that they did not use it is good evidence that it is inadequate, given that they spent a long time on the problem and I'm sure simpler solutions occurred to them. So I'm agreeing with you here. I'm just expressing interest in knowing why it isn't adequate.

Though for what it's worth, I do work a lot with image processing (of trapped atoms, though, not biological samples), so my first thoughts are also coming from someone who's spent a decent chunk of time considering these things. My group is actually working at the moment on a method of measuring arbitrary aberrations of an imaging system in order to be able to invert them and obtain better images. This is a more general problem, and the finite resolution of any imaging system comes through clearly; we could definitely determine the resolution via our method (though not from a single image).

I love it when algorithms have such a real world measurable effect.
