Sea-Thru: A Method for Removing Water from Underwater Images (petapixel.com)
340 points by cgranier on Nov 15, 2019 | 74 comments



"I really see this as a start of the artificial intelligence boom in marine science" - first words of the video in the article.

I don't understand why they'd even say something like that. PR value? As I understood the paper, the presented method is refreshingly free of anything resembling what we call AI today. It seems to be a combination of good, old-school photogrammetry and image-processing techniques, which is great, because with such methods they can actually ensure the result is physically correct.


In the paper [0] there is more info than what's explained in the simplified blog post. A.I. is needed if you want to do the same without the color chart used for training it. Furthermore, it is not just color correction; backscatter is also taken into account.

[0] http://openaccess.thecvf.com/content_CVPR_2019/papers/Akkayn...


Yes, I'm talking about this paper exactly. All I can see is photogrammetry and regular image processing stuff. There are no DNNs or other hot stuff. I guess the part where they're estimating parameters for the equations from photos counts as ML, but people have stopped calling this stuff "AI" quite a while ago.


I think it is more that this correction allows more generic algorithms like object classification to work. Of course you could do object classification without color, but that would be throwing away information. It is just that the backscattering currently overwhelms the colors in such a variable way that it is hard to use color-aware object classification without doing things like requiring the exact same distance between camera and object for each picture. And even that likely doesn't work yet.


> I guess the part where they're estimating parameters for the equations from photos counts as ML, but people have stopped calling this stuff "AI" quite a while ago.

That's the AI effect.

https://en.m.wikipedia.org/wiki/AI_effect


> That's the AI effect.

Indeed, as that coined term suggests, "AI" has such a philosophically loaded definition with moving goalposts that in practice it gets slapped far too liberally on anything 'magical' having to do with computation. (As long as it's still trendy to do so, at least.)

'Machine Learning' is at least a little more specific of a term (if still rather general).


People point at this all the time, but there's also another AI effect. People use traditional techniques to solve modern problems and label it "AI." Calling it a traditional technique despite the modern application doesn't mean it's moving the goalposts.


AI just means "I want my paper funded & published" nowadays.


Aside from the paper, I think one thing they're referring to is using the output (color corrected images) as input for AI. One thing they want to do is count the number of fish in an image, and know which species each fish is. So you can take pictures of coral reefs and estimate "there's 1,000 species X, 2,200 species Y". With the old images, it's too difficult to determine which species a fish is. With the new images, it's easier. So Sea-Thru is preprocessing that'll be useful for AI in marine science.


I don't have the impression it works on moving objects, since it needs multiple frames from different depths, but it could be used to count static critters... unless some kind of boom with multiple cameras at different depths is used...

Also, since the technique removes a foggy haze, it seems like it could be used for self-driving cars, with multiple cameras along the periphery of the car, to clean up the image in foggy conditions (fog, smoke, smog, ...).


I think the idea is that if you can produce cleaner, physically-correct ocean imagery, then the result of that is more amenable to consumption by machine learning image processing tools.

It's sort of like how having a big image corpus enabled AI. There's no AI in the images themselves, but it gives you something to throw AI at.


> PR value?

Yes, the hype around AI / machine learning is such that everyone wants it to do magic and invigorate their field of study.


Neat! This uses Structure from Motion to compute depth to each pixel and correct backscatter for the specific distance light is traveling from the subject. Typical flat color correction algorithms can help reduce the blue tint typical of underwater photography, but it's only physically correct for a narrow distance band.
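
For a rough picture of what that means: the image formation model the paper builds on is roughly I_c = J_c * exp(-beta_D_c * z) + B_inf_c * (1 - exp(-beta_B_c * z)), where z is the range to the subject. Below is a minimal sketch of inverting it, assuming a per-pixel range map and per-channel coefficients are already in hand; the names are illustrative and the actual method estimates coefficients that vary with range, so this is not the authors' code.

    import numpy as np

    def recover_scene(image, rng, beta_d, b_inf, beta_b):
        # image: HxWx3 linear RGB in [0, 1]; rng: HxW range map in meters
        # beta_d, b_inf, beta_b: per-channel (3,) attenuation, veiling light,
        # and backscatter coefficients (held constant here for simplicity)
        z = rng[..., None]                                    # broadcast over channels
        backscatter = b_inf * (1.0 - np.exp(-beta_b * z))     # additive haze term
        direct = image - backscatter                          # remove the haze
        return np.clip(direct * np.exp(beta_d * z), 0.0, 1.0) # undo attenuation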


Modern image programs have a dehaze algorithm that can already calculate a depth map from the haze. I wonder if this is better.


The paper says that they used current dehazing algorithms as a base, and added more parameters to them.


Which programs? I had to generate some depth maps recently and couldn’t find a good solution.


That's not what the comment above said. They said some imaging programs create depth maps and process images based on that. It doesn't mean the program creates a depth map for the user; it can simply be an internal part of the whole process.

The Photoshop RAW plugin does dehazing. I don't know how it works.


A couple of days before I saw the Scientific American article, someone dropped this project in our local development group: https://github.com/nikolajbech/underwater-image-color-correc... The repository contains some test images, and they also provide a website to test it out: https://colorcorrection.firebaseapp.com/ which states: "This web-application features a beta version of the Paralenz color correction algorithm, which features an open-source license".

The two projects seem unrelated at first glance, but the timing is interesting.


I've skimmed the description of the repo and it does look unrelated. The paper uses depth information while the repo is focused on color balance. The paper mentions this is an inaccurate way of fixing the colors, as the color information changes with distance too.


I'd love a tool to reverse this process on normal photos so I can put random family events under water without drowning the family


You can do it in Photoshop with various color tweaks, blurs, and textures. It helps a lot to "stage" the original shots for background removal (or to look like they could already be in an underwater location).


> It helps a lot to "stage" the original shots

In particular, a fog machine will help emulate the haze and backscatter produced by water and the particles floating in it.


https://www.scientificamerican.com/article/sea-thru-brings-c...

The original article, linked from the current article, has more pictures/content.


The first picture in this article is extremely blurry compared to the original.


While, as explained in the article, the method was developed to help scientists and not primarily to improve aesthetic qualities, I find most of the images very pleasing.

The one thing that is disturbing though is images with visible "horizon" (for lack of a better word). I find the images that look like they were taken on land but don't have a sky where one would expect it somewhat uncanny.


These are truly amazing results. Very excited to see some wonderful underwater photography in the coming years. Also, the name "Sea-Thru" is cleverly perfect!


I would be interested in an experiment of exporting video to a frame sequence, and then letting the software process each frame. Would it make the same decisions on each frame so that when played back as video there are no visible changes between frames? While the results are impressive for a single still, using this to see movement that occurs underwater would be amazing with the improved color.


And you could also use it to gauge the accuracy (or at least consistency) of the algorithm. Sounds like a good follow-up paper.


In the early days of VR, I did the equivalent of this test. The software to stitch stills was much more robust than the nascent versions of software handling video. As an experiment, I ran the video through the video software, and then did the export to an image sequence and had the stills software apply the same settings to the image sequence. When the image sequence was converted back to video, you could see how the stills software made different decisions for each frame, resulting in a very psychedelic video. With more tweaking the trippy effect was reduced but not eliminated, and the video software was updated to become much more robust (and impressive).

As a primarily video guy, I always laugh to myself (sometimes not to myself) at the amount of effort photo editors spend on a single image. I remind them that the video world has to do that same level of work, except x24 per second multiplied by number of seconds. Photoshop is cool, but Nuke is mindblowing


>I remind them that the video world has to do that same level of work, except x24 per second multiplied by number of seconds.

While this is true, video frames only have to stand up to scrutiny for ~40ms. You can take a lot of clever shortcuts when an onlooker only has a fraction of a second to spot the rough edges!


The one hole in my argument. Thanks for knocking me back down from my high horse.


I kinda want to see this applied to the diver video. It could surface artifacts, or look surreal.


I was initially disappointed that the shadow/refraction patterns of the surface waves are still present in the images. Can we compensate for that?


Fun fact, those patterns are called caustics:

https://en.wikipedia.org/wiki/Caustic_(optics)


I imagine if you had multiple shots you could median filter through time, similarly to how people remove crowds from busy landmark photos.
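
Something along these lines, assuming the frames are already aligned and only the caustics move between them (a sketch of the general trick, not anything from the paper):

    import numpy as np

    def remove_caustics(frames):
        # frames: sequence of N aligned HxWx3 images of a static scene;
        # the per-pixel temporal median suppresses the moving caustic flicker
        return np.median(np.stack(frames, axis=0), axis=0)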


Nope.

Not with this technique at least. Maybe you could train a CNN to do that...


This seems like it's just color fixing using stereographic distance modeling and a known reference palette. I'm not sure what is new or novel about this? Maybe it's just new to oceanographic photography?


They actually correct using wavelength-dependent backscatter and attenuation models while accounting for camera spectral sensitivities and the illuminant spectrum. This is much more complex than "just color fixing".

The publication shows that it is not a simple colour correction: http://openaccess.thecvf.com/content_CVPR_2019/papers/Akkayn...


It might not be "just color fixing", but to my eye at least, the results appear similar to if you were to simply white balance in Lightroom (which doesn't always require a color chart either).

Example: https://imgur.com/a/h3uFu4z


It's clearly much better than that. The problem is that, in a medium like water, light of different wavelengths is absorbed differently. So the longer the distance the light travels through water, the stronger the effect, and this effect is different depending on the wavelength. So you can't just open Photoshop and change the color levels of everything in the picture simultaneously. You actually have to know/estimate the distances to different objects and make adjustments locally based on that.
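
A quick back-of-the-envelope illustration of why one global gain can't work (the attenuation coefficient is a made-up but plausible value for red light in clear ocean water):

    import numpy as np

    beta_red = 0.6  # illustrative attenuation for red light, per meter
    for z in (1.0, 3.0, 8.0):
        print(f"range {z} m -> red gain ~{np.exp(beta_red * z):.1f}x")
    # ~1.8x at 1 m but ~121x at 8 m: no single white-balance multiplier
    # is right for both, so per-pixel range information is needed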


I'm aware how water affects light underwater; I do scuba photography. I'm sorry, but it's simply not "clearly much better than that".

It's a lot of effort for very little, if any, visible difference.


> This seems like it's just color fixing using stereographic distance modeling and a known reference palette.

That simple, huh?


Which is already quite complicated...


That was my point... did you need a sarcasm tag?


The article explains that the colour chart was only needed for the photos used to train the model. Once the model is complete it is no longer needed, so the point of the software is to be able to take photos without a reference palette when doing reef surveys.


Inventions often seem obvious or trivial in hindsight. Are you aware of this technique being used anywhere else? Because this seems pretty novel to me.


Yeah, both are common in astrophotography. Color correction using a reference palette is also pretty common when doing any kind of scientific imaging, though in a lab setting the distance is usually well established and not relevant.


TFA:

> Once trained, the color chart is no longer necessary. As Akkaynak helpfully explained on Reddit, “All you need is multiple images of the scene under natural light, no color chart necessary.”

On Reddit:

> Just a clarification: the method does NOT require the use of a color chart. That part was not clear in the video. All you need is multiple images of the scene under natural light, no color chart necessary.


What annoyed me with that answer is they didn't explain why they needed the color chart at all. I would assume it's for training some model, which would mean this method might not work out of the box without the chart, for example in muddy waters.


The chart is needed to validate that the algorithm works: you need to have known colors in the image, you need a reference. (The chart is the ground truth.)

Once the chart is back to its exact colors, the image can be considered corrected (at least for that distance, illumination…).

If the algorithm brings the chart back to its true colors at several distances and in various conditions, then it can be applied confidently to images without a chart.
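
In other words, the validation boils down to measuring how far the recovered chart patches land from their known values, something like the following (a hypothetical helper using plain RGB distance rather than a proper Delta E in a perceptual colour space):

    import numpy as np

    def chart_error(recovered_patches, reference_rgb):
        # recovered_patches, reference_rgb: Nx3 arrays of chart patch colours
        # mean Euclidean distance per patch; lower means a better correction
        diff = np.asarray(recovered_patches, float) - np.asarray(reference_rgb, float)
        return float(np.mean(np.linalg.norm(diff, axis=1)))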


It might indeed require new training data for substantially different kinds of haze.

They are only talking about reefs though, which means tropical ocean water. Corals don't thrive in muddy water.


That makes it way more relevant.


How did they get the reference palette to those stars?


Apple patented something similar in 2015 regarding the recoloring part without dehaze.


Is it a physically accurate image though? I think the untouched photos are accurate, the same as what's seen by the naked eye.

Sure, it's more aesthetically pleasing, but it's already distorting reality.

Personally, I think photos should be the most accurate representation of what a healthy naked human eye sees: no beautification, no extra contrast or oversaturation, even if that makes photos more appealing. If you twist reality, where will it stop? Where is the border of what is too much?


The goal for this might not be to be visually pleasing, but to recover the actual colors of what is underwater. And yes, in that respect it should be physically accurate. She uses a color chart and multiple images from different distances. The algorithm thus ensures that colors are neutral.

This is not what the eye sees, but in general, very, very few photos are. Do you also model the fovea in photos you take and make everything blurry around the center in the pursuit of greater realism? Do you shun long exposures of the night sky, as our eye can't see the milky way in that detail anyway?

It's a misguided goal in general, I'd say. Some photos aim to realistically depict a scene, some aim to artistically depict a scene, some aim to depict a scene to evoke certain emotions and either realistic or artistic might be valid choices for that. The list can go on, I guess.

In this case, being able to see and convey the actual colors of what is being photographed has scientific and artistic value.


> Personally, I think photos should be the most accurate representation of what a healthy naked human eye sees

Under what lighting?


No artificial lighting that is only there for the photo.


Read the article and watch the video and find out that you are wrong.


I'm not really excited about technology anymore, but I found this absolutely amazing. The people that made the janky whale pop out of the floor and raised billions have nothing on this. I'm not joking about how cool I think this is.


Does something like this exist for over-water photography? I remember once taking a picture of beautiful scenery in the Japanese mountains, and the result was a very disappointing, almost completely grayed out image.


Shoot in RAW and adjust the white balance to your taste, maybe tweak the colour space too. Professional editing software such as Lightroom, in combination with high-resolution, high-fidelity RAWs, will allow you to correct even the worst pictures, as long as they're not pure white or black.


I've had some success with adaptive contrast... basically you compute the difference between the original and a blurred image and then apply more contrast where that difference is high.

I've done this in Darkroom and Blender (with the node compositor).
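
Roughly like this, as a sketch of the idea rather than what either tool does internally (sigma and strength are arbitrary):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def adaptive_contrast(image, sigma=25.0, strength=0.8):
        # image: HxWx3 float in [0, 1]; blur it, take the difference, then add
        # the detail back weighted by how strong it already is locally
        blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
        detail = image - blurred
        weight = np.abs(detail) / (np.abs(detail).max() + 1e-8)
        return np.clip(image + strength * weight * detail, 0.0, 1.0)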


Here is recent research that does haze removal and other image decompositions: http://www.wisdom.weizmann.ac.il/~vision/DoubleDIP/


The name is pretty good.


How does one verify if the output is correct?


She used a color chart for calibration. I think it should work, but I'm not an expert in image processing.

An alternative would be to make some similar-looking objects, like fake coral blocks, use a boat to submerge them in the sea, and compare the corrected photos with photos taken in air beforehand. (The wet surfaces would have a different look, it would be important to use diffuse light instead of direct sunlight, and there are perhaps other technical problems with the comparison.)


I'd assume that's the point of the colour reference card. You know what the colours should look like when not under water.


The goal of the transformation is to make the image look as if it were photographed above water. A straightforward measurement would be to photograph certain subjects above and below water and compare the corresponding reference and transformed images.


Maths


The branding of this is on point. Well done.


Whether the modified photos look better or not is mostly a matter of taste I guess. Personally I prefer the non-modified photos for the underwater feeling they give. It's a cool technique though, and probably useful for scientific purposes.


I'd be kind of curious about the reverse filter. I wonder how some of my photos would look if I "underwaterified" them.


r_wateralpha 0.1



