I don't understand why anyone would even say something like that. PR value? As I understood the paper, the presented method is refreshingly clean of anything resembling what we call AI today. It seems to be a combination of good, old-school photogrammetry and image processing techniques - which is great, because with such methods they can actually ensure the result is physically correct.
That's the AI effect.
Indeed, as that coined term suggests, "AI" has such a philosophically loaded definition, with moving goalposts, that in practice it gets slapped far too liberally on anything 'magical' having to do with computation. (As long as it's still trendy to do so, at least.)
'Machine Learning' is at least a slightly more specific term (if still rather general).
Also, since the technique removes a foggy haze, it seems like it could be used for self-driving cars, with multiple cameras along the periphery of the car, to clean up the image in hazy conditions (fog, smoke, smog, ...).
It's sort of like how having a big image corpus enabled AI. There's no AI in the images themselves, but it gives you something to throw AI at.
Yes, the hype around AI / machine learning is such that everyone wants it to do magic and invigorate their field of study.
Photoshop's RAW plugin does dehazing. I don't know how it works.
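A common approach in the literature (no idea whether Adobe uses it) is the dark channel prior from He et al., CVPR 2009. A rough NumPy sketch, with the paper's conventional default parameters:

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dehaze_dark_channel(img, patch=15, omega=0.95, t_min=0.1):
        # img: float RGB array in [0, 1], shape (H, W, 3)
        # Dark channel: per-patch minimum over space and color.
        dark = minimum_filter(img.min(axis=2), size=patch)

        # Atmospheric light A: brightest image pixels among the
        # top 0.1% of the dark channel.
        idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
        A = img.reshape(-1, 3)[idx].max(axis=0)

        # Transmission estimate from the dark channel of the
        # A-normalized image.
        norm = (img / np.maximum(A, 1e-6)).min(axis=2)
        t = 1.0 - omega * minimum_filter(norm, size=patch)
        t = np.clip(t, t_min, 1.0)[..., None]

        # Invert the haze model I = J*t + A*(1 - t).
        return np.clip((img - A) / t + A, 0.0, 1.0)

Real implementations also refine the transmission map (soft matting or a guided filter) to avoid halos around edges; this sketch skips that.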
The two projects seem unrelated at first glance, but the timing is interesting.
In particular, a fog machine will help emulate the haze and backscatter produced by water and the particles floating in it.
The original article, linked from the current one, has more pictures/content.
The one thing that is disturbing, though, is images with a visible "horizon" (for lack of a better word). I find the images that look like they were taken on land but don't have a sky where one would expect it somewhat uncanny.
As a primarily video guy, I always laugh to myself (sometimes not to myself) at the amount of effort photo editors spend on a single image. I remind them that the video world has to do that same level of work, except 24 times per second, multiplied by the number of seconds. Photoshop is cool, but Nuke is mind-blowing.
While this is true, video frames only have to stand up to scrutiny for ~40ms. You can take a lot of clever shortcuts when an onlooker only has a fraction of a second to spot the rough edges!
Not with this technique at least. Maybe you could train a CNN to do that...
The publication shows that it is not a simple colour correction: http://openaccess.thecvf.com/content_CVPR_2019/papers/Akkayn...
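Right: the key point is that the correction depends on a per-pixel range map, not one global transform. The paper's revised image formation model is, per color channel c:

    I_c = J_c * exp(-beta_D_c * z) + Binf_c * (1 - exp(-beta_B_c * z))

A toy inversion in NumPy, assuming constant attenuation coefficients and known backscatter parameters (the actual method estimates these from the image, and lets beta_D vary with range):

    import numpy as np

    def invert_underwater_model(I, z, beta_D, beta_B, B_inf):
        # I:      observed image, float (H, W, 3) in [0, 1]
        # z:      per-pixel range map in meters, shape (H, W)
        # beta_D: direct-signal attenuation per channel, shape (3,)
        # beta_B: backscatter attenuation per channel, shape (3,)
        # B_inf:  veiling light at infinite range, shape (3,)
        z = z[..., None]  # broadcast range over the color channels
        backscatter = B_inf * (1.0 - np.exp(-beta_B * z))
        direct = np.clip(I - backscatter, 0.0, 1.0)  # strip backscatter
        return np.clip(direct * np.exp(beta_D * z), 0.0, 1.0)  # undo attenuation

Since z sits in the exponent, two pixels with the same observed color but different ranges get different corrections, which no global curve or white-balance tweak can reproduce.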
It's a lot of effort for very little, if any, visible difference.
That simple, huh?
> Once trained, the color chart is no longer necessary. As Akkaynak helpfully explained on Reddit, “All you need is multiple images of the scene under natural light, no color chart necessary.”
> Just a clarification: the method does NOT require the use of a color chart. That part was not clear in the video. All you need is multiple images of the scene under natural light, no color chart necessary.
Once the chart is back to its exact color, the image can be considered corrected (at least for this distance, illumination…).
If the algorithm brings the chart back to its true colors at several distances and in various conditions, then it can be applied confidently on images without a chart.
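A minimal sketch of that validation step, assuming you already know where the chart patches are in the corrected image (patch extraction and the reference values are inputs here; nothing below comes from the paper):

    import numpy as np

    def chart_rms_error(corrected_img, patch_coords, reference):
        # patch_coords: list of (row_slice, col_slice), one per patch
        # reference:    (N, 3) ground-truth linear RGB of each patch
        measured = np.array([
            corrected_img[r, c].reshape(-1, 3).mean(axis=0)
            for r, c in patch_coords
        ])
        return float(np.sqrt(np.mean((measured - reference) ** 2)))

    # Run the algorithm on shots at several distances and in various
    # conditions, each with a chart in frame, and check the error
    # stays small across all of them:
    #   errors = [chart_rms_error(img, coords, REF)
    #             for img, coords in corrected_shots]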
They are only talking about reefs, though, which means tropical ocean water. Corals don't thrive in muddy water.
Sure, it's more aesthetically pleasing, but it's already distorting reality.
Personally, I think photos should be the most accurate representation of what a healthy naked human eye sees: no beautification, no boosted contrast or oversaturation, even if that makes photos more appealing. If you twist reality, where will it stop? Where is the border of what is too much?
This is not what the eye sees, but in general, very, very few photos are. Do you also model the fovea in photos you take and make everything blurry around the center in the pursuit of greater realism? Do you shun long exposures of the night sky, as our eye can't see the milky way in that detail anyway?
It's a misguided goal in general, I'd say. Some photos aim to realistically depict a scene, some aim to artistically depict a scene, some aim to depict a scene to evoke certain emotions and either realistic or artistic might be valid choices for that. The list can go on, I guess.
In this case, being able to see and convey the actual colors of what is being photographed has scientific and artistic value.
Under what lighting?
I've done this in Darkroom and Blender (with the node compositor).
An alternative could be to make some similar-looking objects, like fake coral blocks, submerge them in the sea from a boat, and compare the corrected photos with photos taken in air beforehand. (Wet surfaces have a different look, it would be important to use diffuse light instead of direct sunlight, and there may be other technical issues with the comparison.)