I had the opportunity to try to implement a "novel" algorithm for image downscaling. I contacted the authors - one replied that he couldn't reveal the source code, and the other didn't reply. So I went ahead and invested about two weeks implementing and optimizing it to the point where it worked - but the results were far from what we wanted. If they had just supplied a demo program where I could check whether it worked for our case, that would have been much better.
Academic work would have explained the benefit of the algorithm. It would have presented it in a side-by-side comparison with common algorithms and explained its pros and cons against them. It would have covered all aspects - quality, size, and performance, to name a few. It would have told me whether I could use this new algorithm in my field, and why (not). None of this is present in the current work shared here. You say this is half done; I would say this is not even 20% done.
To end on friendlier terms, I completely agree that more academic work should be made available like this. Yet, I know the pressure in that world and can understand keeping it to yourself for a while. More often than not you will have to squeeze a couple of other papers in the same field out of it.
Furthermore, the traditional paper would have to make a lot of guesses about the kinds of images people were likely interested in and the range of characteristics that mattered. What kind of noise spectrum do your images have, how does increasing contrast affect things, what about the spatial frequency distribution in the images themselves, and so on... Different fields have radically different "typical" images, and attempts at covering a reasonable range of them in traditional papers were necessarily very limited.
Instead, I see this model of publication as exploiting the possibilities of the 'Net to allow more effective communication and collaboration. And it is publication: it is making public, which is what makes the difference between science and alchemy... if there had been a "Proceedings of the Alchemical Society" we'd have had chemistry a thousand years ago.
What this model of publication does not (yet) have is a reputation mechanism, but it isn't clear it needs one, because you can see the results (and the code) for yourself. As such, I think the author has not only done something interesting in the image compression space, they are pointing the way to the future of scientific publication.
Measuring this model as if it could be described as a certain amount of progress along a line toward the old endpoint is mistaken. This is a paradigm shift, and the models are incommensurable.
The original post is certainly interesting, but that doesn't mean it extends our knowledge of image processing. For example, see this 20 year old paper that proposes the idea:
This is something peer review would pick up on... That said, I don't mean to discourage the author. It's a great idea and nicely presented!
I completely agree with the rest of your comment. Depending on the audience, you might prefer one approach over the other. A person searching for an algorithm to use in a (large) production environment will most likely look for traditional publications (as you called them), while researchers might look for the other type described.
That leaves the discussion of whether you can speak of academic publications when all the contents required for a publication are missing. ;) I would call the former "academic work", and the latter "academic publications".
And then it only works on grayscale images. Maybe because it's easier to get funding for medical images. Just applying the algorithm to each color channel separately leads to color fringing when they get out of sync.
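The fringing happens because an adaptive algorithm can make different local decisions in each of the R, G, and B channels. The usual workaround is to run the grayscale algorithm on the luminance channel only and downscale the chroma channels with something simple, so all channels stay in sync. A minimal sketch with numpy, assuming BT.601 full-range YCbCr and using a 2x2 box filter as a stand-in for the actual grayscale algorithm (which is not public):

```python
import numpy as np

# BT.601 full-range RGB <-> YCbCr conversion (assumed color space,
# not from the original post).
def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycc):
    y, cb, cr = ycc[..., 0], ycc[..., 1] - 128.0, ycc[..., 2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.stack([r, g, b], axis=-1)

def grayscale_downscale(channel):
    # Placeholder for the grayscale algorithm: plain 2x2 box average.
    h, w = channel.shape
    c = channel[:h // 2 * 2, :w // 2 * 2]
    return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def downscale_luma_only(rgb):
    # Run the "smart" step on luma; chroma gets the same simple filter,
    # so the three channels can never drift out of sync.
    ycc = rgb_to_ycbcr(rgb.astype(np.float64))
    y2  = grayscale_downscale(ycc[..., 0])
    cb2 = grayscale_downscale(ycc[..., 1])
    cr2 = grayscale_downscale(ycc[..., 2])
    out = ycbcr_to_rgb(np.stack([y2, cb2, cr2], axis=-1))
    return np.clip(out, 0.0, 255.0)
```

In a real pipeline you would substitute the paper's algorithm for the box filter on the Y channel; the point is only that the adaptive decisions are made once, on luminance, instead of three times independently.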
Finally, usability, distribution, and performance are afterthoughts. I don't disagree, but it makes a huge difference.
Is this effect bad enough that it's still visible in other color spaces that use a luminance channel?
* what is the related work?
* can you put any bound on the error of the reconstruction?
* how does performance vary across resolution, noise, and content?
* how does that compare to the other state-of-the-art methods?
Without that, how do we know whether this is worth using?
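Even a back-of-the-envelope number would answer part of that. For instance, PSNR against a reference result is trivial to compute and gives a first-cut error comparison (a generic metric sketch, not anything from the original post):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the
    reference. `peak` is the maximum possible pixel value."""
    ref = reference.astype(np.float64)
    tst = test.astype(np.float64)
    mse = np.mean((ref - tst) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

PSNR is a crude proxy for perceptual quality, so a serious comparison would add something like SSIM, but even this much would let readers rank the method against bicubic or Lanczos on their own images.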
That could be the reason why they didn't want to release any source code.