It’s...not as useful as it first appears. It’s actually very computationally expensive to solve the mapping, and it can add quite a bit of space to the image to pre-compute the seam order (which is the proposal made by the original paper).
Each pixel can be tagged with a rank according to its seam number along each axis. If you have 4,000x4,000 pixels then each pixel needs a 12-bit integer value for each axis; and if you want the optimal map it’s another 1 bit per pixel. So something on the order of an extra 25 bits per pixel for a naive encoding, which roughly doubles the size of an 8-bit-per-channel image.
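The back-of-envelope arithmetic can be sketched like this (a rough cost model, not a description of any real file format; `seam_map_overhead_bits` is a hypothetical helper name):

```python
import math

def seam_map_overhead_bits(width, height, include_order_bit=True):
    """Rough per-pixel storage cost for a precomputed seam map.

    Each pixel stores its seam rank along each axis, which needs
    ceil(log2(dim)) bits per axis; optionally one more bit marks
    whether the horizontal or vertical seam is removed first.
    """
    bits = math.ceil(math.log2(width)) + math.ceil(math.log2(height))
    if include_order_bit:
        bits += 1
    return bits

# For a 4,000x4,000 image: 12 + 12 + 1 = 25 extra bits per pixel,
# against 24 bits for an 8-bit-per-channel RGB pixel.
print(seam_map_overhead_bits(4000, 4000))
```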
It also has no explicit scene awareness. The energy function essentially defines an implicit heuristic of scene importance, which is surprisingly reasonable until you’re removing >25% of pixels along either axis.
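For reference, the baseline energy function from the original seam-carving paper is just the L1 gradient magnitude of the image intensity, so "importance" is purely a local-contrast heuristic. A minimal sketch with NumPy (edge rows/columns handled by repeating the last sample; real implementations vary):

```python
import numpy as np

def energy(gray):
    """L1 gradient-magnitude energy over a 2-D grayscale array.

    High values mark edges/texture the carver tries to preserve;
    low values mark smooth regions whose seams get removed first.
    """
    # Forward differences, padding by repeating the final row/column
    # so the output has the same shape as the input.
    dy = np.abs(np.diff(gray, axis=0, append=gray[-1:, :]))
    dx = np.abs(np.diff(gray, axis=1, append=gray[:, -1:]))
    return dx + dy
```

Nothing in this function knows what a face or a horizon is, which is why the heuristic degrades once the low-energy "slack" in the scene is used up.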
I believe it shipped with Photoshop for a couple years (maybe still?) but in my experience it’s a bit more of an artistic effect than an automatic and scalable solution—and it would be really hard to make it run in real time at game frame rates. (I think the artifacts alone would make it not worth it for games.)