
SPIF – Streaming Progressive Image Format - k__
http://fhtr.org/multires/spif/spif.html
======
kig
Hi, author here. Wow, this was a while ago, I'd forgotten I wrote that.

There's a version of this that uses a directory of images and loads in a bigger
picture if you zoom in: [http://fhtr.org/multires/](http://fhtr.org/multires/)
(Note that, yes, it'd be better to have a tile map for large resolutions and
load in just the visible part of the image. And dump the hi-res tiles when
zoomed out.)

SPIF's intention was to throw out an "it'd be cool if browsers supported
something like this natively" proposal, since the browser knows best which
pixels of an image are needed for sharp rendering. For the webdev, the
experience would be to just put the image on a page and rest assured that it
looks good. Like with SVG.

Yes, loading JPEG2000 / progressive JPEG with stream truncation would be nice.

Images don't load on iOS? Probably some silly bug in my code.

Images can't be saved with right-click? That's probably due to using
revokeObjectURL after loading the image from a blob.
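
Roughly the pattern involved (a sketch, not the actual SPIF loader code):

```ts
// Sketch of the blob-URL pattern the demo presumably uses. The partially decoded
// image bytes are wrapped in a Blob, shown via an object URL, and the URL is
// revoked once the <img> has loaded, to free memory. After revocation the URL
// no longer resolves, which is why "Save Image As..." on it fails.
function showPartialImage(img: HTMLImageElement, bytes: Uint8Array): void {
  const blob = new Blob([bytes], { type: "image/jpeg" }); // content type assumed
  const url = URL.createObjectURL(blob);
  img.onload = () => URL.revokeObjectURL(url); // frees the blob, breaks right-click save
  img.src = url;
}
```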

~~~
ktta
How does it compare with FLIF?[1]

[1]: [http://flif.info/](http://flif.info/)

~~~
kig
The biggest difference is that FLIF is an image encoder, SPIF/Multires is a
container format.

So you can cram lossy 20x compression ratio JPEGs into Multires, optimized for
each resolution. Or you could put a simplified SVG for low-res use and a
detailed one for zoomed-in detail. Or hack it a bit and use Multires for
loading the right-resolution video for your page. The format is just a
container that tells the browser where to find the assets for each resolution.
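
To sketch the idea (the manifest shape here is purely illustrative, not the
actual SPIF header layout):

```ts
// Hypothetical manifest: each entry names an asset and the resolution it covers.
interface MultiresEntry {
  width: number; // max display width this asset is meant for (px)
  url: string;   // where the browser can fetch the asset (JPEG, SVG, video, ...)
}

// Pick the smallest asset that still covers the requested display width,
// falling back to the largest one available.
function pickAsset(entries: MultiresEntry[], displayWidth: number): MultiresEntry {
  const sorted = [...entries].sort((a, b) => a.width - b.width);
  return sorted.find(e => e.width >= displayWidth) ?? sorted[sorted.length - 1];
}
```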

FLIF is a lossless bitmap image encoder with progressive resolution
enhancement.

TL;DR FLIF is PNG++, Multires is automatic srcset.

------
jsingleton
This appears to be quite similar to FLIF
([http://flif.info/](http://flif.info/)) and to some extent BPG
([https://bellard.org/bpg/](https://bellard.org/bpg/)). There's no shortage of
image formats that are better than JPEG, and some have been around for quite a
while.

The problem is in software/hardware support, particularly in browsers. JPEG
has a lot of momentum. WebP is only supported in Chrome/Opera
([https://caniuse.com/#feat=webp](https://caniuse.com/#feat=webp)) and
IE/Android don't even support animated PNG yet
([https://caniuse.com/#feat=apng](https://caniuse.com/#feat=apng)).

I did loads of research on this for a web app dev book, which I'm currently
updating for the second edition. Browser image support hasn't changed much
since it was originally published.

------
morecoffee
> Tech Details

> The SPIF format starts with a header that tells the offsets and sizes of the
> images in the SPIF. The images are stored smallest first, but there are no
> image size restrictions apart from that.

So... two HTTP requests per image load? That is probably going to hurt more
than it helps. Also, that probably means Range requests, which don't have
great support. (For example, Python's built-in HTTP server, SimpleHTTPServer,
doesn't support them.)
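
For context, truncating the stream over plain HTTP would look roughly like this
(a sketch; the end offset would come from parsing the header, whose exact layout
I'm assuming rather than quoting from the spec):

```ts
// Sketch: fetch only the bytes covering the first few images via an HTTP Range
// request. `endOffset` would be derived from the SPIF header's offsets + sizes.
async function fetchUpTo(url: string, endOffset: number): Promise<Uint8Array> {
  const res = await fetch(url, {
    headers: { Range: `bytes=0-${endOffset - 1}` },
  });
  if (res.status !== 206) {
    // Server ignored the Range header (e.g. Python's SimpleHTTPServer):
    // we got the whole file instead of a partial response.
    console.warn("Range not supported, full download:", res.status);
  }
  return new Uint8Array(await res.arrayBuffer());
}
```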

~~~
jsjohnst
While I agree with your other point re: multiple round trips completely, the
part about Range requests feels misplaced. Any "real" HTTP server that anyone
would use for hosting images these days supports them. Yes, some simple
developer tools may not, but it's not hard to add if there's value in it.

------
sprash
FLIF ([http://flif.info](http://flif.info)) does this already but is far
superior in many ways. All it lacks is a JS implementation.

------
pornel
Nicer progressive display is possible in JPEG already:

[https://imageoptim.com/progressiveblurdemo.html](https://imageoptim.com/progressiveblurdemo.html)

It's only a limitation of libjpeg that smoothing of early progressive scans is
weak and incomplete, and thus looks very blocky. I wish browsers improved their
implementation of this.

Partially loaded DCT coefficients correspond quite well to maximum possible
image resolution, so it is known how much cross-block smoothing needs to be
applied to get a nearly-optimal smooth preview. There's even an implementation
of that idea:

[http://johncostella.webs.com/unblock/](http://johncostella.webs.com/unblock/)

------
vortico
Nice idea, but it has usability issues:

\- Right click -> View Image, Save Image, or Copy Link doesn't seem to work.
This might be an issue with Firefox, because the blob is stored _somewhere_.

\- No direct linking is possible, so I can't drop a link to the image into a
chatroom.

\- Zooming the page with Ctrl+Plus doesn't increase the resolution of the
downsized image.

I'd rather download the original image, even if it takes more time/bandwidth,
if these issues aren't fixed. And the CPU usage scares me a bit, if you
multiply this by ~100 images, which is very common on news websites.

However, this method is at least better than embedding the full 7360 x 4912
image. It took 0.8s for the page to load in my browser and 6.0s to download
the test.spif image.

I wonder if there is a way to use a progressive JPEG in a normal <img> tag (so
it displays progressively rather than once it has finished loading) and use
JavaScript to halt the download once a certain amount has downloaded.
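
Something like this, maybe (a sketch; whether the browser renders the truncated
progressive JPEG cleanly is the open question, and the byte budget is arbitrary):

```ts
// Sketch: stream a progressive JPEG with fetch, stop after a byte budget,
// and show whatever scans have arrived via a blob URL.
async function loadTruncated(img: HTMLImageElement, url: string, budget: number) {
  const controller = new AbortController();
  const res = await fetch(url, { signal: controller.signal });
  const reader = res.body!.getReader();
  const chunks: Uint8Array[] = [];
  let received = 0;
  while (received < budget) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
    received += value.length;
  }
  controller.abort(); // halt the rest of the download
  img.src = URL.createObjectURL(new Blob(chunks, { type: "image/jpeg" }));
}
```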

~~~
Asooka
The first and second are probably features for some people; the third is
solved, according to the source.

I wonder if it would be a good idea to revive plugins - back in the day we
used object tags with links to the plugin needed to render the media.
Maybe something similar can be done today with a link to some JS that does the
rendering. So you could have an img tag with MIME type image/x-spif and a link
to a JS (or, in the future, WebAssembly) handler in case there is no native
support.

Web browsers are basically operating systems now. We should be working towards
letting people safely install extensions.

------
userbinator
It sounds like a reinvention of JPEG2000 and progressive JPEG, which were
specifically designed to accommodate use-cases like this without having to
store separate independent images for each resolution, and allowing the data
stream to be truncated at any point to reach the required level of refinement.

~~~
jsingleton
Progressive JPEG is slightly different but JPEG2000 is very interesting. The
wavelet transform is pretty clever in how it arranges the frequency
components.

It's a shame JPEG2000 never took off, probably due to the patent issues. It
has a niche in medical imaging though.

I wrote my thesis on this topic, over a decade ago now. When doing image
recognition you only need to read the first small part of a JPEG2000 file to
get good results, which has obvious advantages. You can ignore the high detail
bits at the end.

~~~
conceptme
It's quite popular with archives as well, due to storage costs and because it
allows zoomable images with IIPImage
([http://iipimage.sourceforge.net](http://iipimage.sourceforge.net)). But
support is lousy: only Kakadu works fast enough. There are still many issues
with OpenJPEG, but it seems to be in active development.

------
ClassyJacket
Cool work! I'm a fan of anything that makes things more efficient.

Why not go a step further and have a server dynamically scale the image based
on the size the image will be displayed at, so it's always displayed pixel for
pixel and there's no waste? Anyone tried that?

~~~
matt4077
That's (almost) possible. You can obviously redirect requests to
image-????x????.jpg to a script that resizes the source image to the requested
size. To get the browser to request the correct size, you just have to add a
bunch of them to the srcset attribute. I say "almost" because you're limited
to giving specific sizes as options, and it's probably not a good idea to let
that list grow to a thousand entries.
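
Roughly like this (the resize-endpoint URL pattern here is just illustrative):

```ts
// Sketch: build a srcset attribute pointing at a hypothetical resize endpoint
// where image-{w}x{h}.jpg is rewritten server-side to an on-the-fly resizer.
function buildSrcset(basename: string, widths: number[], aspect: number): string {
  return widths
    .map(w => `${basename}-${w}x${Math.round(w / aspect)}.jpg ${w}w`)
    .join(", ");
}

// buildSrcset("cat", [400, 800, 1600], 1.5) produces:
//   "cat-400x267.jpg 400w, cat-800x533.jpg 800w, cat-1600x1067.jpg 1600w"
// which goes into <img srcset="..." sizes="100vw">.
```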

However, the approach given in the linked project is actually much better,
because

\- It works with only a single file, not one for every resolution. The latter
makes CDNs useless as the number of possible resolutions increases

\- On-the-fly-resizing is slow

\- A streaming solution could just continue to read at the point it previously
stopped if anything changes, such as the user clicking on a thumbnail to see
the full image

\- It also allows the stopping decision to be made at a later point. Currently,
a browser may have only loaded the first few lines of an HTML page when it has
to decide which of the available sizes of the image it wants to load. If it can
wait a bit longer, it has better data about, for example, the network speed to
the server and the layout of the page.

~~~
dawnerd
I used to have a setup like this years ago, but having a script that performs
something heavy like image manipulation led to it being abused to DDoS the
server. You basically have to set predefined sizes, but in that case you might
as well render the images on upload and save all the versions.

~~~
eknkc
What we do is add a signature to the URL. For example, image.jpg?w=500&sig=<hmac
of w=500>, so the server can validate that the request URL originated from us,
and we can add more image sizes by just generating new URLs with the correct
signature.

This solves the abuse issue while providing the same freedom for image sizes.
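
Roughly like this (a sketch using Node's crypto module; the parameter and key
names are illustrative, not our actual setup):

```ts
import { createHmac, timingSafeEqual } from "crypto";

// Hypothetical signing key; in practice this comes from configuration.
const SECRET = process.env.IMG_SIGNING_KEY ?? "dev-secret";

// Sign the resize parameters, e.g. sign("w=500") -> appended as &sig=...
function sign(query: string): string {
  return createHmac("sha256", SECRET).update(query).digest("hex");
}

// The resizer recomputes the HMAC and only resizes when the signature matches,
// so only URLs we generated ourselves trigger work.
function verify(query: string, sig: string): boolean {
  const expected = Buffer.from(sign(query), "hex");
  const given = Buffer.from(sig, "hex");
  return expected.length === given.length && timingSafeEqual(expected, given);
}
```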

Edit: We also use AWS Lambda for image resizing, so scaling is not an issue, and
generated images are cached on S3. Storage is cheap.

------
ngneer
Interesting. A similar concept is applied in microscopy, where multiresolution
TIFFs are the norm. Another option to consider would be Deep Zoom. The common
denominator is that users essentially want a differently zoomed version
of the plane depending on circumstances.

------
ricardobeat
How old is this? In practice, since there is no native support for this
format, ways 2 and 3 are the same (both JS-based). We now have srcset + media
queries that let us achieve this without any overhead.

------
azinman2
The image is very, very low-res on my iPhone (iOS 11 beta)... seems like it
prematurely stopped?

~~~
dawnerd
iOS 10 as well. Looks broken.

------
baalimago
Sort of pointless, no?

It would be for me, anyway.

