Using SVG as image placeholders (medium.com/jmperezperez)
983 points by FriedPickles on Nov 14, 2017 | 134 comments



(Developer of Primitive here.)

Love seeing people use my software in different ways! If this looks interesting to you, try it out:

https://primitive.lol/

https://github.com/fogleman/primitive

Also, I made a Twitter bot that posts samples every 30 minutes based on "interesting" Flickr photos using randomized Primitive settings:

https://twitter.com/PrimitivePic


Aren't you the guy who writes all the graphics stuff in Go too? How do you find time to do all this???


What do you do with all of your time? :)


Why Hacker News of course! Thanks for sharing.


I went ahead and used it yesterday to change the background image of my site to save a couple of bytes (426 KB) on page load. While you recommend using smaller image sizes, it still worked well on a 1920x1080 image and only took ~10 minutes to run in the background with n = 2500.

I did add an additional step (as lots of extra triangles are generated with a large n like this), which is to put it through SVGOMG[1] to optimize out some of those extra bits.

That cut the generated SVG image size about in half, which I think is an excellent amount of bytes saved for relatively little effort. You can see the result on my site linked in my profile, or compare the PNG and SVG files from the source on my GitHub in this commit[2].

[1]https://jakearchibald.github.io/svgomg/

[2]https://github.com/Adamantcheese/adamantcheese.github.io/com...


I suppose someone would write one eventually, but I don't really see the motivation to build or use libraries like SVGO for most use cases.

If you GZIP compress an SVG to SVGZ then you are going to save way more bytes, and if you have a half-decent webserver and a browser this side of 1999 it should do that automatically - https://en.wikipedia.org/wiki/HTTP_compression#Compression_s...
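For illustration, a minimal Node/Express sketch of that setup (assuming the `compression` middleware package; the directory and port are made up):

    // Serve ./public with on-the-fly gzip, including SVG responses.
    const express = require('express');
    const compression = require('compression');

    const app = express();
    app.use(compression());            // negotiates gzip via Accept-Encoding
    app.use(express.static('public')); // plain .svg files get compressed in transit
    app.listen(8080);

With something like this in place there's no need to pre-generate .svgz files at all; the server compresses each response as it goes out.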


Slightly off topic, but this is the first time I've seen a .lol domain :). Did you have any apprehensions about using it, like people might not take the product seriously?


Kinda, but I figured it would stand out so I went for it.


I love this; for budding visual artists it is a great way to see how basic shapes and seemingly simple colors come together to create evocative images.


Thanks - this is my favorite aspect of it as well!


This looks great! I would definitely pay the $10 for an iOS version. I can see myself playing around with this on an iPad Pro for hours, especially with drawing mode.


Don't really know how it works, though I wonder if some of the filters in Prisma are implemented in similar ways.



Your twitter bot sometimes produces images that look like unintentional art. Awesome job!


Well, that's the idea!


Honestly your twitter bot (and some of the other art bots) may be the best content on twitter. I thought it was cool that I could interact with your bot. Great work!


Hey, you're that guy who did "minecraft in 500 lines of python".


Awesome! Are you aware of ports to any other language?


I have a JS "port" (not a 1:1 reimpl): https://github.com/ondras/primitive.js


That's incredible


Very cool. Anything like this available for Windows?


You can download Go and run it yourself from the Windows command line. To make it easy, open a command window at the location your "go get" downloads to (usually a folder called "go" in your documents), then instead of using "primitive <options>", use "go run main.go <options>". Works like a charm.
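Roughly like this (the flags are from Primitive's README; the input/output names are placeholders):

    go get -u github.com/fogleman/primitive
    go run main.go -i input.png -o output.svg -n 100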


very cool, so much better than seeing images constructed from cat and dog faces


Well, this is awesome!


Clever, but I'd love to see browser support for FLIF make this kind of thing irrelevant.

> FLIF is lossless, but can still be used in low-bandwidth situations, since only the first part of a file is needed for a reasonable preview of the image.

> A FLIF image can be loaded in different ‘variations’ from the same source file, by loading the file only partially. This makes it a very appropriate file format for responsive web design.

The loading video on http://flif.info/index.html makes this clear.


The problem with FLIF here is that it's not remotely competitive when you're using a lossy format for both the low-quality and high-quality versions, which is the most common case for photos. UI elements tend to be small, so it doesn't matter.

In most cases, if you have extra bandwidth that you could spend on a lossless FLIF, you would get better subjective quality from a higher-resolution JPEG at the same file size.


If you need to use an image twice, once low-quality and once high-quality but not perfect, FLIF would let you load (say) 10% of the image in one place and 50% in another.

Are you saying that for the same bandwidth usage, you could load two separate lossy JPGs and they'd look better?

What if the fact that you've loaded the first 10% of the FLIF for the low-res version means you can reuse that data and start with the 11th percent when loading the high-res version?


> Are you saying that for the same bandwidth usage, you could load two separate lossy JPGs and they'd look better?

The tests I've done were along these lines (I forget the exact numbers I used):

* JPEG: 10kB + 90kB

* FLIF: 100kB

JPEG usually wins for photographic source material. I was unable to come up with “reasonable” parameters where FLIF would win, but perhaps someone creative can figure that part out.


You didn't mention the result of your tests, but I infer that you found the 90 kB JPEG to be superior to the 100 kB FLIF/PNG? That seems reasonable, if unfortunate.


Yes, for photographs. This shouldn't be surprising.

Note that the FLIF home page doesn't even claim that FLIF is superior to normal JPEG images... it only claims that it's superior to other lossless formats. For lossless formats, you can compare your desired metric (e.g. file size) for your corpus. For lossy formats, you can either fix subjective quality and compare size, or fix size and compare subjective quality. (Or compare some other metric, but these two are more common.)

These are completely different ways of evaluating compression algorithms, and it intuitively makes sense that different algorithms will be better if you evaluate them differently.

Consider that FLAC gives the best bitrate for lossless audio, but Opus gives a far better bitrate when you fix the subjective quality or a better quality when you fix the bitrate at reasonable rates.

Or consider that there is a wide spectrum of data compression algorithms, each of which performs the best depending on how you assign weight to compression speed, decompression speed, and compression ratio, and what is in your corpus. There is a surprising variety of new compression algorithms out there, some of which may be the best for your use case even though their compression ratios are significantly worse than other well-known algorithms (LZ4, LZFSE, Snappy, for example).

JPEG is designed for best quality at reduced bit rates, so it should not be surprising that it is good at doing that, even though it is old and newer algorithms are better.


Which is why we need a replacement for JPEG. While BPG / HEVC is a lot better, sometimes I wonder if we can do more. While I can shrink a 100 KB JPEG to a 50 KB BPG that looks better, what I really want is 100 KB JPEG quality at 30 KB or less.

But given that BPG or HELF isn't even being used yet, I know I am asking for a lot.


I didn't even know about BPG! BPG looks great.

Today I also learned that HEVC is another name for x265.


You mean HEIF?

It took 22 years to get from JPEG to BPG, it will probably be a while before we get something noticeably better :-)


Congratulations, you've reinvented the progressive jpeg.


I hope the same.

I love what people can do with vectorization, but personally, I'd be happier if there was an established standard for previewing content. As it is, a lot of previews don't work for users who don't trust the site enough to enable JavaScript.


If you've got JS turned off, isn't lack of image previews about the smallest issue you'll face?


Not really; it's actually one of the biggest ones. There are several categories of sites which fail due to reliance on JavaScript:

- Sites which should work perfectly but don't work at all (e.g. Blogger). These are so annoying because serving text & images is exactly what the web is good at, and requiring JavaScript to do it is just abusive and borderline evil.

- Sites which work perfectly fine, except that all the images are blurry, low-res placeholders. These are so annoying because they're actually serving images. Their developers know how to do the right thing, but obstinately refuse to do so.

- Sites where small things don't work. They are annoying, but at least one can read them. Ars Technica gets a special mention for the fact that its articles are perfectly readable but the comments are not. I chalk this up to incompetence and laziness.

- Single page apps which really need to be single-page apps. They're annoying because they would almost certainly be better as native apps, but they really are doing something that the static web can't do.


> They're annoying because they would almost certainly be better as native apps

I definitely disagree. If something is truly an application and can also be done on the web, I vastly prefer that over native apps. Not having to worry about different OSes, different machines, updating, deployments, etc. Webapps have significant advantages over native here IMO.


Ars Technica can be fixed with the following CSS loaded into Stylus:

.comments-row { display: inherit; }

Seeing that it's such a small change, it really makes me wonder why they don't already include it.


Disallowing JS is really only an issue when using "rich" web apps.

For reading articles composed of just pictures and text on random websites, running code is not (or at least shouldn't be) really necessary. In this light, lack of images is a big problem: the user is forced to choose between not seeing half the content or running untrusted code.


I assume you'll still see the images, just not the previews.


> browser support for FLIF make this kind of thing irrelevant

I'm not sure it would. The nice thing about a lot of these SVG options (or even the smaller images as well), is that they can be embedded into pages. So not only do you see the image faster, you also reduce the number of remote file fetches. FLIF sounds like you'd still have to hit another server to see anything, even if the thing you see would display before it finishes loading.
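As a rough sketch of that pattern (element IDs and paths here are invented), the inline placeholder only needs a few lines of JS to be swapped out:

    // The SVG placeholder is embedded in the HTML itself, so the only
    // remote fetch is the full-size image.
    const placeholder = document.querySelector('#hero-placeholder'); // inline <svg>
    const full = new Image();
    full.src = '/images/hero.jpg';
    full.onload = () => placeholder.replaceWith(full);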


> reduce the number of remote file fetches

This will become less of an issue as HTTP/2 spreads though.


> The nice thing about a lot of these SVG options (or even the smaller images as well), is that they can be embedded into pages

That's a good point. Though presumably you could embed FLIF data into HTML using a data URI - https://css-tricks.com/data-uris/


Can a data URI be used to store 10% of a file in HTML, but then fetch the rest of the file? I mean, you don't want to include the whole file.


You should be able to encode the first 10% of a file as a data URI, yes.

Maybe you could use something like `srcset` to say "and also load the higher-quality version from the server"? https://responsiveimages.org/


I think they mean, could you inline the first 10%, and then lazy-load the other 90% (taking advantage of the FLIF decoder's progressive rendering). Which I would have to imagine is a "no" (you would have to swap your 10% placeholder with a whole 100% remote version).


Currently, browsers don't support FLIF at all. If they ever do support it, what you're suggesting could be an excellent feature to add.


That's what I thought. I wonder what the advantage is at that point, over using, say, an inline jpg for the low quality version.


> I'd love to see browser support for FLIF

Wouldn't WebAssembly make waiting for browser support unnecessary?


For certain values of "unnecessary", yes. We've still got a few steps to go before we can decode an image with SIMD instructions (assuming FLIF can use those to speed up, though I imagine it can, as any modern image format ought to take that into consideration), and I'm not sure what the story is for pushing images efficiently to the browser, but you should at least be able to put something together that works in WebAssembly now and speed it up in various ways later.


Do we need a new format? I remember JPEGs had this kind of functionality back in the days of dialup; all the patents for that should have expired long ago.


Are you sure you're not thinking of interlaced GIF? That was used pretty widely for images in the days when the processing for JPEG could actually be serious work for a PC.


Progressive JPEG is a thing and was widely used in the 90s and early 2000s. Remember when images would load all pixellated at first, and then appear clearly?

https://www.thewebmaster.com/dev/2016/feb/10/how-progressive...


> Are you sure you're not thinking of interlaced GIF?

Yes, progressive JPEG has that property: each "pass" in the file adds detail, and you can stop at any point. Progressive JPEGs are more expensive to process but compress better than non-progressive ones. PNG also has an interlaced mode; however, it compresses less well than non-interlaced. Because of the way JPEG works, it also looks better early on, especially since PNG renderers do no interpolation on the early interlacing passes.

Interlaced GIF is actually pretty crappy as it's one-dimensional, so it has a "blinds" effect both during initial rendering and during refresh phases: https://nuwen.net/png.html
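For reference, converting a baseline JPEG to progressive is lossless and can be done with libjpeg's jpegtran (filenames here are illustrative):

    jpegtran -copy none -progressive input.jpg > output.jpg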


Possibly, I just remember being a grateful teenager when my adult images loaded progressively.

Wouldn't gif have been a poor format for that even with progressive loading?


Probably, but the last time I really cared about load times enough that interlacing or progressive downloading was an issue would have been back in the early 90s on dialup and well before progressive JPEG was even a thing.

IIRC, watching a JPEG paint on a 386/25 or maybe an early 486 was something that could likely be measured in full seconds or tens of seconds, not tenths of seconds.


Remember JPEG 2000 and its wavelets?

Second Life uses it ubiquitously, its progressive loading of textures is distinctive.


> Clever, but I'd love to see browser support for FLIF make this kind of thing irrelevant.

No, it will not make much difference. You rarely need lossless for anything but stuff like UI elements, which shouldn't be drawn as PNGs in the first place if they are simple graphics that can be drawn as SVG.


How will FLIF "not make much difference" here? Suppose we're talking about a profile image. The full, lossless file is 40KB. The video on the FLIF site (http://flif.info/index.html) shows a blurry-but-useful preview being displayed with the first 0.25% loaded (and possibly before). In this case, that's 100 bytes, which is a fraction of a single TCP packet.

Don't need the full resolution of the 40KB lossless file? OK, sure, stop loading after 20KB, or 10KB, or 1KB, or whatever seems like enough.

Yes, SVG is still better for things like icons, because you get clear, resizable images with a tiny data transfer. But for previewing a photograph, FLIF seems ideal, and doesn't require any additional tools.


Maybe, but people usually just recode avatars to low-res JPEGs.


FLIF makes re-encoding unnecessary. http://flif.info/responsive.html

> for every image, only one file is required, ever. The optimization can happen entirely on the client side.

> A FLIF image can be loaded in different ‘variations’ from the same source file, by loading the file only partially. This makes it a very appropriate file format for responsive web design. Since there is only one file, the browser can start downloading the beginning of that file immediately, even before it knows exactly how much detail will be needed. The download or file read operations can be stopped as soon as sufficient detail is available, and if needed, it can be resumed when for whatever reason more detail is needed — e.g. the user zooms in or decides to print the page.


Rendering SVG is considerably slower than PNG. If you know the target's display DPI, generating PNGs from the source SVG ahead of time is usually the better choice.


The animation is really fun.

Maybe I am unusual, but I have never in my life been annoyed at image load time, even in the dial-up days. I simply open a new tab or window and continue reading an article or writing code if loading takes too long. But when websites add "progressive" placeholders that don't decrease the load time but do increase CPU usage, that annoys me, since they prevent multitasked work from getting done. If I don't care about the images on the page, I'll continue scrolling, but when my scrolling becomes laggy while an image placeholder is being rendered, it's just unnecessary.

So think carefully about whether you want to take over the resources of your users' computers, trading feel for visuals. "Feeling" your website (interacting with it) is more important than its appearance at any point in the loading sequence.


Am I the only person who finds it ironic that a web page about keeping web pages small comes with multiple megabytes of Javascript?

It doesn't need it, folks. It's a static blog page.


Couldn't agree more. If you look at what's happening in the network tab of the developer tools, you'll see it's doing a lot more than providing just a static blog page.

Instead, every x seconds it executes another POST request with pretty much all the details they can gather (scroll from top, scrollable height, referrer etc.). As soon as you start moving your cursor, the new requests start adding up very quickly, with lots of new params such as "experimentName: readers.experimentShareWidget" or "key: post.streamScrolled".

It really is collecting every single interaction with this page. As it's provided by Medium I'm sure it's part of their data collection program.


I wonder if those experimentName parameters are to do with serving different example images to different readers. Below, u/Anhkmorporkian mentions an image of sneakers, but I saw images of a woman and of the Golden Gate bridge. Did other readers see different?


Much more likely the experiment is from medium itself, A/B testing different sharing UI.


It's a Medium article. The author didn't write the platform.


The Faustian bargain that writers of web performance articles make: use a minimal static platform that's fast and get minimal coverage and analytics, or use a slower fully fledged platform that offers a built-in audience, analytics, and features.


It loads 500 KB of JavaScript files, not including the two CodePen docs. Also, there is actual functionality on the page; it's not just a static page.


I'm on mobile right now, and therefore it's difficult to look at the source, but wouldn't the JavaScript be cached in most scenarios, while the images would be unique per page? On a side note, I have no JavaScript on my own blogs.


Besides uncached views (which I'm guessing a decent amount of the HN traffic for this submission would be), you also have to consider parse/eval time. Look at the flame charts in Chrome DevTools for some of your favorite sites and you'll probably see a few that take almost a second to parse/eval cached JavaScript. It will be even slower for mobile users.

Just because it can be cached doesn't mean it's free or even necessary.


I wouldn't count on anything being in the cache, especially on mobile. Even on desktop, I've just checked my cache and any Medium resource shows exactly 1 hit. With ~350 MB in my cache (default value), it's not going to stay long before being evicted.


gotta data mine the pleebs


And all the CodePens are also slow. Between clicking the "run" button and the script actually running, there's a gap of around 3 seconds.

Which kind of puts the opening paragraph into question:

> I’m passionate about image performance optimisation and making images load fast on the web.

Like, ok, why aren't you using a simpler blog engine to post? As a plus, if you were in control of everything on the page, you wouldn't even need to use codepen; just include the javascript directly in your page.


Perhaps they don't want to worry about deploying a simpler blog engine?


That's a common attitude, I was just pointing out how it puts the "passionate" part under doubt.

I've noticed the word "passionate" gets thrown around a lot cheaply.

It's seriously starting to lose all meaning.


This is the kind of thing that seems really neat (I love computer generated art) but ultimately the user experience isn't any better than just a solid rectangle, and you're sending like 1kb more per image to show the placeholder.


I disagree. If you have a user on a very slow connection on an image heavy site, the user experience will be enhanced significantly. Depending on how you implement it, you might even save them bandwidth. No interesting content for them in the current viewport and they scroll past the placeholders/go to the next page? Cancel the XHR requests you made for the full-size thumbnails/images and start anew.


I agree with GP. Personally, I hate websites that show a blurry version of an image before the final one loads. It plays with my eyes. I have to look away and peek to see if it's done before I can read on.

I wish there was a way to disable this functionality.


Ditto.

It’s even worse when combined with lazy scroll-based loading: now you’re guaranteeing that I’ll see the unpleasant version briefly (especially in Australia, I imagine, where there’s generally higher latency on such requests than in the USA—but I haven’t tried Medium or similar sites from the USA, so I’m not sure if it’s as unpleasant there).

It’s worst of all when combined with lazy scroll-based loading and an unreliable internet connection: I load pages in places where I know I have an internet connection, and then read them in places where an internet connection is unavailable. With lazy loading of these things, I can no longer be confident that it’s actually loaded everything I need. Same deal with Medium’s blocked iframes for things like CodePen—that just means that the iframe is not loaded when I need it to be.

I want less magic, not more, because we’ve proven as an industry that we’re not responsible with magic, and always manage to make a mess with it.


I didn't mean to refer to blurred images, but the unblurred outline SVGs in this article. I think the blurred ones are somewhat pointless too. The unblurred versions give you actual information as to what the image is. For instance, the sneakers image; if I didn't care about sneakers, I'd be likely to just scroll past it. In the blurred version, I might not be able to tell that they're sneakers.


Interesting: I didn't see any images of sneakers. I saw the Golden Gate bridge, and a vague shot of a woman. Are they serving a different set of images (and example js) to different readers?


It was in the GIF of the Gatsby 3 implementation in the embedded tweet. Here's a direct link to the tweet to save you some time:

https://twitter.com/nystudio107/status/920673966091534338?re...

Edit: Maybe they're not sneakers, but they're shoes. I just wear loafers, I really have no idea what any shoe types are called.


Downside: if it's an image that for some reason doesn't load, the user is confused because it doesn't match the caption.

I've had this happen with blurred images on some websites.


Yep, the main use case I imagine for something like this would be to show a blurred/tastefully-rendered <10 KB SVG _instead_ of a multi-megabyte hero JPG that still ends up looking horrible on 4K screens.


I'm more concerned about CPU time on mobile devices - rendering SVGs then blurring them is surely more intensive than other options.

Not saying it isn't worth it, but AFAIK no-one has profiled this.


I love the visual effect, but I have to wonder how much CPU/GPU is entailed in rendering 10-50 triangles, blurring them, and rendering the result to the page canvas vs. loading a pre-blurred image (which might be a "tiny" progressive JPEG), so I'd probably try to do this server-side first.

(Edit for clarity: my site currently generates <12 KB blurred placeholders from high-def photos and only loads visible images to be more mobile-friendly, and I worry about client-side rendering impacting battery life.)


JPEG has a thumbnail facility built in, I think. Did you try using a pre-created thumbnail scaled up? Or just progressive images?


I generate blurred thumbnails from my photos, and then output progressive JPEGs at 80% quality (which is low enough for my purposes and provides a smooth output)

Code is here:

https://github.com/rcarmo/sushy/blob/master/sushy/utils.hy#L...
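A rough Node equivalent of that pipeline, for anyone who doesn't read Hy (assuming the sharp library; filenames and sizes are illustrative):

    const sharp = require('sharp');

    // Tiny, pre-blurred, progressive placeholder from a full-size photo.
    sharp('photo.jpg')
      .resize(24)                               // shrink to a small intermediate width
      .blur(2)                                  // pre-blur so upscaled edges stay smooth
      .jpeg({ quality: 80, progressive: true }) // progressive JPEG at 80% quality
      .toFile('photo.placeholder.jpg');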

The thumbnails you refer to are part of EXIF data, partially designed to allow digital cameras to render previews inside the camera itself in more resource-constrained times.


I'd like to see some analysis of the psychological side of this. I'm not convinced these sorts of previews/placeholders are a positive thing, I find most of them rather distracting. The SVG approach is fine as a novelty, but to assume that this is the best approach from a user point of view is quite a big assumption. My intuition is that the hard edges of SVG are more distracting (attracting your eyes to something that isn't going to give them meaningful information), but that a blurred image messes with your eyes a bit more. I wouldn't really like either of them to be used on the majority of sites.

I prefer something very subtle to show that there is an image that is supposed to be there. If it is going to show a single color rectangle, I prefer it be translucent so as to attract even less attention.


Anecdata: in a newsfeed-type product at our company, we implemented content placeholders that communicate loading state instead of a traditional loading spinner. This reduced user drop-off significantly and bought us some time to work on the real problem under the hood, which was latency.

Aside from my anecdote, there are many blog posts and experiment results out there suggesting that this works, and it works well.


What kind of placeholders were they?

Also, I have suspicions about whether what you are measuring is 100% correlated with "better user experience." A lack of user drop-off CAN indicate a better user experience, but the concept of "clickbait" illustrates that you can gain short-term increases in attention in ways that both decrease user experience and contribute to driving away users in the long term. I'm not saying this sort of thing is the same as clickbait, but still. I'm sure that, in theory, you could put lots of things in those placeholder spots that would increase the number of people who wait for the page to load but drive away users in the long term (e.g. blurry nudes).


A while back I made something similar, using Voronoi diagrams to approximate an image: https://codegolf.stackexchange.com/a/50345/4162

Although that was just for fun, not to create image previews or anything of the sort.


What went into your decision making process for using Poisson disc sampling? Did you try other sampling methods? I'd love to know anything else you can share about how the preprocessing works.


Honestly, I was just messing around, tried this, and it worked. I used an ad-hoc modification of Poisson disc sampling I thought of myself to make certain areas denser / less dense. The preprocessing is also really ad-hoc, just by looking through functions found in scikit-image to form a heuristic that seemed appropriate.


How have people come to the conclusion that displaying a low-information version of the image as a placeholder results in a quicker perceived load? To me, all these effects just look rubbish and distracting, with no benefit at all, and certainly not the perception of speed. It's not like I can ever understand what the image really is before it's fully loaded anyway, with or without a fancy placeholder.


Then use it only for images below the fold?


Loading images as soon as they are within the viewport is already too late; you want to load them before the user gets there. Or you could just not mess with native browser loading in the first place, because all those requests and client-side calculations add up.

When you have image-heavy content, please load an image when I'm one or two page heights before it. That way, when I get there, it will already be loaded. You could then just use regular single-color background placeholders, because I (and, I would believe, you too) never intend to see them in the first place.
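Something along these lines with IntersectionObserver would do it (the data-src convention and the two-viewport-height margin are just one way to set it up):

    // Start loading images roughly two viewport heights before they scroll in.
    const io = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const img = entry.target;
        img.src = img.dataset.src; // real URL stashed in data-src
        io.unobserve(img);
      }
    }, { rootMargin: '200% 0%' });

    document.querySelectorAll('img[data-src]').forEach((img) => io.observe(img));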


Really cool use of SVG. I think the 10 shapes method sans the blur would produce the coolest placeholder images. There's also a pretty impressive level of detail in the 100 shape images. I might have to look into adding something like this into one of my own sites!


Yeah, I like the impressionist feel of the low shape counts. It gives just enough of an impression of the image being loaded that the gist still comes across if the image is slow to load or fails to load, but it's still rough enough to indicate that something may still be loading.


Another fun thing to do with SVG-as-images in the browser is easy animation of the SVG: use Primitive (for example), as noted in the article, to convert an image into SVG shapes, plus a tool like Vivus (https://maxwellito.github.io/vivus/) or Snap.svg (http://snapsvg.io) to animate each shape of the SVG.

Here’s a demo I made using Primitive + Vivus: http://minimaxir.com/2016/12/primitive/


SVG animation is amazing, but as a person who made in-browser animation and visualization his breadwinner, I can say that needing JS for it is a huuuuuge downside.

Effectively, you have to rewrite a big part of ActionScript's functionality in your browser to do really simple things.

It is greatly regrettable that browser makers have thrown out all declarative animation features and never thought of improving on them.


Is “declarative SVG animation scripting” really a primitive API you need the browser to support directly? “Rewriting a big part of ActionScript functionality” seems like the job of a JS animation framework, not the browser. Like the ones used by animation tools that previously compiled to Flash (e.g. Adobe Animate itself).


JS animations that approach Flash-level quality are hard to do.

It's like saying that you don't need native video support because you can use JS to interpolate the initial image.


The latter is false because the APIs for image manipulation don't give you features like hardware video-compression-codec decoding, and so you just can't really drive a <canvas> fast enough using JS to draw video on it.

The JS graphics+SVG APIs, however, do have all the right primitives exposed to let you do flash-level animation. It's not a matter of incapability; it's just a matter of nobody having coded the right framework, or the right framework (i.e. the one Animate uses) being proprietary and without an open-source attempt to clone it.

That doesn't suggest browser vendors should step in and put the capability into the browser, any more than the inability to do realtime 3D without a framework like three.js or a game engine like Unity's HTML5+WebAssembly engine suggests that browsers should create a common, native game-engine-like API.


I'm definitely a fan of this, though I wonder how small 1-bit GIFs or PNGs for these placeholder silhouette images would be.

There are probably more efficient ways of storing the vectors than SVG too, which would help the compression - it would be interesting to see how small these could get.


There are none that display cross-browser, however, which is an issue.


Which current browsers do you have in mind that don't support GIFs or JavaScript + SVG (or canvas)?


GIFs don’t support vectors, and the comment was talking about a more efficient format than SVG.

So what on earth are you talking about?


Once upon a time, Facebook had a similar idea for their app [1] (similar to the OP's use case, because the preview photo was to be included in the initial response). It makes use of the stock JPEG decoder but with a fixed Huffman tree and size to meet the 200-byte size limit. A careful implementation may also work on the Web, even without JavaScript.

[1] https://code.facebook.com/posts/991252547593574/the-technolo...


A real game-changing technique!

Why? To use progressive JPEG, you have to pre-recode images (if you don't have the money to dish out for an FPGA to recode on the fly) and store the recoded versions. With this, you don't have to alter the original image.

My favorite method was to use an img tag with blur: you programmatically add the image, wait until, say, 10% of it is loaded, and abort the request. It will stay that way. When you need to fully load it, you start loading again, listen for progress events, and set the blur accordingly.
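The comment describes XHR; here's a fetch-based sketch of the same idea (the 10% cutoff, URL, and MIME type are arbitrary, and this leans on image decoders tolerating truncated progressive JPEGs):

    // Download only the first fraction of an image, then show it blurred.
    async function loadPartial(url, fraction = 0.1) {
      const res = await fetch(url);
      const total = Number(res.headers.get('Content-Length'));
      const reader = res.body.getReader();
      const chunks = [];
      let received = 0;
      while (received < total * fraction) {
        const { done, value } = await reader.read();
        if (done) break;
        chunks.push(value);
        received += value.length;
      }
      reader.cancel(); // abort the rest of the transfer
      return URL.createObjectURL(new Blob(chunks, { type: 'image/jpeg' }));
    }

    // img.src = await loadPartial('/photo.progressive.jpg');
    // img.style.filter = 'blur(8px)'; // hide the missing detail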


I'm not sure why avoiding the preprocessing of progressive JPEGs is an advantage of this method, since you still have to process the images to generate the SVG files in the article.


> To use progressive JPEG, you have to pre-recode images (if you don't have the money to dish out for an FPGA to recode on the fly) and store the recoded versions.

Not sure why progressive JPG isn't the default. It's not any larger.


The rules of thumb seem to be that (a) for JPEGs under 10 kB, progressive may actually be larger, and (b) for mobile devices, the extra effort (CPU, memory) of progressive decoding may be significant, e.g. in taking time away from other parts of page rendering.


As stated in the article, generating the SVG to use as a placeholder can be very expensive.

I myself fiddled with the idea in one of my side projects and ended up using a technique where I save the dominant colors from an image and use them to create a CSS gradient as the placeholder. The effect can be seen on https://epicpxls.com
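The placeholder itself ends up being a couple of lines of CSS (the colors here are invented for illustration):

    .placeholder {
      background: linear-gradient(135deg, #6b8cae, #2e3a4d);
    }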


I wonder how many users here know about progressive JPEG, and about the trick of partially loading a progressive JPEG file?


What is the trick? I do know about progressive images and use them; others argue it's bad UX seeing the shitty version first.


You create Image objects from images loaded with XHR that are, say, only 10% loaded, then you blur them with SVG or native CSS filters.


For small icons and images, this is fine and actually pretty cool. However, the rasterization cost of SVG into large images can be significant on mobile device CPUs. It won't speed things up much if the device is rasterizing a lot. That should be taken into account if you're mobile first or building a mobile app.


If you want to see this in action in Gatsby, check out https://using-gatsby-image.gatsbyjs.org/traced-svg/

It's super easy to integrate this into your site w/ our Image component & GraphQL fragment.

See the source code for the page: https://github.com/gatsbyjs/gatsby/blob/master/examples/usin...

And component documentation https://www.gatsbyjs.org/packages/gatsby-image/


This looks interesting. Here's a shameless plug if somebody needs tracing:

ImageTracer is a simple raster image tracer and vectorizer that outputs SVG, 100% free, Public Domain.

Available in JavaScript (works both in the browser and with Node.js), "desktop" Java and "Android" Java:

https://github.com/jankovicsandras/imagetracerjs

https://github.com/jankovicsandras/imagetracerjava

https://github.com/jankovicsandras/imagetracerandroid


Reading through the comments I'm clearly in the minority, but:

For many of these I like the SVG as much as the full image, and start to wonder if the preview/placeholder makes having the actual image irrelevant.


I think the silhouette version is the best in transitioning from the summary to the full detail.


This is an interesting technique for stills; I'd be curious to see it applied to video...


It's been done! This one gets progressively more detailed:

https://www.youtube.com/watch?v=PyAkgS6Xl1Q

Here's one someone else did: https://vimeo.com/210854333


Are those SVGs actually smaller than JPEGs included using data URLs?


> Actually, the code for the SVG with 10 shapes is really small, around 1030 bytes, which goes down to ~640 bytes when passing the output through SVGO.

> The images generated with 100 shapes are larger, as expected, weighting ~5kB after SVGO (8kB before).

Looking at images that randomly appear on my hard drive:

- A 1 KB PNG image is tiny, 45x30 pixels. Just the Base64 part of its data URI version is 1,476 characters in length

- A two-color 152x30px PNG image containing simple text and a logo is already 3 KB. Its Base64 is 3,344 characters in length

I don't have any small JPEGs though. The smallest I have is a selfie, 960x960 pixels. It's 79KB in size.

My guess is that you probably can produce a tiny JPEG/PNG that would beat SVG, but you'd have to play around with a lot of settings for it: reduce quality etc.


The only savings you get from data URLs come from less HTTP/whatever overhead. It's like embedding the image in the source so everything gets loaded in one request. Base64 is not compression.
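(For scale: Base64 encodes every 3 bytes as 4 characters, so it inflates payloads by about a third before transfer compression. The 1,476-character string mentioned above decodes to 1,476 × 3/4 = 1,107 bytes, which matches the ~1 KB PNG.)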


Reminds me of that old mid-90s fractal image format that never took off. When you zoomed in, the photos started turning into shapes of colour blocks rather than raster pixels.


Is it just me, or does anyone else think the LQIPs on Medium are annoying? They become sharp only after a long wait, and at times they never load at all.


Looks interesting. Any interest in building a SaaS model on top of this?


Am I the only one who found the thing awkward and off-putting?



