Hacker News
The specification for native image lazy-loading is merged into HTML standard (github.com)
281 points by saranshk 5 days ago | 176 comments

If you are going to use this, keep in mind the advice given... set the height and width attributes of the image to prevent the page jumping around for lazily loaded images.

Which means, if you accept user generated content or images from any CMS or other source you really should be obtaining the image dimensions and using them in the HTML.
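In markup terms, the advice amounts to something like this (file name and dimensions are made up for illustration):

```html
<!-- Explicit width/height let the browser reserve the image's box
     before it loads, so lazy loading doesn't shift the layout. -->
<img src="article-photo.jpg"
     width="800" height="600"
     loading="lazy"
     alt="Article photo">
```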

This. It's bad enough already, pages can jump around many times if you start scrolling before they're "done" (which is hard to tell nowadays anyway). News site main pages tend to be the worst. Clicking through to a different article because the page re-rendered under my finger really gets my dander up.

I installed NoScript recently and was surprised by how many sites load low-resolution placeholders and then swap them out for high-res versions via JavaScript. I'm assuming that's to avoid this jumping around.

That's probably an effort to load faster, maybe serving a leaner version for mobile/whatever.

To actually prevent jumping (including on lean image load), I believe it would be more effective to simply set width and height.

I think it's a mix, but most of the preloads I've encountered aren't just lower resolutions; rather, they look like a smear of color, like someone used a Gaussian blur filter with a 1000-pixel radius.

Yeah, I could understand a 128x128 pixel version or something, but when it's 10x10, why are you even bothering?

Isn't that a thing of the past, with scroll anchoring being a feature of most major browsers?

Sadly, no. I experience many pages jumping around wildly. Most often, the culprit seems to be ads that arrive late. Sometimes it seems like pages only request them when their intended position scrolls into view, so they pop up in the middle of the screen, and the browser must move either the upper half up or the lower half down to make space.

It's pretty depressing that so many otherwise modern UIs are totally broken whenever you're not using an ad-blocker

I always assumed this was a dark pattern designed to get people to click on lucrative links swapped in at the last minute.

Brownian motion plus commercial (or other) pressure can produce things that look like dark patterns. Very similar to evolution in biology.

Assume that developers make essentially random mistakes or changes. Whenever anything happens that has a negative impact on metrics management cares about, there will be pressure to find out what went wrong.

For any random change that has a positive impact on measured metrics (e.g. weird page loading that makes people accidentally click on more lucrative links), there will be no pressure to revert it.

No one needs to consciously look for and implement dark patterns for this to work out.

It's very similar to how confirmation bias makes people question news they don't like until they find enough evidence to disregard it, while simply accepting news they do like.

How/Why would I do this with responsive/fluid layouts and dynamically changing image sizes?

> How/Why would I do this with responsive/fluid layouts and dynamically changing image sizes?

You don't.

You're not required to use it on every image. It's a completely optional parameter that you add to individual images that might benefit from it.

You can use it on navigation icons that don't change size. Maybe in some of your boilerplate in the footer. Or in some sidebar content or a testimonial that you control. But if you have images of unknown size populating the middle of the page, don't use it on those.

But isn't that exactly the use case of lazy loading and also the source of all those problems?

The 1.2 kB icon in the footer is not really a problem; the xxx kB article image some author uploaded with unknown dimensions is.

Those things will make my page jump around and will need to be lazy-loaded.

It should be possible in most backends to get size information from the image at the time of uploading. A pair of integers shouldn't be a lot of data to store as metadata for your HTML rendering process.
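As a sketch of that idea: for PNG, the dimensions sit at fixed big-endian offsets in the IHDR chunk, so a Node.js backend could read them at upload time without a full image library (an illustration, not production-grade validation):

```javascript
// Extract PNG dimensions server-side at upload time, so the <img>
// tag can later be rendered with width/height attributes.
// PNG stores dimensions as big-endian 32-bit ints in the IHDR chunk:
// bytes 16-19 are the width, bytes 20-23 the height.
function pngDimensions(buf) {
  const PNG_SIGNATURE = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);
  if (buf.length < 24 || !buf.subarray(0, 8).equals(PNG_SIGNATURE)) {
    throw new Error('not a PNG file');
  }
  return { width: buf.readUInt32BE(16), height: buf.readUInt32BE(20) };
}
```

Other formats (JPEG, GIF, WebP) need slightly more parsing, which is why real backends usually reach for an image library instead.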

You would set some constraints on the image dimensions at the different breakpoints. For example, you can use percentages if you want a fluid layout. It's really about telling the browser how much space to allocate to the image before it's loaded. This is an old problem; pages jumping around used to happen on slower connections too.

OK, my image has 100% width on screens up to 800 px wide; what height value do I set? You simply don't have that information at the time the DOM is built.

The idea of setting fixed dimensions feels like it comes from a time when we did designs with tables and "optimized" sites for 1024 x 768 pixel screens.

That was the time when you could tell that this image should be displayed 200 x 200 px in size and nothing else. And it fit. One way or another.

I would approach this thinking in terms of a known aspect ratio of the images and use a technique like this:
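Roughly, the padding-bottom trick, sketched with inline styles and illustrative values:

```html
<!-- The wrapper's padding-bottom is height/width as a percentage
     (75% for a 4:3 image); percentage padding is computed from the
     width, so the box scales fluidly while holding the image's spot. -->
<div style="position: relative; height: 0; padding-bottom: 75%;">
  <img src="photo.jpg" alt=""
       style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;">
</div>
```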


Make no mistake, this is a silly problem to need to hack a solution for. So I would welcome lazy-loaded images with space allocated before rendering, combined with responsive sizing.

Oh, constraints! These cute little math things that always work and are fundamental to the definition of layout geometry.

I doubt we’ll ever see them built into a browser, because all nice generalized things never get there.

Use the browser's onresize event to work out what changed and modify the width and height attributes accordingly. Alternatively, design with lazy loading in mind and use screen relative units (eg vw, vh, vmin and vmax).

Screen relative units are not possible to be used here - you have no idea how big the image will be on the screen, since you don't know how big the screen is.

Using the JS onresize event is a no-go; if JS is still needed, the spec is worthless. Lazy loading images with JS worked just fine before; this has to be about not needing special JS for this feature.

No, parent is right: It's impossible to satisfy that recommendation, usually.

> Screen relative units are not possible to be used here - you have no idea how big the image will be on the screen, since you don't know how big the screen is.

Screen relative units represent a percentage of the screen size (technically speaking it's really the viewport). You don't need to know how big the screen is; you're telling the browser to use, for example, 25% of the viewport height when you set something to 25vh. If you want 4 images vertically, that's really useful. That way the browser knows it needs to put a box 25% of the screen height in the DOM where the image will be loaded later, so there's no layout thrashing.
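In CSS, the 25vh idea is simply (selector is illustrative):

```css
/* Four images stacked vertically, each reserving a quarter of the
   viewport height before it loads, so there's no reflow on arrival. */
.gallery img {
  width: 100%;
  height: 25vh;
  object-fit: cover; /* crop rather than distort once the image loads */
}
```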

> Using js onresize event is a no-go, if JS is still needed the spec is worthless.

The parent suggested they would want to change the width and height attributes of the image when the user resizes their browser. That's exactly what the resize event is for. If you want to display a fixed-size image, you can use fixed width and height attributes. If you want the image to scale with the viewport size, you can either use units that represent a relative size or use JS to modify the attributes. Heck, you can even use media queries to mix and match and have fixed sizes on some devices and relative ones on others.
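A sketch of that mix-and-match (breakpoint and selector are made up):

```css
/* Fixed size on small screens, viewport-relative above 800px wide. */
img.hero { width: 320px; height: 240px; }
@media (min-width: 800px) {
  img.hero { width: 50vw; height: 37.5vw; } /* keeps the same 4:3 ratio */
}
```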

> Lazy loading images with JS worked before just fine...

Except you needed JS to do it. With the new HTML spec you get the benefits of basic lazy loading without needing the additional weight of some JS code. That's good.

But you never know that an image should cover 25vh! That depends on the size of the viewport in total, the resolution and the specifics of the layout situation, things like text size. And almost always the width of the viewport decides image width and thus the height of the image.

And that's before resizing anything.

This totally misses the point. I can't run all those calculations while rendering a website. The browser needs to be intelligent enough not to jump around the viewport when an element finishes loading. Just fix the viewport and attach at the top and bottom.

And relative units don't fix this either, as that information still lacks the aspect ratio of my image. If it's 100vw wide, how high is it? I don't know without fetching the image, doing calculations, and dynamically updating the DOM with that information.

That sounds like a problem that should be fixed in CSS. Ideally, it should be possible to specify that an image should have a width and/or max-width of 100vw, 100% of its parent, or whatever, taking box-sizing and paddings into account... and then specify that its height should be a certain percentage of its own width. The server already knows the aspect ratio, so it can deliver this information in a 'style' attribute.

I haven't been following the latest developments in CSS for a while, so maybe it's already possible to do this?
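For what it's worth, CSS has since gained an aspect-ratio property that does pretty much this; a sketch with illustrative values:

```css
/* The server emits the ratio it already knows (inline or per-class),
   and the browser reserves the box before the image loads. */
img.article-image {
  width: 100%;
  height: auto;
  aspect-ratio: 4 / 3;
}
```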

I do this in production in a few places by setting a div with no content, or only absolutely positioned content, to have a padding-bottom equal to the aspect ratio in %. That gets calculated from the width, so if the image is 4:3 my ratio is 75%; 200px wide becomes 150px tall as well. Then I set the image as a background of the div, or absolutely position it inside to fill the space, and it's responsive to the layout while also reserving space for itself.

This reminds me of all the icky hacks "everyone" had to do going back years. (remember the star hack?)

I am not going to do this. This is such a pervasive issue, and with an _endless_ difference in screen sizes and millions of pages of content... not going to happen.

This kind of problem is exactly what the browser is best at solving. A problem everyone has on most pages on the internet.

Refusing to specify your image sizes denies the browser the information it needs to solve this problem.

Maybe, but I think this is a solvable problem from the browser's perspective. In my experience building websites, the need for image dimensions is near zero.

1. The penalty for not specifying image dimensions is nearly insignificant, because almost _all_ images that you would put width/height on come from content managers, not designers.

2. Mobile content: on a phone the content moves down after the image is loaded (everything is just a single column), which is preferable to a big empty space. So I prefer _not_ including dimensions on user content for phones.

3. Responsive design: All CSS I have been using for years now has "max-width:100%".

Therefore (since a lot of traffic is mobile), for most (rough guess) images loaded from sites I've worked on, the dimensions are recalculated as soon as the image loads anyway.

4. Srcset: Multiple possible images downloaded that are _chosen_ by the browser at run time. You already have to provide dimensions. But what if they aren't exact? Go back to #1.

5. Web design: I can't even recall the last time I put an image in a design using the <img> tag that affected layout. (maybe if you go back to the 90s this would have mattered)

Instead of specifying how big your images are in html (or css), you are proposing the browser should magically guess how big your images are going to end up? Or should it not render anything until all your needless images are downloaded?

Is there an actual problem in identifying what images correspond to your markup and adding an appropriate attribute? (asking, not a web dev)

AFAICT the problem is solved, this is the necessary protocol - not even a bad one, use it and it will work.

If you can give the browser the image size with JS, then the browser can figure it out on its own.

Rarely do we specify the width of many, many objects and they display just fine. (consider tables)

By adding an attribute I meant server side, not client-side if that's what you mean.

If you defer (down)loading of images, and layout depends on unavailable size/aspect-ratio of an image, you get layout updates - same would happen if you lazy-loaded contents of tables.

What is the problem you would like solved and what do you propose for the browsers to do? Ancestor post was about avoiding layout changes as new layout-critical information arrives. Judging by your sibling comment, you seem to prefer layout updates with content popping up.

Using the width/height value on an <img> tag feels simply archaic and more importantly doesn't add value to the site visitor.


1. srcset [0]: the browser chooses the size of the image, not the server.

2. CSS/Responsive images [1] : Almost all (I am close to saying 100%) of images I have loaded on websites have their width/height adjusted by CSS.

Layout updates are often unavoidable in most circumstances surrounding images; there is just no getting around it.

So instead of pining for the past, we should adapt for the future.

I feel like this argument is the same I had with print designers in the late 90s who made every page a full image because they couldn't handle not having full control over the layout.

[0] https://developer.mozilla.org/en-US/docs/Web/API/HTMLImageEl...

[0b] https://cloudfour.com/thinks/responsive-images-101-part-8-cs...

[1] https://developer.mozilla.org/en-US/docs/Learn/HTML/Multimed...

Do you mean like when you click on a search result in Google Chrome on Android, then go back and want to click the next one, but in the meantime an animation starts showing alternative search terms, so the click goes there instead? And you don't even need images for it.

Lazy loading can be frustrating when the connection between the website and your browser is slow. I come across this from time to time and the result is that every time you scroll down, you have to wait ages for the next set of images to load.

This is mostly solvable. I recently deployed lazy loading for schematic images on https://ultimateelectronicsbook.com/ and configured IntersectionObserver (with polyfill) to have a rootMargin equal to 3*window.innerHeight (see https://developer.mozilla.org/en-US/docs/Web/API/Intersectio...). Under most reading and even fast-scrolling conditions, the image will be loaded well before it is scrolled into view.
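For anyone curious, the pattern described above looks roughly like this (the data-src convention and selector are assumptions, not necessarily what the site uses):

```javascript
// Observe placeholder images and swap in the real source a few
// viewport-heights before they scroll into view.
// The `data-src` attribute convention is an assumption for illustration.
function setupLazyImages(doc, win) {
  const margin = 3 * win.innerHeight;              // preload ~3 screens ahead
  const observer = new win.IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      entry.target.src = entry.target.dataset.src; // swap in the real image
      obs.unobserve(entry.target);                 // each image loads once
    }
  }, { rootMargin: `${margin}px 0px` });
  doc.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));
  return observer;
}

// In a browser: setupLazyImages(document, window);
```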

But I will agree with the sentiment -- I find it incredibly annoying on pages that do lazy loading and don't implement this enlarged intersection check to make it appear seamless.

Nice website! Some unsolicited advice: just looking at the switches page of your website, most images are around 100 kB despite being visually fairly simple. Naively saving one as an indexed PNG resulted in no visual changes but reduced the size by a factor of roughly 2. I would recommend optimizing your images in addition to what you're already doing if loading time is important. SVG would probably do even better, but I don't know how you're making the images, so I don't know how much work that would be.

Anyone know any books that go in the opposite order? Start with interesting working projects, and add in the theory as needed to adjust the projects.

To learn photography I'd prefer starting with a camera, not with the theory of light, or the properties of atoms that allow glass to be transparent, or how lenses are ground.

Similarly, if possible, to learn electronics I'd like to start at a higher level and work down, rather than bottom-up.

Interesting, as I'm exactly the opposite.

Are there names for these two types of teaching/learning approaches? Is one "better" than the other?

"Bottom-up" vs "top-down" is the general name for the two approaches, although they're not specific to learning (also e.g. to project planning).

Edit: Actually I'm not sure that's really right, even example-led approaches are usually bottom up. For example, an example-led approach to programming would start with "hello world" and work up from there, leaving a full-blown example project (if there is one) to the end.

Completely non-ironically, I prefer a middle-out approach. I like to dive in past the "beginner" stage immediately, and go from there. If I feel like I need to go back and learn the basics, I can do so on a need-to-know basis.

Your book looks really great, how does it compare with the classic AoE?

Wow, thank you! I think Art of Electronics reads well for people who already have some theory and are ready for a more practical guide. Currently, I'm aiming to build in just a bit more theory/math/background than that, so it can be usable for an undergrad college course, for example. I think AoE really lives up to the "Art" part of the title, and that's probably why it's so beloved.

> equal to 3*window.innerHeight

So basically it's regular loading.

Not on long pages with little scrolling.

Looks like the perfect use case for infinite scrolling.

> Click here to see more pictures

Your book looks like a potentially great resource for teaching electronics to music tech students. I will follow your progress! Thank you for sharing.

Lazy loading as a standard would actually make it much easier to modify lazy-loading behaviour through browser config and/or addons. Now you are at the mercy of the website creator; with a standard and the ability to modify the default behaviour through a config, you could just switch off lazy loading if that's better for you.

This is what I'm hoping, since lazy loading is constantly an issue for me (where I live, my internet is frequently less than ideal). Unfortunately my issues are more apparent with content lazy loading, and less with image lazy loading.

This is hands-down the best part of this for me. I can't stand Medium.com and blogs like it, because the images are almost always smeary placeholders unless I want to wait for, like, a whole 60 seconds. And my internet connection is 18 Mbps! I'd turn off lazy loading in a heartbeat!

Won't turning off lazy loading just make the whole page load slower? You have to queue up connections at some level; saying not to do it lazily is just asking to download a ton of crap you could skip, or worse, putting it in contention with the rest of the stuff you are downloading. (Imagine opening several tabs on large stories with tons of images now set to eager load. The first thing the browser will likely do is ignore requests from other tabs. Then you are just back to the game of the page not loading until you get to that tab.)

My use case is predownloading an article before getting on the subway. Sure it's better to lazy load the images in the common case, but the rare cases where the equations and explanatory diagrams are just blurry smears when you are out of 3g range are really annoying. Having the whole article + images download in the background while you are on another tab would be just fine in those cases. As it stands, I have to pre-swipe an entire Medium article and then go all the way back up to start reading it, if I want to make sure I can keep reading it when the subway is in certain parts of the track.

I just have a bad/unreliable connection, and wish websites would GET while the GETtin is good.

Strictly, I'd agree it is better to preload all the content of the article. Sadly, that shouldn't include pictures that are only there to fill space. It would be a neat audit to do daily: how many of the images you downloaded actually contributed to your stories or your day?

But that's why it's so important that it be part of the HTML spec. Previously, it would have to be implemented in Javascript. Now, you could configure your User Agent to ignore the attribute and eager load.

That way you get the behaviour of always-load and the rest of us get fast loading pages. This is a Pareto move.

Do we really expect people to configure their User Agent to ignore this attribute? Seems like wishful thinking.

I want a browser that can predict when I'll have patchy Internet, loading and prefetching as much as possible right before.

I want pages that take less data to load. When most of my Kindle books download faster than most web pages, there is a serious problem.

The problems are mostly with images and JavaScript.

That is why I wish the next-generation image format would push the quality-to-size ratio. I think JPEG XL currently has the best chance of succeeding JPEG.

We should have faster Internet (5G and fibre) for everyone, and better image compression for everyone. Hopefully in 5 to 10 years' time this problem will be a thing of the past, assuming we don't bloat every website into a web app that tries to download 10MB before it even loads.

No there's not. Web bloat is a problem, but it's not useful to compare media-rich audio-video capable applications with the lightly formatted, text-only, Kindle book format.

I'm comparing content to content. Yes, some is media rich, but if I want media heavy, I'm likely using Netflix/Youtube or similar anyway. If I'm using a web page, I actually actively don't want media heavy content.

And you can get graphic novels on Kindle nowadays. To pretend it is just text is to undersell the formats now. Could I claim they are bloating? Certainly. Still have a long way to go before they reach current web browser bloat. Which only seems to be marching on. With no signs of restraint on what to pursue.

It’s also not useful for a web page you’re visiting for a few tens of kilobytes of text and a few images to be a media-rich audio-video capable application full of trackers and ads-that-might-be-malware.

Well, in my book webpages are there to display mainly text and maybe an image. So the comparison seems useful to me.

Google says the average size of a Kindle book is 2.6Mb. Web pages are often bad but they're not that bad. If you regularly find Kindle books download faster than webpages I suggest you talk to your ISP.

Web pages are in fact that bad. Testing just now:

* https://www.cnn.com/ is 1.3MB of data.

* https://www.nytimes.com is 5MB of data.

* https://www.reddit.com/ is 6MB of data.

* https://www.google.com/ is 400KB of data.

* https://www.facebook.com/ (not logged in) is 2MB of data

* https://twitter.com/home/ is 1-3MB of data depending on the ads it decides to show.

Those are all on-the-wire sizes, so after gzip compression and whatnot.

You've picked some very heavy websites, and only two (or three, depending on Twitter) of your six examples are over the 2.6Mb threshold, but fair enough. Popular sites tend to be full of images, and images are usually quite big.

However, this is good, because those are great examples of how browser lazy loading is going to help. When I loaded Reddit it pulled down 7Mb of data, but more than 5Mb was images. Looking at the content above the fold my browser downloaded about 4.5Mb that it doesn't need until I scroll. This change to the HTML spec will get all those sites first load below the average size of a Kindle book. Awesome.

All the examples are indeed over 2.6Mb as that's only 325kB. You probably mean 2.6MB instead but since you're using Mb consistently and the discussion is about transfer sizes I'm not sure.

If you felt mistreated I'm sorry. But it was a genuine question.

My Kindle books download in a few seconds. And then they are loaded completely and I can jump around in them rather well now. Properly linked books even let me jump from exercises to answers and back rapidly.

And there is some irony that I am likely tracked heavily on what I've read. Certainly on what I note. So it isn't like I'm clamoring for no scripts. Just find it odd that the push for web applications has destroyed the use of web pages.

Websites make gazillion requests and run a bunch of javascript that is in their (not necessarily your) interest. Download size is not the only driver for slowness.

The majority of those requests aren't blocking though, and nor is the JS code when it's parsed and run, so it won't have much of an impact on the parent's perception of how long it takes to load a page. There will be exceptions on sites that have been poorly made but those are unusual these days. If you sit with devtool's Network tab open you'll see a ton of stuff going on, but most of the bad stuff is after the page has rendered and become interactive.

Web apps are a different story because they often load a couple of meg of JS before anything happens, but so long as things are being cached correctly that's only a problem occasionally.

Surely browsers could already do that without a lazy loading spec. I mean, I’m pretty sure browsers already queue requests so that if you load a page with 10,000 img tags it probably won’t do 10,000 simultaneous requests.

Not simultaneous, but it does queue them with some parallelism (6 requests for Chrome, IIRC).

The page basically dies; any Ajax requests are at the back of the queue. Scroll halfway down such a page and you'll be waiting 5 minutes for the images in the viewport to load, since they are processed in order. You can roll your own lazy load, but it's a pain and often done poorly. A good browser implementation would be great for most pages (but 10k+ images might still require custom work).

If they're annoyed they'll go on the Internet and find out how to do it. Defaults should be useful to the many, and the few can reconfigure their UA.

Think about all the use-cases you could build. Maybe you could later set "I'm on a metered connection" at the OS level, have your browser pick that up, and not overuse your metered connection, etc. etc. Maybe you could have that in a browser extension that you manually toggle. No, this is far too useful.

I'm still skeptical. Is there any precedent for similar features being used today? The closest I can think of is turning off JavaScript, which this seems made to combat, honestly. And I doubt that has penetration worth talking about.

And again, just don't put so many giant graphics on a page. Problem mostly solved. With less tech and likely faster results.

Please read the spec. Your accusation of this being intended to defeat turning off JS is dealt with here[0]. It's only a +188 -66 patch and eminently readable. If you are discussing this without reading it, this is going to be an unproductive discussion.

0: https://github.com/whatwg/html/pull/3752/files#diff-36cd38f4...

The size is a non sequitur. This is something that otherwise requires JavaScript. Full stop. No?

Now, I don't actually think this is being done to sidestep people that turn off javascript. Mainly because I just don't think that is a market worth worrying about.

But I can't see why this is a feature that we need. Progressive loading, I could almost see. But by and large, high resolution images are just not compatible with high speed page loads. I'm not seeing how this feature actually changes that.

The link is to the part of the diff that enforces that it behaves identically to current unannotated tags when Javascript is disabled. Obviously if it's designed to be no different from the current state when JS is disabled then it can't combat disabling JS.

See, for me, disabling it would mean not eagerly loading anything. This literally puts the site into what is assumed to be the least optimized state if scripts are off. Ironic, as scripts-off should typically be faster.

Why should built-in lazy loading combat turning off JavaScript? It means fewer pages will be broken when you do that.

Nah. We only expect Firefox to listen to their user base and disable it everywhere.

And with some extra noise, maybe the Chromium based browsers follow.

If Chrome disables lazy loading by default then web sites will continue to just use javascript lazy loading as they do today. Disabling lazy-loading in your browser is inherently something that can only be useful if it's not the default.

The point of lazy loading images is generally to spare the browser from making a bunch of upfront requests to below the fold content, thereby making it so the page and content in the viewport is ready faster.

If images aren't ready by the time you scroll down, this is more an issue with the particular implementation of lazy loading, but the concept is sound.

I always thought the point of lazy loading was to save the company serving the page on bandwidth costs. AFAIK it's never been a good user experience to scroll down and then have to wait for images to load.

That's exactly what it's for. Modern browsers already optimize request order to grab above-the-fold stuff first. Below-the-fold images are loaded eagerly, but only after that.

Of course, they aren't perfect at this, because in general figuring out what's above or near the fold is equivalent to the halting problem (thanks, Javascript!). They do it well enough for the typical case, though.

Lazy loading is all about the hosting costs.

Given the prolific use of CDNs and the abundance of cheap bandwidth, saving the transfer of some images isn't as much of a concern as it may have been once upon a non-cloud point in time.

However, if 40-50% of your visitors bounce because your page takes 5s to load[0][1] due to your longform photo essay hammering the user's 4G connection with a bunch of images they won't even see 5 minutes into scrolling down, this is a real concern. Lazy loading shines in these moments.

0. https://royal.pingdom.com/page-load-time-really-affect-bounc...

1. https://developers.google.com/web/fundamentals/performance/r...

In any decent implementation, the lazy loaded images should be loaded before you scroll to them, unless you're jumping halfway down a page. They don't just load what's visible, but also the next section that it's expected you'll scroll to.

A decent implementation would require precognition or at least telepathy. You cannot know when the user scrolls where. Web pages are random access.

It also depends on the skill of the maintainer. Some unskilled maintainers joyfully upload high-res unoptimized images (especially if the CMS doesn't have constraints in place).

Forgive my ignorance, but wouldn't that mean that the browser has to load images in the correct order? So instead of delaying the load until the image is in view, just prioritize loading based on how far up/down the page an image is? If yes, couldn't the browser implement it without any additions to HTML?

It's not that easy. It's not always that you scroll gradually to the bottom of the page. If you click the scrollbar to the bottom, then the middle images shouldn't load. The same if you use hash navigation to the bottom.

If you refresh the page at the bottom, it will stay at the bottom, so no need to load top images.

Ok, but either way, isn't that logic that browsers could apply without changes to html?

The problem usually looks more like this: I load a page and start reading. When I'm done reading the text/content in the viewport, I scroll down. On a lazily loaded page, it now needs to download the content at the new viewport position. On a non-lazily loaded page, the time I took to parse the initial content allowed the browser to finish loading the page. So effectively the lazily loaded page takes the same 10 seconds to load, but rather than doing it while I'm occupied with other things, it holds on to most of those ten seconds and spends them when it interferes with what I'm doing, paying them out: a 2-second wait here, a 2-second wait there.

Obviously this is an implementation detail, and well-done LL could keep loading everything until it's done, but frequently it doesn't try until I scroll down. Non-LL pages aren't perfect either, and if a page tries to load everything at once, each thing added slows everything else down. But for some reason I never experience that on non-LL pages (or maybe they're good LL pages and I don't notice). It could be that browsers do some smart things (the obvious one would be queueing requests for page content, only allowing 5 or so open requests, and loading images in the order they are referenced in the HTML, so stuff at the bottom of the page loads last).

>The point of lazy loading images is generally to spare the browser from making a bunch of upfront requests to below the fold content, thereby making it so the page and content in the viewport is ready faster.

In practice, it has the opposite effect when the browser needs to figure out where on the page the image will end up at before attempting to load it.

This is a classic case of “But Sometimes”[1]. I guess sometimes lazy loading of images can be frustrating, but most of the times it’s not.

1: https://www.youtube.com/watch?v=GiYO1TObNz8

I hate lazy loading images all the time. Not just sometimes.

Lazy loading is always frustrating. It never loads fast enough.

Especially when you temporarily lose internet connection (for example on some parts of the subway), which is typically when you want your whole article downloaded beforehand, and especially when the images are not just decoration but part of the content, for example the equations that explain what you are trying to read about.

This is another good case for progressive images as at least you'll get something on the screen.

I suppose it takes ages because it's many(!) sets(!) of huge(!) images. Not sure if eager loading will help with load time, and it sounds like a huge waste if you didn't see all that enormous loaded stuff.

Now that this is in the spec we just need browsers to implement it, and then I can make a "shaking your phone loads things faster" plugin. People will notice a slight difference just often enough that eager loading worked out better. They'll waggle their phones all the time. It's going to look hilarious.

A fitting tribute to shaking your mouse in olden times to speed up page loading.

Yeah, back in the day UI was very single-threaded. Moving the mouse faster meant more processing time for the UI, since it had priority, so the experience felt faster.

This is exciting, because the sooner developers standardize on single method of lazy-loading, the sooner I'll be able to disable it with a userscript.

I hope browser developers themselves will add an option to disable it.

Except you should expect polyfills!

Lazy loading seems like something that doesn't need a polyfill--the fallback behaviour is that it just works correctly...

(This is not to say not to expect the polyfills, though)

If lazy load isn't needed, then there is no need to apply it in the first place.

But a lazy load is never truly needed, right? It's a way to save bandwidth.

Well, if saving bandwidth is a valid concern, then why isn't it also true for the polyfill?

I wish browser vendors, instead of native lazy loading for images, focused on a universal mechanism to lazily render arbitrary elements, including custom ones.

Browsers could profile how long rendering a particular type of element takes on a given website, and optimize render triggers to provide a seamless experience while conserving bandwidth.

Currently lazily rendering custom elements requires a fair chunk of IntersectionObserver boilerplate code, and beyond that any adaptability to user connection seems too complex to even consider.
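For reference, that boilerplate is roughly the following sketch (the helper name and the 200px rootMargin are illustrative choices, not from any spec):

```javascript
// Minimal IntersectionObserver boilerplate for lazily rendering an element:
// run the expensive render only once the element approaches the viewport.
// ObserverCtor is injectable so the sketch can be exercised outside a browser.
function lazyRender(el, renderFn, ObserverCtor = globalThis.IntersectionObserver) {
  const observer = new ObserverCtor((entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        renderFn(entry.target);      // do the expensive render once
        obs.unobserve(entry.target); // then stop watching this element
      }
    }
  }, { rootMargin: '200px' });       // start a little before it scrolls into view
  observer.observe(el);
  return observer;
}
```

In a page you'd call `lazyRender(document.querySelector('my-widget'), renderWidget)`; adapting the trigger to the user's connection, as the comment notes, has no built-in hook at all.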

I think you are conflating React elements (a blob of JS) and HTML elements (er the DOM?) in the comment to the point where I find it very difficult to reason about what an answer would look like.

If you replace your use of element with "blob of javascript" how would the browser separate out the "element" blobs that are "slow" from the other Javascript?

I mean purely custom elements, no React at all (which should not be of concern to browser vendors anyway). The metrics that would matter are similar to what we see in browser developer tools profilers—ones that measure the approximate time from when script execution starts until the document is rendered, or others depending on how the custom element is bundled.

This is with a degree of imprecision overall, of course, but similar approaches I imagine could be used to profile individual element rendering times.

Edit: Tried to clarify my idea, being away from any browser developer tools at the moment myself.


The bug is already closed in Firefox. Hopefully it'll be here soon.

That's great news. Thanks for the link

dom.image-lazy-loading.enabled = true is the about:config flag
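For anyone who hasn't seen the markup that flag gates, a minimal example (filename and dimensions are made up):

```html
<!-- width/height let the browser reserve layout space up front;
     loading="lazy" defers the fetch until the image nears the viewport. -->
<img src="photo.jpg" alt="A photo" width="800" height="600" loading="lazy">
```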

While I think lazy loading is great in itself, I wonder why we haven't come up with something better before.

There is progressive loading embedded directly into JPEG, and if browsers prioritized loading of assets in DOM order, we wouldn't need any other solutions.

Or am I wrong here?

I hate lazy-loading so much, since it never keeps up with my scrolling speed. So I first have to scroll down slowly and wait for everything to load, then go back to the beginning and start for real. Tried some methods to stop lazy-loading but none worked reliably.

Me too, especially when images fade from blurry messes to the real thing once they're on screen, which I find very disconcerting (more disconcerting than I think I should find it). This is exacerbated by living in Australia, where latency to US-hosted sites is 200–400ms.

I'm honestly hoping that everyone quickly adopts the loading attribute, just so I can turn it off in one place.

> Tried some methods to stop lazy-loading but none worked reliably.

all the more reason to make lazy loading a part of the language standards. if every site did lazy loading the same way, you could more easily disable it.
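For example, if the standard attribute wins out, disabling it could be a short userscript. A hypothetical sketch (the function name is made up; the document is a parameter only so the sketch can run outside a browser):

```javascript
// Hypothetical userscript sketch: flip every standard lazy image to eager.
function disableLazyLoading(doc = globalThis.document) {
  let flipped = 0;
  for (const img of doc.querySelectorAll('img[loading="lazy"]')) {
    img.setAttribute('loading', 'eager'); // the browser then fetches immediately
    flipped++;
  }
  return flipped;
}
```

Run against the live document this flips every lazy image already in the DOM; images inserted later would need a MutationObserver on top.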

Lazy loading is helpful, but if the img tags' sizes attribute is mucked up then less is gained. For example, due to the sizes value the browser can be led to believe it needs an image that's 100vw wide when the actual rendered width is 25vw. This happens quite a bit with WordPress.

Here's a great article on sizes and srcset.
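For illustration (all filenames and breakpoints here are hypothetical), a sizes value that matches the rendered width lets the browser pick a sensible srcset candidate:

```html
<!-- The image renders at 25vw on wide screens; claiming 100vw would make
     the browser fetch a needlessly large candidate from srcset. -->
<img src="pic-800.jpg"
     srcset="pic-400.jpg 400w, pic-800.jpg 800w, pic-1600.jpg 1600w"
     sizes="(min-width: 600px) 25vw, 100vw"
     width="800" height="600" alt="Example">
```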


Editorial: Personally, my fear (based on history) is this will ultimately lead to more bloat, not less. The belief in "oh not to worry, we've got lazy load" is not a positive overall.

LL is a good thing. But it will likely increase abuse, not mitigate it.

The first mainstream browsers with support: Chrome and Edge!


Edge is based on Chromium, so it's to be expected that they'll move in lockstep.

Reminds me of the days when Chrome and Safari were in lockstep (ish) because they were both based on WebKit.

Imagine if Microsoft, Apple, and Google were all sharing the same browser engine.

Having suffered through both browser wars and the Internet Explorer 6 era, I shudder at any meager suggestion of it.

IE6 was not getting releases at all, which is not the same as two browsers getting the same features every month or two.

I don't like there being a single browser engine. At least Blink is somewhat open source.

And Blink is based on WebKit, which is also open source, which is in turn based on KHTML, which is also open source.

Of course, one rendering engine isn't great. But at least Microsoft can hopefully correct some of Google's baser impulses.

SVG-in-OpenType is a standard developed by Microsoft and Adobe. It was supported by every browser except Chromium, and now it is not supported anymore by Edge. Do you really think Microsoft has a say in Google's decisions about what should be in a browser or not?

> But at least Microsoft can hopefully correct some of Google's baser impulses.

Good luck with that!

Microsoft is maintaining a fork of Chromium, which they probably rebase against upstream quite often. It may seem like “lockstep” but that’s probably just because they have effective and time-efficient release processes - and employ a great deal of expertise in the field of upstream fork tracking.

It should happen. The web browser is not a particularly interesting problem to solve. Let's make the base layer the same and distinguish ourselves at a much higher level. It's like the JVM--would humanity really be benefited by having multiple competing implementations of the JVM?

This argument may not make sense applied to an operating system/kernel. There are obvious benefits to having multiple competing operating systems. The crucial difference between a kernel and a web browser is that the kernel is a product, whereas the web (ECMA, W3C) is an international standard. So the only functional differences allowed to exist between implementations are at a very high level e.g. UX or privacy. The benefits of competing implementations are from innovation, but innovation in a way that violates the standard is not allowed, so innovation in functionality happens in the standards space. Where does innovation matter? In performance. Who can implement the standard with the best performance? It makes sense to have competition only up to such a point where a winner becomes 10x better than its competition. After that point it becomes useless to bet on the losers (save for extreme niches like lynx). There wouldn't be enough reward to heroically save the tied-for-last-place losers in a winner-take-all game.

It has to happen eventually. I think it would be quite sad to have flying cars on Mars and still have people working on rendering HTML.

> This argument may not make sense applied to an operating system/kernel.

I don't follow your reasoning. If competing implementations makes sense for operating systems, wouldn't it also make sense for browsers, which are basically the equivalent of an operating system for web apps? Conversely, if there should only be one implementation of a browser, wouldn't it make even more sense for there to be only one implementation of the operating system, so there is only one platform for native applications to target?

> whereas the web (ECMA, W3C) is an international standard. So the only functional differences allowed to exist between implementations are at a very high level e.g. UX or privacy. The benefits of competing implementations are from innovation, but innovation in a way that violates the standard is not allowed, so innovation in functionality happens in the standards space. Where does innovation matter?

I think you might be a little bit confused about how web standardization happens. Browsers are very much allowed to innovate beyond what is specified in standards. And in fact most standards are based on features that at least one browser has already implemented. Innovation drives standards, not the other way around.

After Internet Explorer won the last browser wars, both the winning implementation (IE6) and the web standards stagnated, until competition (in the form of Firefox and more importantly Chrome) came along. I don't want that to happen again.

Maybe it will be different with Chrome as the winner, since Google uses the browser as a platform to deploy its own web apps, but it still means the direction of browser development is primarily decided by Google, and will meet Google's needs, which may or may not be the needs of the internet community as a whole.

> Maybe it will be different with Chrome as the winner, since Google uses the browser as a platform to deploy its own web apps, but it still means the direction of browser development is primarily decided by Google, and will meet Google's needs, which may or may not be the needs of the internet community as a whole.

Yes and that company already has huge voting rights on the standards committees and is the primary benefactor of its "competition". Chrome and the web is already one and the same.

> I think you might be a little bit confused about how web standardization happens. Browsers are very much allowed to innovate beyond what is specified in standards. And in fact most standards are based on features that at least one browser has already implemented. Innovation drives standards, not the other way around.

It mostly comes from demand from the community. Take this lazy loading images proposal for example. It's only implemented by one vendor:


Its demand comes from the huge number of websites that use lazy loaded images with their own libraries. The browser vendors did not implement it first, let alone invent this feature.

> If competing implementations makes sense for operating systems, wouldn't it also make sense for browsers, which are basically the equivalent of an operating system for web apps?

Because the web is already standardized, already a solved problem. One day the market will bring forth an ideal operating system, and we will standardize on that. The analogous event has happened for web browsers. Some people are understandably in denial, still used to the old religious warfare way tech ecosystems worked.

>would humanity really be benefited by having multiple competing implementations of the JVM

Um… sure? There are at least three popular JVMs out there. I am sure any Java expert can name at least as many more.




The last one is well known for its (almost) zero-pause GC. I believe it was one of the things that have led to some interesting GC developments inside the OpenJDK project in the last few years, and resulted in two competing low-pause GC implementations (one from Red Hat and one from Oracle).

I'm aware of these. Also aware they're based on core OpenJDK with proprietary extensions to or replacements of high level components. There were a few completely alternate implementations, but most of them have died off or exist as toys in 2020. Since they're all descendants of the same source tree, it's not very different from saying there are multiple Chromium distributions like Brave, Vivaldi, and Opera.

I could be reading this wrong, but it looks like Firefox Nightly will receive this capability in the next build, and Firefox stable will receive the upgrade in version 75: https://bugzilla.mozilla.org/show_bug.cgi?id=1542784

Status on Safari [1]:

>Rob Buis 2020-02-13 00:04:07 PST

I was waiting for the spec to land before working again on this. First step is to fix the tests: https://github.com/web-platform-tests/wpt/pull/21773

I'll incorporate them into https://bugs.webkit.org/show_bug.cgi?id=200764, test a bit and hopefully put it up for review soon.

This is exciting. We should have all major browsers supporting it within the year. Less JavaScript required.

[1] https://bugs.webkit.org/show_bug.cgi?id=196698

I remember there was a huge protest from the ad industry over that.

This is the time we live in now: web standards are set by the ad industry.

Google is the ad industry, so your point is valid. Also, the ad industry basically pays for the majority of stuff on the internet. It makes sense that they get a say in the matter.

the ad industry doesn't pay for shit. they extract value from users and give (some of) it to creators. the users (involuntarily) pay. so if the argument is "whoever pays for it gets a say" -- we do. ads don't.

It makes no sense whatsoever; they are free to discontinue Chrome at any moment.

I detest hard-coding image sizes. I only want to add `img { width: 100% }` and that's it.

Then you're part of the problem. Failing to add width and height attributes to images is what makes websites jump around while loading. It's infuriating.
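The two aren't mutually exclusive. A sketch (dimensions are illustrative): keep the intrinsic size in the markup and let CSS scale it; modern browsers derive the aspect ratio from the width/height attributes, so `height: auto` scales without layout jumps.

```html
<!-- width/height give the browser the aspect ratio up front; CSS then
     scales the rendered image to its container. -->
<img src="photo.jpg" width="1200" height="800" alt="Example">
<style>
  img { width: 100%; height: auto; } /* keeps the 3:2 aspect ratio */
</style>
```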

So this means better user tracking capabilities are now native, even with Javascript disabled.

You can learn what the user has read on the page before, and deliver more relevant ads next.

This has been addressed; from the PR diff:

> If scripting is disabled for img, return false.

> This is an anti-tracking measure, because if a user agent supported lazy loading when scripting is disabled, it would still be possible for a site to track a user's approximate scroll position throughout a session, by strategically placing images in a page's markup such that a server can track how many images are requested and when.

The subtext here is, I believe, that the on-page JS can already see your viewport if JS is enabled, and thus would have little reason to use images to duplicate that.

> So this means better user tracking capabilities are now native, even with Javascript disabled.

No, the change to the spec explicitly says lazy-loading is disabled if scripting is disabled just for this reason.

I'm an adtech veteran and I guarantee this kind of tracking is never used. Just because you have a signal available doesn't mean it's used or even useful for ads.

Well, I'm a veteran software engineer, and one of my former jobs was to implement pixels and optimize page loading specifically for these kinds of pixels on an e-commerce website. It was a particularly hard problem because if you have dozens of pixels to order the loading of, it can really slow down the loading of a webpage or email. And I was one member of a rather large team. To say that pixel tracking isn't used in the industry is disingenuous at best.

You either missed the context or misread the post. Pixel tracking refers to using any kind of beacons to send back data (originally images but now also JS tags) and there are many ways to optimize.

This has nothing to do with lazy-loading image tags. This browser-native functionality doesn't add any kind of new tracking, and pixels are never lazy-loaded anyway because they need to be fired as soon as possible to ensure data capture. Nothing about this new API changes anything for ads.

What advertiser doesn't want to know that their ad is visible?

That's called "viewability" and isn't related to relevance. It's already solved natively with the IntersectionObserver API which came out years ago and supports more than just an image. [1]

Lazy loading is entirely different and offers nothing new or useful for ads.

1. https://developer.mozilla.org/en-US/docs/Web/API/Intersectio...
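For illustration, a minimal viewability check with that API might look like the following sketch (the function name is made up; the 0.5 threshold mirrors the common display-ad viewability definition, and the constructor is injectable only so the sketch can run outside a browser):

```javascript
// Sketch: report once when at least half of an element has been on screen.
function watchViewability(el, onViewable, ObserverCtor = globalThis.IntersectionObserver) {
  const observer = new ObserverCtor((entries, obs) => {
    for (const entry of entries) {
      if (entry.intersectionRatio >= 0.5) {
        onViewable(entry.target); // half the element is visible
        obs.disconnect();         // one report is enough for this sketch
      }
    }
  }, { threshold: [0.5] });
  observer.observe(el);
  return observer;
}
```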

As the first reply in this chain remarks: this is not available without Javascript.

The point is that native lazy-loading is not useful for ad tracking. If there's JS then there are other methods to check viewability. If there's no JS then there are no ads at all.


A better rule of thumb is to avoid poor generalizations. I use my real name here so go ahead and look me up if you actually want to discuss something.

The cynic in me thinks if it was never used and that is useless, it wouldn't have been added as a feature. Because, I already feel that lazy loading should be rarely used and is itself mostly useless.

Lazy loading is just another feature to improve performance, and browsers already use heuristics to load assets with prioritization by what renders first.

Also in the last 5 years, every browser has added more functionality to block ads and tracking. Lazy loading is not going to somehow make up for that.

How does it improve performance? It is literally more work for the machine to (mis)manage. And most browsers already have viewport heuristics, no? What makes this more likely to succeed?

By not loading images at all until you get to them, instead of guessing whether you will. It's already done by some websites using JavaScript. Milliseconds more CPU time is a worthwhile tradeoff for saved network bandwidth.

Guessing whether you have a fast enough connection to load eagerly is literally one of the features called out as hopefully possible with this spec, such that this attribute is expected to be controlled by scripts. (Or turned to always-eager if scripts are disabled...)

My question is what makes this better than the current script solution? Especially knowing this is intended to be built up with scripts.

I don't know. I was just commenting on how this is useless for adtech, but I don't see any negatives with giving developers more explicit controls over loading. Loading less data is a good thing regardless of internet connection speed.

My gut would be that this is going to win back speed lost to adtech. So, not useless. Just not useful to tracking. Which, quite frankly, is already extremely good.

My gut would also be that a lot of folks think this will be good for tracking usage. On both sides of the fence.

Hang on. Are you in the ad tech industry? I was, until recently, and this is far too granular a thing for people to care about. It's just too much tech for too little gain.

I mean, I get what you mean: you place pixels strategically to see how far someone got on the page. But you can do that and more already (even full screen interaction) and no one uses that for ad targeting.

After the tracking gravy train dries up in the coming years, this might have been a possible point of exploitation. Seems the hole was covered anyway.

Sounds like you just had a viable adtech idea that no one is using because there is too much overhead. It takes a bit more to turn it into a sales pitch, but I could see it working.

There's nothing about that which is viable or new for adtech. Scroll depth has been an available metric for the last 5 years.

It seems like this is bound to have security holes at least for a while, right?

I can't see how. Perhaps it makes it easier for the page owner to track how far down a user has scrolled.

Can you elaborate on what makes you think so?

Oops, I just misunderstood.
