Which means, if you accept user-generated content or images from any CMS or other source, you really should be obtaining the image dimensions and using them in the HTML.
To actually prevent jumping (including on lazy image load), I believe it would be more effective to simply set width and height.
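A minimal sketch of that (filename and pixel dimensions are hypothetical): with both attributes present, modern browsers reserve a box with the correct aspect ratio before any image data arrives, even when CSS later scales the image.

    <style>
      /* keeps the image responsive; the browser still derives the
         aspect ratio from the width/height attributes below */
      img { max-width: 100%; height: auto; }
    </style>
    <img src="article-photo.jpg" width="1200" height="800" alt="Article photo">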
Assume that developers make essentially random mistakes or changes. Whenever anything happens that has a negative impact on metrics management cares about, there will be pressure to find out what went wrong.
For any random change that has a beneficial impact on measured metrics, e.g. weird page loading that makes people accidentally click on more lucrative links, there will be no pressure to revert.
No one needs to consciously look for and implement dark patterns for this to work out.
It's very similar to how confirmation bias can make people question news they don't like until they find enough evidence to disregard it, while just accepting news they do like.
You're not required to use it on every image. It's a completely optional parameter that you add to individual images that might benefit from it.
You can use it on navigation icons that don't change size. Maybe in some of your boilerplate in the footer. Or in some sidebar content or a testimonial that you control. But if you have images of unknown size populating the middle of the page, don't use it on those.
The 1.2 kB icon in the footer is not really the problem; it's the xxx kB article image some author uploaded with unknown dimensions.
Those things will make my page jump around and will need to be lazy-loaded.
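Something like this, in other words (paths are made up):

    <!-- fixed-size footer icon: lazy-loading it can't cause jumping -->
    <img src="/icons/rss.svg" width="24" height="24" loading="lazy" alt="RSS feed">

    <!-- author-uploaded image of unknown dimensions: leave it alone -->
    <img src="/uploads/article-42.jpg" alt="Article illustration">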
The idea of setting fixed dimensions feels like it comes from a time when we did designs with tables and "optimized" sites for a 1024 x 768 pixel screen size.
That was the time when you could tell that this image should be displayed 200 x 200 px in size and nothing else. And it fit. One way or another.
Make no mistake, this is a silly problem to need to hack a solution for. So I would welcome lazy-loaded images that have their space allocated before they are rendered, combined with responsive sizing.
I doubt we'll ever see that built into a browser, because nice generalized things never seem to get there.
Using the JS onresize event is a no-go; if JS is still needed, the spec is worthless. Lazy loading images with JS worked just fine before; this has to be about not needing special JS for this feature.
No, parent is right: It's impossible to satisfy that recommendation, usually.
Screen-relative units represent a percentage of the screen size (technically speaking it's the viewport, really). You don't need to know how big the screen is; you're telling the browser to use, for example, 25% of the viewport height when you set something to be 25vh. If you want 4 images stacked vertically that's really useful. That way the browser knows it needs to put a box 25% of the viewport height in the DOM where the image will be loaded later, so there's no layout thrashing.
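A minimal sketch of that (assuming exactly four stacked images; filenames are made up):

    <style>
      /* each image gets a quarter of the viewport height,
         so the boxes exist before any image data arrives */
      .stack img { height: 25vh; width: auto; display: block; }
    </style>
    <div class="stack">
      <img loading="lazy" src="one.jpg" alt="">
      <img loading="lazy" src="two.jpg" alt="">
      <img loading="lazy" src="three.jpg" alt="">
      <img loading="lazy" src="four.jpg" alt="">
    </div>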
> Using the JS onresize event is a no-go; if JS is still needed, the spec is worthless.
The parent suggested they would want to change the width and height attributes of the image when the user resizes their browser. That's exactly what the resize event is for. If you want to display a fixed size image you can use fixed width and height attributes. If you want the image to scale with the viewport size you can either use units that represent a relative size or you need to use JS to modify the attributes. Heck, you can even use media queries to mix and match and have fixed size in some devices and relative in others.
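To illustrate the resize-event variant (a sketch only; the 16:9 ratio and the 80%-of-viewport width are assumptions, not anything from the spec):

    <img id="hero" src="hero.jpg" width="800" height="450" alt="Hero image">
    <script>
      // Recompute the attributes whenever the viewport changes,
      // keeping the image at 80% of the viewport width, 16:9.
      const hero = document.getElementById('hero');
      function resizeHero() {
        const w = Math.round(window.innerWidth * 0.8);
        hero.width = w;
        hero.height = Math.round(w * 9 / 16);
      }
      window.addEventListener('resize', resizeHero);
      resizeHero(); // set the initial size too
    </script>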
> Lazy loading images with JS worked just fine before.
Except you needed JS to do it. With the new HTML spec you get the benefits of basic lazy loading without needing the additional weight of some JS code. That's good.
And that's before resizing anything.
And relative units don't fix this either, as that information still lacks the aspect ratio of my image. If it's 100vw wide, how tall is it? I don't know without fetching the image, doing calculations, and dynamically updating the DOM with that information.
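That dance looks roughly like this (a sketch of the problem being described, not a recommendation; the filename is made up):

    <img id="wide" src="banner.jpg" style="width: 100vw" alt="">
    <script>
      // The height is unknowable until the image bytes arrive;
      // only then can we measure the ratio and fix up the DOM.
      const wide = document.getElementById('wide');
      function fixHeight() {
        const ratio = wide.naturalHeight / wide.naturalWidth;
        wide.style.height = Math.round(window.innerWidth * ratio) + 'px';
      }
      if (wide.complete) fixHeight(); // already loaded from cache
      else wide.addEventListener('load', fixHeight);
    </script>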
I haven't been following the latest developments in CSS for a while, so maybe it's already possible to do this?
I am not going to do this. This is such a pervasive issue, and with an _endless_ variety of screen sizes and millions of pages of content... not going to happen.
This kind of problem is exactly what the browser is best at solving. A problem everyone has on most pages on the internet.
1. The penalty for not specifying image dimensions is nearly insignificant, because almost _all_ images that you would put width/height on come from content managers, not designers.
2. Mobile content: on a phone the content moves down after the image is loaded (everything is just a single column), which is preferable to a big empty space. So I prefer _not_ including dimensions on user content for phones.
3. Responsive design: all the CSS I have been using for years now has "max-width: 100%".
Therefore (since a lot of traffic is mobile), for most (rough guess) images loaded from sites I've worked on, the dimensions are recalculated as soon as the image loads anyway.
4. Srcset: multiple candidate images, with the one to download _chosen_ by the browser at run time. You already have to provide dimensions (see the sketch after this list). But what if they aren't exact? Go back to #1.
5. Web design: I can't even recall the last time I put an image in a design using the <img> tag that affected layout. (maybe if you go back to the 90s this would have mattered)
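To illustrate point 4 (filenames and sizes are hypothetical): the width descriptors in srcset have to match the actual files, otherwise the browser's runtime choice is based on wrong numbers.

    <img src="photo-800.jpg"
         srcset="photo-400.jpg 400w,
                 photo-800.jpg 800w,
                 photo-1600.jpg 1600w"
         sizes="(max-width: 600px) 100vw, 50vw"
         width="800" height="600" alt="Example photo">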
Is there an actual problem in identifying what images correspond to your markup and adding an appropriate attribute? (asking, not a web dev)
AFAICT the problem is solved; this is the necessary protocol - not even a bad one. Use it and it will work.
Rarely do we specify the width of many, many objects, and they display just fine (consider tables).
If you defer (down)loading of images, and layout depends on unavailable size/aspect-ratio of an image, you get layout updates - same would happen if you lazy-loaded contents of tables.
What is the problem you would like solved and what do you propose for the browsers to do? Ancestor post was about avoiding layout changes as new layout-critical information arrives. Judging by your sibling comment, you seem to prefer layout updates with content popping up.
1. Srcset: the browser chooses the size of the image, not the server.
2. CSS/responsive images: almost all (I am close to saying 100%) of the images I have loaded on websites have their width/height adjusted by CSS.
Layout updates are often unavoidable in most circumstances surrounding images; there is just no getting around it.
So instead of pining for the past, we should adapt for the future.
I feel like this argument is the same I had with print designers in the late 90s who made every page a full image because they couldn't handle not having full control over the layout.
But I will agree with the sentiment -- I find it incredibly annoying on pages that do lazy loading and don't implement this enlarged intersection check to make it appear seamless.
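The "enlarged intersection check" is essentially an IntersectionObserver with a rootMargin; a minimal sketch (the 800px margin and the data-src convention are arbitrary choices):

    <script>
      // Start fetching an image ~800px before it enters the viewport,
      // so it is usually decoded by the time the user scrolls to it.
      const io = new IntersectionObserver((entries) => {
        for (const entry of entries) {
          if (!entry.isIntersecting) continue;
          entry.target.src = entry.target.dataset.src; // swap in the real source
          io.unobserve(entry.target);
        }
      }, { rootMargin: '800px 0px' });
      document.querySelectorAll('img[data-src]').forEach((img) => io.observe(img));
    </script>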
To learn photography I'd prefer starting with a camera, not with the theory of light or the properties of atoms that allow glass to be transparent or how lenses are ground.
Similarly, if possible, to learn electronics I'd like to start at a higher level and work down rather than bottom up.
Are there names for these two types of teaching/learning approaches? Is one "better" than the other?
Edit: Actually I'm not sure that's really right, even example-led approaches are usually bottom up. For example, an example-led approach to programming would start with "hello world" and work up from there, leaving a full-blown example project (if there is one) to the end.
So basically it is regular loading.
> Click here to see more pictures
That way you get the behaviour of always-load and the rest of us get fast loading pages. This is a Pareto move.
That is why I wish the next-generation image format would push the quality-to-size ratio. I think JPEG XL currently has the best chance of succeeding JPEG.
We should have faster Internet (5G and fibre) for everyone, and better image compression for everyone. Hopefully in 5 to 10 years' time this problem will be a thing of the past, assuming we don't bloat every website into a web app that tries to download 10MB before it even loads.
And you can get graphic novels on Kindle nowadays. To pretend it is just text is to undersell the format now. Could I claim they are bloating? Certainly. They still have a long way to go before they reach current web browser bloat, which only seems to be marching on, with no signs of restraint on what to pursue.
* https://www.cnn.com/ is 1.3MB of data.
* https://www.nytimes.com/ is 5MB of data.
* https://www.reddit.com/ is 6MB of data.
* https://www.google.com/ is 400KB of data.
* https://www.facebook.com/ (not logged in) is 2MB of data
* https://twitter.com/home/ is 1-3MB of data depending on the ads it decides to show.
Those are all on-the-wire sizes, so after gzip compression and whatnot.
However, this is good, because those are great examples of how browser lazy loading is going to help. When I loaded Reddit it pulled down 7MB of data, but more than 5MB was images. Looking at the content above the fold, my browser downloaded about 4.5MB that it doesn't need until I scroll. This change to the HTML spec will get all those sites' first load below the average size of a Kindle book. Awesome.
And there is some irony that I am likely tracked heavily on what I've read. Certainly on what I note. So it isn't like I'm clamoring for no scripts. Just find it odd that the push for web applications has destroyed the use of web pages.
Web apps are a different story because they often load a couple of meg of JS before anything happens, but so long as things are being cached correctly that's only a problem occasionally.
The page basically dies; any Ajax requests are at the back of the queue. Scroll halfway down that page and you'll be waiting 5 minutes for the images in the viewport to load, since they are processed in order. You can roll your own lazy load, but it's a pain and often done poorly. A good browser implementation would be great for most pages (but 10k+ might still require custom work).
Think about all the use-cases you could build. Maybe you could later set "I'm on a metered connection" at the OS level, have your browser pick that up, and not overuse your metered connection, etc. etc. Maybe you could have that in a browser extension that you manually toggle. No, this is far too useful.
And again, just don't put so many giant graphics on a page. Problem mostly solved. With less tech and likely faster results.
But I can't see why this is a feature that we need. Progressive loading, I could almost see. But by and large, high resolution images are just not compatible with high speed page loads. I'm not seeing how this feature actually changes that.
And with some extra noise, maybe the Chromium-based browsers will follow.
If images aren't ready by the time you scroll down, this is more an issue with the particular implementation of lazy loading, but the concept is sound.
Lazy loading is all about the hosting costs.
However, if 40-50% of your visitors bounce because your page takes 5s to load due to your longform photo essay hammering the user's 4G connection with a bunch of images they won't even see until 5 minutes into scrolling down, this is a real concern. Lazy loading shines in these moments.
If you refresh the page at the bottom, it will stay at the bottom, so no need to load top images.
Obviously this is an implementation detail, and well-done LL could keep loading everything until it's done, but frequently it doesn't try until I scroll down. Non-LL pages aren't perfect either, and if a page tries to load everything at once, each thing added slows everything else down. But for some reason I never experience that on non-LL pages (or maybe they're good LL pages and I don't notice). It could be that browsers do some smart things (the obvious one would be queueing requests for page content, only allowing 5 or so open requests, and loading images in the order they are referenced in the HTML, so stuff at the bottom of the page loads last).
In practice, it has the opposite effect when the browser needs to figure out where on the page the image will end up before attempting to load it.
(That is not to say you shouldn't expect the polyfills, though.)
Browsers could profile how long rendering a particular type of element takes on a given website, and optimize render triggers to provide a seamless experience while conserving bandwidth.
Currently, lazily rendering custom elements requires a fair chunk of IntersectionObserver boilerplate code (see the sketch below), and beyond that, any adaptability to the user's connection seems too complex to even consider.
This is imprecise overall, of course, but I imagine similar approaches could be used to profile individual element rendering times.
Edit: Tried to clarify my idea, being away from any browser developer tools at the moment myself.
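For reference, the boilerplate in question looks roughly like this (a sketch; <heavy-widget> and the placeholder class are made-up names):

    <script>
      // Swap a cheap placeholder for the expensive custom element
      // only once it gets close to the viewport.
      const observer = new IntersectionObserver((entries, obs) => {
        for (const entry of entries) {
          if (!entry.isIntersecting) continue;
          entry.target.replaceWith(document.createElement('heavy-widget'));
          obs.unobserve(entry.target);
        }
      }, { rootMargin: '200px 0px' });
      document.querySelectorAll('.widget-placeholder')
        .forEach((el) => observer.observe(el));
    </script>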
The bug is already closed in Firefox. Hopefully it'll be here soon.
There is progressive loading embedded directly into JPEG, and if browsers would prioritize loading of assets in DOM order, we wouldn't need any other solutions.
Or am I wrong here?
I’m honestly hoping that everyone quickly adopts the lazyload attribute, just so I can turn it off in one place.
All the more reason to make lazy loading a part of the language standards. If every site did lazy loading the same way, you could more easily disable it.
Here's a great article on sizes and srcset.
Editorial: Personally, my fear (based on history) is this will ultimately lead to more bloat, not less. The belief in "oh not to worry, we've got lazy load" is not a positive overall.
LL is a good thing. But it will likely increase abuse, not mitigate it.
Imagine if Microsoft, Apple, and Google were all sharing the same browser engine.
Of course, one rendering engine isn't great. But at least Microsoft can hopefully correct some of Google's baser impulses.
Good luck with that!
This argument may not make sense applied to an operating system/kernel. There are obvious benefits to having multiple competing operating systems. The crucial difference between a kernel and a web browser is that the kernel is a product, whereas the web (ECMA, W3C) is an international standard. So the only functional differences allowed to exist between implementations are at a very high level, e.g. UX or privacy.

The benefits of competing implementations come from innovation, but innovation in a way that violates the standard is not allowed, so innovation in functionality happens in the standards space. Where does innovation matter? In performance. Who can implement the standard with the best performance?

It makes sense to have competition only up to the point where a winner becomes 10x better than its competition. After that point it becomes useless to bet on the losers (save for extreme niches like lynx). There wouldn't be enough reward to heroically save the tied-for-last-place losers in a winner-take-all game.
It has to happen eventually. I think it would be quite sad to have flying cars on Mars and still have people working on rendering HTML.
I don't follow your reasoning. If competing implementations makes sense for operating systems, wouldn't it also make sense for browsers, which are basically the equivalent of an operating system for web apps? Conversely, if there should only be one implementation of a browser, wouldn't it make even more sense for there to be only one implementation of the operating system, so there is only one platform for native applications to target?
> whereas the web (ECMA, W3C) is an international standard. So the only functional differences allowed to exist between implementations are at a very high level e.g. UX or privacy. The benefits of competing implementations are from innovation, but innovation in a way that violates the standard is not allowed, so innovation in functionality happens in the standards space. Where does innovation matter?
I think you might be a little bit confused about how web standardization happens. Browsers are very much allowed to innovate beyond what is specified in standards. And in fact most standards are based on features that at least one browser has already implemented. Innovation drives standards, not the other way around.
After Internet Explorer won the last browser wars, both the winning implementation (IE6) and the web standards stagnated, until competition (in the form of Firefox and more importantly Chrome) came along. I don't want that to happen again.
Maybe it will be different with Chrome as the winner, since Google uses the browser as a platform to deploy its own web apps, but it still means the direction of browser development is primarily decided by Google and will meet Google's needs, which may or may not be the needs of the internet community as a whole.
Yes and that company already has huge voting rights on the standards committees and is the primary benefactor of its "competition". Chrome and the web is already one and the same.
> I think you might be a little bit confused about how web standardization happens. Browsers are very much allowed to innovate beyond what is specified in standards. And in fact most standards are based on features that at least one browser has already implemented. Innovation drives standards, not the other way around.
It mostly comes from demand from the community. Take this lazy-loading images proposal, for example. It's only implemented by one vendor:
The demand comes from the huge number of websites that lazy-load images with their own libraries. The browser vendors were not the first to implement this feature, let alone the ones who invented it.
> If competing implementations makes sense for operating systems, wouldn't it also make sense for browsers, which are basically the equivalent of an operating system for web apps?
Because the web is already standardized, already a solved problem. One day the market will bring forth an ideal operating system, and we will standardize on that. The analogous event has already happened for web browsers. Some people are understandably in denial, still used to the old religious-warfare way tech ecosystems worked.
Um… sure? There are at least three popular JVMs out there. I am sure any Java expert can name at least as many more.
The last one is well known for its (almost) zero-pause GC. I believe it was one of the things that led to some interesting GC developments inside the OpenJDK project in the last few years, resulting in two competing low-pause GC implementations (one from Red Hat and one from Oracle).
> Rob Buis 2020-02-13 00:04:07 PST
I was waiting for the spec to land before working again on this.
First step is to fix the tests:
I'll incorporate them into https://bugs.webkit.org/show_bug.cgi?id=200764, test a bit and hopefully put it up for review soon.
This is the time we live in now; web standards are set by the ad industry.
You can learn what the user has read on the page before, and deliver more relevant ads next.
> If scripting is disabled for img, return false.
> This is an anti-tracking measure, because if a user agent supported lazy loading when scripting is disabled, it would still be possible for a site to track a user's approximate scroll position throughout a session, by strategically placing images in a page's markup such that a server can track how many images are requested and when.
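In other words, without this rule a page could encode scroll depth into plain markup and read it back out of the server logs; a sketch (the /log URLs are made up):

    <!-- each lazy request would tell the server the user scrolled this far -->
    <img loading="lazy" src="/log?scrolled=25" width="1" height="1" alt="">
    <!-- a screenful of content -->
    <img loading="lazy" src="/log?scrolled=50" width="1" height="1" alt="">
    <!-- more content -->
    <img loading="lazy" src="/log?scrolled=75" width="1" height="1" alt="">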
No, the change to the spec explicitly says lazy-loading is disabled if scripting is disabled just for this reason.
This has nothing to do with lazy-loading image tags. This browser-native functionality doesn't add any kind of new tracking, and pixels are never lazy-loaded anyway because they need to be fired as soon as possible to ensure data capture. Nothing about this new API changes anything for ads.
Lazy loading is entirely different and offers nothing new or useful for ads.
Also in the last 5 years, every browser has added more functionality to block ads and tracking. Lazy loading is not going to somehow make up for that.
My question is: what makes this better than the current script solution? Especially knowing this is intended to be built up with scripts.
My gut says a lot of folks also think this will be good for tracking usage. On both sides of the fence.
I mean, I get what you mean: you place pixels strategically to see how far someone got on the page. But you can do that and more already (even full-screen interaction), and no one uses that for ad targeting.