That's not really true. Even with vectors, it's good to have different images for different sizes, ones that add details (for larger sizes) or remove details and enlarge the important components (for smaller sizes), so the icons look good at every size. You can see it in the tape recorder icon the author shows: at larger sizes it looks much better than the bitmap images, but at smaller sizes the bitmapped icons look better; the vector icons become a mush of unclear elements. The bitmap draws pieces proportionally larger than they really are, so the visual elements you consider important still show clearly.
In what way does it look "less clear"?
The problem with showing more detail in a 16x16 icon on a low-dpi screen was the screen's low resolution.
A hi-dpi screen doesn't have this issue. It's effectively showing 32x32 quality on the 16x16 "virtual pixel" size.
At worst, it would have looked as good as a 32x32 icon on a low-dpi screen. But on a hi-dpi screen it looks even better: much sharper and more refined. You're getting an icon that's as good as a 32x32 icon would be on a low-dpi screen, but at half the physical size, so it appears twice as detailed as a 16x16 icon would.
And if the icon looks OK at 32x32 on a low-dpi screen, it will look doubly good at 16x16 virtual pixels with double the resolution underneath on a hi-dpi one.
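To put rough numbers on that (a quick sketch of my own; the 2x scale factor is an assumption, though it's the common case):

    // Pixel math for a hi-dpi screen: a 16x16 virtual-pixel icon
    // is rasterised onto twice as many physical pixels per side.
    #include <cstdio>

    int main() {
        const int virtual_size = 16;  // icon size in virtual pixels
        const int scale = 2;          // assumed hi-dpi density factor
        const int physical = virtual_size * scale;
        std::printf("%dx%d virtual -> %dx%d physical (%dx the pixels)\n",
                    virtual_size, virtual_size, physical, physical,
                    (physical * physical) / (virtual_size * virtual_size));
        return 0;
    }

Same physical area, four times the pixels, which is where the extra sharpness comes from.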
A bunch of details really close together looks like noise, not information.
Is this a common acronym?
The most complicated example I've seen thus far, however, is that of Crash Bandicoot for the PS1. Perhaps the most interesting thing is the description of their efficient use of memory:
"Ultimately Crash fit into the PS1's memory with 4 bytes to spare. Yes, 4 bytes out of 2097152. Good times." (3)
 - http://www.rantgamer.com/wp-content/uploads/2014/12/Clouds-a...
 - http://all-things-andy-gavin.com/2011/02/02/making-crash-ban...
 - https://www.quora.com/How-did-game-developers-pack-entire-ga...
It is often not worth the effort.
Now look at your stack and count how many developers could have uttered this phrase when designing each module, subsystem, library, protocol, service, daemon, file type, interface, or plug-in that's part of it.
We should count ourselves lucky that hardware engineers have afforded us the ability to make such trades, but we shouldn't take it for granted.
Imagine what would have happened if HTML had been a non human-readable binary format just to spare a few bytes.
Some optimizations are counterproductive.
In what context? And aren't you making presumptions about new contexts that might appear? For decades we've been in an era where devices get more power-efficient and smaller, allowing them to become more ubiquitous. BeOS could fit a fully kitted-out OS into under 300MB and still punch way above its weight at multimedia multitasking. Right now, computers attached to your body often need to be recharged every day or every week. Even now, we'd like those computers to be able to do more with even less power.
If I could make a computer so small and cheap that a big company wouldn't care if one were lost occasionally, yet with enough longevity to accompany a freight shipment or package on its entire journey while recording or even transmitting data, I bet I could sell a bunch of those.
If you want to "just spare a few bytes", you could do the same to HTML and keep it text-based. Comparing something like XML and JSON shows that "human-readable" formats can still vary significantly in verbosity and complexity.
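As a made-up illustration (mine, not from the thread), here's the same record in both. Each is text-based, yet they differ noticeably in ceremony and grammar:

    XML:  <icon name="tape" width="16" height="16"/>
    JSON: {"name": "tape", "width": 16, "height": 16}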
Besides, I think we could all do with fewer layers of abstraction in our lives. As a reverse-engineer I know once remarked, "Everything is human readable if you have a hex editor."
Mobile device, network latency?
A couple of years later, when his code started taking down other actors in the ecosystem, it became my job for a while to replace his modules with my own implementations that were more efficient. Sometimes by a factor of 1000. Literally.
Doing it right the first time would have only taken a fraction of the effort it took in the end (assuming you have the proficiency to do so).
I find dismissive comments about efficiency troubling.
Don't write code you know will be too slow, but don't optimize the code just because you can. By the same token, if you find out that your code is too slow, it's your job to optimize it.
Take performance seriously, but don't optimize before you know why you're optimizing.
Of course, different programmers will have different ideas of what "mildly optimised" means. What I mean by it above is actually the code I'd write as a first pass and consider not optimised at all, not "the first thing that comes to mind even if it's actually horribly stupidly inefficient", which a lot of programmers seem to produce. In other words, my idea of unoptimised is probably more like -O1, and I'd have to spend extra effort "pessimising" to go below that.
"the first thing that comes to mind even if it's actually horribly stupidly inefficient"
There are certain practices, like the "Law of Demeter" and the "Replace Temp with Query" refactoring that are deliberately inefficient for the purpose of making refactoring and code changes easier. I think of these as being somewhat like a filing system that leaves a little bit of inefficiency to facilitate later reorganization.
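A minimal sketch of "Replace Temp with Query" (the names are mine, not from any particular codebase): the temp caches an intermediate result, while the query recomputes it on every call, trading a few cycles for logic that lives in one reusable place:

    struct Order {
        double quantity;
        double item_price;

        // Before: temps hold intermediate values.
        double price_with_temps() const {
            double base = quantity * item_price;
            double discount = base > 1000 ? base * 0.05 : 0;
            return base - discount;
        }

        // After: each temp becomes a query. base_price() is now
        // evaluated twice inside discount(): deliberately a bit
        // inefficient, but trivial to extract, reuse, or override.
        double base_price() const { return quantity * item_price; }
        double discount() const {
            return base_price() > 1000 ? base_price() * 0.05 : 0;
        }
        double price() const { return base_price() - discount(); }
    };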
This attitude is why we can't have performant smartphones with 1GB of RAM.
It's one thing to do evil floating point bit level hacking and such to save a nanosecond or two, and another to write a FizzBuzz in a way that doesn't require 700 external libraries to begin with.
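For contrast, the whole thing fits in a few lines of standard C++ with zero external dependencies (my sketch, obviously):

    #include <cstdio>

    int main() {
        for (int i = 1; i <= 100; ++i) {
            if (i % 15 == 0)     std::puts("FizzBuzz");
            else if (i % 3 == 0) std::puts("Fizz");
            else if (i % 5 == 0) std::puts("Buzz");
            else                 std::printf("%d\n", i);
        }
        return 0;
    }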
And performance still sucks for many things. They should be instantaneous (within one frame, which is ~16.7ms at 60Hz). Loading and parsing SVG files from the internet is one such thing that takes way too long.
SVG is ridiculously bloated in comparison. Even PostScript and PDF are more efficient.
It's like they actually noticed how much more bloat would result from making each point an element, so instead they just decided to dump what is essentially bastardised PDF/PostScript path data into an attribute. The "language" also doesn't match PDF exactly, despite using a similar style of short 1-letter commands.
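For illustration (the shape is made up, but the syntax is real), here's the same triangle as SVG path data and as PostScript operators:

    SVG:        <path d="M 10 10 L 90 10 L 90 90 Z"/>
    PostScript: 10 10 moveto 90 10 lineto 90 90 lineto closepath fill

Same move/line/close vocabulary, but SVG's one-letter commands (M, L, Z, plus arcs, relative lowercase variants, etc.) don't map one-to-one onto PDF's lower-case path operators (m, l, h).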
And I was able to losslessly reduce it to 298 bytes with PNGOUT.
The smaller image is 257 bytes, and I reduced it to 186.
Since rendering a bitmap is straightforward, I imagine it would be preferred here for performance (just a guess).
[Edit]: HVIF is rendered in a single pass (except in a few cases), while SVGs (and most vector formats) render each element individually.
Other than that, pretty incredible work. Vector image compression at the encoding level is definitely interesting. I also love posts in which binary formats are explained, so...
But of course that increases encoder and decoder complexity. For icons, a 255 style/path limit seems reasonable and keeps the code simple.
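A sketch of why the cap keeps things simple (my own illustration of the idea, not the actual HVIF layout): with a hard limit of 255, every count and every cross-reference fits in a single byte, so there's no varint decoding at all:

    #include <cstdint>

    // Hypothetical header counts: one byte each, max 255.
    struct IconCounts {
        std::uint8_t style_count;
        std::uint8_t path_count;
        std::uint8_t shape_count;  // shapes refer to styles/paths
                                   // by one-byte index
    };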
Edit: I would just teach the GUI to cache the icon folder in RAM.
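Something like this, perhaps (a rough sketch; all names are mine):

    #include <cstdint>
    #include <map>
    #include <memory>
    #include <string>
    #include <utility>
    #include <vector>

    struct Bitmap { int w, h; std::vector<std::uint32_t> pixels; };

    class IconCache {
        std::map<std::pair<std::string, int>,
                 std::shared_ptr<Bitmap>> cache_;
    public:
        std::shared_ptr<Bitmap> get(const std::string& path, int size) {
            auto key = std::make_pair(path, size);
            if (auto it = cache_.find(key); it != cache_.end())
                return it->second;  // already decoded, served from RAM
            auto bmp = std::make_shared<Bitmap>(render_icon(path, size));
            cache_.emplace(key, bmp);
            return bmp;
        }
    private:
        static Bitmap render_icon(const std::string& /*path*/, int size) {
            // Placeholder: a real version would parse and rasterise
            // the vector icon here.
            return Bitmap{size, size,
                          std::vector<std::uint32_t>(size * size)};
        }
    };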
I wonder what the reason for its slow adoption is...
I'm sure there are ways of getting around the problem, but having lots of duplicate data (even if each icon is very small) isn't generally a good idea (but perhaps this is an exception).
How does it compare to a minified SVG, gzipped, in practical terms? I see projects elsewhere just blithely using SVG or SVGZ (or, in horrifying cases, multiple sizes for hundreds or thousands of icons). Perhaps this is something that would be suitable for wider use if it can get good library support.
Optimization tools do exist to simplify the tree, but they can only do so much. I think this format, built with optimality in mind, might be a solution.
Anecdotal: I once had to edit ~20 icons. The client had provided SVGs because they'd somehow lost the original AI files. When I imported them into Sketch 3, the nesting, masks, and transforms applied were absolutely horrifying! I had to optimize the icons using , which did remove the masks and transforms, but in the end I had to manually edit the SVGs' XML source to fix the nesting. Sigh.
("approximately everyone" to the point where Wikimedia occasionally considers just using Inkscape-as-a-service as the SVG renderer instead of rsvg, for best quirk-compatibility.)
I do wish that XML-based formats were more deterministic to generate. Order should matter; however, I have seen several tools generate something that looks like this:
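    <!-- illustrative example (mine): two runs of the same tool,
         same element, attributes in a different order each time -->
    <rect width="10" x="0" height="10" y="0" fill="#000"/>
    <rect fill="#000" y="0" x="0" height="10" width="10"/>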
That's what I get for spending like a decade primarily using Photoshop, I suppose.
In contrast, the lead developer of Inkscape is active both in code and on the SVG specification mailing list, oftentimes implementing new features ahead of time to see how well they fit into SVG and where the spec might still need to be changed.
In practice, when valuing spec compliance and feature support, Batik and Inkscape are the only sensible options. This has nothing to do with quirk compatibility.
I personally tend toward the "users do scroll, it's okay to make long pages" school of thought. But some extreme cases are genuinely unpleasant to use. That said, I don't consider this case that extreme, because at least it doesn't take over my ability to scroll at the pace I expect.
This icon library shows some good examples.