500 Byte Images: The Haiku Vector Icon Format (leahhanson.us)
265 points by luu on Sept 3, 2016 | 81 comments



> [Vector format] means that you only need one file per icon; it doesn’t matter how many sizes you want to render icons at.

That's not really true. Even with vectors, it's good to have different images for different sizes that add details (for larger sizes) or remove details and enlarge the remaining components (for smaller sizes), so the icons look good at every size. You can see this in the tape recorder icon the author shows: at larger sizes it looks much better than the bitmap images, but at smaller sizes the bitmapped icons look better; the vector icon becomes a mush of unclear elements. The bitmap draws pieces proportionally larger than they really are, so the visual elements you consider important still show clearly.


There is a feature of HVIF, called level of detail, that lets you vary what shapes appear according to the display size. You can find it in the Icon-o-Matic documentation. Ctrl-F for "LOD" in https://www.haiku-os.org/docs/userguide/en/applications/icon...
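In renderer terms it's basically a per-shape visibility range checked against the current rendering scale. A minimal sketch of the idea (struct and field names are made up, not HVIF's actual on-disk layout):

    // Illustrative only: each shape carries a min/max rendering scale and the
    // renderer skips it outside that range. HVIF's real structures differ.
    struct IconShape {
        float minScale;   // don't draw below this scale
        float maxScale;   // don't draw above this scale
    };

    bool ShouldDraw(const IconShape& shape, float renderScale)
    {
        return renderScale >= shape.minScale && renderScale <= shape.maxScale;
    }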


So like media queries, in CSS? That's neat.



That's pretty cool!


Very interesting


That's good, although you also want to change the size of things—make the remaining details larger at smaller sizes, so they show more clearly. Cape Cod is really narrow, but for a small US map you still want it clearly defined :-)


I imagine you would support that by showing the larger shape while hiding the smaller one in the small icon, and showing the smaller shape while hiding the larger one in the large icon.


That kind of goes back to having one file per size, in terms of space. Although it's more flexible and might be easier to maintain.


Having one vector file per size would mean each file is close to the full size of the combined one. Adding one resized shape only adds twenty-some bytes (a few headers/indexes, plus the transformation matrix); this is a lot less than a whole new file.
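Rough arithmetic, assuming six affine matrix coefficients at three bytes each (the article describes 24-bit floats for matrices, if I remember right): 6 × 3 = 18 bytes for the matrix, plus a few bytes of shape header and style/path indices, which lands right around those twenty-some bytes.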


This is something that irritates me about high-dpi displays. If you have a small 16x16 icon and go to high dpi, often a regular 32x32 icon is used. So you just use a larger low-dpi icon. The problem is that the icon at the same physical size now has much more detail and looks less clear. The correct thing to do would be to have a separate 16x16@2x icon with 32x32 pixels that is intended for the same physical size as the original 16x16 icon.
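A sketch of the lookup I'd want (the asset-naming scheme here is made up; the point is that the scale factor selects a different asset rather than just a bigger logical size):

    #include <string>

    // Hypothetical naming: "<size>x<size>" plus "@<scale>x" on hi-dpi, so a
    // 16x16 slot at 2x resolves to "16x16@2x" (32x32 pixels drawn for that
    // physical size) instead of the ordinary 32x32 icon.
    std::string IconAssetFor(int logicalSize, int scaleFactor)
    {
        std::string name = std::to_string(logicalSize) + "x"
                         + std::to_string(logicalSize);
        if (scaleFactor > 1)
            name += "@" + std::to_string(scaleFactor) + "x";
        return name;
    }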


>So you just use a larger low-dpi icon. The problem is that the icon at the same physical size now has much more detail and looks less clear.

In what way does it look "less clear"?

The problem with showing more details on the low-dpi screen for a 16x16 icon was the low resolution of the screen.

A hi-dpi screen doesn't have this issue. It's effectively showing 32x32 quality on the 16x16 "virtual pixel" size.

At worst, it would look as good as a 32x32 icon on a low-dpi screen. But on the hi-dpi screen it looks even better, much sharper and more refined. You're getting an icon that's as good as a 32x32 icon would be on a low-dpi screen, but at half the physical size, so it appears twice as detailed as a 16x16 icon would be.


The top comment in this thread details why it looks less clear: some icons have different proportions at different icon sizes to make sure things look okay. So the problem is that proportions planned for one physical size are used for a smaller physical size.


16x16 and 32x32 allow for the same proportions though (including alpha-ed out pixels).

And if the icon looks OK at 32x32 on a low-dpi screen, it will look doubly so at 16x16 virtual pixels with doubled resolution underneath on a high-dpi one.


A 256x256 keyboard icon is going to look pretty bad at 1cm x 1cm.

A bunch of details really close together looks like noise, not information


Not necessarily. When the first iPhone with a retina display came out, I eventually realized that the Contacts app had incredibly tiny letters on the tabs. It was only visible if you looked very closely, but was an amazing touch of polish.


That's actually what the Gnome Desktop Environment (GDE) does. They have separate images for each icon size, but use vector graphics so they're easier to edit.


>Gnome Desktop Environment (GDE)

Is this a common acronym?


No. The official acronym is GNOME, AFAIK.


If you ever look into the memory consumption of modern applications and filetypes, you'll probably both throw up and have a heart attack. Very few people are interested in tuning the constant factors in the efficiency of their software, even when the benefits of doing so can be really impressive as demonstrated here.


It's partly a trade off between difficult to read code (highly optimized assembly) and executable size. Some old console games actually employed self modifying code not for obfuscation, but size.


The easiest example to see (to me) is the reuse of the cloud/bush sprite in Super Mario Bros. 1. They use the same sprite with an alternate palette. [1]

The most complicated example I've seen thus far, however, is that of Crash Bandicoot for the PS1.[2] Perhaps the most interesting thing is the description of their efficient code use:

"Ultimately Crash fit into the PS1's memory with 4 bytes to spare. Yes, 4 bytes out of 2097152. Good times." (3)

[1] - http://www.rantgamer.com/wp-content/uploads/2014/12/Clouds-a... [2] - http://all-things-andy-gavin.com/2011/02/02/making-crash-ban... [3] - https://www.quora.com/How-did-game-developers-pack-entire-ga...


Thanks for the links man, the crash bandicoot stuff is a great read


But with so much memory available on modern systems, the benefits of optimizing make less and less sense.

It is often not worth the effort.


"Systems have so much memory, it doesn't matter if I increase memory usage for simplicity or maintainability."

Now look at your stack and count how many developers could have uttered this phrase when designing each module, subsystem, library, protocol, service, daemon, file type, interface, or plug-in that's part of it.

We should count ourselves lucky that hardware engineers have afforded us the ability to make such trades but we shouldn't take it for granted.


In this case I think the tradeoff is not worth it. We're not in the 80s anymore and it's OK for a desktop to load an app icon with an additional file access. Even more so if it's a well-known vector format.

Imagine what would have happened if HTML had been a non human-readable binary format just to spare a few bytes.

Some optimizations are counter productive.


We're not in the 80s anymore and it's OK for a desktop to load an app icon with an additional file access.

Some optimizations are counter productive.

In what context? Also, aren't you making presumptions about new contexts that might appear? We have for decades been in an era where devices get more power efficient and smaller, allowing them to become more ubiquitous. BeOS used to be able to fit a fully kitted OS into under 300MB that could punch way above its weight in terms of multimedia multitasking. Right now computers attached to your body often need to be recharged every day or once a week. Even now, we'd like those computers to be able to do more with even less power.

If I could make a computer so small and cheap that a big company wouldn't care if one is lost occasionally, yet with enough longevity to accompany a freight shipment of packages on its entire journey while recording or even transmitting data, I bet I could sell a bunch of those.


Imagine what would have happened if HTML had been a non human-readable binary format just to spare a few bytes.

If you want to "just spare a few bytes", you could do the same to HTML and keep it text-based. Comparing something like XML and JSON shows that "human-readable" formats can vary significantly in complexity alone.

Besides, I think we could all do with fewer layers of abstraction in our lives. As a reverse-engineer I know once remarked, "Everything is human readable if you have a hex editor."


XML is more extensible than JSON, so the latter is a trade-off, not an optimization. Perhaps comparing with binary XML would be better.


For some given value of "human."


"We're not in the 80s anymore and its ok for a desktop to load an app icon with an additional file access."

Mobile device, network latency?


Early in my career I heard just such advice from a guy I respected as a big success in his field.

A couple of years later, when his code started taking down other actors in the ecosystem, it became my job for a while to replace his modules with my own implementations that were more efficient. Sometimes by a factor of 1000. Literally.

Doing it right the first time would have only taken a fraction of the effort it took in the end (assuming you have the proficiency to do so).

I find dismissive comments about efficiency troubling.


There is a difference between doing the right thing for efficiency and going out of your way to make things as fast as possible. Storing 1k entries in an O(n) data structure is an example of failing at the former; implementing your own hashmap, optimized within an inch of its life using domain-specific knowledge, instead of using your language's built-in type, is an example of the latter.

Don't write code you know will be too slow, but don't optimize the code just because you can. By the same token, if you find out that your code is too slow, it's your job to optimize it.

Take performance seriously, but don't optimize before you know why you're optimizing.


In my experience, "mildly optimised" code tends to be simpler and more efficient in terms of the effort required to maintain it too. Anyone who has compared a compiler's -O0 to its -O1 will likely come to the same conclusion. It's only at the "-O2" and above where the effort starts becoming significant.

Of course, different programmers will have different ideas of what "mildly optimised" means; what I mean by "mildly optimised" above is actually the code I'd write as a first pass and consider not optimised at all, and not "the first thing that comes to mind even if it's actually horribly stupidly inefficient" which a lot of programmers seem to do. In other words, my idea of unoptimised is probably more like a -O1, and I'd have to spend extra effort "pessimising" to go below that.


The 80/20 rule tends to give you significantly different returns on optimization, depending on what code you optimize. Then, there are other priorities to consider, like optimizing for programmer reads. By all means, if you can write easier to understand code by writing efficient code, then do so.

"the first thing that comes to mind even if it's actually horribly stupidly inefficient"

There are certain practices, like the "Law of Demeter" and the "Replace Temp with Query" refactoring that are deliberately inefficient for the purpose of making refactoring and code changes easier. I think of these as being somewhat like a filing system that leaves a little bit of inefficiency to facilitate later reorganization.


> It is often not worth the effort.

This attitude is why we can't have performant smartphones with 1GB of RAM.


It depends on what you consider optimizing.

It's one thing to do evil floating point bit level hacking and such to save a nanosecond or two, and another to write a FizzBuzz in a way that doesn't require 700 external libraries to begin with.


But but...my 700 node dependencies! D:


Data size is usually the constraining factor in performance: loading memory from your SSD, loading it over the internet, processing it. The smaller the data, the closer to the CPU it can sit in the cache hierarchy; copying data is expensive, and less data means less register pressure.

And performance still sucks for many things. They should be instantaneous (under one frame, which is ~16ms at 60Hz). Loading and parsing SVG files from the internet is one such thing that takes way too long.


For anyone interested in compact vector graphics, SWF (Macromedia/Adobe Flash) has a shape format which is also similarly efficient and might be even smaller, since it doesn't restrict things like coordinates and colour indices to multiples of bytes --- fields are bit-aligned.

SVG is ridiculously bloated in comparison. Even PostScript and PDF are more efficient.
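For a rough feel of what bit-aligned reading looks like, here's a sketch in the spirit of SWF's shape records (not the actual SWF parser):

    #include <cstddef>
    #include <cstdint>

    // Reads 'count' bits, most significant bit first, from a byte buffer at
    // an arbitrary bit offset; this is how bit-packed fields avoid byte
    // alignment entirely.
    struct BitReader {
        const uint8_t* data;
        size_t bitPos = 0;

        uint32_t ReadBits(unsigned count)  // count <= 32
        {
            uint32_t result = 0;
            for (unsigned i = 0; i < count; ++i) {
                uint8_t bit = (data[bitPos >> 3] >> (7 - (bitPos & 7))) & 1;
                result = (result << 1) | bit;
                ++bitPos;
            }
            return result;
        }
    };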


SVG is a strange combination of really bloated and not easily human-readable. Try to make sense of the shape data sometime. I'd expected an XML list of points, but it's nothing that clear: it's a string of letters and numbers as an attribute of a tag. It may very well be readable if you learn the language, but it's certainly not self-describing, as I would have expected from an XML format.


I'd expected an XML list of points but it's nothing that clear, it's a string of letters and numbers as an attribute of a tag

It's like they actually noticed how much more bloat would result from making each point an element, so instead they just decided to dump what is essentially bastardised PDF/PostScript path data into an attribute. The "language" also doesn't match PDF exactly, despite using a similar style of short 1-letter commands.

Compare:

https://www.w3.org/TR/SVG/paths.html

http://www.websupergoo.com/helppdfnet/source/4-examples/17-a...
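For anyone who hasn't looked, a typical path element is something like this (coordinates invented), with the whole outline packed into the d attribute's mini-language rather than anything self-describing:

    <path d="M 10 80 C 40 10, 65 10, 95 80 S 150 150, 180 80 Z" fill="#336699"/>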


The article is wrong. The very first image is 375 bytes, not 1024:

https://www.haiku-os.org/docs/userguide/images/apps-images/i...

And I was able to losslessly reduce it to 298 bytes with PNGOUT.

The smaller image is 257 bytes, and I reduced it to 186.


The article talks about raw bitmaps, though, while PNG is a bitmap compressed with deflate.

Since rendering a bitmap is straightforward, I imagine it would be preferred here for performance (I am just guessing here).


Isn't HVIF compressed SVG? They basically substituted the verbose tags with binary, which is a form of compression.


You're right of course. However, I'd not call it compression of SVG. HVIF seems to have defined a simplified encoding (probably inspired by SVG)?

[Edit]: HVIF is rendered in a single pass (except in a few cases), while SVGs (and most vector formats) render each element individually.


Good spotting. Also I am curious to know the size of the SVG icon compressed with gzip. I can't find the original file that they claim is 7192 bytes.


There seem to be a few constraints regarding the object count (255 with just one byte). Even though you could say that most icons won't be complex enough to need that, it's an unnecessary limitation that saves two or three bytes at best.

Other than that, pretty incredible work. Vector image compression at the encoding level is definitely interesting. I also love posts in which binary formats are explained, so...


Variable length integers could have easily increased the object count limit with only a very small increase in size. VLQ[1] would have been one good choice, but various other schemes would have allowed the same size as a one-byte encoding with up to about 250 objects while still allowing larger numbers.

But of course that increases encoder and decoder complexity. For icons, a 255 style/path limit seems reasonable and keeps the code simple.

[1]: https://en.wikipedia.org/wiki/Variable-length_quantity
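For the curious, the scheme is simple enough; here's a sketch of a VLQ-style encoder (7 payload bits per byte, continuation bit set on every byte except the last; generic, nothing HVIF-specific):

    #include <cstdint>
    #include <vector>

    // Emits 7-bit groups, most significant group first; every byte except
    // the last has its high bit set. Values 0-127 still take a single byte.
    std::vector<uint8_t> EncodeVLQ(uint32_t value)
    {
        std::vector<uint8_t> groups;   // least significant group first
        do {
            groups.push_back(value & 0x7F);
            value >>= 7;
        } while (value != 0);

        std::vector<uint8_t> encoded;
        for (size_t i = groups.size(); i-- > 0; )
            encoded.push_back(i == 0 ? groups[i] : groups[i] | 0x80);
        return encoded;
    }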


PrefixVarint [1] is usually a better choice than VLQ. It's faster to decode and requires the same number of bits.

[1] https://news.ycombinator.com/item?id=11263378


Get it down to 280 bytes and you can fit one in a tweet: https://github.com/ferno/base65536
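(For the arithmetic: base65536 packs 16 bits into each Unicode code point, so a 140-character tweet holds 140 × 2 = 280 bytes.)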


I don't understand the application of this to filesystems. To me it sounds like the usual way to show file icons is a simple mapping from a file type to an icon file, and in this solution that icon gets put into the metadata of the file instead of its own file, so it's not a property of the file type anymore but of the individual file. This couples the FS implementation to the GUI, right? Isn't that horrible? If you want to change the icon for one file type, now you have to walk the whole filesystem and touch the metadata of every file. Also, if you change the extension of the file, you have to change the icon in the metadata as well.

Edit: I would just teach the GUI to cache the icon folder in RAM.


I imagine it's similar to MacOS, where you can edit the icon of any file type and it's stored in a resource fork.


Haiku! Hadn't heard about this project for a long time. What's their status right now? How active is the development?


They still post monthly summaries, seems active https://www.haiku-os.org/blog/pulkomandy/2016-07-28_haiku_mo...


I hope it someday gets to the point where I can use it as a daily driver. I think it's really cool, and as long as it runs a web browser and allows me to SSH into another machine for anything that requires another OS I'd be happy. It would be a lot of fun to play around with.


Yeah, I loved BeOS in the old times, and I'd really love for Haiku to get more mainstream and productive for daily use. Its combination of speed, lightness, simplicity and cohesion is something for other OSs to envy. I love it.

I wonder what the reason for its slow acceptance is...


It's stated that the goal is to fit file icons into inodes to avoid additional disk accesses. How much of an issue is this on an SSD instead of a hard disk?


I'm curious as to how you'd update the icon for a file type. Would you have to update the inode of every single file of that type? That can't be too efficient.

I'm sure there are ways of getting around the problem, but having lots of duplicate data (even if each icon is very small) isn't generally a good idea (but perhaps this is an exception).


Yeah, I'm a bit confused about what this is for. Any icons common to an entire file type should be cached in RAM already, so the only thing I can think of would be file-specific icons. But I expect most of those would be thumbnail images, which would not be amenable to a concise vector representation.


Also, doesn't every modern OS use an icon cache? So you'd only read the icon once for every file type.


It's much less of an issue than with HDDs, but random read latency to a fast SSD is still on the order of 100,000 clock cycles or more.
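Back-of-the-envelope, assuming ~30 µs random read latency on a decent SSD and a ~3 GHz core: 30 µs × 3×10^9 cycles/s ≈ 90,000 cycles, so ~10^5 is the right ballpark.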


That's amazing!

How's it compare to a minimised SVG, gzipped, in practical terms? I see projects elsewhere just blithely using SVG or SVGZ (or, in horrifying cases, multiple sizes for hundreds or thousands of icons). Perhaps this is a thing that would be suitable for wider use if it can get good lib support.


SVGs generated from popular vector editing software can get very complex, with cascading style overrides, transformations, etc. Even though SVGZ would reduce the file size, parsing the SVG tree and rendering will still be unoptimized.

Optimization tools do exist [1] to simplify the tree, but they can only do so much good [0]. I think this format, built with optimality in mind, might be a solution.

[0]: Anecdotal. I once had to edit ~20 icons. The client had provided SVGs because they'd somehow lost the original AI files. When I imported them in Sketch 3, the nesting, masks, and transforms applied were absolutely horrifying! I had to optimize the icons using [1], which did remove the masks and transforms, but in the end I had to manually edit SVG's XML source to fix nestings. sigh.

[1]: https://github.com/svg/svgo


I know exactly what you mean. Approximately everyone uses Inkscape in practice, but I too have had occasion to go into gvim and edit the Inkscape SVG by hand. I will say that it really helps to know what's going on, but having to is less than ideal.

("approximately everyone" to the point where Wikimedia occasionally considers just using Inkscape-as-a-service as the SVG renderer instead of rsvg, for best quirk-compatibility.)


Right!? I have another "fun" story where I found using SVGs a bittersweet experience. The firm I was interning at this summer had a third-party project of extracting waveform data from PDFs generated by ECG machines. Since parsing PDFs is another horrible experience, I used Inkscape to generate SVGs, and then used XPaths to parse the SVG's XML to get the data. All was good, except that the XML tree was ~30-40 nodes deep in places.

I do wish that XML-based formats were more deterministic to generate. Order should matter; however, I have seen several tools generate something that looks like this:

    <foo transform="translateX(10)">
      <bar transform="translateX(-10)">
        <...>
      </bar>
    </foo>
Inkscape has less than ideal UX on Mac though, which is why I use Sketch.


I couldn't get Inkscape to work on Mac at all.


I used Inkscape recently, and found it quite a nightmare to use, even if its UI reminded me of Corel Draw quite a bit.

That's what I get for spending like a decade primarily using Photoshop, I suppose.


It is impossible to use Inkscape without running through at least some of the tutorials. Once you have a good conceptual understanding and have found some of the stupid bits that make no sense (what, you expected "Object Properties ..." to contain, ooh I dunno, properties of the object?) it's entirely suited to doing proper work. But no, its interface is very bad on discoverability.


Inkscape is definitely software where reading the manual is a good idea. I tried to use it without doing so, and found it unusable.


rsvg has had a bunch of annoying bugs and in general seems to be further behind in terms of spec compliance and feature support. When I was creating SVGs for Wikimedia Commons I had to work around a number of rendering bugs. And I didn't even use Inkscape, I wrote the SVG by hand.

In contrast, the lead developer of Inkscape is active both in code and on the SVG specification mailing list, oftentimes implementing new features ahead of time to see how well they fit into SVG and where the spec might still need to be changed.

In practice, when valuing spec compliance and feature support, Batik and Inkscape are the only sensible options. This has nothing to do with quirk compatibility.


This seems to be the source she is referring to: https://github.com/tqh/haiku-efi/tree/master/src/libs/icon/f..., specifically the FlatIconFormat class. Some of the most interesting deconstructions and experiments have come out of The Recurse Center; great learning experiences and blog posts by these students. I found the concept of the format interesting in how it meets the goals of being modern (a vector format) while staying slim for disk reads (optimizing the format specifically for icons). Lastly, the usability aspect is tackled by creating an icon editor specifically for the format. In the creator's blog post, he notes that creators might not be able to take full advantage of the format because they might not know how it works, but I think that's fine since the format constrains the palette anyway.


Leah, please consider this: http://contrastrebellion.com


I'd rather like to see a movement against websites that force users to vertically scroll down page after page to read three paragraphs of text.


That's interesting. Separating parts of the content too far presents a usability challenge of its own, but failing to separate them enough presents a similar set of usability challenges.

I personally tend toward the "users do scroll, it's okay to make long pages" school of thought. But certain extreme cases are extremely unpleasant to use. That said, I don't consider this one case so extreme, because it at least does not take over my ability to scroll at the pace I expect.


I think the problem here might be that, if you're skimming the text of these "slides" and not looking at the illustrations, you can actually read faster than it's possible to scroll on most computers, and thus "stall your [visual] pipeline." Normally, with scrolling, I can still read while I'm scrolling. In this case, I have to read-THEN-scroll-THEN-read-THEN-scroll, etc.


Isn't one of the reasons why bitmaps are used that icons should actually vary from a standard vector image when shown at very small sizes to remain clear and readable?

This icon library shows some good examples.

http://mezzoblue.com/icons/chalkwork/


Given the processing power of modern computers, I have always been surprised that operating systems haven't moved to vectors for all user interface elements. You would think this would be desired for resolution independence, if nothing else.


The problem is that the shadows at the smallest sizes are not so clear and the icons are harder to see.


Macintosh PICT was just a bytestream of Toolbox draw calls. I think Windows EMF was too.


The innate proportion-consistency of pixel art still fascinates me.



