The smallest 256x256 single-color PNG file, and where you've seen it (2015) (mjt.me.uk)
482 points by karulont on April 22, 2022 | 101 comments



With some dirty hacks, I got it down to 83 bytes:

https://cdn.discordapp.com/attachments/286612533757083648/96...

  00  89 50 4e 47 0d 0a 1a 0a  00 00 00 0d 49 48 44 52  |.PNG........IHDR|
  10  00 00 01 00 00 00 01 00  01 03 00 00 00 66 bc 3a  |.............f�:|
  20  25 00 00 00 03 50 4c 54  45 b5 d0 d0 63 04 16 ea  |%....PLTE���c..�|
  30  00 00 00 1b 49 44 41 54  68 81 ec c1 01 0d 00 00  |....IDATh.��....|
  40  00 c2 a0 f7 4f 6d 0f 07  14 00 00 00 00 00 00 00  |. �Om..........|
  50  c0 b9 01                                          |��.|
Although technically invalid, it still renders fine in Firefox, Chrome, and Safari.
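If you want to poke at it locally, here's a quick sketch (Python, just transcribing the hex dump above; "tiny.png" is an arbitrary name):

  # Rebuild the 83-byte file from the hex dump and write it out.
  data = bytes.fromhex(
      "89504e470d0a1a0a0000000d49484452"
      "0000010000000100010300000066bc3a"
      "2500000003504c5445b5d0d0630416ea"
      "0000001b494441546881ecc1010d0000"
      "00c2a0f74f6d0f071400000000000000"
      "c0b901"
  )
  assert len(data) == 83
  open("tiny.png", "wb").write(data)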

Edit: 87 -> 83 bytes

Edit2: Maybe in a couple of years' time, we can use JPEG XL instead (only 22 bytes, without any hacks!):

  data:image/jxl;base64,/wp/QCQIBgEALABLOEmIDIPCakgSBg==


Hadn't heard of JPEG XL until you mentioned it. The format looks really cool! Support for lossless and lossy compression, animation, and tons of other new features.

https://en.wikipedia.org/wiki/JPEG_XL

Preliminary support in Firefox and Chromium nightly/testing builds already. I share your hope that we can start to use it in the next couple years. Looking at you, Safari ;)


Still sad about the demise of JNG all these years later. There was even a "light" version of the spec to address the complexity issues, and someone in bug #18574 had even made a reference library which took up less space than the default Mozilla libpng, and maintained their own browser branch. It was maintained for a few years but abandoned when it was clear Mozilla was all in on APNG.

But, yeah, we had all those things in something that was actually in major browsers, 22 years ago.

https://bugzilla.mozilla.org/show_bug.cgi?id=18574

http://mngzilla.sourceforge.net/


There is this JPEG XL art gallery with very nice images of 20-100 bytes, e.g.: https://jpegxl.info/art/2021-04_jon.html

To see them, enable the flag in Chrome: chrome://flags/#enable-jxl


I thought that it had existed for decades, but I had confused it with JP2. https://en.wikipedia.org/wiki/JPEG_2000


JXL without container has a very short header, which is nice.

WebP is 38 bytes, and is already supported by browsers.

  00000000  52 49 46 46 24 00 00 00  57 45 42 50 56 50 38 4c  |RIFF$...WEBPVP8L|
  00000010  18 00 00 00 2f ff c0 3f  00 07 50 e8 d6 16 ba ff  |..../???..P??.??|
  00000020  01 00 45 fa ff 9f 22 fa  9f fa df 7f              |..E??."?.??.|


  data:image/webp;base64,UklGRiQAAABXRUJQVlA4TBgAAAAv/8A/AAdQ6NYWuv8BAEX6/58i+p/6338=

Maybe it can be made smaller; I just used cwebp -z 9.


> Although technically invalid, it still renders fine in Firefox, Chrome, and Safari.

When I learned HTML the syntax was sooo particular.

Now (or pretty much since then) anything goes, and I love it.

Natural evolution of protocols


No. It goes only because there are very few browser engines, which can mostly align their behaviour with each other.

Protocols with multiple implementations are way more strict, because you can't feasibly test your quirky approach on every implementation, and the chance they will all be as forgiving is slim.


HTML parsing is well-specced these days.


The parsing is easy. And done in many, many libraries outside of the browsers. (And no, parsing with regex is still not possible, zalgo)

The problem is not parsing of HTML; it's the DOM events, CSS application, JavaScript APIs, and above all the combination thereof, which must all behave exactly the same. That's what makes it hard.


Parsing was never so much the problem as what was rendered afterwards being different.


Still super bloated compared to the tiniest GIF:

  47 49 46 38 39 61 01 00 01 00
  00 ff 00 2c 00 00 00 00 01 00
  01 00 00 02 00 3b

http://probablyprogramming.com/2009/03/15/the-tiniest-gif-ev...


That’s bloated ;-)

   P1
   1 1
   1
is a 9-byte 1 × 1 pixel black image in portable bitmap format (https://en.wikipedia.org/wiki/Netpbm#File_formats). Consumers of such files will likely know how to scale them to 256 × 256 pixels. The trailing newline may not even be necessary; if so, it would become 8 bytes.

Gray 1 × 1 pixel images in binary portable graymap format have the same size. To get a non-gray RGB color, you'll need two more bytes in binary portable pixmap format.
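The 9-byte claim for the P1 file above is easy to check (Python):

  data = b"P1\n1 1\n1\n"  # magic, "width height", one black pixel
  print(len(data))        # 9 (8 if the trailing newline is dropped)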


If we permit the fairly recent QOI format[0] we can produce a 1x1 transparent pixel in just 23 bytes (14 byte header, 1 byte for QOI_OP_INDEX, 8 byte end marker):

  71 6f 69 66 00 00 00 01 00 00 00 01 04 00 | 14 byte header
  00                                        | QOI_OP_INDEX 
  00 00 00 00 00 00 00 01                   | end marker
Similarly, the 103-byte PNG in the article would be ([EDIT] this is incorrect, see below):

  71 6f 69 66 00 00 01 00 00 00 01 00 04 00 | 14 byte header
  fe b7 d0 d0                               | QOI_OP_RGB, RGB color, 
  fd fd fd fd c6                            | QOI_OP_RUN for 62+62+62+62+7
  00 00 00 00 00 00 00 01                   | end marker
[0]: https://qoiformat.org/qoi-specification.pdf

[EDIT] I realized that we actually run into one of QOI's drawbacks if we were to encode the 103 byte png in the article, as we actually need to repeat the pixel 65535 times, so we'd have floor(65535/62)=1057 QOI_OP_RUN bytes followed by another QOI_OP_RUN to repeat the last pixel. Here it's pretty clear that the QOI spec missed out on special handling of repeated QOI_OP_RUN operators, as long repetitions could have been handled in far fewer bytes.
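The arithmetic, as a quick Python sketch (the first pixel goes out as QOI_OP_RGB, every later pixel as part of a run):

  pixels = 256 * 256
  runs, rem = divmod(pixels - 1, 62)   # 62 = max run length per QOI_OP_RUN
  size = 14 + 4 + runs + (1 if rem else 0) + 8
  print(runs, size)  # 1057 full runs; 1084 bytes in total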


So what are the tricks?


The first trick is simply "cut the end of the file off". This saves the adler32 checksum on the end of the zlib stream, the crc32 on the end of the IDAT chunk, and the entire IEND chunk.

This works because modern browsers have support for progressively rendering images that are still being downloaded - as a result, truncation is also handled gracefully.
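As a sketch, assuming a single-IDAT PNG like the 103-byte one from the article ("full.png" is a placeholder name):

  # Drop the 12-byte IEND chunk, the 4-byte IDAT CRC, and the 4-byte
  # adler32 that ends the zlib stream inside IDAT.
  png = open("full.png", "rb").read()
  open("truncated.png", "wb").write(png[:-(12 + 4 + 4)])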

However, this alone results in rendering errors - the last few rows of pixels end up missing, like this:

https://cdn.discordapp.com/attachments/286612533757083648/96...

I don't know the precise reason for this, but I believe the parsing state machine ends up stalling too soon, so I threw some extra zeroes into the IDAT data to "flush" the state machine - but not enough to increase the file size.


One thing I've wondered about is size savings for DEM tiles. Typically elevation values are encoded in RGB values, giving a resolution down to fractions of an inch [1]. This seems like overkill. With an elevation range from 0 to 8848 meters (Mt. Everest), you can use just 2 bytes and get accuracy down to 0.2 meters. That seems plenty for many uses. Does anybody know if there's a PNG16 format where you can reduce the file size by only using 2 bytes per pixel, instead of the typical 3-byte RGB or 4-byte RGBA?

[1] https://docs.mapbox.com/data/tilesets/guides/access-elevatio...
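The back-of-envelope math behind the 2-byte claim (Python):

  print(8848 / 2**16)  # ~0.135 m per step, comfortably under 0.2 m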


Not my area of expertise, but in my limited experience DEM tiles are usually GeoTIFF. This can be 16-bit greyscale. The catch is that these are actually signed... elevation doesn't start at 0 meters because you have locations below sea level, and you need to handle those corner cases somehow.

What's funny is that you can parse a GeoTIFF as a .tiff most of the time but not always. I had fun debugging that :). Java's BufferedImage understandably doesn't directly support negative pixel values haha


Data from sources like GMTED2010 or SRTM15+ is often float32 or even float64: whether it needs to be is another question, but float16 often isn't sufficient in terms of magnitude accuracy (IEEE), and as you mention you often need negative values as well, which for the whole ocean surface of the earth more than doubles the range.

To me (working in the VFX industry, where EXR is the predominant HDR format), it's interesting that something that compresses a lot better than TIFF (i.e. EXR) hasn't won over in the GIS space. I believe that's mostly momentum, as well as the fact that EXR doesn't natively support 64-bit float; but then neither does TIFF really (it's an extension), and the same could be done with EXR (extend the formats it supports).


The PNG format already allows grayscale images at 16 bits per channel. I regularly use this when rendering out depth maps from Blender, and ffmpeg seems to handle reading from these PNGs just fine (detecting the gray16 pixel format).

However, I don't know of any DEM tile APIs that provide these sorts of PNGs, but it sounds like a fun project!

Edit: I found this StackExchange post which shows how to generate 16-bit PNGs with gdal https://gis.stackexchange.com/questions/246934/translating-g...
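As a sketch of what writing such a tile could look like, using the third-party pypng library (an assumption on my part; the gdal route in the link above works too):

  import png  # pypng
  rows = [[0, 4424], [8848, 1234]]  # elevations in metres, fit in uint16
  with open("dem16.png", "wb") as f:
      png.Writer(width=2, height=2, greyscale=True, bitdepth=16).write(f, rows)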


I hacked up a quick script and converted https://s3.amazonaws.com/elevation-tiles-prod/terrarium/0/0/... to 16-bit (2 channel) PNG.

Any value below sea level I set to 0. Any value above sea level I converted using the following formula:

  scaled = elevation * Math.floor((256 * 256) / (8849 - 0));
  top_byte = Math.floor(scaled / 256);
  bottom_byte = scaled % 256;

This provides accuracy to 1/7th of a meter, compared to 1/10th of a meter with Mapbox, but the tile size went from 104KB to 25KB. For applications that can ignore elevations below sea level, this is a huge savings.
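For completeness, the corresponding decode (the scale factor is floor(65536 / 8849) = 7, which is where the 1/7 m resolution comes from):

  def decode(top_byte, bottom_byte):
      return (top_byte * 256 + bottom_byte) / 7  # metres above sea level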

Edit: the top level tile has a lot of ocean so the size savings are better than average. On a tile with no water, the savings appear to be around 50%.


Not exactly the same, but I did some work a while back retrofitting 8-bit PNGs as DEMs with non-linear elevation profiles (in the days when Mapbox only supported 8-bit DEM uploads). This allowed for fine detail in the range I was most interested in and coarser detail elsewhere. I was also working below sea level, so the standard models weren't suitable. I used gdal and node for the encode to PNG (from high-res GeoTIFFs) and then leant on Mapbox expressions for the custom decode on the front end. Looked cool and file size was reasonable. Though I'm certain much cooler things are now possible with 16-bit encode and the new terrain API.

Edit: a link to a mapbox-gl-js discussion on the use of custom dems/encodings (after which anything is possible): https://github.com/mapbox/mapbox-gl-js/issues/10775


If file size is your worry and accuracy not, maybe use a lossy format for tiles like JPG or JP2 rather than PNG?

Worth noting DEMs are moving away from tiled formats recently, mainly to COG (Cloud Optimised GeoTIFF), which isn't the most efficient but is a simple tweak to a file format that's already broadly adopted. There are a few others out there aiming for efficiency at scale too - ESRI has CRF and MRF, for instance - but nothing has become industry standard other than COG yet.


I’d suggest jpeg; lossy compression over the full precision data will get you to your target bit rate trivially.


People get funny about accuracy in maps. Being able to specify accuracy, even a limited accuracy, is worth a lot. Saving bytes by reducing specified accuracy is in a lot of use-cases better than saving bytes by fuzzing data.


You can compute the maximum error when you encode, which tells you how much precision you can still claim to have


How? I thought it was up to the JPEG decoder how to actually decode the image into pixels. (Not that JPEG couldn't be workable in practice if some care was put into a solution.)


Decoding JPEG doesn't leave much room for interpretation, and the images should essentially always be decoded the same. Encoding is a different story: there are steps that downsample the image data (chrominance data is often, but not necessarily, sampled at a lower rate than luminance data), and there can be a different cutoff point for which of the DCT coefficients to discard (usually related to the "compression" or "quality" setting of the compressor). All JPEG decoders should reproduce the same image from a given JFIF file, but I'd be surprised if different encoders produced the exact same JFIF file from a given source image.


For grayscale data there's very little ambiguity, since chroma and colorspace conversions aren't involved. Basically just rounding in DCT, for which you can make reasonable assumptions.

Moreover the JPEG XT spec (not to be confused with JPEG XL or the ton of other X-somethings JPEG made) has specified precisely how to decode the classic JPEG format, and has blessed libjpeg-turbo as the reference implementation.


I take your point; but I don’t think features on an in-browser map are ever small enough for compression artifacts to ruin the integrity of the map.


JPEG-XL has a lossless mode, and supports custom bit depths and channel counts.


> Instead of serving a 256x256px image, you can serve a 1px image and tell the browser to scale it up. Of course, if you have to put width= "256px" height= "256px" into your HTML that adds 30 bytes to your HTML!

CSS is a thing as well; you could just use CSS to force all tiles to the same size, regardless of the image data in them. Something like:

  .map img {
    width: 256px;
    height: 256px;
  }


Was new to me, but it seems "slippy map" is OpenStreetMap's terminology for a generic zoomable-pannable web map view, here used to refer to any such UI - whether backed by OSM or Google or Bing or whoever's map data.

Feels like a weird word choice to me, when 'map' was right there, but who are we to judge.


This is a very old term used to try to describe the Google Maps interface to people who never used an octree multidirectional scrolling and zooming image collection.


I would have guessed that they use quadtrees for this, splitting each non-leaf node into quadrants of more detailed maps as you zoom in.


It's a quadtree of sorts, but is typically done via map projection (web mercator) math so that each tile is replaced by four tiles at the next higher (more zoomed-in) zoom level.

Number of tiles to cover the world at zoom z = 4^z


Before Google Maps came out, online maps all looked like this: https://web.archive.org/web/20060428160705/http://www.multim... and this: https://web.archive.org/web/20050528023529/http://maps.yahoo...

View the map a single tile at a time, no dragging the map, no moving by less than a tile, no zooming with the mousewheel, every move and zoom a full pageload.

(You'll also notice the older maps are much higher contrast than Google Maps - the older maps being modelled on printed paper maps)


>every move and zoom a full pageload

That seems expected, since XMLHttpRequest wasn't really broadly standard until 2003/4 or so. Mapquest and other incumbents didn't move fast enough to use it.


But you could use Javascript to replace one image with another - or to move elements around the page in response to the mouse.

(Back in those days common usage was limited to trivial things like making buttons change colour on mouseover)


I don't know that that would be enough to deal with panning and zooming a tile-based map, at least not without some severe hackery.


Such maps presumably still had the issue of ocean tiles.


I love image formats and data packing, but was disappointed that the reveal was that maps use raster tiles. The map data is vector, why not render it as such?


That time when browsers rendered SVG funny if at all was not very long ago.

There's another challenge: you need to provide more and more data as you zoom in; I wonder how that should work with vector stuff.


This is already handled, afaict. When I zoom in on Apple Maps on a stalled data connection, I can see the sharp edges of the low-resolution vector data until the higher-resolution data is downloaded.

SVG is convenient but isn't necessary.


On Apple Maps the app, or on the web? Although I suspect it's canvas + some homegrown vector engine.


The app. I'm not sure how the web interface behaves.


A lot of rendering is vector-based these days and uses WebGL to do it, but there are still a lot of tile servers using images as well. This article is from 2015; vector maps were less common then and WebGL was a lot less mature.

For example, MapLibre is a great option for rendering vector-based OpenStreetMap maps from e.g. MapTiler or Mapbox. They can tilt the maps, render buildings in 3D, have stepless zooming, etc.


A single ocean tile on mapbox is 39 bytes:

  1a25 7802 0a05 7761 7465 7228 8020 1217
  1803 2213 0980 69e0 7f1a dfa8 0100 00bf
  bf01 e0a8 0100 0f
Decodes to this protobuf:

  layers {
    name: "water"
    features {
      type: POLYGON
      geometry: 9
      geometry: 13440
      geometry: 16352
      geometry: 26
      geometry: 21599
      geometry: 0
      geometry: 0
      geometry: 24511
      geometry: 21600
      geometry: 0
      geometry: 15
    }
    extent: 4096
    version: 2
  }
Geometry interpretation is here: https://github.com/mapbox/vector-tile-spec/tree/master/2.1#4...
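For reference, the two low-level decodings the spec defines, as a sketch: command integers pack an ID and a repeat count, and parameter integers are zigzag-encoded deltas.

  def command(v):             # command integer
      return v & 0x7, v >> 3  # (id, count): 1=MoveTo, 2=LineTo, 7=ClosePath

  def param(v):               # parameter integer (zigzag)
      return (v >> 1) ^ -(v & 1)

  print(command(9), command(26), command(15))  # (1, 1), (2, 3), (7, 1)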

And produces this geometry before reprojecting to the tile coordinates:

  Layer name: water
  Geometry: Polygon
  Feature Count: 1
  Extent: (0.000000, 0.000000) - (4096.000000, 4096.000000)
  Layer SRS WKT:
  (unknown)
  mvt_id: Integer64 (0.0)
  OGRFeature(water):0
    POLYGON ((0 0,0 4096,4096 4096,4096 0,0 0))
But of course, this doesn't specify a color, just "the ocean is a rectangle".


Some are, but it might be CPU-heavy depending on your platform, acceleration, and the number of layers/amount of information you have on the screen.

Android application OsmAnd is notoriously slow because of this: https://github.com/osmandapp/OsmAnd/discussions/11961


Some of the newer map tile formats are vectors instead of rasters. But technology hasn't quite caught up with that yet. And some things are still better as rasters (like overlaying satellite/aerial imagery).


Seems a lot more performant to generate single-color images programmatically rather than sending them over the wire?

Assuming this level of optimization is actually warranted


I mean, you could. Not a browser author or map maker but thinking it out loud.

This would be specific to 'rectangles of color'. Probably 24 bytes to represent height, width, color? It seems like a Herculean effort to attempt to get browser support for such a thing, for a phenomenally rare use case. It would probably need to be an image format and not a browser implementation, since they're usually arranged around other images. And all to save some 60 bytes per square.

To be clear, I'm not saying it's a bad idea - I'm all for it. It just seems like a pretty edge use case (large blobs of single-color images, such as oceans in cartoon maps).


People can and have written image decoders for custom formats in Javascript [1]. This seems like the same thing but for a very domain-specific use case.

Worth it? Not sure. But possible? Definitely.

[1]: https://bellard.org/bpg/


> It seems like a Herculean effort to attempt to get browser support for such a thing

https://caniuse.com/datauri


Data URIs have much more widespread utility than what is being discussed, I think.


fwiw, JPEG-XL manages to encode the whole image in only 22 bytes.


I thought so too, but is it actually the case? It's a bit more JS code, but if you pack all your JS files, there's one less HTTP header, and then it's cached. For the image you've got the initial size + header (roughly 500 bytes), then it's cached all the same. So if the additional JS, gzipped, is smaller than the 100 bytes of PNG + gz (400 bytes for the header), it might pay off.


The difference between the OSM and Google tiles is 75 bytes. So if they serve one million tiles, OSM saves 75MB.

OSM needs 54TB for all tiles but only around 1.8% are viewed. So you need at least 1TB of cache.

I am curious if this micro-optimization really makes a difference.


But it only applies to tiles that are 100% water/forest/etc., which when zoomed out mostly means oceans.


How did you find out only around 1.8% are viewed?


From an OSM wiki page. Most of the world is ocean, and not a lot of people zoom in on those parts.


Even better would be no image for the water tiles and set the background color of the container element.


Yeah I wonder why that isn't used... I can even remove the src from the image and add "background-color: #aad3de" and it looks exactly the same. I'd imagine it's also slightly faster and less memory intensive to render a static background color than to copy the data from an image.

I'm actually surprised they even use DOM nodes for this. Last I checked Google Maps uses a totally custom WebGL based renderer (since it supports 3D and such).


> Yeah I wonder why that isn't used

It's extra handling in the client; the request and traffic are still there. Saving a few bytes for extra complexity is probably not worth it.


I think they don't want the tile server API to have to think about what planet it's on and where said planet has its oceans. So every tile has to have a valid image associated with it.


By the time that will be a concern, mapping apps will have been rewritten in JavaScript++ 5 times over.


It’s been a concern for several years already. You can visit several planets and their satellites with Google Maps, at least.


That might end up looking weird with a dark-mode extension like Dark Reader.


Couldn't they just use css background-image property to load just one?


This is one example where "zopfli" or other optimized zlib compressors don't help; the input data is too simple.

"oxipng" (my current preferred png optimizer) bring a plain 256x256 image created with imagemagick down to 179 bytes, as long as you tell it to strip all metadata objects.

Interlacing doesn't make any difference, it's the same size with it on or off.


zopflipng takes the 1189-byte file to 103 bytes for me.


Convert to svg maybe?

For OpenStreetMap, 136 bytes:

<svg xmlns="http://www.w3.org/2000/svg" width="256" height="256" viewBox="0 0 256 256"><path d="M0 0h256v256H0z" fill="#aad3df"/></svg>


The PNG is only 103 bytes though. Btw, your SVG can be made smaller by changing the viewBox to "0 0 1 1", and also by changing the path to a circle (implicitly positioned at 0, 0):

  <svg xmlns="http://www.w3.org/2000/svg" width="256" height="256" viewBox="0 0 1 1"><circle r="2" fill="#aad3df"/></svg>


Depending on how you're loading the SVG in the page, the xmlns attribute may not be needed either.


The SVG could also be served with gzip compression.


  >>> import zlib
  >>> len(zlib.compress(b'<svg xmlns="http://www.w3.org/2000/svg" width="256" height="256" viewBox="0 0 256 256"><path d="M0 0h256v256H0z" fill="#aad3df"/></svg>'))
  122
Gzipping doesn't save much.


brotli helps:

  $ echo '<svg xmlns="http://www.w3.org/2000/svg" height="256" width="256" viewBox="0 0 1 1"><circle r="2" fill="#aad3df"/></svg>' | brotli -9 - | wc -c
  93


Couldn't resist trying to go further. We can use an implicit viewBox.

  $ echo '<svg xmlns="http://www.w3.org/2000/svg" height="256" width="256"><circle r="1000" fill="#aad3df"/></svg>' | brotli -9 - | wc -c
  83
r="1000" hits brotlis built-in dictionary, but if you target zlib then r="2566" is better.


You can also drop the newline character by using echo -n, which gives 82 bytes :)


Can you use a short closing tag "</>" or implicit closure?


Those are both SGML (as opposed to XML) things. Neither validates.


Doesn't the SVG scale better? I sure hope so.


For a single color? I don't think it really matters. It's all inherently lossless.


For a solid color block it doesn't really matter. For the rest of the map sure, Google Maps in normal mode (instead of classic as in the article) is vector rendering based.


But the PNG provided was 103 bytes :3

Also - pardon my ignorance, this may be a dumb question - is SVG universally supported in browsers these days? I'm not big up on image standards.


While yes, you could use SVG today, for performance reasons (not obviously for water, but for more complicated stuff like an actual city) raster files are both more efficient in terms of computation time and more consistent in rendering, with file size being the only tradeoff. Not quite a concern for computers and dedicated navi systems (in fact Google Earth and the Google Maps apps for iOS and Android* use vectors, probably not SVG though), but other embedded systems don't really have the luxury of including proper 3D or even 2D-accelerated drawing (there's a basic chip for raster rendering and relatively fast path drawing, but city maps usually contain many geometric paths that overwhelm the graphics processor).

* If you know Android Go (not Android Auto), the Maps there are raster due to hardware constraints.



Yes, it's everywhere.


An inline SVG in the parent page would trim it down to 87 bytes and replace the <img> code, so even less than 87 in practice.

<svg width="256" height="256"><rect width="100%" height="100%" fill="#aad3df" /></svg>


That looks like a very low entropy series of bytes to me.


That string gzips to 82 bytes. I thought it would be a lot smaller.


<svg xmlns="dear user agent, if you happen to run into a problem resolving ambiguities with any of the following tags, please use the namespace for Scalable Vector Graphics and not, say, one of the zero other contexts that xml ninjas will no doubt create over the next two decades of what I can only surmise will be a glorious outpouring of richness and complexity. I mean, we went to the trouble of creating a dang URL for the thing, so it's the least you could do. That is, aside from doing nothing, in which case Hixie will probably write a parser to just handle it. Best wishes, W3C. P.S. Cannot wait to see all the Javascript-driven SVG content that users will upload to social media. Look out, animated gif-- your days are numbered!"><path d="animateYourSVGArcFlagsFTW" /></svg>


Every editor adds their own XML namespace to SVG, Inkscape's being probably the most famous one.


Why use an image at all when css background-color exists?


As the article mentions, the browser requests an image and to respond with anything different would take as many bytes if not more.


This is the comment I just ctrl-f'd for: exactly?!


gzip-compressed (binary) PBM is only 56 bytes; 32 bytes for zstd-compressed data. PBMs have a very simple header and no footer, so the file is almost entirely zeroes.
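The gzip number is easy to reproduce (a Python sketch; zstd needs a third-party binding, so it's left out here):

  import gzip
  pbm = b"P4\n256 256\n" + bytes(256 * 256 // 8)  # binary PBM, all-zero bits
  print(len(pbm), len(gzip.compress(pbm)))        # 8203 raw -> ~56 gzipped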


Web browsers do not support PBM, last time I checked. On the other hand, the JPEG-XL rollout is well underway (e.g. Chrome supports it behind a feature flag).

The image is only 22 bytes as a jxl:

https://cdn.discordapp.com/attachments/286612533757083648/96...

base64 data uri version:

  data:image/jxl;base64,/wp/QCQIBgEALABLOEmIDIPCakgSBg==


Most browsers do support gzip though, sort of transparently at the connection level.

So instead of optimizing the size of your file directly, you could optimize the size of what's actually sent over the connection.

I wonder if that would give you a slightly different png or bmp or so?


Takes more time to request it from the server and load it in the browser than to download the resource :)


Anyone else sometimes find it hard to upvote? [1]

[1] https://files.littlebird.com.au/Shared-Image-2022-04-22-17-5...



