Hadn't heard of JPEG XL until you mentioned it. The format looks really cool! Support for lossless and lossy compression, animation, and tons of other new features.
Preliminary support in Firefox and Chromium nightly/testing builds already. I share your hope that we can start to use it in the next couple years. Looking at you, Safari ;)
Still sad about the demise of JNG all these years later. There was even a "light" version of the spec to address the complexity issues, and someone in bug #18574 made a reference library that took up less space than Mozilla's default libpng, and maintained their own browser branch. That branch was kept up for a few years but abandoned once it was clear Mozilla was all in on APNG.
But, yeah, we had all those things in something that was actually in major browsers, 22 years ago.
No. It only works because there are very few browser engines, which can mostly align their behaviour with each other.
Protocols with multiple implementations are way more strict, because you can't feasibly test your quirky approach against every implementation, and the chance that they will all be as forgiving is slim.
The parsing is easy, and it's done in many, many libraries outside of the browsers. (And no, parsing with regex is still not possible. Zalgo.)
The problem is not parsing HTML; it's the DOM events, CSS application, JavaScript APIs, and above all the combination thereof, all of which must render exactly the same. That's what makes it hard.
A 1 × 1 pixel black image in portable bitmap format (https://en.wikipedia.org/wiki/Netpbm#File_formats) is only 9 bytes. Consumers of such files will likely know how to scale them to 256 × 256 pixels. The trailing newline may not even be necessary; if so, it would shrink to 8 bytes.
Gray 1 × 1 pixel images in binary portable graymap format have the same size. To get a non-gray RGB color, you'll need two more bytes in binary portable pixmap format.
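For reference, a sketch of that 9-byte file as the plain-text (P1) PBM variant, where a single 1 is a black pixel (the Python here is just to show the bytes):

    # The 9-byte 1x1 black image in plain (P1) PBM: magic, dimensions, one pixel.
    pbm = b"P1\n1 1\n1\n"
    assert len(pbm) == 9      # drop the trailing newline for the 8-byte variant
    with open("black.pbm", "wb") as f:
        f.write(pbm)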
If we permit the fairly recent QOI format[0] we can produce a 1x1 transparent pixel in just 23 bytes (14 byte header, 1 byte for QOI_OP_INDEX, 8 byte end marker):
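A sketch of that layout, following the public QOI spec (the running index array starts out zeroed, so a single QOI_OP_INDEX byte pointing at slot 0 decodes to a fully transparent pixel):

    import struct

    # 14-byte header: magic, width, height, channels, colorspace (big-endian)
    header = b"qoif" + struct.pack(">IIBB", 1, 1, 4, 0)
    pixel = bytes([0x00])               # QOI_OP_INDEX, slot 0 -> (0, 0, 0, 0)
    end_marker = b"\x00" * 7 + b"\x01"  # spec-mandated stream terminator
    qoi = header + pixel + end_marker
    assert len(qoi) == 23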
[EDIT] I realized that we run into one of QOI's drawbacks if we were to encode the 103-byte PNG from the article: we'd need to repeat the pixel 65535 times, so we'd have floor(65535/62) = 1057 maxed-out QOI_OP_RUN bytes followed by one more QOI_OP_RUN for the last repeat. Here it's pretty clear that the QOI spec missed out on special handling of repeated QOI_OP_RUN operators, since long runs could have been encoded in far fewer bytes.
The first trick is simply "cut the end of the file off". This saves the adler32 checksum on the end of the zlib stream, the crc32 on the end of the IDAT chunk, and the entire IEND chunk.
This works because modern browsers have support for progressively rendering images that are still being downloaded - as a result, truncation is also handled gracefully.
However, this alone results in rendering errors - the last few rows of pixels end up missing, like this:
I don't know the precise reason for this, but I believe the parsing state machine ends up stalling too soon, so I threw some extra zeroes into the IDAT data to "flush" the state machine - but not enough to increase the file size.
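A rough sketch of just the truncation step (not the zero-padding fix), assuming a hypothetical tile.png whose data ends with a single IDAT chunk immediately followed by IEND:

    # Chop off the zlib adler32 (4 bytes), the IDAT crc32 (4 bytes) and the
    # 12-byte IEND chunk from the end of the file.
    data = open("tile.png", "rb").read()
    assert data[-8:-4] == b"IEND"   # last 12 bytes: length (0), "IEND", crc
    open("tile_truncated.png", "wb").write(data[:-(4 + 4 + 12)])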
One thing I've wondered about is the size savings for DEM tiles. Typically elevation values are encoded in RGB values, giving a resolution down to fractions of an inch [1]. This seems like overkill. With an elevation range from 0 to 8848 meters (Mt. Everest), you can use just 2 bytes and get an accuracy down to 0.2 meters. That seems plenty for many uses. Does anybody know if there's a PNG16 format where you can reduce the file size by using only 2 bytes per pixel, instead of the typical 3-byte RGB or 4-byte RGBA?
Not my area of expertise, but in my limited experience DEM tiles are usually GeoTIFF. This can be 16-bit greyscale. The catch is that these are actually signed... Elevation doesn't start at 0 meters because you have locations below sea level, and you need to handle those corner cases somehow.
What's funny is that you can parse a GeoTIFF as a .tiff most of the time but not always. I had fun debugging that :). Java's BufferedImage understandably doesn't directly support negative pixel values haha
Data from sources like GMTED2010 or SRTM15+ is often float32 or even float64. Whether it needs to be is another question, but float16 often isn't sufficient in terms of magnitude accuracy (IEEE), and as you mention you often need negative values as well, which, to cover the whole ocean surface of the earth, more than doubles the range.
To me (working in the VFX industry, where EXR is the predominant HDR format), it's interesting that something that compresses a lot better than TIFF (i.e. EXR) hasn't won out in the GIS space. I believe that's mostly momentum, plus the fact that EXR doesn't natively support 64-bit float, but then neither does TIFF really (it's an extension), and the same could be done with EXR (extend the formats it supports).
The PNG format already allows grayscale images at 16 bits per channel. I regularly use this when rendering out depth maps from Blender, and ffmpeg seems to handle reading these PNGs just fine (detecting the gray16 pixel format).
However, I don't know of any DEM tile APIs that provide these sorts of PNGs, but it sounds like a fun project!
This provides accuracy to 1/7th of a meter compared to 1/10th of a meter with Mapbox but the tile size went from 104KB -> 25KB. For applications that can ignore elevations below sea level, this is a huge savings.
Edit: the top level tile has a lot of ocean so the size savings are better than average. On a tile with no water, the savings appear to be around 50%.
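To make the accuracy arithmetic concrete, here's a toy version of the 16-bit quantization, assuming the 0-8848 m range from upthread (the function names are just for illustration):

    MAX_ELEV_M = 8848.0  # Everest, per the range discussed above

    def encode_elevation(elev_m: float) -> int:
        """Map an elevation in meters to a 16-bit grey value (0..65535)."""
        clamped = max(0.0, min(elev_m, MAX_ELEV_M))
        return round(clamped / MAX_ELEV_M * 65535)

    def decode_elevation(value: int) -> float:
        """Invert the mapping; step size is 8848 / 65535 = ~0.135 m, i.e. roughly 1/7 m."""
        return value / 65535 * MAX_ELEV_M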
Not exactly the same, but I did some work a while back retrofitting 8-bit PNGs as DEMs with non-linear elevation profiles (in the days when Mapbox only supported 8-bit DEM uploads). This allowed for fine detail in the range I was most interested in and coarser detail elsewhere. I was also working below sea level, so the standard models weren't suitable. I used GDAL and Node for the encode to PNG (from high-res GeoTIFFs) and then leant on Mapbox expressions for the custom decode on the front end. It looked cool and the file size was reasonable. Though I'm certain much cooler things are now possible with 16-bit encoding and the new terrain API.
If file size is your worry and accuracy isn't, maybe use a lossy format for tiles, like JPG or JP2, rather than PNG?
Worth noting DEMs are moving away from tiled formats recently, mainly to COG (Cloud Optimised GeoTIFF), which isn't the most efficient but is a simple tweak to a file format that's already broadly adopted. There are a few others out there aiming for efficiency at scale too - ESRI has CRF and MRF, for instance - but nothing has become an industry standard other than COG yet.
People get funny about accuracy in maps. Being able to specify accuracy, even a limited accuracy, is worth a lot. Saving bytes by reducing specified accuracy is in a lot of use-cases better than saving bytes by fuzzing data.
How? I thought it was up to the JPEG decoder how to actually decode the image into pixels. (Not that JPEG couldn't be workable in practice if some care was put into a solution.)
Decoding JPEG doesn't leave much room for interpretation, and the images should essentially always be decoded the same. Encoding is a different story: there are choices about how to downsample the image data (chrominance is often, but not necessarily, sampled at a lower rate than luminance) and about the cutoff point for which DCT coefficients to discard (usually related to the "compression" or "quality" setting of the compressor). All JPEG decoders should reproduce the same image from a given JFIF file, but I'd be surprised if different encoders produced the exact same JFIF file from a given source image.
For grayscale data there's very little ambiguity, since chroma and colorspace conversions aren't involved. Basically just rounding in DCT, for which you can make reasonable assumptions.
Moreover the JPEG XT spec (not to be confused with JPEG XL or the ton of other X-somethings JPEG made) has specified precisely how to decode the classic JPEG format, and has blessed libjpeg-turbo as the reference implementation.
> Instead of serving a 256x256px image, you can serve a 1px image and tell the browser to scale it up. Of course, if you have to put width="256px" height="256px" into your HTML that adds 30 bytes to your HTML!
CSS is a thing as well - you could just use CSS to force all tiles to the same size, regardless of the image data in them. Something like img.tile { width: 256px; height: 256px } (with whatever selector actually matches the tile images).
Was new to me, but it seems "slippy map" is OpenStreetMap's terminology for a generic zoomable-pannable web map view, here used to refer to any such UI, whether backed by OSM or Google or Bing or whoever's map data.
Feels like a weird word choice to me, when 'map' was right there, but who are we to judge.
This is a very old term used to try to describe the Google Maps interface to people who never used an octree multidirectional scrolling and zooming image collection.
It's a quadtree of sorts, but is typically done via map projection (web mercator) math so that each tile is replaced by four tiles at the next highest (more zoomed-in) zoom level.
Number of tiles to cover the world at zoom z = 4^z
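For reference, the usual web-mercator "slippy map" tile math looks roughly like this (the standard OSM tile-naming formulas, sketched in Python):

    import math

    def lat_lon_to_tile(lat_deg: float, lon_deg: float, zoom: int) -> tuple[int, int]:
        """Convert a WGS84 coordinate to (x, y) tile indices at a zoom level."""
        n = 2 ** zoom  # n x n tiles cover the world, i.e. 4**zoom tiles in total
        x = int((lon_deg + 180.0) / 360.0 * n)
        lat_rad = math.radians(lat_deg)
        y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
        return x, y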
View the map a single tile at a time, no dragging the map, no moving by less than a tile, no zooming with the mousewheel, every move and zoom a full pageload.
(You'll also notice the older maps are much higher contrast than Google Maps - the older maps being modelled on printed paper maps)
That seems expected, since XMLHttpRequest wasn't really broadly standard until 2003/4 or so. Mapquest and other incumbents didn't move fast enough to use it.
I love image formats and data packing, but was disappointed that the reveal was that maps use raster tiles. The map data is vector, why not render it as such?
This is already handled, afaict. When I zoom in on Apple Maps on a stalled data connection, I can see the sharp edges of the low-resolution vector data until the higher-resolution data is downloaded.
A lot of rendering is vector based these days and uses webgl to do it but there are still a lot of tile servers using images as well. This article is from 2015. Vector maps were less common then and webgl was a lot less mature.
For example, MapLibre is a great option for rendering vector-based OpenStreetMap maps from e.g. MapTiler or Mapbox. It can tilt the map, render buildings in 3D, do stepless zooming, etc.
Some of the newer map tile formats are vectors instead of rasters. But technology hasn't quite caught up with that yet. And some things are still better as rasters (like overlaying satellite/aerial imagery).
I mean, you could. Not a browser author or map maker, but thinking out loud.
This would be specific to "rectangles of color" - probably 24 bytes to represent height, width, and color? It seems like a Herculean effort to attempt to get browser support for such a thing, for a phenomenally rare use case. It would probably need to be an image format and not a browser implementation, since they're usually arranged around other images. And all for saving some 60 bytes per square.
To be clear, I'm not saying it's a bad idea - I'm all for it. It just seems like a pretty narrow use case (large blobs of single-color images, such as oceans in cartoon maps).
People can and have written image decoders for custom formats in Javascript [1]. This seems like the same thing but for a very domain-specific use case.
I thought so too, but is it actually the case? It's a bit more JS code, but if you pack all your JS files together you save an extra HTTP request's worth of headers, and then it's cached. For the image you've got the initial size plus headers (roughly 500 bytes), and then it's cached all the same. So if the additional JS, gzipped, is smaller than the ~100 bytes of PNG plus the ~400 bytes of headers, it might pay off.
Yeah I wonder why that isn't used... I can even remove the src from the image and add "background-color: #aad3de" and it looks exactly the same. I'd imagine it's also slightly faster and less memory intensive to render a static background color than to copy the data from an image.
I'm actually surprised they even use DOM nodes for this. Last I checked Google Maps uses a totally custom WebGL based renderer (since it supports 3D and such).
I think they don't want the tile server API to have to think about what planet it's on and where said planet has its oceans. So every tile has to have a valid image associated with it.
This is one example where "zopfli" or other optimized zlib-compatible compressors don't help: the input data is too simple.
"oxipng" (my current preferred PNG optimizer) brings a plain 256x256 image created with ImageMagick down to 179 bytes, as long as you tell it to strip all metadata chunks.
Interlacing doesn't make any difference, it's the same size with it on or off.
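As a rough illustration of why: the filtered scanline data for a solid-color 256x256 truecolor tile (using the #aad3de ocean color mentioned elsewhere in the thread) already deflates to next to nothing with plain zlib, so there's little left for a fancier compressor to squeeze out:

    import zlib

    # 256 scanlines, each: filter byte 0 followed by 256 identical RGB pixels
    scanline = b"\x00" + b"\xaa\xd3\xde" * 256
    raw = scanline * 256
    print(len(raw), "->", len(zlib.compress(raw, 9)), "bytes")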
The PNG is only 103 bytes though. Btw, your SVG can be shrunk by changing the viewBox to 0 0 1 1, and also by changing the path to a circle (implicitly positioned at 0, 0).
For a solid color block it doesn't really matter. For the rest of the map sure, Google Maps in normal mode (instead of classic as in the article) is vector rendering based.
While yes, you could use SVG today, for performance reasons (not so much for water, but for more complicated content like an actual city) raster tiles are both more efficient in terms of computation time and more consistent in rendering, with file size as the only real tradeoff. That's not much of a concern for computers and dedicated navi systems (in fact Google Earth and the Google Maps apps for iOS and Android* use vectors, probably not SVG though), but other embedded systems don't really have the luxury of proper 3D or even 2D-accelerated drawing (there's a basic chip for raster rendering and relatively fast path drawing, but city maps usually contain so many geometric paths that they overwhelm the graphics processor).
* If you know Android Go (not Android Auto), the Maps there are raster due to hardware constraints.
<svg xmlns="dear user agent, if you happen to run into a problem resolving ambiguities with any of the following tags, please use the namespace for Scalable Vector Graphics and not, say, one of the zero other contexts that xml ninjas will no doubt create over the next two decades of what I can only surmise will be a glorious outpouring of richness and complexity. I mean, we went to the trouble of creating a dang URL for the thing, so it's the least you could do. That is, aside from doing nothing, in which case Hixie will probably write a parser to just handle it. Best wishes, W3C. P.S. Cannot wait to see all the Javascript-driven SVG content that users will upload to social media. Look out, animated gif-- your days are numbered!"><path d="animateYourSVGArcFlagsFTW" /></svg>
A gzip-compressed (binary) PBM is only 56 bytes; 32 bytes with zstd. PBMs have a very simple header and no footer, so the file is almost entirely zeroes.
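Roughly reproducible with a hand-built solid 256x256 binary PBM (exact byte counts will vary a little with compressor settings):

    import gzip

    # Binary (P4) PBM: tiny text header, then 256*256 packed 1-bit pixels = 8192 zero bytes
    pbm = b"P4\n256 256\n" + bytes(256 * 256 // 8)
    print(len(pbm), "->", len(gzip.compress(pbm, 9)), "bytes gzipped")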
Web browsers do not support pbm, last time I checked. On the other hand, the JPEG-XL rollout is well underway (e.g. Chrome supports it behind a feature flag).
https://cdn.discordapp.com/attachments/286612533757083648/96...
Although technically invalid, it still renders fine in Firefox, Chrome, and Safari. Edit: 87 -> 83 bytes.
Edit2: Maybe in a couple of years time, we can use JPEG-XL instead (only 22 bytes, without any hacks!):