Image file formats that didn’t make it (tedium.co)
166 points by ingve on Nov 10, 2021 | 207 comments


TIFF is very much not dead. It is likely one of the more widely used image formats today. Terabytes of new TIFF images are produced every hour and distributed by many remote sensing operators.

It is also a really flexible and robust format. How else are you going to store a floating-point multispectral image of 8 bands and 50,000 x 50,000 pixels, arranged tile-wise for easy cropping?
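For a sense of how little ceremony that takes, here's a rough sketch using libtiff in C; the tile size, photometric tag, and single-tile write are illustrative placeholders, not a complete program:

    #include <tiffio.h>

    /* Sketch: an 8-band, 32-bit float, tiled TIFF via libtiff.
       Error handling and the loop over all tiles are omitted. */
    void write_multispectral(const char *path, void *one_tile)
    {
        TIFF *tif = TIFFOpen(path, "w");
        TIFFSetField(tif, TIFFTAG_IMAGEWIDTH,  50000);
        TIFFSetField(tif, TIFFTAG_IMAGELENGTH, 50000);
        TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 8);          /* 8 bands */
        TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 32);
        TIFFSetField(tif, TIFFTAG_SAMPLEFORMAT, SAMPLEFORMAT_IEEEFP);
        TIFFSetField(tif, TIFFTAG_TILEWIDTH,  256);             /* tiled layout */
        TIFFSetField(tif, TIFFTAG_TILELENGTH, 256);
        TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
        TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISBLACK);
        TIFFWriteTile(tif, one_tile, 0, 0, 0, 0);  /* tile whose corner is (0,0) */
        TIFFClose(tif);
    }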


AKA: Thousands of Image File Formats.

I still remember trying to explain to a previous employer why I needed Debabelizer to create appropriately-striped TIFFs in addition to Photoshop. In hindsight, the me of today would have said then, "I just need it to do this job, which will make you far more money than it costs to buy a single license." The me of then wasn't quite as savvy.


I regularly outline how much engineering time could be saved with some minor capital expenses. They choose the longer and more expensive route every time.


Oh, my. DeBabelizer was an extremely powerful and extremely eccentric tool. Weird, weird, weird. But powerful.


That was my reaction. To be fair author says: "Of the formats listed here, TIFF is probably the one most likely to still be in wide use, but it has evolved into a more specialized format for professionals, in comparison to something like JPG."


TGA files still show up in video game textures surprisingly often.


That's probably because of how simple the format can be, with basically a header and footer tacked on to some RGBA data.

Surprisingly, with PNG, the deflate compression can slow you down, depending on what you are doing with the data. Saving a little bit of IO when you're accessing local data can turn out to be a losing proposition. TGA makes it easier to just shove pixels around, from program to program. TGA has all sorts of weird options which you can ignore, but with PNG, compression is mandatory.


I believe PNG supports uncompressed files, usually with a compression level = 0.


The data is still a Deflate stream, it's just that the deflate stream will be a series of literal blocks. The idea behind "uncompressed" is that you can just copy the data into memory, and level = 0 does not achieve that (you still have to decode the deflate stream).
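To make that concrete, here's a minimal sketch of the stored-block framing in C (the zlib wrapper and Adler-32 that PNG's IDAT also requires are omitted):

    #include <stdint.h>
    #include <stdio.h>

    /* Frame `len` bytes as stored (BTYPE=00) deflate blocks. Even in the
       "uncompressed" case, a reader still has to walk these 5-byte block
       headers rather than just copy the data. */
    static void put_stored_blocks(FILE *f, const uint8_t *data, size_t len)
    {
        do {
            uint16_t n = len > 65535 ? 65535 : (uint16_t)len;
            fputc(len == n ? 1 : 0, f);   /* BFINAL=1 on the last block, BTYPE=00 */
            fputc(n & 0xFF, f);  fputc(n >> 8, f);           /* LEN, little-endian */
            fputc(~n & 0xFF, f); fputc((~n >> 8) & 0xFF, f); /* NLEN = ~LEN */
            fwrite(data, 1, n, f);
            data += n; len -= n;
        } while (len > 0);
    }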


... what decoding needs to be done on a series of literal blocks?


Not only that, but you can use TGA to write 16-bits per channel, which is very useful for raw data, or linear RGB colour spaces.


Does bitmap not tick all of these boxes?


I’ve never seen a 16 bits per channel BMP… is that even possible?


Half-Life is a little old but I wrote a parser for the .tga files it uses and it was simple (but reading the full spec... yeah it can get complicated with all the different flags).


When I worked on Call of Duty (~7 years ago) almost all of our source images were TGA. The runtime format that almost all games use are various block compressed formats (BC7, ASTC, etc) and usually these are grouped in some kind of package/archive format (although engines are moving away from that in order to stream at the asset level) and lz4 compressed on disk.


TIFFs are uncompressed or compressed with ZIP or LZW, which is lossless. JPEGs are lossy and can't possibly be lossless. Pitting one as an alternative to the other is apples and oranges. When I read the sentence you quoted, I figured that the author is either lacking knowledge of the topic they're blogging about or they are trolling.


TIFF is more like a container format that can contain a ton of different formats including many lossy formats. That also includes not just one but two different ways of embedding JPEG data (Compression 6 vs 7). See e.g. https://www.awaresystems.be/imaging/tiff/tifftags/compressio...


> JPEGs are lossy and can't possibly be lossless.

https://jpeg.org/jpegxl/


File formats exist that have JPEG in the name, that have certain features JPEG does not, and that cannot be decoded by all of these programs that can decode JPEG files. A JPEG XL file, for all intents and purposes, is not a JPEG file.

JPEG is a file format from decades ago. Obviously it’s possible to do better today. But it’s not possible to just invent a new format and have it as widely usable as JPEG. PNG did this with GIF but really only because GIF was seriously hobbled by software patents. It’s doubtful the millions of colors and alpha transparency would have won over compatibility otherwise.


For quite some time TIFF was the default format for screenshots on OS X, which was probably one of its more prominent uses by the general public. During that period it was also pretty common for developers to use TIFF for application assets like toolbar icons and splash screens.

At some point that all shifted over to PNG, but macOS support for TIFF remains good to this day.


It’s because TIFF and Postscript are friends.

Source: NeXTStep ;-)


I'll never pass up the opportunity to add random NeXT trivia -- Grab.app is one of several applications that came from NeXTSTEP/OpenSTEP with relatively few modifications, along with Preview.app, Terminal.app, TextEdit.app, Chess.app, and probably many others.

For anyone with curiosity and time to burn, comparing early OS X with NeXT/OpenSTEP in VMs is fun, just to see what changed and what stayed. Some parts are extremely different, while some software seemed to be copy-pasted over, while others clearly had a lot of polish put into them for the consumer market. Fun times!


Preview existed in OpenStep?


Hmm, I don't remember TIFF for screenshots. It was PDF (perhaps PDF-wrapped TIFF?) ca. 2002–4, which is not too long after the first usable OS X.


GeoTIFFs are the go-to standard in GIS integration of any property map. Photoshop works really well with TIFFs.


Was going to say the same thing; GeoTIFFs are common. But they are a "specialist" format in the way the article is talking about.

Why is GeoPNG not a thing? PNG has a chunk-based design that allows all sorts of metadata storage. It'd be pretty easy to translate GeoTIFF's GIS stuff into PNG. Has that been done?


I asked some GIS experts on Twitter and GeoPNG is not a thing. Most folks seem to think GeoTIFF does just fine so there's no need for it. TIFF's support for more than 3 channels of data is a big help for multispectral remote sensing.


TIFF is a recommended format for digitised documents by the American Library of Congress. https://www.loc.gov/preservation/resources/rfs/stillimg.html

Also, the DNG format (a raw format) is derived from TIFF.


Yes, TIFF was (probably still is) the standard for capturing X-ray detector images with tons of channels, weird numbers of bits per channel, and custom metadata. I actually wrote a TIFF decoder in my first job. It is a crazy format though - "bi-endian" in that you have a choice of endianness.
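For illustration, the "bi-endian" choice is settled in the first two bytes; a minimal sketch in C of the header sniff every TIFF decoder starts with:

    #include <stdint.h>
    #include <stdio.h>

    /* TIFF header: "II" = little-endian, "MM" = big-endian, then the
       magic number 42 in that byte order, then the offset of the first
       IFD (the directory of tags). Returns 0 on success. */
    int read_tiff_header(FILE *f, int *big_endian, uint32_t *ifd_offset)
    {
        uint8_t h[8];
        if (fread(h, 1, 8, f) != 8) return -1;
        if      (h[0] == 'I' && h[1] == 'I') *big_endian = 0;
        else if (h[0] == 'M' && h[1] == 'M') *big_endian = 1;
        else return -1;
        uint16_t magic = *big_endian ? (h[2] << 8) | h[3]
                                     : (h[3] << 8) | h[2];
        if (magic != 42) return -1;
        *ifd_offset = *big_endian
            ? ((uint32_t)h[4] << 24) | (h[5] << 16) | (h[6] << 8) | h[7]
            : ((uint32_t)h[7] << 24) | (h[6] << 16) | (h[5] << 8) | h[4];
        return 0;
    }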


Yeah, I was confused when I saw TIFF in the list. I get email attachments with TIFFs occasionally through my job, and I know TIFF is still common in other fields. It is possible the author works in a field that doesn't encounter TIFF much and went with the assumption that it is a forgotten format.


When I was doing imaging, TIFF was my go-to. It's uncompressed, so huge images would pop open quickly (compared to lossless PNG). I could store 16-bit CMYK images in TIFF, which no other format (except maybe PSD) would do.

Several years ago, I attended a talk at an imaging.org conference where the presenter, an archivist from the Smithsonian, described how they used tiff in their work.


TIFF also supports optional compression (usually LZW or Zip).


TIFF is wonderful because it's so optimized for performance. When you get the tiling lined up just exactly right with your storage device block/page/stripe size and your CPU cache sizes, it just screams.


I think all the RAW formats generated by digital cameras are really just slightly customized TIFFs.


Really? Slightly customized in what way? Would a tiff reader be able to decode them?


As far as I’m aware, all the tags (the T in TIFF) are in standard TIFF format. But your generic TIFF reader might have some trouble getting the image data (i.e., the pixels).


For sure, you'd need de-Bayering for a start. And stuff like white balance will be in a custom tag.


Don't forget having multiple images in one file - a 200,000 x 200,000 pixel main tiled, pyramidal JPEG- or JP2-encoded image, thumbnail, label, overview and barcode. Next to the standardized metadata like resolution there's vendor-specific XML with descriptions of channels, wavelengths, antibodies and complete provenance. With libtiff and libtiff-tools to help you make sense of it.


I remember dealing with massive (for the time) TIFF files back in the 90s. I had some full-page B/W scans at 1200 dpi which, converted into PostScript, turned out to be virtually impossible to convey to my service bureau for outputting to film. I ended up having to send camera-ready copy to the printers for them to photograph and strip into the plates in order to meet my press deadline. CD-R was still something in the future and I ended up having to support about 3 or 4 removable media drives to deal with incoming files from my authors.


TIFF is still used in forensics/evidence processing by law enforcement agencies world-wide.

Seeing it on this list makes me think the author didn't dig very deep when writing this article.


The OpenEXR format, from the film industry, can also store an image with that configuration.


True, although TIFF (often used with GeoTiff) can also store 64-bit float (double), which OpenEXR doesn't support.


In this day and age I'd rather use the .npy file format used to save numpy arrays. It's trivial to read and write from C (and any other language) without any dependencies. The openexr api always seemed a bit cumbersome to me.


Scanners often have TIFF or multipage TIFF as one of their delivery formats.


Writing tiff files is faster than any of the compressed formats.

That's why I often use tiff to save images initially, and then compress in the background.


Hmm; TIFF is still the default universally-readable format for many lossless purposes. It also seems to be used automatically in places: e.g. in Lightroom, clicking "External Editor" sends a TIF file to the external editor and imports it back by default. I thought it was used in some scientific/analysis scenarios as well.

I thought I also saw BMP in a lot of games for some reason (not to mention it's the default output of the default image program on the effectively default consumer PC OS :)


GeoTIFF is an interesting extension of TIFF that seems to serve a niche that other image formats cannot. It doesn't have the compression of say WebP or PNG, but the fact that it can have rich geographical metadata (projection, coordinates, etc.) makes it incredibly useful for GIS applications.


GeoTIFF does have internal compression support. In fact you can compress your GeoTIFF internally using WebP [0].

[0]: https://gdal.org/drivers/raster/cog.html#general-creation-op...


Also DICOM for medical imaging is afaik based on TIFF


DICOM is a container. It can contain any format including jpeg, tiff, png etc.


TIFF is also just a container; it can contain LZW-compressed data or data with some lossy compression.


I think bitmaps are often used in 2d games for level definition or various other sorts of mapping (height, for 2.5d games, et c.)


Bitmap is the easiest image file format in the world to read/write and the easiest and sometimes fastest to shoot over to any video device in the history of computing. Almost every game making tutorial going back to the 80s has "how to read/write" bitmaps usually in it somewhere.

Even 3D games will sometimes use them in a lot of odd places because it was faster/easier/took fewer cycles to include an uncompressed or simple run-length encoded bitmap than to try to store a smarter image format and need time to decompress it. (Though yes modern GPUs often support image decompression directly for certain texture file formats and that's shifted a lot.)
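As a rough illustration of how little a writer needs, here's a sketch of a minimal 24-bit BMP writer in C (only the mandatory header fields, BI_RGB, no error handling; `pixels` is assumed to be top-down, tightly packed RGB):

    #include <stdint.h>
    #include <stdio.h>

    static void put_u16(FILE *f, uint16_t v) { fputc(v & 0xFF, f); fputc(v >> 8, f); }
    static void put_u32(FILE *f, uint32_t v) { for (int i = 0; i < 4; i++) fputc((v >> (8 * i)) & 0xFF, f); }

    /* Handles the three classic gotchas: BGR byte order, rows padded
       to 4 bytes, and bottom-up row order. */
    void write_bmp(FILE *f, const uint8_t *pixels, int w, int h)
    {
        uint32_t row = ((uint32_t)w * 3 + 3) & ~3u;          /* padded row size */
        put_u16(f, 0x4D42);                                  /* 'BM' */
        put_u32(f, 14 + 40 + row * h); put_u32(f, 0); put_u32(f, 14 + 40);
        put_u32(f, 40); put_u32(f, w); put_u32(f, h);        /* positive h = bottom-up */
        put_u16(f, 1);  put_u16(f, 24);                      /* 1 plane, 24 bpp */
        put_u32(f, 0);  put_u32(f, row * h);                 /* BI_RGB, image size */
        put_u32(f, 2835); put_u32(f, 2835); put_u32(f, 0); put_u32(f, 0);
        for (int y = h - 1; y >= 0; y--) {                   /* bottom row first */
            for (int x = 0; x < w; x++) {
                const uint8_t *p = pixels + (y * w + x) * 3;
                fputc(p[2], f); fputc(p[1], f); fputc(p[0], f);   /* BGR */
            }
            for (uint32_t pad = (uint32_t)w * 3; pad < row; pad++) fputc(0, f);
        }
    }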


> Bitmap is the easiest image file format in the world to read/write

When making a quick hack like a toy ray tracer or Mandelbrot generator in C or C++, it's traditional to use .ppm [0] because it's trivial to write out pixel values in ASCII.

[0] https://en.wikipedia.org/wiki/Netpbm#File_formats


> modern GPUs often support image decompression directly for certain texture file formats

That works great for compressed textures like DDS or ASTC.

Unfortunately, the hardware decoders for proper codecs like jpeg or h.264 typically come with APIs prohibitively complicated to use, unless the app is a video player and needs to consume these APIs anyway, despite the complexity.


That's why some games ship uncompressed audio files, which balloons the game's size. If the game has a lot of audio, it can be best to use uncompressed audio so the CPU doesn't have to spend extra resources decoding a bunch of audio files at the same time when the game asks for them.


This is a small pet peeve of mine. Even though PCs have mostly lost dedicated "sound cards" they still have dedicated hardware for sound decompression in their SoC packages. The big reason I think games ship so much uncompressed audio has less to do with hardware support than with the fact that games and general consumers "forked" onto different audio compression paths. Most consumer audio has been MP3 for decades, while most game audio went to patent-unencumbered formats like Ogg Vorbis. Everyone has MP3 hardware decoding and almost no one has Ogg Vorbis in hardware. Even with the MP3 patents expired, I think game companies just forget it exists as a well-supported lowest-common-denominator format with a wealth of hardware support. Sure, MP3 is a lossy compression format, but the specific losses in MP3 would hardly matter in most game soundscapes (and Ogg Vorbis is similarly lossy, though "better" in double-blind studies of single audio files).

(Some of that I think is also political/contractual in the way that using MP3 would give some additional types of PC players "free soundtracks" while using other compression formats is a bit of security through obscurity and raises the bar slightly for PC players that don't, for instance, know what Ogg Vorbis even is.)


> ... they still have dedicated hardware for sound decompression in their SoC packages

Do they?

Even if they do, it's typically useless for an application like a game, because the game will need to filter and/or mix that audio in ways that the hardware probably can't handle on its own. (Sure, maybe your hardware can decode one MP3 at a time, but can it decode two dozen MP3s at once, apply a bunch of environmental filters to them, and mix them down to a single stereo output? Probably not.)


> dedicated hardware for sound decompression

Do they? Most of "modern audio" is just software decoding/mixing to a pretty not-smart-Digital-to-Analog Converter.

My educated guess is that most compressed audio formats are optimized for size, not for low latency recording/playback.

Also one of the reasons why Opus is so good and is replacing lots of older formats.


Even though CPUs have hardware support for decoding, there are factors that can affect how much the computer can handle.

Games have to deal with multiple layers of audio at the same time. Imagine one sound file holds the dialogue for a specific scene, another holds the ambient music that complements the surroundings, another holds sound effects, and so on. So the game may have to handle maybe 10 audio files at the same time, which can put a strain on the CPU since it needs to handle the decoding of the compressed format. When that occurs, it eats up CPU resources that should be allocated to other stuff. Also, don't forget some games have 5.1, 7.1, Dolby variants, etc., which can be taxing on the CPU if it has to process multiple compressed audio streams at the same time.


The other reason game devs use Ogg Vorbis over MP3 is because Ogg Vorbis supports gapless looping and MP3 doesn't.


TGA is easier because the rows can go top-to-bottom (there's an origin bit in the header for it), you don't have BMP's header structures, and you don't have to pad each row to 4 bytes. I still copy and paste a little TGA encoder into my graphics code when I need to dump something out for debugging.
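That sort of copy-paste encoder is tiny; a hedged sketch in C (assuming top-down RGBA input):

    #include <stdint.h>
    #include <stdio.h>

    /* 18-byte header, then raw BGRA. Bit 5 of the descriptor byte (0x20)
       flips the origin to top-left so rows can be written top-to-bottom. */
    void dump_tga(const char *path, const uint8_t *pixels, int w, int h)
    {
        uint8_t hdr[18] = {0};
        hdr[2]  = 2;                         /* uncompressed truecolor */
        hdr[12] = w & 0xFF; hdr[13] = (w >> 8) & 0xFF;
        hdr[14] = h & 0xFF; hdr[15] = (h >> 8) & 0xFF;
        hdr[16] = 32;                        /* bits per pixel */
        hdr[17] = 0x20 | 8;                  /* top-left origin, 8 alpha bits */
        FILE *f = fopen(path, "wb");
        if (!f) return;
        fwrite(hdr, 1, 18, f);
        for (int i = 0; i < w * h; i++) {    /* RGBA -> BGRA swap */
            const uint8_t *p = pixels + i * 4;
            uint8_t bgra[4] = { p[2], p[1], p[0], p[3] };
            fwrite(bgra, 1, 4, f);
        }
        fclose(f);
    }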


They are very much still in use in at least one government.


PCX!

It's such a simple format. I remember reading about it as a kid, punching in some code to read the header, then the data, then the palette, having an image appear before my eyes. Fixing the bugs in my code made the image appear correct, what a great intro to debugging and seeing the results.

It seemed like magic at the time: taking this binary data that was gibberish when looking at it with EDIT.COM and making it appear on the VGA screen.

https://nambafa.com/tutorials-computer-graphics.php?series=d...


Right there with you (:

I wrote a PCX reader in QBasic. I can still remember seeing that image of a red rose finally appear in all its 320x200 pixelated glory.


Yeah, a PCX image reader was one of my first programming accomplishments. I wish I still had the code somewhere (same with the Mandelbrot fractal generator I had back then).


Yep. In fact, I've been planning to write a PCX codec for my retrogaming/gfx project. The RLE scheme is simple but still reasonably effective.
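For reference, that RLE scheme sketched in C (runs don't cross scanline boundaries in real PCX files; that bookkeeping is omitted here):

    #include <stddef.h>
    #include <stdint.h>

    /* If the top two bits of a byte are set, the low six bits are a run
       count and the next byte is the value to repeat; otherwise the byte
       is a literal. Returns the number of source bytes consumed. */
    size_t pcx_rle_decode(const uint8_t *src, size_t src_len,
                          uint8_t *dst, size_t dst_len)
    {
        size_t i = 0, o = 0;
        while (i < src_len && o < dst_len) {
            uint8_t c = src[i++];
            if ((c & 0xC0) == 0xC0) {        /* run */
                uint8_t count = c & 0x3F;
                if (i >= src_len) break;
                uint8_t value = src[i++];
                while (count-- && o < dst_len) dst[o++] = value;
            } else {
                dst[o++] = c;                /* literal (values below 0xC0) */
            }
        }
        return i;
    }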


JPEG2000 is a big one that seems to be missing. It is used in medical imaging or something, because laws? But its visual quality is worse than old JPEG, despite a PSNR advantage. Apparently, compression researchers used to think that people care about PSNR.


JPEG-2000 was held back by intellectual property restrictions: patents but also the fact that the spec cost $1,500. A lot of the parties involved appeared to think that the technical merits made adoption inevitable and so they didn't spend time making it appealing with things like a test suite or high-quality open source implementation.

That had several impacts which scarred a lot of people I've worked with:

1. The commercial implementations were not interoperable and people get VERY nervous when they periodically get images which won't open correctly in one tool or, worse, have defects which one tool hides.

2. Many open source tools either didn't support the format at all or used Jasper, which just didn't get enough support to be fast or compliant.

3. Performance was quite slow for a while and people who weren't starting with Photoshop or native Mac apps (ImageKit uses the Kakadu codec, which has decent performance) were probably hitting Jasper or the Java implementation, both of which were orders of magnitude slower. People will forgive a certain performance hit for better compression but, especially as network bandwidth and storage capacity went up so much, the savings just weren't worth it if you could transfer and decode a JPEG in less time than it took just to decode the JP2.

4. All of that combined to mean browsers never really implemented it, other than Safari which picked it up via the standard image framework. The complexity meant that getting acceptable performance and security would be expensive and the demand just wasn't there.

5. The complexity of the format and low encoding performance meant that many JP2s were not encoded with optimized settings, which reduced the benefits to supporting it.

This makes me a bit sad because technically it was the only format for a long time which supported a wide range of colorspaces, bit depths, etc. and the way you can progressively decode a stream would have been a really neat option for responsive images in HTML — imagine if all your srcset had to do was say “Range-request bytes 1-x for 512x512, 1-y for 1024x1024, …” and the server + CDN could host & cache a single file for every client.

There are certain scenarios where the progressive / tiled decoding can make up for the friction (medical imaging and archival storage in libraries/archives) but they're not widespread enough to establish an entire image format. The open source situation has never been better with OpenJPEG but there just doesn't seem to be a high likelihood of significant increases in demand at this point.


The patent situation was exacerbated by the fact that JPEG2000 started getting pushed right about the time the GIF patent situation blew up, so everybody was reluctant to have a repeat disaster.

Add in the fact that JPEG2000 showed at best a marginal improvement over JPEG and it's not hard to just stick with the proven technology. Having no good reference implementation is just a nail in the coffin.


I would describe JPEG-2000 as more than a marginal improvement over JPEG: it's one format which supports lossless and lossy compression, it compresses a good bit better and the artifacts are less visually distracting, has better progressive streaming, and full support for variable bit depth and colorspaces.

If you look at it technically, it'll win on just about every point. Unfortunately for the standard, most of the people involved assumed that was enough to make adoption inevitable. I think it could have gone down a different path if, say, someone had released a high-quality open source version or worked with e.g. Netscape/Mozilla to integrate it into web browsers but that would have basically meant scaling back the older dream of making a profitable business around a single component like an image codec.


But compression times were atrocious and the decoder was not as fast as the JPEG one (at the time).

Marginal support from tools and browsers didn't help boost its popularity, and the patent situation, as pointed out in the parent comment, just shut the coffin on this otherwise OK format.

Vanilla JPEG had compression artifacts and a worse compression rate, but it was good enough at the time; even now, because of its ubiquity, the "successors" are still having a hard time dethroning JPEG.


Exactly: it seems like a classic “Worse is better” situation where JPEG 2000 is basically better across the board on features but none of that mattered as much as being widespread and fast enough. Something like TIFF or BMP wouldn't have been able to do the same since the file sizes are so much larger that they'd have had a noticeable impact on usability but JPEG was far over the “good enough” bar for most users.


I think Facebook even had a backlash when it tried to switch to WebP for serving images.


Yes - anything which breaks “save as” is going to hit backlash. However, as a counter example Apple’s HEIC deployment has been smoother since it works for users when it transparently transcodes shares outside of the ecosystem with HEIC support.


I've always been curious about the distinction between JPEG and JPEG2000. From what I can find online, everyone says that the visual quality is better than JPEG, it just flopped due to slow adoption. Could you elaborate on your claim that it's worse?


The technology is very different. Both translate the image into a representation where it's easier to remove "unimportant" data. JPEG is based on discrete cosine transform (DCT), JPEG2000 on wavelet transform.

Visually, where JPEG produces ringing artifacts, JPEG2000 produces blurriness. JPEG artifacts look "sparkly" and are often a reasonable replacement for the original texture, JPEG2000 artifacts look offensively dull (my opinion). I think, in theoretical speak, it's called "JPEG better preserves high frequency energy". You should be able to find side by side comparisons using your favorite image search engine.


I've found a nice example. Compare the lawn, the part above the arc (especially on the right) and the uppermost window on the two 0.17 MB images. There are parts that look like the JPEG2000 encoder inserted big blotches where details should be. JPEG2000 seems to be better at large, gentle gradients and strong discontinuities between solid colored regions. https://www.imagepdf.com/images/jpeg2000.jpg


My understanding is that JPEG2000 never really demonstrated, in experiment, that it was better than JPEG at the types of images and use cases which JPEG was already designed for (photographic images, lossy compression). This use case is such a dominant use case for the web. If you wanted similar quality levels for JPEG and JPEG2000, you often got similar file sizes. I could dig up some of the tests, but the basic gist of it is that at lower file sizes, JPEG2000 would get blurry and JPEG would get that funky blocky noise it gets.

This is the result of actual experiments with human subjects judging image quality.

I haven't really heard claims that JPEG2000 is better, actually. I remember people saying that it was "supposed" to be better, but not by people who dug into it and made comparisons with human eyeballs.

There were some images that would appear visibly better with JPEG2000, like photographs with those beautiful fields of defocused color. JPEG2000 captured those fields of color with all the smoothness they originally had.


Thanks to the power of internet we do not need to speculate; here is a comparison between image formats, JPEG2000 being represented by "OPENJPEG": https://wyohknott.github.io/image-formats-comparison/#sking*...

Subjectively I think JPEG2000 is better across the different images, at least at this "medium" size preset.


IMO, they are closer than in other comparisons except you can still see some of the usual problems: e.g. in the "Cecret [sic] Lake" image on Medium: with JPEG there is a hint of ripple on the lake and most of the raggedness of the edges of the clouds is still there. With JPEG2000, the reflections on the lake turn into blotches and clouds have soft edges, even missing some large and obvious details. For some reason, it's quite disturbing to me. OTOH, JPEG2000 does a surprisingly very good job on the bushes. Maybe JPEG2000 could be actually good by allocating a little more bitrate to low contrast areas.


I should also point out that mozjpeg is considerably better than jpeg compressors that were around when JPEG2000 was introduced. I don't know if openjpeg is similarly improved.


I actually prefer JPEG, esp. at small sizes. – Thanks for the link!


Many of those proprietary wavelet compression formats seem to be missing. I remember that around 2000 I recompressed my images every other week after finding yet another file format that could save me some disk space. Cannot even remember their names...


JPEG2000 is not even proprietary (being from JPEG and all), it failed anyway. Of course, the patent situation may have been a problem.


It also used to form the basis for Redcode, the file format used by RED digital cinema cameras. Not sure if the current version is still based on JPEG2000 though.


AFAIK, digital cinema packages (DCP) used for shipping movies to theaters and showing them on digital projectors are still based on JPEG2000 image sequences.


Is there a reason h264 wasn't used? It has been widely available since at least 2006 and supports all kinds of resolutions, etc.


jpeg2000 has a number of positive traits:

* The industry doesn't want you to see compression artifacts, ever. Digital Cinema has no inter-frame compression, it's encoded as a series of individual jpeg2k images at a pretty reasonable quality.

* If one frame has a glitch, the glitch doesn't propagate over a bunch of frames the way x264 style compression does.

* jpeg2k at too-low a quality doesn't have jpeg/mpeg style square artifacts—instead certain parts of the picture will look slightly blurrier.

* jpeg2k has an interesting property where a 4k image can be encoded (with the proper parameters) such that when decoding you can stop at some point midway through a single frame's data and have the equivalent encoded 2k image. Which means you can encode for 4k and ship it to theaters that only have 2k projectors (and that 2k hardware isn't required to render to a 4k framebuffer and then downsample).

* Since it's more modern than jpeg was (and dcinema was standardized before something like webp came along), it has a lower bytes/quality ratio than jpeg. I.e., to achieve an equivalent-quality image from jpeg you'd need a lot more data.

* Support for deeper color depths and a much wider gamut. This is of course limited by what a projector can recreate, but they didn't want to limit to rec 709 or something.

They didn't care about patent encumbrance because they didn't need to push this out to consumers—the jpeg2k patent stuff only affects the decoders and encoders, both of which are owned by people who can afford to pay.


I believe it is something to do with bit depths. JPEG2000 can support up to 16 bits per channel whereas H.264 generally supports only up to 10. So H.264 at 8/10 bits could produce noticeable artifacts when used on a big screen, while JPEG2000's higher bit depth alleviates the artifacts since there is extra room for more information.

Here's another article [1] I found recently that discusses why movie theaters use JPEG2000 over popular video formats.

[1] https://www.design-reuse.com/articles/4595/digital-cinema-re...


Digital cinema uses a kind of Motion-JPEG2000. The encoding of the coefficients is so convoluted that cinema projector systems require custom hardware to do 2K decoding at full speed. The only software algorithm that could decode it at comparable speed was patented by someone involved with the format; otherwise, software decoding is very slow compared to other formats.

JPEG2000 is also sometimes used to encode images embedded in PDFs.


> JPEG2000 is also sometimes used to encode images embedded in PDFs.

Interesting. I thought PDF normally converts images into .jbig2 for embedding into the PDF itself.


PDF doesn't encode anything itself. PDF supports a few data formats; they are called "filters": for example CCITT Fax, JBIG2, DCT (i.e. JPEG), RLE, etc. It's up to the PDF creation program to pick the best data format for the image it would like to put into the PDF.


Isn't DICOM a popular image format for medical imaging too? Or does DICOM use JPEG2000 under the hood?


In DICOM you have the notion of a transfer syntax, so you can choose your compression scheme (from a list of DICOM-supported transfer syntaxes), such as JPEG2000. You can also just leave the images uncompressed. That being said, any images of sufficient size are very likely to be compressed, J2K being one of the most popular.

IMO, the flexibility provided by the container is not worth the development/support effort. It's better to do one or two things really well rather than have hundreds of edge cases.


I worked on a library that had to process PDFs and grabbed a bunch of PDFs from archive.org and other open sources for testing, and as I recall, about 2% of them had at least one JPEG2000 image. So if you're in the PDF space it's probably still worth considering.


If you need to generate thumbnails from PDFs, you must handle JPEG2000 and JBIG.


I wish the author provided more information about the strangeness of some of these formats and why they're now obsolete. PCX being one of the oddest because it's ordered by planes. So the red, green, and blue components are split.

What this list is missing is WMF/EMF which were basically direct serializations of Windows drawing primitives.

> BMP files are usually not compressed and, therefore, are not well suited for transfer across the Internet

When BMP files were compressed they'd typically be given the extension .RLE.


Most of the IFF images produced on the Amiga were also ordered by bitplanes, but that's bitplanes of an n-bit number for each pixel, referencing a colour lookup table.


EMF is still widely used. Word no longer imports .eps files as of about 4 years ago, so .emf is the next best option when generating plots in MATLAB or python. PDF embedding would be a dream, but that really doesn't work. svg is also reasonable, but word handles emf better.

I use LaTeX for my personal stuff, but employers don't touch such things with 20 foot poles.


The Amiga (EA) IFF format was adopted by Microsoft as RIFF (essentially the same format but with little-endian numbers for x86), which is the basis of WAV and AVI, among others.


IFF (Interchange File Format) is a container format that can contain different types of data. ILBM (InterLeaved BitMap) was the image encoding inside the IFF.

The Amiga had bitplane-based graphics. The "interleaved" part was that an image was stored line first, bitplanes second, instead of one whole bitplane after another. Games often used the "interleaved" graphics layout in memory because it allowed blitting a sprite with a single Blitter operation. (... and EA, who made DPaint, was a games studio first.)


> Today’s Tedium discusses 10 image formats that time forgot.

> (list includes BMP, TIFF, TGA)

I remember seeing lots of software using Quicktime PICT files, even on Windows, but I've never seen anyone break it down (Wikipedia does a bit[0]). Did everyone mostly use it as a container for JPEGs?

[0] https://en.wikipedia.org/wiki/PICT


PICT, there's a blast from the past. Way back at my first job I ported our visual graph editor to the Macintosh. Scrolling was very slow at first due to continuously redrawing the graph, until I started saving it to a PICT as a sort of cache. Worked great, the Mac could draw a PICT very quickly.


PICT was a really weird format. It's a stream of drawing instructions for Apple's old QuickDraw graphics layer. This could include bitmap graphics, but could also include (semi-)vector operations like drawing lines, circular arcs, polygons, or text.


Of course this means making a third party reader means re-implementing QuickDraw and probably getting some nastygrams from Apple's lawyers.


Apple wouldn't care. QuickDraw doesn't exist in current versions of macOS -- Preview.app can't even display them.

I've actually toyed with the idea of a PICT to SVG conversion tool. It's potentially useful for viewing certain old documents in the (long obsolete) Apple DocViewer format.


Finding VRML in there was unexpected.

But I was expecting MNG and didn't find it. Interesting history with that one: a few browsers implemented it but then removed it, leading to a lot of geek angst.

https://en.wikipedia.org/wiki/Multiple-image_Network_Graphic...


I was never sure why MNG failed so spectacularly. Eventually it had to be replaced by full-blown H.264 videos, and a lot of places are still using old-fashioned animated GIFs because firing up a full-fat video codec for a 32x16 animation with a dozen frames is silly.

Was it really just the lack of interest from browser makers preventing people from using it?


My limited understanding was that the spec was too big and too difficult to implement. Too many features, apparently. A simpler version called APNG was advanced instead. Until today I had no idea it is supported just about everywhere except IE11. No one seems to know... spread the word. ;-)

https://caniuse.com/?search=apng


My understanding is that the video codecs are pretty clearly ahead for lossy compression, even for small numbers of frames. Image codecs got stuck in the jpg/gif/png local maximum, hence why a lot of the new image codecs (webp, heif, avif) are conceptually "one frame of a video codec" (vp9, h265 and av1 respectively)


I remember being excited about MNG, but finding the open-source libmng a pain to use. It was all based around callbacks with the library driving the animation cycle. Not a great fit if you were interested in using MNG for things like sprite animation.


Tool support was also an issue: I don't remember any software that supported it out of the box, much less generated it, only a couple of dedicated image viewers/converters.


Is BMP really dead? It's my go-to file format when playing around with graphics programming (just stupid hobby stuff). It's easy to pick a format and write one out, and most everything I've used reads it, even on Linux. It can be tricky reading one created by something else, but if you make assumptions based on the header sizes and the fact that you rarely run into one that's not a simple 24-bit pixel array, it's not too bad. I do only use it as an interim format, though.


You should take a look at tiff (also on their list for some reason). I used bmp in my imaging work because it's lossless and uncompressed (I got tired of waiting for 500M png files to open). But tiff is quite simple to understand and is a little more future proof than bmp.


I thought about looking into the tiff format, but never have. If it avoids the format versioning trouble that bitmaps have, then I'm all for it. I remember using it back in the day as an output format for the povray ray-tracing program (I think?). Maybe DKB.


It's as dead as you want it to be. IMO, I prefer it when I have to deal with lossless uncompressed images because it's familiar to users of old versions of Windows.


Anecdotally, the first time I heard of TGA was in Tomb Raider. The Tomb Raider games of old had a screenshot feature that outputted TGA files of gameplay and were useful for making walk-throughs for other people (you would typically convert them to JPG when sharing on the web).

And I still see BMP files being used even in modern games. Yeah: PNG is better, but BMP has its uses.


BMP and TGA have the advantage that they're uncompressed, so it's mostly trivial to dump the contents of your framebuffer to a file without any major dependencies.


Agree about TGA, it is trivially easy to write. Example in C# for grayscale version, a single page of code: https://github.com/Const-me/Vrmac/blob/1.2/VrmacInterop/Util...

BMP is more complicated, unfortunately. The header structure is more complex, and then there's the requirement for rows to be 4-byte aligned, so you might need to insert padding bytes between the rows.


The code for BMP isn't very complicated, the hard part is knowing all the rules and corner cases. Like the 4-byte alignment you mentioned, or the fact that the rows are in reverse order of what you'd expect (bottom up).


I think the row order in BMP is OS/2 related. Windows, X, etc. all put 0,0 in the top-left corner, but OS/2 puts it in the bottom-left instead.


You can write an uncompressed PNG IIRC.


Sort of but not really.

You can write out PNG files with deflate blocks that aren't compressed, so in that sense they are uncompressed PNG files, but all of the complexity of supporting compression is still there in the data format in a way that is not fully avoidable even if you don't actually compress the data.

BMPs on the other hand are simple to write, only a very small portion of the header data is required and then you can dump the image out as its raw pixel data in a way that they are probably already being stored in memory, making it a very easy format to support the writing of and a pretty easy format to support the reading of.


BMP is quite complex and poorly documented (it's a bunch of random Microsoft structs and #defines), more so than PNG.

BMP has compression too (RLE and Huffman). PNG is simpler here since you can just drop a DEFLATE library straight in.

Images are usually stored in memory as RGBA in scanline order. You can dump this representation straight to a PNG file (edit: this is wrong, you can't). BMPs are typically written with pixels in packed BGR order, rows going bottom to top. But not always. I think they go top to bottom if you give a negative height, and you can supply masks defining which bits the RGBA come from.


If you're going to drop a library in, sure use PNG or whatever else, but the premise of this subthread was about ease of dumping an in-memory representation of the pixels into a valid file, and BMP makes that way easier than PNG does.

In a couple lines of code with no external library you can create a valid BMP file with just enough header bits to tell the BMP reader how your pixels are arranged and let it deal with the conversion.

Virtually all of the complexity of BMP files is optional and can be ignored by a BMP writer and just let the BMP reader worry about it. This is not true for PNGs where you have to have a significant understanding of the file format to write a PNG file.

Libraries make that not matter so much but are kind of outside of the scope of where this discussion was at the time this came up.

I'm certainly not advocating for BMP over PNG and most people should just include a flexible image handling library that's relevant for their given language and be done with it, but I also understand why even today programmers working in languages like C/C++ will use BMPs just to get a simple and functional export working without dealing with the complexity of using a third party library when their needs for the file format are minimal and internal to their own use and not being exposed to users.


Well, you're right. I had thought you could write an uncompressed PNG, but I was wrong, it has to be DEFLATED. You can write a series of uncompressed DEFLATE blocks like you said (not so bad), but the Adler32 and CRC32 checksums kill you.
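For reference, this is the Adler-32 in question (a minimal sketch per RFC 1950); PNG additionally requires a CRC-32 over every chunk:

    #include <stddef.h>
    #include <stdint.h>

    /* Adler-32 over the uncompressed data, as the zlib wrapper around
       PNG's IDAT requires. Unoptimized on purpose. */
    uint32_t adler32(const uint8_t *data, size_t len)
    {
        uint32_t a = 1, b = 0;
        for (size_t i = 0; i < len; i++) {
            a = (a + data[i]) % 65521;   /* 65521: largest prime below 2^16 */
            b = (b + a) % 65521;
        }
        return (b << 16) | a;
    }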


Just to chime in, my 3D engine uses TGA to import textures too because the code is dead simple.

Sometimes when you are going to use your own format for saving data you could pick the simplest format for your import to accelerate productivity at the cost of diskspace.

In a similar way I use Collada for character animations. It gets a lot of flak from AAA studios (who all use FBX), but for indie development it's simple enough to implement in a reasonable time (XML, so you can see what's what)...

In the same line of thinking, I import static meshes from .obj, sound from .wav and last but not least font from .ttf!

The limiting factor usually is the editor you use to create/manipulate the assets in the first place, always look to the simplest export format they have, for Photoshop TGA is the simplest!

I export everything to my own custom binary formats to be loaded by the engine on the customer's side, and there LZ4 is a pretty good/simple compression (TGA itself uses RLE, BTW), and then .zip (also very simple!) to wrap it all up...

Keep it simple!


TGA is a great format for tinkering because it is very simple to code yourself to generate images and it is still widely supported.


Yes, IIRC it's about an 18 byte binary header and then BGRABGRABGRA... or BGRBGRBGR... bytes for uncompressed (there's also an RLE mode, but I never use it). The only real gotchas are the channel ordering and that the image might be flipped top-to-bottom if you're not careful.

These days though, I tend to use the even simpler PPM format. It uses normal RGB byte ordering from top to bottom, and it's simple enough that I can code a basic writer for it from memory:

    fprintf(stream, "P6 %d %d 255\n", width, height);
    fwrite(image, 1, height * width * 3, stream);
(Often, I won't even bother opening a file to write to. I'll just emit the image to stdout and then pipe it to Image Magick for display or to convert to a compressed image format for storage.)


PPM P6 is also what I use all the time - I usually have to convert from paletted format to RGB first.

A PPM paletted format would save me a lot of space.

The built-in support for PPM in Preview on Mac OS X makes it extremely useful.


For my use, alpha is what I tend to miss in PPM. There's the PAM extension [0] but it's not nearly as widely supported as basic PPM. Usually at that point, I reach for PNG via stb_image_write [1].

[0] https://en.wikipedia.org/wiki/Netpbm#PAM_graphics_format

[1] https://github.com/nothings/stb/blob/master/stb_image_write....


TGA was very popular in 3d games for a time, I do recall seeing lots and lots of textures and the like encoded as TGA.


IFF (or IFF-based formats) is a lot more widely used than some realize, just not usually with ILBM content nowadays. Microsoft RIFF (largely WAV and AVI, although animated Windows cursors also use RIFF), Apple's AIFF audio, Maya uses an IFF variant for raster image data, and even The Sims uses an IFF variant for many of its data files.


Yeah RIFF is essentially IFF with opposite endianness.

Regarding IFF usage in The Sims, it's not surprising given that EA designed the format. I believe that SimCity 2000 cities and scenarios are also IFF.


It officially stands for "Resource Interchange File Format", but in practice it's byte-reversed IFF.


PNG's chunk structure is essentially a cut-down and then extended IFF.


CGM YEEEEEHAW

Ha ha ha I work in an image format so forgotten it doesn't even make the forgotten list.

Computer Graphics Metafile bitches!

My god how I loathe that thing. It's kept alive in specifications purely by the efforts of a single software company flooding all the WGs with cheerleaders. "No, it's not obsolete, SVG is just a thing used by hackers and programmers"


this? https://en.wikipedia.org/wiki/Computer_Graphics_Metafile

you poor, lost soul. tell us where it hurt you.


TIFF is not dead; it's still used heavily in bio sciences, imaging, etc. VRML is still a lowest common denominator for transferring 3D.


Most RAW camera files like Nikon NEF are based on TIFF with a lot of proprietary extensions to the format.


The successor to JPEG is going to be JPEG XL (.jxl)

https://jpegxl.info/


Still waiting for browser and industry wide support. So far no one seems to care much.


Not true: https://caniuse.com/?search=jxl

It's in Chrome, Firefox, Edge, and Opera behind flags - works great!


It is behind a flag, with no timetable for when it will be officially supported. And no company (apart from Facebook) has come out and said it will use it as long as browsers support it. Most are still in the AVIF camp.


That's awesome that Facebook is supporting it!

https://jpegxl.io/tutorials/facebook/#facebookjxlsupport

> [as of] the 8th of November 2021, all JPEG XL files were accepted ... Facebook offers full JPEG XL support


It’s unfortunate they spent time getting AVIF implemented right before JPEG XL. JXL is a more complete image format, rather than a keyframe of AV1 video. Some media-format cruft is accumulating in web browsers, but JXL of all things should ironically really be in there.


What about webp, avif, and the like?


WebP hasn't shown any significant advantage in real-world tests, and AVIF only wins at low bpp with a huge trade-off in encode and decode speed. It is listed in the JPEG XL comparison table.


Interesting, most websites that convert user media to WebP load quite fast for me whereas conversions to JPEG are fast but still a bit slower/lower quality in my experience.

WebP suffers from lack of support in outdated browsers, but that's the only problem I've found with it. AVIF is kind of a silly format, but for some use cases it works quite well.

JPEG XL has very little tool support, isn't (as) free as the alternatives, and browser support is practically non-existent. Nobody wants to adopt the format either, because the JPEG group made the same mistake Nintendo made when naming the Wii U: JPEG XL sounds like a JPEG file with something special going on inside, while in fact it's a completely new format.


The interesting thing to me is that compared to WebP and AVIF, JPEG XL is the only one that isn't just largely based on a video codec's I-frame compression.


SGI's RGB files are missing. http://paulbourke.net/dataformats/sgirgb/sgiversion.html

They were heavily used on SGIs as their 'native' image file format.


Can we write FLIF off, or does it still have a chance at wider adoption? I remember quite a positive reaction on release, but don't see it in the wild, unfortunately.

https://flif.info/


"FLIF development has stopped since FLIF is superseded by FUIF and then again by JPEG XL, "

Smells very dead to me.


I believe that the developer of FLIF integrated much of the tech into JPEG XL.

It seems to happen a lot with codecs (e.g., Xiph's Daala tech merging into AV1, or Xiph's CELT and Microsoft's SILK merging into Opus). I see FLIF as the prototype and JPEG XL as the production version.


There was also an update to it called FUIF - Free Universal Image Format

https://github.com/cloudinary/fuif

But it too got superseded by JPEG XL (.jxl) https://jpegxl.info/


I thought you were talking about FLIC[1] at first. Now there's a name I've not heard in a long time.

[1] https://www.compuphase.com/flic.htm


I wish they'd mentioned my favorite image format: XPM. Open one of those in gVim and prepare to be amazed.
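For anyone who hasn't opened one: an XPM file is literally a C char-array definition, which is why a text editor can make sense of it (gVim goes further and colours the pixels). A tiny hand-made example, assuming nothing beyond the XPM3 format itself:

    /* XPM */
    static char *tiny_xpm[] = {
    /* width height ncolors chars_per_pixel */
    "7 7 2 1",
    /* colors: one character per palette entry */
    ". c #000080",
    "# c #FFFF00",
    /* pixels, one string per row */
    ".......",
    ".#...#.",
    ".#...#.",
    ".......",
    "#.....#",
    ".#####.",
    ".......",
    };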


I'd like to see a mention of FIF, fractal format from Iterative Systems, something quite different from other approaches to store images.


This format was very interesting because it was essentially "resolution independent". You could zoom in further and further and see nonsensical made up details by the fractal algorithms without getting any blurriness or interpolation artifacts.


> This format was very interesting because it was essentially "resolution independent"

When I read that, I thought it was because it's a vector format. Usually "resolution independent" images, from my understanding, are vector formats, since they can scale infinitely, like SVG. However, I couldn't find information online about whether .FIF is a raster or vector format. I saw one source that said it shares similarities with vector images, but that didn't confirm anything for me. So I was inclined to think .FIF is a vector format; please correct me if I am wrong. It's so hard to find this information, and I wondered if my google-fu was getting rusty.

EDIT: Whelp, it turns out my google-fu is getting rusty. I managed to find the information and I was wrong. It is a raster format[1].

[1] http://fileformats.archiveteam.org/wiki/Fractal_Image_Format


Your link contains a nice article [1] on how the resolution-independent "raster type" looks when zooming in, including ancient Windows software to try it yourself. Sadly Wine crashes when I try to start it.

1.: http://users.senet.com.au/~mjbone/Fractals.html


I want to be pedantic and point out the Nokia 3310 used to illustrate WBMP didn't support WAP. It should be the 3330, which has a reversed (white with blue details) case. (I know the "XpressOn covers" are swappable, but it's clearly a PR shot not someone's random phone).


I was kind of hoping to see PNM referenced here. That's the very simple format that comes in PBM, PGM, and PPM files made popular by Jef Poskanzer's PBMPlus (and later, NetPBM).

The core insight was great: have one generic lossless simple raster format and make lots of command line image processing tools for it. Then have a bunch of converters. For many years I got a lot of work out of things like xpmtoppm icon.xpm | pnmscale 3 | ppmtogif > icon-big.gif. It helped the format was so simple you could easily create images in C code, too. (These days we use Imagemagick or derivates instead.)

One interesting thing about PNM is that very few images were ever stored in that format itself. It was intended mostly as an intermediate format for processing.


Wasn't VRML support in browsers usually plugins and not native?



When I used it, it was through a browser.

I would create the files in a txt editor and then explore the world-space created via browser.


I distinctly recall installing QuickTime to view some VRML stuff in the mid/late 90's.


QuickTime's QTVR wasn't the same as VRML. While VRML was a fully 3D world/object, QTVR was pre-rendered scenes with custom navigation controls.

There was an "object" view which was basically a bunch of still images of an object from different angles. The cursor moved a virtual camera that just displayed the still frame for the appropriate angle.

The "panorama" view was a panoramic image mapped to view angles from a central point. Images could be cubic or spherically mapped IIRC.

Both types supported hyperlinks. So clicking a linked region could load another scene which could have yet more links to more scenes.

The Star Trek Interactive Technical Manual was a famous use of QTVR. It enabled visual fidelity that a real-time 3D renderer of the era couldn't deliver. It was made by actually photographing the Star Trek production sets.


There was also QuickDraw 3D (not QuickTime, and not cross-platform), Apple's 3D graphics API, which used actual 3D rendered objects.

Classic MacOS had pretty well-integrated support for QuickDraw 3D Metafiles: they could be dragged and dropped, and you could embed 3D objects into a word processing document or presentation or something and manipulate and explore them in there.

Kind of mirrors what Apple is doing with USDZ these days (you can send a USDZ model in iMessage and the recipient can manipulate it right there in chat)



Another one that didn't make it: writing a packed binary file where the first two shorts are the width and height and then just 8 bit grayscale values.

That was my favorite way to serialize images but I never had the foresight to add enough header or versioning info to make it backward compatible.

Now after 30+ years of programming the first thing I add to any homebrew file format is enough info to make it extensible.
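That kind of future-proofing can be as small as this (a sketch of a hypothetical header, not any real format):

    #include <stdint.h>

    #pragma pack(push, 1)
    struct gray_header {
        char     magic[4];    /* e.g. "GRY1": identifies the format */
        uint16_t version;     /* bump on incompatible layout changes */
        uint16_t header_size; /* offset to pixel data, so old readers
                                 can skip fields added later */
        uint32_t width;
        uint32_t height;
        /* new fields go below; header_size keeps old readers working */
    };
    #pragma pack(pop)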


Just store the width. You'll know the height when you're done drawing it, and it saves you two bytes.

You never get over learning to program on a system with 3.5k ram.


Sounds a lot like a PGM.


NAPLPS didn't really have much of any syntax. Pretty much anything was valid. I used to fix Telidon terminals back in the day. A favourite pastime was to send a terminal some wildly inappropriate data so as to enjoy the wild and trippy vector madness that would ensue.


Given the title I was expecting a lot more obscure entries. I’d be shocked if many people reading that kind of article hadn’t heard of at least half of those formats. Or maybe I’m just getting old…?


This article has some... interesting entries (as noted elsewhere in the comments here), but it came up at an interesting moment for me. I spent last weekend poring over the PNG spec because I wanted to make QR codes real small [1]. I'm completely obsessed now. I'm going to dig into some other formats...

[1]: https://github.com/qubyte/qrcode-png/blob/main/index.js#L6-L...


In a summer tech program for pre-college, tech-interested/inclined high school students, my group of three created four different VRML world-spaces. We each made our own, and I made the "gateway" world-space where you would enter a Parthenon-like structure with three hyperlink objects inside which you could click on to travel to each of the other places.

Each place had a similar hyperlink object to get back to the Parthenon.

I enjoyed it. I probably still have the VRML 2.0 book around somewhere.


VRML Sourcebook [0]. I did a similar little project in VRML. I took the general organization of my homepage and VRMLized it. Authoring it was such a pain in the ass.

I loved the concepts behind VRML, and still do. Unfortunately all of the implementations around it were terrible.

When it was in vogue PCs weren't powerful enough to handle anything but the most trivial models. Dial-up was entirely insufficient to deliver anything but the most trivial models as well.

[0] https://archive.org/details/vrml20sourcebook00ames


I remember the VRML player having some weird skeuomorphic navigation widget that was incredibly clunky to use. You had to use the mouse to turn virtual wheels to adjust your speed or something, all laid out on a horizontal tube thing at the bottom of the screen.

And this was years after Doom and other 3D shooters introduced wasd movement.

VRMLs biggest problem was that it was just way ahead of its time. Everybody who tried it agreed that 90s PC hardware just wasn't ready for VR, and dialup modems were never going to be adequate. Unfortunately almost everything that has come up to replace it has been proprietary and usually monetized. We've lost that sense of "anybody in the community can be a creator" that defined the early web. It was too hard to monetize.


> I loved the concepts behind VRML, and still do. Unfortunately all of the implementations around it were terrible.

It's been ~18 years, so my memory is skewed, but may I ask how the implementations were terrible? Honest question.


On the playback/viewing side, many VRML viewers were browser plugins, the idea being you might navigate from a web page to a VRML space or embed a 3D object in a web page. These were (as I remember) very sluggish and crash-prone, which took down the whole browser. Even dedicated VRML browsers weren't all that great.

I don't remember any without awful skeuomorphic buttons and view borders. Their navigation UIs also tended to be terrible. Most of the ones I tried aped 3D modeling apps, using the mouse (a workstation mouse with three buttons) to fly through a scene. But with the typical PC mouse having two buttons (and a Mac one), you had to use modifier keys to switch between "fly" and "look". VRML supported world gravity (intensity and a normal) so a viewer program could just navigate like an FPS game, but I don't remember any that did.

Another issue I remember, though it may have just been my PC: because WRL files don't embed any linked content, they've got to go fetch every resource referenced, like on a web page. On my systems at least this led to a lot of pop-in as I moved through scenes, even locally. On dialup it was a thousand times worse.

So modulo actual hardware performance, or lack thereof, you had poorly performing software with very alien feeling UI. While that is more than 20 years of time between me trying VRML and now those are things that stand out to me. Maybe none of these problems existed if you were blasting around on an SGI O2 workstation or knew just the right software to use. I had neither so I just stumbled around playing with stuff.


I find TGA very useful when in need of image output and without easy access to a proper image lib. TGAs are trivial to output just by writing RGB values to a file with a simple header, no encoder needed.

I am so glad that many image viewers/editors still load TGA, my computer has gigabytes of them!

Spec here: http://www.paulbourke.net/dataformats/tga/


Is it crazy to fondly remember some of those formats?


I guess I'm old, because I've written loaders and savers for at least 3 of the formats listed there. Granted, it was when I was still learning, but many of those formats are pretty simple to code up a good first-approximation loader for. I don't know if I'd want to try to write a loader for JPEG or PNG these days, for example.


VRML is still used as an input format for colour 3d printing, eg. https://www.stratasysdirect.com/resources/design-guidelines/...


Should VML be included as a vector markup image format? Microsoft was forced to abandon VML in favor of the open SVG.


RIP came a year or two too late. It was a nice leap for BBSes but the Web came along and started the decline of BBSes. There were some neat apps written for RIPscript but I'm going to guess that the RIP version of Legend of the Red Dragon was its highest use.


Sorry if this is tangential, but I just love this discussion. Rather than debating about things, this discussion thread is full of informative tidbits, and fond (or not so fond) reminiscences of image decoding programming.


Does anyone know of a method for extracting images from the FlashPix (.fpx) format? I think user-friendly support for the format was lost when Kodak went bankrupt. The format itself, however, is overly complicated.


ImageMagick may have support:

https://github.com/ImageMagick/libfpx


Thank you!


Windows Metafile


>bmp

>tiff


I agree that I am surprised by the inclusion of those two formats. I work with images and we still see BMP and TIFF all the time in medical contexts.


I’d like to nominate APNG. It works only in Firefox, unfortunately.


Looks like times have changed:

https://caniuse.com/?search=apng


ICO? Windows' icon format. IIRC it was really two images baked into a single file: one a colour bitmap and the other a mask used for transparency.


Don't a lot of favicons still use that format?


Possibly. There was this HN news article from a short time ago [0]

"The vast majority of the favicons offered up by websites are PNG. 71.6% of <link rel=”icon”> images are PNG. 21.1% of /favicon.ico files are secretly PNGs, including Reddit’s. Strangely, only 96.1% of Apple touch icons are PNG. Presumably the other 4% are broken."

[0] https://news.ycombinator.com/item?id=28933391


Funny, I just implemented a PCX writer today... but only to be able to embed images in Intermec Direct Protocol (an obscure print language).


I also find ASCII image formats like PBM and XPM amusing. Simple images viewable in a mere text editor as a budget option. :D


I should pull out my VRML2.0 Sourcebook again, that was a really fun time.


I remember enjoying virtual walk on the ISS using VRML ~2001. Worked great on 512kbit link and with quite low resource usage :)


FIF - although Google Earth looks like it used a similar algorithm.


Isn't BMP the default file format of Paint nowadays?


It uses PNG by default. It can also do BMP, JPEG, GIF, HEIC and TIFF on Windows 10.



