https://github.com/glennrp/libpng/commits/libpng16

The first/latest one ("avoid potential pointer overflow") sounds scary, but I believe it's just about the pathological case where the library is handed (valid) buffers that extend to within a few bytes of the maximum possible pointer value (i.e. 0xfffff...). If your OS hands out buffers like that, it probably shouldn't.
Going down the list, there are three relevant-looking commits. Listed in chronological order:

[libpng16] Reject attempt to write over-length PLTE chunk https://github.com/glennrp/libpng/commit/81f44665cce4cb1373f...
[libpng16] Prevent reading over-length PLTE chunk (Cosmin Truta). https://github.com/glennrp/libpng/commit/a901eb3ce6087e0afee...
[libpng16] Silently truncate over-length PLTE chunk while reading. https://github.com/glennrp/libpng/commit/1bef8e97995c3312366...
- The first one only substantively changes png_write_PLTE, so it could only be an issue for applications that actually write out PNGs. It changes a check on the ‘num_pal’ argument from a hardcoded 256, aka PNG_MAX_PALETTE_LENGTH, to be based on the bit depth. The png_write_PLTE function is internal and called only from png_write_info, which passes it ‘info_ptr->palette’ and ‘info_ptr->num_palette’ as ‘palette’ and ‘num_pal’, respectively.
- The second one confusingly changes png_write_PLTE again, but just to rename a variable. The substantive change is a similar check on an argument to png_set_PLTE, which is a public function and, incidentally, the only thing that can set ‘info_ptr->num_palette’, unless the application accesses it directly, which is deprecated. (Thus, outside of the deprecated case, it should ensure that the behavior change in the previous patch never actually gets exercised.)
- The third one adds yet another check, to png_handle_PLTE, which is called when reading a PLTE chunk in a PNG file - this time, the variable is called ‘num’. After the check, ‘num’ is used to fill an array of size PNG_MAX_PALETTE_LENGTH - but there was already an earlier check for the length being too large for that, in which case it may longjmp out of the function with png_chunk_error or just return. png_handle_PLTE then calls png_set_PLTE with the same argument, so even without the added check, the previous commit’s check would still trigger and fail the image load. The new check just turns a hard error into something the library might be able to recover from. (All three commits enforce roughly the same invariant, sketched below.)
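The invariant in question: a palette-type image may not carry more palette entries than its bit depth allows (2, 4 or 16 for bit depths 1, 2 and 4), and never more than PNG_MAX_PALETTE_LENGTH (256). A minimal sketch (palette_length_ok is my own name for illustration, not a libpng function; the real checks are phrased differently in each of the three places):

    #include <png.h>   /* PNG_MAX_PALETTE_LENGTH and basic types */

    /* Sketch only: the actual checks live inside png_write_PLTE,
     * png_set_PLTE and png_handle_PLTE. */
    static int
    palette_length_ok(int bit_depth, int num_entries)
    {
       int max_entries = PNG_MAX_PALETTE_LENGTH;   /* 256, for bit depth 8 */

       if (bit_depth < 8)                          /* palette bit depths 1, 2, 4 */
          max_entries = 1 << bit_depth;            /* 2, 4 or 16 entries */

       return num_entries > 0 && num_entries <= max_entries;
    }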
So the second commit is the important one, and the third demonstrates how png_set_PLTE can be called while reading a PNG with a size argument greater than appropriate for the bit depth, but still <= PNG_MAX_PALETTE_LENGTH. What happens then? The oss-security post says:
> Some applications might read the bit depth from the IHDR chunk and allocate memory for a 2^N entry palette, while libpng can return a palette with up to 256 entries even when the bit depth is less than 8.
So are only applications that do something odd affected? The png_get_PLTE function gives the client the actual num_palette value as an out argument, so any client that uses that to size buffers wouldn’t be affected; nor would one that hardcoded 256 or PNG_MAX_PALETTE_LENGTH. For example, Skia does the latter in SkImageDecoder_libpng.cpp.
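For concreteness, this is the kind of client code that stays safe regardless of the bit depth; a minimal sketch (copy_palette is a made-up helper, it assumes png_read_info has already run on png_ptr/info_ptr, and error handling is omitted):

    #include <string.h>
    #include <png.h>

    static void
    copy_palette(png_structp png_ptr, png_infop info_ptr)
    {
       png_colorp palette = NULL;
       int num_palette = 0;

       if (png_get_PLTE(png_ptr, info_ptr, &palette, &num_palette) == PNG_INFO_PLTE)
       {
          /* num_palette is the count libpng actually stored, so sizing the
           * copy from it (or hardcoding PNG_MAX_PALETTE_LENGTH) cannot
           * overflow; sizing it from 1 << bit_depth is the pattern that could. */
          png_color copy[PNG_MAX_PALETTE_LENGTH];
          memcpy(copy, palette, (size_t)num_palette * sizeof (png_color));
       }
    }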
Are there any libpng-internal uses that could be affected? I grepped for ‘->(num_)?palette\b’. TL;DR: I don’t think so, but if you want the pointless detail, here are the more notable uses (the rest were even less interesting):
    /* Report invalid palette index; added at libng-1.5.10 */
    if (png_ptr->color_type == PNG_COLOR_TYPE_PALETTE &&
        png_ptr->num_palette_max > png_ptr->num_palette)
num_palette being too high could only stop this check from triggering (a missed warning at worst), so it doesn’t matter.
png_image_read_header uses it to set image->colormap_entries, which has its own long list of uses. I think this is then used (by the application, via PNG_IMAGE_COLORMAP_SIZE) to determine the size of the colormap buffer, but there may be an issue somewhere.
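For context, the simplified-API pattern referred to here looks roughly like this; a sketch (read_colormapped is a made-up helper, PNG_FORMAT_RGB_COLORMAP is an arbitrary format choice, and cleanup/error handling are omitted):

    #include <stdlib.h>
    #include <string.h>
    #include <png.h>

    /* The application sizes the colormap buffer itself, via
     * PNG_IMAGE_COLORMAP_SIZE, before handing it to png_image_finish_read. */
    static int
    read_colormapped(const char *path)
    {
       png_image image;
       memset(&image, 0, sizeof image);
       image.version = PNG_IMAGE_VERSION;

       if (!png_image_begin_read_from_file(&image, path))
          return 0;

       image.format = PNG_FORMAT_RGB_COLORMAP;

       void *buffer   = malloc(PNG_IMAGE_SIZE(image));
       void *colormap = malloc(PNG_IMAGE_COLORMAP_SIZE(image));

       if (buffer == NULL || colormap == NULL)
          return 0;

       return png_image_finish_read(&image, NULL, buffer, 0, colormap);
    }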
png_image_read_colormap uses it to generate actual color map data, but this is not obviously dependent on the bit depth, and checked against image->colormap_entries (…in case it changed from earlier?).
png_set_quantize uses a user-specified num_palette value and in any case is not important functionality.
Some PNG_COMPOSE code uses num_palette to write to palette; not a problem, since num_palette is still <= PNG_MAX_PALETTE_LENGTH and the palette buffer is always that big (see below)…
png_do_expand_palette just looks up entries in palette (png_ptr->palette) by byte-sized indices without checking anything. The palette buffer must be at least 256 entries long, or else it will read out of bounds, which is presumably why ->palette is only set here:
    /* Changed in libpng-1.2.1 to allocate PNG_MAX_PALETTE_LENGTH instead
     * of num_palette entries, in case of an invalid PNG file or incorrect
     * call to png_set_PLTE() with too-large sample values.
     */
    png_ptr->palette = png_voidcast(png_colorp, png_calloc(png_ptr,
        PNG_MAX_PALETTE_LENGTH * (sizeof (png_color))));
(it’s set differently in the dither case, which I don’t think is relevant?) Anyway, it’s not affected by this.
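To make the out-of-bounds concern concrete: palette expansion is schematically just a byte-indexed table lookup, so the table has to cover all 256 possible index values no matter what num_palette says. A paraphrase of the idea (expand_palette_row is my own simplification, not the libpng source, which works in place and in reverse):

    #include <stddef.h>
    #include <png.h>

    static void
    expand_palette_row(const png_color palette[PNG_MAX_PALETTE_LENGTH],
                       const png_byte *row, size_t row_width, png_byte *dst)
    {
       for (size_t i = 0; i < row_width; i++)
       {
          /* row[i] can be anything from 0 to 255 and is not range-checked,
           * which is why the palette buffer must have 256 entries even when
           * num_palette (or 1 << bit_depth) is smaller. */
          png_color c = palette[row[i]];
          *dst++ = c.red;
          *dst++ = c.green;
          *dst++ = c.blue;
       }
    }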
png_handle_tRNS and png_handle_hIST have separate arrays whose counts are expected to be <= num_palette and == num_palette, respectively.
- For tRNS, the ‘trans_alpha’ buffer itself is always PNG_MAX_PALETTE_LENGTH sized, so there can be no overflow there; it also sets info_ptr->num_trans, which can affect the application if it uses png_get_tRNS (see the sketch after this list).
- The hist buffer is also always maximum size and doesn’t have a separate length field, so it’s covered by the num_palette audit above.
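As with PLTE, the safe application-side pattern is to take the count from libpng rather than deriving it from the bit depth; a minimal sketch (fetch_trans is a made-up helper, assuming a palette image and a completed png_read_info, with error handling mostly omitted):

    #include <png.h>

    static int
    fetch_trans(png_structp png_ptr, png_infop info_ptr,
                png_byte out[PNG_MAX_PALETTE_LENGTH], int *out_count)
    {
       png_bytep trans_alpha = NULL;
       int num_trans = 0;

       if (png_get_tRNS(png_ptr, info_ptr, &trans_alpha, &num_trans, NULL)
           != PNG_INFO_tRNS || trans_alpha == NULL)
          return 0;

       /* num_trans is what libpng stored; copying that many entries into a
        * 256-entry buffer is fine, while copying them into a buffer sized
        * from 1 << bit_depth is the pattern that could overflow. */
       for (int i = 0; i < num_trans; i++)
          out[i] = trans_alpha[i];

       *out_count = num_trans;
       return 1;
    }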
png_do_check_palette_indexes checks the bit-depth invariant separately.
hIST shows up again in pngwrite.c and pngwutil.c; nothing else interesting in those files.
I think num_trans is safe in the same way, but I haven’t looked as closely.