$ pv lock-your-screen.png | base64 | gzip | wc
2.1MiB 0:00:00 [ 13MiB/s] [================================>] 100%
8641 48904 2264151
$ pv lock-your-screen.png | xxd -p | gzip | wc
2.1MiB 0:00:00 [4.41MiB/s] [================================>] 100%
10109 49956 2573293
$ pv lock-your-screen.jpg | base64 | gzip | wc
377KiB 0:00:00 [19.2MiB/s] [================================>] 100%
1420 8373 392796
$ pv lock-your-screen.jpg | xxd -p | gzip | wc
377KiB 0:00:00 [9.36MiB/s] [================================>] 100%
1487 8935 441077
You only see some compression because gzip is just backing out some of the redundancy added by the hex or base64 encoding, and the way the Huffman coding works happens to favor base64 slightly.
Try with uncompressed data and you'll get a different result.
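A rough sketch of that experiment, using repeated text as a stand-in for uncompressed data (the filename and exact sizes are illustrative; they will vary by gzip version):

```shell
# 1 MiB of highly redundant text standing in for an uncompressed file
yes "hello world" | head -c 1048576 > plain.txt
wc -c < plain.txt                 # 1048576
gzip -c plain.txt | wc -c         # a few KiB: genuine compression
base64 plain.txt | gzip | wc -c   # also a few KiB: the base64 layer
                                  # doesn't hide the underlying redundancy
```

Here gzip shrinks both the raw and the base64-encoded form dramatically, unlike the .png case, where the ~25% saving was just the encoding overhead coming back out.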
Your speed comparison seems disingenuous: you are benchmarking "xxd", a generalized hex dump tool, against "base64", a dedicated encoder. I wouldn't expect either tool's speed to have any interesting relationship with the best possible speed of a tuned implementation.
There is little doubt that base-16 encoding is going to be very fast, and trivially vectorizable (in a much simpler way than base-64).
FWIW, PNG and gzip both use the DEFLATE algorithm, so I wouldn't call PNG's compression "better than gzip".
> This has led to its widespread use, for example in gzip compressed files, PNG image files and the ZIP file format for which Katz originally designed it.
Now, "at least as good" is enough for my point: compressing a .png file with gzip isn't going to yield additional compression in general. When compressing a base-64 or hex encoded .png file, the additional compression you see is largely just the result of removing the redundancy of the encoding, not any compression of the underlying image.
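A minimal sketch of that point, using gzip output as a stand-in for a PNG's DEFLATE-compressed payload (sizes are approximate and depend on the gzip version):

```shell
# already-DEFLATEd data, standing in for a .png
yes "hello world" | head -c 1048576 | gzip > once.gz
wc -c < once.gz
gzip -c once.gz | wc -c         # roughly the same size: no real gain
base64 once.gz | gzip | wc -c   # back near the size of once.gz: gzip only
                                # removed the redundancy base64 added
```

The second gzip pass can't improve on the DEFLATE stream already inside the file, and gzipping the base64 form only claws back the 4/3 expansion that base64 introduced.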