You are still doing a copy, and people want to avoid the needless memory copy.
If you are decoding a 4 megabyte jpeg, and that jpeg already exists in memory, then copying that buffer by using the Reader interface is painful overhead.
Getting an io.Reader over a byte slice is a useful tool, but the primary use case for io.Reader is streaming stuff from the network or file system.
In this context, you can either have the io.Reader do a copy without allocating anything (take in a slice managed by the caller), or allocate and return a slice. There isn't really a middle ground here.
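As a minimal sketch of those two options using only the standard library (the string content is just a stand-in for real data):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

func main() {
	data := []byte("a jpeg that is already in memory")
	r := bytes.NewReader(data) // wraps the slice, no copy yet

	// Option 1: Read copies into a caller-managed buffer;
	// the reader itself allocates nothing.
	buf := make([]byte, 6)
	n, _ := r.Read(buf)
	fmt.Printf("option 1 copied %d bytes: %q\n", n, buf[:n])

	// Option 2: allocate and return a slice. io.ReadAll
	// allocates a new buffer and copies the remainder into it.
	rest, _ := io.ReadAll(r)
	fmt.Printf("option 2 allocated and copied %d bytes\n", len(rest))
}
```

Either way, the bytes move at least once; the plain io.Reader interface has no way to hand back a view into the original slice.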
And you are going to work on all 4mb at a time? Even if you wanted to plop it on a socket, you would just use io.Copy, which adds no extra overhead: no matter what, you are always going to copy the bytes out to place them in the socket to be sent.
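For what it's worth, a sketch of that path (sendJPEG is a made-up helper, and net.Pipe stands in for a real socket). Because *bytes.Reader implements io.WriterTo, io.Copy hands the slice straight to conn.Write with no intermediate userspace buffer:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net"
)

// sendJPEG streams an in-memory jpeg to a socket. io.Copy
// detects that *bytes.Reader implements io.WriterTo and passes
// the slice directly to conn.Write, so no intermediate buffer
// is allocated; the unavoidable copy into the kernel's socket
// buffer happens regardless of how the data is handed over.
func sendJPEG(conn net.Conn, jpegData []byte) error {
	_, err := io.Copy(conn, bytes.NewReader(jpegData))
	return err
}

func main() {
	// net.Pipe gives an in-memory connection pair so the
	// sketch runs without a real network.
	client, server := net.Pipe()
	go func() {
		defer client.Close()
		_ = sendJPEG(client, []byte("pretend this is 4mb of jpeg"))
	}()
	received, _ := io.ReadAll(server)
	fmt.Println(len(received), "bytes received")
}
```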
>And you are going to work on all 4mb at a time?
Yes? Assume you were going to decode the jpeg and display it on screen; the user would presumably want to see the whole jpeg at once.
Consider you are working on a program that processes a bunch of jpegs and runs some AI inference on them. Ideally:
1. You would read the jpegs from disk into memory.
2. You decode those jpegs into RGBA buffers.
3. You run inference on the RGBA buffers.
The current image.Decode interface forces a memory copy between steps 1 and 2, so the pipeline actually becomes:
1. You would read the jpegs from disk into memory.
2. You copy the data in memory into another buffer because you are using the Reader interface
3. You decode those jpegs into RGBA buffers.
4. You run inference on the RGBA buffers.
Step 2 isn't needed at all, and if the images are large, it adds latency. If you are running on something like a Raspberry Pi, the delay would be noticeable, depending on the size of the jpegs.
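To make the extra step concrete, here is a minimal Go sketch of the pipeline as it stands today (the file name is hypothetical):

```go
package main

import (
	"bytes"
	"image/jpeg"
	"os"
)

func main() {
	// Step 1: read the jpeg from disk into memory.
	data, err := os.ReadFile("photo.jpg") // hypothetical path
	if err != nil {
		panic(err)
	}

	// Step 2: the needless copy. jpeg.Decode only accepts an
	// io.Reader, so the decoder pulls the same bytes back out
	// of the slice via Read calls into its own internal
	// buffers before parsing them.
	img, err := jpeg.Decode(bytes.NewReader(data))
	if err != nil {
		panic(err)
	}

	// Steps 3-4: img (typically *image.YCbCr for jpegs) would
	// be converted to RGBA and handed to the inference stage.
	_ = img
}
```

A hypothetical jpeg.DecodeBytes(data []byte) style API could parse the slice in place and skip step 2 entirely.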