I switched from Gnome Terminal after a large duplicity backup run with -v failed repeatedly, apparently because the terminal couldn't handle the load. Switching to urxvt256-ml solved the problem.
When the terminal is too slow, the producer is throttled. This is called flow control: the producer is put to sleep whenever the buffer between it and the consumer (the terminal) is full. No backup program should fail because the terminal is too slow.
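For what it's worth, here is a minimal sketch of what that looks like (plain Python on a Unix-ish system, nothing duplicity-specific, sizes and timings purely illustrative): the producer's write blocks as soon as the kernel pipe buffer fills, because the slow consumer isn't draining it.

import os
import time

# Producer/consumer over a pipe: once the kernel's pipe buffer (typically
# 64 KiB on Linux) is full, os.write() in the producer blocks until the
# slow consumer drains some data -- the producer is effectively throttled.

r, w = os.pipe()
pid = os.fork()

if pid == 0:
    # Child: the slow "terminal" (consumer). Reads a chunk, then dawdles.
    os.close(w)
    while True:
        chunk = os.read(r, 65536)
        if not chunk:
            break
        time.sleep(0.2)  # simulate a terminal that can't keep up
    os._exit(0)
else:
    # Parent: the fast producer, like a verbose backup job.
    os.close(r)
    payload = b"x" * 65536
    for i in range(16):
        start = time.monotonic()
        os.write(w, payload)  # blocks while the pipe buffer is full
        print(f"write {i}: {time.monotonic() - start:.2f}s", flush=True)
    os.close(w)
    os.waitpid(pid, 0)

Run it and you'll see the first write return instantly and the rest take roughly as long as the consumer's sleep; the producer is paused, not failed.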
I guess that would depend on whether the backup is reading from and/or writing to an external drive without read/write buffers (e.g. the early CD writers from the 90s). Having the handler program paused might cause I/O errors there, which would legitimately cause the backup to fail.
However, I do agree with your point that this "shouldn't" happen in practice (i.e. on any decent hardware you'd expect buffers to prevent that kind of write error).
It is most certainly reading from a drive, or from a TCP stream. Reading from CD-ROMs shouldn't be a problem either. In other words, I doubt that the terminal was the actual problem.
It reads from a drive, encrypts each file and writes it out, so there is a sizeable buffer to hold along the way.
I wouldn't think the terminal is the actual problem either, but running the same job side by side consistently failed on Gnome Terminal and finished successfully on urxvt.
On a side note, OP's remark on CD-ROMs was about writing: the CD-ROM write feed had to be an uninterrupted, continuous stream, and the smallest hiccup would blow the operation and render the media useless.