From parent: "Depends a lot on the usecase of course."
The use case I see most often for SFTP (and hinted at in the parent's problem description) is generating one-off reports for third parties, or passing data to vendors who are stuck in the 90s, like financial services companies.
It's almost always read-only (or read-and-delete), in which case implementing an API like this is pretty straightforward. Perhaps log unsupported commands and decide later whether you want to implement them.
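The "support a small subset, log the rest" idea can be sketched as a plain dispatch table. This is illustrative only — a real server would implement something like paramiko's SFTPServerInterface — and the command names and handlers here are hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("sftp-bridge")

# Hypothetical read/delete-only command set; names are illustrative,
# not paramiko's actual API.
def handle_open(path):
    return f"opened {path}"

def handle_list(path):
    return ["report-2024-01.csv", "report-2024-02.csv"]

def handle_remove(path):
    return f"removed {path}"

SUPPORTED = {"open": handle_open, "list": handle_list, "remove": handle_remove}

def dispatch(command, path):
    handler = SUPPORTED.get(command)
    if handler is None:
        # Anything outside the read/delete subset is logged and refused,
        # so you can see later which commands clients actually wanted.
        log.info("unsupported SFTP command: %s %s", command, path)
        return None
    return handler(path)
```

The log line is the cheap part that pays off: after a month in production you know exactly which unsupported commands, if any, are worth implementing.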
You could. At least with OpenSSH you can specify a byte range; that is how lftp is able to split a file into many parallel streams over SFTP. I can't imagine anyone doing this with a database, however; at least, not for writes.
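The byte-range splitting that lftp does can be sketched with plain file seeks — this is a local-file illustration of the ranging logic, not SFTP itself, and the splitting scheme is an assumption:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def read_range(path, offset, length):
    # Each worker opens its own handle and reads one slice, mirroring
    # how lftp fetches separate ranges over separate streams.
    with open(path, "rb") as f:
        f.seek(offset)
        return offset, f.read(length)

def parallel_read(path, streams=4):
    size = os.path.getsize(path)
    chunk = -(-size // streams)  # ceiling division
    ranges = [(i * chunk, min(chunk, size - i * chunk))
              for i in range(streams) if i * chunk < size]
    with ThreadPoolExecutor(max_workers=streams) as pool:
        parts = pool.map(lambda r: read_range(path, *r), ranges)
    # Reassemble in offset order regardless of completion order.
    return b"".join(data for _, data in sorted(parts))
```

This works for reads because ranges are disjoint and the file is immutable during the transfer — exactly the property a live database table does not give you, which is why doing this for writes sounds like a bad time.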
I think this implementation buffers the upload in memory before sending it to S3. It usually won't handle 20GB files (unless you have something like 20GB of RAM); where a smaller file would go through, a file that size will simply never upload.
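The fix for the memory problem is to stream fixed-size parts as they arrive rather than buffering the whole file — S3's real multipart API (boto3's create_multipart_upload / upload_part / complete_multipart_upload) works along these lines. This sketch uses a stand-in `upload_part` callback instead of the real S3 call:

```python
import io

PART_SIZE = 5 * 1024 * 1024  # S3 requires parts of at least 5 MiB (except the last)

def stream_upload(stream, upload_part, part_size=PART_SIZE):
    """Read `stream` in fixed-size parts and hand each one to
    `upload_part(part_number, data)` — a stand-in for the real S3
    multipart call. Peak memory stays bounded by part_size, so a
    20GB transfer needs megabytes of RAM, not gigabytes."""
    part_number = 1
    while True:
        data = stream.read(part_size)
        if not data:
            break
        upload_part(part_number, data)
        part_number += 1
    return part_number - 1  # number of parts sent
```

The 5 MiB minimum part size is S3's documented constraint; everything else here (function names, callback shape) is an illustrative assumption.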
You need to make it transactional. Upload to a temp file name (something easily ignored by whatever backend processes are looking at the files) and then do an atomic rename once the transfer is complete.
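On a local or POSIX-like filesystem the temp-name-then-rename pattern looks like the sketch below. Note that S3 itself has no atomic rename — there you would upload under a prefix the backend ignores and copy-then-delete, or lean on the fact that a multipart upload is invisible until completed. The `.partial` suffix here is an illustrative convention:

```python
import os

def transactional_write(directory, final_name, data):
    # Write under a name the backend pollers are told to skip...
    tmp_path = os.path.join(directory, final_name + ".partial")
    final_path = os.path.join(directory, final_name)
    with open(tmp_path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the bytes are on disk first
    # ...then publish atomically: readers see the old state or the
    # complete new file, never a half-written one.
    os.replace(tmp_path, final_path)
    return final_path
```

`os.replace` is atomic on POSIX when source and destination are on the same filesystem, which is the whole point of staging the temp file in the destination directory rather than in /tmp.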