The way s3ql works is that it copies the db from S3 to your local machine. You make changes and then later write the new version back to S3 along with the data. So it won't help in this case.
I believe it uploads the whole thing each time. I think it may even upload it with a new index counter in the name to version it (but I can't find that in the docs now).
It could probably be improved (subject to the nuances of S3, which I'm not fully familiar with). One way would be to borrow the concept of SQLite's WAL mode: use an append operation on S3 (if it supports one) to add each transaction to a file that serves as the transaction log, then at certain intervals (say, every few thousand transactions) flush that log into the main database file.
This would substantially reduce the number of times the database would need to be re-uploaded in full.
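Here's a rough sketch of what I mean. Standard S3 doesn't do in-place appends as far as I know, so this keeps the "log" as a series of small numbered objects instead of one growing file; the bucket, key names, flush threshold, and the S3Wal class itself are all made up for illustration:

```python
import sqlite3
import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"          # hypothetical bucket
DB_KEY = "metadata.sqlite"    # the full database object
LOG_PREFIX = "wal/"           # log segments live under this prefix
FLUSH_EVERY = 5000            # compact after this many logged statements


class S3Wal:
    def __init__(self, local_db_path):
        self.db = sqlite3.connect(local_db_path)
        self.local_db_path = local_db_path
        self.pending = []      # statements not yet uploaded
        self.segment = 0       # next log segment number
        self.logged = 0        # statements logged since last compaction

    def execute(self, sql, params=()):
        """Apply a write locally and queue it for the S3 log."""
        self.db.execute(sql, params)
        self.pending.append((sql, list(params)))

    def commit(self):
        """Commit locally and push the pending batch as one small log object."""
        self.db.commit()
        if not self.pending:
            return
        body = "\n".join(repr(stmt) for stmt in self.pending).encode()
        key = f"{LOG_PREFIX}{self.segment:012d}"
        s3.put_object(Bucket=BUCKET, Key=key, Body=body)
        self.logged += len(self.pending)
        self.segment += 1
        self.pending.clear()
        if self.logged >= FLUSH_EVERY:
            self.compact()

    def compact(self):
        """Upload the full database, then delete the now-redundant log segments."""
        with open(self.local_db_path, "rb") as f:
            s3.put_object(Bucket=BUCKET, Key=DB_KEY, Body=f)
        listing = s3.list_objects_v2(Bucket=BUCKET, Prefix=LOG_PREFIX)
        for obj in listing.get("Contents", []):
            s3.delete_object(Bucket=BUCKET, Key=obj["Key"])
        self.logged = 0
```

On the read side, a client would download the main database object and then replay any log segments sitting under the prefix before using its local copy.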