Their CLI manages to completely lock up my up-to-date MacBook Pro when simply downloading files, and it has very strange and conflicting choices of parameters. It is comprehensive, but I wouldn't call it good.
I won’t defend their CLI arguments other than noting that they usually closely follow the also poorly-named APIs[1], but locking up is highly unusual — do you have something like AV software or other quasi-malware which might be interfering with normal I/O? We’ve used it for many millions of files over the years and that’s never been an issue on macOS or Linux.
1. AWS needs a Chief Consistency Officer who can block shipping until you clean up the prototype slop
This is literally the first program I came up with, no attempt to optimize it at all.
There is zero chance that the AWS sync command is filling my CPU just by hashing bytes
edit: I'm going to try not to let you nerd snipe me into doing the profiling the AWS CLI needs to be doing, for them. Because that's not what I desire to do.
so 200 megabytes/second? I'm not sure what your definition of large files is, but hashing anything sizable with SHA1 is trivially CPU-bound with any modern SSD, in the absence of a processor with the sha asm extensions.
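If you want to check this on your own machine rather than argue about it, a quick throughput measurement with Python's stdlib `hashlib` is enough (the 256 MiB size and chunk size here are arbitrary choices; actual numbers depend entirely on your CPU):

```python
import hashlib
import time

# Hash 256 MiB of zeros in 1 MiB chunks and report throughput.
# This measures raw single-threaded SHA1 speed, nothing else.
chunk = bytes(1024 * 1024)  # 1 MiB buffer of zeros
n_chunks = 256

h = hashlib.sha1()
start = time.perf_counter()
for _ in range(n_chunks):
    h.update(chunk)
elapsed = time.perf_counter() - start

mib_per_s = n_chunks / elapsed
print(f"SHA1: {mib_per_s:.0f} MiB/s")
```

If the number this prints is well below your SSD's sequential read speed, hashing is the bottleneck; if it's well above, the disk is.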
That being said, a quick glance at the source suggests that awscli's s3 sync only compares files by size and timestamp, not etag, so it's not hashing anything client-side.
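For illustration, the size-and-timestamp comparison amounts to something like the sketch below. This is a rough approximation of the behavior, not awscli's actual code, and `needs_upload` is a hypothetical name:

```python
import os

def needs_upload(local_path: str, remote_size: int, remote_mtime: float) -> bool:
    """Decide whether a local file should be uploaded, comparing only
    size and modification time -- roughly what a sync that skips
    client-side hashing does. Illustrative sketch only."""
    st = os.stat(local_path)
    if st.st_size != remote_size:
        return True  # sizes differ, contents must differ
    # Same size: upload only if the local copy is newer than the remote.
    return st.st_mtime > remote_mtime
```

The upshot is that no bytes are read or hashed at all for files that match on both fields, which is why this strategy is cheap but can miss a changed file whose size and mtime happen to match.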
I can't say I've had the same experience here on Windows using WSL. It runs like a champ. I've also used it within Debian-based Docker images on many occasions.