
As a small addendum to "Copying files", you can also copy entire directories:

  $ tar czf - foo | ssh remote "cd /where/to/unpack && tar xzf -"
This is often significantly faster than rsync, e.g. when copying a directory with many files for the first (or only) time.
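
For reference, the rsync command this gets compared against would be something like the following (a sketch; it assumes rsync is installed on both ends and uses the same placeholder paths as above):

  $ rsync -az foo remote:/where/to/unpack/

rsync's per-file checks are what the plain tar stream avoids on a first copy; on repeat copies rsync usually wins, since it only transfers what changed.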


I'm not sure how it compares in performance, but I've always used scp -r for this:

   $ scp -r foo remote:/where/to/unpack


scp has some end-to-end latency for each file transferred. This means that for lots of small files, a single tar file stream is much quicker than 'scp -r'.


If you're piping the output of tar, then instead of specifying "f -" you can leave off the f argument entirely (since you don't want to specify a file anyway) and tar will default to stdin/stdout:

    $ tar cz foo | ssh remote "cd /where/to/unpack && tar xz"


For GNU tar these days that's true, provided you can be sure $TAPE isn't set in its environment. Historically, tar defaulted to a tape device such as /dev/mt0, and you still find vestiges of that: OpenBSD's tar, for instance, defaults to /dev/rst0.

IOW, specify "f -". :-)


You may not need the z either since it's common to configure ssh to do compression.

Or use ssh -C
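
For example, either of these should work (same placeholder host and paths as in the examples above):

  $ tar c foo | ssh -C remote "cd /where/to/unpack && tar x"

or put it in ~/.ssh/config so every connection to that host is compressed:

  Host remote
      Compression yes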


Is it possible to pipe something into an already-opened ssh connection?


Using named pipes (mkfifo) I suspect you could do that. I've not tried it in practice, and there will be some warts to work around: the password prompt comes to mind.
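
Something like the following sketch might do it with key-based auth (untested; the fifo and remote paths are made up, and the password-prompt wart above still applies):

  $ mkfifo /tmp/sshpipe
  $ ssh remote 'cat > /tmp/received' < /tmp/sshpipe &  # blocks until a writer opens the fifo
  $ exec 3> /tmp/sshpipe   # hold the write end open so the connection stays up
  $ tar cz foo >&3         # ...later, send data through the existing connection
  $ exec 3>&-              # closing the last writer sends EOF and lets ssh exit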


The most fun I ever had was doing exactly this, piping a stream through ssh, but on the one end was a CD image and on the other end a CD burner. It's kinda obvious that you can do that, since pipes and ssh are ubiquitous on UNIX, but I still couldn't stop giggling.
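
Presumably something roughly like this (a loose reconstruction, not the commenter's actual command: the host, image path, device, and the exact tsize syntax are guesses; the size has to be given up front because cdrecord can't seek in a pipe):

  $ SIZE=$(ssh remote stat -c %s image.iso)   # image size in bytes (GNU stat)
  $ ssh remote cat image.iso | cdrecord -v dev=/dev/sr0 tsize=$SIZE -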



