Originally posted by Platypus
Odd question, but why exactly are you trying to squeeze that much out of rsh and the stack?
Surely there have to be more efficient ways to do this (Fibre Channel or similar SANs spring immediately to mind, as do direct mirroring and clustering).
tar and the Berkeley r-command side of the stack are always going to be a Heath Robinson affair as applied here. To take your example:
$ tar cf - mydir | rsh remotehost tar xf -
Assuming Berkeley / UCB tar, this streams ./mydir to stdout, pipes it through rsh, and extracts from stdin on the remote machine...
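(If you really must stream it, at least run the pipe over ssh so you get encryption on the wire, a sane exit status, and a proper destination directory. Rough sketch only, assuming GNU or BSD tar on the receiving end; remotehost and /dest/path are placeholders:

$ tar cf - mydir | ssh remotehost 'tar xpf - -C /dest/path'

Add z on both sides for gzip if the CPUs can keep up, though at 7TB over a fast link the compression usually costs more than it saves.)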
You're aware that this will essentially be Epic Fail as well as grossly inefficient? A single unencrypted rsh stream gives you no resume, no integrity check and no throttling, so one hiccup means starting the whole transfer again. 7TB warrants a backup network, mirror or cluster.
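Back of the envelope, assuming a single dedicated GigE link and roughly 110MB/s of usable throughput (both of those are assumptions, plug in your own numbers):

$ echo $(( 7 * 1024 * 1024 / 110 / 3600 ))   # 7TB in MB, divided by MB/s, divided by seconds per hour
18

Call it 18-19 hours of flat-out transfer before you account for retries, filesystem overhead or anything else sharing that wire; on a shared 100Mbit segment you're looking at over a week.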
Save yourself the hassle and get a second opinion from an HACMP / clustering expert, or purchase decent backup software and a secondary fabric.
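If new kit genuinely isn't on the cards, rsync over ssh is at least restartable, which a one-shot tar pipe is not. Sketch only; the host and paths are placeholders:

$ rsync -aH --partial mydir/ remotehost:/dest/path/

Kick it off again after a dropped link and it skips what already made it across instead of starting the 7TB from scratch.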
Alternatively, install wget and make / readable by the target uid:gid, and be removed from an ops capacity :-)
Cheers,
PDCCH