Bob's Notepad

Notes on projects I have done and things I have learned saved for my reference and for the world to share

Tuesday, April 22, 2008

Using sudo on a remote rsync session (via ssh)

I have been using rsync to back up almost a dozen servers for years now, and I am convinced that it is the best solution for remote backups. A couple of months ago I ran into a situation where I needed to rely on one of those backups, and everything went as expected... well, sort of. All of my data was there, and I was able to get things back up and running on a new server in only a few hours, but it would have been much quicker if the permissions and file ownership had been preserved. Once I got the system back up and running, I wanted to make sure the backup process would start preserving file permissions and ownership. I found that to accomplish this, the files absolutely have to be written to the remote server as root. Of course, logging in directly as root is a security concern. The solution was to let the rsync process run as root through sudo.

Step 1:
On the server that is receiving the backups, you need to add the following line to the /etc/sudoers file (according to Johannes in the comments, this needs to be the last line -- thanks). It's safest to edit the file with visudo, which checks the syntax before saving:

  • username ALL= NOPASSWD:/usr/bin/rsync

You will, of course, want to replace "username" with the user that the sending server will be logging in as during the rsync process.

Step 2:
Make sure that your rsync command uses the -a flag (which preserves permissions, ownership, and timestamps, among other things), and then use the --rsync-path flag to run the remote rsync process via sudo. Here is an example command line:

  • rsync -av -e "ssh" --rsync-path="sudo rsync" /source/ user@server.com:/destination/

You're all set. You can combine this with automated SSH login keys. Also, I want to note that this can compromise security in some scenarios: a user with passwordless sudo access to rsync can effectively read and write files anywhere on that system.
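Tying the pieces together, the backup can then be scheduled from cron on the sending server. A hypothetical crontab entry (the key path, user, and host are made-up examples) that runs the backup nightly at 2:30 AM:

```
30 2 * * * rsync -a -e "ssh -i /home/backupuser/.ssh/backup_key" --rsync-path="sudo rsync" /source/ user@server.com:/destination/
```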




Tuesday, November 20, 2007

Using lock files in a script

When dealing with automated rsync backup scripts, you sometimes run into the issue of a large transfer taking longer than expected and cron launching another rsync before the first one finishes -- eventually spiraling into a pile of competing processes. The way to resolve this is to create a lock file so that your script doesn't launch again if it is already running. Just put the following in your script below the #!/bin/sh line:

LOCKFILE="/var/lock/rsync/lockfile"
if [ -f "$LOCKFILE" ]
then
    echo "Lock file exists...Exiting..."
    exit 0
else
    touch "$LOCKFILE"
fi


This declares where the lock file lives (it can be any path, as long as the user running the script can read and write it). The script then checks whether the lock file exists and exits if it does. If it doesn't exist, the script creates it and carries on.

All that's left is to add the following to the end of the script so that it cleanly deletes the lock file after running:

rm -f "$LOCKFILE"
exit 0
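One caveat with the snippet above: if the script crashes between the check and the cleanup, the stale lock file blocks every future run, and two runs starting at exactly the same moment can both pass the `-f` test before either calls touch. A sketch of a slightly more robust variant (the lock path here is a made-up example) uses mkdir, which tests and creates the lock in one atomic step, plus a trap so the lock is removed even if the script dies partway through:

```shell
#!/bin/sh
# Hypothetical lock path; adjust for your system.
LOCKDIR="/tmp/rsync-backup.lock"

# mkdir either creates the directory and succeeds, or fails because it
# already exists -- check and creation happen in one atomic step.
if ! mkdir "$LOCKDIR" 2>/dev/null
then
    echo "Lock exists...Exiting..."
    exit 0
fi

# Remove the lock on any exit, including failures and interruptions,
# so a crashed run can't block future backups.
trap 'rmdir "$LOCKDIR"' EXIT INT TERM

# ... rsync commands go here ...
```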


Thanks to @linuxchic for helping out :)




Monday, October 22, 2007

Automated SSH login using keys

I always seem to forget how to do this when I need to. When setting up an automated backup, you obviously don't want the script to ask for a password, so you set up an SSH key pair instead.

Machine sending the backups (must be logged in as the user that will be doing the backups):

  • ssh-keygen -t rsa -b 2048 -f /any/directory/filename


Then you copy the resulting filename.pub file (NOT the private key file with no extension) over to the receiving machine, into the authorized_keys file in the .ssh directory of the user that will receive the backups. If the authorized_keys file doesn't exist, just rename the file you copied to authorized_keys... if it does exist, append the contents of filename.pub to it.
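The copy/append decision above can be sketched locally like this, played out against a temporary directory standing in for the receiving user's ~/.ssh (the key content is a made-up placeholder, not a real public key):

```shell
#!/bin/sh
# Temporary stand-in for the receiving user's ~/.ssh directory.
SSHDIR="$(mktemp -d)"
PUBKEY="$SSHDIR/filename.pub"
echo "ssh-rsa AAAAexamplekey backupuser@sender" > "$PUBKEY"

# If authorized_keys doesn't exist yet, the copied .pub simply becomes
# authorized_keys; if it does exist, append to it instead.
if [ -f "$SSHDIR/authorized_keys" ]
then
    cat "$PUBKEY" >> "$SSHDIR/authorized_keys"
else
    cp "$PUBKEY" "$SSHDIR/authorized_keys"
fi

# sshd ignores authorized_keys files with loose permissions.
chmod 600 "$SSHDIR/authorized_keys"
```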

If you're using rsync, use this command:

rsync -e 'ssh -i /any/directory/filename' source/ user@host:/destination/

