How are you syncing files across systems?
by Brian K. Jones
First, there's NFS. There are numerous places out there that have a central file server, and then the server farm mounts, say, /opt or /usr/local or something, with lots of configuration files and such underneath those trees. The benefit of this method is that you can make a change in one place and have it take effect everywhere more or less immediately. The downside, as I see it, is something I call the "Christmas light syndrome": if the file server goes down, any services on any hosts relying on the mounted files become unavailable.
There's another upside to the NFS scenario, which is that your server farms can mount config files read-only, which offers some protection should the machine be compromised in some way.
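As a rough sketch of that read-only setup (the server name, client pattern, and paths here are all hypothetical), the export on the file server and the corresponding mount on a farm host might look like:

```
# /etc/exports on the file server -- hand the config tree out read-only
/export/config   farm-*.example.com(ro,root_squash)

# on each farm host (or the equivalent line in /etc/fstab):
mount -t nfs fileserver.example.com:/export/config /usr/local/config -o ro
```

Even if a farm host is compromised, the attacker can't use it to alter the config files that every other host mounts.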
The second popular method is to use rsync. The upside is that all files are local to the machine, so services on your hosts don't depend on the availability of a file server. The downside is that there's generally some glue code and duct tape involved, which means you're maintaining code to take care of all of this, and there's no standard procedure per se for handling file synchronization with rsync. In addition, you don't get the benefit of having your config directories mounted read-only, so that's one less protective measure.
The third method is cfengine, which is still hanging on my list of things to make friends with. I tried using it during the version 1.x days, when it was quite a bit more difficult to use. I'm aware that the 2.x versions are much, much easier, more robust, support RSA keys, and all that jazz, and I promise that once I get through the three projects currently on my plate, cfengine is number 4.
If you're using other means of handling file synchronization, or you just wanna plug your favorite feature of cfengine, or have a cool rsync hack or something, please share!
|I had fun with NFS a couple of years ago... Got this idea from AFS, where you can mount your last backup read-only. I wanted a live backup volume on a 2nd server to mirror a group server. If the server were to lose its disk, just pull the backup user disk from the backup server, shove it in the server, and boot. The backups needed to maintain permissions and ownerships, but knowing my users, they would see the free space on the backup server and start using it for work. The script I came up with did an rsync to the backup server. The backup disk was mounted under a tree only readable by root. Then I used NFS to export the volume read-only back to an area that users could see. This way, when they accidentally nuke some critical file, they ssh to the server and grab the backup.|
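A sketch of the scheme that comment describes (hostnames and paths are hypothetical): rsync into a root-only tree on the backup server, then export that tree back read-only over NFS.

```
# nightly, on the backup server: mirror the group server's user disk,
# preserving ownership and permissions; /backup is mode 0700, root-only
rsync -a --delete groupserver:/home/ /backup/home/

# /etc/exports on the backup server: hand the tree back read-only
/backup/home   groupserver(ro,root_squash)

# on the group server: users can browse and copy files out of
# /mnt/backup, but can't write to it (or "work" out of it)
mount -t nfs backupserver:/backup/home /mnt/backup -o ro
```

The root-only staging directory plus the read-only export is what keeps users from treating the backup volume as free scratch space.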
|I've been using Subversion, with a central server and automated checkouts on the client machines. The clients don't have upload access, so it is a pull-only system, and by running periodic diffs I can see if a machine has had its local files modified and roll back to a known good state. There's still a fair bit of scripting overhead, but I like the control and auditing abilities that it gives me.|
I'm not a sysadmin (yet), so I've never managed a server farm. I also don't know which files necessarily need to be synchronized, but I was thinking the other day about how to synchronize config files across multiple servers and keep historical information about them.
What I need to sync are websites and email, for backup purposes, so I wrote a small script, put it in cron, and it has been working OK for some time now.
For my needs, rsync does the job perfectly.
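A cron setup like the one described might look something like this (the paths and the backup host are hypothetical):

```
# crontab fragment -- nightly one-way backups over SSH
# m  h  dom mon dow  command
30  2  *   *   *     rsync -a --delete /var/www/   backuphost:/backups/www/
0   3  *   *   *     rsync -a          /var/mail/  backuphost:/backups/mail/
```

Note the asymmetry: --delete keeps the website mirror exact, while leaving it off for mail means a message deleted locally still survives in the backup.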
|Jeremy, Your idea is a good one. So good in fact that sysadmins have been doing it for years. :-) They call it configuration management and used CVS before Subversion came along.|
|Unison is also a powerful alternative to rsync: http://www.cis.upenn.edu/~bcpierce/unison/|
I've been using Unison over SSH to sync my ~/bin (and a data directory) between my home Linux server and my work desktop running Cygwin. Unlike rsync, it does two-way syncs.
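For reference, a two-way sync like that boils down to a single unison invocation (the hostname and remote path here are hypothetical):

```
# sync the local ~/bin with the copy on the home server, over SSH;
# changes on either side are propagated, and conflicting edits are
# flagged for the user rather than silently overwritten
unison ~/bin ssh://homeserver//home/me/bin
```

That conflict handling is the key difference from rsync, which always treats one side as authoritative.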
It's pretty much Linux-only, but I have used an NBD (network block device) patch for the Linux kernel.
|I make heavy use of the copy action, relying on md5 checksums to determine when a copy is appropriate and activating shellcommands classes when the copies take place to take care of restarting services. It's been a *lifesaver* for our company.|
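For anyone who hasn't seen it, the copy-plus-class pattern that comment describes looks roughly like this in a cfengine 2 config (the file names, master server, and class name are hypothetical):

```
control:
   actionsequence = ( copy shellcommands )

copy:
   # pull httpd.conf from the master, comparing md5 checksums so the
   # copy (and the restart below) only happens when the file changed
   /masterfiles/httpd.conf
        dest=/etc/httpd/conf/httpd.conf
        server=cfmaster.example.com
        type=checksum
        define=restart_httpd

shellcommands:
   # runs only when the copy above defined the restart_httpd class
   restart_httpd::
      "/etc/init.d/httpd restart"
```

The define=/class mechanism is what ties the file change to the service restart without any external glue scripts.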
|I wrote a huge write-up on this (http://www.djlosch.com/post_retrieve.php?pid=106). However, my scope is more of a cross-device sync. I expect this to become a much bigger issue as phones become handhelds (many functions) rather than single-taskers. My design uses something more like rsync than roaming profiles (a.k.a. central-server profiles) because with portable devices, there is no guarantee that the server will be available when $HOME is requested.|
|Well, I am a newbie at all this server stuff. I have a task in which I need to cross-mount the data of 3 servers, providing a single login that can access the data on all 3. I want to know how I can do this.|