Using Transmit to Back Up Your Referenced Masters

by Micah Walter

I’ve written in previous posts a bit about Amazon’s S3 storage service. So far I have been pretty happy with the service, but the interface leaves a little to be desired. I tried using JungleDisk together with a backup program like Synk Pro to move my Aperture Referenced Masters to S3, but I have run into a number of problems with this routine. On paper it seems to work just fine, and in fact, for small numbers of images it works essentially flawlessly. But one thing JungleDisk does that has been causing me problems is its method of caching files: it lets you set a cache, and when you upload images JungleDisk writes whatever it can to the cache and then begins the process of uploading.

In theory this is a nice feature, since you can stop the process and pick it up again later, but it can cause problems when uploading a large amount of data. When the cache fills up, the Synk operation can get interrupted; it doesn’t always happen, but when it does, the backup fails and I have to start all over again.
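
To illustrate the mechanism, here is a rough sketch of the staged-upload pattern in Python. This is purely my own illustration under assumed names (the bucket, cache path, and size cap are all made up), not JungleDisk’s actual code: files get copied into a size-capped local cache, and uploads drain the cache behind the scenes. If the cache fills faster than it drains, the whole pipeline stalls.

```python
import os
import shutil
import boto3  # AWS SDK for Python; any S3 client would do

s3 = boto3.client("s3")
BUCKET = "my-aperture-masters"   # hypothetical bucket name
CACHE_DIR = "/tmp/upload-cache"  # hypothetical local staging area
CACHE_LIMIT = 2 * 1024**3        # say, a 2 GB cache cap

def cache_size():
    """Total bytes currently staged in the cache directory."""
    return sum(os.path.getsize(os.path.join(CACHE_DIR, f))
               for f in os.listdir(CACHE_DIR))

def flush_cache():
    """Upload everything staged so far, then empty the cache."""
    for name in os.listdir(CACHE_DIR):
        staged = os.path.join(CACHE_DIR, name)
        s3.upload_file(staged, BUCKET, name)
        os.remove(staged)

def staged_upload(paths):
    """Copy files into the cache, draining it whenever it fills."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    for path in paths:
        # If the next file won't fit, everything waits on the flush;
        # in my experience this is where a long Synk run can die.
        if cache_size() + os.path.getsize(path) > CACHE_LIMIT:
            flush_cache()
        shutil.copy(path, CACHE_DIR)
    flush_cache()
```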

JungleDisk also uses a fairly interesting method of storing your files. Instead of just creating folders and files in your S3 bucket, JungleDisk creates a flattened directory structure, using the folder names you create as part of the filename. That works perfectly well as long as you always use JungleDisk to interface with your S3 account, but once you try to connect with some other application, things can get confusing.
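
For background, S3 has no real folders at all; every object lives in a flat namespace under a key that may happen to contain slashes, which most clients then render as a directory tree. A minimal sketch in Python with boto3 (the bucket name and keys are hypothetical) shows the difference between slash-style keys and the kind of flattened names JungleDisk appears to write:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-aperture-masters"  # hypothetical bucket name

# Slash-style key: S3 stores it as one flat string, but clients like
# Transmit render the slashes as a browsable folder hierarchy.
s3.put_object(Bucket=BUCKET, Key="2007/vacation/IMG_0001.CR2", Body=b"...")

# JungleDisk-style flattened key (the exact scheme is JungleDisk's
# own; this is just the general idea): the folder names are baked
# into the object name itself, so other clients see one long
# filename instead of a tree.
s3.put_object(Bucket=BUCKET, Key="2007_vacation_IMG_0001.CR2", Body=b"...")
```

This is why a bucket written by JungleDisk can look odd in another client: instead of a browsable hierarchy you see long composite filenames.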

So, I am experimenting with Panic’s Transmit. Transmit has been around forever and has served me well as an FTP program for a long time. Now that it supports Amazon S3, I have yet another use for this fine program. What’s more, Transmit offers additional features such as .Mac preference syncing and built-in Automator actions.

[Screenshot: transmit.png]

I set up my Amazon account and saved it as a preference. I created a new “bucket” and then made some sub-folders. I pointed to the folder of pictures I wanted to upload and clicked Synchronize. I was given a number of options for how the sync should behave, and then it was off and running. Transmit seems to write the files one at a time, copying them from my network drive to the laptop and then uploading them to S3. It is going fairly slowly, but it seems to be working without any problems.
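
For anyone curious what this setup step amounts to at the S3 level, the equivalent raw calls look something like the sketch below (Python/boto3 again, with a hypothetical bucket name; Transmit’s internals are its own):

```python
import boto3

s3 = boto3.client("s3")

# Create the bucket (this works as-is in the default us-east-1
# region; other regions need an explicit CreateBucketConfiguration).
s3.create_bucket(Bucket="my-aperture-masters")

# S3 "sub-folders" are just key prefixes. Many clients create a
# zero-byte object ending in "/" so the prefix shows up as a folder.
s3.put_object(Bucket="my-aperture-masters", Key="referenced-masters/", Body=b"")
```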

It would be really nice to eventually get everything up on S3 and then be able to just run a sync for any newly added files. That would keep a really nice archive of my Master image files on a geographically redundant server out there in the ether. Of course, an Aperture plugin might be a nice idea as well! Hint, hint....
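
To make the incremental idea concrete, here is roughly what such a sync boils down to, sketched in Python with boto3. The bucket and local path are placeholders of mine, and a real tool like Transmit presumably compares dates and sizes rather than names alone: list what the bucket already holds, then upload only what’s missing.

```python
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "my-aperture-masters"           # hypothetical bucket name
LOCAL_ROOT = "/Volumes/Photos/Masters"   # hypothetical masters folder

# Collect the keys that already exist in the bucket, paging through
# the listing since S3 returns at most 1,000 keys per request.
existing = set()
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        existing.add(obj["Key"])

# Walk the local masters folder and upload anything S3 doesn't have,
# using the path relative to the root as the object key.
for dirpath, _, filenames in os.walk(LOCAL_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        key = os.path.relpath(path, LOCAL_ROOT)
        if key not in existing:
            s3.upload_file(path, BUCKET, key)
            print(f"uploaded {key}")
```

Run periodically, something along these lines would maintain the off-site archive described above, moving only the newly added masters on each pass.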

1 Comment

SteveH
2007-11-16 10:07:41
After the first article I tried the S3 approach as a way of backing up Aperture data as a vault, but the initial transfer was always too slow on broadband. In the end, after trying it overnight, I backed away slowly and then used Transmit and Interarchy to clear the mess off the S3 servers. Hours of playing about, but ridiculously cheap at the end of the day.

It's worth highlighting that Interarchy also includes S3 capability (it predates Transmit's) and is pretty good for managing data stored there.

In the end I decided S3 was only good for smaller amounts of data (at least on broadband) and for transferring projects if they are made public for download.