Let's kill off FTP

by Chris Josephes

About once a week, a ticket works its way down the grapevine: a user can't upload a file by FTP. It's always a different user, but the problem is the same. It isn't the frequency of the problem that's frustrating; it's tracking down the source.

What are their firewall settings? What are our firewall settings? Are they coming across the VPN? Is it an active or passive data connection? Is their FTP script sending the wrong password again? Are they hitting the proftpd server or the wu-ftpd server? Hey, this FTP daemon log doesn't report any connection details!

In the end, there's always a different root cause. Maybe it's a NAT translation failure, or a load balancer that needs to be rebooted. Next week it will be a totally different scenario, and a totally different solution.

FTP is an antiquated protocol, designed more than 30 years ago to work around the shortcomings of early networks. In my humble opinion, it's time to give up and let this protocol die. There are other protocols out there that can do the job just as well, if not better. Here are the top contenders:

SCP/SSH. Secure authentication, and data encryption to boot. SCP is available on almost every platform, and there are even a few decent SCP GUI clients. The only downside is that few tools can script SCP uploads on a Windows host.

SVN/CVS/Other. Okay, these protocols are a little restrictive. They allow file uploads and downloads, but their main function is version control. None of them will be helpful if you're trying to perform simple file transfer operations outside of a centralized repository.

Jabber. Not a popular choice for file transfer, but something to keep in mind. It would be incredibly easy to set up peer-to-peer file exchanges in client/server or client/client environments.

HTTP. This should be the obvious choice; I know it's my personal favorite. HTTP has authentication mechanisms built into the protocol, SSL is available, download recovery is possible, and servers can present fancy HTML interfaces for uploads and downloads through a web page. Plus, almost every programming and scripting language out there has an HTTP client library to facilitate scripted actions.
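Scripting an upload really is a few lines in most languages. A minimal sketch in modern Python's standard library (the URL, file name, and credentials here are made up for illustration):

```python
import base64
import urllib.request

def make_upload_request(url, data, user, password):
    # Pre-compute a Basic Authorization header so the upload can run
    # non-interactively from a script (hypothetical credentials).
    token = base64.b64encode(f"{user}:{password}".encode()).decode("ascii")
    req = urllib.request.Request(url, data=data, method="PUT")
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Content-Type", "application/octet-stream")
    return req  # send with urllib.request.urlopen(req)

req = make_upload_request("http://files.example.com/report.txt",
                          b"quarterly numbers", "alice", "s3cret")
print(req.get_method())  # PUT
```

The same pattern works from cron jobs or build scripts, which is exactly where FTP upload scripts tend to live today.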

Probably the only thing that doesn't work well is HTTP upload through a web browser. They will upload files, but web clients like Firefox (and others) don't bother to report upload statistics like bytes sent or time remaining. Anyone uploading a file is stuck in feedback limbo: is my file being sent? The blue E is just spinning.

If somebody out there develops an extension or program to make HTTP uploads easier, they will have my immediate gratitude. I'll email the program's webpage to every remote user I have, with a side note saying that they will no longer need to send me their entire firewall configuration to debug problems in the future.

14 Comments

Carey Evans
2006-09-28 02:17:01
There's WebDAV, which is built in to Windows and Mac OS X for the client, and Apache and IIS for the server. All you have to worry about is proxies that don't believe that PROPFIND is a real method, and SSL encryption works very well to fix that.


It's easier to do a PUT from most programming languages than a correct multipart file upload, too.
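To illustrate the point, here is a self-contained sketch in Python's standard library: a throwaway local server that accepts PUT, and a client that uploads to it in one call (the path and payload are made up):

```python
import http.client
import http.server
import threading

received = {}

class PutHandler(http.server.BaseHTTPRequestHandler):
    def do_PUT(self):
        # Read exactly Content-Length bytes and stash them by path.
        length = int(self.headers["Content-Length"])
        received[self.path] = self.rfile.read(length)
        self.send_response(201)  # 201 Created
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), PutHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("PUT", "/upload/report.txt", body=b"hello, dav")
resp = conn.getresponse()
print(resp.status, received["/upload/report.txt"])
server.shutdown()
```

No boundaries, no encoding step: the request body is the file, which is why PUT is so much easier to get right than multipart.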

Chris Josephes
2006-09-28 04:37:34
Yes, I didn't mean to overlook DAV; I tend to just lump it into HTTP since it's a protocol extension.


DAV has a lot of potential to make uploads easier, but I think some of the complexities of the protocol (in contrast to the HTTP PUT method) make some web developers fall back to using FTP for file management.

Matthew Sporleder
2006-09-28 06:07:52
What are our firewall settings? Are they coming across the VPN? Is it an active or passive data connection? Is their FTP script sending the wrong password again? Are they hitting the proftpd server or the wu-ftpd server? Hey, this FTP daemon log doesn't report any connection details!


Although I agree that ftp is an old and annoying service to run (I hate smtp/pop for similar reasons), the problems you describe mostly sound like environment issues. e.g. Your fault. ;)

Terry Laurenzo
2006-09-28 07:52:50
Agreed, but inertia is a powerful force to combat. For Windows clients, I usually put a client like FileZilla on their PC and then quietly set it to connect with SSH/SFTP.

2006-09-28 07:59:37
FTP may be old, but there are some problems with the suggested alternative protocols:


SCP/SSH is for secure transmission of data; often you don't need this for simply transferring a file, so the overhead of the encryption is costly. Also, FTP is often used 'anonymously' (e.g. you can download or upload files without needing a username or password); by design, SCP/SSH don't allow this.


Revision control systems: they qualify as protocols, okay, but it's a real stretch to see them being good FTP replacements.


Jabber: Optimised for chat applications I'd imagine; don't know much about this.


HTTP: The best of a bad bunch. Downloading using HTTP is fine and well-supported. However uploading, which is what FTP is often used for, is not great with HTTP; not because of poor client support as stated in the article (implementers could easily add displays of the number of bytes sent etc.), but because of the way HTTP handles multiple files. An HTTP 'upload' may contain several files. Each file is separated into its own section in the stream of data that constitutes the HTTP upload. Once this stream has been received, the server has to parse all of the bytes uploaded by the client to work out how many files were in the upload and where they start and end. In the implementations I've seen, this is quite inefficient for large files (which is what an HTTP replacement would have to cope with).
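The parsing step described above can be seen in miniature with Python's standard library: below is a hand-built two-file multipart/form-data body (boundary and file names made up), which the server side can only unpack by scanning the whole stream for boundary markers:

```python
import email
import email.policy

boundary = "XyZboundary"
# A two-file multipart/form-data body, roughly as a browser would send it.
body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="f1"; filename="a.txt"\r\n'
    "\r\nfirst file\r\n"
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="f2"; filename="b.txt"\r\n'
    "\r\nsecond file\r\n"
    f"--{boundary}--\r\n"
)

# The server cannot know where files start and end without scanning
# every byte for the boundary; email's MIME parser does that scan here.
msg = email.message_from_string(
    f'Content-Type: multipart/form-data; boundary="{boundary}"\r\n\r\n' + body,
    policy=email.policy.default,
)
files = {part.get_filename(): part.get_content() for part in msg.iter_parts()}
print(sorted(files))
```

For a 2 GB upload, that boundary scan touches every byte after the transfer finishes, which is the inefficiency the comment is pointing at.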


Of course, it might be just that I've seen poor implementations, and that there is little overhead in parsing the data.


My experience with WebDAV hasn't been great either; the servers I've used have tended to be very unstable.


2006-09-28 08:01:36
[Edit: "... which is what an *FTP* replacement would have to cope with"]
Ivo Emanuel Gonçalves
2006-09-28 08:34:10
Your Jabber suggestion is quite interesting. I've thought of it myself before as well, though at the time it was as a replacement for Direct Connect, not FTP, but hey, it can do that too.


I hope a friendly dev would like to start a project like this involving Jabber just to see how far it could go. With some luck, it could become part of the official Jabber scheme.


However, I don't agree with your HTTP suggestion. I don't think HTTP is the right tool for the job; though I might be wrong, I stand by my reasoning for now.

Chris Josephes
2006-09-28 08:36:19
the problems you describe mostly sound like environment issues


Is your recommendation then that firewalls or other forms of perimeter security on both sides of the transfer be turned off?


FTP is the only file transfer protocol out there that opens a separate data channel for transfers. Depending on the method used, the data connection is initiated by either the server or the client.
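The passive-mode handshake shows why this is a firewall headache: the server advertises an arbitrary host and high port inside a 227 reply, which the client must parse and then open a second connection to. A sketch, with a made-up reply string:

```python
import re

def parse_pasv(reply):
    """Extract (host, port) from a 227 reply like
    '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)'.
    The port is p1*256 + p2; the server picks it arbitrarily,
    which is exactly what firewalls and NAT devices struggle with."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    h1, h2, h3, h4, p1, p2 = map(int, m.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

print(parse_pasv("227 Entering Passive Mode (192,168,1,10,19,137)"))
# ('192.168.1.10', 5001)
```

A NAT device has to rewrite that embedded address in-flight, and an encrypting wrapper makes that impossible, which is why FTP over SSL is such a poor fit for firewalled networks.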


Every other protocol I listed works much better in firewalled and NAT environments.

MenTaLguY
2006-09-28 16:18:37
For what it's worth, pscp, which is part of the PuTTY suite of ssh tools, works just fine for automating scp uploads/downloads on Windows.
MenTaLguY
2006-09-28 16:26:06
That is, pscp works for shell scripting. CPAN has a Net::SCP if you're working in Perl, and I'm sure there are similar modules for other scripting languages.
casey
2006-09-28 20:12:23
My vote's for SSH. Nothing like a tarpipe over SSH for on-the-fly recursive file transfer!


tar cvf - some_dir | ssh me@somewhere.else "cd wherever; tar xvf -"


I guess rsync and unison might also be contenders, though.

bengt
2006-09-29 04:11:23
Funny you should mention this problem now. Here's another solution, using the Erlang shell:
http://armstrongonsoftware.blogspot.com/2006/09/why-i-often-implement-things-from.html

2007-03-01 05:59:46
Nice article/rant, but you're missing the protocol that offers the most promise: rsync. It solves all the problems of FTP by providing firewall-friendly single-channel transfers, and even offers some of its benefits (chrooted virtual users are supported in daemon mode). The Achilles' heel? It has no built-in SSL support. That is the only thing it needs to be an FTP killer (besides widespread client integration, of course). Yes, I know, rsync works great with ssh, and I use it every day, but an FTP replacement that requires a shell is dead in the water, as far as admins like me are concerned (and FTP/SSL is not firewall-friendly because the firewall can't inspect encrypted packets, so don't even go there).
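For the daemon-mode virtual users mentioned above, a minimal rsyncd.conf sketch might look like this (the module name, path, and user names are made up; the option names are standard rsync daemon options):

```
# /etc/rsyncd.conf -- minimal daemon-mode module
[dropbox]
    path = /srv/dropbox
    use chroot = yes
    read only = no
    auth users = alice, bob
    secrets file = /etc/rsyncd.secrets
```

Clients then transfer over a single TCP connection (port 873 by default) with commands like `rsync report.txt alice@host::dropbox/`, and "alice" and "bob" exist only in the secrets file, not as system accounts.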
JB
2007-11-07 15:30:14
You are correct, Chris. FTP should be put to rest for good. That protocol should be retired. HTTPS is the only way to go, and the purest method is secure file transfer. Key issues that need direct attention may consist of:
* Provide on-demand large file transfer capability for business users
* Send files and folders
* Eliminate FTP support
* Offload files from email system
* Achieve compliance with SOX, HIPAA, FDA
* Reduce IT support
* Implement audit trail for files
* Increase data security
* Automate file transfer management
* Obtain return receipt for file transfers
* External partner send back

I have heard of a company that can facilitate these types of needs: Accellion is the name. I just noticed a few newsworthy items:


http://www.pcworld.com/businesscenter/article/139350/virtual_file_transfer_appliance_for_small_business.html