Article: The PHP Scalability Myth
Subject: Two-tier vs. three-tier
Date: 2003-10-17 20:09:46
From: tarrant
Response to: Two-tier vs. three-tier

Sorry, but I can't imagine anyone using DNS-based load balancing when you can do it with Squid sitting in front of the web servers, or better yet, with a hardware load balancer like Big/IP. With DNS round-robin you get the rotation, but you get no protection against machine failure.
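
For what it's worth, the failure-protection gap is visible from the client side. Here's a minimal sketch (plain Java, with www.example.com as a placeholder hostname) that just prints every A record a round-robin zone hands back; nothing in the answer tells you which of those boxes is actually alive:

import java.net.InetAddress;

// With DNS round-robin, every A record keeps being handed out whether or not
// the machine behind it is up; there is no health check in the protocol.
public class RoundRobinLookup {
    public static void main(String[] args) throws Exception {
        for (InetAddress addr : InetAddress.getAllByName("www.example.com")) {
            // A dead web server's address still shows up here until someone
            // edits the zone and the old TTL expires on every resolver.
            System.out.println(addr.getHostAddress());
        }
    }
}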


Your assumption in (4) isn't valid either, in my experience. How long does a page normally take to respond? If it takes more than 0.5 seconds, users start getting antsy; more than 2 seconds and they get genuinely agitated. Meanwhile, on top of that 2 seconds of computation, you've still got 8-30 seconds of image and CSS downloading, depending on modem speed. Now, it's true that most of the total CPU time goes to business logic or the database, but on a 0.5-second page you can't afford the latency of marshaling a request, sending it over the LAN, unmarshaling it, marshaling the response, sending that back over the LAN, unmarshaling the response, and repeating all of that for every remote method call. It's too much overhead.
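
That per-call plumbing adds up quickly. Here's a minimal, hypothetical sketch that uses plain Java serialization as a stand-in for whatever RMI/EJB marshaling you'd really have; OrderRequest, the 20-call count, and the 0.5 ms LAN round trip are all assumptions, not measurements from any real site:

import java.io.*;

public class MarshalingCost {
    // Made-up request object; a real one would carry whatever the page needs.
    static class OrderRequest implements Serializable {
        String customerId = "c-1234";
        int[] itemIds = new int[50];
    }

    static byte[] marshal(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    static Object unmarshal(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        int calls = 20;                 // assumed remote calls on one page view
        double lanRoundTripMs = 0.5;    // assumed LAN latency per round trip

        long start = System.nanoTime();
        for (int i = 0; i < calls; i++) {
            unmarshal(marshal(new OrderRequest()));   // request out, decoded on the "server"
            unmarshal(marshal(new OrderRequest()));   // response back, decoded on the "client"
        }
        double marshalMs = (System.nanoTime() - start) / 1_000_000.0;
        double networkMs = calls * lanRoundTripMs;
        System.out.printf("%d calls: %.1f ms marshaling + %.1f ms LAN = %.1f ms of pure overhead%n",
                calls, marshalMs, networkMs, marshalMs + networkMs);
    }
}

Against a 500 ms response budget, even a small per-call cost multiplied by a couple dozen calls is a noticeable slice; keep the calls in-process and that slice disappears.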


I don't know about you, but I've got six years of solid experience specifically architecting, building, and maintaining high-traffic e-commerce sites, sites with 1K-20K visitors per hour. My experience is that presentation should be separated logically from business logic, but not physically. I still believe in a "3-tier architecture", but tier 1 is a caching/proxy layer, tier 2 is presentation plus business logic (i.e., the web server), and tier 3 is the database:


loadbalancer <--> cache/squid <--> apache/tomcat/whatever <--> database
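
To make "logically separated but not physically separated" concrete, here's a minimal sketch under assumed names (CatalogService and ProductPage are invented for illustration): the presentation code and the business logic are distinct classes, but they share one JVM on tier 2, so calling between them is an ordinary local method invocation with no marshaling and no LAN hop:

import java.util.Arrays;
import java.util.List;

class CatalogService {                      // business logic, same process
    List<String> topSellers(int limit) {
        // In a real site this would query tier 3, the database.
        List<String> all = Arrays.asList("widget", "gadget", "gizmo");
        return all.subList(0, Math.min(limit, all.size()));
    }
}

public class ProductPage {                  // presentation, same process
    public static void main(String[] args) {
        CatalogService catalog = new CatalogService();   // a local object, not a remote stub
        StringBuilder html = new StringBuilder("<ul>");
        for (String name : catalog.topSellers(2)) {
            html.append("<li>").append(name).append("</li>");
        }
        html.append("</ul>");
        System.out.println(html);            // the fragment the web server would send out
    }
}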


It's the architecture that has always worked best for me, and no matter what the starting point is on a new project, we always seem to gravitate toward this solution. The cache/Squid layer caches static content like images, but it also buffers connections, freeing up the web server sooner when someone is downloading a big page over a 9600 bps modem.
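
To put rough, assumed numbers on that buffering point: streaming a 100 KB page straight to a 9600 bps modem ties the connection up for about 100,000 * 8 / 9600, call it 83 seconds, while handing the same 100 KB to Squid over a 100 Mbps LAN takes on the order of 10 milliseconds. With the proxy absorbing the slow client, the Apache/Tomcat worker goes back into the pool thousands of times sooner.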