This is Part 1 of a two-part article.
Congratulations on landing a contract to build the next Olympics Web site, including hosting the site. The contract requires a personal guarantee that the system will handle the expected visits from 300 million Web users.
Scenarios like this are equally exciting and nerve-racking. Exciting, because the success of a public Web service brings much follow-on work and can build a firm’s reputation. Nerve-racking, because the Web site will include many custom services featuring advanced Web application logic.
How can any business guarantee that the system will perform under real-world conditions?
Many businesses developing and running Web services are turning to intelligent test agent technology to plan data center capacity. Intelligent test agents are the next generation beyond the old hand-coded test suite. The difference? Test suites run through a set of preprogrammed steps to check a Web service for simple connectivity and response times; intelligent test agents model their actions on archetypal users.
An archetype is the original type after which other similar things are patterned. For example, Frankenstein is the archetypal created-by-a-mad-scientist monster. An intelligent test agent modeled after the archetypal impatient young Web services user may not wait for a Web service to complete before canceling and requesting another action. Running several intelligent test agents concurrently drives a Web service to near production environment levels. Test results show a model of the Web service’s ability to perform and scale.
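To make the idea concrete, here is a minimal sketch of an "impatient user" agent. The endpoint, patience threshold and agent count are illustrative assumptions, not from the article; a real agent would target your own Web service and record much richer metrics. Each agent issues a request, abandons it if the service is too slow, and several agents run concurrently to push the service toward production-level load.

```python
# A sketch of an archetypal "impatient user" test agent.
# EXAMPLE_URL, PATIENCE_SECONDS and AGENT_COUNT are hypothetical values.
import concurrent.futures
import time
import urllib.request

EXAMPLE_URL = "http://localhost:8080/service"  # hypothetical endpoint
PATIENCE_SECONDS = 2.0   # the impatient archetype gives up quickly
AGENT_COUNT = 5

def impatient_agent(agent_id):
    """One agent: issue a request; cancel (time out) if it is too slow."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(EXAMPLE_URL, timeout=PATIENCE_SECONDS):
            pass
        return (agent_id, "completed", time.monotonic() - start)
    except Exception:
        # Timeout or connection failure: the impatient user abandons it
        return (agent_id, "abandoned", time.monotonic() - start)

# Run several agents concurrently to approach production-level load
with concurrent.futures.ThreadPoolExecutor(max_workers=AGENT_COUNT) as pool:
    results = list(pool.map(impatient_agent, range(AGENT_COUNT)))

for agent_id, outcome, elapsed in results:
    print(f"agent {agent_id}: {outcome} after {elapsed:.2f}s")
```

Tallying the completed-versus-abandoned outcomes across many such runs is one simple way to model how the service's responsiveness degrades as load grows.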
Intelligent test agent technology is designed to work in a modern Web services environment. Since 1997, Internet users have become accustomed to a set of Internet client software and specific behaviors from a Web service. Customers, partners, employees and vendors all have Internet-enabled browsers, e-mail clients and desktop applications. On the server side, Web services are now typically built around a load-balanced, database-driven framework. This server framework is often called the “Flapjacks” architecture.
The Flapjacks Architecture
The analogy goes: Hungry patrons (Web users) show up for breakfast at your diner. The more that come for breakfast, the more flapjacks (servers) you have to toss on the griddle.
Some of the patrons want banana pancakes, some want blueberry, and many will want to sample several flavors. Just as some users will require access to servers running Java-language servlets, some will need application servers, some will need .NET and SOAP Web services, others will be looking for CGI servers, and most will favor a combination.
All the flapjacks are dished out from the same batter. In this analogy, the batter functions as a single database for persistent storage, search, indexing, etc.
The waitress takes orders (user requests) and passes them to the appropriate cook (say, the blueberry pancake chef), much like a load balancer routes Web browsers to the appropriate server (or an alternate, if necessary).
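The waitress analogy can be sketched in a few lines of code. This is a toy model, not a real load balancer: the pool names, server names and round-robin policy are illustrative assumptions. Requests are routed to the pool matching their type, rotated round-robin within the pool, and sent to an alternate pool when no matching server exists.

```python
# Toy model of the waitress/load-balancer analogy.
# Pool and server names are illustrative, not from the article.
import itertools

pools = {
    "servlet": itertools.cycle(["servlet-1", "servlet-2"]),
    "soap":    itertools.cycle(["soap-1"]),
    "cgi":     itertools.cycle(["cgi-1", "cgi-2", "cgi-3"]),
}
# Alternate servers used when no pool matches the request
fallback = itertools.cycle(["general-1", "general-2"])

def route(request_type):
    """Return the next server for this request type (or a fallback)."""
    pool = pools.get(request_type, fallback)
    return next(pool)

for req in ["servlet", "cgi", "servlet", "soap", "video"]:
    print(req, "->", route(req))
```

Real load balancers add health checks and session affinity on top of this basic dispatch, but the routing decision is essentially the waitress picking the right cook.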
The Flapjacks architecture is very popular. The systems behind eBay, Yahoo and HotMail are based on the Flapjacks architecture. Enterprise application platforms, including Microsoft .NET, IBM WebSphere and BEA WebLogic, recommend the Flapjacks architecture.
The Flapjacks architecture has many benefits to offer. Users tend to get faster performance and shorter waits from the small, inexpensive servers. In an array of small servers, the load-balancing system can keep a group of servers in a state of readiness – threads loaded and running, memory allocated, database connections prepared – so when the next request arrives, a small server is ready immediately.
Software engineers find debugging less complex because fewer threads typically run at any time. Large-scale multiprocessor systems must manage and coordinate threads across all the installed processors; small, inexpensive servers run only the threads needed for the local Web application. Debugging threaded Web application software is easier with fewer threads.
Company financial managers like the Flapjacks architecture because they can buy lots of small, inexpensive servers and avoid giant system purchases. The small-server category has rapidly become a commodity in the computer industry. Commodity pricing gives financial managers leverage with server manufacturers, who will build and equip multiprocessor servers with large memory and hard drive capacity at bargain prices.
Finally, network managers like Flapjacks for the flexibility all those small servers give. As one IT manager put it, “If a system fails on any given morning, I can always go to the local giant computer supermarket and buy replacement parts.” Additionally, small servers are very easy to swap in and out of a rack of other equipment should they fail. During the swap the load balancer directs traffic to other working servers.
The Flapjacks architecture also enables live scalability and performance testing of a data center. The load balancer allows selected application servers to be segmented into a test set. Intelligent test agents applied to the test set show, under real production conditions, how the Web service responds to massive use.
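The segmentation idea can be sketched as follows. This is a simplified illustration under assumed names: a handful of servers, two of which the load balancer reserves as the test segment. Test-agent traffic is routed only to the test segment, while real users stay on the production segment.

```python
# Sketch of segmenting servers into a live test set via the load balancer.
# Server names and the routing policy are illustrative assumptions.
servers = ["app-1", "app-2", "app-3", "app-4", "app-5", "app-6"]
test_segment = {"app-1", "app-2"}   # servers reserved for test agents

def route(session_id, is_test_agent):
    """Pick a server from the segment matching the traffic type."""
    candidates = [s for s in servers
                  if (s in test_segment) == is_test_agent]
    # Simple modulo pick keeps a session pinned to one server
    return candidates[session_id % len(candidates)]

print(route(7, is_test_agent=True))    # a server from the test segment
print(route(7, is_test_agent=False))   # a server from production
```

Because both segments share the same batter – the common database tier – measurements taken on the test segment reflect real production behavior without risking user-facing traffic on every server.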
In Part 2 of this article, we’ll define criteria for good Web performance and inform you about tools for data center capacity testing.
Frank Cohen is the principal architect for three large-scale Internet systems: Sun Community Server, Inclusion.net and TuneUp.com. These are Internet messaging, collaboration and e-commerce systems, respectively. Each needed to be tested for performance and scalability. Cohen developed Load, an open-source tool featuring test objects and a scripting language. Sun, Inclusion and TuneUp put Load to work, as have thousands of developers, QA analysts and IT managers. Load is available for free download at http://www.pushtotest.com.