Panasas Powers Stanford

At Stanford University’s Institute for Computational and Mathematical Engineering (ICME), running computational clusters and massively parallel programs for sponsored research demands equally high-performance storage. So when the existing network file system (NFS) servers couldn’t keep up, the institute turned to storage provider Panasas Inc.

A product search that began early last year resulted in the installation of the Panasas ActiveScale File System, an integrated hardware/software product that combines a parallel file system with an object-based storage architecture.

“With the Panasas high-speed storage, we can add nodes, run more jobs and run them faster than we could with the NFS,” says Steve Jones, technology operations manager at ICME. “Panasas solved our pain point, which was I/O.”

ICME is part of the School of Engineering at Stanford University, which has nine departments, 231 faculty and more than 3,000 students. According to Jones, ICME runs sponsored research for government agencies such as the Department of Energy and DARPA.

Running Out of Steam

For years, ICME relied on NFS servers, a relatively inexpensive way to implement storage. According to Jones, ICME has a total of 1,200 processors across 600 servers, each processor capable of writing a stream of data to a storage node. Over the years, the institute’s storage requirements grew from two terabytes on a single system to between one and four terabytes per system. His group runs 12 computational clusters; a single cluster, for example, may have 200 processors and two terabytes of storage.

Despite running a Gigabit Ethernet network, it wasn’t uncommon for jobs to run for anywhere from hours to weeks, says Jones. The Linux-based servers with RAID arrays reportedly topped out at 25Mbps when writing data to disk.

“Everything runs over the network. We have a front-end node for the computational cluster, we have compute nodes and we have storage nodes,” says Jones.

Ultimately, he says, “The amount of data we’d write would overwhelm the appliance. The job would sit in the I/O, stop, and we would have to wait until it would write the data, taking increasingly longer lengths of time.”

Jones’ ICME team relies heavily on computational clusters based on Rocks open-source software for sponsored national research. One recent project, for example, sought to better understand the impact of turbulent air flow, or flutter, on turbine engines. The research aimed to improve the performance and reliability of jet engines, as well as to reduce noise and improve air quality in communities near airports.

As the wait time on the NFS servers increased, Jones began to add more servers. This fix, however, led to more problems. According to Jones, the servers were difficult to manage, and the multiple logical namespaces made the storage difficult to use.

Jones and his team put together a list of criteria for a new storage solution: easy to grow; easy to implement; able to work with the Rocks cluster distribution toolkit; fully redundant, with no single point of failure and no single I/O path; parallel I/O support for writing streams of data in parallel; a single point of support for the hardware and software; and a single namespace.

“Most importantly, we wanted vendors to provide a live demo of integration into a cluster,” says Jones.

Some background research and conversations with other lab development centers enabled Jones to draw up a short list of storage solution providers. As luck would have it, Panasas was the first vendor Jones contacted. Other vendors included DataDirect, EMC, Ibrix and Network Appliance.

Beyond PowerPoint

In early spring 2005, Jones contacted Panasas. “We explained our requirements and the vendor asked for a week to set up a demo at their facility,” he says.

At the first meeting with Panasas, Jones expected to see a PowerPoint presentation but not a demo. He was pleasantly surprised. “The Panasas engineers asked me which I wanted to see first, the demo or the PowerPoint,” he says. He chose the demo.

In the lab, a Panasas system was integrated into a cluster, and Jones saw a 20-minute live demo. “I was impressed,” he says.

He then explained to the vendor that a true test of the system would be to set up a demo in real time in production. A week later, a system was delivered to ICME, and Panasas engineers integrated it into a single cluster that contained 172 processors and two terabytes of storage on two one-terabyte NFS servers. In two hours, the solution was ready to accept production jobs, according to Jones.

“It was an unheard of amount of time,” he says, noting that a Fibre Channel SAN would require days or weeks to configure the hardware and network, build software and file systems and meet with the engineers to set it up.

Bonnie++, an open-source benchmarking tool, was the first application they ran for the demo. Similar test jobs were set up on the NFS servers and the new Panasas file system. “We wrote an 8GB file and multiple copies of it, which is a small job for us,” says Jones.

For test purposes, they wrote eight files. The NFS server wrote data at 17.8Mbps, and reads from the eight nodes ran at 36Mbps. The same job on the Panasas system ran at 154Mbps for writes and 190Mbps for reads.

The benchmark was then scaled to 16 nodes. The NFS servers wrote at 20.59Mbps and read at 27.41Mbps, while the Panasas system wrote at 187Mbps and read at 405Mbps.
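For readers who want to run a similar comparison, the sketch below shows one way such a multi-node Bonnie++ test might be scripted. It is only an illustration under stated assumptions: the Rocks-style node names and the /scratch mount path are placeholders rather than details from the ICME setup, and only the 8GB file size echoes the test Jones describes.

```python
#!/usr/bin/env python3
"""Sketch: launch Bonnie++ on several compute nodes at once against a shared
mount point and collect each node's machine-readable result line.
Node names and the mount path below are hypothetical."""

import subprocess
from concurrent.futures import ThreadPoolExecutor

NODES = [f"compute-0-{i}" for i in range(8)]   # hypothetical Rocks-style node names
SHARED_DIR = "/scratch/bonnie"                 # hypothetical shared (NFS or Panasas) mount
FILE_SIZE_MB = 8192                            # roughly the 8GB file mentioned in the article

def run_bonnie(node: str) -> str:
    """Run bonnie++ on one node over ssh and return its CSV result line."""
    cmd = [
        "ssh", node,
        "bonnie++",
        "-d", SHARED_DIR,          # directory on the file system under test
        "-s", str(FILE_SIZE_MB),   # size of the test file in MB
        "-m", node,                # label the results with the node name
        "-q",                      # quiet mode: machine-readable output on stdout
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    # Launch all nodes in parallel so the writes hit the storage simultaneously,
    # which is what exposes the I/O bottleneck on a single NFS server.
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        for line in pool.map(run_bonnie, NODES):
            print(line)
```

In a run like the one Jones describes, the same script would simply be pointed first at the NFS mount and then at the Panasas mount, with the per-node write and read figures summed to estimate aggregate throughput.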

Pleased with the huge increase in performance on the Panasas system, Jones proceeded to write live data. “Basically, the NFS servers had 85 to 90 percent CPU, memory and I/O utilization, while the Panasas system had three to four percent utilization,” he says. “As we added capacity, the system got faster and we never had I/O wait time.”

Jones did contact and meet with other vendors. “We got to see PowerPoint presentations, but no other vendor would meet our requirement of seeing a live demo that included real-time integration into a cluster,” he says, noting that those vendors instead pointed to published performance benchmark statistics. Nor could any vendor other than Panasas provide a single point of contact for hardware and software support.

Full Steam Ahead

ICME purchased two systems, each consisting of a single shelf that holds a metadata server and 10 StorageBlades. The Panasas ActiveScale File System, an integrated hardware/software product, consists of DirectorBlades and StorageBlades that plug into a rack-mountable shelf. According to the company, performance scales with capacity, so bandwidth increases as additional shelves are added.

With additional projects on its plate, ICME plans a multi-shelf purchase in the near future.

The cost of the system? “It’s more expensive than cheap NFS, which has no reliability, and less expensive than other solutions, such as a Fibre Channel SAN,” says Jones.

