Terascale Simulation Facility: Built for Flexibility

Photo of the Terascale Simulation Facility.
The Terascale Simulation Facility’s four-story office tower will house more than 250 workers. Behind the office structure is a two-story wing (unseen) built for the Laboratory’s new terascale supercomputers: Purple and BlueGene/L. Data-visualization, conference, and computer- and network-operations facilities will also be housed in the 23,504-square-meter building.

As design and construction manager for the Laboratory’s new $91-million Terascale Simulation Facility (TSF), Anna Maria Bailey found herself in a gratifying, although unusual, position for an electrical engineer: overseeing construction of a building in which electrical and mechanical systems took precedence over the traditional interests of architects.
“Most architects want to dictate the design,” says Bailey. In the case of TSF, it didn’t work that way. Programmatic electrical and mechanical requirements controlled the design. “The architectural features had to take a back seat.”
Architects were told to approach their design by thinking of TSF as a large electrical–mechanical system surrounded by an architectural skin. This message was conveyed early in the design process to underscore the importance of the supercomputer systems, vital to national security, that will reside in TSF. The result is a facility that emphasizes function over form.

Ahead of Schedule
Groundbreaking for the 23,504-square-meter (253,000-square-foot) TSF was held in April 2002. Built by M. A. Mortenson Construction Company of Minneapolis, Minnesota, TSF was completed ahead of schedule in late 2004 and within budget. TSF will house two supercomputers being built by IBM to perform trillions of operations per second (teraops). The new machines—Purple and BlueGene/L—will support the Advanced Simulation and Computing (ASC) Program, a part of the Department of Energy’s National Nuclear Security Administration and an important component of stockpile stewardship.
Purple, with a peak computational rate of 100 teraops, will perform sophisticated physics and engineering simulations to help researchers better gauge the safety and reliability of aging nuclear weapons in the absence of nuclear testing. Purple will contain more than 12,000 next-generation IBM Power5 microprocessors, 50 terabytes of memory, and 2 petabytes of globally accessible disk space. The supercomputer will be delivered in phases, with the first pieces arriving in April 2005.
The 360-teraops BlueGene/L will handle a variety of challenging ASC-related scientific simulations, including ab initio molecular dynamics for materials science; three-dimensional (3D) dislocation dynamics for materials modeling; and turbulence, shock, and instability phenomena in hydrodynamics. BlueGene/L will also serve as the ASC platform for evaluating advanced architectures for computational science. In October 2004, the supercomputer attained record-breaking performance while running at one-quarter of its final size. BlueGene/L will arrive in three increments, with the second and third expected in January and March 2005.
The machines will reside in the two computer rooms dominating the second level of the two-story supercomputer wing. The west room will house Purple, and the east will house BlueGene/L. A non-load-bearing wall separating the two rooms can be removed to create one large room should the need arise for more floor space for future ASC computers. Located on the ground floor beneath each computer room is a mechanical utility room. A total of 28 air-handling units blow cool air up to the second level, each at a rate of 80,000 cubic feet per minute. After cooling the computers, the air continues to rise and is forced into large return-air plenums for recirculation.
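As a quick check on those airflow numbers, the short Python sketch below (an illustration added here, not part of the facility documentation) totals the output of the 28 air handlers and converts it to metric units; the unit count and per-unit flow rate are taken from the paragraph above, and the conversion factor is the standard one.

# Back-of-envelope total airflow for the TSF computer wing.
# Unit count and flow rate are from the article; the conversion factor is standard.
air_handlers = 28
cfm_per_unit = 80_000                  # cubic feet per minute, per unit
CUBIC_FEET_TO_CUBIC_METERS = 0.0283168

total_cfm = air_handlers * cfm_per_unit
total_m3_per_min = total_cfm * CUBIC_FEET_TO_CUBIC_METERS

print(f"Total airflow: {total_cfm:,} ft^3/min (~{total_m3_per_min:,.0f} m^3/min)")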

Housing for 250 Workers
TSF’s four-story office tower provides research and development areas for visualization and hardware prototyping, a 150-seat auditorium and visualization theater for unclassified presentations, a second visualization theater for classified reviews, and an operations hub that will eventually control the Computation Directorate’s high-performance computers across the Livermore site. It also includes three small computer rooms to support facility infrastructure, an atriumlike lobby, conference rooms, and a classroom.
The facility will house more than 250 employees, most of them from the Computation Directorate’s Integrated Computing and Communications Department (ICCD). The employees are expected to begin moving into the building in February 2005. The third and fourth floors will be mainly offices, while the second will be a combination of offices, laboratory space, computer rooms, and space for the vendors and computer engineers who will maintain and service the machines.
To provide natural light for interior office areas, the architects designed TSF with an array of floor-to-ceiling windows at the ends of corridors. Barbara Atkinson, associate deputy head of ICCD and the TSF construction liaison for the ASC Program, says, “In most cases, when one steps out of an office, one enters a bright hallway. The architects did a good job.”

Photo of a supercomputer room in the Terascale Simulation Facility.

Clear-span construction techniques were used for the Terascale Simulation Facility’s supercomputer rooms (approximately 2,230 square meters), opening up floor space by eliminating columns and other impediments that could hinder placement of computer components. The grated floor tiles allow chilled air to rise from air handlers on the first level, cooling the supercomputers that will be housed in the facility.


Three Design Requirements
At the start of facility design, the ASC Program defined three broad requirements that guided the architects in their approach to TSF: flexibility, scalability, and reliability. “These are competing factors that can be difficult to achieve,” says Bailey. “Yet, the program wanted these factors to be addressed and implemented across all design disciplines.”
The Computation Directorate wanted a facility that could be modified to house computer systems that today are only a gleam in the eyes of supercomputer designers. “We wanted a facility that would have a 35-year lifetime,” says Atkinson. Don Davis, a long-time Computation facilities manager who served as a technical representative to TSF, adds, “We don’t get the opportunity to construct a building very often. We had to design and build this one to last.”
High among the factors driving the design of TSF, says Bailey, were the anticipated power requirements for teraops-scale computers and the next-generation systems that will eventually succeed them. Specifications called for a 7.5-megawatt computer load per machine room. Purple is expected to consume a peak-power load of 4.8 megawatts; its total power consumption for computations and cooling will be approximately 8 megawatts. BlueGene/L, by comparison, will draw about 1.6 megawatts. The designed power capacity at TSF is 25 megawatts, enough to power a city of 16,000 to 20,000 people and slightly less than half the Laboratory’s current peak-power demand of 60 megawatts on the hottest day of summer.
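To put those loads in perspective, here is a minimal back-of-envelope sketch in Python (an illustration, not from the facility specifications) that compares the quoted machine loads with the 7.5-megawatt-per-room specification and the 25-megawatt facility capacity; it reads the 8-megawatt figure as Purple’s combined computation-and-cooling load, which is how the paragraph above presents it.

# Rough power-budget check using the loads quoted above (all in megawatts).
room_spec_mw = 7.5            # design computer load per machine room
purple_compute_mw = 4.8       # Purple, computation only (peak)
purple_total_mw = 8.0         # Purple, computation plus cooling
bluegene_mw = 1.6             # BlueGene/L
facility_capacity_mw = 25.0   # designed TSF power capacity
lab_peak_mw = 60.0            # Laboratory peak demand on the hottest summer day

print(f"Headroom in Purple's room: {room_spec_mw - purple_compute_mw:.1f} MW")
print(f"Both machines together: {purple_total_mw + bluegene_mw:.1f} MW "
      f"of the {facility_capacity_mw:.0f}-MW facility capacity")
print(f"TSF capacity as a share of Lab peak demand: "
      f"{facility_capacity_mw / lab_peak_mw:.0%}")

The last figure, about 42 percent, is consistent with the “slightly less than half” comparison above.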
Nearly 5 kilometers of trenches were dug for underground power lines. A trench on the west side of TSF stretches beyond the Laboratory’s property line to the east to tie into the Laboratory’s northeast substation. Another trench heads south to link with the south substations. “The redundant sources of power guarantee a backup source in the event we lose one,” says Davis. “We don’t want TSF off-line.”

Overcoming Cable Limitations
Strategic placement of electrical–mechanical systems and clear-span construction with a deep subfloor in the terascale machine rooms helped to overcome problems created by the use of copper connector cables, which are limited to lengths of 25 meters.
Atkinson explains, “Clear-span construction eliminates columns and opens up space on the computer floor. If we were forced to go around an impediment, 25 meters would suddenly become very short. We couldn’t throw in a 50-meter cable, because it would alter the communication timing. Computer manufacturers connect hardware with communication cables that are all the same length. That way, it takes a predictable amount of time for a signal to go from one place to another in a machine.”
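Atkinson’s timing point can be made concrete with a small calculation. The Python sketch below (an illustration; the article does not specify the cable type) assumes a typical signal propagation speed in copper cable of about 70 percent of the speed of light and compares the one-way delay of a 25-meter cable with that of a 50-meter cable.

# Why a 50-meter cable is not a drop-in substitute for a 25-meter one:
# the extra length adds delay. The 70%-of-c propagation speed is an
# assumed typical value for copper cable, not a figure from the article.
SPEED_OF_LIGHT_M_PER_S = 3.0e8
VELOCITY_FACTOR = 0.7

def one_way_delay_ns(length_m: float) -> float:
    """One-way signal delay, in nanoseconds, for a cable of the given length."""
    return length_m / (VELOCITY_FACTOR * SPEED_OF_LIGHT_M_PER_S) * 1e9

for length in (25, 50):
    print(f"{length}-meter cable: ~{one_way_delay_ns(length):.0f} ns one-way delay")

Doubling the length roughly doubles the delay, which is why mixing cable lengths would skew the machine’s carefully matched signal timing.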
When the distance between components is less than 25 meters, excess communication cable is placed in coils beneath the machine room floor. However, placing excess cables beneath the floor can create its own problems if the coiled cables block the flow of air needed to cool the supercomputers. The design solution was to provide a 1.2-meter raised floor for the computer rooms. Perforated and grated tiles sit atop the floor, allowing 13°C air from below to enter the intake sides of the computers and cool the machines. Both the computer room and office tower cooling systems are supported by four 1,200-ton chillers, one 400-ton chiller, and cooling towers capable of handling 8,176 liters of water per minute.
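For readers unfamiliar with chiller ratings, one refrigeration ton corresponds to roughly 3.5 kilowatts of heat removal. The short Python sketch below (an illustration; the conversion factor is a standard value, not from the article) converts the installed chiller capacity to megawatts so it can be compared with the electrical loads discussed earlier.

# Convert the installed chiller capacity (refrigeration tons) to megawatts.
# Chiller counts and sizes are from the article; 1 refrigeration ton ~= 3.517 kW.
KW_PER_REFRIGERATION_TON = 3.517
chillers_tons = [1_200] * 4 + [400]   # four 1,200-ton units plus one 400-ton unit

total_tons = sum(chillers_tons)
total_mw = total_tons * KW_PER_REFRIGERATION_TON / 1_000

print(f"Installed chiller capacity: {total_tons:,} tons (~{total_mw:.1f} MW of cooling)")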
Large water pipes suspended beneath the ceiling separate the electrical–mechanical room from the second-level computer floor and are another example of flexibility designed into TSF. “Our computers are air-cooled now, but if we ever switch to water-cooled computers, we can easily tap into those pipes,” says Davis.

Photo of the raised floor in a computer room in the Terascale Simulation Facility.

Workers install the raised floor for the west computer room, which is slated to house Purple. The 1.2-meter raised floor allows for the convenient placement of copper computer cables and the free circulation of chilled air to cool the supercomputer.

Visualization on a Large Scale
A prominent architectural feature of TSF is what has been named the Armadillo Room—a zinc-clad structure resembling an armadillo’s armored body that abuts the east side of the glass-fronted lobby. “The architects were adamant about keeping this unusually shaped structure,” says Atkinson. “It is their architectural moment, their signature.”
Inside the Armadillo is a 150-seat auditorium for unclassified presentations. Scientists can display 3D simulations on an immense 1.2-centimeter-thick Plexiglas screen called a powerwall. The screen has eight projectors behind it in a 4-by-2 array. “Visualization is the only way we can understand the huge quantities of data being generated by the ASC supercomputers,” says Atkinson. (See S&TR, November 2004, From Seeing to Understanding.)
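As an aside on how tiling scales up a display, the Python sketch below (an illustration; the per-projector resolution is a hypothetical value, since the article does not give one) computes the combined pixel count for the powerwall’s 4-by-2 projector array.

# Combined resolution of a tiled powerwall built from a 4-by-2 projector array.
# The 4-by-2 layout is from the article; the 1600 x 1200 per-projector
# resolution is a hypothetical figure used only for illustration.
cols, rows = 4, 2
proj_width, proj_height = 1600, 1200   # assumed pixels per projector

wall_width = cols * proj_width
wall_height = rows * proj_height
megapixels = wall_width * wall_height / 1e6

print(f"Tiled image: {wall_width} x {wall_height} pixels "
      f"({megapixels:.1f} megapixels from {cols * rows} projectors)")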
TSF has a similar visualization theater for displaying classified data but in a program-review-style setting where core participants can gather around a table to offer technical critiques.

Photo of the Terascale Simulation Facility's Armadillo Room under construction. Photo of the Armadillo Room's data-visualization theater.
The architectural signature of the Terascale Simulation Facility is a structure known as the Armadillo Room, which features a 150-seat data-visualization theater and auditorium. The structure is shown here covered in waterproof sheathing before the installation of zinc roof panels. (inset) A powerwall inside the Armadillo Room displays a simulation to the audience. Powerwalls work by aggregating, or “tiling,” the separate images from many projectors to create one seamless image.

As the ASC Program liaison, Atkinson says, “My job was to field requests from program personnel and the TSF design and construction team and to translate these requests from ‘computerese’ into terms engineers and architects could understand.”
The program, she says, is pleased with TSF. “We have a great deal of flexibility to handle different technologies that will come our way. In terms of power capacity for supercomputers, TSF should accommodate anything that will be in the marketplace for a long time to come.”

—Dale Sprouse

Key Words: Advanced Simulation and Computing (ASC) Program, BlueGene/L, data visualization, Purple, supercomputer, Terascale Simulation Facility (TSF).

For further information contact Barbara Atkinson (925) 422-0823 (atkinson1@llnl.gov).






Lawrence Livermore National Laboratory
Operated by the University of California for the U.S. Department of Energy

UCRL-52000-05-1/2 | January 7, 2005