Taking on a charged particle physics calculation previously labeled "impossible," a team of Lab scientists led by physicist Jim Glosli developed an unorthodox strategy to fully exploit the power of massively parallel supercomputers and break new ground in scientific simulation.
The multidisciplinary team, which included an IBM researcher, presented its methodology at SC09, the premier high-performance computing conference, held this year Nov. 16-20 in Portland, Ore. The team was a finalist for the prestigious Gordon Bell Prize.
Using this strategy, scientists are able to run larger simulations at higher resolution over longer time scales. Such detailed simulations are critical to a broad set of applications vital to Laboratory missions, from stockpile stewardship to fusion energy. The method used to achieve those calculations also represents an important step toward next-generation supercomputers, which are expected to expand from thousands to millions of CPU cores.
This new capability was developed by scientists in the Lab's Institute for Scientific Computing Research (ISCR) on two IBM BlueGene/P systems: the 500-teraFLOP/S (trillions of floating point operations per second) Dawn at the Lab and the 1.03-petaFLOP/S (quadrillions of floating point operations per second) JUGENE at Germany's Jülich Supercomputing Center.
As supercomputing moves into the petascale (quadrillions of operations per second) era, scientists face the growing challenge of how to effectively use the increasing number of CPU cores to run more detailed simulations of scientific phenomena over longer time scales. A central processing unit, or CPU, is the part of the system that carries out the instructions of a computer program or application. The process of adapting a computer algorithm or code to a more powerful computer to increase the capability or detail of simulations is called "scaling."
"Our institute's mission is to advance the state-of-the-art for applications of national interest," said Fred Streitz, ISCR director. "In this case we focused on scaling a simulation that involved the calculation of long-range forces. Some people said that this was impossible — but this very talented team proved the naysayers wrong."
The ISCR team took on a problem that has long challenged scientists — a full understanding of the interaction of highly correlated charged particles. Until the team's recent simulations, molecular dynamics simulations involving electrostatic interactions lacked the length and time scales needed to fill the gaps between theoretical and experimental research. Simulating these charged particle interactions is important to a range of scientific disciplines, including biology, chemistry and physics, notably to fusion energy experiments planned for LLNL's National Ignition Facility.
In a reversal of the conventional practice of dividing a problem into equal pieces and distributing them uniformly across the machine, Lab scientists carved up the problem according to the varied computational requirements of the simulation's individual component algorithms. New BlueGene/P node technology allowed them to use this approach, called heterogeneous decomposition, to more fully exploit the system's capabilities.
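The core idea can be illustrated with a small sketch. This is not the team's actual code (their implementation ran on BlueGene/P and its details are not given here); it is a minimal, hypothetical Python illustration of partitioning a machine's processor ranks among component algorithms in proportion to their relative computational cost, rather than giving every rank an identical slice of the whole problem. The task names and cost ratios below are invented for the example.

```python
def heterogeneous_decomposition(n_ranks, task_costs):
    """Partition n_ranks among tasks proportionally to their relative cost.

    task_costs: dict mapping task name -> relative cost (arbitrary units).
    Returns a dict mapping task name -> list of rank ids, where the rank
    lists are disjoint and together cover all n_ranks.
    """
    total = sum(task_costs.values())
    assignment = {}
    next_rank = 0
    items = list(task_costs.items())
    for i, (task, cost) in enumerate(items):
        if i == len(items) - 1:
            # Give the remainder to the last task so every rank is used.
            count = n_ranks - next_rank
        else:
            count = max(1, round(n_ranks * cost / total))
        assignment[task] = list(range(next_rank, next_rank + count))
        next_rank += count
    return assignment

# Hypothetical example: short-range particle forces dominate the cost,
# while long-range (mesh/FFT-style) forces get a smaller dedicated group.
groups = heterogeneous_decomposition(
    4096, {"short_range_forces": 7, "long_range_fft": 1})
```

In a real MPI code, each group of ranks would then be placed in its own communicator (e.g., via `MPI_Comm_split`) so that the component algorithms run side by side and exchange data only at their boundaries.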
"We have performed benchmark calculations ranging from millions to billions of charged particles," said David Richards, lead author on the Gordon Bell submission. "With this unprecedented simulation capability, we have begun to investigate plasma fusion physics under conditions where both theory and experiment are lacking — in the strongly-coupled regime as plasma begins to burn."
Jim Glosli, project leader, said this development has far reaching implications for scientific computing and will likely affect the way future codes are developed. "What is innovative is the different way we broke up the machine to run the calculation. This allows more complicated models and this approach can be applied to other applications," Glosli said. "The flexible approach we've been able to demonstrate will allow many problems to scale across both current and next-generation machines."
"The work of this team is emblematic of the innovative multidisciplinary science and technology that is a hallmark of this Laboratory and in keeping with our long tradition of leading edge computing," said Dona Crawford, associate director for Computation. "The strategy they developed will have an impact on the future of high performance scientific computing."
Also receiving recognition at SC09 was the Hyperion Project, which received a "Best HPC Collaboration Between Industry and Government" award from HPCwire, a news service dedicated to supercomputing. Matt Leininger, a driving force behind the project, and Mark Seager accepted the award for LLNL. Each of the project's partners was also recognized. Funded by NNSA's Advanced Simulation and Computing program, Hyperion is an LLNL collaboration with 10 industry partners to advance next-generation Linux cluster supercomputers. Collaborators include Cisco, DataDirect Networks, Dell, Intel, LSI Corporation, QLogic, Red Hat, Sun and Supermicro. The project was first announced at last year's SC08 conference. For more about Hyperion, see the Hyperion Project Website.
Other highlights of this year's conference included:
In a keynote address Thursday, former Vice President Al Gore lauded the key role of high-performance computing in providing a clearer understanding of climate change. Laboratory scientists were among those sharing the 2008 Nobel Prize with the vice president for their climate modeling work.
Gore made an impassioned plea to the "supercomputing community" to use such computational tools as visualization to make the scientific realities of climate change "tangible and understandable" to the public at large. At the conclusion of his talk, Gore said addressing climate change "is not a political question, but a moral question."
An IBM team was awarded a Gordon Bell Prize for simulations of a cat's cerebral cortex, the thinking part of the brain. Some of these simulations were performed on the Laboratory's Dawn supercomputer, an IBM BlueGene/P system.
Also, the new Top500 list of the world's most powerful supercomputers was released. In a battle of petaFLOP/S titans, Oak Ridge National Laboratory's Jaguar clawed its way to the No. 1 ranking on the Top500 list of the world's fastest supercomputers ahead of the previous titleholder Roadrunner, an IBM machine at Los Alamos National Laboratory. DOE/NNSA systems continue their domination of the Top500 with four systems in the top 10. LLNL's Blue Gene/L was No. 7 and Dawn was ranked 11th.