Answering Scientists’ Most Audacious Questions
EVEN before the Laboratory opened in 1952, Ernest O. Lawrence
and Edward Teller placed an order for one of the first commercial supercomputers, a Univac mainframe. The lumbering vacuum-tube machine could barely support primitive one-dimensional simulations, yet it revolutionized scientific calculation. Today, simulations performed on extremely powerful supercomputers have become so essential to scientific discovery that simulation is considered a peer to theory and experiment.
Many investigators first build a computer model to design and guide their physical experiment and then use simulations to better understand the experimental results. Some computer simulations take the place of physical experiments because the process under study unfolds over too short or too long a time span, involves hazardous materials, or cannot be performed in a laboratory (as with earthquakes or underground nuclear weapons tests). These kinds of virtual experiments are accomplished every day with great fidelity on Livermore supercomputers.
The National Nuclear Security Administration (NNSA) sponsors the Laboratory’s pioneering work in computer simulation, which has been driven by the need to support classified weapons research. At the same time, advanced simulations have played a crucial role in unclassified research. Over the past decade, we have made significant investments in Livermore’s high-performance computing resources dedicated to unclassified research. That capacity has grown from 72 gigaflops (billion floating-point operations per second) to more than 280 teraflops (trillion floating-point operations per second) today. These unclassified resources leverage the supercomputing expertise and infrastructure we have put in place for classified research. In turn, computing advances made in support of unclassified research initiatives bolster our classified research efforts and strengthen our expertise in computer science, mathematical modeling, computer architecture, software development, and infrastructure support.
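The scale of that growth is easy to understate. A quick back-of-the-envelope calculation (a sketch based only on the two figures cited above) shows the gain is nearly four-thousand-fold:

```python
# Compare the two unclassified-computing figures cited in the text:
# 72 gigaflops a decade ago versus more than 280 teraflops today.
GIGA = 1e9   # billion operations per second
TERA = 1e12  # trillion operations per second

old_capacity = 72 * GIGA    # 72 gigaflops
new_capacity = 280 * TERA   # more than 280 teraflops

growth_factor = new_capacity / old_capacity
print(f"Growth factor: about {growth_factor:,.0f}x")  # roughly 3,900-fold
```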
Despite the growing proliferation and capability of unclassified supercomputers, demand for access far exceeds capacity. To best use limited resources and spark scientific discoveries in a broad range of scientific fields, Livermore launched the Institutional Unclassified Computing Grand Challenge Awards in 2006.
The annual Grand Challenge award competition seeks proposals that, given significant supercomputer resources, could address compelling, audacious, even grand-scale mission-related problems promising unprecedented discoveries in science and engineering. These proposals span a broad range of disciplines, including climate change, astrophysics, inertial confinement fusion, and seismic and nuclear explosion monitoring, among many other project areas. Internal and external referee panels that include subject-matter and computer-science experts review the submissions. Proposals are judged on the quality of the scientific investigation, the importance of access to computing resources, the ability to use a high-performance supercomputer effectively, the quality and extent of external collaborations, and the project’s alignment with Department of Energy, NNSA, and Livermore national security missions.
The article “Testing the Accuracy of the Supernova Yardstick” describes one 2007 Grand Challenge effort that used Atlas to simulate in great detail the dynamics of a class of exploding stars known as Type Ia supernovae. Obtaining a better understanding of these enormously energetic explosions improves our knowledge of the physics of nuclear explosions. The simulations also promise to advance our understanding of the expansion of the universe and the phenomenon of dark energy, which some cosmologists call the greatest mystery in the universe. Typical of most winning proposals, the supernova effort involved colleagues at universities and other national laboratories, in this case the University of California at Santa Cruz, the State University of New York at Stony Brook, and Lawrence Berkeley National Laboratory.
In May, we announced the winning proposals in our third round of Grand Challenge awards. Out of 29 proposals, 18 were granted a combined 84.5 million central-processing-unit (CPU) hours on Atlas, a 44-teraflops machine, or Thunder, a 22-teraflops machine. Additional CPU hours will be granted on the newly unclassified portion of BlueGene/L, a machine of approximately 200 teraflops. Last year, we held a public symposium to showcase the accomplishments of the Grand Challenge participants, and we plan to host a similar meeting later this year.
Staying focused on high-end computing problems helps us sustain a broad base of supercomputing technologies. We are working to improve our simulations by enhancing resolution, incorporating more physics, and expanding the use of three-dimensional modeling. These applications, in turn, demand the best possible computers and infrastructures. Thanks in part to Grand Challenge projects, Livermore is continuing to redefine what is computationally possible.