New Big Ideas Lab podcast episode details history of LLNL supercomputing
Since Lawrence Livermore National Laboratory’s (LLNL) infancy, supercomputing has played an integral role in propelling mission-oriented scientific discovery. From the UNIVAC 1 to the exascale-class El Capitan, these computing wonders have grown vastly more advanced, efficient and powerful with each generation.
Over the decades, LLNL has only strengthened its reputation as a dominant force in supercomputing, boasting more of the world’s fastest computers than any other institution or computing center. For 70 years, LLNL scientists and researchers have used supercomputers to develop innovations in nuclear science, fusion energy, biosecurity, advanced manufacturing, climate modeling, space/astrophysics, earthquake and atmospheric release modeling, and even human health. Each new supercomputing system takes performance and scientific discovery to new heights, allowing scientists to ask and answer questions they could not before.
Over the past three decades, supercomputers have become even more vital to the Lab’s core mission of ensuring the safety, security and reliability of the nation’s nuclear stockpile. A pivotal moment came in 1992, when the U.S. ceased underground nuclear testing and faced the challenge of maintaining its nuclear weapons stockpile without testing. This led to the creation of the Department of Energy’s Accelerated Strategic Computing Initiative (ASCI) program, in which new machines and increasingly better technologies would be needed to model and simulate the complex physics behind nuclear explosions. Under the leadership of former LLNL Advanced Simulation & Computing Program Director Michel McCoy and others, LLNL began pursuing this goal by exploring massively parallel computing, which marked a fundamental shift in supercomputing. The Lab worked to address the technical challenges of running thousands of processors in parallel, managing enormous data sets and developing complex physics codes.
The ASCI program led to milestones in computing power, but it also revealed the complexity of scaling up. Over time, LLNL faced new hurdles, such as the machines’ rising power consumption. These challenges led to the development of the Blue Gene series, which achieved record-breaking speeds and demonstrated that low-power, high-performance systems could open the door to a new era of scientific inquiry.
Today, LLNL relies on its world-class supercomputers to fulfill the increasingly complicated demands of the National Nuclear Security Administration’s Stockpile Stewardship Program, as well as other efforts vital to national security. LLNL remains at the forefront of global supercomputing as it enters the next paradigm: Set for deployment in late 2024, the exascale El Capitan is projected to perform more than two quintillion floating-point operations per second (more than 2 exaFLOPs). El Capitan will play a critical role in maintaining the enduring U.S. nuclear stockpile and represents another significant leap, requiring novel approaches to parallelism, energy efficiency and software development. The combination of supercomputers with artificial intelligence — or “cognitive simulation” — is also making inroads into longstanding questions, such as how to optimize inertial confinement fusion experiments or develop new materials with less uncertainty than ever before.
As LLNL continues its legacy of excellence and persistence in pushing the boundaries of what’s possible in science and technology, join McCoy (now retired), Weapon Simulation & Computing Associate Director Rob Neely and Livermore Computing Division Leader Becky Springmeyer as they dive into the fascinating world of supercomputing and explore the Lab’s journey from its early computing efforts to the dawn of exascale.
Listen to the latest Big Ideas Lab episode on LLNL supercomputing here: Spotify or Apple.