Supercomputing Record With Bubble Collapse Simulation
A new world record for flow simulations: a team of scientists led by ETH Zurich simulated cloud cavitation collapse at unprecedented resolution and computing performance.
Scientists at ETH Zurich and IBM Research, in collaboration with the Technical University of Munich and Lawrence Livermore National Laboratory (LLNL), have set a new record in fluid dynamics supercomputing, using 6.4 million threads on LLNL’s 96-rack IBM BlueGene/Q Sequoia, one of the fastest supercomputers in the world.
The scientists performed the largest fluid dynamics simulation to date, employing 11 trillion computational cells and reaching a sustained performance of 14.4 petaflops on Sequoia, 73 percent of the supercomputer’s theoretical peak and an unprecedented figure for flow simulations. The simulations resolved unique phenomena associated with clouds of collapsing bubbles, which have applications ranging from treating kidney stones and cancer to improving the efficiency of high-pressure fuel injectors.
The simulations resolved 15,000 bubbles, a 150-fold improvement over the previous state of the art, along with a 20-fold reduction in time to solution. These are crucial improvements that pave the way for the investigation of cloud cavitation collapse, a complex phenomenon producing pressure peaks intense enough to damage turbine components and propellers. When harnessed, the same process can improve the design of high-pressure fuel injectors and destroy kidney stones. Bubble cavitation is also an emerging therapeutic modality for cancer, used both to destroy tumor cells and to deliver drugs effectively.
Violent Bubbles
The team of scientists simulated what is known in fluid dynamics as two-phase flows, which involve the simultaneous presence of liquid water and vapor, as in a kettle of water boiling on a hot stove. Bubbles can also form without added heat, when the pressure of the flow drops below the vapor pressure, a process called cavitation. Low flow pressures are associated with locally high speeds, such as those encountered near fast-rotating propellers or in high-pressure injection nozzles.
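As a rough illustration of this cavitation criterion (and not part of the study's code), the sketch below estimates the local pressure in an accelerated region of flow from the incompressible Bernoulli relation and checks it against the vapor pressure of water; all names and numbers are illustrative assumptions.

```python
# Illustrative sketch (not from the study): checks whether a liquid flow
# cavitates by comparing the local pressure, estimated from the
# incompressible Bernoulli relation, against the vapor pressure.

RHO_WATER = 998.0   # density of water at 20 C [kg/m^3]
P_VAPOR = 2.34e3    # vapor pressure of water at 20 C [Pa]

def local_pressure(p_ambient: float, speed: float) -> float:
    """Local static pressure in a region accelerated to `speed`,
    via Bernoulli: p = p_ambient - 0.5 * rho * v^2."""
    return p_ambient - 0.5 * RHO_WATER * speed ** 2

def cavitates(p_ambient: float, speed: float) -> bool:
    """Cavitation onset: local pressure falls below the vapor pressure."""
    return local_pressure(p_ambient, speed) < P_VAPOR

# Example: water at 1 atm accelerated past a propeller blade.
for v in (5.0, 10.0, 15.0):
    print(f"{v:5.1f} m/s -> cavitates: {cavitates(101.325e3, v)}")
```

With these values, cavitation sets in between 10 and 15 m/s, which is why fast blade tips and injector nozzles are the typical trouble spots.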
After forming and growing, moving bubbles may encounter regions of higher external pressure and collapse in turn, a violent process that produces extremely high pressure peaks capable of damaging boat propellers and combustion chambers.
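The speed of such a collapse can be gauged from Rayleigh's classical estimate for an empty spherical bubble in an incompressible liquid, tau = 0.915 * R0 * sqrt(rho / dp). A minimal sketch with illustrative values (not figures from the study):

```python
# Back-of-the-envelope sketch (illustrative values, not from the study):
# the Rayleigh collapse time of an empty spherical bubble in an
# incompressible liquid, tau = 0.915 * R0 * sqrt(rho / dp).

from math import sqrt

def rayleigh_collapse_time(radius_m: float, rho: float, delta_p: float) -> float:
    """Time for a cavity of initial radius `radius_m` to collapse under
    a pressure difference `delta_p` in a liquid of density `rho`."""
    return 0.915 * radius_m * sqrt(rho / delta_p)

# A 100-micron bubble in water driven by a 1-atmosphere pressure difference:
tau = rayleigh_collapse_time(100e-6, 998.0, 101.325e3)
print(f"collapse time ~ {tau * 1e6:.1f} microseconds")
```

For these values the collapse is over in roughly nine microseconds, which hints at why the process is so hard to observe and to resolve numerically.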
The violence and short time scales of this process have made its quantitative understanding elusive for experimentalists and computational scientists alike. And while supercomputers have long been considered the way forward, large-scale flow simulations have so far not run efficiently on massively parallel architectures.
Open Collaboration
“In the last 10 years we have addressed a fundamental problem of computational science: the ever-increasing gap between hardware capabilities and their effective utilization to solve engineering problems,” says Petros Koumoutsakos, director of the Computational Science and Engineering Laboratory at ETH Zurich, who led the project.
He adds: “We have based our developments on finite volume methods, perhaps the most established and widespread approach for engineering flow simulations. We have also invested significant effort in designing software that takes advantage of today’s parallel computer architectures. It is the proper integration of computer science and numerical methods that enables such advances.”
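For readers unfamiliar with the approach, the sketch below shows the finite volume idea in its simplest form: cell averages updated by fluxes across cell faces, here for a toy 1D advection equation with an upwind flux. The team's production code solves the compressible two-phase flow equations and is vastly more elaborate; this is only a conceptual illustration.

```python
# Minimal finite volume sketch (conceptual only): cell averages are
# updated from fluxes across cell faces. This toy solves 1D linear
# advection on a periodic domain with a first-order upwind flux.

import numpy as np

def advect(u: np.ndarray, speed: float, dx: float, dt: float, steps: int) -> np.ndarray:
    """March cell averages `u` forward in time with upwind fluxes (speed > 0)."""
    for _ in range(steps):
        flux = speed * u                               # flux through each cell's right face
        u = u - (dt / dx) * (flux - np.roll(flux, 1))  # conservative update, periodic domain
    return u

# Advect a square pulse once around a periodic unit domain.
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)
dx, speed = 1.0 / n, 1.0
dt = 0.5 * dx / speed                                  # CFL number 0.5 for stability
u1 = advect(u0.copy(), speed, dx, dt, steps=int(1.0 / (speed * dt)))
print("mass conserved:", np.isclose(u0.sum(), u1.sum()))
```

Because each face flux leaves one cell and enters its neighbor, the scheme conserves the total quantity exactly, the property that makes finite volume methods the workhorse of engineering flow simulation.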
“We were able to accomplish this using an array of pioneering hardware and software features of the IBM BlueGene/Q platform, which allowed the rapid development of ultrascalable code that achieves an order of magnitude better performance than the previous state of the art,” said Alessandro Curioni, head of the mathematical and computational sciences department at IBM Research - Zurich. “While the Top500 list will continue to generate global interest, how these machines are applied to tackle some of the world's most pressing human and business issues quantifies the evolution of supercomputing more accurately.”
The simulations are one to two orders of magnitude faster than any previously reported flow simulation. The previous major milestone came earlier this year, when a team at Stanford University broke the one-million-core barrier, also on Sequoia.
The present code, however, is 50 times faster and can employ an order of magnitude more computational elements while achieving a better time to solution. This achievement is a finalist for the 2013 Gordon Bell Prize, to be awarded by the Association for Computing Machinery this week at Supercomputing ’13 (SC13).