Unusual Supercomputer Design Poised to Enable More Accurate Climate Change Forecasts
They call it the Green Flash. While it evokes images of a muscled superhero with a lightning bolt hat, it’s actually a vision of a supercomputer that can model clouds like no other, with the potential to tell us what kind of climate change we're facing in the future. The idea of Green Flash was born out of necessity – climate scientists wanted a better understanding of how future climate change would play out, and supercomputers offered the best chance to create the climate models they'd need to do it.
Climate models divide the Earth into points arranged on a grid. Currently, most state-of-the-art climate models use a grid resolution of about 100 km, meaning that grid points are spaced 100 km apart. If that spacing can be reduced – say, to 1 or 2 km – then more accurate forecasting is possible.
The finer the grid, the more points must be evaluated: Modeling takes more computational time and energy, and power bills shoot through the roof. The computational power required to model clouds at the accuracy climatologists are seeking would be prohibitively expensive on any of today's supercomputers. Also, it doesn't exist.
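To see why finer grids are so expensive, consider a back-of-envelope count of horizontal grid points. This sketch (illustrative only – real models also have vertical levels and more complex grids) approximates how many cells are needed to tile the Earth's surface at a given spacing:

```python
# Back-of-envelope estimate of how horizontal grid spacing drives
# the number of points a global climate model must evaluate.
# Illustrative assumption: each cell covers roughly spacing^2 of area.

EARTH_SURFACE_KM2 = 5.1e8  # approximate surface area of the Earth in km^2

def horizontal_cells(spacing_km):
    """Approximate number of grid cells tiling the globe
    at the given horizontal spacing."""
    return EARTH_SURFACE_KM2 / spacing_km ** 2

coarse = horizontal_cells(100)  # ~51,000 cells at 100 km spacing
fine = horizontal_cells(1)      # ~510 million cells at 1 km spacing
print(f"{coarse:,.0f} cells at 100 km vs {fine:,.0f} cells at 1 km")
print(f"{fine / coarse:,.0f}x more points")
```

Shrinking the spacing by a factor of 100 multiplies the number of horizontal points by 100² = 10,000 – before even accounting for the smaller timesteps the finer grid requires.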
A supercomputer that could accomplish such tasks would be bigger than any ever built, and the power bill alone could easily reach $150 million a year. Michael Wehner, a senior scientist at Lawrence Berkeley National Lab in California, US, has been pushing some of the largest supercomputers in the world to use a 25 km resolution.
"Climate modeling is the basis of much of our understanding for what the future climate will be like because of greenhouse gas emissions," Wehner says. "What's been identified as probably the single most important thing to be improved in the current generation of climate models is the way they treat clouds. There are no 25 km clouds like the grid might support. If we can model down to 1 or 2 km, the increase in computing time is something like a million."
There's something called a "timestep" at play here, which factors into the computational cost. A timestep is the small increment of simulated time advanced by each new calculation of the weather system's state. If the increments are too big, accuracy is lost because the model can't correctly account for factors that affect the weather, such as geography and wind patterns. If the grid spacing shrinks, so must the timestep. Each step consumes a small amount of energy, and the total adds up quickly.
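The combined effect of more points and smaller timesteps can be sketched with a simple scaling argument. Refining the grid by a factor r multiplies horizontal points by r², and a stability constraint of the kind that ties step size to grid spacing (a CFL-like condition – an assumption for this rough sketch) forces roughly r times as many timesteps, so total work grows roughly as r³:

```python
def cost_factor(coarse_km, fine_km):
    """Rough relative compute cost of refining a grid:
    r^2 more horizontal points times r more timesteps,
    where r is the refinement factor (a CFL-like stability
    constraint ties the timestep to the grid spacing)."""
    r = coarse_km / fine_km
    return r ** 2 * r  # == r**3

print(cost_factor(100, 25))  # 25 km runs: 64x the 100 km cost
print(cost_factor(100, 1))   # 1 km runs: 1,000,000x
```

Going from 100 km to 1 km gives r = 100 and a cost factor of about 10⁶ – consistent with the "something like a million" increase Wehner describes.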
Like so many of us do today, Wehner and his colleagues found the answer to this computational conundrum in their smartphones. They recognized that the specialized processors used in smartphones do one thing really well: They save energy. This class of chips offered the researchers a way to design custom processors that include only the logic necessary to run climate models as quickly as possible. All other features were removed, thereby reducing energy demands.
A team of hardware designers, software engineers, and application scientists designed all three aspects of Green Flash simultaneously. This is unusual in supercomputer design – where the design cycle can take years, and each piece is typically produced separately from its counterparts – but this method of co-designing software and hardware saves significant time.
The team borrowed design tools from Tensilica, a specialized processor design company. By optimizing the Tensilica processors and linking millions of them together, the team developed Green Flash's exascale hardware design.
"When we first started the project, a lot of climate modelers just laughed at us and said you can never port a climate model to a cell phone, which is missing the point, but that's what they said. They said it was far too complicated," adds Wehner.
"We were able to demonstrate supercomputing first on one processor and then on multiple emulated processors. So the porting of a climate model to a simple processor like a cell phone processor is actually quite feasible and, in fact, was basically done by one guy. It was not nearly as difficult a problem as people thought it would be."
With optimized processors and communication-minimizing algorithms in place, Green Flash could theoretically model clouds and predict climate change more accurately than any conventional supercomputer. A possible drawback, however, of building something so specific is that Green Flash's design won’t work with most other high-performance computing applications (such as those that analyze genes or financial transactions) unless the problem has a similar computational profile.
Green Flash would mostly model climates, unlike the majority of mass-produced supercomputers, which are generally built to be multifunctional. However, what it lacks in flexibility, it makes up in accuracy and efficiency. "We feel it would be far cheaper to actually build each of these disciplines their own application-specific computer that's more suitable for their application," Wehner says. "It would be more efficient, it would be cheaper from the power point of view, and it would work better. There would also be more available cycles, because you wouldn't have to share between various scientists."
The design is so cost-effective, and the method so easily generalized, that it would be entirely possible to design a low-power exascale computer – one capable of running any of the 30 to 50 climate models used around the world. In fact, Wehner envisions the need for just six or seven of these custom systems, each tailored to the specific needs of a single discipline. -- by Amanda Aubuchon, © iSGTW