Whether designed to predict the spread of an epidemic, understand the potential impacts of climate change, or model the acoustical signature of a newly designed ship hull, computer simulations are an essential tool of scientific discovery. By using mathematical models that capture the complex physical phenomena of the real world, scientists and engineers can validate theories and explore system dynamics that are too costly to test experimentally and too complicated to analyze theoretically. Over the past half century, as supercomputers got faster and more powerful, such simulations became ever more accurate and useful. But in recent years even the best computer architectures haven’t been able to keep up with demand for the kind of simulation processing power needed to handle exceedingly complex design optimization and related problems.
To address this challenge, DARPA has announced its Accelerated Computation for Efficient Scientific Simulation (ACCESS) program. The program builds on inputs from a request for information issued in 2015 for novel hybrid computing concepts.
A standard computer cluster is equipped with multiple central processing units (CPUs), each programmed to tackle a particular piece of a problem. This conventional design is not well suited to solving the kinds of equations at the core of large-scale simulations, such as those describing complex fluid dynamics and plasmas. These critical equations, known as partial differential equations, describe fundamental physical principles like motion, diffusion, and equilibrium. But because they involve dynamics over a large range of physical parameters and spatial scales relating to the problems of interest, they do not lend themselves to being easily broken up and solved in discrete pieces by individual CPUs. A processor specially designed for such systems of equations could enable revolutionary new simulation capabilities for design, prediction, and discovery. But what might that processor look like?
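To see why partial differential equations resist being split into independent pieces, consider a minimal sketch (not part of the ACCESS program, just a textbook illustration) of the 1-D heat equation solved with an explicit finite-difference stencil. Each grid point's next value depends on its immediate neighbors, so if the grid were divided among many CPUs, every processor would have to exchange its boundary values with its neighbors at every time step:

```python
import numpy as np

def diffuse(u, alpha, dx, dt, steps):
    """Explicit finite-difference solver for u_t = alpha * u_xx.

    Each interior point is updated from its left and right neighbors,
    so partitioning the grid across processors forces a halo exchange
    of boundary values on every step -- the tight coupling that makes
    large multi-scale PDE simulations hard to parallelize cleanly.
    """
    u = u.copy()
    r = alpha * dt / dx**2  # stability requires r <= 0.5 for this scheme
    for _ in range(steps):
        # Stencil update: u[i] += r * (u[i-1] - 2*u[i] + u[i+1])
        u[1:-1] += r * (u[:-2] - 2 * u[1:-1] + u[2:])
    return u

# Initial condition: a sharp spike of heat in the middle of a rod
# whose ends are held at zero temperature.
u0 = np.zeros(101)
u0[50] = 1.0
u1 = diffuse(u0, alpha=1.0, dx=1.0, dt=0.25, steps=100)
```

After 100 steps the spike has spread into a smooth symmetric bump. Analog hardware, by contrast, could in principle evolve the whole continuous field at once rather than marching through this discretized loop.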
“Supercomputers today face bottlenecks in converting physical systems into and out of binary form. We are going to explore if there are fundamentally better ways to solve multi-scale partial differential equations that describe complex physical systems, such as those encountered in plasmas and fluid dynamics,” said Vincent Tang, DARPA program manager. “The goal is to develop new hybrid computational architectures for scalable approaches to simulating these complex systems, in order to allow the equivalent of petaflops or more of computational power to be effectively applied across a simulation, all on a benchtop form factor.”
As part of the new program, DARPA is interested in pursuing the somewhat counterintuitive premise that “old fashioned” analog approaches may be part of the solution. Analog computers, which solve equations by manipulating continuously changing values instead of discrete measurements, have been around for more than a century. But in the 1950s and 1960s, as transistor-based digital computers proved more efficient for most kinds of problems, analog methods fell into disuse. They haven’t been forgotten, however. And their potential to excel at dynamical problems too challenging for today’s digital processors may now be bolstered by other recent breakthroughs, including advances in microelectromechanical systems, optical engineering, microfluidics, metamaterials, and even approaches to using DNA as a computational platform. It is conceivable, according to Tang, that novel computational substrates could vastly exceed the performance of modern CPUs for certain specialized problems, if they can be scaled and integrated into modern computer architectures.
“Today, we need a room full of supercomputers to handle the simulations, which can take weeks or months for results,” Tang said. “With ACCESS, we’re aiming for a benchtop set-up that can solve large problems of complex physical systems in a matter of hours.”
For full program details, view the BAA here.