
Sandia Labs News Releases

Supercomputing on the XPRESS track

Sandia aims to create exascale computing operating system

ALBUQUERQUE, N.M. — In the stratosphere of high-performance supercomputing, a team led by Sandia National Laboratories is designing an operating system that can handle the million trillion mathematical operations per second expected of future exascale computers, and will then create prototypes of several of its programming components.

Called the XPRESS project (eXascale Programming Environment and System Software), the effort to achieve a major milestone in million-trillion-operations-per-second supercomputing is funded at $2.3 million a year for three years by the U.S. Department of Energy's Office of Science. The team includes Indiana University and Louisiana State University; the universities of North Carolina, Oregon and Houston; and Oak Ridge and Lawrence Berkeley national laboratories. Work began Sept. 1.

“The project’s goal is to devise an innovative operating system and associated components that will enable exascale computing by 2020, making contributions along the way to improve current petaflop (a million billion operations a second) systems,” said Sandia program lead Ron Brightwell.

Scientists in industry and at research institutions believe that exascale computing speeds will make it possible to more accurately simulate the most complex reactions in such fields as nuclear weapons science, atmospheric science, chemistry and biology, but enormous preparation is necessary before the next generation of supercomputers can achieve such speeds.

“System software on today’s parallel-processing computers is largely based on ideas and technologies developed more than twenty years ago, before processors with hundreds of computing cores were even imagined,” said Brightwell. “The XPRESS project aims to provide a system software foundation designed to maximize the performance and scalability of future large-scale parallel computers, as well as enable a new approach to the science and engineering applications that run on them.”

Current supercomputers operate through a method called parallel processing, in which individual chips work out parts of a problem and contribute results in an order controlled by a master program, much as a conductor controls the output of the instruments in an orchestra. Chip speed itself thus plays a less important role than the ability to synchronize individual results, since the method relies on adding chips to gain traction on harder problems in a reasonable amount of time.
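
For readers who want a more concrete picture of that division of labor, the short C sketch below (written with OpenMP purely as an illustrative assumption; it is not XPRESS code) has each thread stand in for a processor, sum its own slice of an array, and hand its partial result to a combining step that plays the role of the conductor.

    /* Illustrative sketch of parallel processing: each thread sums one
       slice of an array, and the partial sums are merged at the end.
       The array size and the use of OpenMP are assumptions made for
       this example only. Compile with: gcc -fopenmp sum.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N 10000000L

    int main(void)
    {
        double *data = malloc(N * sizeof(double));
        double total = 0.0;

        if (data == NULL)
            return 1;

        for (long i = 0; i < N; i++)
            data[i] = 1.0;                  /* fill with known values */

        /* Each thread sums its own slice; the reduction clause acts as
           the conductor, merging the per-thread partial sums. */
        #pragma omp parallel for reduction(+:total)
        for (long i = 0; i < N; i++)
            total += data[i];

        printf("sum = %.0f using up to %d threads\n",
               total, omp_get_max_threads());
        free(data);
        return 0;
    }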

But merely adding more chips to a supercomputer “orchestra” can make the orchestra unwieldy, the conductor’s job more difficult and, in the end, impossible.

In addition to such programming difficulties, massive arrays of processors generate excess heat that wastes energy and increase the chances some will fail. Designing convenient locations to store data so it’s immediately available to processors is another problem.

The conundrum is, in short, that an exascale computer using current technologies could have the unwanted complexity of a Rube Goldberg contraption that uses the energy of a small city and demands round-the-clock upkeep.

To reduce these problems and start researchers on the road to solutions, the multi-institution XPRESS effort will address specific factors known to degrade supercomputer performance. These include "starvation," which occurs when individual processors are left with too little concurrent work to do; it hinders both efficiency and scalability, because overcoming it requires exposing still more parallelism. Information delays, known as latency effects, need to be reduced through a combination of better locality management, reduction of superfluous messaging and the hiding of communication delays behind useful computation. Overhead, the extra work of managing the parallelism itself, limits how finely a problem can be broken into pieces that are still worth running in parallel, which also reduces scalability. Waiting, which happens when several processors need the same memory at the same time, causes slowdowns as well.
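
The memory-waiting problem in particular can be illustrated with another small C/OpenMP sketch, again an illustration under assumed counts and timing rather than anything from the XPRESS project: the first loop forces every thread to update a single shared counter, so the updates must take turns, while the second loop gives each thread a private counter and merges the results afterward.

    /* Illustrative sketch of memory contention: all threads updating
       one shared counter must serialize, while per-thread counters
       merged by a reduction avoid the waiting. Iteration count and
       timing method are assumptions for this example only. */
    #include <stdio.h>
    #include <omp.h>

    #define ITER 10000000L

    int main(void)
    {
        long shared_count = 0, reduced_count = 0;
        double t0, t_contended, t_reduced;

        /* Contended version: every thread updates the same counter. */
        t0 = omp_get_wtime();
        #pragma omp parallel for
        for (long i = 0; i < ITER; i++) {
            #pragma omp atomic
            shared_count++;
        }
        t_contended = omp_get_wtime() - t0;

        /* Reduced version: each thread counts privately, then merges. */
        t0 = omp_get_wtime();
        #pragma omp parallel for reduction(+:reduced_count)
        for (long i = 0; i < ITER; i++)
            reduced_count++;
        t_reduced = omp_get_wtime() - t0;

        printf("contended: %ld in %.3f s, reduced: %ld in %.3f s\n",
               shared_count, t_contended, reduced_count, t_reduced);
        return 0;
    }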

The team brings together researchers with expertise not only in operating systems, said Brightwell, but also in other system software capabilities, such as performance analysis and dynamic resource management, that are crucial to supporting the features needed to effectively manage the complexities of future exascale systems.


Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Department of Energy's National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies and economic competitiveness.

Sandia news media contact: Neal Singer  nsinger@sandia.gov  (505) 845-7078