With supercomputers constantly setting new speed records, some predict that we will have exascale supercomputers by the end of the decade. An exascale supercomputer would be able to perform a quintillion (or 1,000 quadrillion) floating point operations per second, approximately 1,000 times faster than today's behemoths.
Unfortunately, software is not yet ready to keep up with this blistering pace. So even if an exascale supercomputer were available today, some experts claim it would not live up to its reputation for some time.
But Jack Dongarra, distinguished professor of computer science at the University of Tennessee in Knoxville, and one of the creators of the Top500 supercomputer list, aims to fix that with the Parallel Runtime Scheduling and Execution Controller, or PaRSEC.
The PaRSEC site states that “PaRSEC is a generic framework for architecture aware scheduling and management of micro-tasks on distributed many-core heterogeneous architectures. … The framework includes libraries, a runtime system, and development tools to help application developers tackle the difficult task of porting their applications to highly heterogeneous and diverse environment.”
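To make the idea of "scheduling and management of micro-tasks" concrete, here is a minimal conceptual sketch of a dependency-driven task graph in Python. This is only an illustration of the general technique; the task names and the `run_graph`/`topo_order` helpers are hypothetical, and this is not PaRSEC's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical task graph: name -> (dependencies, function of the dep results).
# A conceptual sketch of dependency-driven micro-task scheduling;
# NOT PaRSEC's API, just an illustration of the idea.
TASKS = {
    "load_a":   ([], lambda: 2.0),
    "load_b":   ([], lambda: 3.0),
    "multiply": (["load_a", "load_b"], lambda a, b: a * b),
    "add_one":  (["multiply"], lambda m: m + 1.0),
}

def topo_order(tasks):
    """Order tasks so every task appears after its dependencies."""
    order, seen = [], set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in tasks[name][0]:
            visit(dep)
        order.append(name)
    for name in tasks:
        visit(name)
    return order

def run_graph(tasks):
    """Submit each task to a thread pool; workers block on their inputs."""
    with ThreadPoolExecutor() as pool:
        futures = {}
        for name in topo_order(tasks):
            deps, fn = tasks[name]
            # Bind fn/deps as defaults so each lambda keeps its own task.
            futures[name] = pool.submit(
                lambda fn=fn, deps=deps: fn(*[futures[d].result() for d in deps])
            )
        return {name: f.result() for name, f in futures.items()}

print(run_graph(TASKS)["add_one"])  # 7.0
```

A production runtime like PaRSEC does far more than this sketch: it discovers ready tasks dynamically, places them with awareness of the hardware (cores, accelerators, memory hierarchy), and distributes them across nodes rather than threads.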
The project is funded by a three-year, $1 million grant from the U.S. Department of Energy to study obstacles to realizing an exascale-class computer.
Dongarra noted recently that work needs to be done now to develop the techniques and software needed for exascale computers to happen. He said that while today's supercomputers have processor counts in the millions, tomorrow's exascale supercomputers will have processors numbering in the billions. He also expects the design of these new supercomputers to be different, taking advantage of multiple central processing units and hybrid systems that overcome the challenges of heat, power consumption, leakage voltage, and limited bandwidth on a single chip.
He is also developing an algorithm to overcome a reliability problem associated with the increasing number of processors: the more components a machine has, the more likely it is that one of them fails during a long-running computation.
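The article does not specify the algorithm, but one well-known family of techniques for this problem is algorithm-based fault tolerance, in which data is augmented with checksums so a lost piece can be reconstructed instead of restarting the whole computation. The sketch below is a generic toy illustration of that idea, not Dongarra's algorithm; the `with_checksum` and `recover` helpers are hypothetical names.

```python
# Toy illustration of algorithm-based fault tolerance (ABFT): a checksum
# lets us rebuild one lost value without redoing the computation.
# Generic sketch only -- not the specific algorithm mentioned in the article.

def with_checksum(data):
    """Append a checksum element so the data can survive one loss."""
    return data + [sum(data)]

def recover(protected, lost_index):
    """Reconstruct a single lost element from the checksum."""
    survivors = [v for i, v in enumerate(protected[:-1]) if i != lost_index]
    return protected[-1] - sum(survivors)

data = with_checksum([4, 7, 1])   # [4, 7, 1, 12]
print(recover(data, 1))           # 7: the lost element is reconstructed
```

Real ABFT schemes apply the same principle to matrix rows and columns so that a failed node's portion of a linear-algebra computation can be rebuilt from checksum rows held elsewhere.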