DOE Awards Intel $19 Million for Exascale by 2020

The Department of Energy (DOE) announced a $19 million award to Intel Federal to develop exascale processors, next-generation memories, and ultra-fast input/output (I/O) technologies for Lawrence Livermore National Security’s (LLNS’s) Extreme-Scale Computing Research and Development “FastForward” program.

By enlisting broad cooperative contributions from industry, academia, and other national laboratories, DOE's LLNS aims to develop all the high-performance computing (HPC) capabilities needed to achieve exascale systems by 2020.

As the first member of its Many Integrated Core (MIC) architecture, Intel's Xeon Phi coprocessor board makes use of its 22-nanometer 3-D Tri-gate transistors. Source: Intel®

“Public-private partnerships will significantly help move high-performance computing forward [allowing] current and future generations of scientists and engineers to develop breakthrough advancements,” says David Patterson, president of Intel Federal LLC.

Intel Federal LLC, a wholly owned subsidiary of Intel Corp., will act as a subcontractor to LLNS's FastForward program, developing exascale Xeon Phi-based MIC processors, advanced memory technologies, and high-speed interconnects.

DOE has also announced awards to AMD, EMC, HDF Group, Nvidia, and Whamcloud. All contracts aim to help LLNS develop the computing infrastructure necessary to meet its mandates of applying exascale computing technologies to aid the economy, increase security, steward the aging U.S. nuclear arsenal, and optimize energy consumption.

“Exascale systems are critical for achieving the Department of Energy’s goals – to ensure national security and promote scientific advancements,” states William Harrod, Division Director of Research in the DOE Office of Science’s Advanced Scientific Computing Research. “From long-term weather forecasting and developing drugs for the most severe diseases to analyzing new ways to use energy efficiently, science and engineering researchers need much more computing capacity than is available today in petascale systems.”

Intel's 3-D memory chip stacks developed with Micron Technology have 15 times the performance of a single DDR3 memory module, use 70 percent less energy, and occupy one-tenth the space. Source: Intel®

Exascale computing requires a 1000-fold increase in computing power over today’s petascale systems, but aims to achieve those goals with only a modest increase in power consumption to about 20 megawatts. As a result, all processor, memory, and I/O speedups will have to be achieved with more innovative approaches than merely turning up the clock speed. LLNS’s FastForward program is aimed at cultivating all the technologies needed to enable exascale HPCs within its trim energy budget.
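The arithmetic behind that energy budget can be sketched in a few lines. The 1 exaFLOPS and 20 megawatt figures come from the article; the petascale baseline numbers below are illustrative assumptions (roughly representative of 2012-era systems), not DOE figures.

```python
# Back-of-envelope: the power efficiency implied by the exascale target.
# Only the 1 exaFLOPS / 20 MW goal comes from the article; the petascale
# baseline below is an assumed, representative figure.

EXA_FLOPS = 1e18          # 1 exaFLOPS target
POWER_BUDGET_W = 20e6     # ~20 megawatt power cap

target_eff = EXA_FLOPS / POWER_BUDGET_W   # required FLOPS per watt
print(f"Required efficiency: {target_eff / 1e9:.0f} GFLOPS/W")   # 50 GFLOPS/W

# An assumed 2012-era petascale machine: ~10 petaFLOPS at ~10 MW.
peta_flops = 10e15
peta_power_w = 10e6
current_eff = peta_flops / peta_power_w
print(f"Petascale-era efficiency: {current_eff / 1e9:.0f} GFLOPS/W")  # 1 GFLOPS/W

print(f"Efficiency gap to close: {target_eff / current_eff:.0f}x")    # 50x
```

The takeaway is why clock speed alone cannot bridge the gap: a 1000-fold performance increase within roughly the same power envelope demands an order-of-magnitude leap in FLOPS per watt across processors, memory, and interconnects.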

Intel Federal’s twofold contract will apply the whole range of Intel’s capabilities, from basic research to development to prototyping to systems integration. Besides its MIC architecture and Xeon Phi massively parallel coprocessors, Intel will also enlist its 3-D memory technology developed with Micron Technology, called the Hybrid Memory Cube, as well as the high-speed interconnect technologies it acquired earlier this year: the InfiniBand switched-fabric networking assets acquired from QLogic and the Gemini interconnect technology acquired from Cray.

________________________________________________________________

Colin Johnson is a Geeknet contributing editor and veteran electronics journalist, writing for publications from McGraw-Hill’s Electronics to UBM’s EETimes. Colin has written thousands of technology articles covered by a diverse range of major media outlets, from the ultra-liberal National Public Radio (NPR) to the ultra-conservative Rush Limbaugh Show. A graduate of the University of Michigan’s Computer, Control and Information Engineering (CICE) program, his master’s project was to “solve” the parallel processing problem 20 years ago when engineers thought it would only take a few years. Since then, he has written extensively about the challenges of parallel processors, including emulating those in the human brain in his John Wiley & Sons book Cognizers – Neural Networks and Machines that Think.

Posted by Colin Johnson, Geeknet Contributing Editor
R. Colin Johnson

DOE's biggest concern in its call for proposals for its "FastForward" exascale supercomputing program is to cut the power-to-performance ratio by a factor of five. Here is how DOE's request-for-proposals puts it: "To get the additional factor of five improvements in power efficiency over projections, a number of technical areas in hardware design need to be explored. These may include: energy efficient hardware building blocks--central processing units, memory, interconnect--novel cooling, and packaging, silicon-photonic communication, and power-aware runtime software and algorithms." It also promised follow-on funding to make sure it is developing every technology necessary to realize supercomputers with exaFLOPS of performance, but which consume only 20 megawatts of power, by 2020.