Intel’s massively parallel Xeon Phi coprocessor powered a Top 10 supercomputer on the 20th anniversary of the Top500 list. At 2.6 petaFLOPS, Stampede ranked seventh out of 500 supercomputers and was one of only 23 petaFLOPS-class systems on this year’s list.
Located at the Texas Advanced Computing Center at the University of Texas at Austin, Stampede is built from Dell PowerEdge C8220 chassis pairing Xeon E5 host processors with Xeon Phi coprocessors. Only 62 of the Top500 supercomputers used coprocessors for acceleration this year, including eight using advanced prototypes of Intel’s forthcoming Xeon Phi.
Other Xeon Phi–based winners in the Top 100 included the Discover supercomputer, built on IBM’s iDataPlex DX360M4 chassis with a total of 35,568 Xeon E5-2670 8C 2.600GHz and Xeon Phi 5110P coprocessor cores, connected by InfiniBand QDR.
Also in the Top 100 was Endeavor, Intel’s own cluster at its Customer Response Team (CRT) data center in DuPont, Washington. Endeavor uses 27,489 Xeon E5-2670 8C 2.600GHz and Xeon Phi coprocessor cores communicating over InfiniBand FDR.
Just behind Endeavor was the MVS-10P Tornado, with 28,704 Xeon E5-2690 8C 2.900GHz and Xeon Phi cores connected by InfiniBand FDR, built by Russia’s RSC Group, which had two winning supercomputers using Xeon Phi coprocessors at the Joint Supercomputer Center.
Also, the supercomputer called Maia at the NASA Ames Research Center came in 118th, using 17,408 Xeon and Xeon Phi cores. And the Appro GreenBlade GB824M at the National Institute for Computational Sciences at the University of Tennessee took 254th place, harnessing 9,216 Xeon E5-2670 8C 2.600GHz and Intel Xeon Phi 5110P cores connected by InfiniBand FDR.
Intel, InfiniBand Dominate
Systems with multi-core processors dominated the list: more than 84% used six or more cores, and 46% used eight or more. Intel dominated the multi-core architectures, with 76% of the Top500 supercomputers using Intel processors. Intel Xeon multi-core processors powered three of the top 10 winners, including the Texas Advanced Computing Center’s Stampede with its Xeon Phi coprocessors.
Just ahead of Stampede was SuperMUC, an IBM Xeon cluster of 147,456 cores built on the iDataPlex DX360M4 architecture with Xeon E5-2680 8C 2.70GHz processors communicating over InfiniBand FDR. Just behind the 2.9-petaFLOPS SuperMUC was the 2.6-petaFLOPS Stampede, whose Dell PowerEdge C8220 chassis hold a total of 204,900 Xeon E5-2680 and Xeon Phi coprocessor cores communicating over InfiniBand FDR.
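The relationship between a system’s core count, clock speed, and its petaFLOPS rating can be sketched with simple arithmetic. A back-of-envelope estimate for SuperMUC’s theoretical peak, assuming each Sandy Bridge Xeon E5 core retires 8 double-precision FLOPs per cycle with AVX (an assumption for illustration, not a figure from the list):

```python
# Back-of-envelope theoretical peak for SuperMUC.
cores = 147_456          # total cores reported above
clock_ghz = 2.7          # Xeon E5-2680 clock speed
flops_per_cycle = 8      # assumed AVX double-precision rate per core

# cores * GHz * FLOPs/cycle gives aggregate GFLOPS; divide to get PFLOPS
peak_pflops = cores * clock_ghz * flops_per_cycle / 1e6
print(f"Theoretical peak: {peak_pflops:.2f} PFLOPS")  # Theoretical peak: 3.19 PFLOPS
```

Under those assumptions, the measured 2.9 petaFLOPS would be roughly 90% of theoretical peak, a plausible Linpack efficiency for an InfiniBand cluster of this era.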
And just behind Stampede in the top 10 was the 2.55-petaFLOPS parallel supercomputer Tianhe-1A at the National Supercomputing Center in Tianjin, China, powered by 186,368 cores from Intel Xeon X5670 6C 2.930GHz processors and Nvidia GPUs.
In twelfth place was Curie, a 1.4-petaFLOPS “green infrastructure” supercomputer that aims to reduce energy consumption and the carbon footprint of high-performance computing (HPC) in Europe. The Curie “thin node” system at the Très Grand Centre de calcul (TGCC) is a Bull SA cluster with a total of 77,184 cores from Intel Xeon E5-2680 8C 2.700GHz multi-core processors configured as a parallel processor communicating over InfiniBand QDR. Curie is operated by the French Alternative Energies and Atomic Energy Commission (CEA) and owned by the Grand Equipement National de Calcul Intensif (GENCI, France).
Of the top 500 supercomputers in the world, 380 used Intel processors, and 225 used InfiniBand interconnection fabric (marketed separately by Intel and Mellanox), or 45% of the systems on the Top500 list. Of those not using InfiniBand, most (188) used Gigabit Ethernet, and the remaining 87 used custom or proprietary interconnects.
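As a quick consistency check on the interconnect figures above (a sketch using only the counts as reported in the list):

```python
# Interconnect counts reported for the November 2012 Top500 list
total = 500
infiniband = 225
gigabit_ethernet = 188
custom_or_proprietary = 87

# The three interconnect families account for the entire list
assert infiniband + gigabit_ethernet + custom_or_proprietary == total

# InfiniBand's share matches the 45% figure cited above
print(f"InfiniBand share: {infiniband / total:.0%}")  # InfiniBand share: 45%
```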
Read more about this year’s list here: http://www.top500.org/blog/lists/2012/11/press-release/
Intel processors powered 76% of the Top500 Supercomputer Sites worldwide.