In this blog, I’m going to explore how supercomputers led the way in enabling today’s computers to boast such incredible performance. It all started with the Atlas supercomputer in 1962. This first supercomputer outperformed all other computers by a wide margin. These early developments led to the era of Cray supercomputers, and eventually to other competitors. And these all played a role in the development of today’s personal computer, whether in the form of a desktop, a laptop, a tablet, or a phone.
The Need for Speed
The faster a CPU is, the more computations it can perform per second. The early supercomputers boasted about 10 megaFLOPS. FLOPS stands for FLoating-point Operations Per Second, so those early machines could execute roughly 10,000,000 floating-point operations every second. In contrast, the supercomputers of today perform on the order of 10 petaFLOPS (10,000,000,000,000,000 FLOPS). The technology that enables today's personal computers to inexpensively perform about 40 gigaFLOPS (40,000,000,000 FLOPS) emerged as a by-product of supercomputer development. We enjoy fast CPUs as a direct result of the research and development done on supercomputers.
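To put those unit prefixes in perspective, here is the arithmetic as a few lines of Python (the figures are the round numbers quoted above, not precise benchmarks):

```python
# Rough comparison of the performance figures discussed above.
early_super = 10e6      # 10 megaFLOPS (early supercomputers)
modern_pc = 40e9        # 40 gigaFLOPS (an inexpensive modern PC)
modern_super = 10e15    # 10 petaFLOPS (a modern supercomputer)

print(modern_super / early_super)   # a billion-fold increase
print(modern_super / modern_pc)     # 250,000x a typical PC
```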
Substantial Storage and Memory
Many of us can recall early personal computers with 64KB of RAM. But as of this writing, 16GB of RAM costs $94, a relatively small price to pay for what even today is a lot of RAM. Ample memory enables a computer to perform better and faster. Calculations can be done much more quickly because software can build what are known as lookup tables in memory. These tables store the pre-calculated results of commonly used computations. If the software can simply fetch an answer from RAM instead of performing the calculation, there is a good chance that its performance will improve dramatically. (For computer scientists such as those in my classes, this is a dynamic programming technique called memoization.)
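The idea can be sketched in a few lines of Python. The function here is a hypothetical stand-in for an expensive computation; the point is that the standard library's `functools.lru_cache` builds the in-memory lookup table automatically:

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # memoize: results are stored in a table in RAM
def slow_square(n):
    # A deliberately slow stand-in for an expensive computation.
    total = 0
    for _ in range(n):
        total += n
    return total

print(slow_square(1000))  # first call: computed the slow way
print(slow_square(1000))  # second call: answered from the cache
```

The second call skips the loop entirely, which is exactly the lookup-table trade of memory for speed described above.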
There is no doubt about the role that supercomputers played in bringing down the price of RAM. Supercomputers rely on vast banks of RAM, and feeding this need meant developing the technology to manufacture large amounts of RAM at ever-lower prices. The low memory prices of today, as well as the large capacities of RAM chips, are a direct result of supercomputer research and development.
Multiple Processors and Parallel Processing
Even the early supercomputers had multiple processors. They could manage many simultaneous tasks, and they could also work in parallel on a single task. All of the parallel computing paradigms that we use today were developed in those early days, as were solutions to the difficulties that parallelization brings to the table. For instance, those of us who write parallel code face a condition known as a data race (sometimes called a memory race), which occurs when parallel threads write to the same memory location concurrently. During these early days, many approaches were developed, including locks and reductions, approaches that we take for granted today.
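A minimal sketch of the problem and the lock-based fix, in Python (the thread count and loop size are arbitrary choices for illustration):

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:           # the lock serializes access, preventing a data race
            counter += 1     # read-modify-write on shared memory

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 40000 with the lock held
```

Remove the `with lock:` line and the four threads' read-modify-write sequences can interleave, silently losing updates, which is precisely the race condition those early supercomputer developers had to solve.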
Even the idea of multiple processors in a single personal computer came from the supercomputer research and development establishment. In the early days of the Intel 8088/8086-based IBM PC, nobody could have imagined multiple processors in a personal computer. But the ever-present influence of supercomputers had its impact, as virtually all personal computers of today have multiple cores.
Besides the multiple cores that we all enjoy, there are some staggering gains to be had with today's video cards. My NVIDIA card has 2,000 processing cores. If I write my software correctly, I can upload code to the video card's GPU and have those cores perform calculations completely independently of the main CPU. Many software packages of today do this, including Photoshop and Maya. Using the GPU can double or even triple the performance of correctly written software.
The early supercomputers utilized an approach called vector processing, which provided a fast way to pipeline a series of tasks. This idea has been integrated into Intel CPUs through a process called vectorization, based on a technology called Single Instruction, Multiple Data (SIMD). With this technology, a single operation can be performed simultaneously on multiple array elements. For instance, suppose I have an array of 16-bit integers. With vectorization, I can add a value such as 5 to the entire array, and SIMD will do it 8 integers at a time. This provides a gargantuan speed optimization, provided the software developer wrote vectorization-aware code that followed a few simple rules.
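The 16-bit example above can be expressed in Python with NumPy, which hands whole-array operations like this one down to vectorized machine code (this assumes NumPy is installed; the array contents are arbitrary):

```python
import numpy as np

# Sixteen 16-bit integers: 0, 1, 2, ..., 15.
values = np.arange(16, dtype=np.int16)

# One expression adds 5 to every element. Under the hood, NumPy's compiled
# loop lets the CPU's SIMD units process several int16 values per instruction
# instead of one at a time.
values = values + 5

print(values)  # 5, 6, 7, ..., 20
```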
Today’s vectorization is a direct descendant of the original supercomputer vector processing. This is one strong example of how supercomputer processing paradigms played a major role in the development of personal computer processing.
Cooling the Hardware
The early days of supercomputers faced the formidable task of cooling the components in order to prevent failure. Engineers went to great lengths to implement adequate cooling. Much of it was done with cooling fans similar to those our computers have today. But the most radical cooling was required for the CPUs, and this was almost always some form of liquid cooling. Often circulating water was sufficient, but many times mechanisms similar to air conditioning were used, in which chemicals such as Freon circulated to remove heat.
My computer has three large fans in different places, each of which serves a different purpose. One is adjacent to the power supply, one is directly attached to the CPU, and one is a general system fan. Without these fans, my computer would quickly fail. The techniques discovered during supercomputer research and development contributed a significant amount to the knowledge base on which the design of these cooling systems rests.
It is clear that without the rise of the supercomputer, there would not have been the absolutely amazing rise of the personal computer. And Moore’s Law might never have held at all.