An Overview of HPC and Challenges for the Future (Part 1)



The keynote speech given by Dr. Jack Dongarra at HPC Asia 2009 examines the history of high performance computing (HPC) from its beginnings in the 1950s, through the present, and into the very near future. The talk also takes an in-depth look at the TOP500 supercomputing list, which Dr. Dongarra and several of his colleagues began in 1993.
This educational and enlightening talk also examines current trends in HPC, such as “many-core” chips and GPUs, as well as future obstacles in the ongoing development of HPC.


Part 1 | Part 2 | Part 3 | Part 4 | Part 5 | Part 6 | Part 7   [PDF Download]  [Video]

Part 1: The History of HPC and the Top500 List

What I’d like to do today is to talk about some issues relating to High Performance Computing (HPC) and also take a look forward, at the end of the talk, to some of the challenges we face in dealing with HPC in the future. You know the story as well as I do in terms of the tremendous improvements that have been made over time in HPC. What we’re looking at here on this first PowerPoint slide is the evolution of HPC from 1950 until today as measured in terms of performance. “Performance” for HPC is usually thought of in terms of floating point operations per second, or flop/s. When I talk about floating point operations, I’ll be very specific and talk about 64-bit operations, and those “operations” that I’m referring to are addition and multiplication.

Back in the 50s, we had computers that operated at around one KFlop/s, that is, 1,000 Flop/s. Those machines were rather sequential in nature, meaning that they executed one instruction and, after it completed, went on to the next instruction. In the 70s, we had machines that, because of some architectural features and hardware enhancements, raised performance and gave rise to “superscalar” instructions. What I mean by “superscalar” is what we usually refer to today as having “RISC architectures.” That is, they are able to execute multiple instructions, and multiple parts of instructions, simultaneously. That, in turn, gave rise to performance levels around the MFlop/s, or 1,000,000 Flop/s, in the 70s. The 70s were characterized by the computers from CDC and also from IBM. In the 80s, we had “vector computing.” Now, the idea behind “vector computing” is to issue one instruction and have that instruction take place on a whole sequence of data. Perhaps the original Cray system is the best known example of a vector computer. In the 80s, we saw performance levels approaching a GFlop/s, that is, 1,000,000,000 Flop/s. Then, in the 90s, we had “parallel computing” take place in a rather large way. In the 90s, we were making machines that had hundreds and even thousands of processors associated with them, and we had performance levels reaching into the TFlop/s range. Today we’re at the PFlop/s point, where we have superscalar, special purpose parallel machines. So we’re looking at 10^15 Flop/s with these machines today.
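
To make the scalar-versus-vector distinction above concrete, here is a minimal sketch in Python, using NumPy array operations as a software stand-in for the hardware vector instructions the talk describes; it illustrates the programming model only, not the original Cray hardware.

```python
# A minimal sketch of the scalar vs. vector execution models described
# above. NumPy is used here only as an analogy for hardware vector
# instructions; the talk itself is about hardware, not this library.
import numpy as np

n = 1_000_000
a = np.arange(n, dtype=np.float64)
b = np.arange(n, dtype=np.float64)

# Scalar model: one add completes before the next begins, as on the
# sequential machines of the 1950s.
c_scalar = np.empty_like(a)
for i in range(n):
    c_scalar[i] = a[i] + b[i]

# Vector model: a single "instruction" applies the add across a whole
# sequence of data, as on a Cray-style vector machine.
c_vector = a + b

assert np.array_equal(c_scalar, c_vector)
```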

What I’d like to do is look at some data that we’ve collected over time that looks at HPC. This is data that we call the TOP500 list. So think of it like this: every six months we go out and we look at all the supercomputers in the world and we rank the 500 fastest machines according to a benchmark. For better or for worse, the benchmark we use is something called the LINPACK benchmark. LINPACK solves a system of linear equations: Ax = b. The idea is to have a dense matrix, make it as big as possible on your machine, and solve that system of equations on your computer. And what we’re going to look at is the rate of execution for solving that problem on your system using the best approach and best methods that you can. Typically, what we see happening is that as we increase the size of the problem, the rate of execution rises until it reaches an asymptotic point. We’re interested in capturing that asymptotic point as the peak performance of your computer. This list of the 500 fastest machines is updated twice a year. It’s updated in November at the supercomputing conference held in the U.S. and again in June at the international supercomputer conference held in Germany. The most recent data we have is from the meeting in November of 2008.
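
As a rough illustration of the measurement just described, here is a small LINPACK-style sketch in Python with NumPy. It is not the official benchmark code (HPL); it simply times a dense solve of Ax = b and converts the elapsed time into a flop rate using the conventional 2/3·n^3 + 2·n^2 operation count.

```python
# A LINPACK-style sketch (not the official HPL benchmark): time a dense
# solve of Ax = b and convert the elapsed time into a flop/s rate using
# the conventional 2/3*n^3 + 2*n^2 operation count.
import time
import numpy as np

def linpack_rate_gflops(n):
    rng = np.random.default_rng(0)
    A = rng.random((n, n))          # dense random matrix, 64-bit floats
    b = rng.random(n)
    t0 = time.perf_counter()
    np.linalg.solve(A, b)           # LU factorization + triangular solves
    elapsed = time.perf_counter() - t0
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / elapsed / 1e9    # GFlop/s

# Increasing n typically drives the rate toward its asymptotic value,
# which is the number the TOP500 ranking captures.
for n in (1000, 2000, 4000):
    print(f"n = {n}: {linpack_rate_gflops(n):.1f} GFlop/s")
```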


Part 2: TOP500 Computational Growth 1993~2019
