The NCHC's Foray into GPU Supercomputing
GPUs have grown rapidly, with up to 800 parallel processing cores!
About three or four years ago, several programming and conversion tools began to appear. The Compute Unified Device Architecture (CUDA) technology, developed by Nvidia, takes advantage of the processing power of the Graphics Processing Unit (GPU) that now ships with most PCs. Using a programming language similar to C, the technology extracts processing power equivalent to that of ten servers from a single PC. Thus, the PC is now able to give the supercomputer a run for its money.
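To give a sense of the C-like style mentioned above, here is a minimal CUDA sketch (the kernel and variable names are our own, chosen for illustration): it adds two arrays element-wise, assigning one GPU thread to each element.

```cuda
#include <stdio.h>

// Kernel: runs on the GPU, one thread per array element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float h_a[1024], h_b[1024], h_c[1024];
    for (int i = 0; i < n; i++) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // Allocate device memory and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch 4 blocks of 256 threads: 1024 threads, one per element.
    vecAdd<<<4, 256>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[10] = %.1f\n", h_c[10]);  // 10 + 20 = 30.0
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}
```

The host code orchestrates memory transfers while the kernel body itself is ordinary C arithmetic, which is why the learning curve for C programmers is comparatively gentle.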
The National Center for High-Performance Computing (NCHC), Taiwan's only full-service supercomputing facility, also noticed the trend in GPU application development and began incorporating GPUs into its HPC environment last year. Dr. Jyh Shyong Ho, head of the NCHC's GPU unit, said, "The most suitable use for the GPU is in repetitive, large-scale computation, where data can be split into smaller chunks and fed to each GPU. The performance of this multi-core computation is faster than that of a single processor alone."
In recent years, the price of multi-core GPUs has declined steadily and more uses for them have been found. GPUs manufactured by Nvidia contain 240 cores (e.g., the GTX 280), while those of its competitor, AMD/ATI, contain 800 cores (e.g., the FireStream). Floating-point performance of the AMD chip approaches one TeraFlop (1,000 GigaFlops). These numbers far exceed those of the dual-core, quad-core, or even the eight-core CPUs due to arrive next year. Incorporating GPUs into the HPC environment will therefore not only reduce the cost of purchasing servers but also increase computational capacity. The NCHC's end users will benefit as a result.
With wide-ranging applications, the onus is on the programmers!
It is becoming increasingly common to use GPUs to accelerate computation. Fields outside of HPC have also successfully used GPUs for their computational needs; examples include MRI medical image processing and the compression and decompression of audio-visual data. Several labs at domestic universities use GPUs to increase their computational power, in fields including astronomy (universe modeling), high-energy and particle physics (Monte Carlo simulation), fluid dynamics, structural mechanics, and molecular physics. Because different fields use different computing methods, the speedup from the GPU is not uniform: some realize a tenfold increase in efficiency, others a hundredfold. There is no set standard.
How to utilize the GPU most effectively is the question that the NCHC is trying to answer. As Taiwan's innovator in the HPC field, the NCHC is rapidly researching and developing ways to take advantage of the GPU HPC environment. The main concept behind GPU computing is to break large amounts of data into smaller chunks that can be computed in parallel by multiple cores. To take full advantage of GPU computing, researchers have to tailor their algorithms to the GPU hardware architecture. The commercial sector currently relies on the experience of research institutions to successfully integrate GPU technology.
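The chunking idea described above can be sketched as a CUDA "grid-stride loop" (an illustrative example, not the NCHC's actual code): a fixed number of threads covers an array of any size, with each thread repeatedly stepping forward by the total thread count to pick up its next chunk of work.

```cuda
// Sketch of the "break data into chunks" concept. The grid may contain
// far fewer threads than there are elements; each thread processes every
// (gridDim.x * blockDim.x)-th element, so the whole array is still covered.
__global__ void scaleArray(float *data, int n, float factor) {
    int stride = gridDim.x * blockDim.x;           // total threads in the grid
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
        data[i] *= factor;                         // each thread's chunk of work
}
```

This pattern is one way algorithms are tailored to the hardware: the launch size can be matched to the number of physical cores while the data size varies independently.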
To increase domestic HPC capacity, the NCHC promotes the GPU computation environment.
The NCHC provides efficient, low-cost HPC services. The PC Cluster 1350 supercomputer built by the NCHC last year was ranked first in Taiwan and 35th in the world at the time of its construction. However, the race to build the world's fastest supercomputer is never-ending. On the Top500 supercomputer list announced earlier this year, the NCHC's PC Cluster 1350 had already fallen to 91st place. This large drop reflects the advancements of other nations in the HPC race. More and more countries are aggressively pursuing GPU computation. France, for example, will announce a commercial cluster-based GPU environment by the end of this year.
San Liang Chu, head of the NCHC's Computational Service Division, explains, "The evolution from mainframe computers to cluster computers means that we can achieve high computation capacity with less money. If the GPU can be utilized further, the effect will be amazing. For our center, that means more space savings and a greener facility. For users, it means greater utilization of our computing power and more time saved."
The NCHC built upon its HPC-related experience and started a GPU testing environment last year. A team was formed to experiment with the environment and test its user interface and possible applications. The team has made headway writing software for surgical cranial implants and for a waterway-modeling project. Currently, the testing environment consists of five PCs, each with two to four GPU cards. Early next year, Taiwan's first computer cluster with 16 GPUs will be assembled, with a projected floating-point performance of 64 TeraFlops. Those interested in testing CUDA programs are welcome to apply for an account.
The GPU's Effect on the Future of HPC Development
The rise of GPU computing stems from the physical limits on CPU miniaturization: there will come a point where power consumption and heat dissipation make further shrinking of these minuscule components impractical. The CPU's foray into multi-core parallel processing can hardly compare to the GPU's hundreds of cores.
The GPU takes its instructions from the CPU. Its role is to accelerate computing tasks, and it will become an important component of HPC. The sooner GPU computing is mastered, the sooner its cost savings and competitive advantages can be realized.
Center left: Dr. Jyh Shyong Ho, head of GPU research team. Center: Director Eugene Yeh. Center right: Researcher San Liang Chu, head of the Computational Service Division.
- CUDA is a software technology developed by Nvidia.
- Nvidia and AMD/ATI related information mentioned in this article is public information. To find out more, please visit their websites.