
Accelerate Sciences with Large-Scale GPU Parallel Computing

Publish Date: 2024.12.12
2024.12.12 Announcement: The review process has concluded. The list of approved proposals is now available.

2024.11.29 Announcement: The submission deadline has passed, with a total of 19 proposals received. Results will be announced and applicants notified following the review meeting.

NCHC invites researchers and scientists to engage in large-scale computational studies aimed at accelerating scientific and engineering simulations with GPGPU computing, either by porting existing CPU-based tools or by developing new simulation software.

Introduction
With the boom in generative AI, data centers worldwide are building or expanding large-scale GPU clusters to meet the demand for training large language models. GPU clusters are not only a necessity for such workloads but also a more economical choice in terms of arithmetic operations per unit cost. For these reasons, accelerating scientific and engineering computations with widely available GPU compute power is of fundamental importance from both strategic and technical viewpoints. With the scale of GPU clusters reaching hundreds of petaFLOPS, and in some cases exceeding one exaFLOPS, accelerated computations can also tackle very large-scale problems that have not been attempted before. Indeed, the top-tier GPU supercomputers on the Top500 list are all devoted to very large-scale scientific and engineering simulations, in addition to the booming AI workloads. This call for proposals encourages the research community to take on very large-scale problems using the GPU power NCHC will provide over the next 3–4 years. Priority is given to applications with high impact on scientific discovery or that can greatly shorten design time in engineering problems. Proposals that address parallel efficiency for multi-node GPU computations are strongly encouraged.

Objective
  • Enable researchers to tackle complex problems that demand immense computational power by leveraging the parallel processing capabilities of multiple GPUs across nodes.
  • Facilitate the development of innovative algorithms and techniques that can effectively utilize GPU compute without incurring data-movement penalties (see the sketch after this list).
  • Encourage collaboration between researchers and AI computing experts to push the boundaries of what is possible with parallel computing, including previously unattempted problem sizes.
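
As a purely illustrative example of the cross-node GPU usage above, the following sketch distributes a reduction over several GPUs with mpi4py and CuPy. It assumes one MPI rank per GPU and a CUDA-aware MPI build, so device buffers can be handed to MPI directly and staging through host memory (the data-movement penalty mentioned above) is avoided; none of these tool choices are requirements of this call.

    # Minimal multi-node, multi-GPU sketch (illustrative only, not an official
    # NCHC template). Requires mpi4py >= 3.1 and a CUDA-aware MPI build so that
    # CuPy device arrays can be passed to MPI without host-side staging.
    from mpi4py import MPI
    import cupy as cp

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Bind each MPI rank to one local GPU (assumes one rank per GPU).
    cp.cuda.Device(rank % cp.cuda.runtime.getDeviceCount()).use()

    # Each rank owns its own slice of the global problem.
    n = 10_000_000
    local = cp.arange(rank * n, (rank + 1) * n, dtype=cp.float64)
    local_sum = local.sum().reshape(1)

    # Global reduction performed directly on GPU buffers.
    global_sum = cp.empty_like(local_sum)
    comm.Allreduce(local_sum, global_sum, op=MPI.SUM)

    if rank == 0:
        print(f"global sum over {size} ranks: {float(global_sum[0]):.3e}")

Such a job would typically be launched with the site's MPI launcher (for example, mpirun -np <ranks> python script.py), with the rank count matched to the number of GPUs requested.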

Important Dates and Procedure
  • Call Opening 2024/11/01
  • Final Submission 2024/11/29
  • Awards Announcement 2024/12/10

Features
  • Project period: 1 year.
  • Eligible organizations: research institutions (such as universities, university colleges, strategic research centers, and national laboratories)
  • We are accepting two types of proposals:
    - GPU parallelization of existing codes: Proposals that focus on optimizing existing codes or developing new algorithms specifically designed for large-scale simulations utilizing multiple nodes and GPUs. This includes enhancing computational efficiency through effective parallelization strategies (a minimal porting sketch follows this list). Codes that require substantial development from scratch are allowed, provided the work can be completed within the project period.
    - Open-source package GPU workflow optimization: Proposals aimed at improving the performance of open-source software packages by optimizing their GPU workflows. The goal is to improve high-throughput and distributed GPU computing efficiency, making these tools more accessible for scientific research.
  • Acceptable costs: at least NTD 500,000 and at most NTD 1,000,000 per year for the entire project, covering research staffing, operations, and consumables.
  • Access to HPC facilities, including auxiliary CPU usage, will be offered by NCHC free of charge.
  • Applications should be submitted by email to: acyang@narlabs.org.tw
  • Download the application form: <PDF> <ODF> <Docx>
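
As a minimal illustration of the first proposal type above, the sketch below ports an existing NumPy-based stencil update to the GPU through CuPy's NumPy-compatible API, keeping the data resident on the device between steps. The function name, grid size, and update rule are invented for illustration; applicants may of course use any language, library, or parallelization strategy.

    # Illustrative CPU-to-GPU port: the same source runs with NumPy on the CPU
    # or CuPy on the GPU by swapping the array module (a common first step
    # before deeper multi-GPU parallelization).
    import numpy as np
    import cupy as cp

    def run_step(xp, u, dt=0.1):
        # Explicit 5-point stencil update; xp is either numpy or cupy.
        lap = (xp.roll(u, 1, axis=0) + xp.roll(u, -1, axis=0)
               + xp.roll(u, 1, axis=1) + xp.roll(u, -1, axis=1) - 4.0 * u)
        return u + dt * lap

    u_cpu = np.random.rand(4096, 4096)
    u_gpu = cp.asarray(u_cpu)          # one-time host-to-device transfer

    u_cpu = run_step(np, u_cpu)        # original CPU path
    u_gpu = run_step(cp, u_gpu)        # identical code on the GPU

    # Keep arrays on the GPU across iterations; copy back only when
    # results are needed on the host.
    result = cp.asnumpy(u_gpu)

The same pattern extends to multiple GPUs by giving each rank its own subdomain and exchanging only the boundary (halo) data between devices.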

Proposal Guidelines
Proposals should outline how GPU acceleration will be utilized in your research project. Key components to include:
  1. Project Overview: Describe the research objectives, the computational challenges, and the potential impact of the project.
  2. Multi-GPU Utilization: Explain how you plan to utilize multiple GPUs to accelerate your computations. Provide details on the specific algorithms, techniques, or applications that will be parallelized.
  3. Performance Improvements: Quantify the expected performance improvements and speedups achievable by using multiple GPUs compared to a single GPU or a CPU-based system (an illustrative scaling estimate follows this list).
  4. Scalability and Efficiency: Discuss the scalability of your approach and how you plan to ensure efficient utilization of the available GPU resources.
  5. Expected Outcomes and Short-Term Goals: Outline the anticipated results of your project, including any specific milestones you aim to achieve in the short term, such as prototype development, initial testing results, or publications.
  6. Budget and Resources: Provide a detailed budget outlining the resources required for the successful implementation of the project, including GPU hardware, software licenses, and personnel costs.
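
For item 3, an analytical model is often sufficient to state the expected speedup before detailed benchmarks exist. The snippet below evaluates a simple Amdahl-style strong-scaling estimate; the 98% parallelizable fraction is an assumed figure for illustration, not a measured value.

    # Illustrative Amdahl-style estimate of multi-GPU speedup (hypothetical
    # numbers, not a benchmark).
    def amdahl_speedup(parallel_fraction: float, n_gpus: int) -> float:
        """Ideal speedup when only `parallel_fraction` of the runtime scales."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_gpus)

    for n in (1, 8, 64, 512):
        # Assume 98% of the single-GPU runtime is parallelizable.
        print(f"{n:4d} GPUs -> {amdahl_speedup(0.98, n):6.1f}x over one GPU")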
Proposals will be reviewed based on innovation, feasibility, potential impact, and effective utilization of multiple GPUs for large-scale parallel computing.
Join us in revolutionizing scientific research through the power of multi-GPU parallel computing and contribute to the advancement of knowledge and discovery!