National Center for High-Performance Computing
ALPS — Acer AR585 F1 Cluster



Machine Name: ALPS — Acer AR585 F1 Cluster
Machine Structure: SMP Cluster
Hostname: alps
Installation Date: 2011/05
Field: general-purpose
Installed Software: ABINIT, Amber, CASINO, CHARMM, Chemsoft, CPMD, DL_POLY, GAMESS, Gaussian, GROMACS, Molpro, NAMD, NWChem, octopus, OpenMX, Quantum ESPRESSO, siesta, VASP, WIEN2k

Technical Specs

ALPS is short for “Advanced Large-scale Parallel Supercluster” (its Chinese name is “御風者”). It is a supercomputer with an aggregate performance of over 177 TFLOPS. The system uses AMD Opteron 6100-series processors and comprises 8 compute clusters, 1 large-memory cluster, and over 25,600 compute cores.

ALPS is designed to support a wide range of applications, from routine production workloads to jobs NCHC has yet to anticipate. ALPS is also designed to function as a test bed for new application design and research, so support for open-source and academic applications is a must. To offer the most robust level of support, the system was planned with every angle in mind — not simply raw compute performance, and not necessarily best performance per watt.

> Configuration

  1. System in Brief

    Login Node / Interactive Nodes:
    alpi1.nchc.org.tw
    alpi2.nchc.org.tw
    alpi3.nchc.org.tw
    alpi4.nchc.org.tw
    alpi5.nchc.org.tw
    Storage:
    /home (209 TB), user home directories
    /pkg (16 TB), application packages
    /work (162 TB), working scratch space
  2. Hardware Specification

    System Family:
    Acer Group Cluster
    System Model:
    Acer AR585 F1 Cluster
    Processors:
    AMD Opteron 6174, 12 cores, 2.2 GHz (compute nodes)
    AMD Opteron 6136, 8 cores, 2.4 GHz (fat nodes)
    Main Memory (per node):
    128 GB (compute nodes)
    256 GB (fat nodes)
  3. Software Stack

    Operating System:
    Novell SuSE Linux Enterprise 11 SP1
    Job Scheduler & Queuing System:
    Platform LSF (Load Sharing Facility) 7.06
    Parallel Filesystem:
    Development Tools:
    Intel Cluster Toolkit, PGI CDK/Server, x86 Open64, GCC
    Allinea DDT
    Platform MPI (formerly HP-MPI), Intel MPI, PGI MPI/OpenMP, MVAPICH/MVAPICH2
    Intel MKL (Math Kernel Library), AMD ACML (AMD Core Math Library)

> Queuing System

NCHC users specify one of the following submit classes when queuing jobs. Upon submission, the job is routed to the appropriate LSF class according to the following criteria.

Queue Name   CPU(s)/job   Max CPUs/user   (CPU/Run) time limit/job
serial       1            20              12 days
12cpu        2–12         120             8 days
48cpu        13–48        240             6 days
192cpu       49–192       768             5 days
384cpu       193–384      768             3 days
768cpu       385–768      1536            2 days
short        128          128             2 days
medium       128          128             6 days
long         128          128             10 days

All batch jobs must be submitted to a valid submit class. If the specified class does not exist, the job is terminated and an error message is issued.
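As a sketch, a job for one of the queues above can be submitted through LSF with `bsub`. This is a hypothetical example, not an official NCHC template: the queue name comes from the table above, while the job name, core count, output file names, and executable are placeholders to adapt.

```shell
#!/bin/sh
# Hypothetical LSF job script for ALPS. "48cpu" is a submit class from the
# queue table; everything else here is an illustrative placeholder.
#BSUB -q 48cpu            # submit class (must be a valid queue)
#BSUB -n 24               # CPU slots requested (13-48 for the 48cpu queue)
#BSUB -J my_job           # job name
#BSUB -o my_job.%J.out    # stdout file (%J expands to the LSF job ID)
#BSUB -e my_job.%J.err    # stderr file

# Launch the MPI executable on the allocated slots.
mpirun ./my_app
```

Such a script would typically be submitted with `bsub < job.lsf`; `bjobs` reports its state and `bkill` removes it from the queue.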


Latest Update: 2014/12/17