
General

System overview

The HPCPOWER system consists of 128 nodes. Each node contains two Intel Xeon processors and 2 GB of RAM, and each processor has 512 KB of cache.

The hpcpower system runs Red Hat Linux 9.0 as its operating system. This user guide, and use of the system, assumes familiarity with the Linux/UNIX software environment. To get acquainted with Linux/UNIX, please study the UNIX user's guide on the Information Technology Services website.

The hpcpower system uses the Portable Batch System (PBS) software to distribute the computational workload across the processors. PBS is a batch job scheduling application that provides facilities for building, submitting and processing batch jobs on the system.

Jobs are submitted to the system by creating a PBS job command file that specifies certain attributes of the job, such as how long the job is expected to run and, in the case of parallel programs, how many processors are needed, and so forth. PBS then schedules when the job is to start running on the cluster (based in part on those attributes), runs and monitors the job at the scheduled time, and returns any output to the user once the job completes.

Logging in to & transferring files to the system

You can log in to the hpcpower system from within the HKU campus network using SSH. SSH is not bundled with MS Windows, so you may need to download an SSH client such as PuTTY. Logging in places you on the master node, which acts as the control console for interactive work such as source code editing, compilation, program testing and submitting jobs through PBS. When you log on to the master node you are placed in your home directory, which is also accessible from the batch nodes.

Similarly, to transfer files to the hpcpower system you have to use SCP from within the HKU campus network. SCP is not bundled with MS Windows, so you may need to download an SCP client such as WinSCP.
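From a Linux/UNIX desktop, the standard command-line clients can be used directly. A minimal sketch, assuming your username is h0xxxxxx and a hypothetical source file myprog.c:

ssh h0xxxxxx@hpcpower.hku.hk                # log in to the master node
scp myprog.c h0xxxxxx@hpcpower.hku.hk:~/    # copy a file to your home directory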

Please visit http://www.its.hku.hk/documentation/guide/infosys/web/ssh for more details on "SSH and Secure File Transfer".

Editing the program

You can use the command pico to edit programs. Please refer to the UNIX user's guide for details.

Important notice for Microsoft Windows users: do not use a standard Microsoft Windows editor such as Notepad to edit files that will be used on Linux or other UNIX systems. The two systems use different control-character sequences to mark the end of a line (EOL). If you are using the system from a Microsoft Windows desktop machine, please SSH to the master node and edit the program directly using pico.
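If a file has already been edited on Windows, one way to strip the extra carriage-return characters is the standard tr utility (the file names here are hypothetical):

tr -d '\r' < program_dos.c > program.c    # remove DOS carriage returns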

Configuring your account

Every user account is pre-configured with the necessary environment. You can use all software on the system without any modification to system files such as .rhosts, .bashrc or .bash_profile.

You can copy the most up-to-date system files from the directory /etc/skel in case your copies are deleted by accident.
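For example, to restore your shell startup files (a sketch; the exact set of files under /etc/skel may vary):

cp /etc/skel/.bashrc /etc/skel/.bash_profile ~/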

When a job is submitted to the cluster through PBS, a new login to your account is initiated and any initialization commands in your startup files (.bashrc, .bash_profile, etc.) are executed. Because the job runs in batch mode, do not put interactive commands (such as tset or stty) or commands that generate output in your startup files. If these precautions are not taken, error messages will be written to the batch job's error file and your program may fail to run.
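If you do want such commands for interactive logins, one common pattern is to guard them with a terminal test so that they are skipped in batch mode (a sketch for .bashrc; the stty setting is only an illustration):

# run terminal-dependent commands only when stdin is a terminal
if [ -t 0 ]; then
    stty erase '^?'
fi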

Program Compilation and Testing

  1. Compilers
    The PGI Cluster Development Kit suite of compilers is installed. This includes compilers for Fortran 77 (pgf77), Fortran 90 (pgf90), High Performance Fortran (pghpf), C (pgcc) and C++ (pgCC). For more details on using the PGI compilers on the hpcpower system, please visit PGI Compiler. A compile sketch is given after this list.

    Besides the PGI compilers, other compilers are also supported, together with parallel libraries (MPICH/MPICH2/Open MPI). Please visit the HPC Software List for more details.

  2. Test serial program
    To test a serial program, use this command:

    ./program.exe

    The “./” tells the system to run the program in the current directory.

  3. Test MPI program
    Use this command to test an MPI program on the master node:

    mpirun ./program.exe

    You do not need to specify the number of processes to test your job; for interactive testing, the default number of processes is 4. You can add "-np <number>" to specify a different number of processes for testing.
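The following is a minimal compile sketch using the PGI compilers listed above (the source file names are hypothetical, and the MPI wrapper names such as mpicc/mpif90 depend on which MPI library your environment is configured for):

pgcc -O2 -o program.exe program.c       # compile a serial C program
pgf90 -O2 -o program.exe program.f90    # compile a serial Fortran 90 program
mpicc -O2 -o program.exe program.c      # compile an MPI C program via the wrapper
mpif90 -O2 -o program.exe program.f90   # compile an MPI Fortran 90 program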


Resource Management System

PBS (Portable Batch System)

The PBS resource management system handles the management and monitoring of the computational workload on the hpcpower system. Users submit "jobs" to the resource management system, where they are queued up until the system is ready to run them. PBS selects which jobs to run, when to run them, and which nodes to run them on, according to a predetermined site policy meant to balance competing user needs and to maximize efficient use of the cluster resources.

To use PBS, you create a batch job command file which you submit to the PBS server to run on the system. A batch job file is simply a shell script containing the set of commands you want to run on the batch nodes. It also contains directives which specify the characteristics (attributes) and resource requirements (e.g. number of nodes and maximum runtime) of your job. Once you create your PBS job file, you can reuse it or modify it for subsequent runs.

Since the system is set up to support large computation jobs, the following maximum number of CPUs and maximum processing time are allowed for each batch job:

  • Maximum number of CPUs for each program job = 64 (i.e. 32 compute nodes of 2 CPUs each)
  • Maximum processing time for each program job = 24 Hours (wall clock time)

Furthermore, job scheduling is set up in such a fashion that higher priority is given to parallel jobs requiring a larger number of processors.

In order to provide a fair-share environment for all users, the system is set up so that each user can place no more than 15 jobs in the job queue, with no more than 10 of them running at the same time.

PBS Job Command file

To submit a job to run on the cluster system, a PBS job command file must be created. The job command file is a shell script that contains PBS directives which are preceded by #PBS.

The following is an example of a PBS command file to run a parallel job requiring 4 nodes and 2 processors on each node. You should only need to change items such as the job name, queue, walltime and node count. This file is also available on the system as /etc/skel/pbs.cmd.
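To start from the template, copy it into your working directory and edit your own copy:

cp /etc/skel/pbs.cmd .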

#!/bin/sh
### Job name
#PBS -N test

### Declare job non-rerunable
#PBS -r n

### Mail to user
#PBS -m ae

### Queue name (qprod or oneweek)
### qprod is the queue for running production jobs.
### 126 nodes can run jobs in this queue.
### Each job in this queue can use 1-32 nodes.
### Parallel jobs will be favoured by the system.
### Walltime can be 00:00:01 to 10:00:00
#PBS -q qprod

### Wall clock time required. This example is 2 hours
#PBS -l walltime=02:00:00

### Number of nodes
### The following means 1 node and 1 processor,
### i.e. a serial job.
###PBS -l nodes=1:ppn=1

### The following means 4 nodes required. Processor Per Node=2,
### i.e., total 8 processors needed to be allocated.
### ppn (Processor Per Node) can be either 1 or 2.
#PBS -l nodes=4:ppn=2


### The following commands will be executed on the first allocated node.
### Please don't modify them.
cd $PBS_O_WORKDIR
### Define number of processors
NPROCS=`wc -l < $PBS_NODEFILE`
echo $PBS_JOBID : $NPROCS processors allocated : `cat $PBS_NODEFILE`
echo =======================================================


### Run a serial executable "a.out" in one batch node like this:
# ./a.out > ${PBS_JOBNAME}.`echo ${PBS_JOBID} | cut -c 1-5 `

### Run the parallel MPI executable "a.out"
### Remember, users don't need to specify "-np" or "-machinefile"
### on batch nodes.
mpirun ./a.out > ${PBS_JOBNAME}.`echo ${PBS_JOBID} | cut -c 1-5 `

After the PBS directives in the command file, the shell executes a change directory command to $PBS_O_WORKDIR, a PBS variable indicating the directory where the PBS job was submitted and nominally where the program executable is located. Other shell commands can be executed as well. In the last line, the executable itself is invoked.

If you are running an MPI program, the command "mpirun ./programfile" should be used. There is no need to tell MPI how many nodes to use or where the machine file is, because PBS has already passed this information to the MPI system.

The parameter "> ${PBS_JOBNAME}.`echo ${PBS_JOBID} | cut -c 1-5`" redirects the standard output of the program to a text file named JobName.JobID. You can inspect this file from time to time to check the progress of the program.
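For example, a job named test with job identifier 2234.hpcpower would write its standard output to the file test.2234. (the backtick expression keeps only the first five characters of ${PBS_JOBID}, here "2234.").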

Submitting a Job

To submit the job, use the qsub command:

[h0xxxxxx@hpcpower test]$ qsub pbs.cmd
2234.hpcpower

Upon successful submission of a job, PBS returns a job identifier of the form JobID.hpcpower where JobID is an integer number assigned by PBS to that job. You’ll need the job identifier for any actions involving the job, such as checking job status or deleting the job.

While the job is executing, it stores the program output in the file JobName.xxxx, where xxxx is taken from the job identifier. At the end of the job, the files JobName.oxxxx and JobName.exxxx are also copied to the working directory; they contain the standard output and standard error that were not explicitly redirected in the job command file.

Also, an e-mail message is sent to the user when the job finishes. The following is an example of a PBS e-mail notification sent at the end of a job:

Date: Fri, 12 Dec 2003 17:16:52 +0800
From: adm <adm@hpcpower.hku.hk>
To: h0xxxxxx@hpcpower.hku.hk
Subject: PBS JOB 1717.hpcpower

 PBS Job Id: 1717.hpcpower
 Job Name:   pd2-sn1
 Execution terminated
 Exit_status=0
 resources_used.cput=00:18:07
 resources_used.mem=92104kb
 resources_used.vmem=330380kb
 resources_used.walltime=00:18:24

Note that the walltime-used information in the e-mail message can be used to estimate the walltime resource requirement more accurately in the PBS job command file for future submissions, so that PBS can schedule the job more effectively. When submitting a particular PBS job for the first time, overestimate the walltime requirement to prevent premature job termination.
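For example, the job shown above used resources_used.walltime=00:18:24, so a similar future run might request a modest safety margin above that:

#PBS -l walltime=00:30:00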

Manipulating a Job

There are several commands for manipulating jobs:

  1. List all jobs

    [h0xxxxxx@hpcpower test]$ qa
    
    hpcpower.hku.hk:
                                                                Req'd  Req'd   Elap
    Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
    --------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
    2230.hpcpower   ycleung  parallel FORCED_CHA   5597  16  --    --  10:00 R 07:17
    2231.hpcpower   ycleung  parallel CHANNEL_LI  11048  16  --    --  10:00 R 04:02
    2233.hpcpower   chliu    oneday   HUGE_CAVIT   4485  16  --    --  24:00 R 01:51
  2. List all nodes

    [h0xxxxxx@hpcpower test]$ pa
    g15.hpcc.hku.hk
         jobs = 0/2231.hpcpower, 1/2231.hpcpower
    g16.hpcc.hku.hk
         jobs = 0/2231.hpcpower, 1/2231.hpcpower
    g17.hpcc.hku.hk
         jobs = 0/2231.hpcpower, 1/2231.hpcpower
    g18.hpcc.hku.hk
         jobs = 0/2231.hpcpower, 1/2231.hpcpower
    g19.hpcc.hku.hk
         jobs = 0/2231.hpcpower, 1/2231.hpcpower
    g20.hpcc.hku.hk
         jobs = 0/2231.hpcpower, 1/2231.hpcpower
    g21.hpcc.hku.hk
         jobs = 0/2231.hpcpower, 1/2231.hpcpower
    g22.hpcc.hku.hk
         jobs = 0/2231.hpcpower, 1/2231.hpcpower
    e05.hpcc.hku.hk
    e04.hpcc.hku.hk
    e03.hpcc.hku.hk
    ......

  3. List details of a job
    Command: qstat -f <Job ID>

    [h0xxxxxx@hpcpower test]$ qstat -f 2230
    Job Id: 2230.hpcpower.hku.hk
        Job_Name = FORCED_CHANNEL
        Job_Owner = ycleung@hpcpower.hku.hk
        resources_used.cput = 00:00:01
        resources_used.mem = 4328kb
        resources_used.vmem = 15780kb
        resources_used.walltime = 07:19:34
        ......

  4. Delete a job
    Command : qdel <Job ID>

    [h0xxxxxx@hpcpower test]$ qdel 2236