AI-Research is an advanced computing platform equipped with state-of-the-art GPU accelerators to support sophisticated research across a wide range of disciplines.


This guide consists of six major parts:

  1. Hardware specification
  2. System login
  3. Accessing GPUs
  4. The SLURM scheduler, which controls access to GPUs
  5. How to use containers to run software with enroot or Singularity
  6. Other local commands

(Note: The default per-user disk quota is 50GB and the default group quota is 5TB. However, group quotas are currently capped at 1TB due to technical difficulties in connecting to the Lustre storage.)
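To see how much of the 50GB quota you are currently using, a simple (if slow) check is to measure your home directory with `du`; the cluster may also provide a dedicated quota command, which is storage-specific and not shown here:

```shell
# estimate total usage of your home directory (may take a while on large trees)
du -sh "$HOME"
```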


Hardware Specification

The system is an NVIDIA DGX A100 machine which consists of:

  • Dual AMD EPYC 7742 CPUs, 2.25GHz (base), 3.4GHz (max boost)
    (128 cores total; 256 threads with the Simultaneous Multithreading (SMT) feature)
  • 1TB DDR4 RAM
  • Eight NVIDIA A100 SXM4 GPUs with 40GB HBM2 memory each
  • NVSwitch
  • 14TB NVMe SSD local storage
  • 200Gb/s HDR InfiniBand
  • Ubuntu 20.04 LTS

Further hardware details are available in its official datasheet.

System Login

The AI-Research system can be accessed via Secure Shell (SSH) from any device within the HKU campus network (physically connected to the campus network, or using SSID “HKU” while on campus). If you need off-campus access, please use the HKUVPN2FA service.

The following SSH features are enabled:

Accessing GPU

Here we list some common pitfalls that new users may come across:

`nvidia-smi` shows nothing?

$ nvidia-smi
Failed to initialize NVML: Unknown Error

Users who log on to the node DO NOT have GPU access immediately; you need to use the SLURM scheduler to request GPUs. Once you are inside an interactive session, or your submitted job script is running, you will have access to the GPUs you requested.

Do not assume no one is using the GPUs.
There may be other users using the GPUs (either interactively, or through submitted jobs that are running). Check the immediate availability of GPUs with the gpu_avail command. If you request an interactive job and there are not enough free GPUs to fulfill your request, you will be put into the queue (and the session will appear to hang). To achieve efficient and fair use of resources, please refrain from requesting more GPUs than your job is able to use.

Why can't I use Docker?

Our scheduler (SLURM) is currently not compatible with Docker. Because GPUs are allocated by SLURM to ensure a fair share of resources, Docker access is not available to general users. Please use enroot or Singularity to pull images from Docker repositories instead.

SLURM scheduler

Access to GPU cards is not granted upon system login; resources are controlled and scheduled via the SLURM scheduler. To gain access to GPUs, you have to submit a SLURM job. Jobs can be interactive (you type commands during a session as if you had logged in directly) or batch (you prepare a job script containing all the commands for your task and submit it to run without further interaction), depending on what you want to do.
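The batch route can be sketched as follows; the script name job.sh, the 1-GPU request, and the 10-minute limit are illustrative assumptions, not fixed site policy:

```shell
#!/bin/bash
#SBATCH --gres=gpu:1     # request one GPU card
#SBATCH --time=10        # maximum runtime (minutes)

# commands for your task go here; nvidia-smi just confirms GPU access
nvidia-smi
```

Submit the script with sbatch job.sh and monitor it with squeue; by default, its output is written to a slurm-<jobid>.out file in the submission directory.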

Interactive Job

An interactive job allows you to use GPUs interactively (provided, of course, that GPUs are immediately available). To submit an interactive job with 1 GPU card for a maximum runtime of 5 minutes, use:

$ srun --pty --gres gpu:1 --time 5 /bin/bash

Before submitting an interactive job, you might check the number of available GPUs with the gpu_avail command:

$ gpu_avail
0 out of 8 GPUs are allocated:
GPU 0 GPU 1 GPU 2 GPU 3 GPU 4 GPU 5 GPU 6 GPU 7

The printout of the command should be self-explanatory. If you use a terminal multiplexer (screen/tmux), start it outside (before) your srun command.
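For example, assuming tmux is used (the session name gpu is an arbitrary choice):

```shell
# start the multiplexer first, outside of any SLURM job
tmux new -s gpu
# then, inside the tmux session, request the GPU; if disconnected,
# reattach later with: tmux attach -t gpu
srun --pty --gres gpu:1 --time 60 /bin/bash
```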

Currently SLURM only controls GPU resources. Although users may use CPU and disk resources without having an active GPU job, they are advised to use them only for:

  • Transferring files from/to the node
  • Preparing container images (see later sections) before SLURM job submission
  • Submitting job to SLURM


Currently two Docker equivalents, enroot and Singularity, are installed on the system. Both support Docker images and are discussed in detail below.


enroot

Configuration For NVIDIA GPU Cloud (Optional)

Some containers on NVIDIA GPU Cloud require authentication. Once you get the API token (check this on how to get your own token), you may store it in your environment via:

$ cat > ~/.local/share/enroot/.credentials <<END
machine nvcr.io login \$oauthtoken password MmdhYOUR_NGC_TOKEN_0aXFn_DONT_COPY_THIS_lM2Y0NjMtZGFhZi00YWRlLTk0ODYtMDNiN2U3YzBiOWE5
END

After that, you may add the following to your ~/.bashrc, which will take effect at your next login:

$ export ENROOT_CONFIG_PATH=~/.local/share/enroot

To take effect immediately, you should run: source ~/.bashrc
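Putting the credential steps together, a minimal setup sketch might look like this (the token below is a placeholder; nvcr.io is the NGC registry host expected in enroot's netrc-style credentials file):

```shell
# create the enroot config folder and a netrc-style credentials file
mkdir -p ~/.local/share/enroot
cat > ~/.local/share/enroot/.credentials <<'END'
machine nvcr.io login $oauthtoken password YOUR_NGC_TOKEN_HERE
END
# keep the token private
chmod 600 ~/.local/share/enroot/.credentials
```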

Importing Images

To import an image that you would normally pull (via Docker) using docker pull, you should use:

$ enroot import docker://nvidia/cuda:11.0-devel

After import, enroot will create a squash file with a generated name, e.g. nvidia+cuda+11.0-devel.sqsh in the example above. You may add -o filename.sqsh after the “import” keyword in order to save it under another file name.

Creating Container from Imported Image

To create a container with a squash file, you should use:

$ enroot create nvidia+cuda+11.0-devel.sqsh

By default, the command extracts the squash file into ~/.local/share/enroot/containername, where containername is generated from the squash file name. You may add -n containername after the “create” keyword in order to set a name for the container.

Listing Containers

To get a list of the enroot containers under your folder, you should use:

$ enroot list

Running Container

(For enroot, the container only starts if you have a valid GPU allocation.)

To run the container named nvidia+cuda+11.0-devel and start a bash shell, you should use:

$ enroot start nvidia+cuda+11.0-devel /bin/bash

Similarly, you may start containers with other names and run other programs. You should add -w (write) if you would like to change files inside the container.

(Variants such as batch and exec are also supported by enroot.)

Deleting Container

To delete a container called containername, you may use:

$ enroot remove containername

You will be asked whether to delete the folder containing the root filesystem. Answer “y” to confirm or “N” to cancel (note: typing “yes” will do nothing).

Notes on Accessing Files in a container

A user may mount other folders into the container to make them accessible inside. For example, to mount your current working directory into the container as /mnt inside:

$ enroot start --mount .:mnt nvidia+cuda+11.0-devel /bin/bash
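The --mount option may be repeated to expose several folders at once; in this sketch, the dataset path is hypothetical:

```shell
# mount the current directory as /mnt and a (hypothetical) dataset folder as /data
enroot start --mount .:mnt --mount /path/to/dataset:data nvidia+cuda+11.0-devel /bin/bash
```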

Users may also access files inside the container, even when it is not running, by interacting with its root filesystem folder directly. The root filesystem of the container is at ~/.local/share/enroot/containername.

Further Information

Further information on enroot's usage can be found at


Singularity

Configuration For NVIDIA GPU Cloud (Optional)

Some containers on NVIDIA GPU Cloud require authentication. Once you get the API token (check this on how to get your own token), you may add the following to your ~/.bashrc, which will take effect at your next login:

$ export SINGULARITY_DOCKER_USERNAME="\$oauthtoken"
$ export SINGULARITY_DOCKER_PASSWORD="<your NGC API token>"

To take effect immediately, you should run: source ~/.bashrc

Importing Images and Creating Container

To import an image that you would normally pull (via Docker) using docker pull, you should use:

$ singularity build cuda11.simg docker://nvidia/cuda:11.0-devel

The command will create a “simg” file (cuda11.simg), which is the container. You may create containers with different names by creating multiple “simg” files.

Running Container

To run the container from its simg file, you should use:

$ singularity shell --nv cuda11.simg

You will get a shell with the prompt Singularity> which behaves like a normal shell, located at the same path where you ran the singularity command.

(Variants such as exec and run are supported by singularity.)
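For example, exec runs a single command inside the container and then exits (shown here with nvidia-smi, which will only report GPUs from within a GPU job):

```shell
# run one command in the container instead of an interactive shell
singularity exec --nv cuda11.simg nvidia-smi
```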

Deleting Container

Simply deleting the simg file is enough. You should run:

$ rm cuda11.simg

Further Information

Further information on usage of Singularity is at


Other Local Commands

There are several local commands for users’ convenience when using the system:


gpu_avail

This is the local command for checking immediate GPU availability. Just type “gpu_avail” in the shell and you will see something like this:

2 out of 8 GPUs are allocated:
GPU 0 GPU 1 [GPU 2][GPU 3] GPU 4 GPU 5 GPU 6 GPU 7

The coloured output should be self-explanatory; it allows users to see how many GPUs are immediately available before submitting interactive jobs.


gpu_smi

This is the local command for checking GPU resource usage for a user's own running jobs. Typically, a user running compute loads in an interactive session would like to confirm the occupancy of their GPUs. However, simply running “nvidia-smi” at another command prompt would be barred from accessing this information (as that separate prompt is not part of any job).

Just type “gpu_smi” in the shell and, if you have running jobs, you will get something like this for each of them:

Running Job ID: 5033 [ GPU2 GPU3 ]
# gpu   pwr  gtemp  mtemp    sm   mem   enc   dec  mclk  pclk
# Idx     W      C      C     %     %     %     %   MHz   MHz
    0    50     23     23     0     0     0     0  1215   210
    1    51     22     22     0     0     0     0  1215   210
# gpu        pid  type    sm   mem   enc   dec   command
# Idx          #   C/G     %     %     %     %   name
    0          -     -     -     -     -     -   -
    1          -     -     -     -     -     -   -

For each of the user's running jobs, the system lists the job number along with the physical GPUs the job is using, followed by a (per-job) printout of GPU usage in an abridged form (from “dmon” and “pmon”). Note that the GPU IDs always start from “0” in the listing; these IDs do not correspond to the physical GPUs but to the GPUs visible to the user's job. This allows users to verify that their jobs are actually using the GPUs.