What is GROMACS

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions that usually dominate simulations, many groups are also using it for research on non-biological systems, e.g. polymers.

GROMACS is highly tuned for high performance computing environments. It supports various parallelization and acceleration schemes for different types of computing hardware.

 

Modules

To use GROMACS, load the appropriate module with the commands below, or refer to the sample PBS/SLURM job scripts:

System    Version   Command
HPC2015   5.1.1     module load impi gromacs/impi/5.1.1
          2018.2    module load gromacs/impi/2018.2
                    module load gromacs/impi/2018.2-gpu
HPC2021   2021.3    module load gromacs/2021.3
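
For example, the following loads the HPC2021 build into the current shell session and prints the version that becomes available (a minimal sketch; use the module command for your system from the table above, and note that the executable may be named gmx or gmx_mpi depending on the build, as described in the sections below):

  module load gromacs/2021.3
  gmx --version        # or gmx_mpi --version, depending on the installed executable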

HPC2015 System

Both parallel (with MPI) and non-parallel (without MPI) versions of GROMACS are installed on our cluster systems. The available versions are listed in the table above. Since mdrun is currently the only MPI-aware tool in GROMACS, it is the only GROMACS command installed with MPI support.

GROMACS can be built at two levels of precision: mixed (called single in earlier GROMACS versions) and double. Both use floating-point arithmetic; double precision stores values in twice as much memory and gives higher numerical accuracy at some cost in speed.

Starting from version 5.x, GROMACS provides a single gmx wrapper binary for launching all tools for preparing (e.g. grompp), running (e.g. mdrun) and analysing (e.g. rms) dynamics simulations. You can use the command 'gmx help' or 'gmx help <command>' to check the wrapper binary options.
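
For instance, the built-in help can be queried through the wrapper as follows (shown for the mixed-precision non-MPI binary; substitute gmx_d, gmx_mpi or gmx_mpi_d from the table below for the other builds):

  gmx help            # list the available tools
  gmx help mdrun      # detailed help for a specific tool, here mdrun
  gmx mdrun -h        # equivalent per-tool help option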

Precision  MPI enabled  GROMACS 4.x command  GROMACS 5.x (gmx wrapper)  GROMACS 5.x equivalent command
Mixed      No           grompp               gmx grompp                 -
                        g_tune_pme           gmx tune_pme               -
                        mdrun                gmx mdrun                  mdrun
           Yes          mdrun_mpi            gmx_mpi mdrun              mdrun_mpi
Double     No           grompp_d             gmx_d grompp               -
                        g_tune_pme_d         gmx_d tune_pme             -
                        mdrun_d              gmx_d mdrun                mdrun_d
           Yes          mdrun_mpi_d          gmx_mpi_d mdrun            mdrun_mpi_d

Sample PBS scripts for running GROMACS in parallel are available at /share1/gromacs/sample.
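
As a rough illustration, a PBS job script for the MPI build on HPC2015 could look like the sketch below. The queue name, node and core counts, and input file names (md.mdp, conf.gro, topol.top) are placeholders, and the exact #PBS resource syntax depends on the PBS flavour in use; the scripts in /share1/gromacs/sample are the authoritative templates for this system.

  #!/bin/bash
  #PBS -N gromacs-md
  #PBS -q parallel                       # hypothetical queue name
  #PBS -l nodes=2:ppn=24                 # adjust to the cores per node on HPC2015
  #PBS -l walltime=24:00:00

  cd $PBS_O_WORKDIR
  module load impi gromacs/impi/2018.2

  # Preprocessing is serial; only mdrun runs in parallel under MPI
  gmx grompp -f md.mdp -c conf.gro -p topol.top -o md.tpr
  mpirun -np 48 gmx_mpi mdrun -deffnm md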

 

HPC2021 System

Partition  MPI enabled  Thread-MPI enabled  GPU enabled  Executable
intel      Y            Y                   -            gmx_mpi
amd        Y            Y                   -            gmx_mpi
gpu        -            Y                   Y            gmx

Sample SLURM scripts for running GROMACS in parallel are available at /share1/gromacs/sample.
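
As a rough illustration, a SLURM job script for the MPI build on the intel or amd partition of HPC2021 could look like the sketch below. The partition name, task counts, choice of launcher (srun vs. mpirun) and input file names are assumptions; the scripts in /share1/gromacs/sample are the authoritative templates for this system.

  #!/bin/bash
  #SBATCH --job-name=gromacs-md
  #SBATCH --partition=intel              # or amd; use the gpu partition for the GPU build
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=32           # adjust to the cores per node on the chosen partition
  #SBATCH --time=24:00:00

  module load gromacs/2021.3

  # grompp runs serially; mdrun uses all MPI ranks in the allocation
  gmx_mpi grompp -f md.mdp -c conf.gro -p topol.top -o md.tpr
  srun gmx_mpi mdrun -deffnm md

  # On the gpu partition the thread-MPI build is used instead, e.g.:
  #   gmx mdrun -deffnm md -nb gpu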

 

Documentation

GROMACS documentation
GROMACS Tutorial
GROMACS Benchmarks