GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions that usually dominate simulations, many groups are also using it for research on non-biological systems, e.g. polymers.
GROMACS is highly tuned for high-performance computing environments. It supports various parallelization and acceleration schemes for different types of compute hardware.
To use GROMACS, please use the following commands or refer to the sample PBS scripts:
|System||Version||Command|
|HPC2015||5.1.1||module load impi gromacs/impi/5.1.1|
|HPC2015||2018.2||module load gromacs/impi/2018.2|
|HPC2015||2018.2 (GPU)||module load gromacs/impi/2018.2-gpu|
Sample PBS scripts for running GROMACS in parallel are available at /share1/gromacs/sample.
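For orientation, a minimal PBS job script for the MPI-enabled build might look like the sketch below. The job name, node/core counts, walltime, and the input file name (topol) are placeholders to adapt; the module name follows the table above, and the `gmx_mpi` binary name assumes GROMACS's default MPI suffix convention.

```shell
#!/bin/bash
#PBS -N gromacs-job          # job name (placeholder)
#PBS -l nodes=2:ppn=24       # adjust to the target queue's node/core layout
#PBS -l walltime=12:00:00

cd "$PBS_O_WORKDIR"

# Load the MPI-enabled GROMACS build (see the table above)
module load gromacs/impi/2018.2

# Only mdrun is MPI-aware, so it is the command launched under MPI
mpirun gmx_mpi mdrun -deffnm topol
```

The canonical scripts under /share1/gromacs/sample remain the authoritative reference for the cluster's actual queue names and resource limits.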
Both parallel (with MPI) and non-parallel (without MPI) versions of GROMACS are installed on our cluster systems; the available versions are listed in the table above. Since only mdrun is currently MPI-aware in GROMACS, mdrun is the only MPI-enabled parallel GROMACS command installed.
There are two levels of precision that can be used in GROMACS: mixed (called single in earlier GROMACS versions) and double. Both are floating-point formats; double precision stores each value in twice the memory and trades speed for higher numerical accuracy.
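The precision is selected at launch time by choosing the matching binary. As a sketch, assuming the cluster follows GROMACS's default naming convention (a `_d` suffix for double precision) and using a placeholder input name:

```shell
# Mixed (default) precision, MPI-enabled run
mpirun gmx_mpi mdrun -deffnm topol

# Double precision (note the _d suffix), e.g. for normal-mode analysis
mpirun gmx_mpi_d mdrun -deffnm topol
```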
Starting from version 5.x, GROMACS provides a single gmx wrapper binary for launching all tools for preparing (e.g. grompp), running (e.g. mdrun) and analysing (e.g. rms) dynamics simulations. You can run `gmx help` or `gmx help <command>` to check the wrapper binary options.
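A typical prepare-run-analyse cycle through the wrapper looks like the following sketch; all input file names (md.mdp, conf.gro, topol.top, and the topol output prefix) are placeholders:

```shell
gmx help          # list all available tool modules
gmx help mdrun    # show the options of one module

# Prepare the run input file (.tpr) from parameters, coordinates and topology
gmx grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr

# Run the dynamics (use gmx_mpi mdrun under mpirun for parallel jobs)
gmx mdrun -deffnm topol

# Analyse, e.g. RMSD of the trajectory against the reference structure
gmx rms -s topol.tpr -f topol.xtc
```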
|Precision||MPI enabled||GROMACS 4.x||GROMACS 5.x|
|Mixed||No||mdrun||gmx|
|Mixed||Yes||mdrun_mpi||gmx_mpi|
|Double||No||mdrun_d||gmx_d|
|Double||Yes||mdrun_mpi_d||gmx_mpi_d|
|Command||Using gmx wrapper||Equivalent command|
|grompp||gmx grompp||grompp|
|mdrun||gmx mdrun||mdrun|
|rms||gmx rms||g_rms|