GROMACS & RELION

Molecular simulation has evolved into a standard technique employed in virtually all high-impact publications, e.g. on new protein structures. The main bottleneck for scaling in GROMACS is the 3D-FFT used in particle-mesh Ewald (PME) electrostatics. Since PME is very fast and used by MD codes worldwide, it is worth investigating whether its communication overhead can be lowered; this will be done in collaboration with PDSE (see the 3D-FFT sub-project). For extreme scaling, we will also investigate the fast multipole method (FMM), since its scaling complexity is better than that of PME (O(N) rather than O(N log N)). A long-standing problem with FMM has been energy conservation; this has now been solved in collaboration with the numerical analysis community, and we will integrate the ExaFMM code of Rio Yokota (Tokyo Tech) into GROMACS.
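
To illustrate why the 3D-FFT limits strong scaling, the sketch below shows the communication skeleton of a pencil-decomposed parallel FFT. It is written for this text rather than taken from the GROMACS source, and the function and variable names are illustrative only: each change of decomposition axis requires an all-to-all exchange in which every rank sends a (progressively smaller) block to every other rank, so at high rank counts latency and message overhead dominate the PME cost.

    /* Illustrative C/MPI sketch (not GROMACS code): one transpose step of a
     * pencil-decomposed 3D FFT.  "pencils" holds block * nranks complex
     * values, one contiguous block destined for each peer rank. */
    #include <mpi.h>
    #include <complex.h>
    #include <stdlib.h>

    void fft3d_transpose(double complex *pencils, int block, MPI_Comm comm)
    {
        int nranks;
        MPI_Comm_size(comm, &nranks);

        double complex *recv = malloc((size_t)block * nranks * sizeof(*recv));

        /* Local 1D FFTs along the contiguous axis (e.g. with FFTW) would be
         * performed before this call; they scale well and are not the issue. */

        /* The redistribution: every rank exchanges one block with every other
         * rank, i.e. with P ranks a total of P*(P-1) messages per transpose.
         * Several transposes are needed per PME step, and this communication
         * is the overhead we aim to lower together with PDSE. */
        MPI_Alltoall(pencils, block, MPI_C_DOUBLE_COMPLEX,
                     recv,    block, MPI_C_DOUBLE_COMPLEX, comm);

        /* Local reordering of "recv" into the pencil layout of the next axis
         * would follow here. */
        free(recv);
    }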

We also need to rethink the MPI communication setup to improve strong scaling, including different communication patterns, non-blocking collectives, and persistent communication (once it is integrated into the MPI standard); the non-blocking pattern is illustrated in the sketch at the end of this section. Given modern hardware developments, we will also devote effort to improving performance on very “fat” nodes, e.g. nodes with multiple accelerators and high-end CPUs and networking, using task-based parallelism.

Finally, we will work on ensemble-level parallelism, where e.g. Markov state models and enhanced sampling are used to loosely couple simulations in order to sample complex dynamics. We will also continue our recent, very high-impact work on improving the 3D reconstruction of cryo-electron microscopy data in the RELION code. The challenge there is to improve both GPU acceleration and task-based parallelization to the point where reconstruction can be done on the fly rather than as a separate overnight job after data collection, which should have very large potential for additional high-impact publications.
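
As an example of the communication patterns mentioned above, the sketch below overlaps a grid transpose with independent force work using the non-blocking collective MPI_Ialltoall. It is illustrative rather than GROMACS code, and compute_local_forces is a hypothetical placeholder for work that does not depend on the transposed data; persistent collectives would follow the same start/wait pattern once they are part of the MPI standard.

    /* Illustrative C/MPI sketch (not GROMACS code): overlap an all-to-all
     * grid transpose with computation using a non-blocking collective. */
    #include <mpi.h>
    #include <complex.h>

    /* Hypothetical placeholder for work that does not depend on the
     * transposed grid, e.g. short-range non-bonded forces. */
    static void compute_local_forces(void)
    {
    }

    void overlapped_transpose(double complex *send, double complex *recv,
                              int block, MPI_Comm comm)
    {
        MPI_Request req;

        /* Start the all-to-all; the call returns immediately. */
        MPI_Ialltoall(send, block, MPI_C_DOUBLE_COMPLEX,
                      recv, block, MPI_C_DOUBLE_COMPLEX, comm, &req);

        /* Do useful work while the network moves the data. */
        compute_local_forces();

        /* Block only when the transposed data is actually needed. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }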