
Modern high core count CPUs and accelerators, such as GPUs, have been shown to provide significant performance benefits in many different applications. However, for a novice, or even for a moderately experienced scientist or programmer, it is not always clear which applications could potentially benefit from these resources and which could not. For example, an NVIDIA V100 GPU can perform artificial intelligence (AI) related computations in a fraction of the time it takes a regular CPU to perform the same computations, but ill-informed OpenACC compiler pragmas can actually make a code run slower. Why is this? When should one invest time in parallelisation and/or GPU-acceleration? How much speedup can be expected with a given application? The main goal of this two-day online course is to start answering these questions. The course also covers the basics of GPU programming and aims to provide the necessary information for avoiding the most common pitfalls.

There will be a hands-on part of the course, where the participants will have the opportunity to run some examples on the Kebnekaise system.

The course is free to anyone in Swedish academia. The course lasts two days and is given through Zoom.

November 4th and 5th, 2020, 09:00-17:00 each day.

For more information and registration, see: