SCIAMA

High Performance Compute Cluster

Mixing MPI and OpenMP

Some applications use a mix of MPI and OpenMP: MPI starts a process on each allocated node, and each of those processes is itself OpenMP enabled. Unless this is controlled it can overload a node, because by default SLURM allocates only a single core per task (i.e. per MPI process) while OpenMP will try to use all available cores on the node. This is fixed by telling SLURM to allocate additional cores to each task with the '--cpus-per-task' option, and by limiting the number of threads OpenMP creates with the environment variable 'OMP_NUM_THREADS' (hint: instead of updating the variable by hand whenever the number of requested cores changes, you can simply set it to the SLURM environment variable $SLURM_CPUS_PER_TASK). Below is an example job script.

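The following is a minimal sketch of such a hybrid job script, assuming 2 nodes with 2 MPI tasks of 8 threads each; the job name, resource requests, module name and executable are placeholders that should be adapted to your own application and to the queues available on SCIAMA:

#!/bin/bash
#SBATCH --job-name=hybrid_example   # placeholder job name
#SBATCH --nodes=2                   # number of nodes to use
#SBATCH --ntasks-per-node=2         # MPI processes (tasks) per node
#SBATCH --cpus-per-task=8           # cores (OpenMP threads) per MPI process
#SBATCH --time=01:00:00             # wall-time limit

# Load an MPI module (placeholder name; pick one from 'module avail')
module load openmpi

# Limit OpenMP to the number of cores SLURM allocated to each task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Start one MPI process per task; each process spawns OMP_NUM_THREADS threads
mpirun ./my_hybrid_application      # placeholder executable

With this layout each node runs 2 MPI processes of 8 threads, so 16 cores per node are used and the node is not oversubscribed.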

Copyright © 2022 ICG, University of Portsmouth