Using GPUs on SCIAMA

If you are not already familiar with submitting jobs, see Submitting Jobs on SCIAMA first.

Using GPUs interactively

There are two GPU-enabled nodes, gpu01 and gpu02, each with two A100 GPUs.
To use them interactively, type:

srun --pty --mem=1g --gres=gpu:1 -J interactive -p gpu.q /bin/bash

and, once the session starts, check which GPU you have been allocated with:

nvidia-smi
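
For a quick confirmation that the allocation worked, you can also list the visible devices and the indices Slurm has assigned (a minimal sketch; the exact output depends on the node and the Slurm configuration):

# Inside the interactive session:
nvidia-smi -L                # list the GPU(s) visible to this job
echo $CUDA_VISIBLE_DEVICES   # GPU indices assigned by Slurm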

Submitting batch GPU jobs

To run batch jobs on GPU nodes, ensure your job submission script includes a request for GPUs:

#!/bin/bash
#SBATCH --partition=gpu.q
#SBATCH --gres=gpu
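
For reference, a complete submission script might look like the sketch below; the job name, memory request, time limit and the final command are illustrative placeholders to adapt to your own job:

#!/bin/bash
#SBATCH --partition=gpu.q
#SBATCH --gres=gpu:1
#SBATCH --job-name=gpu-test    # placeholder job name
#SBATCH --mem=4g               # placeholder memory request
#SBATCH --time=01:00:00        # placeholder time limit

# Record which GPU the job was allocated
nvidia-smi

# Run your GPU application (placeholder command)
./my_gpu_program

Submit it with sbatch as usual, e.g. sbatch gpu_job.sh.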

The GPUs on gpu02 are partitioned into smaller MIG (Multi-Instance GPU) chunks so that more than one job can run on a GPU, rather than one job taking a whole 40GB GPU.
Across the two cards the GPUs are partitioned as 7x 1g.5gb, 2x 2g.10gb and 1x 3g.20gb. To use a smaller partition, add --gres=gpu:1g.5gb:1 to the srun command or #SBATCH --gres=gpu:1g.5gb:1 to a batch script; this allocates one 5GB MIG partition. You can request multiple partitions on one GPU, or across both GPUs, as in the examples below:

#!/bin/bash
#SBATCH --partition=gpu.q

# Choose ONE of the following --gres requests:

#SBATCH --gres=gpu:1g.5gb:7
# Uses all 7 partitions on one GPU

#SBATCH --gres=gpu:2g.10gb:2,gpu:3g.20gb:1
# Uses the whole of GPU 1

#SBATCH --gres=gpu:2g.10gb:1,gpu:1g.5gb:4
# Uses one 10GB partition on GPU 1 and four 5GB partitions on GPU 0
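
The same MIG requests work with srun; for example, combining the earlier interactive command with a single 5GB slice (a sketch reusing the placeholder memory request from above):

srun --pty --mem=1g --gres=gpu:1g.5gb:1 -J interactive -p gpu.q /bin/bash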

[Figure: output of nvidia-smi on gpu01]

[Figure: output of nvidia-smi on gpu02]

