SCIAMA

High Performance Compute Cluster


Using SLURM on SCIAMA (for PBS users)

For a full list of SLURM commands, please consult the SLURM documentation.

Submitting jobs

For details on how to submit batch or interactive jobs on SCIAMA using SLURM, please read the separate article on submitting jobs.
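As a minimal sketch (the script name my_job.sh is a placeholder), a batch job is submitted with sbatch and an interactive session is requested with the sinteractive command listed in the table below:

sbatch my_job.sh    # submit a batch script; SLURM replies with the assigned job ID
sinteractive        # request an interactive shell on a compute node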

Migration guide for Torque/PBS users

For people who are used to the "old" Torque/PBS system, here is a quick translation table for the most important commands; a few example invocations are shown after the table.

Function                           Torque     SLURM
Interactive shell on compute node  qsub -I    sinteractive
Batch job submission               qsub       sbatch
Queue status                       qstat      squeue
Delete job                         qdel       scancel
Hold job                           qhold      scontrol hold
Release job                        qrls       scontrol release
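
For illustration, the same day-to-day operations side by side (the script name job.sh and job ID 12345 are placeholders):

# Torque                       # SLURM
qsub job.sh                    sbatch job.sh             # submit a batch script
qstat -u $USER                 squeue -u $USER           # list your own jobs
qdel 12345                     scancel 12345             # delete/cancel job 12345
qhold 12345                    scontrol hold 12345       # put job 12345 on hold
qrls 12345                     scontrol release 12345    # release the held job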

Below is a simple job submission script. In essence, the #PBS directive prefix is replaced by #SBATCH.

The Torque line of the form "#PBS -l nodes=2:ppn=2" is replaced by the lines "#SBATCH --nodes=2" and "#SBATCH --ntasks-per-node=2".

A queue (-q) in Torque is replaced by a partition (-p) in SLURM.

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
#SBATCH --time=0-2:00
#SBATCH --job-name=starccm
#SBATCH -p sciama4.q
#SBATCH -D /users/burtong/test-starccm
#SBATCH --error=starccm.err.%j
#SBATCH --output=starccm.out.%j

##SLURM: ====> Job Node List (DO NOT MODIFY)
echo "Slurm nodes assigned :$SLURM_JOB_NODELIST"
echo "SLURM_JOBID="$SLURM_JOBID
echo "SLURM_JOB_NODELIST"=$SLURM_JOB_NODELIST
echo "SLURM_NNODES"=$SLURM_NNODES
echo "SLURMTMPDIR="$SLURMTMPDIR
echo "working directory = "$SLURM_SUBMIT_DIR
echo "SLURM_NTASKS="$SLURM_NTASKS

echo ------------------------------------------------------
echo 'This job is allocated on '${SLURM_NTASKS}' cpu(s)'
echo 'Job is running on node(s): '
echo $SLURM_JOB_NODELIST
echo ------------------------------------------------------

module purge                     # start from a clean module environment
module add starccm/12.06.011     # load the STAR-CCM+ module used by this job

starccm+ -rsh ssh -batch test1.sim -fabric tcp -power -podkey -np ${SLURM_NTASKS}
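
To submit and monitor this script (the filename starccm_job.sh is a placeholder):

sbatch starccm_job.sh    # submit the job; SLURM prints "Submitted batch job <jobid>"
squeue -u $USER          # check its state in the queue
scancel <jobid>          # cancel it if necessary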

