SCIAMA

High Performance Compute Cluster


Sciama Queues

The following queues/partitions are available on SCIAMA:

Name          | Number of Nodes | Node Names           | Cores per Node | Shared Memory | Notes
sciama2.q     | 95              | node100 to node194   | 16             | 64GB          |
sciama3.q     | 48              | node200 to node247   | 20             | 64GB          |
sciama4.q     | 12              | node300 to node311   | 32             | 192GB         |
sciama4-12.q  | 16              | node312 to node327   | 32             | 384GB         |
sciama4-4.q   | 4               | node401 to node404   | 128            | 1TB           | Total of 512 cores and 4TB shared memory
himem.q       | 1               | vhmem01              | 16             | 512GB         | High-memory node; only use this queue if your job requires more than 384GB of memory.
hicpu.q       | 2               | node350 and node351  | 128            | 125GB         | Ideal for multiple short jobs that don't require much memory.
gpu.q         | 2               | gpu01 and gpu02      |                | 1TB           | GPU nodes, each with 2x A100 GPUs.
Special Queues

jupyter.q     | A subset of the sciama3.q nodes, used by default to run Jupyter notebook servers on SCIAMA.
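
To run a job on a particular queue, pass the queue name to Slurm's --partition option in your batch script. The script below is a minimal sketch assuming a single-node job on sciama2.q; the job name, resource requests, module name and program name are placeholders to adapt to your own work.

    #!/bin/bash
    #SBATCH --job-name=example_job       # placeholder job name
    #SBATCH --partition=sciama2.q        # pick a queue from the table above
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=16         # sciama2.q nodes have 16 cores each
    #SBATCH --time=01:00:00              # walltime (hh:mm:ss)
    #SBATCH --mem=8G                     # must fit within the node's shared memory

    # Load any software modules your job needs (module name is illustrative)
    module load apps/example

    # Launch the program (replace with your own executable)
    srun ./my_program

Submit the script with sbatch. Before submitting, sinfo lists all partitions together with the current state of their nodes, which can help you choose a queue that is not heavily loaded.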
