SCIAMA

High Performance Compute Cluster


Latest News

January/February

IS are upgrading the Cisco network switches that SCIAMA uses to connect to the outside world. The upgrade will temporarily disrupt connections to SCIAMA's login nodes and the JupyterHub server for a few minutes. The actual date is to be confirmed.

5th September

We have a critical hardware failure on SCIAMA which requires an urgent replacement. This requires us to stop Lustre and take the users' home directories offline. We have scheduled this for Monday 5th September. During the outage you will not be able to log on to SCIAMA or read/write any data on /mnt/lustre, so we suggest you hold off running any further jobs until afterwards.

6th May 2022

New Lustre storage has been installed and is ready to use; please check the storage policy and contact SCIAMA support to request disk space. The new compute and GPU nodes have also been installed; check the Using GPUs section for the Slurm configuration.
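For orientation, a batch script requesting one of the new GPUs follows the usual Slurm pattern. The sketch below is illustrative only: the partition name "gpu" and the resource limits are assumptions, so check the Using GPUs and Queues pages for the actual values before submitting.

    #!/bin/bash
    # Illustrative Slurm batch script for a single-GPU job on SCIAMA.
    # The partition name below is an assumption; see the Using GPUs page.
    #SBATCH --job-name=gpu-test
    #SBATCH --partition=gpu        # assumed GPU partition name
    #SBATCH --gres=gpu:1           # request one GPU
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --time=01:00:00

    nvidia-smi                     # report which GPU the job was allocated

Submit the script with sbatch and monitor it with squeue.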

We will be carrying out maintenance on SCIAMA in June and will schedule regular maintenance windows in the future, so please keep a look out for updates/news for details.

New SCIAMA hardware, comprising Lustre storage and CPU and GPU compute nodes, is being installed. SCIAMA will need to be updated to be able to use the new hardware.

Useful Links

  • Institute of Cosmology & Gravitation
  • SLURM

Sciama High Performance Compute Cluster

Sciama is a High Performance Compute Cluster (HPCC) supported by the Institute of Cosmology and Gravitation (ICG), SEPnet and the University of Portsmouth, UK. It was built in 2011 and is currently in its fourth iteration. The cluster is named after Dennis Sciama, a leading figure in the development of astrophysics and cosmology; the name is also an acronym for the SEPnet Computing Infrastructure for Astrophysical Modelling and Analysis. It comprises:

  • 4416 cores
  • 1.8PB Lustre storage
  • 8 login nodes
  • 180 compute nodes
  • 4 A100 GPUs
  • 1 application (JupyterHub) node
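As a quick way to see how these resources are laid out, the partitions, node counts and your own jobs can be inspected from a login node with standard Slurm commands (a minimal sketch; the output simply reflects SCIAMA's current configuration):

    sinfo -s          # one-line summary per partition: availability and node states
    squeue -u $USER   # your queued and running jobs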

Please remember to acknowledge your usage of SCIAMA in your research. Details can be found on the Acknowledging SCIAMA page.

Image: cabinets in the Data Centre housing Sciama hardware

Copyright © 2022 ICG, University of Portsmouth