SCIAMA

High Performance Compute Cluster


The Sciama Cluster

The Sciama Cluster has 4288 compute cores and 1.8 PB of Lustre file storage. Nodes are connected by EDR (4x) InfiniBand networking with 100 Gb/s throughput.
All SCIAMA nodes use the x86_64 architecture, with a mixture of AMD and Intel processors.
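The node and partition layout can also be inspected live from a login node using standard Slurm commands; a minimal sketch is shown below (the node name is taken from the table that follows, and the exact partition names on SCIAMA should be checked against the Queues page):

    # List partitions with node counts, CPUs, memory and any GPUs per node
    sinfo -o "%P %D %c %m %G"

    # Show full details (CPU count, sockets, memory, features) for a single node
    scontrol show node node300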

Nodes

Nodes          Processor                                  CPUs per Node   Memory per Node
Login1-4       Intel Xeon 2.67GHz                         12              23GB
Login5-8       AMD Opteron 2.8GHz                         12              15GB
node100-195    Intel Xeon 2.6GHz                          16              62GB Shared
node200-247    Intel Xeon E5-260 2.3GHz                   20              62GB Shared
node300-311    Intel Xeon Gold 6130 2.1GHz                32              187GB Shared
node312-327    Intel Xeon 2.1GHz                          32              376GB Shared
vhmem01        Intel Xeon E5-2650 2.6GHz                  16              503GB
node350-351    AMD EPYC 7662 2.0GHz                       128             128GB Shared
node401-404    AMD EPYC 7713 2.0GHz                       128             1TB Shared
gpu01-02       AMD EPYC 7713 2.0GHz, 2x A100 40GB GPUs    128             1TB Shared
appserv1       AMD EPYC 7542 2.9GHz                       32              256GB
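As an illustration of how the node table maps onto job requests, the sketch below asks Slurm for one of the 128-core AMD EPYC nodes (node350-351 / node401-404). The job name and executable are placeholders, and partition or GRES names are site-specific, so consult the Queues and Submitting jobs pages for the values to use on SCIAMA:

    #!/bin/bash
    # Sketch of a batch script targeting a whole 128-core EPYC node
    #SBATCH --job-name=epyc-test
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=128
    #SBATCH --time=01:00:00
    srun ./my_program      # my_program is a placeholder for your executable

    # To request one of the A100 GPUs on gpu01-02, add a GRES request, e.g.
    #   #SBATCH --gres=gpu:1
    # (the exact GRES and partition names are listed on the Queues page)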

Schematic of Sciama-4 Network


Schematic of Sciama-4 servers and nodes

