SCIAMA
High Performance Compute Cluster
Sciama Queues
The main queues/partitions on SCIAMA are:
Name | Number of Nodes | Node Names | Cores per Node | Shared Memory | Notes |
---|---|---|---|---|---|
sciama2.q | 95 | node100 to node194 | 16 | 64GB | |
sciama3.q | 48 | node200 to node247 | 20 | 64GB | |
sciama4.q | 12 | node300 to node311 | 32 | 192GB | |
sciama4-12.q | 16 | node312 to node327 | 32 | 384GB | |
sciama4-4.q | 4 | node401 to node404 | 128 | 1TB | Total of 512 cores and 4TB shared memory |
himem.q | 1 | vhmem01 | 16 | 512GB | High-memory node. Use this queue only if your job requires more than 384 GB of memory. |
hicpu.q | 2 | node350 and node351 | 128 | 125GB | Ideal for multiple short jobs that don't require much memory. |
gpu.q | 4 | gpu01 to gpu04 | 128 | 512GB | GPU nodes each with 2x A100 |
Special Queues | | | | | |
jupyter.q | | Subset of sciama3.q nodes | | | Used by default to run Jupyter notebook servers on SCIAMA. |
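To target one of the queues above, you select it as the partition when submitting a job. The following is a minimal sketch of a batch script, assuming SCIAMA uses the SLURM scheduler and that the partition names match the queue names in the table; the job name, resource values, and `./my_program` are placeholders to adapt to your own work.

```shell
#!/bin/bash
# Sketch of a SLURM batch script (assumes SLURM; partition names taken
# from the queue table above). All values below are example placeholders.
#SBATCH --job-name=example
#SBATCH --partition=sciama2.q   # queue/partition from the table
#SBATCH --nodes=1
#SBATCH --ntasks=16             # sciama2.q nodes have 16 cores each
#SBATCH --mem=60G               # stay within the 64GB node memory
#SBATCH --time=01:00:00

srun ./my_program               # replace with your executable
```

Submit the script with `sbatch myscript.sh` and monitor it with `squeue -u $USER`; pick the partition whose core count and shared memory best fit your job, per the table above.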