SCIAMA

High Performance Compute Cluster


Sciama Storage Policy for Users

The Sciama environment was provisioned to provide a platform for complex numerical computations. Ample storage has been made available to ensure there is temporary space for even the most demanding tasks.

There are two storage areas on Sciama: your home directory and the Lustre project data area.


User Account Home Directories

The suggested limit on home areas is 20 GB. We do not currently enforce this limit, but if it is exceeded we will contact you to discuss your requirements.
If your account remains idle (no logins for more than one year), we will attempt to contact you to discuss what to do with your data. If we are unable to make contact, we will remove your SCIAMA account and all data in your /home directory.
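
If you want to check how much of the suggested 20 GB you are using, a small script along the lines of the sketch below will do; it simply walks your home directory and sums file sizes. The 20 GB threshold comes from this policy, but the script itself is a generic illustration, not a SCIAMA-provided tool.

    #!/usr/bin/env python3
    """Rough check of home-directory usage against the suggested 20 GB limit.

    Illustrative sketch only; not a SCIAMA-provided tool.
    """
    import os

    # 20 GB suggested home-area limit, taken from the storage policy above
    SUGGESTED_LIMIT_BYTES = 20 * 1024**3

    def directory_size(path: str) -> int:
        """Sum the apparent sizes of all regular files under `path`."""
        total = 0
        for root, dirs, files in os.walk(path, onerror=lambda e: None):
            for name in files:
                try:
                    total += os.lstat(os.path.join(root, name)).st_size
                except OSError:
                    pass  # skip files that vanish or cannot be read
        return total

    if __name__ == "__main__":
        home = os.path.expanduser("~")
        used = directory_size(home)
        print(f"{home}: {used / 1024**3:.1f} GB used "
              f"({used / SUGGESTED_LIMIT_BYTES:.0%} of the suggested 20 GB)")

Note that this reports apparent file sizes rather than allocated blocks, so the figure may differ slightly from what the filesystem accounting sees.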

Lustre Project Data

All project data should be stored on the Lustre filesystem and should be considered transient data. Lustre is not backed up!
Each project will be allocated an amount of disk space agreed between the project leader and the ICG computing team. Within the ICG, the project leader is responsible for the data; outside the ICG, the data owner is the initial contact who requested the Lustre space.
If you own data on Lustre and leave the ICG, you must tell us who the new owner is; failure to do so may result in the data being deleted.
A soft quota will be applied. If it is exceeded, the project leader will be informed and given one month to reduce the amount of data; after that, no further files can be written and simulations will fail. A hard quota, again agreed with project leaders and the ICG IT committee, will also be enforced. Once a project has been allocated Lustre disk space, the allocation can only be changed under exceptional circumstances and must be approved by the ICG computing team.
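
To keep an eye on how close a project is to its quota, the standard Lustre client command lfs quota can be queried, for example from a small wrapper like the sketch below. The group name (myproject) and mount point (/mnt/lustre) are placeholders, not real SCIAMA values; check with sciama-support for the names and paths that apply to your project.

    #!/usr/bin/env python3
    """Illustrative sketch: report Lustre group usage via `lfs quota`.

    The group name and mount point below are placeholders, not SCIAMA values.
    """
    import subprocess

    GROUP = "myproject"         # placeholder project group
    MOUNTPOINT = "/mnt/lustre"  # placeholder Lustre mount point

    def lustre_group_quota(group: str, mountpoint: str) -> str:
        """Return the human-readable `lfs quota` report for a group."""
        result = subprocess.run(
            ["lfs", "quota", "-h", "-g", group, mountpoint],
            check=True, capture_output=True, text=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(lustre_group_quota(GROUP, MOUNTPOINT))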
A very small amount of Lustre disk space (less than 10 TB) may be allocated to individuals not working on ICG projects.
To request new storage space on Lustre, please contact sciama-support@port.ac.uk; your requirements will be discussed at the next ICG IT meeting.

Copyright © 2022 ICG, University of Portsmouth