
h1. User instructions for Dione cluster 

 
 University of Turku 
 Åbo Akademi 
 Jussi Salmi (jussi.salmi@utu.fi) 

 h2. 1. Resources 

 h3. 1.1. Computation nodes 

<pre>
PARTITION  NODES  NODELIST   MEMORY
normal     36     di[1-36]   192GB
gpu        6      di[37-42]  384GB
</pre>

Dione has 6 GPU nodes on which users can run calculations that benefit from very fast, massively parallel number crunching, for example training neural networks. The other 36 nodes are general-purpose compute nodes. The nodes are connected via a fast Infiniband network, which enables MPI (Message Passing Interface) jobs in the cluster. In addition, the cluster is connected to the EGI grid (European Grid Infrastructure) and NORDUGRID, which are allowed to use part of the computational resources. The website

 https://p55cc.utu.fi/ 

contains information on the cluster, hosts a cluster monitor, and provides instructions on getting access to and using the cluster.
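
As a rough sketch, an MPI batch job could be submitted with a script like the one below. The MPI module and the executable name are placeholders, not software known to exist on Dione; a GPU job would request the gpu partition with -p gpu instead of normal (the exact GPU request syntax depends on the site configuration).

<pre>
#!/bin/bash
#SBATCH --job-name=mpi-test
#SBATCH -p normal             # run on the general-purpose partition
#SBATCH -N 2                  # request 2 nodes
#SBATCH --ntasks-per-node=4   # 4 MPI ranks per node, 8 in total
#SBATCH -t 10:00
#SBATCH -o mpi-result.txt

module purge
module load <MPI module>      # load the site's MPI module before running

srun ./my_mpi_program         # placeholder executable; srun launches the MPI ranks
</pre>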

 h3. 1.2. Disk space 

The system has an NFS4 file system with 100TB capacity on the home partition. The file system is not backed up, so users must take care of their own backups.
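
For example, important data can be copied to another machine with a tool such as rsync; the host name and paths below are only placeholders:

<pre>
# Run on your own machine: copy a results directory from Dione to a local backup.
# <username>, <dione login node> and the paths are placeholders - adjust to your setup.
rsync -avz <username>@<dione login node>:~/results/ ~/backup/dione-results/
</pre>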

 h3. 1.3. Software 

 The system uses the SLURM workload manager (Simple Linux Utility for Resource Management) for scheduling the jobs. 

The cluster uses the module system for loading software, which makes it possible to select between different versions of the installed packages.

 h2. 2. Executing jobs in the cluster 

Users may not execute jobs on the login node. All jobs must be dispatched to the cluster using SLURM commands. Normally a script defines the job and its parameters for SLURM. There is a large number of parameters and environment variables that control how jobs are executed; see the SLURM manual for a complete list.

A typical script for submitting a job can look as follows (name: batch-submit.job):

 <pre> 
 #!/bin/bash 
 #SBATCH --job-name=test 
 #SBATCH -o result.txt 
 #SBATCH --workdir=<Workdir path> 
 #SBATCH -c 1 
 #SBATCH -t 10:00 
 #SBATCH --mem=10M 
 module purge # Purge modules for a clean start 
 module load <desired modules if needed> # You can either inherit module environment, or insert one here 

 srun <executable> 
 srun sleep 60 
 </pre> 


The script is submitted with:

<pre>
sbatch batch-submit.job
</pre>

 The script defines several parameters that will be used for the job. 

 <pre> 
--job-name      defines the job name
-o result.txt   redirects the standard output to result.txt
--workdir       defines the working directory
-c 1            sets the number of CPUs per task to 1
-t 10:00        sets the time limit of the task to 10 minutes; after that the job is stopped
--mem=10M       the memory required for the task is 10MB
 </pre> 

srun starts a task. When the task starts, SLURM gives it a job id which can be used to track its execution with e.g. the squeue command.
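
For example (the job id below is made up; sbatch prints the real one when the job is submitted):

<pre>
$ sbatch batch-submit.job
Submitted batch job 123456
$ squeue -j 123456      # follow the status of this particular job
</pre>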


 h2. 3. The module system 

Many of the software packages on Dione require you to load the corresponding environment module before using the software. Different versions of a package can be selected with the module command.

<pre>
module avail              Show available modules

module list               Show loaded modules

module unload <module>    Unload a module

module load <module>      Load a module

module load <module>/10.0 Load version 10.0 of <module>

module purge              Unload all modules
</pre>
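
A typical session could look like the following; the module name and version are only examples, the modules actually installed on Dione are listed by module avail:

<pre>
module purge           # start from a clean environment
module avail           # list the modules installed on the system
module load gcc/10.0   # example only - load the module and version you need
module list            # verify which modules are now loaded
</pre>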


 h2. 4. Useful commands in SLURM 

<pre>
sinfo                        Show the current status of the cluster

sinfo -p gpu                 Show the status of the gpu partition
sinfo -O all                 Show a comprehensive status report, node per node

sstat <job id>               Show information on a running job

squeue                       Show the status of the job queue
squeue -u <username>         Show only your own jobs

srun <command>               Dispatch a job to the scheduler

sbatch <script>              Submit a script defining the jobs to be run

scontrol                     Inspect and control your jobs in many ways
scontrol show job <job id>   Show details about a job
scontrol -u <username>       Show only a certain user's jobs

scancel <job id>             Cancel a job
scancel -u <username>        Cancel all your jobs
</pre>
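
For example, a typical monitoring sequence could look like this (the job id is again only an example):

<pre>
sinfo -p gpu                # check the state of the gpu partition
sbatch batch-submit.job     # submit the script from section 2
squeue -u $USER             # list your own jobs
scontrol show job 123456    # inspect a job in detail
scancel 123456              # cancel it if needed
</pre>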


 h2. 5. Further information 

For further information, contact the administrators (fgi-admins@lists.utu.fi).