Update Slurm job arrays, authored by LOGER Benoit
Another approach to running several independent tasks in parallel is to use Slurm arrays:

```bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --array=1-10%5
#SBATCH --mem=1gb
#SBATCH --time=0-00:30:00
#SBATCH --output=array_job.log
```
This script (note the --array=1-10 option) will run one *parent* job that will run 10 *child* jobs, one for each identifier between 1 and 10.
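Once submitted with *sbatch*, the *parent* job and its *child* jobs can be inspected with the usual Slurm commands: running *child* jobs are listed individually, while queued members are grouped together. A minimal sketch (the script file name is an assumption for this example):

```bash
sbatch array_job.sh   # prints: Submitted batch job <JOBID>

# Running child jobs appear as <JOBID>_1, <JOBID>_2, ...;
# members still waiting in the queue appear collapsed, e.g. <JOBID>_[6-10%5]
squeue -u $USER
```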
**What is different?**
- The *parent* job will run a new *child* job every time there are enough computing resources available (instead of waiting to be able to run them all in parallel)
- You can specify how many *child* jobs (at most) should be run in parallel: *--array=1-10%5* will run at most 5 *child* jobs simultaneously
- You can define the set of identifiers for your *child* jobs (e.g. use --array=1,5,6), as shown in the sketch below
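Inside each *child* job, Slurm exports that job's identifier in the SLURM_ARRAY_TASK_ID environment variable, which is the usual way to give every *child* job a different input. A minimal sketch, assuming one input file per identifier (the file names and program are hypothetical):

```bash
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --array=1,5,6

# Each child job picks the input file matching its own identifier,
# e.g. inputs/input_1.txt for child job 1
INPUT="inputs/input_${SLURM_ARRAY_TASK_ID}.txt"
srun ./my_program "$INPUT"
```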
## Separated outputs
Here is an example of how you can use Slurm variables to configure the execution:

```bash
#SBATCH --job-name=myarray_job # Name of the parent job
#SBATCH --ntasks=1 # Each child job runs 1 task
#SBATCH --cpus-per-task=1 # Each task requires 1 cpu
#SBATCH --array=1-10%5 # Running 10 child jobs with IDs in [1,10], at most 5 at a time
#SBATCH --mem-per-cpu=1gb # Using at most 1gb of memory per cpu
#SBATCH --time=0-00:30:00 # Child jobs will be killed if they run longer than 30 minutes
#SBATCH --output=logs/array_%A-%a.logs # One log file per child job
```
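Here *%A* expands to the ID of the *parent* job and *%a* to the identifier of the *child* job, so every *child* job writes to its own log file. Note that Slurm does not create the logs/ directory for you; create it before submitting, otherwise the *child* jobs will fail to write their output (the script file name below is an assumption):

```bash
mkdir -p logs          # must exist before the child jobs start
sbatch myarray_job.sh  # submit the parent job
```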