Here is an example file:

```
#!/bin/bash
#SBATCH --ntasks=1                  # Run a single task
#SBATCH --cpus-per-task=4           # Number of CPUs per task
#SBATCH --mem=1gb                   # Total memory limit
#SBATCH --time=01:00:00             # Time limit hrs:min:sec
#SBATCH --output=example_%j.log     # Standard output and error log

./my-app arg1
```

This file will run one task using at most 4 CPUs and 1 GB of memory, for at most 1 hour, and print the output to example_%j.log (%j is replaced by the job allocation id).
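Such a file is submitted and monitored with the standard Slurm commands (this of course requires a cluster running Slurm; the filename `example.sh` below is just an assumption):

```
sbatch example.sh    # submit the batch script; prints the allocated job id
squeue -u $USER      # list your pending and running jobs
scancel <jobid>      # cancel a job if needed
```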
A second example file:

```
#!/bin/bash
#SBATCH --job-name=parallel_job_test         # Job name
#SBATCH --mail-type=END,FAIL                 # Mail events (not available on SRVOAD)
#SBATCH --mail-user=email@imt-atlantique.fr  # Where to send mail
#SBATCH --nodes=1                            # Run all processes on a single node
#SBATCH --ntasks=10                          # Number of processes
#SBATCH --cpus-per-task=1                    # Number of CPUs per task
#SBATCH --mem=1gb                            # Total memory limit
#SBATCH --time=01:00:00                      # Time limit hrs:min:sec
#SBATCH --output=example_%j.log              # Standard output and error log

for i in {1..10}
do
    ./my-app arg1 $i &
done
wait
```
This will run 10 different tasks in parallel (each iteration is sent to the background with `&`, and `wait` blocks until all of them have finished). Note that they will appear as a single job in *squeue* and that you cannot send each task's output to a separate file (to do that, refer to the [Job array section](Slurm job array)).
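The fork-and-join shell pattern used in this script can be tried without Slurm. Below is a minimal sketch in plain shell, with `echo` standing in for the hypothetical `my-app` binary:

```
#!/bin/sh
# Launch each task in the background with '&'; 'wait' then blocks until
# every background job has finished. This is the same pattern as in the
# batch script above ('echo' stands in for the real application).
for i in 1 2 3
do
    echo "task $i done" > "task_$i.log" &
done
wait
cat task_1.log task_2.log task_3.log
```

Because each iteration is backgrounded, the three tasks run concurrently, and `wait` guarantees the script does not exit before all of them complete.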