HPC Slurm --ntasks and Matlab parcluster NumWorkers question
Hi,
I have a question about the number of tasks (--ntasks) in Slurm when executing a .m file that runs ONE genetic algorithm ('ga') with 'UseParallel' enabled.
The maximum number of physical CPUs per node on our HPC cluster is 64.
In Slurm .bash file, this works:
#SBATCH --cpus-per-task=64
#SBATCH --nodes=1
#SBATCH --ntasks=1
But the following is not allowed:
#SBATCH --cpus-per-task=128
#SBATCH --nodes=2
#SBATCH --ntasks=1
It fails with: "sbatch: Warning: can't run 1 processes on 2 nodes, setting nnodes to 1"
My thinking was simply to get 64 CPUs from 1 node, 128 CPUs from 2 nodes, and so on, to run ONE TASK ONLY in the MATLAB .m file below.
But Slurm is telling me it cannot use 2 nodes to run 1 task. Do I instead have to set --ntasks=2 in the .bash file to request 64+64 CPUs, and then do some trick in the MATLAB .m file so that MATLAB treats them as 128 CPUs for one task?
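For what it's worth, one way to make Slurm accept a 2-node allocation is to flip the request around: ask for 128 single-CPU tasks instead of 1 task with 128 CPUs, since each MATLAB worker is its own process. A sketch of such a batch file, assuming MATLAB Parallel Server is available on the cluster (module name and script name are placeholders):

```shell
#!/bin/bash
# Sketch only: request 2 nodes x 64 CPUs as 128 single-CPU tasks.
# Each Slurm task can then back one MATLAB worker process; the single
# matlab command below runs only on the first node, and the parallel
# pool (via a Slurm cluster profile) launches the workers.
#SBATCH --nodes=2
#SBATCH --ntasks=128
#SBATCH --cpus-per-task=1
#SBATCH --time=02:00:00

module load matlab        # module name is site-specific (assumption)
matlab -batch "run_ga"    # run_ga.m sets up the pool and calls ga
```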
In the MATLAB .m file, I did:
num_cpu = 64;  % I want to increase this to 128
parpool(parcluster, num_cpu)
options = optimoptions('ga', 'UseParallel', true, 'UseVectorized', false, ...
    'PopulationSize', num_cpu-1, ...)
[x,…] = ga(@(x)cost_fun(x)…, options);
Since Slurm does not allow multiple nodes for one task, I was previously advised to define a cluster profile in MATLAB instead, so that the job can span multiple nodes: https://www.mathworks.com/help/parallel-computing/discover-clusters-and-use-cluster-profiles.html
Is there a way to get NumWorkers up to 128 using 2 nodes and 1 task, in either the MATLAB .m file or the Slurm batch file?
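To make the cluster-profile idea concrete: the default local profile can never exceed the cores of one node, so a multi-node pool requires MATLAB Parallel Server and a Slurm cluster profile. A minimal sketch, assuming such a profile has been created per the linked MathWorks page (the profile name 'hpcSlurm' is a placeholder):

```matlab
% Sketch only: assumes MATLAB Parallel Server and a Slurm cluster
% profile named 'hpcSlurm' created as in the linked documentation.
num_cpu = 128;                  % workers across 2 nodes x 64 cores

c = parcluster('hpcSlurm');     % Slurm profile, not the 'local' profile
c.NumWorkers = num_cpu;         % allow up to 128 workers
pool = parpool(c, num_cpu);     % workers are submitted as Slurm jobs

options = optimoptions('ga', 'UseParallel', true, 'UseVectorized', false, ...
    'PopulationSize', num_cpu - 1);
% [x, fval] = ga(@cost_fun, nvars, [], [], [], [], lb, ub, [], options);

delete(pool);                   % release the workers when done
```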