Parallel programming with createJob

Hi. I'm having some confusion regarding the functions createJob and createTask.
I have a function, compute(a,b), that gives back the number of primes between a and b. It doesn't use the function isprime(x). Instead it uses a loop to check whether each number between a and b is prime or not.
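For context, here's a minimal sketch of what such a compute.m might look like (hypothetical; I'm just assuming plain trial division, which is what I mean by a loop instead of isprime):

```matlab
function n = compute(a, b)
% COMPUTE Count the primes between a and b (inclusive) by trial division.
% Sketch only -- assumes a plain loop, no isprime.
n = 0;
for x = max(a, 2):b          % 1 is not prime, so start at 2
    isPrimeFlag = true;
    for d = 2:floor(sqrt(x)) % check divisors up to sqrt(x)
        if mod(x, d) == 0
            isPrimeFlag = false;
            break
        end
    end
    if isPrimeFlag
        n = n + 1;
    end
end
end
```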
Now let's say my goal was to find the number of primes between a=1 and b=30. I could simply write NumPrimes = compute(1,30); and I would get the desired value. I wanted to make the script run faster, though, so I decided to parallelize the code.
I had three workers available, so what I did was separate the numbers between 1 and 30 in three groups: from 1 to 10, 11 to 20 and 21 to 30.
(In my actual script I obviously used larger numbers, and I made sure that the time it took for a lab to execute each group of numbers was similar)
Simply using a parfor loop or an spmd block, assigning each group to a different lab, cuts the execution time. I wanted to do it in a different way, though, and that's when I started having trouble with my code.
I decided to create a job that I'd submit with three different tasks, each one taking care of a different group of numbers. The tasks were supposed to run in parallel, and once the job was complete I'd retrieve the outputs of each task. Here's the code:
matlabpool open 3
job = createJob('configuration','local','FileDependencies',{'compute.m'});
createTask(job,@compute,1,{1,10})
createTask(job,@compute,1,{11,20})
createTask(job,@compute,1,{21,30})
submit(job)
wait(job)
out = getAllOutputArguments(job);
This didn't run as expected. It took about 3 times longer than simply running the code in series with NumPrimes = compute(1,30); which is the opposite of what I wanted...
I think I'm not understanding how the createJob and createTask functions work. I thought that when one creates three tasks and runs the job they would run in parallel on different workers, cutting the execution time in comparison to a serial execution. That can't be it, though. The job doesn't seem to be running in series either, since the time increases by a factor of 3...
I'm really confused. If anyone could point me in the right direction I'd appreciate it.
Thanks. Daniel

Accepted Answer

Thomas Ibbotson
Thomas Ibbotson on 11 Feb 2013
Hi Daniel,
The problem is in the first line of code where you open a matlabpool. This is not required when using jobs and tasks, and will in fact use up resources on your system that will not then be available for use by the submitted job. My guess is that you have a 4-core machine, the matlabpool is using 3 of those cores, which leaves 1 left over for the job to run on, and as you have 3 tasks running on one core it is taking 3 times longer.
Everything else you have done is fine, just remove the first line that opens the matlabpool and try again.
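In other words, something like this (your code as-is, just without the matlabpool line, plus combining the partial counts at the end):

```matlab
% Same job as before, but no matlabpool: the three tasks are now free
% to run on three workers at the same time.
job = createJob('configuration','local','FileDependencies',{'compute.m'});
createTask(job,@compute,1,{1,10});
createTask(job,@compute,1,{11,20});
createTask(job,@compute,1,{21,30});
submit(job)
wait(job)
out = getAllOutputArguments(job);   % 3x1 cell array, one count per task
NumPrimes = sum([out{:}]);          % combine the three partial counts
```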
Cheers, Tom
  2 Comments
Daniel Jaló
Daniel Jaló on 11 Feb 2013
Hi. Thanks for the answer. I can now see the main problem with my code. I still have one doubt, though. I ran two versions of the code: in one I created a job with 3 tasks and submitted the job; in the second I created three jobs, each with one task, and submitted them all at the same time. In both versions the code took about the same time to run as the serial version of the code. That's when my doubts arose.
  • When I create a job with 3 tasks, are the tasks run in parallel or in series?
  • When I create three jobs with 1 task each, the jobs are run in parallel. Why doesn't that speed up the code? When I use spmd to do the same thing, assigning one group of numbers to each lab, the execution time decreases; with jobs and tasks, though, the execution time doesn't change compared to the serial execution.
  • When I create a job/task, can I define how many labs will be responsible for running it? As you correctly guessed I have a 4-core machine, and with this code I will only be using three workers, leaving one free that I could use to speed up the code.
Daniel
Thomas Ibbotson
Thomas Ibbotson on 12 Feb 2013
When you create an independent job (using createJob) each task of that job is independent of every other task, which means each task can be run at the same time on the available workers.
There is little difference between creating 3 jobs each with 1 task and 1 job with 3 tasks: in each case the tasks will run at the same time on the available workers. Each worker will execute only one task at a time in an independent job. Note that if you are using an independent job your code should not contain spmd or parfor; those are for use with matlabpools only.
If you require communication between workers, you can use either a parallel job or a matlabpool job. In both of these there is one task which is automatically replicated to all the available workers and run simultaneously. The code for a parallel task is equivalent to the body of an spmd block (but should not contain an spmd statement) and can use labSend, labReceive etc.
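As a rough sketch (using the same old-style API as your code; the number ranges are just your three groups, and gplus performs a global addition across all the labs):

```matlab
% Sketch of a parallel (communicating) job. One task is replicated to
% every worker; each worker uses labindex to pick its own range.
pjob = createParallelJob('configuration','local', ...
    'FileDependencies',{'compute.m'});
% This also answers your third question: you can control how many labs
% the job runs on via these properties.
set(pjob,'MinimumNumberOfWorkers',3,'MaximumNumberOfWorkers',3);

% Lab 1 handles 1-10, lab 2 handles 11-20, lab 3 handles 21-30;
% gplus sums the partial counts across all labs.
createTask(pjob, @() gplus(compute(10*(labindex-1)+1, 10*labindex)), 1, {});
submit(pjob)
wait(pjob)
out = getAllOutputArguments(pjob);  % one copy of the total per lab
```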
The code for a matlabpool job can include spmd and parfor statements, and reserves one of the workers to act as the 'client', and the rest of the workers are used to form the matlabpool. This allows you to write parallel code as you would using an interactive matlabpool on your client, but run it asynchronously on the cluster.
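For example (again a sketch against the old-style API; count_primes here is a hypothetical wrapper function, free to use parfor because it runs on the worker acting as the client):

```matlab
% count_primes.m -- hypothetical task function for a matlabpool job.
% parfor is allowed here because the job forms a pool from the
% remaining workers.
function n = count_primes(a, b)
n = 0;
parfor x = a:b
    n = n + double(isprime(x));  % isprime used just for brevity here
end
end
```

submitted with:

```matlab
mpjob = createMatlabPoolJob('configuration','local', ...
    'FileDependencies',{'count_primes.m'});
% 4 workers: 1 acts as the client, 3 form the pool for the parfor.
set(mpjob,'MinimumNumberOfWorkers',4,'MaximumNumberOfWorkers',4);
createTask(mpjob,@count_primes,1,{1,30});
submit(mpjob)
wait(mpjob)
out = getAllOutputArguments(mpjob);
```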
As for why your code is not speeding up when using jobs and tasks, in comparison to spmd, you must make sure that you do not have a matlabpool running on the cluster before you run your jobs. Otherwise I would expect to see the same speedup.
These are complicated concepts to understand and describe; if you have any more questions let me know and I'll try to clarify.
Tom
