Multiple for loops to split data

Owner5566 on 15 Jan 2020
Commented: Owner5566 on 16 Jan 2020
Hey guys,
Currently my function is really slow because of the sheer amount of data, and because it only uses one thread. Since I have a multi-core processor (Ryzen 5 3600, 6 cores / 12 threads), I want to make use of it by splitting my data, running the same function on each part in parallel, and putting the results back together.
I have found the spmd and parfor commands.
The rough steps I want to take:
  1. Split the data (tables) n times.
  2. Give each worker its parts of the split data plus the raw data (which I need for the function).
  3. Run a function that modifies the split data on each worker.
  4. Put all the split data back together.
Also, I am limited to the functions available in MATLAB R2015b.
How can I do that? Can you please help me?
This is what I tried:
workers = 12;
divider = ceil(specs.numberOfRows/workers);
split1 = data((data.ID <= divider),:);
split2 = data((data.ID > divider) & (data.ID <= divider*2),:);
split3 = data((data.ID > divider*2) & (data.ID <= divider*3),:);
split4 = data((data.ID > divider*3) & (data.ID <= divider*4),:);
split5 = data((data.ID > divider*4) & (data.ID <= divider*5),:);
split6 = data((data.ID > divider*5) & (data.ID <= divider*6),:);
split7 = data((data.ID > divider*6) & (data.ID <= divider*7),:);
split8 = data((data.ID > divider*7) & (data.ID <= divider*8),:);
split9 = data((data.ID > divider*8) & (data.ID <= divider*9),:);
split10 = data((data.ID > divider*9) & (data.ID <= divider*10),:);
split11 = data((data.ID > divider*10) & (data.ID <= divider*11),:);
split12 = data((data.ID > divider*11) & (data.ID <= specs.numberOfRows),:);
dataset_array = {split1, split2, split3, split4, split5, split6, ...
    split7, split8, split9, split10, split11, split12};
newDataset_array = cell(1, workers);
parfor i = 1:12
    newDataset_array{i} = myFunction(dataset_array{i}, data); % curly braces to pass the table itself, not a 1x1 cell
end
newData = [];
for i = 1:12
    newData = [newData; newDataset_array{i}];
end
Thanks in advance!
  11 Comments
Owner5566 on 15 Jan 2020
Okay, I already did it that way; I just wanted to know if I missed anything.
But thanks. Works like a charm ;)
Guillaume on 15 Jan 2020
Comment by Owner5566 mistakenly posted as an Answer moved here:
Now I just need a way to make the big data available to all workers.
The way I do it now, they all receive it as a function argument, which leads to a lot of memory use.
Can't I make it available to all of them at once?
I need it for filtering inside the functions.
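One option the thread itself doesn't spell out (so treat this as a sketch, not the accepted approach): parallel.pool.Constant, introduced in R2015b, transfers the big table to each worker once and reuses that worker-local copy across parfor loops. The variable names below come from the question's code; whether myFunction can take the table this way is an assumption:

```matlab
% Sketch: transfer the large table to each worker only once.
% Note: this does not share memory -- every worker still holds its own
% copy -- but the data is not re-sent for every parfor invocation.
c = parallel.pool.Constant(data);
newDataset_array = cell(1, 12);
parfor i = 1:12
    % c.Value refers to the worker-local copy of the table
    newDataset_array{i} = myFunction(dataset_array{i}, c.Value);
end
```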


Accepted Answer

Guillaume on 15 Jan 2020
For the record, this is my suggested modification to the original code:
workers = 12;
destination = discretize(data.ID, linspace(min(data.ID), max(data.ID), workers + 1)); %split ID into workers bins
dataset_array = splitapply(@(rows) {data(rows, :)}, (1:height(data))', destination);
which is a good demonstration of why numbered variables are bad: 3 lines instead of 15, and dead easy to change the number of workers.
However, that doesn't help at all with your parallel computation. I'm not entirely clear on why you'd want to pass the whole dataset to each worker; if all the data is needed by every worker, you lose part of the benefit of parallelisation. In addition, it may well be that the overhead of passing the data to each worker cancels any speed-up from parallelisation.
If you need to pass the whole table to each worker, then there's not much benefit in passing a section of the table at the same time. You're better off just passing the row indices that each worker should work on and letting the worker extract those rows itself. That should result in less overhead:
workers = 12;
destination = discretize(data.ID, linspace(min(data.ID), max(data.ID), workers + 1)); % split ID into 'workers' bins
processeddata = cell(1, workers);
parfor i = 1:numel(workers)
    processeddata{i} = dowork(data, destination == i); % pass the whole of data and a logical vector indicating which rows the worker should work on
end
with
function result = dowork(data, workingrows)
    datatoworkon = data(workingrows, :);
    %...
end
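Step 4 of the question (putting the pieces back together) isn't shown above; assuming dowork returns a table with the same variables for every chunk, the per-worker results can be recombined after the parfor loop like this:

```matlab
% Reassemble the per-worker result tables into one table
newData = vertcat(processeddata{:});
```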
But if you can, I would strongly recommend you upgrade to a more recent version of MATLAB. R2016b introduced tall arrays and tables, which are basically arrays and tables designed for big data. Operations on these are automatically parallelised if you have Parallel Computing Toolbox.
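For illustration only (this requires R2016b or later, and the file name is a made-up placeholder), working with a tall table looks roughly like this:

```matlab
% Build a tall table from a datastore; data is read and processed in chunks
ds = tabularTextDatastore('bigdata.csv'); % placeholder file name
t = tall(ds);
filtered = t(t.ID <= 1000, :); % operations on tall tables are deferred
result = gather(filtered);     % gather triggers the (possibly parallel) evaluation
```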
Finally, for processing big data you also have the mapreduce functions, which should be available in your version. Again, mapreduce parallelises the work for you (when a parallel pool is available). mapreduce is not suited to every kind of processing and can be a bit of a learning curve if you've never used it, but it may be useful for what you're doing.
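As a rough sketch of the mapreduce shape (the file name and the row-counting task are made-up placeholders, and in R2015b the mapper and reducer need to live in their own function files):

```matlab
% Count the total number of rows in a large file with mapreduce
ds = tabularTextDatastore('bigdata.csv'); % placeholder file name
result = mapreduce(ds, @countMapper, @countReducer);
readall(result)

function countMapper(data, ~, intermKV)
    % data is one chunk of the datastore; emit its row count under one key
    add(intermKV, 'rowCount', height(data));
end

function countReducer(key, valIter, outKV)
    total = 0;
    while hasnext(valIter)
        total = total + getnext(valIter);
    end
    add(outKV, key, total);
end
```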
  9 Comments
Guillaume on 16 Jan 2020
D'oh! I didn't notice the numel, which is clearly a typo: numel(workers) is always going to be 1. It should indeed have been
parfor i = 1:workers
%...
end
or
parfor i = 1:numel(processeddata)
%...
end
Owner5566 on 16 Jan 2020
Okay, then thank you again!
