MATLAB Answers

Memory spike when using previously declared variable in spmd block

Asked by Anders Hoff on 27 Mar 2012
Latest activity: commented on by Matti Kummu on 17 Jan 2017
I have a piece of code where a few (reasonably) large vectors are initialized inside a script before they are used in an spmd block. Almost all the elements of the vectors are used on each worker, so I really need the whole vector to be available on each worker. The code is similar in structure to the code below.
matlabpool open local 4
a = rand(1+2^27,1); % 1 GB; in my code this is not a random vector
spmd
    b = a(1); % my code does some more involved work here,
              % but this still illustrates my problem
end
After having executed this code the total memory usage is close to 6 GB, as expected. However, when the spmd block starts, the memory usage spikes to between 8 and 10 GB. I figure this has something to do with transmitting the variable 'a', but I fail to understand why the spike is so large.
After looking through the questions here, and reading the PCT documentation, I am still drawing a blank.
I have two concrete questions:
  1. Can somebody explain what the cause of the spike in memory usage is?
  2. Is there a way to distribute the variables without getting this memory spike, or at least reduce it?
I am aware of distributed arrays, but in the tests I have done the communication overhead of using distributed arrays for this is too large. However, I am naturally open to any suggestions that involve distributed arrays as well; I do not, after all, claim to be an expert in PCT.
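(For reference, a distributed-array variant of the example above would look roughly like the sketch below. This is only an illustration of the approach: each worker holds just its own slice of the vector, so fetching the non-local elements is where the communication overhead comes from.)
a = distributed(rand(1+2^27,1));  % again, not random data in my real code
spmd
    aLoc = getLocalPart(a);       % only this worker's ~1/4 of the vector
    b = aLoc(1);                  % anything outside the local part would
                                  % need to be fetched from another worker
end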
In addition, if I change the assignment above to 'rand(2^28,1)' I get the following error:
Error using distcompserialize
Error during serialization
Error in spmdlang.RemoteSpmdExecutor/initiateComputation (line 82)
fcns = distcompMakeByteBufferHandle( ...
Error in spmdlang.spmd_feval_impl (line 14)
blockExecutor.initiateComputation();
Error in spmd_feval (line 8)
spmdlang.spmd_feval_impl( varargin{:} );
My hope is that an answer to my second question might rid me of this error message as well.
If it matters, I am using MATLAB R2011b and PCT version 5.2 on Linux.
Thank you for your time.
Anders.


5 Answers

Answer by Edric Ellis on 28 Mar 2012 (Accepted Answer)

One more thing: if you need the same data on each worker, you could also do this:
c = Composite();
c{1} = getMyLargeData();          % only worker 1 gets the data from the client
c(2:end) = cell(1, numel(c) - 1); % leave the remaining entries empty
spmd
    c = labBroadcast( 1, c );     % copy worker-to-worker rather than client-to-worker
    % use c
end


Answer by Konrad Malkowski on 28 Mar 2012

You can build the random vector directly on the workers:
spmd
    c = codistributed.rand(1+2^27, 1);
end
As for the spike in memory usage that you are seeing: without getting too deep into the implementation details, it is caused by the send and receive buffers on both the client MATLAB (the one you are interacting with) and the worker MATLABs (the MATLABPOOL workers). You will have one buffer on the client, and one buffer per worker.
The reason for the second error is that, at the moment, there is a 2 GB serialization limit for communications between the client and the workers.
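As a rough back-of-the-envelope illustration (assuming one full serialized copy per buffer, which is a simplification): with the 1 GB vector and 4 workers from the example, the steady state is roughly one copy on the client plus one on each worker, about 5-6 GB; while the spmd block starts up, the client's send buffer plus the four receive buffers can transiently hold up to another ~5 GB, consistent with the 8-10 GB spike.
info = whos('a');
gbPerCopy = info.bytes / 2^30   % ~1 GB for the example vector
% steady state : client copy + 4 worker copies         ~ 5 GB
% during spmd  : + send/receive buffer copies (1 + 4)  ~ up to ~10 GB transient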

  1 Comment

Thanks for your answer.
I realize that I wasn't clear on this in my question, but I am not actually initializing a random vector; the vector contains something completely different. Hence, building a codistributed random vector as you suggest is sadly not an option.
Thanks for explaining the spikes. I had a feeling it had something to do with memory buffers.


Answer by Edric Ellis on 28 Mar 2012

If you need to build the data on the client side, you can use the explicit Composite method. Something like this:
c = Composite();
for ii = 1:numel(c)
    c{ii} = getMyLargeData(ii);
end
spmd
    % use 'c'
end
This is the most memory-efficient way to do things, as it sends only the required data to each worker. Konrad's explanation tells you why you are seeing the memory spike when doing things the other way.
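For illustration, if each worker only needs its own piece of the data, a hypothetical getMyLargeData might look something like the sketch below (the helper body and the placeholder computation are made up, not real application code):
function slice = getMyLargeData(ii)
% Sketch of a hypothetical helper: build (or load) only the part of
% the data that worker ii needs, so the client never has to serialize
% the full vector once per worker.
n     = 1 + 2^27;                    % total length, as in the question
nW    = 4;                           % matlabpool size assumed here
edges = round(linspace(0, n, nW + 1));
idx   = (edges(ii) + 1) : edges(ii + 1);
slice = sqrt(idx(:));                % placeholder for the real data source
end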

  1 Comment

Thank you; this helped a lot!


Answer by Thomas Lai on 7 Jun 2012

Hi Konrad, is there any way to get around the 2 GB serialization limit? 2 GB is much too small for the reasonably large datasets that I'm working with.


Answer by Henryk Modzelewski on 23 Mar 2013

Is there a way to increase the 2 GB serialization limit for communications between the client and workers? 2 GB is ridiculously low for big data sizes.

  1 Comment

This restriction was removed in R2013a.
