Optimizing a fmincon function by vectorising loops?

I received a function that maximizes a log-likelihood, and I need to speed it up: for my problem it doesn't converge in a reasonable amount of time, and I'm not sure what the issue is. The main function:
function [fitness,ExpW,P]=Main_DECM(InputMatrix)
BinaryMatrix=sign(InputMatrix);
Sout=sum(InputMatrix')';
Sin=sum(InputMatrix)';
W=sum(Sout);
Kout=sum(BinaryMatrix')';
Kin=sum(BinaryMatrix)';
Link=sum(sum(BinaryMatrix));
LengthMarginals=length(Kout);
x0=[0.9*ones(length(Kout),1);0.9*ones(length(Kout),1);0.9*ones(LengthMarginals,1);0.9*ones(LengthMarginals,1)];
options=optimset('Display','iter','Algorithm','interior-point','GradObj','off','DerivativeCheck','off','MaxFunEvals',10^5,'MaxIter',10^5,'TolX',10^(-32),'TolFun',10^(-32));
fitness=fmincon(@(x)SolverFun_DECM(x,Kout,Kin,Sout,Sin,LengthMarginals),x0,[],[],[],[],zeros(4*LengthMarginals,1),[ones(2*LengthMarginals,1)*Inf;ones(2*LengthMarginals,1)+0.1],[],options);
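As a side note, the marginal sums can be written with `sum`'s dimension argument instead of the double transpose; this is equivalent for real-valued input and avoids forming transposed copies of the 7800x7800 matrix:

```matlab
Sout = sum(InputMatrix,2);     % row sums, same as sum(InputMatrix')'
Sin  = sum(InputMatrix,1)';    % column sums
Kout = sum(BinaryMatrix,2);
Kin  = sum(BinaryMatrix,1)';
Link = sum(BinaryMatrix(:));   % total number of links
```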
The input matrix is 7800x7800, so all the marginal vectors (Kout, Kin, Sout, Sin, etc.) are 7800x1, with integer entries between 0 and ~10^6. The SolverFun_DECM function being optimized looks like this:
function [G]=SolverFun_DECM(m,Kout,Kin,Sout,Sin,LengthMarginals)
xOut=m(1:LengthMarginals);
xIn=m(LengthMarginals+1:2*LengthMarginals);
yOut=m(2*LengthMarginals+1:3*LengthMarginals);
yIn=m(3*LengthMarginals+1:4*LengthMarginals);
g=0;
f=0;
for i=1:LengthMarginals
g=g+Kout(i)*log(xOut(i))+Kin(i)*log(xIn(i))+Sout(i)*log(yOut(i))+Sin(i)*log(yIn(i));
end
for i=1:LengthMarginals
for j=1:LengthMarginals
if i~=j
f=f+log(1-yOut(i)*yIn(j))-log(1-yOut(i)*yIn(j)+xOut(i)*xIn(j)*yOut(i)*yIn(j));
end
end
end
G=-(g+f);
The profiler shows that the function doesn't even get past the loop for f in roughly 3 days, so I tried to vectorise the whole function, e.g. by replacing the loop over g with something like:
g = sum(Kout.*log(xOut) + Kin.*log(xIn) + Sout.*log(yOut) + Sin.*log(yIn));
In testing with a smaller matrix, the two versions return the same value, but when fmincon is applied, it doesn't converge. I suspect it has something to do with how fmincon works, but I'm not sure.
Is vectorising the right approach to speeding up the function? If yes, why doesn't it work like this? If no, what might be a better way to speed up the function?
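For reference, the double loop over f can also be vectorised with outer products; the sketch below (the name SolverFun_DECM_vec is just for illustration) zeroes the diagonal to reproduce the i~=j condition:

```matlab
function G = SolverFun_DECM_vec(m,Kout,Kin,Sout,Sin,n)
% Vectorised version of SolverFun_DECM; n = LengthMarginals.
xOut = m(1:n);       xIn = m(n+1:2*n);
yOut = m(2*n+1:3*n); yIn = m(3*n+1:4*n);
g = sum(Kout.*log(xOut) + Kin.*log(xIn) + Sout.*log(yOut) + Sin.*log(yIn));
YY = yOut*yIn.';                 % YY(i,j) = yOut(i)*yIn(j)
XY = (xOut*xIn.').*YY;           % xOut(i)*xIn(j)*yOut(i)*yIn(j)
F  = log(1-YY) - log(1-YY+XY);   % all (i,j) terms at once
F(1:n+1:end) = 0;                % drop the i == j terms
G = -(g + sum(F(:)));
end
```

Note that each n-by-n temporary is about 485 MB in double precision for n = 7800, so memory, rather than loop overhead, may become the binding constraint.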

2 Comments

The matrix is a 7800x7800 matrix
Which matrix? Your unknowns are 31200x1.
Kout=rand(7800,1);
LengthMarginals=length(Kout);
x0=[0.9*ones(length(Kout),1);0.9*ones(length(Kout),1);0.9*ones(LengthMarginals,1);0.9*ones(LengthMarginals,1)];
size(x0)
ans = 1×2
31200 1
The matrix that is the input to the Main_DECM function (InputMatrix), which in my research case is a 7800x7800 matrix.


Answers (1)

Matt J on 16 Dec 2021 (edited 16 Dec 2021)
Is vectorising the right approach to speeding up the function? If yes, why doesn't it work like this?
Yes, assuming you also vectorized the computation of f.
Also, for such a large number of variables, you would need to supply a vectorized gradient calculation (SpecifyObjectiveGradient=true) to see significant speed-up. The finite difference calculations will dominate the CPU time otherwise.
The reasons for the failure of convergence are not clear, since you have only called fmincon with one output. Therefore, we cannot see the exitflag or other diagnostic output that fmincon can provide. Perhaps the objective is poorly conditioned, or the initial guess is bad, or you aren't running enough iterations. Providing an analytical gradient can also improve convergence.
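A hedged sketch of what such an objective-plus-gradient function might look like, obtained by differentiating g and f term by term (the function name is illustrative, and the derivatives should be checked with fmincon's CheckGradients option before relying on them):

```matlab
function [G,grad] = SolverFun_DECM_grad(m,Kout,Kin,Sout,Sin,n)
% Vectorised objective and analytic gradient; n = LengthMarginals.
xOut = m(1:n);       xIn = m(n+1:2*n);
yOut = m(2*n+1:3*n); yIn = m(3*n+1:4*n);
YY = yOut*yIn.';               % yOut(i)*yIn(j)
XY = (xOut*xIn.').*YY;         % xOut(i)*xIn(j)*yOut(i)*yIn(j)
A  = 1 - YY;                   % argument of the first log
D  = A + XY;                   % argument of the second log
F  = log(A) - log(D);
F(1:n+1:end) = 0;              % exclude i == j
g = sum(Kout.*log(xOut) + Kin.*log(xIn) + Sout.*log(yOut) + Sin.*log(yIn));
G = -(g + sum(F(:)));
if nargout > 1
    R = XY./D;   R(1:n+1:end) = 0;               % terms in d f/d xOut, d f/d xIn
    M = (YY-XY)./D - YY./A;  M(1:n+1:end) = 0;   % terms in d f/d yOut, d f/d yIn
    grad = -[ (Kout - sum(R,2) )./xOut;          % d(g+f)/d xOut
              (Kin  - sum(R,1).')./xIn;          % d(g+f)/d xIn
              (Sout + sum(M,2) )./yOut;          % d(g+f)/d yOut
              (Sin  + sum(M,1).')./yIn ];        % d(g+f)/d yIn
end
end
```

It could then be passed to fmincon with optimoptions('fmincon','SpecifyObjectiveGradient',true), and requesting the third and fourth outputs, [x,fval,exitflag,output], would surface the diagnostics mentioned above.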

Release: R2021b
Asked: 16 Dec 2021
Edited: 16 Dec 2021
