Is MATLAB's jacobian function equivalent to the gradfd function in GAUSS?

I have old code that calculates the gradient of an objective function and uses the result to construct a weighting matrix. The problem is that the code is written in GAUSS and I want to translate it to MATLAB. The original line is
{dfb} = gradfd(&fcn,b); @ Gradient of covariances at minimum - df/db @
where fcn is a user-defined function and b is the parameter vector. I read the MATLAB documentation and found two related commands: gradient() and jacobian(). Which should I use?
[dfb] = jacobian(fcn,b)
seems to just calculate the gradient of fcn w.r.t. b symbolically. It throws an error message like
Error using fcn (line 7)
Not enough input arguments.
Error in mini_dist (line 47)
dfb = jacobian(fcn,b);
during execution
What I want is the numerical value, so I need to supply the point at which the gradient is evaluated. I don't know how to achieve this. Is dfb a symbolic function into which I can feed values directly?
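[Editor's note: one way the symbolic route could work, assuming the Symbolic Math Toolbox is available. jacobian operates on symbolic expressions, not function handles, so you build a symbolic parameter vector first and substitute numeric values afterwards; bs, Jsym and n below are made-up names, and for fcn(bs) to run symbolically the dify array inside fcn would need to be created as sym(zeros(T,T)) rather than zeros(T,T).]

```matlab
% Sketch, untested: symbolic Jacobian evaluated at a numeric point.
n    = T_z + T_e;                  % length of the parameter vector
bs   = sym('b', [n, 1]);           % symbolic parameters b1..bn
Jsym = jacobian(fcn(bs), bs);      % symbolic Jacobian of fm w.r.t. b
dfb  = double(subs(Jsym, bs, b));  % evaluate at the numeric estimate b
```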
The fcn function:
%******************************
% Parameterized Moments
%******************************
function fm = fcn(b)
global T T_z T_e
teta =0;
zt =b(1:T_z);
et =b(1+T_z:T_z+T_e);
dify =zeros(T,T);
dify(1,1)=zt(1)+et(1)+(1-teta)^2*et(1)+teta^2*et(1);
dify(2,2)=zt(1)+et(2)+(1-teta)^2*et(1)+teta^2*et(1);
dify(3,3)=zt(1)+et(3)+(1-teta)^2*et(2)+teta^2*et(1);
j=4;
while j<=T-3;
dify(j,j)=zt(j-2)+et(j)+(1-teta)^2*et(j-1)+teta^2*et(j-2); %when teta=0,we have zt(j-2)+et(j)+et(j-1)
j=j+1;
end;
dify(T-2,T-2)=zt(T-4)+et(T-2)+(1-teta)^2*et(T-3)+teta^2*et(T-4);
dify(T-1,T-1)=zt(T-4)+et(T-2)+(1-teta)^2*et(T-2)+teta^2*et(T-3);
dify(T,T) =zt(T-4)+et(T-2)+(1-teta)^2*et(T-2)+teta^2*et(T-2);
dify(1,2)=-(1-teta)^2*et(1);
j=3;
while j<=T-1;
dify(j-1,j)=-(1-teta)*et(j-1)+teta*(1-teta)*et(j-2); %Identification of transition shock v_t (C2)
j=j+1;
end;
dify(T-1,T-2)=-(1-teta)^2*et(T-2);
j=3;
while j<=T;
dify(j-2,j)=-teta*et(j-2);
j=j+1;
end;
i=2;
while i<=T;
j=i;
while j<=T;
dify(j,i-1)=dify(i-1,j);
j=j+1;
end;
i=i+1;
end;
% Final matrix %
dif(1:T,1:T) =dify;
fm=dif(:);
end
and the main program (only the relevant fragment):
%{b,fun,grad,ok} = optmum(&cov,b0); %Optimisation procedure
[b,fun,flag] = fminsearch(@cov,b0);
%dfb = gradient(fcn,b);
dfb = jacobian(fcn,b);
% Choose weighting matrix
ww = inv(diag(diag(var1)));
covth = inv(dfb'*ww*dfb)*dfb'*ww*var1*ww*dfb*inv(dfb'*ww*dfb);
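[Editor's note: the closest MATLAB analogue of GAUSS's gradfd is a forward-difference Jacobian computed numerically, which avoids the Symbolic Math Toolbox entirely. A minimal sketch; numjacobian is a hypothetical helper name, not a built-in:]

```matlab
function J = numjacobian(fcn, b)
% Forward-difference Jacobian: a numerical stand-in for GAUSS's gradfd.
% fcn must return a column vector; b is the numeric parameter vector.
    f0 = fcn(b);                         % baseline function value
    J  = zeros(numel(f0), numel(b));
    h  = sqrt(eps);                      % relative step size
    for k = 1:numel(b)
        step   = h * max(abs(b(k)), 1);  % scale step to parameter size
        bk     = b;
        bk(k)  = bk(k) + step;
        J(:,k) = (fcn(bk) - f0) / step;  % one column per parameter
    end
end
```

With this in place, dfb = numjacobian(@fcn, b); returns a numeric matrix that can be plugged straight into the covth formula above.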
