How to do a Gradient Search to Minimize the Coherence of this Matrix?
4 views (last 30 days)
Show older comments
Hi all, thank you for taking a look at my question. I apologize for the initial wall of text but the code will be at the bottom of it all. I just am first providing some relevant background information.
I'm trying to create a function that takes in a matrix, and after a little bit returns a minimal value. To be precise, I need something called a measurement matrix. This measurement matrix, A, will look like [1 cos(t1) cos(t2); 0 sin(t1) sin(t2)]. The input values are t1 and t2 (theta 1 and theta 2 respectively, and both range from 0 to 2pi).
I then take this measurement matrix and calculate its coherence, denoted by mu. So I need mu(A) = max(max(abs(A’*A - eyes(3)))). My goal is to minimize the coherence of my measurement matrix A. I’m trying to do this with a gradient search type construction.
Next I’m finding the gradient vector of mu. I’m doing this because the gradient points in the direction of greatest increase so I’ll just take the negative of it so now I’ll have the direction of the gradient that points towards the greatest decrease. But to find the gradient I need the partials, call them p1 and p2, but I also need two parameters, delta and lambda, so I just have them as input values. Now, the gradient vector, gradmu, will just look like [p1, p2] with each partial looking like
p1 = ((mu(A(t1+delta, t2)))-(mu(A(t1, t2))))/delta
p2= ((mu(A(t1, t2+delta)))-(mu(A(t1, t2))))/delta
You’re probably wondering about that lambda term. In my notes I have two lines that say
t1 = t1 - lambda*p1
t2 = t2 - lambda*p2
But I’m not entirely sure where to put these but I know they’re important and need to be used.
I imagine I’ll maybe have a nested while Loop or two so that you have your initial matrix, and then for the varying theta values it calculates the coherence of A and then spits out the minimum of all the coherence calculations (which probably entails keeping track of all the results/iterations and taking the smallest of them), then calculates the partials and then spits out the negative of the gradient. It is this function that I am trying to create that does all of the above but I do not know how to code it, though but am trying. Here is what I have so far:
% Parameters
n = 2; % Dimension of space. Specifically we are in R^2
delta = input('Enter small numeric value for delta: '); % The delta used in calculating partial derivatives
lambda = input('Enter small numveric value for lambda: '); % Step size (learning rate)
max_iter = 10000; % maximum number of iterations
func_tol = 0.000001 % termination tolerance for what I believe will be mu (basing this off of the gradient search algo on the Wiki page)
fvals = []; % stores mu values across iterations
A = @(t1, t2) [1 cos(t1) cos(t2); 0 sin(t1) sin(t2)]:
mu = @A max(max(abs(A'*A - eyes(3))));
p1 = ((mu((A(t1+delta, t2))))-(mu(A(t1, t2))))/delta;
p2 = ((mu((A(t1, t2+delta))))-(mu(A(t1, t2))))/delta;
% Iterate
iter = 1; % iterations counter
fvals(iter) = mu;
while iter < max_iter && fvals(end) > func_tol
iter = iter + 1;
t1 = t1 - lambda * p1; % gradient descent
t2 = t2 - lambda * p2 % gradient descent
A = @(t1, t2) [1 cos(t1) cos(t2); 0 sin(t1) sin(t2)]:
mu = @A max(max(abs(A'*A - eyes(3))));
disp(min(mu))
end
I'm sure a lot of what I have above is wrong, but I appreciate the time taken to look into this. Thank you!
0 Comments
Answers (0)
See Also
Categories
Find more on Creating and Concatenating Matrices in Help Center and File Exchange
Community Treasure Hunt
Find the treasures in MATLAB Central and discover how the community can help you!
Start Hunting!