Asked by Joaquim
on 27 Feb 2017

Hi everybody, I have two vectors, let's say x and y. I calculate the difference vector d = x - y; I want to minimize d through the 12-norm of the difference vector (Euclidean distance) over 100 iterations. Is there anyone who could help me?

Kind regards, Joaquim

Answer by John D'Errico
on 28 Feb 2017

Edited by John D'Errico
on 28 Feb 2017

Accepted Answer

This is quite confusing, because you have never explained what you really want. In your words:

"x is a weighted sum of two vectors A and B, x = weight1*A + weight2*B. The purpose of the minimization is to find the weights that minimize x-y."

You say that you have vector y, and two vectors A and B. Then you need to compute the weighted linear combination x of vectors A and B that minimizes the norm (let's not worry about which norm for the moment), so

norm(x-y) = norm((w1*A + w2*B) - y)

If you really want the 12-norm here, this is a straightforward optimization problem, but the 12-norm is really not much different from the infinity norm. Solving an inf-norm problem reduces to a linear programming problem, whereas solving the 2-norm problem is simple linear least squares.

For example, to solve the 2-norm (i.e., Euclidean distance, which you did say above) problem, all you need do is:

W12 = [A(:),B(:)]\y(:);

weight1 = W12(1);

weight2 = W12(2);

That simple. So it is trivial to do. There are at least a half dozen other ways to compute the same set of weights in MATLAB, but the above solution is simplest. It yields a set of weights that minimize the 2-norm of the difference between x and y. There are no iterations needed. That is the true solution, IF you really wanted a 2-norm.
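For illustration of those alternatives, two equivalent ways to get the same least-squares weights would look something like the sketch below (the backslash solution above is still the simplest and best-conditioned choice):

```matlab
% Two equivalent ways to compute the same 2-norm weights
% (illustration only; backslash is preferred in practice).
M = [A(:), B(:)];
W12_pinv = pinv(M)*y(:);        % pseudoinverse solution
W12_ne   = (M'*M)\(M'*y(:));    % normal equations (fine when M is well-conditioned)
```

Both agree with the backslash solution whenever M has full column rank.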

However, if you really wanted a 12-norm, and are not interested in the infinity norm, then some iterative scheme would need to be employed.

Do you really need to do this as a 12-norm? Why would you pick such an arbitrary value as 12? As I said, it will differ relatively little from the infinity norm. For example:

v = randn(1,100);

norm(v,12)

ans =

2.926

norm(v,inf)

ans =

2.811

Whereas, the 2-norm is quite different.

norm(v,2)

ans =

9.8925

So if you insist on computing the solution under a 12-norm via some (unspecified) iterative scheme, I can show you some ways to do so. Will ANY reasonable iterative scheme suffice?

For example, let's make up some data; then I'll solve it under a 12-norm using fminsearch.

y = randn(100,1);

A = randn(100,1);

B = randn(100,1);

The 2 norm solution is:

W12_2norm = [A(:),B(:)]\y(:)

W12_2norm =

-0.0024203

0.11984

Now, create an objective function to use for an optimizer.

fun_12 = @(W12) norm((A*W12(1) + B*W12(2)) - y,12);

fun_12(W12_2norm)

ans =

2.8766

The 2-norm solution is not the minimum set of weights for the 12-norm problem. But it will provide starting values for the optimization.

[W12_12norm,fval] = fminsearch(fun_12,W12_2norm)

W12_12norm =

0.31792

-0.11325

fval =

2.7072

So a different set of weights has been obtained, which yields a somewhat lower value for the 12-norm.

Again, the 12-norm seems a bit arbitrary to me. A different solution comes from the infinity norm.

fun_inf = @(W12) norm((A*W12(1) + B*W12(2)) - y,inf);

[W12_infnorm,fval] = fminsearch(fun_inf,W12_2norm)

W12_infnorm =

-0.0047671

-0.68259

fval =

2.7561

I need to be a bit careful there, because the inf-norm problem is not well solved using fminsearch: the inf norm is not a smooth, differentiable function, and fminsearch would prefer differentiability. I'd have been better off using a linprog solution on that, but since you have not asked at all for an inf-norm solution, I'm not going to go that deeply into it, especially since I have no idea what you really want.
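For anyone who does want the inf-norm done properly, a minimal sketch of that linprog formulation (assuming the Optimization Toolbox is available) is: introduce a bound t on the absolute residuals and minimize t.

```matlab
% Sketch: inf-norm fit as a linear program (needs linprog from the
% Optimization Toolbox). Unknowns are [w1; w2; t]; minimize t subject
% to -t <= (w1*A + w2*B - y)_i <= t for every element i.
n = numel(y);
f = [0; 0; 1];                          % objective: minimize t only
Aineq = [ A(:),  B(:), -ones(n,1);      %  (w1*A + w2*B) - y <= t
         -A(:), -B(:), -ones(n,1)];     %  y - (w1*A + w2*B) <= t
bineq = [y(:); -y(:)];
sol = linprog(f, Aineq, bineq);
W12_lp = sol(1:2);                      % the weights
fval_lp = sol(3);                       % the minimized inf-norm
```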

If you can explain more clearly what you really need, I could give a more precise answer. And there are lots of iterative ways I could have solved this problem (fminbnd, for example), but I don't see why any particular iterative solution would be better than what I've shown already.

Joaquim
on 28 Feb 2017

I'll try to be clearer. First, I have two images, A and B. From these two images, I'll obtain X through

X= weight1*A+weight2*B, with weight1+weight2=1

Then I calculate the difference

difference=abs(X-Y);

What I want to do is optimize the weights by minimizing the difference matrix. The paper I'm following is not clear about whether it is the Euclidean distance or the 12-norm, since both are mentioned. However, it always states that the minimization is done over 100 iterations. From the answers I'm getting, the Euclidean distance doesn't need the iterations.

If you could help me get possible solutions for both ways, I would be very grateful. Please notice that the sum of the weights has to be 1.

I'm sorry if I still wasn't very clear.

Much appreciation, Joaquim

John D'Errico
on 28 Feb 2017

Ok. One extra piece of information, that the sum of the weights must be 1. I'd probably implement that as:

X = w*A + (1-w)*B

or it can be written as:

X = B + w*(A-B)

In either case, the constraint is built in implicitly. The former perhaps makes it more obvious what is being done, though. Whatever weight you get for w, subtract it from 1 to get the second weight.

There is no need to compute the absolute difference between them though, since any norm deals with that. The norm of the simple difference will suffice. If they are images, make sure the result is a double. And if they are image arrays, then make sure you string out the difference array as a vector, since you really want a vector norm here, not a matrix norm.

So, a simple algorithm for the 2-norm takes one line of code, assuming image arrays A, B, and Y. They will need to be doubles of course, since otherwise integer arithmetic will cause problems.

w = (A(:)-B(:))\(Y(:) - B(:));

Effectively, what I did was use the form:

X = B + w*(A-B)

Then, we will minimize the norm of the difference,

norm(Y - (B + w*(A-B)),2)

A, B and Y are known here. Only w is an unknown. The solution in MATLAB is as I wrote it.

w = (A(:)-B(:))\(Y(:) - B(:));

But that is the 2-norm solution, also known as the linear least squares solution. Note that the least squares solution has no explicit constraint on the weights. So a weight could arise that is less than zero, or greater than 1. In the latter case, the counter-weight would be negative, since they were forced to sum to 1.

For a different norm, be it 12, 17, or 42, take your pick. :) I would do it using fminbnd.

fun = @(w) norm(Y(:) - (B(:) + w*(A(:)-B(:))),12);

w = fminbnd(fun,0,1);

weight1 = w;

weight2 = 1-w;

So minimize fun as a function of w over the range [0,1]. That presumes both weights must be bounded in the interval [0,1].

Can one of the weights be greater than 1 or less than zero? I guess I should have asked that. But if you are willing to see weights that range from -1 to 2, while still summing to 1, then this will suffice:

w = fminbnd(fun,-1,2);

Now, how would I solve the 2-norm problem, if we added the information about bounding the weights to the interval [0,1]? Let me know if that is an issue, and I can do that for you too. (I'm mulling over at least 3 choices of algorithm for that as I write. Do you have the optimization toolbox?)
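In the meantime, one toolbox-free sketch: because the weight1 + weight2 = 1 constraint makes the problem one-dimensional in w, the same fminbnd trick handles the bounded 2-norm case too, just with a 2-norm objective (this assumes A, B, and Y are image arrays already converted to double).

```matlab
% Sketch: bound-constrained 2-norm solution without the Optimization
% Toolbox. The constraint weight1 + weight2 = 1 reduces the problem
% to one variable, so fminbnd enforces the [0,1] bounds directly.
fun2 = @(w) norm(Y(:) - (B(:) + w*(A(:)-B(:))), 2);
w = fminbnd(fun2, 0, 1);
weight1 = w;
weight2 = 1 - w;
```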
