# Normalization of complex eigenvector

P Maity on 1 Feb 2020
Edited: P Maity on 3 Feb 2020
What is the best way to normalize a complex eigenvector of a complex Hermitian matrix? I am doing it in the following way, but the norm remains as it is.
syms theta phi
a=[cos(theta) sin(theta)*exp(1i*phi);sin(theta)*exp(-1i*phi) cos(theta)];
[V,~]=eig(a);
V(:,1)/norm(V(:,1))
This produces the vector as
exp(phi*1i)/(exp(-2*imag(phi)) + 1)^(1/2)
1/(exp(-2*imag(phi)) + 1)^(1/2)
But the normalization factor remains in symbolic form, when it should be sqrt(2). Please, somebody help me to understand.

Walter Roberson on 1 Feb 2020
So? What does that have to do with sqrt(2)?
P Maity on 1 Feb 2020
@Walter: Is it a little faster if I run the MATLAB code from the Windows command line compared to running a MATLAB script? I mean, without opening the MATLAB display?
Walter Roberson on 1 Feb 2020
Possibly, but I would not expect the difference to be much.

Vladimir Sovkov on 1 Feb 2020
You probably want
V(:,1) = V(:,1)/norm(V(:,1));
Besides, if your theta and phi are supposed to be real, the overall computation would be simpler with the assumptions
syms theta phi
assume(theta,'real');
assume(phi,'real');
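The effect of those real-valued assumptions can be checked independently. Below is a hedged sketch in Python with SymPy (not the original MATLAB session) showing that once theta and phi are declared real, the Hermitian norm of the eigenvector collapses to sqrt(2), because |exp(i*phi)| = 1 for real phi:

```python
import sympy as sp

# Declare the symbols real, the analogue of MATLAB's assume(...,'real')
theta, phi = sp.symbols('theta phi', real=True)
a = sp.Matrix([[sp.cos(theta), sp.sin(theta)*sp.exp(sp.I*phi)],
               [sp.sin(theta)*sp.exp(-sp.I*phi), sp.cos(theta)]])

# Eigenvector of a for the eigenvalue cos(theta) + sin(theta)
v = sp.Matrix([sp.exp(sp.I*phi), 1])
assert sp.simplify(a*v - (sp.cos(theta) + sp.sin(theta))*v).is_zero_matrix

# Hermitian norm: sqrt(v' * v); simplifies to sqrt(2) under the assumptions
n = sp.sqrt((v.H * v)[0, 0])
print(sp.simplify(n))  # -> sqrt(2)
```

Without the `real=True` assumptions, SymPy (like the Symbolic Math Toolbox) must keep `imag(phi)` in the result, which is exactly the symbolic factor seen in the question.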

Walter Roberson on 3 Feb 2020
I mentioned earlier that r can be broken up into the sum of 4 parts, and that two of the parts integrate to 0 over 0 to 2*pi. I mentioned that the magnitudes of the other two parts are generally fairly different.
With further testing, I see that the magnitude of the imaginary component of the third part is up to 10^17 times larger than the magnitude of the imaginary component of the second part, and often is more than 10^14 times larger.
I also see that the magnitude of the real component of the third part is up to 10^60 times larger than the magnitude of the real component of the second part.
But these are "up to", and there are places where the magnitudes are much closer, especially in the places where the parts approach 0. The numeric integrations end up needing to spend a lot of time to resolve the differences, because for some phi and some k1 they end up being important for the value there, even if the value there is minor compared to the value at some other point.
If accuracy within (roughly) 2% is acceptable, then I would say to extract just the third component of r, and create a matrix of theta and k1 values that you substitute into r[3], vpa() that, and then integrate over phi from 0 to 2*pi . This approach will give you some round-off errors, with some very small components not exactly vanishing when they theoretically should... but I did say accuracy only within roughly 2%.
When I taylor r[3] near k1=5, order 5, at theta=2*Pi/180 (2 degrees, right in the middle of your range) and integrate over phi=0..2*pi, the result involves values ranging between 10^16 and 10^115, and those values are all magically supposed to cancel out to give you a final result in the range 0 to 1 for the real component and 0 to 10 for the imaginary component. This is implausible numerically in binary floating point calculations.
At theta = acos(999/100) and k1=4, the values involved are on the order of 10^74 for the real and imaginary components, and the values cancel down to the order of 10^21 and are then multiplied by 10^-21 to get the same kind of real and imaginary range mentioned above. This will never work out in binary floating point.
In binary floating point, your equations are numerically unstable and could produce answers that are pretty far off. Even symbolically with the default of 32 digits you are going to get nonsense -- you need somewhere on the order of 80 digits to get reasonable cancellation.
I would recommend giving up on this equation.
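The failure mode described above, huge intermediate values that are supposed to cancel down to a small answer, can be reproduced in any double-precision environment. A minimal Python sketch with illustrative magnitudes (not the actual integrand):

```python
# Catastrophic cancellation: IEEE doubles carry ~16 significant decimal
# digits. When intermediate terms are ~10^20 and the true answer is ~1,
# the answer is lost entirely in the rounding of the large terms.
big = 1e20
exact = (big + 1.0) - big   # mathematically 1.0
print(exact)                # -> 0.0: the +1.0 was rounded away
```

With intermediates of 10^74 cancelling down to 10^21, as in the case above, far more than the available 16 digits would need to cancel exactly, which is why the double-precision result is meaningless.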
P Maity on 3 Feb 2020
@Walter: Thanks for your brilliant analysis and observations. I am really grateful to you for your valuable advice. So, according to your suggestion, there is no hope of going further with this kind of situation at all. Thanks a lot once again.
Walter Roberson on 3 Feb 2020
Well, given sufficient time you can get some result, but if you want values that mean anything, you need to evaluate symbolically to 80 or so digits and set a relatively large numeric integration target.
If you are willing to give up that approximately 2% contribution term then the process would be to extract the 3rd term of r, evaluate it at a matrix of theta and k1, evalf(), and integrate the matrix over phi = 0..2*pi and the integration will not take a grindingly long time. If you keep the approximately 2% contribution term you would probably have to set a larger permitted relative error term in order to prevent the numeric integration from being very very slow.
Integrating r as a whole and then substituting k1 and theta into it is probably going to take much too long.
If you substitute in a particular theta value then the third term of r expands to a surprisingly long expression involving k1 and phi that is not profitable to do an exact integration on with respect to phi.
I don't know if an exact integral of r even exists. I had to kill the exact integration when it got up to 75 gigabytes of memory on my system.
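The "80 or so digits" remedy can be sketched with Python's mpmath library (a hedged illustration, not the original session): at 80 decimal digits of working precision, a small term that double precision swallows survives the cancellation intact.

```python
from mpmath import mp, mpf

mp.dps = 80                  # ~80 significant decimal digits, as suggested above
big = mpf('1e20')
print((big + 1) - big)       # -> 1.0: the small term is no longer rounded away
```

The same pattern applies in MATLAB by running the computation through `vpa` with `digits(80)`, at the cost of much slower arithmetic, which is why the numeric integration target also has to be relaxed.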

### More Answers (1)

P Maity on 3 Feb 2020
Edited: P Maity on 3 Feb 2020
Thanks a lot. If I just drop that 2% contribution, will the plot appear quickly? I need to see the graph anyhow; I only need to see the qualitative shape of the graph, not any particular data values. If that 2% contribution term takes such a long time, I will just drop it. I think it will not change the nature of the graph drastically. If at least that can be done, then that is a positive sign for me.