Sorry, but this seems to make little sense, since most of those singular value decompositions will be trivial. Perhaps let me ask: what is it that you are actually trying to do here? Computing the SVD of each row of a matrix just means computing the SVD of a sequence of row vectors.
That is, if X is ANY row vector, what is svd(X)? The SVD returns three arrays, U, S, and V. If X is a row vector, we will always have U == 1.
Likewise, for any row vector X, S will be a row vector of the same size as X, with every element zero except the first, which will be norm(X).
Finally, V will be an nxn array, where n is the length of X. The first column of V will be X'/norm(X), that is, X scaled to have unit norm. The remaining columns of V will span the null space of X, thus forming a basis for the (n-1)-dimensional subspace that is orthogonal to the vector X.
For example...
X = 1:3
X =
1 2 3
[U,S,V] = svd(X)
U =
1
S =
3.74165738677394 0 0
V =
0.267261241912424 -0.534522483824849 -0.801783725737273
0.534522483824849 0.774541920588438 -0.338187119117343
0.801783725737273 -0.338187119117343 0.492719321323986
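Assuming the same X as in that example, a quick numerical check of the claims above might look like this:
% Verify the structure of the SVD of a row vector, using X = 1:3.
X = 1:3;
[U,S,V] = svd(X);
U                        % always 1 for a row vector
S(1) - norm(X)           % essentially zero: only nonzero singular value is norm(X)
V(:,1) - X'/norm(X)      % essentially zero: first column of V is X scaled to unit norm
V(:,2:end)'*X'           % essentially zero: remaining columns are orthogonal to X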
It is trivial to compute U and S for any set of row vectors, trivial to compute the 2-norm of each of a set of row vectors, and trivial to scale a vector to unit length by dividing by its 2-norm. So you must be asking how to compute the null space of each of a set of vectors?
Note that those null spaces are not unique.
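If that is indeed the question, here is a minimal sketch, assuming your rows live in a hypothetical m-by-n array A: the norms and unit vectors fall out of simple vectorized operations, and only the per-row null space needs a loop. The call to null returns one orthonormal basis; any rotation of those columns would serve just as well.
% Sketch, assuming a hypothetical m-by-n array A whose rows are the vectors of interest.
A = rand(5,4);                 % example data
rowNorms = sqrt(sum(A.^2,2));  % 2-norm of each row: the only nonzero singular value
unitRows = A./rowNorms;        % each row scaled to unit norm, i.e., V(:,1)' for that row
                               % (implicit expansion, R2016b or later)
% The remaining columns of V (the null space) still need a loop, one row at a time.
m = size(A,1);
N = cell(m,1);
for k = 1:m
    N{k} = null(A(k,:));       % n-by-(n-1) orthonormal basis, orthogonal to A(k,:)
end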
The point is, depending on what you really want from this, it may be far easier to compute if we know what it is you are looking to find. We would also need to know the length of your vectors, since if the dimension of the problem is low there are easy ways to do this computation without any need for the SVD at all.
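For instance, here is a hedged sketch for the 3-dimensional case, where cross products alone give an orthonormal basis for the plane orthogonal to each row, no SVD required:
% Sketch for 3-element rows: build an orthonormal basis for the plane
% orthogonal to a row vector x using cross products only.
x = [1 2 3];
x = x/norm(x);                      % unit vector along x
[~,k] = min(abs(x));                % coordinate axis least aligned with x
e = zeros(1,3); e(k) = 1;
v1 = cross(x,e); v1 = v1/norm(v1);  % first basis vector, orthogonal to x
v2 = cross(x,v1);                   % second basis vector, orthogonal to both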