Solving Ax=b but only for the values in some rows of x
Tarek Hajj Shehadi on 3 Jul 2021
Commented: Matt J on 3 Jul 2021
I have a linear system Ax = b where:
- A is square, large, asymmetric, and very sparse
- b is also very sparse
Due to the properties of the problem the matrix and vector come from, I can guarantee the system has a unique solution x.
Currently, I'm solving this system using sparse LU decomposition or BiCGSTAB, and this is already quite fast. But in the search for more speed, which is my primary goal, I realised that, given how the solution vector x is used once calculated, I only actually look at a very small subset of its values. In other words, I only need to know the values of a few of the x_i, and I know which indices I care about before solving the system. After solving it, all other values in x can be completely incorrect so long as those important x_i are right.
Is there any faster algorithm that can take advantage of this to reduce the amount of work done when solving Ax = b?
Note: An obvious general way to calculate individual x_i directly is Cramer's rule, but I don't know how it could be applied to such large sparse matrices quickly.
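For reference, here's a minimal sketch of what I'm doing now, on a random sparse stand-in (the size and densities below are placeholders, not my actual values):
n = 1e4;                              % placeholder size, much smaller than my real problem
A = speye(n) + sprandn(n, n, 1e-4);   % square, asymmetric, very sparse; the identity shift just keeps this toy matrix well conditioned
b = sprandn(n, 1, 1e-3);              % very sparse right-hand side
x_direct = A \ b;                     % sparse LU via backslash
x_iter = bicgstab(A, full(b), 1e-10, 200);   % iterative alternative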
8 Comments
John D'Errico on 3 Jul 2021
Oh. Multiple b, but fixed A? Then the answer should be easy. Let me write it up as an answer.
Accepted Answer
John D'Errico on 3 Jul 2021
We are given a problem where A is a large sparse matrix. We wish to compute the solution for the problem
A*X == b
where b will vary, but A is fixed. The thing is, we want to solve for only a few elements of X, a small subset of them.
The trick would seem to be to effectively compute the corresponding rows of the inverse of A. For example, I'll create a very small problem, with A as a 6x6 matrix, so we can see what is happening. Any larger, and it will overflow the window, and what matters is the idea, not the implementation on your real problem.
A = randn(6)
A =
0.91539 1.2244 0.23128 2.5526 -0.044323 0.07913
-1.7792 1.1284 -0.54674 -1.255 -0.53844 -2.2881
-0.71475 0.44224 0.4884 1.0128 -0.53548 -0.43906
-0.092051 1.516 -1.918 -1.2029 -1.1266 -0.73841
0.72988 -0.048539 -0.53597 -1.0549 -0.53726 -0.99856
-0.059816 0.53947 1.7059 -0.21087 -0.86017 0.75975
Now suppose we wish to solve the problem A*X == b, but we care only about elements 1 and 3 of X?
The idea is to compute rows 1 and 3 of inv(A). The following very simple code does exactly that.
function Ainv = Ainvrows(A,rind)
% generate the rows of the inverse of A, for a small subset of rows
% A - a square matrix, presumed to be non-singular, presumed sparse
% rind - integer vector listing the rows of the inverse of A that are of interest
Asize = size(A);
n = Asize(1);
if n ~= Asize(2)
error('A must be square')
end
nr = numel(rind);
delta = sparse(rind,1:nr,1,n,nr);
% the trick now is to solve the problem A'\delta. That effectively
% extracts only the rows of the inverse of A that we wish to see.
Ainv = (A'\delta)';
If you prefer to use a solver other than sparse backslash, perhaps bicgstab, just modify that last line. I don't think bicgstab is set up to handle multiple right hand sides, so you would need to have a loop in there.
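For example, a rough sketch of that loop, replacing the last line of Ainvrows (the tolerance and iteration count are just illustrative):
Ainv = zeros(nr, n);
for k = 1:nr
  % each column of delta yields one row of inv(A): solve A'*y = delta(:,k)
  Ainv(k,:) = bicgstab(A', full(delta(:,k)), 1e-10, 500)';
end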
Did it work on my example matrix?
inv(A)
ans =
0.2899 -0.056298 -0.40007 -0.095469 0.47353 0.098636
0.45274 0.39894 -0.82201 0.11297 -0.36017 0.31568
0.13989 0.23266 -0.25796 -0.33976 0.12684 0.37352
0.069612 -0.17474 0.54229 -0.004086 -0.0012222 -0.22568
0.38001 0.51357 -1.1788 -0.33684 -0.42659 -0.062176
-0.16319 -0.27714 -0.052687 0.29264 -0.47509 0.12814
Ainv13 = Ainvrows(A,[1 3])
Ainv13 =
0.2899 -0.056298 -0.40007 -0.095469 0.47353 0.098636
0.13989 0.23266 -0.25796 -0.33976 0.12684 0.37352
Now, once we have the necessary rows of the inverse of A, all it takes to compute the value of elements 1 and 3 of the solution is a matrix multiply. For example:
b = rand(6,1);
A\b
ans =
0.10635
0.10011
0.19457
-0.073614
-0.14575
-0.17135
Ainv13*b
ans =
0.10635
0.19457
As you can see, it returns the 1 and 3 elements of the solution.
This code would be efficient if you have MANY vectors b, more than the number of elements of X whose values you care about. It will need to solve for as many right hand sides in delta as there are elements whose values you want to see.
1 Comment
Matt J on 3 Jul 2021
This code would be efficient if you have MANY vectors b
Unfortunately, the OP has backpedalled on that. There is only one b.
More Answers (1)
Matt J on 3 Jul 2021
Edited: Matt J on 3 Jul 2021
Indeed, I am solving it for multiple right hand sides with A fixed
If so, then you should be solving with multiple concatenated b,
x=A\[b1,b2,b3,...,bn];
This way, the decomposition of A is reused across all of the b_i.
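If the b_i become available at different times rather than all at once, one option (just a sketch, requires R2017b or newer) is a decomposition object, which stores the factorization of A and reuses it on every later backslash:
dA = decomposition(A);   % factorize A once
x1 = dA \ b1;            % each of these solves reuses the stored factors
x2 = dA \ b2;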
Also, if you sort the columns of A so that the x(i) of interest are the final m variables, then an LU decomposition can be truncated to,
[L,U]=lu(A);                    % A assumed already column-permuted as described
U=U(end+1-m:end, end+1-m:end);  % keep only the bottom-right m-by-m block of U
Q=L\[b1,b2,b3,...,bn];
Q(1:end-m,:)=[];                % keep only the last m rows
x=U\Q; %only the x(i) of interest
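As a small dense check of why the truncation works (illustrative only; the three-output lu is used so U is cleanly upper triangular, and on a sparse A you would keep whatever permutation outputs lu gives you):
n = 6;  m = 2;
A = randn(n);  b = randn(n,1);
idx = [1 3];                          % the x(i) of interest
perm = [setdiff(1:n, idx), idx];      % push those columns to the end
Ap = A(:, perm);                      % Ap * x(perm) == b
[L, U, P] = lu(Ap);                   % P*Ap = L*U, with U upper triangular
Q = L \ (P*b);
xlast = U(end-m+1:end, end-m+1:end) \ Q(end-m+1:end);   % last m unknowns only
xfull = A \ b;
disp([xfull(idx), xlast])             % the two columns agree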
3 Comments
Matt J on 3 Jul 2021
Still, the technique of truncating U to consider only the x(i) of interest applies.