How to efficiently solve a matrix equation when the matrix is updated?
Dear All,
I need to solve a matrix linear equation: Ax = b. I know the fastest way to solve for x is x = A\b. My question is: for an updated matrix A1 = A + deltaA, how can I make use of x = A\b to solve the updated equation A1*x = b? The size of A is 5000 x 5000, and there are very few non-zero elements in deltaA.
Thanks a lot.
Benson
Answers (1)
John D'Errico on 24 Mar 2021 (edited 24 Mar 2021)
The problem is, it very much depends. If you can write the change in A as a rank 1 update, AND you write the code well, then you can probably gain by using the infamous Sherman-Morrison formula. It will depend on the sparsity of the problem. Updating a sparse matrix itself can be slow, so that alone can be a significant time cost if new nonzero elements are created.
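As a rough sketch only (the matrix, the rank 1 vectors, and the use of decomposition on a recent MATLAB release are my assumptions, not your actual problem), a rank 1 update deltaA = u*v' can reuse a single factorization of A instead of forming inv(A):
n = 5000;
A = sprandn(n,n,1e-3) + 10*speye(n);    % stand-in for your sparse A
b = randn(n,1);
u = zeros(n,1); u(17) = 0.5;            % assumed rank 1 change: deltaA = u*v'
v = zeros(n,1); v(230) = 1;
dA = decomposition(A);                  % factor A once, reuse for both solves
y = dA \ b;                             % solves A*y = b
z = dA \ u;                             % solves A*z = u
x1 = y - z*((v'*y)/(1 + v'*z));         % Sherman-Morrison: (A + u*v')*x1 = b
norm((A + u*v')*x1 - b)/norm(b)         % residual check against a direct solve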
However, you state that the changes are few, but you do NOT state they can be written in the form of a rank 1 update. If that is not the case, a rank 1 update formula won't gain you much; you would then need Sherman-Morrison-Woodbury, a generalization to low-rank updates. Where the break-even point in efficiency lies is difficult to know. Again, it will be problem dependent. It will depend on the sparsity of your matrices. It will depend on their structure.
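Again only as a hedged sketch (the low-rank factorization deltaA = U*V' with small k is assumed, not something given in your question), the Woodbury version looks much the same, with one small k-by-k system in place of the scalar correction:
n = 5000;  k = 3;                       % k = rank of the change, assumed small
A = sprandn(n,n,1e-3) + 10*speye(n);    % stand-in for your sparse A
b = randn(n,1);
U = randn(n,k);  V = randn(n,k);        % assumed factorization: deltaA = U*V'
dA = decomposition(A);                  % factor A once
y = dA \ b;                             % A*y = b
Z = dA \ U;                             % A*Z = U, an n-by-k solve
x1 = y - Z*((eye(k) + V'*Z)\(V'*y));    % Woodbury correction for (A + U*V')*x1 = b
norm((A + U*V')*x1 - b)/norm(b)         % residual check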
You may well be better off NOT computing the matrix inverse at all, instead computing a factorization of the matrix.
For example, consider this matrix:
n = 1000;
A = sparse(eye(n) + diag(rand(n-1,1),1) + diag(rand(n-1,1),-1));
A is tridiagonal. But the inverse?
spy(inv(A))
The inverse of A is full. And that will make it very slow to compute for large matrices.
However, if we look at an LU factorization of A, both factors are very well behaved, and still very sparse.
[L,U] = lu(A);
spy(L)
spy(U)
It is almost always the case that you should at least consider NOT computing a matrix inverse, as there are almost always better alternatives.
timeit(@() inv(A))
timeit(@() lu(A))
And even for a matrix as small as 1Kx1K, the LU is way faster. The greater speed of the sparse LU may make it fast enough that you won't even bother with an update at all. And for much of what you may do that needs the matrix inverse, you can often use an LU factorization just as easily. (We have not been told why you THINK you need the inverse, only that you do. And many times, the real solution to a problem is to understand why you think you need a particular tool, and then to learn that better tools exist.)
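For instance, continuing the tridiagonal example above (an illustration, not your actual data), the sparse LU factors can be reused to solve A*x = b with no inverse formed:
b = rand(n,1);
[L,U,P,Q] = lu(A);              % P*A*Q = L*U; the permutations keep the factors sparse
x = Q*(U\(L\(P*b)));            % same answer as A\b, using only triangular solves
norm(A*x - b)                   % residual check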
My point is, while you may think you need to compute the inverse, understanding linear algebra will help you in many cases to not need to do so at all.