From the series: Differential Equations and Linear Algebra

Gilbert Strang, Massachusetts Institute of Technology (MIT)

An *m* by *n* matrix *A* has *n* columns, each in **R**^{m}. Capturing all combinations *Av* of these columns gives the column space – a subspace of **R**^{m}.

OK. We're coming to the point where we need matrices. That's the point when we have several equations, several differential equations instead of just one. And it's a matrix that does that coupling.

So can I-- this won't be a full course in linear algebra. That would be available, as you may know, on OpenCourseWare as 18.06. That's the linear algebra course. But we need a few facts, and why not just say them here in a few minutes?

So I have a matrix. Well, there's a matrix. That's a 3 by 3 matrix. And first I want to ask, how does it multiply a vector? So there it is multiplying a vector, v1, v2, v3. And what's the result? Key idea: the answer on the right-hand side is this number v1 times the first column, plus the second number v2 times the second column, plus the third number v3 times the third column. A combination of the columns of A. That's what A times v is. That's what the notation of matrix multiplication produces.
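That column picture can be checked directly. Here is a minimal sketch in Python with NumPy, using the 3 by 3 matrix from this lecture (columns 1, 1, 1; 1, 2, 2; 2, 3, 3) and one sample input v:

```python
import numpy as np

# The matrix from the lecture: its columns are (1,1,1), (1,2,2), (2,3,3).
A = np.array([[1, 1, 2],
              [1, 2, 3],
              [1, 2, 3]])
v = np.array([1, 1, 1])  # a sample v1, v2, v3

# Matrix-vector product the usual way...
Av = A @ v

# ...equals v1*(column 1) + v2*(column 2) + v3*(column 3).
combo = v[0] * A[:, 0] + v[1] * A[:, 1] + v[2] * A[:, 2]

print(Av)     # [4 6 6]
print(combo)  # [4 6 6] -- same vector, built as a combination of columns
```

The two computations agree for every v, which is exactly the statement "Av is a combination of the columns of A."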

That's really basic, to see it as a combination of columns. Now I want to build on that. That's one particular v: if you give me v1, v2, and v3, I know how to multiply it. I take the combination.

Now I would like you to think about the result from all v1, v2, and v3. If I take all those numbers, I get a whole lot of answers. They're all vectors; the result of A times v is another vector, Av. And I want to think about Av, those outputs, for all inputs v.

So I take v1, v2, v3 to be any numbers. And I get all combinations of those three columns. And usually I would get the whole 3-dimensional space. Usually I can produce any vector, any output b1, b2, b3, from A times v. But not for this matrix, not for this matrix. Because this matrix is, you could say, deficient.

That third column there, 2, 3, 3, is obviously the sum of column one and column two. So this v3 times that third column just produces something that I could already get from column one and column two. That v3 times column three, I could cross out. It's the same as column one plus column two. For this matrix, that is; not usually.

So I only really have a combination of two columns. It's a combination of three, but the third one was dependent on the others, so it's really a combination of two columns. And combinations of two columns, two vectors in 3-dimensional space, produce a plane. I only get a plane. I don't get all of 3-dimensional space, only a plane. And I call that plane the column space, the column space of the matrix.

So if you gave me a different matrix, if you changed this 3 to an 11, the column space would change. For that matrix I think the column space would be the whole 3-dimensional space. I get everything. But when this third column is the sum of the first two columns, it's not giving me anything new. And the column space is only a plane.
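One way to see the difference numerically is through the rank, the number of independent columns. A sketch with NumPy, assuming the "11" replaces the 3 in the bottom-right corner:

```python
import numpy as np

# Third column = first column + second column: only 2 independent columns.
A = np.array([[1, 1, 2],
              [1, 2, 3],
              [1, 2, 3]])
print(np.linalg.matrix_rank(A))  # 2: the column space is a plane

# Change the corner 3 to 11 and the three columns become independent.
B = np.array([[1, 1, 2],
              [1, 2, 3],
              [1, 2, 11]])
print(np.linalg.matrix_rank(B))  # 3: the column space is all of R^3
```

Rank 2 means the columns only span a plane; rank 3 means every right-hand side b is reachable.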

And you can think of a matrix where the column space is only a line: just one independent column. OK. So that's the column space. The column space is all combinations of the columns. In other words, it's all the results, all the outputs from A times v. Those are the combinations of the columns.

So we can answer the most basic question of linear algebra. When does Av equal b have a solution? When is there a v so that I can solve this? When is there a v that solves this equation?

So it's a question about b. What is it about b that must be true if this can be solved? Well, that equation is saying b is a combination of the columns of A. So this has a solution when b is, shall I say, in the column space. For that example, the only b's where we can get a solution are b's that are combinations of the first two columns. Because having the third column at our disposal gives us no help. It doesn't give us anything new.

It would be solvable if b equalled 1, 1, 1. That's a combination of the columns. Or if b equals 1, 2, 2. That's another simple combination of the columns. Or if b equals 2, 3, 3. But I'm staying on a plane there. And most b's are off that plane.

So now we know when there is a solution. All right. Now a second key idea of linear algebra, can we do it in this short video? I want to know about the equation Av equals 0. So now I'm setting the right-hand side to be 0. That's the 0 vector: 0, 0, 0. Does it have a solution? Let's take this example.

Columns 1, 1, 1; 1, 2, 2; 2, 3, 3. Now I'm looking at the solutions when the right side is all 0. Does that have a solution? Is there a combination of those three columns that gives 0? Well, there is always one combination. I could take 0, 0, and 0. I could take nothing: 0 of this column, 0 of that column, 0 of the third column would give me the 0 vector. That solution is always available.

The big question is, is there another solution? And here, for this deficient, singular, non-invertible matrix, there is. There is another solution. Let me just write it down. Let me put it in there. Do you see what the solution is? The third column is the sum of those two. So if I take one of that third column, I should take minus 1 of each of the other two columns.

So minus this column, minus this column, plus this column gives me the 0 column. That vector, minus 1, minus 1, 1, is a vector in the null space. That's a solution to Av equals 0. So the null space is all solutions to Av equals 0. It's all the v's. The null space is a bunch of v's. The column space was a bunch of b's. I just want to emphasize that difference.
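That null space vector can be verified in one line. A small check with NumPy, using the same matrix:

```python
import numpy as np

A = np.array([[1, 1, 2],
              [1, 2, 3],
              [1, 2, 3]])

# Minus column 1, minus column 2, plus column 3:
vn = np.array([-1, -1, 1])
print(A @ vn)         # [0 0 0] -- vn is in the null space

# Any multiple is also a null solution:
print(A @ (10 * vn))  # [0 0 0]
```

The multiples of vn fill out the whole line of null solutions mentioned below.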

Before, I was looking at which b's allow a solution. I wasn't paying attention to what that solution was, just whether there is a solution. Then that b is in the column space. Now I take b equals 0. I've fixed that all-important b. And now I'm looking at the solutions. And here I found one. Can you find any more solutions? I think minus 10, minus 10, and 10 would be another solution. It's 10 times as much.

And 0, 0, 0 is a solution. So I have a whole line of solutions. We had a plane for the column space. But we have a line for the null space. Isn't that neat? One's a plane, one's a line: dimension two plus dimension one. Two for the plane, one for the line, adds to dimension three, the dimension of the whole space. OK. All right.
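That dimension count (2 for the column space plus 1 for the null space equals 3) is the rank–nullity theorem, and it can be checked with a short NumPy sketch:

```python
import numpy as np

A = np.array([[1, 1, 2],
              [1, 2, 3],
              [1, 2, 3]])

rank = np.linalg.matrix_rank(A)  # dimension of the column space (the plane)
nullity = A.shape[1] - rank      # dimension of the null space (the line)
print(rank, nullity, rank + nullity)  # 2 1 3
```

Rank plus nullity always equals n, the number of columns, for any m by n matrix.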

Now I ask, what are all the solutions? The complete solution to Av equals b. Well, let me choose some right-hand side where there is a solution. Say I take two of the first column plus one of the second column. Two of the first column with one of the second: 2 plus 1 would be 3, 2 plus 2 would be a 4, 2 plus 2 would be another 4. OK. That's my b: 3, 4, 4.

It's a combination of the columns. You saw me create it from the first two columns. So now I ask, what are all the solutions? b is in the column space. It's 2 times the first column, plus the second column. But there may be other solutions. So all solutions, the complete solution, v complete: here's the key idea. And the point is that it's the same idea that we know from differential equations.

It's a particular solution plus any null solution. Plus all v null, you can say. Particular plus null solution. It's such an important concept, we just want to see it again. One particular solution: how did we produce that b? Out of two of this column, plus one of this one, plus zero of that one.

So v particular could be 2, 1, 0. It works for that particular b: two of the first column, one of the second. And then we could add in anything in the null space. So we have infinitely many solutions here. We've got one solution, plus, added to that, a whole line of solutions. The null solutions would be all vectors like that, all multiples of minus 1, minus 1, 1.
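The particular-plus-null structure is easy to verify: every vector v_particular + c·v_null solves Av = b. A sketch with NumPy, using the b = 3, 4, 4 chosen above:

```python
import numpy as np

A = np.array([[1, 1, 2],
              [1, 2, 3],
              [1, 2, 3]])
b = np.array([3, 4, 4])      # 2*(column 1) + 1*(column 2)
vp = np.array([2, 1, 0])     # one particular solution
vn = np.array([-1, -1, 1])   # spans the null space

# Every vp + c*vn solves Av = b: a whole line of solutions.
for c in [0, 1, -3, 10]:
    v = vp + c * vn
    assert np.array_equal(A @ v, b)
print("vp + c*vn solves Av = b for every c")
```

A times (vp + c·vn) equals A·vp + c·A·vn = b + 0, which is exactly why adding null solutions never changes the right-hand side.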

OK. That's the picture that we've seen for differential equations. And I just want to bring it out again for matrix equations, using the language of linear algebra. That's what I'm introducing here. I have one particular solution, plus anything in the null space. That idea of a space of vectors is the heart of linear algebra. Thank you.
