From the series: Differential Equations and Linear Algebra

*Gilbert Strang, Massachusetts Institute of Technology (MIT)*

A matrix produces four subspaces – column space, row space (same dimension), the space of vectors perpendicular to all rows (the nullspace), and the space of vectors perpendicular to all columns (the left nullspace).

I would like you to see the big picture of linear algebra. We're not doing, in this set of videos, a full course on linear algebra. That's already on OpenCourseWare, 18.06. And now I'm concentrating on differential equations, but you've got to see linear algebra this way.

And this way means subspaces. And there are four of them in the big picture. The previous video described the column space and the null space. Now, we've got two more, making four. And let me look at this matrix-- its four subspaces-- and put them into the big picture.

So the first space I'll look at is the row space. Now, the row space has these rows-- has the vector 1, 2, 3 and the vector 4, 5, 6, two vectors there, and all their combinations. That's the key idea in linear algebra, linear combinations. So 1, 2, 3 is a vector in three dimensional space. 4, 5, 6 is another one.

Now, if I take all their combinations, do you visualize that if I have two vectors, and I add them, and I get another vector that's in the same plane? Or if I subtract them, I'm still in that plane. Or if I take five of one and three of another, I'm still in that plane. And I fill the plane when I take all the combinations. So the row space-- can I try to draw a picture here?

It's a plane. This is the row space. I'll just put row. And in that plane are the vectors 1, 2, 3 and the vectors 4, 5, 6, those two rows. And the plane is filled out by their combinations. Well, I can't draw an infinite plane on this MIT blackboard. But you get the idea. It's a plane. And we're sitting in three dimensions.

Now, the other-- so there's more. We've only got one-- a plane here, a flat part, like a tabletop, extending to infinity, but not filling 3D because we've got another direction. And in that other direction is the null space. That's the nice thing.

So I would like to know the null space of that matrix. I'd like to solve so the null space, N of A-- I'm solving Av equals all 0s. So some combination of those three columns will give me the 0 column. Let me write it in as a 0 column.

What could v be? What combination of that column, that column, and that column gives 0, 0? Now, I know there are some interesting combinations because this only amounts to two equations with three unknowns, v1, v2, v3. I want to multiply that by v1, that by v2, that by v3. So I have three unknowns, but I have only two 0s to get, only two equations.

And if I have three unknowns and two equations, there will be lots of solutions. And I can see one. Do you see that if I add that and that, I get 4, 10? And that's the same-- 4, 10 is the same as 2 times 2, 5.

In other words, I believe v equal-- if I took 1 of the first and 1 of the third, and if I subtracted 2 of the second column-- so Av will give me 1 of the first column, 1 of the third column, and subtracting 2 of the second column will give me 0, 0. So here's my null space. My null space heads off in this direction, in the direction of 1, minus 2, 1.

But, of course, I get more solutions by multiplying v by any number. 10 times that vector would still give me 0s and still be in the null space. So I really have-- the null space is a whole line of vectors. It's that vector and any multiple of that vector. So it's a whole infinite line, which is a one dimensional subspace, the null space.
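That combination can be checked numerically. Here is a minimal sketch in plain Python (the matrix and the vector 1, minus 2, 1 are the ones from the example; the helper `matvec` is just for this check):

```python
# A is the 2 by 3 matrix from the example; v = (1, -2, 1) is the claimed
# null space vector: 1 of column one, minus 2 of column two, 1 of column three.
A = [[1, 2, 3],
     [4, 5, 6]]
v = [1, -2, 1]

def matvec(M, x):
    """Multiply a matrix (stored as a list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in M]

print(matvec(A, v))                    # [0, 0] -- v is in the null space
print(matvec(A, [10 * c for c in v]))  # [0, 0] -- so is any multiple of v
```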

So the null space in my picture-- here is the null space. Well, it is not very thick is it because it's just a line. So I'll call this N of A, this line. Well, you see I'm trying to draw a three dimensional space. That line goes both ways. But it's perpendicular to the plane. That's the fabulous part. That's wonderful.

This line, the null space, is perpendicular to this plane, the row space. You want to know why? You want to just see it? Because if I take A times v, that would be 1, 2, 3 times v. 1, 2, 3 is perpendicular to that. How do I check perpendicular for two vectors? The dot product. 1, 2, 3 dot 1, minus 2, 1: the dot product is 1 times 1, minus 2 times 2-- that's minus 4-- plus 3 times 1, that's 3. 1 minus 4 plus 3 is 0. And, similarly, 4 minus 10 plus 6 is 0.
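Those two dot products take only a line or two of plain Python to confirm (a small sketch, using the rows and the null space vector from the example):

```python
# Each row of the matrix should be perpendicular to the null space
# vector v = (1, -2, 1): both dot products should come out 0.
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

v = [1, -2, 1]
print(dot([1, 2, 3], v))  # 1 - 4 + 3 = 0
print(dot([4, 5, 6], v))  # 4 - 10 + 6 = 0
```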

So this is a right angle here. It's a right angle, 90 degrees between those two subspaces. And again, in this example, one space is two dimensional, a plane. The other space is one dimensional, a perpendicular line. I can show it with my hands, but I can't draw it on this flat blackboard. I have the plane going infinitely far, and I have the line going perpendicular to it and meeting, of course, at 0-- at the 0 vector.

That solves Av equals 0, and it also-- it's a combination, a 0 combination of the rows. That's half of the big picture, the row space and the null space.

Now, I'm ready for the other half, the right-hand side of the big picture, which contains the column space first of all. So what's the column space of that matrix?

So for the column space of a matrix, we take all combinations of those three columns. And that will fill out a space. So I take the vector 1, 4. And I take the vector 2, 5, maybe there. And then I'm going to also take the vector 3, 6. Well, I've got three columns. So I'm counting out 3, up 6. Good.

Take those combinations of those vectors, and what do you get? This is a picture in two dimensional space because these columns are in two dimensions, 1,4; 2, 5; 3, 6. When I take the combinations of 1, 4 and 2, 5, those are in different directions. The combinations already give me all of two dimensional space, so the column space is the whole space, including 0, 0 because I could take 0 of 1 plus 0 of the other vector.

And that third column can't contribute anything new. It's sitting in the column space. It's a combination of those two. But the first two are independent. Their combinations give the whole plane. So the column space is the whole plane. Column space.
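To see concretely that the third column is a combination of the first two, you can solve the little 2 by 2 system for the coefficients (the numbers minus 1 and 2 below are worked out here, not stated in the video):

```python
# We want a*(1, 4) + b*(2, 5) = (3, 6).  Eliminating a:
#   a + 2b = 3  and  4a + 5b = 6; subtract 4 times the first equation
#   from the second to get -3b = -6, so b = 2, and then a = 3 - 2b = -1.
a, b = -1, 2
col3 = [a * 1 + b * 2, a * 4 + b * 5]
print(col3)  # [3, 6] -- the third column adds nothing new
```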

There's not much room for our fourth subspace. But the fourth subspace, in this example, is quite small. Let me tell you about the fourth subspace, then. So we know the null space, N of A. And we know the column space, C of A. The null space was in this picture. The column space was in that picture.

Now, what about-- what's the name for the row space? Well, if I transpose the matrix, the row space turns into the column space. Transposing turns the rows into the columns of the matrix A transpose. So by transposing a matrix, it turns these two rows into two columns. And that's what I have here.

The row space is the column space of the transpose matrix. I like it. I don't want to introduce a new letter for the row space. I like having just column space and null space. So I'm OK to go to A transpose. Now, what's that fourth guy?

Oh, just by beautifulness, general principles of elegance here. If I have column space and null space of A, and if I have column space of A transpose, the fourth guy has to be the null space of A transpose. Sorry, I wrote that so small, so small. But I did write this a little larger.

The null space of A transpose, all the w's that solve that equation, A transpose w equals 0. What does that equation look like? Ha! Well, that equation-- A transpose will have two columns. So A transpose-- this will be w1 of the first column, 1, 2, 3, when I transpose, and w2 of the second column, 4, 5, 6, equaling 0, 0, 0.

Well, now I've got, for this null space-- because my matrix here is 2 by 3, for this fourth subspace, I have three equations and only two unknowns, w1 and w2. And, in fact, the only solutions are w1 equals 0, w2 equals 0, because the only combination of that vector and that vector that gives me 0 is to take 0 of that and 0 of that.

Do you see that, in this example, the null space of A transpose is just what I call the 0 subspace? The subspace that has only one puny vector in it, the 0 vector. But that's OK. It follows the rule for subspaces. And it completes the picture of four subspaces.
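The elimination that forces w to be 0 can be spelled out in a few lines (a sketch; the intermediate steps in the comments are mine, not from the video):

```python
# A transpose w = 0 gives three equations in w1 and w2:
#   w1 + 4*w2 = 0,  2*w1 + 5*w2 = 0,  3*w1 + 6*w2 = 0.
# The first says w1 = -4*w2; substitute into the second:
#   2*(-4*w2) + 5*w2 = -3*w2 = 0, so w2 = 0 and then w1 = 0.
w2 = 0
w1 = -4 * w2
residuals = [w1 + 4 * w2, 2 * w1 + 5 * w2, 3 * w1 + 6 * w2]
print((w1, w2), residuals)  # (0, 0) [0, 0, 0] -- only the zero vector works
```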

In other examples, we could have all four subspaces nonzero. But we would have two over here that, together, complete the full N dimensional space. And over here, we have two that, together, complete the full M dimensional space. And here, for this matrix, M was 2, so this side is completed. The column space was all of R2 in this case.

All of two dimensional space was the column space, and that didn't leave any room for the left null space, the null space of A transpose. So do you see that picture? Let me maybe just sketch it once more with a clean board.

So I have the row space. Let me draw it maybe going this way, the row space. And perpendicular to that is the null space. That's in-- we're here in N dimensions, and they are perpendicular, those spaces.

And then, over here, I have the column space. And perpendicular to that is the left null space. And we're here in M dimensions. Those are our four subspaces. And they have-- they sit in N dimensional space, two of them, two of them in M dimensional space, perpendicular. And I could tell you something about their dimensions.

So this row space, in that example, was two dimensional. It was a plane. In general, the dimension equals-- let's say R. That's an important number, the rank of A. Oh, that's a key number. Maybe I better speak separately about the rank of a matrix. But I'll complete the idea here.

So the dimension of the row space is the number of independent rows. And I call that number R. And the beauty is that the column space has the same dimension. Its dimension is also the rank R. Can I say that wonderful fact in a sentence? The column space and the row space have the same dimension.

The number of independent rows equals the number of independent columns. That's like a miracle. For a giant matrix, say 57 by 212, there might be 40 independent rows. Then, there would be 40 independent columns. And then the null space and the left null space have the remaining dimensions. So the null space has dimension N minus R because, together, they have dimension N.

And this has dimension M minus R because, together, they have dimension M. That's the picture with the dimensions put in. And let me say a little more about the idea of dimension in a separate video. Thank you.
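The dimension count for this example can be confirmed with one elimination step in plain Python (a sketch; r, n, and m are the rank and matrix sizes from the lecture):

```python
# One elimination step on the 2 by 3 example matrix: subtract 4 times
# row one from row two, then count the nonzero rows to get the rank r.
A = [[1, 2, 3],
     [4, 5, 6]]
m, n = 2, 3

A[1] = [a - 4 * b for a, b in zip(A[1], A[0])]  # row two becomes [0, -3, -6]
r = sum(1 for row in A if any(row))             # nonzero rows: the rank

print(r)      # 2 -- dimension of the row space and of the column space
print(n - r)  # 1 -- dimension of the null space
print(m - r)  # 0 -- dimension of the left null space
```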
