From the series: Differential Equations and Linear Algebra
Gilbert Strang, Massachusetts Institute of Technology (MIT)
A matrix produces four subspaces – column space, row space (same dimension), the space of vectors perpendicular to all rows (the nullspace), and the space of vectors perpendicular to all columns.
I would like you to see the big picture of linear algebra. We're not doing, in this set of videos, a full course on linear algebra. That's already on OpenCourseWare as 18.06. Right now I'm concentrating on differential equations, but you've got to see linear algebra this way.
And this way means subspaces. And there are four of them in the big picture. A previous video described the column space and the null space. Now, we've got two more, making four. And let me look at this matrix-- its four subspaces-- and put them into the big picture.
So the first space I'll look at is the row space. Now, the row space has these rows-- has the vector 1, 2, 3 and the vector 4, 5, 6, two vectors there, and all their combinations. That's the key idea in linear algebra, linear combinations. So 1, 2, 3 is a vector in three dimensional space. 4, 5, 6 is another one.
Now, if I take all their combinations, do you visualize that if I have two vectors, and I add them, and I get another vector that's in the same plane? Or if I subtract them, I'm still in that plane. Or if I take five of one and three of another, I'm still in that plane. And I fill the plane when I take all the combinations. So the row space-- can I try to draw a picture here?
It's a plane. This is the row space. I'll just put row. And in that plane are the vectors 1, 2, 3 and 4, 5, 6, those two rows. And their combinations fill out the plane. Well, I can't draw an infinite plane on this MIT blackboard. But you get the idea. It's a plane. And we're sitting in three dimensions.
Now, the other-- so there's more. We've only got one-- a plane here, a flat part, like a tabletop, extending to infinity, but not filling 3D because we've got another direction. And in that other direction is the null space. That's the nice thing.
So I would like to know the null space of that matrix. I'd like to solve for the null space, N of A-- I'm solving Av equals all 0s. So some combination of those three columns will give me the 0 column. Let me write it in as a 0 column.
What could v be? What combination of that column, that column, and that column gives 0, 0? Now, I know there are some interesting combinations because this only amounts to two equations with three unknowns, v1, v2, v3. I want to multiply that by v1, that by v2, that by v3. So I have three unknowns, but I've only two 0s to get, only two equations.
And if I have three unknowns and two equations, there will be lots of solutions. And I can see one. Do you see that if I add that column and that column, I get 4, 10? And 4, 10 is the same as 2 times 2, 5.
In other words, I believe v equal-- if I took 1 of the first and 1 of the third, and if I subtracted 2 of the second column-- so Av will give me 1 of the first column, 1 of the third column, and subtracting 2 of the second column will give me 0, 0. So here's my null space. My null space heads off in this direction, in the direction of 1, minus 2, 1.
But, of course, I get more solutions by multiplying v by any number. 10 times that vector would still give me 0s and still be in the null space. So I really have-- the null space is a whole line of vectors. It's that vector and any multiple of that vector. So it's a whole infinite line, which is a one dimensional subspace, the null space.
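If you want to check that combination numerically, here is a minimal NumPy sketch. It is not part of the lecture; it simply forms the 2 by 3 example matrix and multiplies it by the nullspace vector found by inspection:

```python
import numpy as np

# The 2x3 matrix from the lecture.
A = np.array([[1, 2, 3],
              [4, 5, 6]])

# The combination found by inspection:
# 1 of column 1, minus 2 of column 2, plus 1 of column 3.
v = np.array([1, -2, 1])

print(A @ v)         # [0 0] -- v is in the nullspace
print(A @ (10 * v))  # [0 0] -- any multiple of v stays in the nullspace
```

Scaling v by any number keeps A times v at zero, which is exactly why the nullspace here is a whole line.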
So the null space in my picture-- here is the null space. Well, it's not very thick, is it, because it's just a line. So I'll call this N of A, this line. Well, you see I'm trying to draw a three dimensional space. That line goes both ways. But it's perpendicular to the plane. That's the fabulous part. That's wonderful.
This line, the null space, is perpendicular to this plane, the row space. You want to know why? You want to just see it? Because if I take A times v, that starts with 1, 2, 3 times v. And 1, 2, 3 is perpendicular to v. How do I check perpendicularity for two vectors? The dot product. 1, 2, 3 dotted with 1, minus 2, 1 is 1 times 1, plus 2 times minus 2-- that's minus 4-- plus 3 times 1, that's 3. 1 minus 4 plus 3 is 0. And, similarly, 4 minus 10 plus 6 is 0.
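The two dot products in that argument can be checked in one short NumPy sketch (an editor's aside, using the rows and nullspace vector from the lecture):

```python
import numpy as np

v = np.array([1, -2, 1])      # the nullspace direction
row1 = np.array([1, 2, 3])
row2 = np.array([4, 5, 6])

# Zero dot products confirm v is perpendicular to both rows,
# hence to every combination of them -- the whole row space.
print(row1 @ v)  # 1 - 4 + 3 = 0
print(row2 @ v)  # 4 - 10 + 6 = 0
```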
So this is a right angle here. It's a right angle, 90 degrees between those two subspaces. And again, in this example, one space is two dimensional, a plane. The other space is one dimensional, a perpendicular line. I can show with my hands, but I can't draw on this flat blackboard. I have the plane going infinitely far, and I have the line going perpendicular to it and meeting, of course, at 0-- at the 0 vector.
That solves Av equals 0, and it also-- it's a combination, a 0 combination of the rows. That's half of the big picture, the row space and the null space.
Now, I'm ready for the other half, the right-hand side of the big picture, and it contains the column space first of all. So what's the column space of that matrix?
So the column space of a matrix, we take all combinations of those three columns. And that will fill out a space. Now, I have-- so I take the vector 1, 4. And I take the vector 2, 5, maybe there. And then I'm going to also take the vector 3, 6. Well, I've got three columns. So I'm counting out 3, up 6. Good.
Take the combinations of those vectors, and what do you get? This is a picture in two dimensional space because these columns are in two dimensions: 1, 4; 2, 5; 3, 6. When I take the combinations of 1, 4 and 2, 5, those are in different directions. The combinations already give me all of two dimensional space, so the column space is the whole space, including 0, 0 because I could take 0 of one vector plus 0 of the other vector.
And that third column can't contribute anything new. It's sitting in the column space. It's a combination of those two. But the first two are independent. Their combinations give the whole plane. So the column space is the whole plane. Column space.
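Both claims-- that the columns fill all of R^2, and that the third column adds nothing new-- can be checked numerically. A small sketch (not from the lecture; it finds the combination of the first two columns that produces the third by solving a 2 by 2 system):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

# Rank 2 means the columns span all of R^2.
print(np.linalg.matrix_rank(A))  # 2

# The third column is a combination of the first two:
# solve [col1 col2] c = col3 for the coefficients c.
c = np.linalg.solve(A[:, :2], A[:, 2])
print(c)  # [-1.  2.] -- column 3 = -1*(column 1) + 2*(column 2)
```

Those coefficients, minus 1 and 2, are just the nullspace vector 1, minus 2, 1 read the other way around.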
There's not much room for our fourth subspace. But the fourth subspace, in this example, is quite small. Let me tell you about the fourth subspace then. So we know the null space, N of A. And we know the column space, C of A. The null space was in this picture. The column space was in that picture.
Now, what about-- what's the name for the row space? Well, if I transpose the matrix, the rows become the columns. The rows of A are the columns of the matrix A transpose. So by transposing the matrix, these two rows turn into two columns. And that's what I have here.
The row space is the-- this is the column space of the transpose matrix. I like it. I don't want to introduce a new letter for the row space. I like having just column space and null space. So I-- and I'm OK to go to A transpose. Now, what's that fourth guy?
Oh, just by beautifulness, general principles of elegance here. If I have columns space and null space of A, and if I have column space of A transpose, the fourth guy has to be the null space of A transpose. Sorry, I wrote that so small, so small. But I did write this a little larger.
The null space of A transpose is all the w's that solve the equation A transpose w equals 0. What does that equation look like? Ha! Well, A transpose will have two columns. So this will be w1 times the first column, 1, 2, 3, when I transpose, plus w2 times the second column, 4, 5, 6, equaling 0, 0, 0.
Well, now I've got, for this null space-- because my matrix here is 2 by 3, for this fourth subspace, I have three equations and only two unknowns, w1 and w2. And, in fact, the only solution is w1 equals 0, w2 equals 0, because the only combination of that vector and that vector that gives me 0 is to take 0 of each.
Do you see that, in this example, the null space of A transpose is just what I call the 0 subspace? The subspace that has only one puny vector in it, the 0 vector. But that's OK. It follows the rule for subspaces. And it completes the picture of four subspaces.
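The same conclusion comes out of a quick rank computation-- a sketch from the editor, not the lecture. A transpose is 3 by 2 with two independent columns, so its nullspace has dimension 2 minus 2 equals 0:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

# A transpose is 3x2; its two columns (1,2,3) and (4,5,6)
# are independent, so A^T w = 0 forces w = 0.
rank = np.linalg.matrix_rank(A.T)
print(rank)              # 2 independent columns of A^T
print(A.shape[0] - rank) # 0 = dimension of the left nullspace N(A^T)
```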
In other examples, we could have all four subspaces nonzero. But we would have two over here that, together, complete the full N dimensional space. And over here, we have two that, together, complete the full M dimensional space. And here, for this matrix, M was 2, so this side is already completed. The column space was all of R2 in this case.
All of two dimensional space was the column space, and that didn't leave any room for the left null space, the null space of A transpose. So do you see that picture? Let me just sketch it once more with a clean board.
So I have the row space. Let me draw it maybe going this way, the row space. And perpendicular to that is the null space. That's in-- we're here in N dimensions, and they are perpendicular, those spaces.
And then, over here, I have the column space. And perpendicular to that is the left null space. And we're here in M dimensions. Those are our four subspaces. And they have-- they sit in N dimensional space, two of them, two of them in M dimensional space, perpendicular. And I could tell you something about their dimensions.
So this row space, in that example, was two dimensional. It was a plane. In general, the dimension equals-- let's say R. That's an important number, the rank of A. Oh, that's a key number. Maybe I better speak separately about the rank of a matrix. But I'll complete the idea here.
So the dimension of the row space is the number of independent rows. And I call that number R. And the beauty is that the column space has the same dimension. Its dimension is also the rank R. Can I say that wonderful fact in a sentence? The column space and the row space have the same dimension.
The number of independent rows equals the number of independent columns. That's like a miracle. For a giant matrix, say 57 by 212, there might be 40 independent rows. Then, there would be 40 independent columns. And then the null space and the left null space have the remaining dimensions. So the null space has dimension N minus R because, together, the row space and the null space have dimension N.
And this has dimension M minus R because, together, they have dimension M. That's the picture with the dimensions put in. And let me say a little more about the idea of dimension in a separate video. Thank you.
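All four dimensions for the lecture's example can be read off from the shape and the rank-- a closing NumPy sketch (editor's addition) that tallies r, n minus r, and m minus r:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
m, n = A.shape                # m = 2 rows, n = 3 columns
r = np.linalg.matrix_rank(A)  # rank r = 2

print(r)      # dim(row space) = dim(column space) = 2
print(n - r)  # dim(nullspace N(A)) = 1
print(m - r)  # dim(left nullspace N(A^T)) = 0
```

Row space plus nullspace fill R^n; column space plus left nullspace fill R^m. That is the big picture in four numbers.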
1.1: Overview of Differential Equations Linear equations include dy/dt = y, dy/dt = –y, dy/dt = 2ty. The equation dy/dt = y*y is nonlinear.
1.2: The Calculus You Need The sum rule, product rule, and chain rule produce new derivatives from the derivatives of x^n, sin(x), and e^x. The Fundamental Theorem of Calculus says that the integral inverts the derivative.
1.4b: Response to Exponential Input, exp(s*t) With exponential input, e^(st), from outside and exponential growth, e^(at), from inside, the solution, y(t), is a combination of two exponentials.
1.4c: Response to Oscillating Input, cos(w*t) An oscillating input cos(ωt) produces an oscillating output with the same frequency ω (and a phase shift).
1.4d: Solution for Any Input, q(t) To solve a linear first order equation, multiply each input q(s) by its growth factor and integrate those outputs.
1.4e: Step Function and Delta Function A unit step function jumps from 0 to 1. Its slope is a delta function: zero everywhere except infinite at the jump.
1.5: Response to Complex Exponential, exp(i*w*t) = cos(w*t)+i*sin(w*t) For linear equations, the solution for f = cos(ωt) is the real part of the solution for f = e^(iωt). That complex solution has magnitude G (the gain).
1.6: Integrating Factor for a Constant Rate, a The integrating factor e^(-at) multiplies the differential equation, y' = ay + q, to give the derivative of e^(-at)y: ready for integration.
1.6b: Integrating Factor for a Varying Rate, a(t) The integral of a varying interest rate provides the exponent in the growing solution (the bank balance).
1.7: The Logistic Equation When –by² slows down growth and makes the equation nonlinear, the solution approaches a steady state y(∞) = a/b.
1.7c: The Stability and Instability of Steady States Steady state solutions can be stable or unstable – a simple test decides.
2.1: Second Order Equations For the oscillation equation with no damping and no forcing, all solutions share the same natural frequency.
2.1b: Forced Harmonic Motion With forcing f = cos(ωt), the particular solution is Y*cos(ωt). But if the forcing frequency equals the natural frequency there is resonance.
2.3: Unforced Damped Motion With constant coefficients in a differential equation, the basic solutions are exponentials e^(st). The exponent s solves a simple equation such as As² + Bs + C = 0.
2.3c: Impulse Response and Step Response The impulse response g is the solution when the force is an impulse (a delta function). This also solves a null equation (no force) with a nonzero initial condition.
2.4: Exponential Response - Possible Resonance Resonance occurs when the natural frequency matches the forcing frequency — equal exponents from inside and outside.
2.4b: Second Order Equations With Damping A damped forced equation has a particular solution y = G cos(ωt – α). The damping ratio provides insight into the null solutions.
2.5: Electrical Networks: Voltages and Currents Current flowing around an RLC loop solves a linear equation with coefficients L (inductance), R (resistance), and 1/C (C = capacitance).
2.6: Methods of Undetermined Coefficients With constant coefficients and special forcing terms (powers of t, cosines/sines, exponentials), a particular solution has this same form.
2.6b: An Example of Method of Undetermined Coefficients This method is also successful for forces and solutions such as (at² + bt + c)e^(st): substitute into the equation to find a, b, c.
2.6c: Variation of Parameters Combine null solutions y1 and y2 with coefficients c1(t) and c2(t) to find a particular solution for any f(t).
2.7: Laplace Transform: First Order Equation Transform each term in the linear differential equation to create an algebra problem. You can then transform the algebra solution back to the ODE solution, y(t).
2.7b: Laplace Transform: Second Order Equation The second derivative transforms to s²Y and the algebra problem involves the transfer function 1/(As² + Bs + C).
3.1: Pictures of the Solutions The direction field for dy/dt = f(t,y) has an arrow with slope f at each point (t, y). Arrows with the same slope lie along an isocline.
3.2: Phase Plane Pictures: Source, Sink, Saddle Solutions to second order equations can approach infinity or zero. Saddle points contain a positive and also a negative exponent or eigenvalue.
3.2b: Phase Plane Pictures: Spirals and Centers Imaginary exponents with pure oscillation provide a “center” in the phase plane. The point (y, dy/dt) travels forever around an ellipse.
3.2c: Two First Order Equations: Stability A second order equation gives two first order equations for y and dy/dt. The matrix becomes a companion matrix.
3.3: Linearization at Critical Points A critical point is a constant solution Y to the differential equation y' = f(y). Near that Y, the sign of df/dy decides stability or instability.
3.3b: Linearization of y'=f(y,z) and z'=g(y,z) With two equations, a critical point has f(Y,Z) = 0 and g(Y,Z) = 0. Near those constant solutions, the two linearized equations use the 2 by 2 matrix of partial derivatives of f and g.
3.3c: Eigenvalues and Stability: 2 by 2 Matrix, A Two equations y' = Ay are stable (solutions approach zero) when the trace of A is negative and the determinant is positive.
5.1: The Column Space of a Matrix, A An m by n matrix A has n columns, each in R^m. Capturing all combinations Av of these columns gives the column space – a subspace of R^m.
5.4: Independence, Basis, and Dimension Vectors v_1 to v_d are a basis for a subspace if their combinations span the whole subspace and are independent: no basis vector is a combination of the others. Dimension d = number of basis vectors.
5.5: The Big Picture of Linear Algebra A matrix produces four subspaces – column space, row space (same dimension), the space of vectors perpendicular to all rows (the nullspace), and the space of vectors perpendicular to all columns.
5.6: Graphs A graph has n nodes connected by m edges (other edges can be missing). This is a useful model for the Internet, the brain, pipeline systems, and much more.
6.1: Eigenvalues and Eigenvectors The eigenvectors x remain in the same direction when multiplied by the matrix (Ax = λx). An n by n matrix has n eigenvalues.
6.2: Diagonalizing a Matrix A matrix can be diagonalized if it has n independent eigenvectors. The diagonal matrix Λ is the eigenvalue matrix.
6.3: Solving Linear Systems dy/dt = Ay contains solutions y = e^(λt)x where λ and x are an eigenvalue / eigenvector pair for A.
6.4: The Matrix Exponential, exp(A*t) The shortest form of the solution uses the matrix exponential y = e^(At)y(0). The matrix e^(At) has eigenvalues e^(λt) and the eigenvectors of A.
6.4b: Similar Matrices, A and B=M^(-1)*A*M A and B are "similar" if B = M^(-1)AM for some matrix M. B then has the same eigenvalues as A.
6.5: Symmetric Matrices, Real Eigenvalues, Orthogonal Eigenvectors Symmetric matrices have n perpendicular eigenvectors and n real eigenvalues.
7.2: Positive Definite Matrices, S=A'*A A positive definite matrix S has positive eigenvalues, positive pivots, positive determinants, and positive energy v^T S v for every nonzero vector v. S = A^T A is always positive definite if A has independent columns.
7.2b: Singular Value Decomposition, SVD The SVD factors each matrix A into an orthogonal matrix U times a diagonal matrix Σ (the singular values) times another orthogonal matrix V^T: rotation times stretch times rotation.
7.3: Boundary Conditions Replace Initial Conditions A second order equation can change its initial conditions on y(0) and dy/dt(0) to boundary conditions on y(0) and y(1).
8.1: Fourier Series A Fourier series separates a periodic function F(x) into a combination (infinite) of all basis functions cos(nx) and sin(nx) .
8.1b: Examples of Fourier Series Even functions (F(–x) = F(x)) use only cosines and odd functions use only sines. The coefficients a_n and b_n come from integrals of F(x)cos(nx) and F(x)sin(nx).
8.1c: Fourier Series Solution of Laplace's Equation Inside a circle, the solution u(r, θ) combines r^n cos(nθ) and r^n sin(nθ). The solution combines all terms in a Fourier series to match the boundary conditions.
8.3: Heat Equation The heat equation ∂u/∂t = ∂²u/∂x² starts from a temperature distribution u at t = 0 and follows it for t > 0 as it quickly becomes smooth.