From the series: Differential Equations and Linear Algebra
Gilbert Strang, Massachusetts Institute of Technology (MIT)
An m by n matrix A has n columns, each in R^m. Capturing all combinations Av of these columns gives the column space – a subspace of R^m.
OK. We're coming to the point where we need matrices. That's the point when we have several equations, several differential equations instead of just one. And it's a matrix that does that coupling.
So can I-- this won't be a full course in linear algebra. That would be available, as you may know, on OpenCourseWare: 18.06. That's the linear algebra course. But we need a few facts, and why not just say them here in a few minutes?
So I have a matrix. Well, there's a matrix. That's a 3 by 3 matrix. And first I want to ask, how does it multiply a vector? So there it is, multiplying a vector v1, v2, v3. And what's the result, the key idea? The answer on the right-hand side is this number v1 times the first column, plus this number v2 times the second column, plus this number v3 times the third column: a combination of the columns of A. That's what A times v is. That's what the notation of matrix multiplication produces.
That's really basic, to see it as a combination of columns. Now I want to build on that. That's one particular product: if you give me v1, v2, and v3, I know how to multiply. I take the combination.
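As a small sketch (not part of the lecture), here is matrix-vector multiplication written column by column, using the 3 by 3 matrix that appears later in this lecture; the helper name matvec is just for illustration.

```python
# The lecture's 3 by 3 matrix, with columns (1,1,1), (1,2,2), (2,3,3).
A = [[1, 1, 2],
     [1, 2, 3],
     [1, 2, 3]]

def matvec(A, v):
    """Compute Av by adding v[j] times column j of A -- a combination of columns."""
    m, n = len(A), len(A[0])
    result = [0] * m
    for j in range(n):          # walk across the columns
        for i in range(m):
            result[i] += v[j] * A[i][j]
    return result

print(matvec(A, [1, 1, 1]))     # the sum of the three columns: [4, 6, 6]
```

Taking v = (1, 1, 1) adds the three columns, exactly the "combination of columns" picture in the lecture.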
Now I would like you to think about the result from all v1, v2, and v3. If I take all those numbers, I get a whole lot of answers. They're all vectors; the result of A times v is another vector, Av. And I want to think about Av, those outputs, for all inputs v.
So I take v1, v2, v3 to be all possible numbers. And I get all combinations of those three columns. And usually I would get the whole 3-dimensional space. Usually I can produce any vector, any output b1, b2, b3, from A times v. But not for this matrix, not for this matrix. Because this matrix is, you could say, deficient.
That third column there, 2, 3, 3, is obviously the sum of column one and column two. So this v3 times that third column just produces something that I could already get from column one and column two. That v3 times column three, I could cross out. It's the same as column one plus column two, for this particular matrix, not usually.
And so I only really have a combination of two columns. It looks like a combination of three, but the third one was dependent on the others. So it's really a combination of two columns. Combinations of two columns, two vectors in 3-dimensional space, produce a plane. I only get a plane. I don't get all of 3-dimensional space, only a plane. And I call that plane the column space, the column space of the matrix.
So if you gave me a different matrix, if you change this 3 to an 11, the column space changes. For that matrix I think the column space would be the whole 3-dimensional space. I get everything. But when this third column is the sum of the first two columns, it's not giving me anything new, and the column space is only a plane.
And you can think of a matrix where the column space is only a line, just one independent column. OK. So that's the column space: all combinations of the columns. In other words, it's all the results, all the outputs, from A times v. Those are the combinations of the columns.
So we can answer the most basic question of linear algebra. When does Av equal b have a solution? When is there a v that solves this equation?
So it's a question about b. What is it about b that must be true if this can be solved? Well, that equation is saying b is a combination of the columns of A. So this has a solution when b is in the column space. For that example, the only b's where we can get a solution are b's that are combinations of the first two columns. Because having the third column at our disposal gives us no help. It doesn't give us anything new.
It will be solvable if b equals 1, 1, 1. That's a combination of the columns. Or if b equals 1, 2, 2. That's another simple combination of the columns. Or if b equals 2, 3, 3. But I'm staying on a plane there. And most b's are off that plane.
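For this particular matrix, a little hand algebra turns the plane into a test: combinations x(1,1,1) + y(1,2,2) equal (x+y, x+2y, x+2y), so b is in the column space exactly when its second and third components agree. A minimal sketch (the helper name is illustrative, not from the lecture):

```python
def in_column_space(b):
    """Membership test for the plane spanned by (1,1,1) and (1,2,2).

    Derived by hand for this one matrix: every combination has the
    form (x+y, x+2y, x+2y), so the second and third entries match.
    """
    return b[1] == b[2]

print(in_column_space([1, 1, 1]))   # True  (the first column itself)
print(in_column_space([2, 3, 3]))   # True  (the third column, c1 + c2)
print(in_column_space([1, 0, 1]))   # False (off the plane: Av = b has no solution)
```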
Now, when is there a solution? All right. Now, a second key idea of linear algebra. Can we do it in this short video? I want to know about the equation Av equals 0. So now I'm setting the right-hand side to be 0. That's the zero vector, 0, 0, 0. Does it have a solution? Let's take this example.
1, 1, 1; 1, 2, 2; 2, 3, 3. Now I'm looking at the solutions when the right side is all 0. Does that have a solution? Is there a combination of those three columns that gives 0? Well, there is always one combination. I could take 0, 0, and 0: 0 of this column, 0 of that column, 0 of the third column would give me the zero vector. That solution is always available.
The big question is, is there another solution? And here, for this deficient, singular, non-invertible matrix, there is. There is another solution. Let me just write it down. Let me put it in there. Do you see what the solution is? The third column is the sum of those two. So if I want one of that third column, I should take minus one of each of the first two columns.
So minus this column, minus this column, plus this column gives me the zero column. That is a vector in the null space. That's a solution to Av equals 0. So the null space is all solutions to Av equals 0. It's all the v's. The null space is a bunch of v's. The column space was a bunch of b's. I just want to emphasize that difference.
There, I was looking at which b's allowed a solution. I wasn't paying attention to what that solution was, just: is there a solution? Then that b is in the column space. Here, I take b equals 0. I fixed that all-important b. And now I'm looking at the solutions. And here I find one. Can you find any more solutions? I think minus 10, minus 10, and 10 would be another solution. It's 10 times as much.
And 0, 0, 0 is a solution. So I have a whole line of solutions. We had a plane for the column space. But we have a line for the null space. Isn't that neat? One's a plane, one's a line: dimension two plus dimension one. Two for the plane, one for the line, adds to dimension three, the dimension of the whole space. OK. All right.
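The null solution the lecture found, (-1, -1, 1), can be checked numerically. A minimal sketch, with matvec as an illustrative helper name:

```python
# The lecture's matrix, columns (1,1,1), (1,2,2), (2,3,3).
A = [[1, 1, 2],
     [1, 2, 3],
     [1, 2, 3]]

def matvec(A, v):
    """Compute Av (row by row, the usual dot products)."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# Minus column one, minus column two, plus column three gives the zero vector.
vn = [-1, -1, 1]
print(matvec(A, vn))                       # [0, 0, 0]
print(matvec(A, [10 * x for x in vn]))     # [0, 0, 0]: the whole line c times vn
```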
Now I ask, what are all solutions? The complete solution to Av equals b. Well, let me choose some right-hand side where there is a solution. Say, maybe I'll take two of the first column plus one of the second column: 2 plus 1 is 3, 2 plus 2 is 4, and 2 plus 2 is another 4. OK. That's my b: 3, 4, 4.
It's a combination of the columns; you saw me create it from the first two columns. So now I ask, what are all the solutions? It's in the column space; it's 2 times the first column plus the second column. But there may be other solutions. So all solutions, the complete solution, v complete: here's the key idea. And the point is that it's the same one we know from differential equations.
It's a particular solution plus any null solution. Plus all v null, you could say. Particular plus null solution. It's such an important concept, we just want to see it again. One particular solution, v particular: how did we produce that b? Out of two of the first column, plus one of the second, plus zero of the third.
So v particular could be 2, 1, 0. It works for that particular b: two of the first column, one of the second. And then we could add in anything in the null space. So we have infinitely many solutions here. We've got one solution, plus, added to that, a whole line of solutions: all of the null space, all vectors like that.
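The particular-plus-null structure can be verified for this example: every v = v_particular + c * v_null should solve Av = b with b = (3, 4, 4). A sketch under those assumptions (again with matvec as an illustrative helper):

```python
A = [[1, 1, 2],
     [1, 2, 3],
     [1, 2, 3]]
b = [3, 4, 4]                 # two of column one plus one of column two

def matvec(A, v):
    """Compute Av (row by row)."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

v_particular = [2, 1, 0]      # the particular solution from the lecture
v_null = [-1, -1, 1]          # the null solution from the lecture

# Every point on the line v_particular + c * v_null hits the same b.
for c in [0, 1, -3]:
    v = [p + c * n for p, n in zip(v_particular, v_null)]
    print(matvec(A, v))       # [3, 4, 4] every time
```

A whole line of solutions, all mapping to the single right-hand side b, which is the matrix version of particular solution plus null solutions in differential equations.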
OK. That's the picture that we've seen for differential equations. And I just want to bring it out again for matrix equations, using the language of linear algebra. That's what I'm introducing here. One particular solution, plus anything in the null space: a space of vectors that is the heart of linear algebra. Thank you.
1.1: Overview of Differential Equations Linear equations include dy/dt = y, dy/dt = –y, dy/dt = 2ty. The equation dy/dt = y*y is nonlinear.
1.2: The Calculus You Need The sum rule, product rule, and chain rule produce new derivatives from the derivatives of x^n, sin(x), and e^x. The Fundamental Theorem of Calculus says that the integral inverts the derivative.
1.4b: Response to Exponential Input, exp(s*t) With exponential input, e^(st), from outside and exponential growth, e^(at), from inside, the solution, y(t), is a combination of two exponentials.
1.4c: Response to Oscillating Input, cos(w*t) An oscillating input cos(ωt) produces an oscillating output with the same frequency ω (and a phase shift).
1.4d: Solution for Any Input, q(t) To solve a linear first order equation, multiply each input q(s) by its growth factor and integrate those outputs.
1.4e: Step Function and Delta Function A unit step function jumps from 0 to 1. Its slope is a delta function: zero everywhere except infinite at the jump.
1.5: Response to Complex Exponential, exp(i*w*t) = cos(w*t)+i*sin(w*t) For linear equations, the solution for f = cos(ωt) is the real part of the solution for f = e^(iωt). That complex solution has magnitude G (the gain).
1.6: Integrating Factor for a Constant Rate, a The integrating factor e^(–at) multiplies the differential equation, y' = ay + q, to give the derivative of e^(–at)y: ready for integration.
1.6b: Integrating Factor for a Varying Rate, a(t) The integral of a varying interest rate provides the exponent in the growing solution (the bank balance).
1.7: The Logistic Equation When –by^2 slows down growth and makes the equation nonlinear, the solution approaches a steady state y(∞) = a/b.
1.7c: The Stability and Instability of Steady States Steady state solutions can be stable or unstable – a simple test decides.
2.1: Second Order Equations For the oscillation equation with no damping and no forcing, all solutions share the same natural frequency.
2.1b: Forced Harmonic Motion With forcing f = cos(ωt), the particular solution is Y*cos(ωt). But if the forcing frequency equals the natural frequency there is resonance.
2.3: Unforced Damped Motion With constant coefficients in a differential equation, the basic solutions are exponentials e^(st). The exponent s solves a simple equation such as As^2 + Bs + C = 0.
2.3c: Impulse Response and Step Response The impulse response g is the solution when the force is an impulse (a delta function). This also solves a null equation (no force) with a nonzero initial condition.
2.4: Exponential Response - Possible Resonance Resonance occurs when the natural frequency matches the forcing frequency — equal exponents from inside and outside.
2.4b: Second Order Equations With Damping A damped forced equation has a particular solution y = G cos(ωt – α). The damping ratio provides insight into the null solutions.
2.5: Electrical Networks: Voltages and Currents Current flowing around an RLC loop solves a linear equation with coefficients L (inductance), R (resistance), and 1/C (C = capacitance).
2.6: Methods of Undetermined Coefficients With constant coefficients and special forcing terms (powers of t, cosines/sines, exponentials), a particular solution has this same form.
2.6b: An Example of the Method of Undetermined Coefficients This method is also successful for forces and solutions such as (at^2 + bt + c)e^(st): substitute into the equation to find a, b, c.
2.6c: Variations of Parameters Combine null solutions y1 and y2 with coefficients c1(t) and c2(t) to find a particular solution for any f(t).
2.7: Laplace Transform: First Order Equation Transform each term in the linear differential equation to create an algebra problem. You can then transform the algebra solution back to the ODE solution, y(t) .
2.7b: Laplace Transform: Second Order Equation The second derivative transforms to s^2 Y and the algebra problem involves the transfer function 1/(As^2 + Bs + C).
3.1: Pictures of the Solutions The direction field for dy/dt = f(t,y) has an arrow with slope f at each point (t, y). Arrows with the same slope lie along an isocline.
3.2: Phase Plane Pictures: Source, Sink, Saddle Solutions to second order equations can approach infinity or zero. Saddle points have a positive and also a negative exponent or eigenvalue.
3.2b: Phase Plane Pictures: Spirals and Centers Imaginary exponents with pure oscillation provide a “center” in the phase plane. The point (y, dy/dt) travels forever around an ellipse.
3.2c: Two First Order Equations: Stability A second order equation gives two first order equations for y and dy/dt . The matrix becomes a companion matrix.
3.3: Linearization at Critical Points A critical point is a constant solution Y to the differential equation y’ = f(y) . Near that Y , the sign of df/dy decides stability or instability.
3.3b: Linearization of y'=f(y,z) and z'=g(y,z) With two equations, a critical point has f(Y,Z) = 0 and g(Y,Z) = 0. Near those constant solutions, the two linearized equations use the 2 by 2 matrix of partial derivatives of f and g .
3.3c: Eigenvalues and Stability: 2 by 2 Matrix, A Two equations y’ = Ay are stable (solutions approach zero) when the trace of A is negative and the determinant is positive.
5.1: The Column Space of a Matrix, A An m by n matrix A has n columns, each in R^m. Capturing all combinations Av of these columns gives the column space – a subspace of R^m.
5.4: Independence, Basis, and Dimension Vectors v1 to vd are a basis for a subspace if their combinations span the whole subspace and are independent: no basis vector is a combination of the others. Dimension d = number of basis vectors.
5.5: The Big Picture of Linear Algebra A matrix produces four subspaces – column space, row space (same dimension), the space of vectors perpendicular to all rows (the nullspace), and the space of vectors perpendicular to all columns.
5.6: Graphs A graph has n nodes connected by m edges (other edges can be missing). This is a useful model for the Internet, the brain, pipeline systems, and much more.
6.1: Eigenvalues and Eigenvectors The eigenvectors x remain in the same direction when multiplied by the matrix (Ax = λx). An n × n matrix has n eigenvalues.
6.2: Diagonalizing a Matrix A matrix can be diagonalized if it has n independent eigenvectors. The diagonal matrix Λ is the eigenvalue matrix.
6.3: Solving Linear Systems dy/dt = Ay has solutions y = e^(λt)x where λ and x are an eigenvalue / eigenvector pair for A.
6.4: The Matrix Exponential, exp(A*t) The shortest form of the solution uses the matrix exponential y = e^(At)y(0). The matrix e^(At) has eigenvalues e^(λt) and the eigenvectors of A.
6.4b: Similar Matrices, A and B=M^(-1)*A*M A and B are “similar” if B = M^(–1)AM for some matrix M. B then has the same eigenvalues as A.
6.5: Symmetric Matrices, Real Eigenvalues, Orthogonal Eigenvectors Symmetric matrices have n perpendicular eigenvectors and n real eigenvalues.
7.2: Positive Definite Matrices, S=A'*A A positive definite matrix S has positive eigenvalues, positive pivots, positive determinants, and positive energy v^T S v for every vector v. S = A^T A is always positive definite if A has independent columns.
7.2b: Singular Value Decomposition, SVD The SVD factors each matrix A into an orthogonal matrix U times a diagonal matrix Σ (the singular values) times another orthogonal matrix V^T: rotation times stretch times rotation.
7.3: Boundary Conditions Replace Initial Conditions A second order equation can change its initial conditions on y(0) and dy/dt(0) to boundary conditions on y(0) and y(1) .
8.1: Fourier Series A Fourier series separates a periodic function F(x) into a combination (infinite) of all basis functions cos(nx) and sin(nx) .
8.1b: Examples of Fourier Series Even functions use only cosines (F(–x) = F(x)) and odd functions use only sines. The coefficients a_n and b_n come from integrals of F(x) cos(nx) and F(x) sin(nx).
8.1c: Fourier Series Solution of Laplace's Equation Inside a circle, the solution u(r, θ) combines r^n cos(nθ) and r^n sin(nθ). The boundary solution combines all entries in a Fourier series to match the boundary conditions.
8.3: Heat Equation The heat equation ∂u/∂t = ∂^2u/∂x^2 starts from a temperature distribution u at t = 0 and follows it for t > 0 as it quickly becomes smooth.