From the series: Differential Equations and Linear Algebra
Gilbert Strang, Massachusetts Institute of Technology (MIT)
Vectors v1 to vd are a basis for a subspace if their combinations span the whole subspace and are independent: no basis vector is a combination of the others. Dimension d = number of basis vectors.
So as long as I'm introducing the idea of a vector space, I better introduce the things that go with it: the idea of its dimension and, all important, the idea of a basis for that space. That space could be all of three dimensional space, the space we live in. In that case, the dimension is three. But what's the meaning of a basis-- a basis for three dimensional space, or a basis for other spaces?
OK, so I have to explain independence, basis, and dimension. Dimension's easy if you get the first two. OK, independence. Are those vectors independent? Well, if I draw them, in three dimensional space, I can imagine 2, 1, 5 going in some direction. Let me draw it. How's that? 2, 1, 5, whatever! Goes there. That's a1. OK.
Now is a2 on the same line? If a2 is on the same line, then it would be dependent. The two vectors would be dependent if they're on the same line. But this one is not on that line. It's 4, 2, 0. So it doesn't go up at all. It's somewhere in this plane, 4, 2, 0. I'll say there. Whatever. a2. So those are independent.
So their combinations give me a space. The combinations of a1 and a2 give me a plane, a flat plane, in three dimensional space. I would say a1 and a2 span that plane. And here's the key word: span.
So there are two vectors. They're in three dimensional space. And the plane they span is all their combinations. That's what we're always doing: taking all the combinations of these vectors. OK.
So there-- and actually, a1 and a2 are a basis for that plane. a1 and a2 are a basis for that plane because their combinations fill the plane. And also, they're independent. I need them both. If I threw away one, I would only have one vector left, and it would only span a line. OK.
Now let me bring in a third vector in three dimensions. Well, what shall I take for that third vector? Ha! Suppose I take a1 plus a2 as my third vector. So 6, 3, 5. What about the vector 6, 3, 5? Well, what do I know? It's obviously special. It's a1 plus a2. It's in the same plane. So if I took a3 equal 6, 3, 5, that would be dependent. The three vectors would be dependent with that a3.
They would span the plane still. Their combinations would still give the plane, but they wouldn't be a basis for the plane. a1 and a2 and a3 together, that's too much, too many vectors for a single plane. The vectors are dependent. And we don't-- a basis has to be independent vectors. You have to need them all. We don't need all three here.
So that's a dependent one. It can't go into a basis with a1 and a2 because the three vectors are dependent. Now let me make a different choice. So that one's dead. That did not do it. All right.
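As a quick numerical check (a sketch of my own using NumPy, not part of the lecture), the matrix whose columns are a1, a2, and a1 + a2 has rank 2, not 3, which confirms those three vectors are dependent:

```python
import numpy as np

# a1 and a2 from the lecture; a3 is the dependent choice a1 + a2 = (6, 3, 5)
a1 = np.array([2, 1, 5])
a2 = np.array([4, 2, 0])
a3 = a1 + a2

# Stack them as the columns of a 3 by 3 matrix.
# The rank counts the independent columns: here it is 2, not 3.
A = np.column_stack([a1, a2, a3])
print(np.linalg.matrix_rank(A))  # 2: dependent columns, not a basis
```

The rank tells us the three columns only span a plane, exactly as the picture shows.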
Let me take a3 equal to some other vector, not a combination of these, but headed off in some new direction. Well, I don't know what that new direction is. Maybe 1, 0, 0. What the heck? I believe-- I hope I'm right-- that 1, 0, 0 is not a combination here. I say 1, 0, 0 goes off. It's pretty short. Here's a3. Better a3 than that loser 6, 3, 5. 1, 0, 0 is a winner. These three vectors--
So now a1, a2, and let me add in a3, all three of them span a-- what do they span? What are all the combinations of a1, a2, a3? It's three dimensional? It's the whole three dimensional space. They span all of 3D, the whole three dimensional space. They're a basis for the whole three dimensional space. They're independent.
So let me-- you see that picture before I move it? a1, a2, a3 are independent. None of them is a combination of the others. They fill a three dimensional space. They are a basis for that three dimensional space. And that space, in this example, is the whole of R3.
So let me just write down on the next blackboard what I mean. Independent. So independent columns of a matrix A means the only solution to Av equals 0 is v equals 0. So if I have independent columns, then I haven't got any null space-- the null space of the matrix is just the 0 vector.
So let me write down that example again. A was the matrix with columns 2, 1, 5, 4, 2, 0, and 1, 0, 0. So I believe that matrix has independent columns. So its column space is the full three dimensional space. Its null space only contains the zero vector-- let me make clear that that's a vector, the zero vector. And now I'm ready to write down the idea of a basis.
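To see the test Av = 0 in action, here is a small sketch (the NumPy calls are my illustration, not the lecture's): the matrix with those three columns has full rank 3 and a nonzero determinant, so the only solution of Av = 0 is v = 0.

```python
import numpy as np

# Columns a1 = (2, 1, 5), a2 = (4, 2, 0), a3 = (1, 0, 0)
A = np.array([[2, 4, 1],
              [1, 2, 0],
              [5, 0, 0]])

# Independent columns: the rank equals the number of columns,
# so the null space contains only the zero vector.
print(np.linalg.matrix_rank(A))  # 3
print(np.linalg.det(A))          # about -10: nonzero, so Av = 0 forces v = 0
```

A nonzero determinant is one way to certify that a square matrix has independent columns; the rank check works for rectangular matrices too.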
So what is a basis for the space? A basis for a space, a subspace. Independent vectors. That's the key. Independent vectors that span the space, the subspace. Whatever it is.
By the way, if the column space is all of three dimensional space, as it is here, that's a subspace too. It's the whole space, but the whole space counts as a subspace of itself. And the 0 vector alone counts as the smallest possible subspace. So if we're in three dimensions, the subspaces start with just the 0 vector. Just one point. That's the smallest.
We have the whole three dimensional space. That's the biggest. And then we have all the lines through 0. Those are on the small side. We have all the planes through 0. Those are a bit bigger. And those dimensions are 0, 1, 2, 3. The possible dimension is told to us by how many basis vectors we need.
So let me look at that and then come to dimension. OK. So independent means that no combination of these vectors gives the 0 vector except to take 0 of that, 0 of that, and 0 of that. So those are a basis for the column space because they're independent and their combinations give the whole column space. OK.
And now I wanted to say something about dimension. OK. Dimension. It's a number. It's the number of basis vectors for the subspace. Oh! But you might say that the subspace has other bases, not just the one you happen to think of first. And I agree. Many different bases. For this example, all I need to get a basis, in this case for three dimensional space, is three independent vectors. Any three.
But the point is, the point about dimension is that I need exactly three. I can never get two vectors that span all of R3. And I can never get four vectors that are independent in R3. If I have fewer than the dimension number, I don't have enough. They don't span. If I have more than the dimension, they're dependent. They won't be independent. They can't be a basis.
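That counting claim can be sanity-checked numerically (again a NumPy sketch of my own, with a random example): four vectors in R3 stack into a 3 by 4 matrix whose rank is at most 3, and two vectors stack into a 3 by 2 matrix whose rank is at most 2.

```python
import numpy as np

# Any four vectors in R^3, stacked as columns, give a 3 by 4 matrix.
# Its rank is at most 3, so the four columns must be dependent.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 4))
print(np.linalg.matrix_rank(B))  # at most 3, never 4

# And two vectors span at most a 2-dimensional subspace: rank at most 2,
# so two vectors can never span all of R^3.
C = rng.standard_normal((3, 2))
print(np.linalg.matrix_rank(C))  # at most 2
```

The rank of a matrix can never exceed either its number of rows or its number of columns, which is exactly the "too few don't span, too many aren't independent" statement.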
Every basis has the same number. And that number is the dimension of the subspace. All right, let's just take an example, just with a picture. I'll stay in three dimensional space, but my subspace will just be a plane.
So here I'm in three dimensional space. Good. Now I have my subspace is a plane. So it goes through the origin, but it's only a plane. So I'm expecting that I could take a vector in the plane, and I could take another vector in the plane, and they could be independent. They are. They're different directions. I couldn't find a third independent vector in the plane. Every basis for the plane--
So here every basis for this plane contains two vectors. Always two. And that number two is the dimension of the plane. Well, I'm just saying the plane there is two dimensional. It's not the same as R2. That plane is a plane in R3. It's not ordinary two dimensional space. But its dimension is two because a basis takes two vectors. And if I didn't like the looks of this one, well, that's no problem. Let me go that way. That's just as good.
Those two vectors are independent. They span the plane. They're a basis for the plane. The plane is two dimensional. That's the set of key ideas. Independent. Span. Basis. Basis is fundamental. Basis is a bunch of vectors. And dimension is how many vectors.
OK. Those are key ideas in linear algebra. And you'll see them come into the big picture of linear algebra. Thank you.
1.1: Overview of Differential Equations Linear equations include dy/dt = y, dy/dt = –y, dy/dt = 2ty. The equation dy/dt = y*y is nonlinear.
1.2: The Calculus You Need The sum rule, product rule, and chain rule produce new derivatives from the derivatives of x^n, sin(x), and e^x. The Fundamental Theorem of Calculus says that the integral inverts the derivative.
1.4b: Response to Exponential Input, exp(s*t) With exponential input, e^(st), from outside and exponential growth, e^(at), from inside, the solution, y(t), is a combination of two exponentials.
1.4c: Response to Oscillating Input, cos(w*t) An oscillating input cos(ωt) produces an oscillating output with the same frequency ω (and a phase shift).
1.4d: Solution for Any Input, q(t) To solve a linear first order equation, multiply each input q(s) by its growth factor and integrate those outputs.
1.4e: Step Function and Delta Function A unit step function jumps from 0 to 1. Its slope is a delta function: zero everywhere except infinite at the jump.
1.5: Response to Complex Exponential, exp(i*w*t) = cos(w*t)+i*sin(w*t) For linear equations, the solution for f = cos(ωt) is the real part of the solution for f = e^(iωt). That complex solution has magnitude G (the gain).
1.6: Integrating Factor for a Constant Rate, a The integrating factor e^(-at) multiplies the differential equation, y' = ay + q, to give the derivative of e^(-at)y: ready for integration.
1.6b: Integrating Factor for a Varying Rate, a(t) The integral of a varying interest rate provides the exponent in the growing solution (the bank balance).
1.7: The Logistic Equation When –by^2 slows down growth and makes the equation nonlinear, the solution approaches a steady state y(∞) = a/b.
1.7c: The Stability and Instability of Steady States Steady state solutions can be stable or unstable – a simple test decides.
2.1: Second Order Equations For the oscillation equation with no damping and no forcing, all solutions share the same natural frequency.
2.1b: Forced Harmonic Motion With forcing f = cos(ωt), the particular solution is Y cos(ωt). But if the forcing frequency equals the natural frequency there is resonance.
2.3: Unforced Damped Motion With constant coefficients in a differential equation, the basic solutions are exponentials e^(st). The exponent s solves a simple equation such as As^2 + Bs + C = 0.
2.3c: Impulse Response and Step Response The impulse response g is the solution when the force is an impulse (a delta function). This also solves a null equation (no force) with a nonzero initial condition.
2.4: Exponential Response - Possible Resonance Resonance occurs when the natural frequency matches the forcing frequency — equal exponents from inside and outside.
2.4b: Second Order Equations With Damping A damped forced equation has a particular solution y = G cos(ωt – α). The damping ratio provides insight into the null solutions.
2.5: Electrical Networks: Voltages and Currents Current flowing around an RLC loop solves a linear equation with coefficients L (inductance), R (resistance), and 1/C (C = capacitance).
2.6: Methods of Undetermined Coefficients With constant coefficients and special forcing terms (powers of t, cosines/sines, exponentials), a particular solution has this same form.
2.6b: An Example of Method of Undetermined Coefficients This method is also successful for forces and solutions such as (at^2 + bt + c)e^(st): substitute into the equation to find a, b, c.
2.6c: Variations of Parameters Combine null solutions y1 and y2 with coefficients c1(t) and c2(t) to find a particular solution for any f(t).
2.7: Laplace Transform: First Order Equation Transform each term in the linear differential equation to create an algebra problem. You can then transform the algebra solution back to the ODE solution, y(t).
2.7b: Laplace Transform: Second Order Equation The second derivative transforms to s^2 Y and the algebra problem involves the transfer function 1/(As^2 + Bs + C).
3.1: Pictures of the Solutions The direction field for dy/dt = f(t,y) has an arrow with slope f at each point (t, y). Arrows with the same slope lie along an isocline.
3.2: Phase Plane Pictures: Source, Sink, Saddle Solutions to second order equations can approach infinity or zero. Saddle points contain a positive and also a negative exponent or eigenvalue.
3.2b: Phase Plane Pictures: Spirals and Centers Imaginary exponents with pure oscillation provide a “center” in the phase plane. The point (y, dy/dt) travels forever around an ellipse.
3.2c: Two First Order Equations: Stability A second order equation gives two first order equations for y and dy/dt . The matrix becomes a companion matrix.
3.3: Linearization at Critical Points A critical point is a constant solution Y to the differential equation y' = f(y). Near that Y, the sign of df/dy decides stability or instability.
3.3b: Linearization of y'=f(y,z) and z'=g(y,z) With two equations, a critical point has f(Y,Z) = 0 and g(Y,Z) = 0. Near those constant solutions, the two linearized equations use the 2 by 2 matrix of partial derivatives of f and g .
3.3c: Eigenvalues and Stability: 2 by 2 Matrix, A Two equations y' = Ay are stable (solutions approach zero) when the trace of A is negative and the determinant is positive.
5.1: The Column Space of a Matrix, A An m by n matrix A has n columns, each in R^m. Capturing all combinations Av of these columns gives the column space – a subspace of R^m.
5.4: Independence, Basis, and Dimension Vectors v1 to vd are a basis for a subspace if their combinations span the whole subspace and are independent: no basis vector is a combination of the others. Dimension d = number of basis vectors.
5.5: The Big Picture of Linear Algebra A matrix produces four subspaces – column space, row space (same dimension), the space of vectors perpendicular to all rows (the nullspace), and the space of vectors perpendicular to all columns.
5.6: Graphs A graph has n nodes connected by m edges (other edges can be missing). This is a useful model for the Internet, the brain, pipeline systems, and much more.
6.1: Eigenvalues and Eigenvectors The eigenvectors x remain in the same direction when multiplied by the matrix (Ax = λx). An n x n matrix has n eigenvalues.
6.2: Diagonalizing a Matrix A matrix can be diagonalized if it has n independent eigenvectors. The diagonal matrix Λ is the eigenvalue matrix.
6.3: Solving Linear Systems dy/dt = Ay contains solutions y = e^(λt) x where λ and x are an eigenvalue / eigenvector pair for A.
6.4: The Matrix Exponential, exp(A*t) The shortest form of the solution uses the matrix exponential: y = e^(At) y(0). The matrix e^(At) has eigenvalues e^(λt) and the eigenvectors of A.
6.4b: Similar Matrices, A and B=M^(-1)*A*M A and B are “similar” if B = M^(-1)AM for some matrix M. B then has the same eigenvalues as A.
6.5: Symmetric Matrices, Real Eigenvalues, Orthogonal Eigenvectors Symmetric matrices have n perpendicular eigenvectors and n real eigenvalues.
7.2: Positive Definite Matrices, S=A'*A A positive definite matrix S has positive eigenvalues, positive pivots, positive determinants, and positive energy v^T S v for every nonzero vector v. S = A^T A is always positive definite if A has independent columns.
7.2b: Singular Value Decomposition, SVD The SVD factors each matrix A into an orthogonal matrix U times a diagonal matrix Σ (the singular values) times another orthogonal matrix V^T: rotation times stretch times rotation.
7.3: Boundary Conditions Replace Initial Conditions A second order equation can change its initial conditions on y(0) and dy/dt(0) to boundary conditions on y(0) and y(1) .
8.1: Fourier Series A Fourier series separates a periodic function F(x) into an (infinite) combination of all basis functions cos(nx) and sin(nx).
8.1b: Examples of Fourier Series Even functions (F(–x) = F(x)) use only cosines and odd functions use only sines. The coefficients a_n and b_n come from integrals of F(x)cos(nx) and F(x)sin(nx).
8.1c: Fourier Series Solution of Laplace's Equation Inside a circle, the solution u(r, θ) combines r^n cos(nθ) and r^n sin(nθ). The boundary solution combines all entries in a Fourier series to match the boundary conditions.
8.3: Heat Equation The heat equation ∂u/∂t = ∂²u/∂x² starts from a temperature distribution u at t = 0 and follows it for t > 0 as it quickly becomes smooth.