From the series: Differential Equations and Linear Algebra
Gilbert Strang, Massachusetts Institute of Technology (MIT)
With constant coefficients and special forcing terms (powers of t, cosines/sines, exponentials), a particular solution has this same form.
OK. So this is about the world's fastest way to solve differential equations. And you'll like that method. First we have to see which equations we'll be able to solve. Well, linear, with constant coefficients. I made all the coefficients 1, but it's no problem to change those to A, B, C. So that's the nice left-hand side.
And on the right-hand side, we also need something nice. We want a nice function. And I'll tell you which are the nice functions. So I can say right away that exponentials are nice functions, of course. They are always at the center of this course.
So for example, y double prime plus y prime plus y equal e to the st. That would be a nice function. OK. And the key is, we're looking for a particular solution, because we know how to find null solutions. We're looking for a particular solution to this equation. One function, some function that solves this equation with right-hand side e to the st.
And the point is, we know what to look for. We just have some coefficient to find. And we'll find that by substituting in the equation. Now, do you remember what we look for when the right-hand side is e to the st? We look for y equals some constant times e to the st, right?
When f of t-- maybe I'll put the equal sign down there. If f of t is e to the st, then I just look for a multiple of it. That's one coefficient to be determined by substituting this into the equation. Do you remember the result? So this is our best example.
When I put this in the equation, the derivative brings an s. The second derivative brings another s. So I get s squared plus s plus 1, times Y e to the st, equal to e to the st. We've done that before. Here we see it as a case with undetermined coefficient Y. And by plugging it in, I've discovered that Y is 1 over s squared plus s plus 1.
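That exponential response formula is easy to check by machine. Here is a small sketch of my own (not from the lecture): pick a sample exponent s, form y = e to the st over s squared plus s plus 1, and confirm numerically, with centered finite differences, that y double prime plus y prime plus y comes back as e to the st.

```python
import math

def check_exponential_response(s=2.0, h=1e-5):
    """Verify y = e^(st)/(s^2+s+1) solves y'' + y' + y = e^(st) at sample points."""
    def y(t):
        return math.exp(s * t) / (s**2 + s + 1)

    max_err = 0.0
    for t in (0.0, 0.3, 0.7, 1.0):
        y1 = (y(t + h) - y(t - h)) / (2 * h)          # centered first derivative
        y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2  # centered second derivative
        lhs = y2 + y1 + y(t)
        rhs = math.exp(s * t)
        max_err = max(max_err, abs(lhs - rhs))
    return max_err

error = check_exponential_response()
```

The residual is limited only by finite-difference accuracy, so it comes out near zero for any s that keeps s squared plus s plus 1 away from zero.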
So that's a nice function then. e to the st is a nice function. What are the other nice functions? So now, let me move to the next board and ask, what other right-hand sides could we solve? So I'll keep this left-hand side; e to the st was example number 1.
What about t? What about t? A polynomial. Well, that only has one term. So what would be a particular solution to that equation? So I really have to say, what is the-- try y particular equals-- now, if I see a t there, then I'm going to look for a t in y. And I'll also look for a constant. So a plus bt would be the correct form to look for.
Let me just show you how that works. So this now has two undetermined coefficients. And we determine them by putting that into the equation and making it right. So try yp is a plus bt in this equation. OK, the second derivative of a plus bt is 0. The first derivative of that is b. So I get a b from that. And y itself is a plus bt. And that's supposed to give t.
You see, I plugged it in. I got to this equation. Now I can determine a and b by matching t. So b has to be 1. We get b equal to 1, so the t matches the t. But if b is 1, I need a to be minus 1 to cancel that b. So a is minus 1. And my answer is minus 1 plus 1t: t minus 1.
And if I put that into the equation, it will be correct. So I have found a particular solution, and that's my goal, because I know how to find null solutions. And then together, that's the complete solution. So we've learned what to try with polynomials. With a power of t, we want to include that power and all lower powers, all the way down through the constants. OK.
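The polynomial example is simple enough to confirm directly. This little check (my own illustration, not part of the lecture) plugs y particular equals t minus 1 into the left-hand side and compares against the forcing term t:

```python
def lhs(t):
    """Left side y'' + y' + y evaluated for the particular solution y_p = t - 1."""
    y = t - 1          # the particular solution found above
    yprime = 1.0       # first derivative of t - 1
    ydoubleprime = 0.0 # second derivative of t - 1
    return ydoubleprime + yprime + y

# the residual lhs(t) - t should vanish identically
checks = [abs(lhs(t) - t) for t in (0.0, 1.0, 2.5, -3.0)]
```

Since the derivatives of a linear polynomial are exact, the residual is zero to machine precision at every sample point.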
With exponentials, we just have to include the exponential. What next? How about sine t or cosine t? Say sine t. So that case works. Now we want to try y double prime plus y prime plus y equals, say, sine t. OK. What form do we assume for that?
Well, I can tell you quickly. We assume a sine t in it. And we also need to assume a cosine t. The rule is that the things we try-- so I'll try y. y particular is what we're always finding. Some c1 cos t, and some c2 sine of t. That will do it.
In fact, if I plug that in, and I match the two sides, I determine c1 and c2, I'm golden. Let me just comment on that, rather than doing out every step. Again, the steps are just substitute that in and make the equation correct by choosing a good c1 and c2.
I just noticed that, you remember from Euler's great formula that the cosine is a combination of e to the it and e to the minus it. So in a way, we're really using the original example. We're using this example, e to the st, with two s's, e to the it, and e to the minus it. So we have two exponentials in a cosine. So I'm not surprised that there are two constants to find.
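Carrying out the matching that the lecture skips over: substituting c1 cos t plus c2 sin t into the left side gives c2 cos t minus c1 sin t, and matching against sin t forces c2 = 0 and c1 = minus 1. This sketch (my own worked example, with the coefficients derived by that hand matching) records the result and verifies it numerically:

```python
import math

# Coefficients from matching c2*cos t - c1*sin t against sin t:
#   cos t: c2 = 0,   sin t: -c1 = 1
c1 = -1.0
c2 = 0.0  # so y_p = -cos t

def residual(t, h=1e-5):
    """Finite-difference check that y_p solves y'' + y' + y = sin t."""
    def y(u):
        return c1 * math.cos(u) + c2 * math.sin(u)
    y1 = (y(t + h) - y(t - h)) / (2 * h)
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    return abs(y2 + y1 + y(t) - math.sin(t))

worst = max(residual(t) for t in (0.0, 1.0, 2.0, 3.0))
```

Two exponentials hiding in the sine, two constants to find, exactly as the Euler's-formula remark predicts.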
And now, finally, I have to say, is this the end of nice functions? So nice functions include exponentials, polynomials. These are really exponentials, complex exponentials. And no, there's one more possibility that we can deal with in this simple way.
And that possibility is a product of-- so now I'll show you what to do if it was t times sine t. Suppose we have the right-hand side, the f of t, the forcing term, is t times sine t. What is the form to assume? That's really all you have to know: what form to assume. OK.
Now, that t-- so we have here a product, a polynomial times a sine or cosine or exponential. I could have done t e to the st there. But what do I have to do when the t shows up there? Then I have to try something more with that t in there. So now I have a product of polynomial times sine, cosine, or exponential.
So what I try is at plus-- or rather, a plus bt. I try a product. Times cos t and c plus dt times sine t. That's about as bad a case as we're going to see. But it's still quite pleasant. So what do I see there?
Because of the t here, I needed to assume polynomials up to that same degree 1. So a plus bt. Had to do that, just the way I did up there when there was a t. But now it multiplies sine t. So I have to allow sine t and also cosine t.
The pattern is, really, we've sort of completed the list of nice functions. Exponentials, polynomials, and polynomial times exponential. That's really what a nice function is. A polynomial times an exponential. Or we could have a sum of those guys. We could have two or three polynomial times exponential, like there and another one. And that's still a nice function.
And what's the real key to nice functions? The key point is, why is this such a good bunch of functions? Because, if I take its derivative, I get a function of the same form. If I take the derivative of that right-hand side, and I use the product rule, you see I'll get this times the derivative of that. So I'll have something with a sine in there. And I get this times the derivative of that, which is just a b.
So again, it fits the same form, polynomial times cosine, polynomial times sine. So here I have a case where I have actually four coefficients. But they'll all fall out when you plug that into the equation. You just match terms and you're golden. So it really is a straightforward method. Straightforward.
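For the curious, the four-coefficient matching for t times sine t can be worked by hand (my own computation; the lecture only sets up the form). Matching the coefficients of cos t, t cos t, sin t, and t sin t gives a = 2, b = minus 1, c = 1, d = 0, and the sketch below double-checks that result numerically:

```python
import math

# From matching coefficients in y'' + y' + y = t*sin t with
# y_p = (a + b t) cos t + (c + d t) sin t  (hand computation, not from the lecture):
a, b, c, d = 2.0, -1.0, 1.0, 0.0   # y_p = (2 - t) cos t + sin t

def y(t):
    return (a + b * t) * math.cos(t) + (c + d * t) * math.sin(t)

def residual(t, h=1e-5):
    """Finite-difference check that y_p solves y'' + y' + y = t*sin t."""
    y1 = (y(t + h) - y(t - h)) / (2 * h)
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    return abs(y2 + y1 + y(t) - t * math.sin(t))

worst_tsin = max(residual(t) for t in (0.0, 0.5, 1.5, 3.0))
```

All four coefficients indeed fall out of the matching, and the residual confirms the form was the right one to assume.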
So the key about nice functions is-- and they're nice for Laplace transforms, they're nice at every step. But it's the same good functions that we keep discovering as our best examples. The key about nice functions is that the-- that's a form of a nice function because its derivative has the same form. The derivative of that function fits that pattern again. And then the second derivative fits. All the derivatives fit. So when we put them in the equation, everything fits.
And always in the last minute of a lecture, there's a special case. There's a special case. And let's remember what that is. So special case when we have to change the form. And why would we have to do that?
Let me do y double prime minus y, say, is e to the t. What is special about that? What's special is that this right-hand side, this f function, solves this equation. If I try e to the t, it will fail. Try y equals some Y e to the t. Do you see how that's going to fail?
If I put that into the equation, the second derivative will cancel the y and I'll have 0 on the left side. Failure, because that's the case called resonance. This is a case of resonance, when the form of the right-hand side is a null solution at the same time. It can't be a particular solution. It won't work because it's also a null solution.
And do you remember how to escape resonance? How to deal with resonance? What happens with resonance? The solution is a little more complicated, but it fits everything here. We have to allow a t. We have to allow a t.
So instead of just this multiple Y e to the t, I'm going to assume Y t e to the t. I need a t in there. That would do it.
When there's resonance, take the form you would normally assume and multiply by that extra factor t. Then, when I substitute that into the differential equation, I'll find Y quite safely. So I do that. So that's the resonant case, the sort of special situation when e to the t solved the null equation.
So we need something new. And the way we get the right new thing is to have a t in there. So when I plug that in, I take the second derivative of that, subtract off that itself, match e to the t. And that will tell me the number Y.
Perhaps it's 1/2 or 1. I won't do it. Maybe I'll leave that as an exercise. Put that into the equation and determine the number capital Y. OK. Let me pull it together. So we have certain nice functions, which we're going to see again, because they're nice. Every method works well for these functions.
And these functions are exponentials, polynomials, or polynomials times exponentials. And within exponentials, I include sine and cosine. And for those functions, we know the form. We plug it into the equation. We make it match. We choose these undetermined coefficients. We determine them so that they solve the equation. And then we've got a particular solution.
So these are the best equations for finding particular solutions. Just by knowing the right form and finding the constants, the particular solution comes right out of the equation. OK. All good, thanks.
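The resonance exercise above can be checked by machine as well. This is my own sketch, assuming the trial form Y t e to the t from the lecture: substitution gives y double prime minus y equals 2Y e to the t, so matching e to the t forces Y = 1/2, and finite differences confirm it.

```python
import math

# Working the resonance exercise: for y'' - y = e^t, try y = Y * t * e^t.
# Substituting gives y'' - y = 2Y e^t, so Y = 1/2 (hand computation as a check).
Y = 0.5

def residual(t, h=1e-5):
    """Finite-difference check that y = Y*t*e^t solves y'' - y = e^t."""
    def y(u):
        return Y * u * math.exp(u)
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    return abs(y2 - y(t) - math.exp(t))

worst_res = max(residual(t) for t in (0.0, 0.5, 1.0))
```

Without the extra factor t, the trial Y e to the t would make the left side identically zero, and no Y could match the forcing, which is exactly the failure the lecture describes.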
1.1: Overview of Differential Equations Linear equations include dy/dt = y, dy/dt = –y, dy/dt = 2ty. The equation dy/dt = y*y is nonlinear.
1.2: The Calculus You Need The sum rule, product rule, and chain rule produce new derivatives from the derivatives of x^n, sin(x) and e^x. The Fundamental Theorem of Calculus says that the integral inverts the derivative.
1.4b: Response to Exponential Input, exp(s*t) With exponential input, e^(st), from outside and exponential growth, e^(at), from inside, the solution, y(t), is a combination of two exponentials.
1.4c: Response to Oscillating Input, cos(w*t) An oscillating input cos(ωt) produces an oscillating output with the same frequency ω (and a phase shift).
1.4d: Solution for Any Input, q(t) To solve a linear first order equation, multiply each input q(s) by its growth factor and integrate those outputs.
1.4e: Step Function and Delta Function A unit step function jumps from 0 to 1. Its slope is a delta function: zero everywhere except infinite at the jump.
1.5: Response to Complex Exponential, exp(i*w*t) = cos(w*t)+i*sin(w*t) For linear equations, the solution for f = cos(ωt) is the real part of the solution for f = e^(iωt). That complex solution has magnitude G (the gain).
1.6: Integrating Factor for a Constant Rate, a The integrating factor e^(-at) multiplies the differential equation, y' = ay + q, to give the derivative of e^(-at)y: ready for integration.
1.6b: Integrating Factor for a Varying Rate, a(t) The integral of a varying interest rate provides the exponent in the growing solution (the bank balance).
1.7: The Logistic Equation When –by^2 slows down growth and makes the equation nonlinear, the solution approaches a steady state y(∞) = a/b.
1.7c: The Stability and Instability of Steady States Steady state solutions can be stable or unstable – a simple test decides.
2.1: Second Order Equations For the oscillation equation with no damping and no forcing, all solutions share the same natural frequency.
2.1b: Forced Harmonic Motion With forcing f = cos(ωt), the particular solution is Y*cos(ωt). But if the forcing frequency equals the natural frequency there is resonance.
2.3: Unforced Damped Motion With constant coefficients in a differential equation, the basic solutions are exponentials e^(st). The exponent s solves a simple equation such as As^2 + Bs + C = 0.
2.3c: Impulse Response and Step Response The impulse response g is the solution when the force is an impulse (a delta function). This also solves a null equation (no force) with a nonzero initial condition.
2.4: Exponential Response - Possible Resonance Resonance occurs when the natural frequency matches the forcing frequency — equal exponents from inside and outside.
2.4b: Second Order Equations With Damping A damped forced equation has a particular solution y = G cos(ωt – α). The damping ratio provides insight into the null solutions.
2.5: Electrical Networks: Voltages and Currents Current flowing around an RLC loop solves a linear equation with coefficients L (inductance), R (resistance), and 1/C (C = capacitance).
2.6: Methods of Undetermined Coefficients With constant coefficients and special forcing terms (powers of t , cosines/sines, exponentials), a particular solution has this same form.
2.6b: An Example of Method of Undetermined Coefficients This method is also successful for forces and solutions such as (at^2 + bt + c)e^(st): substitute into the equation to find a, b, c.
2.6c: Variations of Parameters Combine null solutions y1 and y2 with coefficients c1(t) and c2(t) to find a particular solution for any f(t).
2.7: Laplace Transform: First Order Equation Transform each term in the linear differential equation to create an algebra problem. You can then transform the algebra solution back to the ODE solution, y(t) .
2.7b: Laplace Transform: Second Order Equation The second derivative transforms to s^2 Y and the algebra problem involves the transfer function 1/(As^2 + Bs + C).
3.1: Pictures of the Solutions The direction field for dy/dt = f(t,y) has an arrow with slope f at each point (t, y). Arrows with the same slope lie along an isocline.
3.2: Phase Plane Pictures: Source, Sink Saddle Solutions to second order equations can approach infinity or zero. Saddle points contain a positive and also a negative exponent or eigenvalue.
3.2b: Phase Plane Pictures: Spirals and Centers Imaginary exponents with pure oscillation provide a “center” in the phase plane. The point (y, dy/dt) travels forever around an ellipse.
3.2c: Two First Order Equations: Stability A second order equation gives two first order equations for y and dy/dt . The matrix becomes a companion matrix.
3.3: Linearization at Critical Points A critical point is a constant solution Y to the differential equation y’ = f(y) . Near that Y , the sign of df/dy decides stability or instability.
3.3b: Linearization of y'=f(y,z) and z'=g(y,z) With two equations, a critical point has f(Y,Z) = 0 and g(Y,Z) = 0. Near those constant solutions, the two linearized equations use the 2 by 2 matrix of partial derivatives of f and g .
3.3c: Eigenvalues and Stability: 2 by 2 Matrix, A Two equations y’ = Ay are stable (solutions approach zero) when the trace of A is negative and the determinant is positive.
5.1: The Column Space of a Matrix, A An m by n matrix A has n columns, each in R^m. Capturing all combinations Av of these columns gives the column space – a subspace of R^m.
5.4: Independence, Basis, and Dimension Vectors v 1 to v d are a basis for a subspace if their combinations span the whole subspace and are independent: no basis vector is a combination of the others. Dimension d = number of basis vectors.
5.5: The Big Picture of Linear Algebra A matrix produces four subspaces – column space, row space (same dimension), the space of vectors perpendicular to all rows (the nullspace), and the space of vectors perpendicular to all columns.
5.6: Graphs A graph has n nodes connected by m edges (other edges can be missing). This is a useful model for the Internet, the brain, pipeline systems, and much more.
6.1: Eigenvalues and Eigenvectors The eigenvectors x remain in the same direction when multiplied by the matrix (Ax = λx). An n by n matrix has n eigenvalues.
6.2: Diagonalizing a Matrix A matrix can be diagonalized if it has n independent eigenvectors. The diagonal matrix Λ is the eigenvalue matrix.
6.3: Solving Linear Systems dy/dt = Ay contains solutions y = e^(λt)x where λ and x are an eigenvalue / eigenvector pair for A.
6.4: The Matrix Exponential, exp(A*t) The shortest form of the solution uses the matrix exponential y = e^(At)y(0). The matrix e^(At) has eigenvalues e^(λt) and the eigenvectors of A.
6.4b: Similar Matrices, A and B=M^(-1)*A*M A and B are “similar” if B = M^(-1)AM for some matrix M. B then has the same eigenvalues as A.
6.5: Symmetric Matrices, Real Eigenvalues, Orthogonal Eigenvectors Symmetric matrices have n perpendicular eigenvectors and n real eigenvalues.
7.2: Positive Definite Matrices, S=A'*A A positive definite matrix S has positive eigenvalues, positive pivots, positive determinants, and positive energy v^T S v for every vector v. S = A^T A is always positive definite if A has independent columns.
7.2b: Singular Value Decomposition, SVD The SVD factors each matrix A into an orthogonal matrix U times a diagonal matrix Σ (the singular values) times another orthogonal matrix V^T: rotation times stretch times rotation.
7.3: Boundary Conditions Replace Initial Conditions A second order equation can change its initial conditions on y(0) and dy/dt(0) to boundary conditions on y(0) and y(1) .
8.1: Fourier Series A Fourier series separates a periodic function F(x) into a combination (infinite) of all basis functions cos(nx) and sin(nx) .
8.1b: Examples of Fourier Series Even functions use only cosines (F(–x) = F(x) ) and odd functions use only sines. The coefficients an and bn come from integrals of F(x) cos(nx ) and F(x) sin(nx ).
8.1c: Fourier Series Solution of Laplace's Equation Inside a circle, the solution u(r, θ) combines r^n cos(nθ) and r^n sin(nθ). The solution combines all entries in a Fourier series to match the boundary conditions.
8.3: Heat Equation The heat equation ∂u/∂t = ∂²u/∂x² starts from a temperature distribution u at t = 0 and follows it for t > 0 as it quickly becomes smooth.