From the series: Differential Equations and Linear Algebra
Gilbert Strang, Massachusetts Institute of Technology (MIT)
The second derivative transforms to s^2 Y, and the algebra problem involves the transfer function 1/(As^2 + Bs + C).
OK, this is my second video on the Laplace transform, and this one will be about solving second order equations.
So let me remember the plan. So there's our second order equation. And I'm taking this example first, with the delta function on the right-hand side. You remember, that's a key example. And actually, we have a special letter for the solution. This is an impulse. And the solution is the impulse response. And I use a little g. So I should have turned this y into a g. So that g is the solution, starting from 0 initial conditions, with a delta.
So what's the plan? We want to take the transform of every term. We have to check: what is the Laplace transform of the delta function? You remember the definition of the transform. It's this integral. You take your function-- whatever it is, here delta-- multiply by e to the minus st, and integrate, t equals 0 to infinity.
Well, easy to integrate with a delta function there. It's 0, except where the impulse is, at t equals 0. And at t equals 0, this is 1. So the answer is 1. That's the nice Laplace transform of the impulse. And now I want the transform of the impulse response.
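As a quick numerical check (my addition, not part of the lecture), we can approximate the delta function by a tall, narrow pulse of area 1 and watch its Laplace transform approach 1 as the pulse narrows. The function name and the choice s = 2 are just illustrative.

```python
# Approximate delta(t) by a pulse of height 1/eps on [0, eps] (unit area),
# and compute its Laplace transform: the integral of e^(-s t) times the pulse.
import numpy as np

def laplace_of_pulse(s, eps, n=100_000):
    """Trapezoid-rule Laplace transform of a unit-area pulse on [0, eps]."""
    t = np.linspace(0.0, eps, n)
    vals = np.exp(-s * t) * (1.0 / eps)   # pulse height 1/eps, so area = 1
    dt = t[1] - t[0]
    return float(np.sum(vals[:-1] + vals[1:]) * 0.5 * dt)

for eps in (1.0, 0.1, 0.001):
    print(eps, laplace_of_pulse(s=2.0, eps=eps))
```

As eps shrinks, the printed values climb toward 1, matching the lecture's answer that the transform of the impulse is 1.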
So the impulse response comes from this equation. So now, transform every term. The transform of g will be called capital G. And the transform of the delta function is 1. And the derivative, you remember, has an extra factor s. The transform of the derivative, from the first Laplace transform video, is s times G of s (using the zero initial conditions).
And the second derivative, another s. So s squared times G of s. We're not surprised to see the very familiar quadratic, whose roots are the two exponents s1 and s2, showing up here. We've seen this every time, because we have constant coefficients. We always see this quantity. And now we see that, look, the transform, capital G: I divide by that quadratic. It's exactly the transfer function. So that connects the idea of the transfer function to the Laplace transform of the impulse response, because I have this in the denominator.
OK, so I want now-- I've taken the transform of the equation. I've got the transform of little g, the impulse response. So now I'm ready to find g, the impulse response, by inverting the Laplace transform. How do I find the function with that Laplace transform?
Right now, it's 1 over a quadratic. The whole idea of partial fractions is to split this G of s-- that's the final step. Well, you remember that this is the polynomial that has the two roots, s1 and s2. So I'm going to write that as 1 over s minus s1 times s minus s2.
Those are the two roots from the quadratic formula-- the two poles, we could say, poles of G of s. And now I want to use partial fractions. So I want to separate this into two fractions. And it turns out that they are 1 over s minus s1, minus 1 over s minus s2. And it turns out there's a factor, 1 over s1 minus s2, to make it correct. You could check: when you put those two fractions over a common denominator, the numerator comes out s1 minus s2, and that factor cancels it. OK.
So now I have two simple poles. And I could write here what g of t is. Remember, the function with that transform is just e to the s1 t. The most important of all transforms is the transform of the exponential: it's that simple pole.
So now I invert the Laplace transform. This gives me an e to the s1 t, minus the function with that other transform, e to the s2 t. And I still have this constant, 1 over s1 minus s2.
I've re-discovered the impulse response. It's a solution to my equation with impulse force. And it's that particular function that plays such an important part in the whole subject of constant coefficient differential equations. Because you see it's the critical thing here. Here's the critical transfer function, and here is the inverse Laplace transform. The Laplace transform of that is that. Good. That's that example.
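The partial-fraction step and the inversion above can be checked symbolically. This SymPy sketch is my own addition, not part of the lecture; it verifies that the split with the factor 1/(s1 - s2) reproduces G(s), and that g(t) = (e^(s1 t) - e^(s2 t))/(s1 - s2) really has that transform.

```python
# Verify the partial fractions and the impulse response with SymPy.
from sympy import symbols, exp, together, simplify, laplace_transform

t = symbols('t', positive=True)
s, s1, s2 = symbols('s s1 s2')

# Transfer function with poles at the two roots s1, s2:
G = 1 / ((s - s1) * (s - s2))

# The split, including the factor 1/(s1 - s2) that makes it correct:
split = (1 / (s1 - s2)) * (1 / (s - s1) - 1 / (s - s2))
assert simplify(together(split) - G) == 0

# Invert term by term: each simple pole is the transform of an exponential.
g = (exp(s1 * t) - exp(s2 * t)) / (s1 - s2)
assert simplify(laplace_transform(g, t, s, noconds=True) - G) == 0
```

Both assertions pass: recombining the split gives back G(s), and the transform of g(t) is exactly 1 over the quadratic.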
And now I just want to take another function than delta. OK. I could go all the way and take any f of t. But let me stay with examples. So now I'm going to do y double prime, plus b y prime, plus c y. Here's maybe the most important example of all. The best right-hand side we could do is cosine of omega t: an oscillating problem, with an oscillating force, at a frequency different from the natural frequency, and we also have damping there.
So this is the standard spring-mass-dashpot problem, or RLC circuit problem-- certainly RLC circuits, highly important. And you might remember the solution gets a little messy. It's highly important, but-- so I'll carry it almost through to the last step. I probably won't take that final step. What I want you to see is another example of the inverse Laplace transform.
So what do we need? We need the Laplace transform of that right-hand side. I plan to take the Laplace transform of every term. So s squared plus bs plus c will multiply the transform Y. I'm transforming everything. And over here I have to put the transform of cosine omega t. OK. So how will I get that? And of course, there might be a sine omega t in there too. I would really like to get them both at once.
So let me put down them both at once. The cosine and the sine are the real and imaginary parts of e to the i omega t, right? Euler's great formula. e to the i omega t is cosine omega t, the real part, plus i sine omega t, the imaginary part.
But now I know the transform of e to the a t-- here, e to the i omega t. So that transforms to-- I want the real and the imaginary parts of 1 over, you remember what it is: it's just that simple pole again, s minus the exponent, i omega.
So I'm going to get the cosine and the sine at the same time, from one calculation, finding the real and imaginary parts of this, 1 over s minus i omega. How do you deal with a pole like that when you want the real and imaginary parts? You're happy to get a real number in the denominator, and see the real and imaginary parts up above. I don't like it when i is down there.
So what I'm going to do, to take real and imaginary parts, is fix up that s minus i omega. This is the key trick with complex numbers. It comes up enough, so it's good to learn. I multiply by the conjugate: s plus i omega over s plus i omega. So I multiplied by 1.
But you see what's happened now: I've got what I want. s plus i omega is now up in the numerator, where I can see it. And what do I have down below? s minus i omega times s plus i omega, the very important quantity: s squared, minus i omega s, plus i omega s, minus i squared omega squared-- and minus i squared is plus 1, so that's plus omega squared. s squared plus omega squared. OK. We've done it.
I've transformed the cosine and the sine into s over this, and omega over this. So two at once, two transforms. And of course, as always, we're able to do, and recognize, and work with the transforms of a special group of nice functions, exponentials above all, sines and cosines coming from exponentials, delta functions. It's a short list. Those are the ones we can do, and fortunately those are the ones that we need to do.
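The two transforms just found can be confirmed directly. This check is my addition, not part of the lecture: SymPy computes the Laplace transforms of cos(ωt) and sin(ωt) and we compare against s over s² + ω² and omega over s² + ω².

```python
# Confirm L{cos(wt)} = s/(s^2 + w^2) and L{sin(wt)} = w/(s^2 + w^2).
from sympy import symbols, cos, sin, laplace_transform, simplify

t, s, w = symbols('t s omega', positive=True)

Lcos = laplace_transform(cos(w * t), t, s, noconds=True)
Lsin = laplace_transform(sin(w * t), t, s, noconds=True)

assert simplify(Lcos - s / (s**2 + w**2)) == 0
assert simplify(Lsin - w / (s**2 + w**2)) == 0
```

Two at once, exactly as the real and imaginary parts of 1/(s - iω) predict.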
OK, so I'm now ready to put in the right-hand side-- what did that turn out to be? From the cosine-- I'm not doing the sine now, just the cosine, the real part-- it turned out to be s over s squared plus omega squared. Are you with me? Left-hand side, all normal. I'm starting from 0 initial conditions, otherwise I would see the initial values in here. But I don't, because they're 0.
And over here, I've got the transform of the right-hand side. OK? Then I just bring that down there. So finally, I know Y of s: it's s, over this all-important quadratic times the s squared plus omega squared.
OK. Well, you see it did get harder. We have a quadratic there, and a quadratic from the transform-- two quadratics. We have a fourth-degree polynomial down there. Partial fractions will work-- partial fractions can simplify this for any polynomials-- but the algebra gets worse quickly when you get up to degree four. But actually, it could be done. And I don't plan to do it. To me, this would eventually give the solution to this example. But we have other ways to get that solution. And I believe, for me at least, the other ways are simpler.
I know that the solution is a combination of cos omega t and sine omega t. And this is the particular solution I'm talking about. And I can figure out what those combinations are, because I know the right form. Here, I have to deal with partial fractions and degree four, down there. I'm going to chicken out on that one. I won't completely chicken out. I'll say what the pieces look like. But I won't figure out all the numbers. OK.
So all this, I see as some constant over-- well, this factors into s minus s1 times s minus s2. Those are the two roots, as we saw above. So I have a linear factor, a linear factor, and a quadratic. And partial fractions say that I can separate out the first linear, the second linear, and the quadratic-- which I could factor too, into that times that, but it brings in imaginary numbers. So over the quadratic I'll put a cs and a d. OK. Four numbers to be determined-- or rather, not to be determined, because I'm going to stop there.
What I've discovered is: this part would be a null solution, because it involves e to the s1 t and e to the s2 t. And this part, when I find the inverse transform, must be a combination of cos omega t and sine omega t. So that's the combination I'll find when I see what it's the transform of.
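To make the degree-4 partial fractions concrete (my own sketch, not worked in the lecture), here is the decomposition for one hypothetical choice of numbers: b = 3, c = 2 (so the roots are s1 = -1, s2 = -2) and omega = 1.

```python
# Partial fractions of Y(s) = s / ((s^2 + 3s + 2)(s^2 + 1)):
# two simple poles (the null-solution exponentials) plus a (cs + d)/(s^2 + 1)
# piece (the cos/sin combination).
from sympy import symbols, apart, together, simplify

s = symbols('s')

Y = s / ((s**2 + 3*s + 2) * (s**2 + 1))   # s over (quadratic)(s^2 + omega^2)

pf = apart(Y, s)
print(pf)   # shape: a/(s + 1) + b/(s + 2) + (c*s + d)/(s**2 + 1)

# Recombining the pieces gives back Y(s):
assert simplify(together(pf) - Y) == 0
```

The two simple-pole terms invert to e^(-t) and e^(-2t), and the quadratic term inverts to the cos t and sin t combination, just as the lecture describes.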
1.1: Overview of Differential Equations Linear equations include dy/dt = y, dy/dt = –y, dy/dt = 2ty. The equation dy/dt = y*y is nonlinear.
1.2: The Calculus You Need The sum rule, product rule, and chain rule produce new derivatives from the derivatives of x^n, sin(x), and e^x. The Fundamental Theorem of Calculus says that the integral inverts the derivative.
1.4b: Response to Exponential Input, exp(s*t) With exponential input e^(st) from outside and exponential growth e^(at) from inside, the solution y(t) is a combination of two exponentials.
1.4c: Response to Oscillating Input, cos(w*t) An oscillating input cos(ωt) produces an oscillating output with the same frequency ω (and a phase shift).
1.4d: Solution for Any Input, q(t) To solve a linear first order equation, multiply each input q(s) by its growth factor and integrate those outputs.
1.4e: Step Function and Delta Function A unit step function jumps from 0 to 1. Its slope is a delta function: zero everywhere except infinite at the jump.
1.5: Response to Complex Exponential, exp(i*w*t) = cos(w*t)+i*sin(w*t) For linear equations, the solution for f = cos(ωt) is the real part of the solution for f = e^(iωt). That complex solution has magnitude G (the gain).
1.6: Integrating Factor for a Constant Rate, a The integrating factor e^(–at) multiplies the differential equation y' = ay + q to give the derivative of e^(–at)y: ready for integration.
1.6b: Integrating Factor for a Varying Rate, a(t) The integral of a varying interest rate provides the exponent in the growing solution (the bank balance).
1.7: The Logistic Equation When –by^2 slows down growth and makes the equation nonlinear, the solution approaches a steady state y(∞) = a/b.
1.7c: The Stability and Instability of Steady States Steady state solutions can be stable or unstable – a simple test decides.
2.1: Second Order Equations For the oscillation equation with no damping and no forcing, all solutions share the same natural frequency.
2.1b: Forced Harmonic Motion With forcing f = cos(ωt), the particular solution is Y*cos(ωt). But if the forcing frequency equals the natural frequency there is resonance.
2.3: Unforced Damped Motion With constant coefficients in a differential equation, the basic solutions are exponentials e^(st). The exponent s solves a simple equation such as As^2 + Bs + C = 0.
2.3c: Impulse Response and Step Response The impulse response g is the solution when the force is an impulse (a delta function). This also solves a null equation (no force) with a nonzero initial condition.
2.4: Exponential Response - Possible Resonance Resonance occurs when the natural frequency matches the forcing frequency – equal exponents from inside and outside.
2.4b: Second Order Equations With Damping A damped forced equation has a particular solution y = G cos(ωt – α). The damping ratio provides insight into the null solutions.
2.5: Electrical Networks: Voltages and Currents Current flowing around an RLC loop solves a linear equation with coefficients L (inductance), R (resistance), and 1/C (C = capacitance).
2.6: Method of Undetermined Coefficients With constant coefficients and special forcing terms (powers of t, cosines/sines, exponentials), a particular solution has this same form.
2.6b: An Example of the Method of Undetermined Coefficients This method is also successful for forces and solutions such as (at^2 + bt + c)e^(st): substitute into the equation to find a, b, c.
2.6c: Variation of Parameters Combine null solutions y1 and y2 with coefficients c1(t) and c2(t) to find a particular solution for any f(t).
2.7: Laplace Transform: First Order Equation Transform each term in the linear differential equation to create an algebra problem. You can then transform the algebra solution back to the ODE solution, y(t).
2.7b: Laplace Transform: Second Order Equation The second derivative transforms to s^2 Y and the algebra problem involves the transfer function 1/(As^2 + Bs + C).
3.1: Pictures of the Solutions The direction field for dy/dt = f(t,y) has an arrow with slope f at each point (t, y). Arrows with the same slope lie along an isocline.
3.2: Phase Plane Pictures: Source, Sink, Saddle Solutions to second order equations can approach infinity or zero. Saddle points have a positive and also a negative exponent or eigenvalue.
3.2b: Phase Plane Pictures: Spirals and Centers Imaginary exponents with pure oscillation provide a "center" in the phase plane. The point (y, dy/dt) travels forever around an ellipse.
3.2c: Two First Order Equations: Stability A second order equation gives two first order equations for y and dy/dt. The matrix becomes a companion matrix.
3.3: Linearization at Critical Points A critical point is a constant solution Y to the differential equation y' = f(y). Near that Y, the sign of df/dy decides stability or instability.
3.3b: Linearization of y'=f(y,z) and z'=g(y,z) With two equations, a critical point has f(Y,Z) = 0 and g(Y,Z) = 0. Near those constant solutions, the two linearized equations use the 2 by 2 matrix of partial derivatives of f and g.
3.3c: Eigenvalues and Stability: 2 by 2 Matrix, A Two equations y' = Ay are stable (solutions approach zero) when the trace of A is negative and the determinant is positive.
5.1: The Column Space of a Matrix, A An m by n matrix A has n columns, each in R^m. Capturing all combinations Av of these columns gives the column space – a subspace of R^m.
5.4: Independence, Basis, and Dimension Vectors v1 to vd are a basis for a subspace if their combinations span the whole subspace and are independent: no basis vector is a combination of the others. Dimension d = number of basis vectors.
5.5: The Big Picture of Linear Algebra A matrix produces four subspaces – column space, row space (same dimension), the space of vectors perpendicular to all rows (the nullspace), and the space of vectors perpendicular to all columns.
5.6: Graphs A graph has n nodes connected by m edges (other edges can be missing). This is a useful model for the Internet, the brain, pipeline systems, and much more.
6.1: Eigenvalues and Eigenvectors The eigenvectors x remain in the same direction when multiplied by the matrix (Ax = λx). An n by n matrix has n eigenvalues.
6.2: Diagonalizing a Matrix A matrix can be diagonalized if it has n independent eigenvectors. The diagonal matrix Λ is the eigenvalue matrix.
6.3: Solving Linear Systems dy/dt = Ay contains solutions y = e^(λt)x where λ and x are an eigenvalue/eigenvector pair for A.
6.4: The Matrix Exponential, exp(A*t) The shortest form of the solution uses the matrix exponential: y = e^(At)y(0). The matrix e^(At) has eigenvalues e^(λt) and the eigenvectors of A.
6.4b: Similar Matrices, A and B=M^(-1)*A*M A and B are "similar" if B = M^(–1)AM for some matrix M. B then has the same eigenvalues as A.
6.5: Symmetric Matrices, Real Eigenvalues, Orthogonal Eigenvectors Symmetric matrices have n perpendicular eigenvectors and n real eigenvalues.
7.2: Positive Definite Matrices, S=A'*A A positive definite matrix S has positive eigenvalues, positive pivots, positive determinants, and positive energy v^T S v for every nonzero vector v. S = A^T A is always positive definite if A has independent columns.
7.2b: Singular Value Decomposition, SVD The SVD factors each matrix A into an orthogonal matrix U times a diagonal matrix Σ (the singular values) times another orthogonal matrix V^T: rotation times stretch times rotation.
7.3: Boundary Conditions Replace Initial Conditions A second order equation can change its initial conditions on y(0) and dy/dt(0) to boundary conditions on y(0) and y(1).
8.1: Fourier Series A Fourier series separates a periodic function F(x) into an (infinite) combination of all basis functions cos(nx) and sin(nx).
8.1b: Examples of Fourier Series Even functions (F(–x) = F(x)) use only cosines, and odd functions use only sines. The coefficients a_n and b_n come from integrals of F(x)cos(nx) and F(x)sin(nx).
8.1c: Fourier Series Solution of Laplace's Equation Inside a circle, the solution u(r, θ) combines r^n cos(nθ) and r^n sin(nθ). The solution combines all such terms, with coefficients from a Fourier series, to match the boundary conditions.
8.3: Heat Equation The heat equation ∂u/∂t = ∂^2u/∂x^2 starts from a temperature distribution u at t = 0 and follows it for t > 0 as it quickly becomes smooth.